Replace libavcodec's native buffer allocation with code taken from
ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is directly
copied from cmdutils.c. Note that this is quite arcane code, which
contains some workarounds for decoder bugs and the like. This is not
really a maintenance burden, since fixes from ffmpeg can be directly
applied to the code in lavc_dr1.c.
It's unknown why libavcodec doesn't provide such a function directly.
avcodec_default_get_buffer() can't be reused for various reasons.
There's some hope that the work known as The Evil Plan [1] will make
custom get_buffer implementations unneeded.
The DR1 support as of this commit does nothing. A future commit will
use it to implement ref-counting for mp_image (similar to how AVFrame
will be ref-counted with The Evil Plan).
[1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html
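For reference, here is a heavily simplified sketch of what such a custom
allocator looks like against the pre-refcounting lavc API of that time,
assuming PIX_FMT_YUV420P only; the real code in lavc_dr1.c additionally
handles stride/edge alignment, other pixel formats, and the mentioned
decoder bug workarounds:

    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    static int dr1_get_buffer(AVCodecContext *avctx, AVFrame *pic)
    {
        int w = avctx->width, h = avctx->height;
        avcodec_align_dimensions(avctx, &w, &h);
        pic->linesize[0] = w;
        pic->linesize[1] = pic->linesize[2] = w / 2;
        pic->data[0] = av_malloc(w * h);        // Y plane
        pic->data[1] = av_malloc(w * h / 4);    // U plane
        pic->data[2] = av_malloc(w * h / 4);    // V plane
        if (!pic->data[0] || !pic->data[1] || !pic->data[2])
            return -1;
        pic->type = FF_BUFFER_TYPE_USER;        // buffer is owned by us
        return 0;
    }

    static void dr1_release_buffer(AVCodecContext *avctx, AVFrame *pic)
    {
        for (int i = 0; i < 3; i++)
            av_freep(&pic->data[i]);
    }

It would be installed with avctx->get_buffer = dr1_get_buffer and
avctx->release_buffer = dr1_release_buffer before opening the decoder.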
Deprecate the hardware-specific video codec entries (like ffh264vdpau).
Replace them with the --hwdec switch, which requests that a specific
hardware decoding API should be used. The codecs.conf entries will be
removed at a later time, but for now they are useful for testing and
compatibility.
Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.
Add a fallback if hardware decoding fails. Most hardware decoders
(including vdpau) support only a subset of h264, and a fallback to
software decoding should give a better user experience.
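The fallback amounts to something like the following sketch (all names
are illustrative, not actual mpv functions):

    #include <stdbool.h>

    struct vd_ctx { bool hwdec_active; };

    bool decode_packet(struct vd_ctx *vd, const void *pkt);
    void reinit_decoder(struct vd_ctx *vd);

    static bool decode_with_fallback(struct vd_ctx *vd, const void *pkt)
    {
        if (decode_packet(vd, pkt))
            return true;
        if (!vd->hwdec_active)
            return false;           // software decoding failed for real
        vd->hwdec_active = false;   // drop to software decoding
        reinit_decoder(vd);
        return decode_packet(vd, pkt);  // retry the failed packet
    }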
This was buggy and didn't even work in the simplest cases. It was
disabled when multithreading was used, and always disabled for h264.
A better alternative (reference counting) will be added later.
Hardware decoding still uses the ffmpeg DR mechanism, but has been
decoupled from mpv's DR in the previous commit.
vdpau hardware decoding used the DR (direct rendering) path to let the
decoder query a surface from the VO. Special-case the HW decoding path
instead, to make it separate from DR.
This mutated the variable for the thread count option
(lavc_param->threads) on decoder initialization. This didn't have any
practical relevance, unless formats supporting hardware video decoding
and other formats were played in the same mpv instance. In this case,
hardware decoding would set threads to 1, and all files played after
that would use only one thread as well even with software decoding.
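In a nutshell (names illustrative, except avctx->thread_count):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    struct lavc_params { int threads; };

    // Before, the shared option struct was mutated on init:
    //     lavc_param->threads = 1;   // leaks into later files
    // The fix computes the per-decoder value in a local instead:
    static void setup_threads(AVCodecContext *avctx,
                              const struct lavc_params *lavc_param,
                              bool hwdec)
    {
        int threads = lavc_param->threads;
        if (hwdec)
            threads = 1;   // hardware decoding must use a single thread
        avctx->thread_count = threads;
    }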
Remove XvMC leftover (CODEC_CAP_HWACCEL).
Simplify the decoder pixel format handling by making it handle only
the case vd_lavc needs: a video stream always decodes to a single
pixel format.
Remove the handling for multiple pixel formats, and remove the
codecs.conf pixel format declarations that are left.
Remove the handling of "ambiguous" pixel formats like YV12 vs. I420 (via
VDCTRL_QUERY_FORMAT etc.). This is only a problem if the video chain
supports I420, but not YV12, which doesn't seem to be the case anywhere,
and in fact would not have any advantage.
Make the "flip" flag a global per-codec flag, rather than a pixel format
specific flag. (Some ffmpeg decoders still return a flipped image, so
this has to be done manually.) Also fix handling of the flip operation:
do not overwrite the global flip option, and make the --flip option
invert the codec flip option rather than overriding it.
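The combination logic boils down to an exclusive or; a minimal sketch,
with names illustrative:

    #include <stdbool.h>

    // --flip inverts the codec's own flip flag instead of overwriting it:
    static bool effective_flip(bool codec_flip, bool opt_flip)
    {
        return codec_flip ^ opt_flip;
    }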
Slices allowed filtering or drawing video in horizontal bands or
blocks, making it possible to work on the video in smaller units. In
theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.
In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
slice callback was disabled, because it can be called from another
thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
slices, so slices were rarely used.
- Most filters didn't actually support slices.
On the other hand, supporting slices led to code duplication and
generally more complex code. I ran some experiments and didn't find any
measurable performance improvement when using slices. Even ffmpeg
removed slice-based filtering from libavfilter in favor of simpler
code.
The most broken thing about the slices code path is that slices can't
be queued, as is done for images in vo.c.
For some reason, libavcodec abuses the slices rendering code path for
hardware decoding: in that case, the only purpose of the draw callback
is to pass a vdpau video surface object to video output. (It is unclear
to me why this had to use the slices code, instead of just returning an
AVFrame with the required vdpau state.)
Make this code separate within mpv, so that the internal slices code
path is not used for hardware decoding. Pass the vdpau state with
VOCTRL_HWDEC_DECODER_RENDER instead.
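Roughly what the new path amounts to (a sketch; everything except
VOCTRL_HWDEC_DECODER_RENDER is illustrative, and it assumes that with
the old lavc vdpau API the first plane pointer carries the decoder's
struct vdpau_render_state):

    struct vo;
    struct mp_image { unsigned char *planes[4]; };
    int vo_control(struct vo *vo, int request, void *data);
    #define VOCTRL_HWDEC_DECODER_RENDER 0x100   /* illustrative value */

    // Hand the vdpau state straight to the VO, bypassing the slice path:
    static void hwdec_render(struct vo *vo, struct mp_image *mpi)
    {
        vo_control(vo, VOCTRL_HWDEC_DECODER_RENDER, mpi->planes[0]);
    }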
Remove the mencoder-specific VOCTRLs.
Remove VOCTRL_DRAW_IMAGE and always set vo_driver.draw_image in VOs.
Make draw_image mandatory: change some VOs (like vo_x11) to support it,
and remove the image-to-slices fallback in vf_vo.
Remove vo_driver.is_new. This member indicated whether draw_image is
supported unconditionally, which is now always the case.
draw_image_pts is a hack until the video filter chain is changed to
include the PTS as field in mp_image. Then vo_vdpau and vo_lavc will
be changed to use draw_image.
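As a sketch of the resulting driver interface (not the actual struct
layout, just the two members discussed here):

    struct vo;
    struct mp_image;

    struct vo_driver {
        // Mandatory as of this commit; every VO must implement it:
        void (*draw_image)(struct vo *vo, struct mp_image *mpi);
        // Interim hack described above; goes away once mp_image has a pts:
        void (*draw_image_pts)(struct vo *vo, struct mp_image *mpi,
                               double pts);
    };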
This allowed moving the input stream layer across the network, letting
the user play anything that mplayer could play remotely. For example,
playing a DVD located on a remote server (say, with the host name
"remotehost1") could be done by starting the netstream server on that
remote server, and then running:
mplayer mpst://remotehost1/dvd://
This would open the DVD on the remote host, and transfer the raw DVD
sector reads over the network. It works the same for other protocols,
and all accesses to the stream layer are marshaled over the network.
It's
comparable to the way the cache layer (--cache) works.
It has questionable use and most likely was barely used at all. There's
lots of potential for breakage, because it doesn't translate the stream
CTRLs to network packets. Just get rid of it.
The server used to be in TOOLS/netstream.c, and was accidentally removed
earlier.
This function sucks and apparently is not very portable (at least on
mingw, the configure check fails). Also remove the emulation of that
function from osdep/strsep*, and remove the configure check.
I have no idea when or how this broke, but _wstati64() is the function
we want anyway (64 bit filesize). Possibly this was a mingw-w64 bug.
It's unknown why "wstat()" just doesn't work in this case; it's not
defined by MSDN, so mingw could define it however it needs.
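For reference, the call in question (Windows/mingw-w64):

    #include <stdint.h>
    #include <sys/stat.h>
    #include <wchar.h>

    // stat a file by wide-char path, with 64-bit file sizes
    static int64_t file_size_w(const wchar_t *wpath)
    {
        struct _stati64 st;
        if (_wstati64(wpath, &st) != 0)
            return -1;
        return st.st_size;   // 64-bit, unlike plain _wstat()
    }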
vsscanf() is in POSIX, C99, mingw, etc. Further, the implementation in
osdep/vsscanf.c was completely broken, and if it worked, it worked only
by chance.
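The standard function covers the use case directly; the typical pattern
is forwarding a va_list from a wrapper:

    #include <stdarg.h>
    #include <stdio.h>

    static int parse(const char *buf, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        int n = vsscanf(buf, fmt, ap);   // C99/POSIX, also in mingw
        va_end(ap);
        return n;
    }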
The check determined whether the argument for .align is in bytes, or
log2(bytes). Apparently it's always in bytes for ELF i386 systems, and
this check is used for x86 inline assembler only. Even if this
assumption is wrong, it likely won't cause much damage: the
existing code uses it only in the form ".align 4", which means in the
worst case it will try to align to 16 bytes, which doesn't cause any
problems (unless the object file format does not support such a high
alignment).
Update the filters that used this.
Quoting the GNU as manual:
For other systems, including ppc, i386 using a.out format, arm and
strongarm, it is the number of low-order zero bits the location counter
must have after advancement. For example `.align 3' advances the
location counter until it is a multiple of 8. If the location counter is
already a multiple of 8, no change is needed.
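To illustrate with the directive the code actually uses:

    /* On ELF i386, ".align 4" aligns the location counter to 4 bytes.
     * On targets where the argument is log2, the same directive would
     * align to 2^4 = 16 bytes -- stricter, but still harmless. */
    __asm__ (".align 4");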
Change the only usage of HAVE_BUILTIN_EXPECT, demux.h, to use an #ifdef
instead. In theory, a configure check is better, but nobody does it this
way anyway, and we seek to reduce the configure script.
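The replacement pattern is the usual one (macro names illustrative;
demux.h may use different ones):

    #ifdef __GNUC__
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)
    #else
    #define likely(x)   (x)
    #define unlikely(x) (x)
    #endif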
mixer_setvolume() accepts float values for volume, but used the
integer function av_clip() to limit range, losing the fractional part
as a side effect. Change the code to use av_clipf() instead. For most
uses this shouldn't make any real difference; actual AO volume
settings may not have that much precision anyway.
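The difference in a nutshell (the 0..1 range here is illustrative;
mpv's mixer uses its own scale):

    #include <stdio.h>
    #include <libavutil/common.h>

    int main(void)
    {
        float vol = 0.375f;
        // Old: av_clip() takes ints, so the fraction is dropped.
        printf("%f\n", (float)av_clip(vol, 0, 1));   // 0.000000
        // New: av_clipf() clips in float, keeping the fraction.
        printf("%f\n", av_clipf(vol, 0.0f, 1.0f));   // 0.375000
        return 0;
    }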
af_volnorm can process either int16_t or float audio data. The float
version used 0 to INT_MAX as full value range, when it should be 0 to
1. This effectively disabled the filter (due to all input being
considered to fall in the silence range). Fix.
Reported by Tobias Jacobi <liquid.acid@gmx.net>.
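The essence of the bug (names illustrative, not the filter's actual
macros): thresholds derived from INT_MAX are meaningless for float
samples, which live in [-1, 1]:

    #include <limits.h>

    static const float silence_old = INT_MAX * 0.01f;  // never exceeded by floats
    static const float silence_new = 1.0f * 0.01f;     // correct full scale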
Something produces corrupt Matroska files with audio tracks that have
SamplingFrequency set to 44100 and OutputSamplingFrequency to 96000,
when the correct playback rate is 44100. Add a special case for this
44100/96000 combination and override it to 44100/44100; it's unlikely
that anyone would ever want to use this 44100/96000 combination for
real in valid files.
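The override, roughly (struct and field names illustrative):

    struct track_info { double samplerate, out_samplerate; };

    static void fixup_samplerate(struct track_info *track)
    {
        if (track->samplerate == 44100 && track->out_samplerate == 96000)
            track->out_samplerate = 44100;   // corrupt file; real rate is 44100
    }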
Ensure that even if a seek is inaccurate it will not show video from
outside the defined timeline. Previously, seeking to the beginning of
a segment could show frames from before the start of the segment if
the seek was done in inaccurate mode and the demuxer seeked to an
earlier position. Now the hr-seek machinery is used to skip at least
the frames that should not be part of the playback timeline at all.
Now external subtitles essentially use the playback time, instead of
the segment time.
This is more useful when using external subtitles with mkv ordered
chapters. The previous behavior is not necessarily incorrect, and e.g.
makes it easier to use subtitles directly extracted from ordered
chapters segments. But we consider the new behavior more useful.
Also see commit 06e3dc8.
This is simpler and more useful. We could add a new switch for the old
functionality, but that would probably be more confusing than helpful.
When passing only a single file to the command line, this commit
shouldn't change behavior.
(Classic mplayer provided both features by duplicating the loop
functionality in the "playtree".)
When the last frame is displayed and a frame step command is issued,
playback ends and advances to the next file. But before this commit,
the next file started playing unpaused. Fix this, and make sure pause
is kept.
Looks like unicode support was broken with this simple `fonts.conf`. Copy more
(all) of fontconfig's default `fonts.conf`.
Fixes #13
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
This causes trouble when a hw device is used:
pcm_hw.c:514:(snd_pcm_hw_delay) SNDRV_PCM_IOCTL_DELAY failed (-77): File descriptor in bad state
when running mpv test.mkv --ao=alsa:device=iec958,alsa and pausing
during playback.
Historically, mplayer usually did not call snd_pcm_delay() (which is
called by get_delay()) while paused, so this problem never showed up.
But at least mpv has changes that cause get_delay() to be called when
updating the status line (see commit 3f949cf).
It's possible that calling snd_pcm_delay() is not always legal when the
audio is paused, and at least it fails with the error message mentioned
above if the device is a hardware device. Change get_delay() to return
the last delay before the audio was paused. The intention is to get a
continuous playback status display even when pausing or frame stepping;
otherwise we could just return the audio buffer fill status in
get_delay(), or even just 0 when paused.
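A sketch of the approach (names illustrative; only the snd_pcm_*()
calls are real ALSA API):

    #include <alsa/asoundlib.h>

    static snd_pcm_t *alsa_handle;
    static int samplerate;
    static int paused;
    static float prepause_delay;

    static float get_delay(void)
    {
        if (paused)
            return prepause_delay;   // frozen value; device not queried
        snd_pcm_sframes_t frames;
        if (snd_pcm_delay(alsa_handle, &frames) < 0)
            frames = 0;
        return (float)frames / samplerate;
    }

    static void audio_pause(void)
    {
        prepause_delay = get_delay();   // capture before pausing
        snd_pcm_pause(alsa_handle, 1);
        paused = 1;
    }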
Setting some subtitle options may lead to incorrect rendering of complex
ASS subtitle scripts, such as displaced signs or visual artifacts. The
user should be made aware that this can happen.
In theory, libass could make using some of these options relatively
safe, but it doesn't.
Note that there are potentially much more options that could in theory
break subtitle rendering, but add a warning only to the most fragile
ones.
Before this commit, the --osd-* options (like --osd-font-size etc.)
configured both the OSD and subtitle font. Make them separate, and add
--sub-text-* options (like --sub-text-size etc.). Now --osd-* affects
the OSD font only, and --sub-text-* unstyled text subtitles only.
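For example (using the option names mentioned above), this enlarges
text subtitles without touching the OSD:

mpv --sub-text-size=55 --osd-font-size=30 test.mkv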
They were more or less grouped by usefulness, but since everything
else in the manpage is sorted alphabetically, it's better to be
consistent and sort these options as well.