The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
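In code, the hand-off looks roughly like this (a minimal sketch; the
function name and signature are made up for illustration):

    // Playloop: hand over the image and its realtime display target,
    // then return immediately to input handling, audio, etc.
    vo_queue_frame(vo, image, next_frame_time_us);  // hypothetical API

    // VO thread: sleeps until next_frame_time_us, renders, flips.
    // Any concurrent VO access (e.g. a VOCTRL) blocks until it's done.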
This also fixes vo_sdl.c event handling; but that is only a side effect,
since reimplementing the old, broken way would have required more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
have never encountered a situation where it helps; it just burns CPU
cycles. Worse, it's probably actively harmful, because it prevents the
libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and certain other features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Found with valgrind. This is somewhat terrifying, because the VA-API
function is supposed to fill these values, and we access them only if
the API functions return success. So this shouldn't have happened.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, gives
better diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
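For illustration, a key/value list option ends up being applied with
av_opt_set() roughly like this (a sketch; the exact option declaration
and the flat key/value array layout are assumptions):

    // Declared as a list of alternating key/value strings:
    OPT_KEYVALUELIST("vd-lavc-o", vd_lavc_params, 0),

    // Applied to a libavcodec/libavformat context:
    for (char **p = opts->vd_lavc_params; p && p[0]; p += 2)
        av_opt_set(avctx, p[0], p[1], AV_OPT_SEARCH_CHILDREN);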
Since the new hwaccel API is now merged in ffmpeg's stable release, we can
finally remove support for the old API.
I pretty much kept lu_zero's new code unchanged and just added some error
printing (that we had with the old glue code) to make the life of our users
less miserable.
DVD and Bluray (and to some extent cdda) require awful hacks all over
the codebase to make them work. The main reason is that they act like
containers, but are entirely implemented on the stream layer. The raw
mpeg data resulting from these streams must be "extended" with the
container-like metadata transported via STREAM_CTRLs. The result was
hacks all over demux.c and some higher-level parts.
Add a "disc" pseudo-demuxer, and move all these hacks and special-cases
to it.
This adds support for reading color primaries information from lavc,
categorized
into BT.601-525, BT.601-625, BT.709 and BT.2020; and passes it on to the
vo. In vo_opengl, we always generate the 3dlut against the wider BT.2020
and transform our source into this colorspace in the shader.
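The lavc side of this is essentially a switch on AVColorPrimaries (a
sketch; the mp-side enum names are assumptions):

    static enum mp_csp_prim avcol_pri_to_mp(enum AVColorPrimaries pri)
    {
        switch (pri) {
        case AVCOL_PRI_SMPTE170M: return MP_CSP_PRIM_BT_601_525; // NTSC
        case AVCOL_PRI_BT470BG:   return MP_CSP_PRIM_BT_601_625; // PAL/SECAM
        case AVCOL_PRI_BT709:     return MP_CSP_PRIM_BT_709;
        case AVCOL_PRI_BT2020:    return MP_CSP_PRIM_BT_2020;
        default:                  return MP_CSP_PRIM_AUTO;
        }
    }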
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which will also kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such. (A caller sketch follows the list below.)
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
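For callers, the practical change is that a NULL result now has to be
handled instead of relying on the allocator aborting, roughly like this
(a minimal sketch):

    struct mp_image *img = mp_image_alloc(IMGFMT_420P, w, h);
    if (!img)
        return NULL; // skip this output; upper layers degrade or see EOF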
This means use of the min/max fields can be dropped for the flag option
type, which makes some things slightly easier. I'm also not sure if the
client API handled the case of a flag not being 0 or 1 correctly; this
change gets rid of that concern.
While I'm not very fond of "const", it's important for declarations
(it decides whether a symbol is emitted in a read-only or read/write
section). Fix all these cases, so we have writeable global data only
when we really need it.
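A minimal example of what's at stake (section names as on typical ELF
toolchains):

    static       int table_rw[] = {1, 2, 3}; // goes to .data (writable)
    static const int table_ro[] = {1, 2, 3}; // goes to .rodata (read-only)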
The i_bps members of the sh_audio and sh_video structs are mostly used
for displaying the average audio and video bitrates. Keeping them in
bits-per-second avoids truncating them to bytes-per-second and changing
them back later on.
This is incomplete; the video chain will still hold some vaapi objects
after destroying the decoder and thus the vaapi context. This is very
bad. Fixing it would require something like refcounting the vaapi
context, but I don't really want to.
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular, the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
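The check itself is just substring matching, e.g. for vaapi (a sketch;
the matched substring is an assumption based on the libva vdpau driver's
vendor string):

    const char *vendor = vaQueryVendorString(display);
    // The libva vdpau wrapper announces itself with something like
    // "Splitted-Desktop Systems VDPAU backend for VA-API".
    if (vendor && strstr(vendor, "VDPAU backend"))
        MP_WARN(ctx, "Using vaapi emulated via vdpau. This is inefficient.\n");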
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
Sometimes, Matroska files store monotonic PTS for h264 tracks with
b-frames, which means the decoder actually returns non-monotonic PTS.
Handle this with an evil trick: if DTS is missing, set it to the PTS.
Then the existing logic, which falls back to DTS if PTS is broken, takes
care of the rest. Actually, this trick is not so evil at all, because
usually, PTS
has no errors, and DTS is either always set, or always unset. So this
_should_ provoke no regressions (famous last words).
libavformat actually does something similar: it derives DTS from PTS in
ways unknown to me. The result is very broken, but it causes the DTS
fallback to become active, and thus happens to work.
Also, prevent the heuristic from being active if PTS is merely monotonic
instead of strictly monotonic. Non-unique PTS is broken, but we can't
fall back to DTS anyway in these cases.
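The trick itself is a one-liner on the packet path (a sketch; the field
names approximate mpv's packet struct):

    if (pkt->pts != MP_NOPTS_VALUE && pkt->dts == MP_NOPTS_VALUE)
        pkt->dts = pkt->pts; // lets the "PTS broken -> use DTS" logic work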
The specific mkv file that is fixed with this commit had the following
fields set:
Muxing application: libebml v1.3.0 + libmatroska v1.4.1
Writing application: mkvmerge v6.7.0 ('Back to the Ground') [...]
But I know that this should also fix playback of mencoder-produced mkv
files.
This was broken for some time, and it didn't recover correctly.
Redo decoder display preemption. Instead of trying to reinitialize the
hw decoder, simply fall back to software decoding. I consider display
preemption a bug in the vdpau API, so being able to _somehow_ recover
playback is good enough.
The approach taken here will probably also make it easier to handle
multithreading.
Also remove MSGL_SMODE and friends.
Note: The indent in options.rst was added to work around a bug in
ReportLab that causes the PDF manual build to fail.
Change how the video decoding loop works. The structure should now be a
bit easier to follow. The interactions on format changes are (probably)
simpler. This also aligns the decoding loop with future planned changes,
such as moving various things to separate threads.
This collects statistics and other things. The option dumps raw data
into a file. A script to visualize this data is included too.
Litter some of the player code with calls that generate these
statistics.
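Each call site is a cheap one-liner (a sketch; the macro is the one this
commit adds, the event names here are made up):

    MP_STATS(mpctx, "start video");                    // open a timed span
    MP_STATS(mpctx, "end video");                      // close it again
    MP_STATS(mpctx, "value %f avdiff", av_difference); // plot a raw value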
In general, this will be helpful to debug timing dependent issues, such
as A/V sync problems. Normally, one could argue that this is the task of
a real profiler, but then we'd have a hard time including extra
information like audio/video PTS differences. We could also just
hardcode all statistics collection and processing in the player code,
but then we'd end up with something like mplayer's status line, which
was cluttered and required a centralized approach (i.e. getting the data
to the status line; so it was all in mplayer.c). Some players can
visualize such statistics on OSD, but that sounds even more complicated.
So the approach added with this commit sounds sensible.
The stats-conv.py script is rather primitive at the moment and its
output is semi-ugly. It uses matplotlib, so it could probably be
extended to do a lot more; it's not a dead end.
It doesn't really need to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
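That accessor is trivial (a sketch; the function name is an assumption):

    VASurfaceID id = va_surface_id(img); // img is a plain struct mp_image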
Although I at first thought it would be better to have a separate
implementation for hwaccels because the differences from software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
They were used by ancient libavcodec versions. This also removes the
need to distinguish vdpau image formats at all (since there is only
one), and some code can be simplified.
Instead of doing it on every seek (libavcodec calls get_format on every
seek), reinitialize the decoder only if the video resolution changes.
Note that this may be relatively naive; in particular, we don't check
for profile changes. But it's not worse than the
state before the get_format change, and at least it paints over the
current vaapi breakage (issue #646).
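Conceptually, the get_format path now short-circuits like this (a
sketch; the context fields and the helper are made up):

    // Inside the get_format callback:
    if (avctx->coded_width != ctx->hwdec_w ||
        avctx->coded_height != ctx->hwdec_h)
    {
        ctx->hwdec_w = avctx->coded_width;
        ctx->hwdec_h = avctx->coded_height;
        reinit_hw_decoder(ctx);         // hypothetical helper
    }
    return ctx->hwdec_fmt;              // unchanged size: no reinit at all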
This "sometimes" crashed when seeking. The fault apparently lies in
libavcodec: the decoder returns an unreferenced frame! This is
completely insane, but somehow I'm apparently still expected to work
around this. As a reaction, I will drop Libav 9 support in the
next commit. (While this commit will go into release/0.3.)
Apparently the "right" place to initialize the hardware decoder is in
the libavcodec get_format callback.
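A sketch of the shape (init_decoder is the new optional callback added
by this commit; its exact signature here is an assumption, and error
handling is omitted):

    static enum AVPixelFormat get_format(struct AVCodecContext *avctx,
                                         const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            if (fmt[i] != AV_PIX_FMT_VDPAU)
                continue;
            if (init_decoder(avctx, fmt[i]) >= 0)
                return fmt[i];          // hw decoder is ready
        }
        return avcodec_default_get_format(avctx, fmt); // sw fallback
    }

    ...
    avctx->get_format = get_format;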
This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
vdpau_old.c is probably going to be removed soon (if Libav ever manages
to release Libav 10). So for now the init_decoder callback added with
this commit is optional.
This also means vdpau.c and vaapi.c don't have to manage and check the
image parameters anymore.
This change is probably needed for when libavcodec's VDA support gets a
new iteration of its API.
Like with the previous commit, this is probably not needed, but it's
unclear whether that really is the case. Most likely, it used to be
needed by some demuxer, and now the only demuxer left that could
_possibly_ trigger this is demux_mkv.c.
Note that mjpeg is the only decoder that reads the extra_huff option,
and nothing in libavformat actually sets the option. So maybe it's
fundamentally not needed anymore.
This case can't happen with the normal realvideo codepath in
demux_mkv.c, because the code would error out if the extradata is too
small, and everything would be broken anyway if the vd_lavc.c condition
were actually triggered.
It still might happen with VfW-muxed realvideo in Matroska, though.
Basically, I'm hoping this doesn't matter anyway, and that the vd_lavc.c
code was for other old demuxers, like demux_avi or demux_rm. Following
the commit history, it's not really clear for what demuxer this code
was added.