This should get rid of some flickering. Since this actually skips all
the wacky fullscreening code on startup, it might cause certain
wacky features to stop working. In that case, you'll have to use the
--x11-fstype option and disable _NETWM_STATE_FULLSCREEN usage.
vo_x11_map_window() was attempting to clear the window on map. However,
it did so immediately after the map request. It probably assumed that
the drawing calls for clearing the window would be queued along with the
map request, and then executed in the right order. However, this
assumption was wrong - the map request first has to go to the window
manager (I guess?), so a lot of things happen before the window is even
mapped.
Fix this by moving the call into the MapNotify event handler, i.e. to
the point when the window (apparently) really becomes visible.
I also tried to set CWBackPixel to black instead, but this seemed to
result in flickering on manual resizing.
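Roughly, the clearing now happens like this (a minimal Xlib sketch, not
the actual vo_x11 event loop; display and window are placeholders):

    XEvent ev;
    XNextEvent(display, &ev);
    switch (ev.type) {
    case MapNotify:
        /* The window really exists on screen now, so the clear can't be
         * reordered before the map anymore. */
        XClearWindow(display, window);
        break;
    }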
This blocks everything until the window is actually reported as mapped.
This fixes the race condition between VO initialization and mapping the
window, which resulted in possibly different window sizes, leading to an
immediate redraw, visible as flashing.
Note that if the map event never comes for some reason, we're out of
luck and will block forever.
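The waiting itself is nothing special; conceptually it's just this
(a sketch with plain Xlib calls, ignoring that the real code also has
to dispatch other events while it waits):

    XSelectInput(display, window, StructureNotifyMask);
    XMapWindow(display, window);
    XEvent ev;
    do {
        /* Blocks until a matching event arrives - there is no timeout,
         * so if MapNotify never comes, this never returns. */
        XWindowEvent(display, window, StructureNotifyMask, &ev);
    } while (ev.type != MapNotify);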
It could in theory happen that the filter loop will enter a blocking
wait, even though it could make progress by emptying the list of
already-filtered images. I'm not quite sure if this could actually cause
a real issue - probably not.
VapourSynth won't just filter multiple frames at once on its own. You
have to request multiple frames at once manually. This is what this
commit introduces: a sub-option controls how many frames will be
requested at once. This also changes the semantics of the maxbuffer
sub-option, now renamed to buffered-frames.
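Conceptually, the filter now just issues a batch of asynchronous
requests instead of a single one (a sketch against the VapourSynth C
API; buffered_frames, next_frame, priv and frame_done are stand-ins for
the filter's internals):

    static void VS_CC frame_done(void *userData, const VSFrameRef *f,
                                 int n, VSNodeRef *node, const char *errorMsg)
    {
        /* store the filtered frame in the filter's output queue */
    }

    /* Request a whole batch of frames at once. */
    for (int i = 0; i < buffered_frames; i++)
        vsapi->getFrameAsync(next_frame + i, node, frame_done, priv);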
Preparation so that various things related to video can run in different
threads. One part of this is making the video surface pool thread-safe.
Another issue is the preemption mechanism, which continues to give us
endless pain. In theory, it's probably impossible to handle preemption
100% correctly and race-condition free, unless _every_ API user in the
same process uses a central, shared mutex to protect every vdpau API
call. Otherwise, it could happen that one thread recovering from
preemption allocates a vdpau object, and then another thread (which
hasn't recovered yet) happens to free the object for some reason. This
is because objects are referenced by integer IDs, and after a preemption
vdpau will reuse IDs that the preemption invalidated.
Since this is unreasonable, we're as lazy as possible when it comes to
handling preemption. We don't do any locking around the mp_vdpau_ctx
fields that are normally immutable, and only can change when recovering
from preemption. In practice, this will work, because it doesn't matter
whether not-yet-recovered components use the old or new vdpau function
pointers or device ID. Code calls mp_vdpau_handle_preemption() anyway to
check for the preemption event and possibly to recover, and that
function acquires the lock protecting the preemption state.
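Roughly, the intended pattern looks like this (the internals, field
names and try_to_recreate_device() are illustrative guesses, not the
literal code):

    int mp_vdpau_handle_preemption(struct mp_vdpau_ctx *ctx, uint64_t *counter)
    {
        int r = 1;
        pthread_mutex_lock(&ctx->preempt_lock);   /* preemption state only */
        if (ctx->is_preempted)
            r = try_to_recreate_device(ctx) ? 1 : -1;
        if (r > 0 && counter && *counter < ctx->preemption_counter) {
            *counter = ctx->preemption_counter;   /* caller must recreate
                                                     its own objects */
            r = 0;
        }
        pthread_mutex_unlock(&ctx->preempt_lock);
        return r;
    }

    /* Every component calls this before doing vdpau work; nothing else
     * protects the "immutable" mp_vdpau_ctx fields. */
    if (mp_vdpau_handle_preemption(ctx, &p->preemption_counter) < 0)
        return -1;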
Another possible source of potential grandiose fuckup is the fact that
the vdpau library is in fact only a tiny wrapper, and the real driver
lives in a shared object dlopen()ed by the wrapper. The wrapper also
calls dlclose() on the loaded shared object in some situations. One
possible danger is that failing to recreate a vdpau device could trigger
a dlclose() call, and that glibc might unload it. Currently, glibc
implements full unloading of shared objects on the last dlclose() call,
and if that happens, calls to function pointers pointing into the shared
object would obviously crash. Fortunately, it seems the existing vdpau
wrapper won't trigger this case and never unloads the driver once it's
successfully loaded.
To make it short, vdpau preemption opens up endless depths of WTFs.
Another issue is that any participating thread might do the preemption
recovery (whichever comes first). This is easier to implement. The
implication is that we need threadsafe xlib. We just hope and pray that
this will actually work. This also means that once vdpau code is
actually involved in a multithreaded scenario, we have to add
XInitThreads() to the X11 code.
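For reference, that just means calling it before any other Xlib
function (plain Xlib, nothing mpv-specific):

    #include <X11/Xlib.h>

    if (!XInitThreads())
        abort();                        /* Xlib has no thread support */
    Display *display = XOpenDisplay(NULL);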
Use the newly provided mp_vdpau_handle_preemption() function, instead of
accessing mp_vdpau_ctx fields directly. Will probably make multithreaded
access to the vdpau context easier.
Mostly unrelated to the actual changes, I've noticed that using hw
decoding with vo_opengl sometimes leads to segfaults inside of nvidia's
libGL when doing the following:
1. use hw decoding + vo_opengl
2. switch to console (will preempt on nvidia systems)
3. switch back to X (mpv will recover, switches to sw decoding)
4. enable hw decoding again
5. exit mpv
Then it segfaults when mpv finally calls exit(). I'll just blame nvidia,
although it seems likely that something in the gl_hwdec_vdpau.c
preemption handling triggers corner cases in nvidia's code.
This was broken for some time, and it didn't recover correctly.
Redo how the decoder handles display preemption. Instead of trying to
reinitialize the hw decoder, simply fall back to software decoding. I
consider display preemption a bug in the vdpau API, so being able to
_somehow_ recover playback is good enough.
The approach taken here will probably also make it easier to handle
multithreading.
Until recently, the VO was an unavoidable part of the seeking code path.
This was because vdpau deinterlacing could double the framerate, and
hr-seek and framestepping etc. all had to "see" the additional frames. But
we've removed the frame doubling from the vdpau VO and moved it into a
video filter (vf_vdpaupp), and there's no reason left why the VO should
participate in seeking.
Instead of queuing frames to the VO during seek and skipping them
afterwards, drop the frames early.
This actually might make seeking with vo_vdpau and software decoding
faster, although I haven't measured it.
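"Dropping early" just means the decode loop discards frames before the
hr-seek target itself (a sketch with made-up helper names; the real
logic sits in mpv's video playback code):

    struct mp_image *img = filter_output_frame(mpctx);
    if (mpctx->hrseek_active && img->pts < mpctx->hrseek_pts) {
        talloc_free(img);           /* never reaches the VO queue */
    } else {
        queue_frame_to_vo(mpctx, img);
    }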
It doesn't look like vo_wayland_config() necessarily sets this flag, so
it seems safer to trigger an explicit resize. This accounts for the case
when playing a new file with a different size than the one before.
Currently, vo_reconfig() calculates the requested window size and sets
the vo->dwidth/dheight fields _if_ VOCTRL_UPDATE_SCREENINFO is
implemented by the VO or the windowing backend. The window size can be
different from the display size if e.g. the --geometry option is used.
It will also set the vo->dx/dy fields and read vo->xinerama_x/y.
It turned out that this is very backwards and actually requires the
windowing backends to work around these things. There's also
MPOpts.screenwidth/screenheight, which used to map to actual options,
but are now used only to communicate the screen size to the vo.c code
calculating the window size and position.
Change this by making the window geometry calculations available as
separate functions. This commit doesn't change any VO code yet, and just
emulates the old way using the new functions. VO code will remove its
usage of VOCTRL_UPDATE_SCREENINFO and use the new functions directly.
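The replacement is a pair of plain helper functions the backends can
call whenever they actually know the screen size (names and signatures
here are illustrative, not necessarily the exact new API):

    struct vo_win_geometry {
        struct mp_rect win;     /* calculated window position and size */
        int flags;              /* e.g. whether --geometry forced a position */
    };

    void vo_calc_window_geometry(struct vo *vo, const struct mp_rect *screen,
                                 struct vo_win_geometry *out);
    void vo_apply_window_geometry(struct vo *vo,
                                  const struct vo_win_geometry *geo);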
Commit 433161 actually broke vo_opengl (and maybe others), because
config_ok is not necessarily set correctly yet _during_ reconfig. So a
vo_get_src_dst_rects() call during reconfig did nothing.
When the VO was not initialized with vo_reconfig(), or if the last
vo_reconfig() failed, changing panscan would cause a crash due to
vo_get_src_dst_rects() dereferencing vo->params (NULL if not
configured).
Just do nothing if that happens, as there is no video that could be
displayed anyway.
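"Do nothing" is literally just an early return before the rects are
recalculated (sketch):

    if (!vo->params)
        return;     /* not configured; no video that could be displayed */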
It doesn't really seem to be of much use. Get rid of the remaining
uses of it.
Concerning vo_opengl_old, it seems uninitGl() works fine even if called
before initialization.
This was a minor code duplication between vf_vdpaupp.c and vo_vdpau.c.
(In theory, we could always require using vf_vdpaupp with vo_vdpau, but
I think it's better if vo_vdpau can work standalone.)
Also remove MSGL_SMODE and friends.
Note: The indent in options.rst was added to work around a bug in
ReportLab that causes the PDF manual build to fail.
Change how the video decoding loop works. The structure should now be a
bit easier to follow. The interactions on format changes are (probably)
simpler. This also aligns the decoding loop with future planned changes,
such as moving various things to separate threads.
vf_fix_img_params() takes care of overwriting image parameters that are
normally not set correctly by filters. But this makes no sense for input
images. So instead, check that the input is correct.
It still has to be done for the first input image, because that's used
to handle some overrides (see video_reconfig_filters()).
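The check itself is simple (sketch; mp_image_params_equal() and the
fmt_in field are assumed to exist in roughly this form):

    if (!mp_image_params_equal(&img->params, &vf->fmt_in))
        MP_ERR(vf, "Input frame does not match configured parameters.\n");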
These replace vf_read_output_frame(), although we still emulate that
function. This change is preparation for another commit (and this is
basically just meant to reduce the diff size and noise in that
commit).