This shouldn't really matter, but it's probably best to avoid anyway.
vo_wayland_control would execute set_cursor_visibility while wl->pointer
existed but it didn't check if wl->pointer_id existed. So
wl_pointer_set_cursor could end up setting a null surface with an id of 0.
Instead, just wait until we have an actual, non-zero pointer id so that
the cursor is set with the correct, actual id and not a fictitious 0 id.
This ensures that the pointer isn't set until it enters the wl_surface
which is what we want.
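A minimal sketch of the resulting guard, assuming the field names from
the text above (wl->pointer, wl->pointer_id); the surrounding struct and
the cursor surface/hotspot are illustrative:

    static void set_cursor_visibility(struct vo_wayland_state *wl, bool on)
    {
        // wait until the pointer has entered the surface and we have a
        // real, non-zero enter serial to pass along
        if (!wl->pointer || !wl->pointer_id)
            return;
        if (on) {
            wl_pointer_set_cursor(wl->pointer, wl->pointer_id,
                                  wl->cursor_surface, 0, 0);
        } else {
            wl_pointer_set_cursor(wl->pointer, wl->pointer_id, NULL, 0, 0);
        }
    }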
at the time of the initial dpi query the window is not instantiated yet.
we use a proper fallback in that case, e.g. the target configured screen
or the main screen if none is set.
also fix a weird oversight and apply a small optimisation.
Theoretically possible (and quite unlikely due to the small texture
size). The code was originally written with the assumption that texture
allocations can't fail, and it was never updated out of laziness.
Untested.
It turns out that gnome wayland still has very serious issues that make
it unusable for playback with mpv. Other compositors mostly behave fine
(Plasma is just missing features, but it's not seriously broken), so GNOME
gets the special honor of having a warning printed out. The only
solution for GNOME users at this time of writing is to either use the
Xorg session or use another wayland compositor.
In the distant past, the cuviddec backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.
Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.
And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.
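A hedged sketch of such a create function; the surrounding struct and
the option field name (ctx->opts.cuda_device) are illustrative, only
av_hwdevice_ctx_create() is the actual FFmpeg entry point:

    #include <stdio.h>
    #include <libavutil/hwcontext.h>

    static int cuda_copy_init(struct lavc_ctx *ctx)
    {
        char buf[16];
        const char *device = NULL;
        if (ctx->opts.cuda_device >= 0) {
            snprintf(buf, sizeof(buf), "%d", ctx->opts.cuda_device);
            device = buf;
        }
        // pass the device on to ffmpeg; NULL means "default device"
        return av_hwdevice_ctx_create(&ctx->hwdec_dev, AV_HWDEVICE_TYPE_CUDA,
                                      device, NULL, 0);
    }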
Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.
Fixes #7295.
In vaapi 1.1.0 (which confusingly is libva release 2.1.0), they
introduced a new surface export API that is more efficient, and
we've been supporting that and the old API ever since (Feb 2018).
If we drop support for the old API, we can do some fairly nice cleanup
of the code.
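For reference, the newer export API in question (added in VA-API 1.1.0);
a minimal sketch, with display and surface assumed to already exist:

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    VADRMPRIMESurfaceDescriptor desc;
    VAStatus st = vaExportSurfaceHandle(display, surface,
            VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
            VA_EXPORT_SURFACE_READ_ONLY | VA_EXPORT_SURFACE_SEPARATE_LAYERS,
            &desc);
    // desc.objects[] now holds DRM PRIME fds that can be imported into
    // EGL/Vulkan; the old path went through vaDeriveImage() /
    // vaAcquireBufferHandle() instead.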
Note that the pkgconfig entries are explicitly versioned by the API
version and not the library version; I confirmed this against the
upstream pkgconfig files.
As we are less and less interested in vdpau, with nvdec and vaapi
being better choices in general on nvidia and AMD respectively, we
might consider removing direct_mode, where we bypass the vdpau
mixer and work directly with yuv textures. Normally, working with
yuv textures would be great, but vdpau built in an assumption that
all frames are delivered as separate fields, causing us to have
to re-interleave them.
nvidia then introduced a new OpenGL extension that can return the
yuv frames as frames, but we can't just unconditionally switch to
that as we'd want to keep supporting older hardware where the drivers
are no longer getting new features. The end result is that we
wouldn't be able to get rid of the old code paths.
Removing direct_mode means we always use the mixer, and work with
rgba frame textures. There are some theoretical limitations to
this, but in practice they probably don't matter much - unsupported
colourspaces don't matter because without 10bit decoding support,
we can't use them anyway, and apparently we're not doing separate
chroma scaling these days, so scaling the rgba doesn't really lose
anything (and the vdpau hq scaling option remains available).
GCC 9.2 warns about this. It was always a bit sketchy, so get rid of it.
VK_F10 generates WM_SYSKEYDOWN, so it only needs to be handled in the
WM_SYSKEYDOWN case.
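A sketch of the relevant WndProc handling (the key helper is
hypothetical):

    #include <windows.h>

    static LRESULT CALLBACK wnd_proc(HWND hwnd, UINT msg, WPARAM wParam,
                                     LPARAM lParam)
    {
        switch (msg) {
        case WM_SYSKEYDOWN:
            // F10 is a system key, so it arrives here and not as
            // WM_KEYDOWN; handling it in both would process it twice
            if (wParam == VK_F10) {
                handle_key_down(wParam); // hypothetical helper
                return 0; // also suppresses default menu activation
            }
            break;
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }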
in certain circumstances the video was not redrawn even when the size
or the backing scale factor changed. this could lead to a lower
resolution output than intended.
now it redraws the video when screen properties or the window size
changes.
Add an "auto-safe" mode, mostly triggered by Ubuntu's nonsense to force
hwdec=vaapi in the global config file in their mpv package. But to be
honest it's probably something more people want.
This is implemented as an explicit whitelist. On Windows, HEVC/Intel is
sometimes broken, but it's still whitelisted, and in theory we'd need a
detailed whitelist of device names etc. (like for example browsers tend
to do). On OSX, videotoolbox is a pretty bad choice, but unfortunately
the only one, so it's whitelisted too. There may be a larger number of
hwdec wrappers that work anyway, and I'm for example ignoring Android.
Apparently there are two different options for controlling which
screen an mpv window goes onto: --fs-screen and --screen. The former
explicitly controls only which screen a fullscreened window goes onto,
but mpv does not appear to actually honor this option at runtime on
X11, so pressing f will always fullscreen to the screen mpv is
currently on. This makes the option of questionable usefulness for
starters.
Making it worse, if you use --screen=1 --fs, mpv will actually fullscreen
on screen 0, because --fs-screen isn't set. Instead of doing that, fall
back to whatever --screen is set to.
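The fix is essentially a one-liner (option field names are
illustrative):

    // --fs-screen unset: fall back to whatever --screen requested
    int fs_screen = opts->fsscreen_id >= 0 ? opts->fsscreen_id
                                           : opts->screen_id;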
(X11 does not support different per-screen DPI (or only via hacks), so
this is pretty simple. If other backends are going to implement this,
then they should send VO_EVENT_WIN_STATE if the DPI for the mpv window
changes by moving it to another screen or such.)
The size overflow check was inverted: instead of allowing reading only
the first dst_size bytes of the property, it allowed copying past the
property buffer (as returned by xlib). xlib doesn't return the size of
the buffer in bytes, so it has to be computed and checked manually.
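A readable version of the intended clamp (variable names illustrative;
a real implementation would also guard the multiplication against
overflow):

    // XGetWindowProperty() returns an element count and a per-element
    // format; format-32 elements are stored as long, which is why the
    // byte size has to be computed manually
    size_t elem_size = format == 32 ? sizeof(long) : (size_t)format / 8;
    size_t prop_bytes = (size_t)nitems * elem_size;
    size_t copy_size = prop_bytes < dst_size ? prop_bytes : dst_size;
    memcpy(dst, prop_data, copy_size);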
Wouldn't it be great if C allowed me to write the overflow check in a
readable way, so it doesn't trick me into writing dumb security bugs?
Relying on X security is even dumber than creating an X security bug,
though, so this was not a real problem. But I found that one specific
call tried to read more than what the property provided, so reduce that.
Also, len*ib obviously can't overflow, so there's an additional layer of
dumb to this whole thing.
While we're at dumb things, why the hell does xlib use "long" for 32 bit
types. It's a god damn pain.
libavcodec's nvdec wrapper can return invalid frames that do not have
any data fields set. This is not allowed by the API, but why would they
follow their own API?
Add a workaround to specifically detect this situation. In practice,
this should fall back to software decoding if it happens too often in a
row. (But single errors are still tolerated, because I don't know why.)
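A hedged sketch of the detection (the counter, threshold, and
surrounding state are illustrative):

    #include <stdbool.h>
    #include <libavutil/frame.h>

    static bool check_decoded_frame(struct dec_state *ds, AVFrame *frame)
    {
        if (!frame->buf[0]) {       // invalid: no data planes set
            if (++ds->invalid_in_a_row > 3)
                ds->force_software_fallback = true;
            return false;           // drop the frame, treat as an error
        }
        ds->invalid_in_a_row = 0;   // single errors are tolerated
        return true;
    }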
Untested due to lack of hardware from the regrettable graphics company.
Better do this here than deal with the moronic project we unfortunately
depend on.
See: #7185
These all have been replaced recently.
There was a leftover in window.swift. It couldn't have done anything
useful in the current state of the code, so drop these lines.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, gets rid of all the straight usage of the VO options
structure.
Done in a similar style to the Wayland common file: when reading the
value, the cached "payload" is used (see the sketch below).
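Roughly like this (mpv's m_config_cache API; the exact call sites are
illustrative):

    struct m_config_cache *cache =
        m_config_cache_alloc(vo, vo->global, &vo_sub_opts);
    struct mp_vo_opts *opts = cache->opts; // the cached "payload"

    // reading: pick up external option changes instead of VOCTRL_FULLSCREEN
    if (m_config_cache_update(cache) && opts->fullscreen != vo_is_fullscreen)
        set_fullscreen(opts->fullscreen);

    // writing: update the cached option and propagate it to the origin,
    // instead of signaling VO_EVENT_FULLSCREEN_STATE
    opts->fullscreen = vo_is_fullscreen;
    m_config_cache_write_opt(cache, &opts->fullscreen);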
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes the
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior, and it's as if setting window-scale
suddenly multiplied its argument by 2. Hopefully no user will get angry.
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop or not provide this
API.
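For illustration, a minimal sketch of the EGL 1.4-based X11 path (the
Wayland case is analogous; a real implementation would additionally
parse EGL_VERSION to prefer the core eglGetPlatformDisplay() on 1.5):

    #include <string.h>
    #include <X11/Xlib.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static EGLDisplay get_display(Display *x11_display)
    {
        // client extensions can be queried without a display if
        // EGL_EXT_client_extensions is present (NULL otherwise);
        // substring match for brevity, see the strstr() note below
        const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
        if (exts && strstr(exts, "EGL_EXT_platform_x11")) {
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                    eglGetProcAddress("eglGetPlatformDisplayEXT");
            if (get_platform_display) {
                EGLDisplay dpy = get_platform_display(EGL_PLATFORM_X11_EXT,
                                                      x11_display, NULL);
                if (dpy != EGL_NO_DISPLAY)
                    return dpy;
            }
        }
        // last resort: the legacy entry point
        return eglGetDisplay((EGLNativeDisplayType)x11_display);
    }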
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:
struct native_display {
    int platform_type;
    void *native_display;
};
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never trigger it.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably a FFmpeg bug, and
without this assert() would just crash somewhere further down the video
chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
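In sketch form, the resulting pattern:

    #include <assert.h>
    #include <libavcodec/avcodec.h>

    static int receive_frame_checked(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   // cleanly separated no-frame path
        if (ret < 0)
            return ret; // proper error code check
        assert(frame->buf[0]); // if this fires, lavc violated its own API
        return 1;       // got a frame
    }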
the old event tap has several problems, like no proper priority support
or having to set accessibility permissions for mpv or the terminal.
it is now replaced by the new MediaPlayer which has proper priority
support and isn't as greedy as the old tap. this only includes Media Key
support and not any of the other features included in the MediaPlayer
framework, like proper Now Playing data (we only set dummy data for now).
this is only available on macOS 10.12.2 and higher.
also removes some unnecessary redefines.
Fixes #6389
this removes the direct access of the mp_vo_opts struct via the vo struct
and replaces it with the m_config_cache usage. this updates the
fullscreen and window-minimized property via m_config_cache_write_opt
instead of the old mechanism via VOCTRL and event flagging. also use the
new VOCTRL_VO_OPTS_CHANGED event for fullscreen and border changes.
See commit 4e4252f916 and the following as an example how this would
have to be done if done properly.
Since I'm unable to test on OSX, and nobody is interested in fixing this
code (including myself, actually), just remove the deprecated
definitions to make sure the code still builds. This will break runtime
switching of fullscreen, ontop, border. (The way the minimized state is
reported was also deprecated, but commit 40c2f2eeb0 already broke it
anyway.)
...probably.
The EGL backend had a strange problem: when recreating the window, EGL
surface creation sometimes mysteriously failed. For example, keeping the
"_" key down (cycles video by default) destroys and recreates the window
in rapid succession, which will often enough show the "Could not create
EGL surface!" message.
This was puzzling because due to mpv's architecture, the X11 Window and
even the X11 Display were fully destroyed, the thread on which they ran
was destroyed, and then everything was recreated. There shouldn't have
been any state that could make subsequent EGL initialization fail.
It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
pretty crazy API (full of thread local and global state with weird
lifetime requirements), and for example it seems EGLDisplay cannot be
explicitly released, but apparently implicitly dies when the native
display is closed (at least EGL 1.5 claims eglTerminate() does _not_
invalidate the display, only certain objects linked to it). It appears
that Mesa still referenced at least the EGLSurface in some form,
leaving either a pointer or an X11 ID dangling, and when that randomly
matched during a later eglCreateWindowSurface() call, surface creation
failed.
Fix this by calling eglTerminate(), which supposedly destroys (or rather
unreferences) contexts and surfaces created from the display (but
absurdly not the display itself).
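In sketch form, the teardown order implied by the fix (handles assumed
to already exist):

    // release anything current on this thread first
    eglMakeCurrent(egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                   EGL_NO_CONTEXT);
    eglDestroyContext(egl_display, egl_context);
    eglDestroySurface(egl_display, egl_surface);
    // unreferences all remaining objects created from the display,
    // but absurdly not the display itself
    eglTerminate(egl_display);
    // the EGLDisplay dies implicitly with the native display
    XCloseDisplay(x11_display);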
Now why can't you just destroy the display? If it's implicitly
invalidated, why can't it just call eglTerminate() implicitly when this
happens? Did Mesa do something wrong when they somehow didn't
automatically remove the dangling object (so I could claim not to be
responsible for the bug)? Who the fuck knows, and I'm too tired to
figure this out (both because it's late, and because I'm tired of this
EGL crap API).
Still not sure if the code is correct now. I think EGL was designed to
maximize implementation and API-use complications. How else could you
possibly come up with something like the EGLDisplay life cycle? Or am I
just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.
Fixes: #7129
Get rid of the legacy VOCTRL (which will be removed later). I'm not sure
what exactly fullscreen was supposed to do (toggling between using the
entire display, and what --geometry forced?), but I don't care, just get
rid of the VOCTRL. PRs to fix regressions caused by this will be
accepted, but personally I don't care since this is excessively fringe
and obscure.
The wayland backend needs to keep track of whether or not a window is
hidden for presentation time. There is no presentation feedback when a
window is hidden which means we shouldn't be sending information to the
vo_sync_info structure (i.e. just leave it all at -1). This seemed to
work fine, but recent changes in one notable compositor (Sway; it was
probably always broken in Weston, actually) altered the presentation
time behavior.
For reasons that aren't clear, there is a greater than 16.666ms delay
between the first presentation time event and the second presentation
time event (compositor latency?) when you switch back to an mpv window
after it is hidden for long enough (a few seconds). When using
presentation time, this causes mpv to feed in some bad values in its
vsync timing mechanism thus causing the A/V desync spike as described in
issue #7223.
This solution is not really ideal. It would be better if the
presentation time events received by the compositors did not have the
aforementioned inconsistency. However since this occurs in both Sway and
Weston and clients can't really fight compositors in wayland-world,
here's a reasonable enough workaround. Basically, just add a slight
delay before we start feeding information into the vo_sync_info again.
We already do this when the window is hidden, so it's not a huge leap.
The delay chosen here is arbitrary, and it basically just recycles the
same parameters used to detect if a window is hidden. If
vo_wayland_wait_frame times out 60 times in a row (or whatever your
monitor's refresh rate is), then we assume the window is hidden. This is
a pretty safe assumption; something has to be terribly wrong for you to
miss 60 vblanks in a row while a window is on the screen.
In this case, we basically just do the reverse of that. If mpv receives
60 frame callbacks in a row (or whatever your monitor's refresh rate
is), then it assumes the window is not hidden. Previously, as soon as it
received 1 frame callback it was declared not hidden. Essentially,
there's just 1 second of delay after reshowing a window before the
presentation time statistics are used again. This should be more than
enough time to skip over the weird, inconsistent presentation time
behavior and avoid the A/V desync spike.
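In sketch form (struct and field names illustrative):

    static bool presentation_usable(struct vo_wayland_state *wl,
                                    bool got_frame_callback)
    {
        if (got_frame_callback) {
            if (wl->callbacks_in_a_row < wl->refresh_rate)
                wl->callbacks_in_a_row++;
        } else {
            wl->callbacks_in_a_row = 0; // timed out: assume hidden again
        }
        // previously this was effectively "callbacks_in_a_row >= 1"
        return wl->callbacks_in_a_row >= wl->refresh_rate;
    }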
Fixes #7223
drmModeAddFB is legacy, and might not pick the pixel format you
expect, depending on your driver. Use drmModeAddFB2 which specifies
this explicitly using a fourcc.
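A minimal sketch (buffer parameters assumed to exist); drmModeAddFB2
takes an explicit fourcc, so the driver can't guess a different format
from depth/bpp:

    #include <xf86drmMode.h>
    #include <drm_fourcc.h>

    uint32_t handles[4] = { bo_handle };
    uint32_t pitches[4] = { stride };
    uint32_t offsets[4] = { 0 };
    uint32_t fb_id;
    int ret = drmModeAddFB2(drm_fd, width, height, DRM_FORMAT_XRGB8888,
                            handles, pitches, offsets, &fb_id, 0);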
Seems like some drivers only increment msc every other page flip when
running in interlaced mode (I'm looking at you nouveau). I.e. it seems
to be incremented at the frame rate, rather than the field rate.
Obviously we can't work with this, so shame the driver and bail.
On intel this isn't an issue, as msc is incremented at field rate
there.
This means presentation feedback won't work correctly in interlaced
modes with those drivers, but who in their right mind uses an
interlaced mode these days, anyway?
In theory, using strstr() to search for extensions is a bad idea,
because some extension names might be prefixes for other names, so you
could get false positives. gl_check_extension() avoids this case.
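What gl_check_extension does, in sketch form: match only complete
space-delimited tokens, so that e.g. "GL_EXT_foo" can't match
"GL_EXT_foobar".

    #include <stdbool.h>
    #include <string.h>

    static bool check_extension(const char *exts, const char *name)
    {
        size_t len = strlen(name);
        for (const char *p = exts; (p = strstr(p, name)); p += len) {
            bool start_ok = p == exts || p[-1] == ' ';
            bool end_ok = p[len] == '\0' || p[len] == ' ';
            if (start_ok && end_ok)
                return true;
        }
        return false;
    }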
It's not clear whether this is really needed; maybe not. Surely the EGL
committee is aware of these practices (many GL clients do this, which is
why it's widely considered bad practice), and would avoid defining new
extension names which contain existing names as sub-strings, but
whatever.