* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, this gets rid of all direct usage of the VO options
structure.
Done in a similar style to the Wayland common file: when reading the
value, the "payload" from the cache is used.
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior: it's as if setting window-scale
suddenly multiplies its argument by 2. Hopefully no user will get angry.
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop this API or stop
providing it.
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
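Roughly, the selection logic amounts to something like this (a
simplified sketch, not the actual mpv code; the function name is made up
and error handling is omitted):

    #include <string.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // Sketch: prefer the EGL_EXT_platform_base path, fall back to the
    // legacy eglGetDisplay() guess if the extensions are missing.
    // (Substring matching kept naive here for brevity.)
    static EGLDisplay get_x11_egl_display(void *x11_display)
    {
        const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
        if (exts && strstr(exts, "EGL_EXT_platform_base") &&
            strstr(exts, "EGL_EXT_platform_x11"))
        {
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                    eglGetProcAddress("eglGetPlatformDisplayEXT");
            if (get_platform_display) {
                EGLDisplay dpy = get_platform_display(EGL_PLATFORM_X11_EXT,
                                                      x11_display, NULL);
                if (dpy != EGL_NO_DISPLAY)
                    return dpy;
            }
        }
        // Legacy EGL 1.4 path: the implementation has to guess the platform.
        return eglGetDisplay((EGLNativeDisplayType)x11_display);
    }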
Unfortunately, correct functioning of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:

    struct native_display {
        int platform_type;
        void *native_display;
    };
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never have all planes set to NULL.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably an FFmpeg bug, and
without this assert() would just crash somewhere further down the video
chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
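For illustration, the intended pattern is roughly this (a minimal
sketch, not the actual mpv code):

    #include <assert.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/error.h>

    // Sketch: decide "frame or no frame" from the return code, and only
    // assert the plane invariant on the success path.
    static int receive_one_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   // no frame available (not an error)
        if (ret < 0)
            return ret; // decoding error
        // If this fails, lavc violated its own API.
        assert(frame->buf[0]);
        return 1;       // got a frame
    }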
the old event tap has several problems, like no proper priority support
or having to set accessibility permissions for mpv or the terminal.
it is now replaced by the new MediaPlayer, which has proper priority
support and isn't as greedy as the old approach. this only includes
Media Key support and not any of the other features included in the
MediaPlayer framework, like proper Now Playing data (only dummy data is
set for now).
this is only available on macOS 10.12.2 and higher.
also removes some unnecessary redefines.
Fixes #6389
this removes direct access to the mp_vo_opts struct via the vo struct
and replaces it with m_config_cache usage. this updates the fullscreen
and window-minimized properties via m_config_cache_write_opt instead of
the old mechanism via VOCTRL and event flagging. also use the new
VOCTRL_VO_OPTS_CHANGED event for fullscreen and border changes.
See commit 4e4252f916 and the following ones as an example of how this
would have to be done if done properly.
Since I'm unable to test on OSX, and nobody is interested in fixing this
code (including myself, actually), just remove the deprecated
definitions to make sure the code still builds. This will break runtime
switching of fullscreen, ontop, border. (The way the minimized state is
reported was also deprecated, but commit 40c2f2eeb0 already broke it
anyway.)
...probably.
The EGL backend had a strange problem: when recreating the window, EGL
surface creation sometimes mysteriously failed. For example, keeping the
"_" key down (cycles video by default) destroys and recreates the window
in rapid succession, which will often enough show the "Could not create
EGL surface!" message.
This was puzzling because due to mpv's architecture, the X11 Window and
even the X11 Display were fully destroyed, the thread on which they ran
was destroyed, and then everything was recreated. There shouldn't have
been any state that could make subsequent EGL initialization fail.
It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
pretty crazy API (full of thread-local and global state with weird
lifetime requirements), and for example it seems EGLDisplay cannot be
explicitly released, but apparently implicitly dies when the native
display is closed (at least EGL 1.5 claims eglTerminate() does _not_
invalidate the display, only certain objects linked to it). It appears
that Mesa still referenced at least EGLSurface in some form, and either
some pointer or some X11 ID was dangling, and when it randomly matched
when eglCreateWindowSurface() was called, it failed.
Fix this by calling eglTerminate(), which supposedly destroys (or rather
unreferences) contexts and surfaces created from the display (but
absurdly not the display itself).
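The resulting teardown order is roughly this (a hedged sketch; mpv's
actual code differs in structure):

    #include <EGL/egl.h>

    // Sketch: unbind, destroy surface and context, then terminate the
    // display so the implementation drops its internal references.
    static void egl_uninit(EGLDisplay dpy, EGLContext ctx, EGLSurface surf)
    {
        if (dpy == EGL_NO_DISPLAY)
            return;
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        if (surf != EGL_NO_SURFACE)
            eglDestroySurface(dpy, surf);
        if (ctx != EGL_NO_CONTEXT)
            eglDestroyContext(dpy, ctx);
        eglTerminate(dpy);   // "unreferences" objects created from dpy
        eglReleaseThread();  // also drop thread-local EGL state
    }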
Now why can't you just destroy the display? If it's implicitly
invalidated, why can't it just call eglTerminate() implicitly when this
happens? Did Mesa do something wrong when they somehow didn't
automatically remove the dangling object (so I could claim not to be
responsible for the bug)? Who the fuck knows, and I'm too tired to
figure this out (both because it's late, and because I'm tired of this
EGL crap API).
Still not sure if the code is correct now. I think EGL was designed to
maximize implementation and API-use complications. How else could you
possibly come up with something like the EGLDisplay life cycle? Or am I
just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.
Fixes: #7129
Get rid of the legacy VOCTRL (which will be removed later). I'm not sure
what exactly fullscreen was supposed to do (toggling between using the
entire display, and what --geometry forced?), but I don't care, just get
rid of the VOCTRL. PRs to fix regressions caused by this will be
accepted, but personally I don't care since this is excessively fringe
and obscure.
The wayland backend needs to keep track of whether or not a window is
hidden for presentation time. There is no presentation feedback when a
window is hidden, which means we shouldn't be sending information to the
vo_sync_info structure (i.e. just leave it all at -1). This seemed to
work fine, but recent changes in one notable compositor (Sway; it was
probably always broken in Weston, actually) changed the presentation
time behavior.
For reasons that aren't clear, there is a greater than 16.666ms delay
between the first presentation time event and the second presentation
time event (compositor latency?) when you switch back to an mpv window
after it is hidden for long enough (a few seconds). When using
presentation time, this causes mpv to feed in some bad values in its
vsync timing mechanism thus causing the A/V desync spike as described in
issue #7223.
This solution is not really ideal. It would be better if the
presentation time events received by the compositors did not have the
aforementioned inconsistency. However since this occurs in both Sway and
Weston and clients can't really fight compositors in wayland-world,
here's a reasonable enough workaround. Basically, just add a slight
delay before we start feeding information into the vo_sync_info again.
We already do this when the window is hidden, so it's not a huge leap.
The delay chosen here is arbitrary, and it basically just recycles the
same parameters used to detect if a window is hidden. If
vo_wayland_wait_frame times out 60 times in a row (or whatever your
monitor's refresh rate is), then we assume the window is hidden. This is
a pretty safe assumption; something has to be terribly wrong for you to
miss 60 vblanks in a row while a window is on the screen.
In this case, we basically just do the reverse of that. If mpv receives
60 frame callbacks in a row (or whatever your monitor's refresh rate
is), then it assumes the window is not hidden. Previously, as soon as it
received 1 frame callback it was declared not hidden. Essentially,
there's just 1 second of delay after reshowing a window before the
presentation time statistics are used again. This should be more than
enough time to skip over the weird, inconsistent presentation time
behavior and avoid the A/V desync spike.
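In pseudo-C, the symmetric counters amount to something like this (field
and function names are made up for illustration, not the actual
vo_wayland internals):

    #include <stdbool.h>

    struct hidden_state {
        bool hidden;
        int timeouts;     // consecutive vo_wayland_wait_frame() timeouts
        int callbacks;    // consecutive frame callbacks received
        int refresh_rate; // e.g. 60
    };

    static void on_wait_frame_timeout(struct hidden_state *st)
    {
        st->callbacks = 0;
        if (++st->timeouts >= st->refresh_rate)
            st->hidden = true;   // ~1s without vblanks: assume hidden
    }

    static void on_frame_callback(struct hidden_state *st)
    {
        st->timeouts = 0;
        if (++st->callbacks >= st->refresh_rate)
            st->hidden = false;  // ~1s of callbacks: assume visible again
    }

    // Presentation feedback is only fed into vo_sync_info while !hidden.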
Fixes #7223
drmModeAddFB is legacy, and might not pick the pixel format you
expect, depending on your driver. Use drmModeAddFB2 which specifies
this explicitly using a fourcc.
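The call in question changes roughly like this (a sketch; buffer
handles, strides and the chosen fourcc depend on the actual framebuffer
setup):

    #include <stdint.h>
    #include <drm_fourcc.h>
    #include <xf86drmMode.h>

    // Sketch: specify the pixel format explicitly via a fourcc instead of
    // letting the legacy drmModeAddFB() pick one based on depth/bpp.
    static int add_fb(int fd, uint32_t w, uint32_t h,
                      uint32_t handle, uint32_t stride, uint32_t *fb_id)
    {
        uint32_t handles[4] = {handle};
        uint32_t pitches[4] = {stride};
        uint32_t offsets[4] = {0};
        return drmModeAddFB2(fd, w, h, DRM_FORMAT_XRGB8888,
                             handles, pitches, offsets, fb_id, 0);
    }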
Seems like some drivers only increment msc every other page flip when
running in interlaced mode (I'm looking at you, nouveau). I.e. it seems
to be incremented at the frame rate, rather than the field rate.
Obviously we can't work with this, so shame the driver and bail.
On Intel this isn't an issue, as msc is incremented at field rate
there.
This means presentation feedback won't work correctly in interlaced
modes with those drivers, but who in their right mind uses an
interlaced mode these days, anyway?
In theory, using strstr() to search for extensions is a bad idea,
because some extension names might be prefixes for other names, so you
could get false positives. gl_check_extension() avoids this case.
It's not clear whether this is really needed; maybe not. Surely the EGL
committee is aware of these practices (many GL clients do this, which is
why it's widely considered bad practice), and would avoid defining new
extension names which contain existing names as sub-strings, but
whatever.
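For reference, a whole-token check along the lines of gl_check_extension()
looks roughly like this (a generic sketch, not the mpv function itself):

    #include <stdbool.h>
    #include <string.h>

    // Only accept a match delimited by spaces or the string boundaries,
    // so "EGL_EXT_foo" does not match "EGL_EXT_foo_bar".
    static bool check_ext(const char *exts, const char *name)
    {
        if (!exts || !name || !name[0])
            return false;
        size_t len = strlen(name);
        const char *p = exts;
        while ((p = strstr(p, name))) {
            bool start_ok = p == exts || p[-1] == ' ';
            bool end_ok = p[len] == '\0' || p[len] == ' ';
            if (start_ok && end_ok)
                return true;
            p += len;
        }
        return false;
    }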
Adding an ifdef mess to deal with insufficient system headers is not
appealing. It's easier to just provide the definitions manually. This sucks
a bit too, but it's the approach we've been using with OpenGL headers in
general, and I think that worked pretty well.
On systems whose EGL headers do not define these extensions, the build
still failed due to missing ..._PLANE3_... defines. Although we supplied
missing EGL_LINUX_DMA_BUF_EXT defines manually, the PLANE3 ones are
actually from a separate extension, which explains why they were not
added to the fallback defines in the first place.
Add them, now it builds without the eglext.h include.
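The fallback pattern is simply this (values as published in the Khronos
eglext.h for EGL_EXT_image_dma_buf_import_modifiers; double-check
against the registry):

    #ifndef EGL_DMA_BUF_PLANE3_FD_EXT
    #define EGL_DMA_BUF_PLANE3_FD_EXT     0x3440
    #define EGL_DMA_BUF_PLANE3_OFFSET_EXT 0x3441
    #define EGL_DMA_BUF_PLANE3_PITCH_EXT  0x3442
    #endif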
See #6838.
When frame-stepping with display-sync mode enabled in high framerate
video, the frame was sometimes not redrawn correctly. Only the first OSD
interaction (or something similar) made it visible.
In this case, the core schedules many frames as dropped (because it's
ignorant of pausing/frame-stepping, as in theory the player is _not_
paused during frame-stepping, only at the end of it). There's a race
between the VO rendering the queued frame, and the core calling
vo_set_paused() after it has queued the frame. If the latter happens
first, the existing logic to redraw the previous dropped frame does
things correctly. If the former happens, the frame is not redrawn
automatically, but will be redrawn on the next user input (or if OSD is
enabled, and the pause state change updates it, which leads to an
immediate redraw).
Fix this by never actually dropping a frame in paused mode. The request
by the core to drop it is simply ignored.
Maybe this could be done slightly nicer by updating the pause state with
the VO atomically. Then we wouldn't have the frame drop counter going up
either (it's actually dropped, but then redrawn; but I doubt any user,
or me in a few weeks, would understand this). But I'm not really
interested in polishing this by increasing the complexity of the
frame-step code.
To aid in discoverability, and to address the most common case
directly, I'm adding an 'auto' mode for the window controls. In
this case, we will show the controls if there is no window border
and hide them if there are borders. This also respects the option
being toggled at runtime.
To ensure that it works in the wayland case, I've also made sure
that the wayland code explicitly forces the option to false if
decoration support is missing.
Based on feedback, I've split the config in two, with one option
for whether controls are active, and one for alignment. These are
new enough that we can get away with ignoring compatibility.
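For illustration, usage would look something like this in
script-opts/osc.conf (assuming the final option names are windowcontrols
and windowcontrols_alignment):

    # script-opts/osc.conf (option names assumed)
    windowcontrols=auto
    windowcontrols_alignment=right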
This small regression was introduced by #7216. Previously, the wayland
backend used a trick which kept track of the previous fullscreen state
and used that logic for showing the cursor. Since vo_opts now keeps
track of the current fullscreen state, most of this stopped being
necessary.
However, there was one edge case where the cursor didn't
behave the same: passing a fullscreen flag for the initial window. The
cursor would initially be visible here, which is not desirable. This can
be remedied pretty easily by just setting the cursor visibility to false
if the pointer entry event occurs on fullscreen. The only thing we need
to do is to make sure that the autohide delay isn't completely disabled
(i.e. the cursor is always visible). Hence the need for the previous
commit.
The remaining legacy VOCTRLs are for the fullscreen and border
properties. For fullscreen, this is largely just replacing the private
state field with the vo option, but there are small semantic
differences that we need to be careful of.
For the border setting, it's trivial as we don't have external
mechanisms for changing the state, but I also can't test it as
I'm not using a compositor that supports it.
I wanted to get this done quickly, as I introduced the new VOCTRL
behaviour for minimize and maximize and it was immediately made
legacy, so it's best to purge it before anyone gets confused.
I did not sort out fullscreen as that's more involved and not something
I've educated myself about yet. But I did replace the VOCTRL_FULLSCREEN
usage with the new option change mechanism as that seemed simple
enough.
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
glx.h recursively includes gl.h, and there is no way to prevent this.
Old Mesa defines some GL symbols, but not all that mpv needs. In
particular, one user who was too lazy to update his ancient Ubuntu and
preferred to bother us with obscure bug reports, had Mesa headers which
did not define GL 3.2, so GLsync was not defined.
All in all I still think the idea of providing the GL API definitions
ourselves was a good idea; just GLX should have been isolated better.
But isolating GLX now is too much effort.
Not sure why I'm bothering with this at all.
Fixes: #7201 (unconfirmed)
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.
I think this should be useful to perform some automated tests, maybe.
Fixes: #7194
This function always expects the GL struct pointer to be a talloc
allocation. So far so bad. But the terrible thing is that _lots_ of code
in mpv didn't quite get this (including the code which introduced this
usage). For example, in context_glx.c you see this:
    struct priv {
        GL gl;
        ...
GL is not a talloc allocation, but since it's at the start of a talloc
allocation, it works anyway. So far so bad. But the really terrible
thing is that mpgl_load_functions2() calls talloc_free_children() on the
GL pointer, which means that all of priv's talloc children get freed.
This would be unintentional and could create dangling pointers. And this
happens at about a dozen callers. I'm amazed it didn't break anywhere
yet.
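To see why this is dangerous, consider this isolated example
(hypothetical, just using the talloc API directly; GL as in the struct
shown above):

    #include <talloc.h>

    struct priv {
        GL gl;      // embedded at offset 0, so &p->gl == (void *)p
        char *name; // talloc child of priv
    };

    static void demo(void)
    {
        struct priv *p = talloc_zero(NULL, struct priv);
        p->name = talloc_strdup(p, "some state");
        // Passing &p->gl actually passes p itself, so p->name is freed
        // here and left dangling:
        talloc_free_children(&p->gl);
        talloc_free(p);
    }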
Removing this anti-pattern by making GL "implicitly" a talloc
allocation would be too much effort at this point. So just manually free
the only allocation that the function attached to GL.
Should restore full functionality.
The initial state setting is a bit shoddy (instead of setting the
properties before map, we use the WM commands to change it after, so you
will see the normal window state for a moment; the WM commands do not
work on unmapped windows, so fixing this would require more code).
- remove VOCTRL_FULLSCREEN and VOCTRL_GET_FULLSCREEN
- have your own m_config_cache for the fullscreen option
  (vo->opts_cache cannot be used because you lose per-option change
  notifications, and it'd be a mess anyway)
- use VOCTRL_VO_OPTS_CHANGED to update it
  (it's used for convenience)
- when updating it, check for the fullscreen option
  (wasn't sure how to do it best; currently, it compares the raw
  option pointers, but this could be changed)
- do not send VO_EVENT_FULLSCREEN_STATE on FS change
- instead write the option on FS change
  (assign in opt. struct + m_config_cache_write_opt)
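Put together, the pattern looks roughly like this (a simplified sketch;
the struct layout and the apply_fullscreen/x11_report_fullscreen helpers
are illustrative, only the m_config_cache calls are the real API):

    struct vo_x11_state {
        struct m_config_cache *opts_cache;
        struct mp_vo_opts *opts;
    };

    static void x11_opts_init(struct vo *vo, struct vo_x11_state *x11)
    {
        x11->opts_cache = m_config_cache_alloc(x11, vo->global, &vo_sub_opts);
        x11->opts = x11->opts_cache->opts;
    }

    // Handler for VOCTRL_VO_OPTS_CHANGED:
    static void x11_opts_changed(struct vo_x11_state *x11)
    {
        void *opt;
        while (m_config_cache_get_next_changed(x11->opts_cache, &opt)) {
            if (opt == &x11->opts->fullscreen)
                apply_fullscreen(x11); // react to the new option value
        }
    }

    // When the WM reports a fullscreen change, write the option back
    // instead of sending VO_EVENT_FULLSCREEN_STATE:
    static void x11_report_fullscreen(struct vo_x11_state *x11, bool fs)
    {
        x11->opts->fullscreen = fs;
        m_config_cache_write_opt(x11->opts_cache, &x11->opts->fullscreen);
    }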
We primarily care about pseudo-decorations for wayland, where
the compositor may not support server-side decorations. So let's
implement the minimize and maximize commands and return the
maximized window state.
If we want to implement window pseudo-decorations via OSC, we need a
way to tell the vo to minimize and maximize the window. Today, we have
minimized as a read-only property, and no property for maximized.
Let's make minimized settable and add a maximized property to go with
it. In turn, that requires us to add VOCTRLs for minimizing or
maximizing a window, and an additional WIN_STATE to indicate a
maximized window.
At least with gnome-shell (I know, I know), the compositor does
not provide the old window size when leaving the maximized state.
Instead, we get a toplevel_config event with a 0x0 size and no
additional states.
Today, we already save the window geometry to restore it when leaving
the fullscreen state, so we just need a small change for it to
kick in for leaving the maximized state. If I read this correctly,
we'll still respect the size passed by a compositor that actually
provides the old size.
Rather than hard-coding the edge grab zone width, we can make it
user configurable. It seems worthwhile to have separate configs
for pointer and touch usage as the defaults should be different,
and a user might have both input methods in use.
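As a usage example (the option names here are assumptions based on the
description; check the manual for the exact spelling and defaults):

    # mpv.conf (option names assumed)
    wayland-edge-pixels-pointer=10
    wayland-edge-pixels-touch=64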
Today, we support resizing wayland windows when we detect a touch
event in a defined grab zone. As part of implementing
pseudo-decorations, we should have equivalent functionality for
mouse input. And if we detect support for actual decorations we
will not activate the grab zone as the decorations will provide this.
Use x11->opts instead of vo->opts. This doesn't matter currently, and
x11->opts is actually set to vo->opts. However, there's a chance that
either option access changes, or that the way backends integrate with
struct vo changes. This is just a preemptive change to make this less of
a mess, and it's generally a good idea to reduce accesses to struct vo
anyway.
This is preparation to get rid of the option-to-property bridge
(mp_on_set_option). This is a pretty insane thing that redirects
accesses to options to properties. It was needed in the ever ongoing
transition from something to... something else.
A good example for the need of this bridge is applying profiles at
runtime. This obviously goes through the config parser, but should also
make all changes effective, for which traditionally the property layer
is used.
There isn't much left that needs this bridge. This commit changes a
bunch of options (which also have a property implementation) to use
option change notifications instead. Many of the properties are still
left, but perform unrelated functions like OSD formatting.
This should be mostly compatible. There may be some subtle behavior
changes. For example, "hwdec" and "record-file" do not check for changes
anymore before applying them, so writing the current value to them
suddenly does something, while it was ignored before.
DVB changes untested, but should work.