On some platforms the ZPOS property might exist, but be immutable.
This is at least the case on Intel Sandy Bridge since Linux kernel
Trying to set an immutable property will cause
drmModeAtomicCommit to fail with -EINVAL.
On other platforms we might want to set ZPOS to tweak the layering of
planes.
To reconcile these two, simply have drm_object_set_property check if a
property is immutable before attempting to add it to the atomic
commit, and return an error code instead (which is, as before,
ignored in the case of ZPOS, since we don't strictly need it).
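A minimal sketch of the check, assuming a simplified wrapper around plain libdrm calls (the helper signature here is illustrative, not mpv's actual one):

#include <errno.h>
#include <stdint.h>
#include <xf86drmMode.h>

static int drm_object_set_property(drmModeAtomicReq *request,
                                   uint32_t object_id,
                                   drmModePropertyRes *prop, uint64_t value)
{
    // refuse immutable properties instead of letting the whole atomic
    // commit fail with -EINVAL later
    if (prop->flags & DRM_MODE_PROP_IMMUTABLE)
        return -EINVAL;
    return drmModeAtomicAddProperty(request, object_id, prop->prop_id, value);
}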
This originally existed as a hack for weston. In certain scenarios, a
frame taking too long to render would cause vo_wayland_wait_frame to
time out, which would result in a ton of dropped frames. The naive
solution was to just add a slight delay to the time value. If a
frame took too long, it would still likely fall under the timeout value and
all was well. This was exposed to the user since the default delay
(1000) was completely arbitrary.
However, with presentation time, this doesn't appear to be necessary.
Fresh frames that take longer than the display's refresh rate (16.666 ms
in most cases) behave well in Weston. The other two main compositors
without presentation time (GNOME and Plasma) also do not
experience any ill effects. It's better not to overcomplicate things, so
this "feature" can be removed now.
Add support for setting window-minimized and window-maximized in
Windows. The minimized and maximized state can be set independently.
When the window is minimized, the value of window-maximized will
determine whether the window is restored to the maximized state or not.
Changing state is done with ShowWindow(), which has commands that change
the window state and activate it (e.g. SW_RESTORE) and commands that
change the window state without activating it (e.g. SW_SHOWNOACTIVATE).
It would be nice if we could use commands that don't activate the
window, so scripts could change the window state in the background
without bringing it to the foreground, but there are some problems with
that. There is no command to maximize a window without activating it, so
SW_MAXIMIZE is used instead. Also, restoring a window from the minimized state
without activating it seems buggy. On my Windows 10 1909 PC, it always
moves the window to the back of the z-order. SW_RESTORE is used instead
of SW_SHOWNOACTIVATE because of this.
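A rough sketch of the state-to-command mapping described above; the boolean parameters stand in for mpv's option fields and are illustrative:

#include <stdbool.h>
#include <windows.h>

static void apply_window_state(HWND hwnd, bool minimized, bool maximized)
{
    int cmd;
    if (minimized) {
        cmd = SW_SHOWMINNOACTIVE;   // minimize without activating
    } else if (maximized) {
        cmd = SW_MAXIMIZE;          // no non-activating variant exists
    } else {
        // SW_SHOWNOACTIVATE is buggy when leaving the minimized state
        // (z-order), so accept the activation and use SW_RESTORE
        cmd = SW_RESTORE;
    }
    ShowWindow(hwnd, cmd);
}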
This also changes the way the window is initially shown. Previously, the
window was made visible as a consequence of the SWP_SHOWWINDOW flag in
the first call to SetWindowPos. In order to set the initial minimized or
maximized state of the window, the window is shown with the ShowWindow
function instead, where the ShowWindow command is determined by whether
the window should be initially maximized or minimized.
Even when showing the window normally, we should still call ShowWindow
with the SW_SHOW command instead of using SetWindowPos, since the first
call a process makes to ShowWindow(SW_SHOW) has special behaviour
where it uses the show command in the process' STARTUPINFO instead of
the command passed to the function, which should fix #5724.
Note: While changes to window-minimized while in fullscreen mode should
work as expected, changing window-maximized while in fullscreen does not
work and won't result in the window changing state, even after leaving
fullscreen. For this to work correctly, the fullscreen logic needs to be
changed to apply the new maximized state on leaving fullscreen.
Fixes: #5724
Fixes: #7351
it was possible for mouse events to be triggered when the core was
already being shut down. to prevent this, properly close and remove the
window and additionally remove the reference to the MPVHelper object.
this deprecates the old cocoa-backend-only option and moves it to the
general macOS ones. add support for the new option in the cocoa-cb
layer creation and use the new option in the old cocoa backend.
Fixes #7272
due to the bundle config the icon is set automatically via the bundle
system mechanisms. this also makes it possible to set the icon to a
custom one with the standard macOS copy paste method via the file info
dialogue.
Fixes #6874
As documented on struct mp_hwdec_ctx, hw_imgfmt specifies the hardware
surface wrapper format for which supported_formats is valid. If this was
not set, f_hwtransfer ignored supported_formats, and assumed all formats
were supported.
Allow the --window-maximized and --window-minimized flags to actually
work when the player is started on wayland. If the compositor doesn't
support maximization or minimization, then these options just do
nothing.
Fixes #7345
There was a multitude of issues with cursor handling in wayland:
behavior seemed to vary for strange reasons across compositors, and
bad things were being done in wayland_common. The problem is complicated
and involved fullscreen states being set incorrectly under certain
instances and so on. The best solution is to just remove most of the
extra cruft. In handle_toplevel_config, instead of automatically
assuming is_fullscreen and is_maximized are false, we should use
whatever value is currently set in vo_opts, which matters when we
initially launch the window.
hwdec_vaapi tries to probe all available surface formats in advance. For
that, we iterate over _all_ profiles in an attempt to collect possible
surface formats. This means we try profiles we normally wouldn't use for
decoding or filtering, and which could be for "unrelated" services.
It seems some drivers report at least one profile, for which
vaQueryConfigEntrypoints() fails (because the profile is not supported;
not sure why it lists it, then). So turn the error message into a
verbose message to avoid confusing output.
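Roughly what the probing loop looks like with the demoted message; plain VA-API calls, with mpv's logging replaced by fprintf for the sake of a self-contained sketch:

#include <stdio.h>
#include <stdlib.h>
#include <va/va.h>

static void probe_profile(VADisplay display, VAProfile profile)
{
    int max = vaMaxNumEntrypoints(display);
    VAEntrypoint *entrypoints = malloc(max * sizeof(*entrypoints));
    int num = 0;
    VAStatus status = vaQueryConfigEntrypoints(display, profile,
                                               entrypoints, &num);
    if (status != VA_STATUS_SUCCESS) {
        // some drivers list profiles they can't actually query; harmless
        // while probing, so report it at verbose level, not as an error
        fprintf(stderr, "vaQueryConfigEntrypoints(): %s\n",
                vaErrorStr(status));
        free(entrypoints);
        return;
    }
    // ... collect surface formats for the 'num' supported entrypoints ...
    free(entrypoints);
}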
Fixes: #7347
This shouldn't really matter, but it's probably best to avoid.
vo_wayland_control would execute set_cursor_visibility while wl->pointer
existed but it didn't check if wl->pointer_id existed. So
wl_pointer_set_cursor could be called with a null surface and an id of 0.
Instead, just wait until we have an actual, non-zero pointer id so that
the cursor is set with the correct, actual id and not a fictitious 0 id.
This ensures that the pointer isn't set until it enters the wl_surface
which is what we want.
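A sketch of the guard; the struct and its fields are illustrative stand-ins for mpv's vo_wayland_state, and the real code uses the cursor image's hotspot instead of (0, 0):

#include <stdbool.h>
#include <stdint.h>
#include <wayland-client.h>

struct wl_state {
    struct wl_pointer *pointer;
    uint32_t pointer_id;               // serial of the last pointer enter
    struct wl_surface *cursor_surface;
};

static bool set_cursor_visibility(struct wl_state *wl, bool on)
{
    // don't touch the cursor until a pointer enter gave us a real id
    if (!wl->pointer || !wl->pointer_id)
        return false;
    wl_pointer_set_cursor(wl->pointer, wl->pointer_id,
                          on ? wl->cursor_surface : NULL, 0, 0);
    return true;
}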
at the time of the initial dpi query the window is not instantiated yet.
we use a proper fallback in that case, e.g. the configured target screen
or the main screen if none is set.
also fix a weird oversight and add a small optimisation.
Theoretically possible (and quite unlikely due to the small texture
size). The code was originally written with the assumption that texture
allocations can't fail, and it was never updated out of laziness.
Untested.
It turns out that gnome wayland still has very serious issues that make
it unusable for playback with mpv. Other compositors mostly behave fine
(Plasma is just missing features but it's not seriously broken), so GNOME
gets the special honor of having a warning printed out. The only
solution for GNOME users at this time of writing is to either use the
Xorg session or use another wayland compositor.
In the distant past, the cuviddec-backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.
Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.
And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.
Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.
Fixes#7295.
In vaapi 1.1.0 (which confusingly is libva release 2.1.0), they
introduced a new surface export API that is more efficient, and
we've been supporting that and the old API ever since (Feb 2018).
If we drop support for the old API, we can do some fairly nice cleanup
of the code.
Note that the pkgconfig entries are explicitly versioned by the API
version and not the library version. I confirmed this against the
upstream pkgconfig files.
As we are less and less interested in vdpau, with nvdec and vaapi
being better choices in general on nvidia and AMD respectively, we
might consider removing direct_mode, where we bypass the vdpau
mixer and work directly with yuv textures. Normally, working with
yuv textures would be great, but vdpau built in an assumption that
all frames are delivered as separate fields, causing us to have
to re-interleave them.
nvidia then introduced a new OpenGL extension that can return the
yuv frames as frames, but we can't just unconditionally switch to
that as we'd want to keep supporting older hardware where the drivers
are no longer getting new features. The end result is that we
wouldn't be able to get rid of the old code paths.
Removing direct_mode means we always use the mixer, and work with
rgba frame textures. There are some theoretical limitations to
this, but in practice they probably don't matter much - unsupported
colourspaces don't matter because without 10bit decoding support,
we can't use them anyway, and apparently we're not doing separate
chroma scaling these days, so scaling the rgba doesn't really lose
anything (and the vdpau hq scaling option remains available).
GCC 9.2 warns about this. It was always a bit sketchy, so get rid of it.
VK_F10 generates WM_SYSKEYDOWN, so it only needs to be handled in the
WM_SYSKEYDOWN case.
in certain circumstances the video was not redrawn even when the size
or the backing scale factor changed. this could lead to a lower
resolution output than intended.
now the video is redrawn when screen properties or the window size
change.
Add an "auto-safe" mode, mostly triggered by Ubuntu's nonsense to force
hwdec=vaapi in the global config file in their mpv package. But to be
honest it's probably something more people want.
This is implemented as an explicit whitelist. On Windows, HEVC/Intel is
sometimes broken, but it's still whitelisted, and in theory we'd need a
detailed whitelist of device names etc. (like for example browsers tend
to do). On OSX, videotoolbox is a pretty bad choice, but unfortunately
the only one, so it's whitelisted too. There may be a larger number of
hwdec wrappers that work anyway, and I'm for example ignoring Android.
Apparently there are two different options for controlling which
screen an mpv window goes onto: --fs-screen and --screen. The former
explicitly only controls which screen a fullscreened window goes onto,
but the X11 code does not appear to actually honor this option at runtime,
so pressing f will always fullscreen to the screen mpv is currently
on. This makes the option of questionable usefulness for starters.
Making it worse, if you use --screen=1 --fs, mpv will actually fullscreen
on screen 0, because --fs-screen isn't set. Instead of doing that, fall
back to whatever --screen is set to.
(X11 does not support different per-screen DPI (or only via hacks), so
this is pretty simple. If other backends are going to implement this,
then they should send VO_EVENT_WIN_STATE if the DPI for the mpv window
changes by moving it to another screen or such.)
The size overflow check was inverted: instead of allowing reading only
the first dst_size bytes of the property, it allowed copying past the
property buffer (as returned by xlib). xlib doesn't return the size of
the buffer in bytes, so it has to be computed and checked manually.
Wouldn't it be great if C allowed me to write the overflow check in a
readable way, so it doesn't trick me into writing dumb security bugs?
Relying on X security is even dumber than creating a X security bug,
though, so this was not a real problem. But I found that one specific
call tried to read more than what the property provided, so reduce that.
Also, len*ib obviously can't overflow, so there's an additional layer of
dumb to this whole thing.
While we're at dumb things, why the hell does xlib use "long" for 32 bit
types. It's a god damn pain.
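For reference, a sketch of the manual byte-size computation, including the format==32-means-long quirk; the function name and the overflow guards are illustrative:

#include <stdint.h>
#include <string.h>

static size_t copy_property(void *dst, size_t dst_size, const void *prop,
                            int format, unsigned long nitems)
{
    if (format != 8 && format != 16 && format != 32)
        return 0;
    // Xlib stores items of format 32 as long, not as 4 bytes
    size_t item_size = format == 32 ? sizeof(long) : (size_t)format / 8;
    if (nitems > SIZE_MAX / item_size)
        return 0;
    size_t avail = nitems * item_size;   // bytes the property provides
    size_t copy = avail < dst_size ? avail : dst_size;
    memcpy(dst, prop, copy);
    return copy;
}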
libavcodec's nvdec wrapper can return invalid frames, that do not have
any data fields set. This is not allowed by the API, but why would they
follow their own API?
Add a workaround to specifically detect this situation. In practice,
this should fall back to software decoding if it happens too often in a
row. (But single errors are still tolerated, because I don't know why.)
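The detection itself is trivial; a hedged sketch (the surrounding error counting is mpv's usual hwdec fallback logic and not shown here):

#include <stdbool.h>
#include <libavutil/frame.h>

static bool frame_is_invalid(const AVFrame *frame)
{
    // a valid decoded frame must reference at least one data plane; nvdec
    // sometimes returns frames where nothing is set at all
    return !frame->buf[0] && !frame->data[0];
}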
Untested due to lack of hardware from the regrettable graphics company.
Better do this here than deal with the moronic project we unfortunately
depend on.
See: #7185
These all have been replaced recently.
There was a leftover in window.swift. It couldn't have done anything
useful in the current state of the code, so drop these lines.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, this gets rid of all the straight usage of the VO options
structure.
This is done in a similar style to the Wayland common file, where, when
reading the value, the "payload" from the cache is utilized.
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes the
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior, and it's as if setting window-scale
multiplies its argument by 2 suddenly. Hopefully no user will get angry.
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop or not provide this
API.
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
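A sketch of the X11 case on an EGL 1.4 base, checking the client extension string first; the EGL 1.5 core path is the same idea with eglGetPlatformDisplay() and EGL_PLATFORM_X11_KHR:

#include <string.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLDisplay get_x11_egl_display(void *x11_display)
{
    const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
    if (exts && strstr(exts, "EGL_EXT_platform_base") &&
        strstr(exts, "EGL_EXT_platform_x11"))
    {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (get_platform_display) {
            EGLDisplay dpy = get_platform_display(EGL_PLATFORM_X11_EXT,
                                                  x11_display, NULL);
            if (dpy != EGL_NO_DISPLAY)
                return dpy;
        }
    }
    // last resort: the old call, with all its platform-guessing problems
    return eglGetDisplay((EGLNativeDisplayType)x11_display);
}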
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:
struct native_display {
    int platform_type;
    void *native_display;
};
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never have all planes unset.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably a FFmpeg bug, and
without this assert() would just crash somewhere further down the video
chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
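The intended control flow, roughly (a simplified sketch, not mpv's actual lavc_process()):

#include <assert.h>
#include <stdbool.h>
#include <libavcodec/avcodec.h>

static int receive_frame(AVCodecContext *avctx, AVFrame *frame,
                         bool *got_frame)
{
    *got_frame = false;
    int ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        return 0;                // clean no-frame path
    if (ret < 0)
        return ret;              // actual decoding error
    assert(frame->buf[0]);       // if this fails, lavc violated its own API
    *got_frame = true;
    return 0;
}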
the old event tap has several problems, like no proper priority support
or having to set accessibility permissions for mpv or the terminal.
it is now replaced by the new MediaPlayer which has proper priority
support and isn't as greedy as before. this only includes Media Key
support and not any of the other features included in the MediaPlayer
framework, like proper Now Playing data (only set dummy data for now).
this is only available on macOS 10.12.2 and higher.
also removes some unnecessary redefines.
Fixes #6389
this removes the direct access of the mp_vo_opts struct via the vo struct
and replaces it with the m_config_cache usage. this updates the
fullscreen and window-minimized property via m_config_cache_write_opt
instead of the old mechanism via VOCTRL and event flagging. also use the
new VOCTRL_VO_OPTS_CHANGED event for fullscreen and border changes.
See commit 4e4252f916 and the following as an example of how this would
have to be done if done properly.
Since I'm unable to test on OSX, and nobody is interested in fixing this
code (including myself, actually), just remove the deprecated
definitions to make sure the code still builds. This will break runtime
switching of fullscreen, ontop, border. (The way the minimized state is
reported was also deprecated, but commit 40c2f2eeb0 already broke it
anyway.)
...probably.
The EGL backend had a strange problem: when recreating the window, EGL
surface creation sometimes mysteriously failed. For example, keeping the
"_" key down (cycles video by default) destroys and recreates the window
in rapid succession, which will often enough show the "Could not create
EGL surface!" message.
This was puzzling because due to mpv's architecture, the X11 Window and
even the X11 Display were fully destroyed, the thread on which they ran
was destroyed, and then everything was recreated. There shouldn't have
been any state that could make subsequent EGL initialization fail.
It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
pretty crazy API (full of thread local and global state with weird
lifetime requirements), and for example it seems EGLDisplay cannot be
explicitly released, but apparently implicitly dies when the native
display is closed (at least EGL 1.5 claims eglTerminate() does _not_
invalidate the display, only certain objects linked to it). It appears
that Mesa still referenced at least EGLSurface in some form, and either
some pointer or some X11 ID was dangling, and when it randomly matched
when eglCreateWindowSurface() was called, it failed.
Fix this by calling eglTerminate(), which supposedly destroys (or rather
unreferences) contexts and surfaces created from the display (but
absurdly not the display itself).
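The uninit path thus ends up looking roughly like this (struct and field names are illustrative):

#include <EGL/egl.h>

struct egl_state {
    EGLDisplay display;
    EGLSurface surface;
    EGLContext context;
};

static void egl_uninit(struct egl_state *p)
{
    eglMakeCurrent(p->display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                   EGL_NO_CONTEXT);
    if (p->surface != EGL_NO_SURFACE)
        eglDestroySurface(p->display, p->surface);
    if (p->context != EGL_NO_CONTEXT)
        eglDestroyContext(p->display, p->context);
    // the important part: unreference everything still attached to the
    // display, even though the EGLDisplay itself cannot be "destroyed"
    eglTerminate(p->display);
}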
Now why can't you just destroy the display? If it's implicitly
invalidated, why can't it just call eglTerminate() implicitly when this
happens? Did Mesa do something wrong when they somehow didn't
automatically remove the dangling object (so I could claim not to be
responsible for the bug)? Who the fuck knows, and I'm too tired to
figure this out (both because it's late, and because I'm tired of this
EGL crap API).
Still not sure if the code is correct now. I think EGL was designed to
maximize implementation and API-use complications. How else could you
possibly come up with something like the EGLDisplay life cycle? Or am I
just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.
Fixes: #7129
Get rid of the legacy VOCTRL (which will be removed later). I'm not sure
what exactly fullscreen was supposed to do (toggling between using the
entire display, and what --geometry forced?), but I don't care, just get
rid of the VOCTRL. PRs to fix regressions caused by this will be
accepted, but personally I don't care since this is excessively fringe
and obscure.
The wayland backend needs to keep track of whether or not a window is
hidden for presentation time. There is no presentation feedback when a
window is hidden which means we shouldn't be sending information to the
vo_sync_info structure (i.e. just leave it all at -1). This seemed to
work fine, but recent changes to presentation time in one notable
compositor (Sway; it was probably always broken in Weston actually)
changed the presentation time behavior.
For reasons that aren't clear, there is a greater than 16.666ms delay
between the first presentation time event and the second presentation
time event (compositor latency?) when you switch back to an mpv window
after it is hidden for long enough (a few seconds). When using
presentation time, this causes mpv to feed in some bad values in its
vsync timing mechanism thus causing the A/V desync spike as described in
issue #7223.
This solution is not really ideal. It would be better if the
presentation time events received by the compositors did not have the
aforementioned inconsistency. However since this occurs in both Sway and
Weston and clients can't really fight compositors in wayland-world,
here's a reasonable enough workaround. Basically, just add a slight
delay before we start feeding information into the vo_sync_info again.
We already do this when the window is hidden, so it's not a huge leap.
The delay chosen here is arbitrary, and it basically just recycles the
same parameters used to detect if a window is hidden. If
vo_wayland_wait_frame times out 60 times in a row (or whatever your
monitor's refresh rate is), then we assume the window is hidden. This is
a pretty safe assumption; something has to be terribly wrong for you to
miss 60 vblanks in a row while a window is on the screen.
In this case, we basically just do the reverse of that. If mpv receives
60 frame callbacks in a row (or whatever your monitor's refresh rate
is), then it assumes the window is not hidden. Previously, as soon as it
received 1 frame callback it was declared not hidden. Essentially,
there's just 1 second of delay after reshowing a window before the
presentation time statistics are used again. This should be more than
enough time to skip over the weird, inconsistent presentation time
behavior and avoid the A/V desync spike.
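A sketch of the heuristic; the struct and its fields are illustrative stand-ins for mpv's vo_wayland_state:

#include <stdbool.h>

struct wl_hidden_state {
    int refresh_rate;      // frames per second of the output
    int frame_callbacks;   // consecutive frame callbacks received
    int timeouts;          // consecutive vo_wayland_wait_frame() timeouts
    bool hidden;           // while true, don't fill in vo_sync_info
};

static void on_frame_callback(struct wl_hidden_state *wl)
{
    wl->timeouts = 0;
    if (++wl->frame_callbacks >= wl->refresh_rate)  // ~1 second of frames
        wl->hidden = false;
}

static void on_wait_frame_timeout(struct wl_hidden_state *wl)
{
    wl->frame_callbacks = 0;
    if (++wl->timeouts >= wl->refresh_rate)         // ~1 second of misses
        wl->hidden = true;
}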
Fixes #7223
drmModeAddFB is legacy, and might not pick the pixel format you
expect, depending on your driver. Use drmModeAddFB2 which specifies
this explicitly using a fourcc.
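A sketch of the replacement call for a single-plane XRGB8888 buffer (the fourcc is just an example):

#include <stdint.h>
#include <drm_fourcc.h>
#include <xf86drmMode.h>

static int add_fb(int fd, uint32_t w, uint32_t h, uint32_t handle,
                  uint32_t stride, uint32_t *fb_id)
{
    uint32_t handles[4] = {handle};
    uint32_t pitches[4] = {stride};
    uint32_t offsets[4] = {0};
    // unlike drmModeAddFB(), the format is stated explicitly
    return drmModeAddFB2(fd, w, h, DRM_FORMAT_XRGB8888,
                         handles, pitches, offsets, fb_id, 0);
}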
Seems like some drivers only increment msc every other page flip when
running in interlaced mode (I'm looking at you nouveau). I.e. it seems
to be incremented at the frame rate, rather than the field rate.
Obviously we can't work with this, so shame the driver and bail.
On intel this isn't an issue, as msc is incremented at field rate
there.
This means presentation feedback won't work correctly in interlaced
modes with those drivers, but who in their right mind uses an
interlaced mode these days, anyway?
In theory, using strstr() to search for extensions is a bad idea,
because some extension names might be prefixes for other names, so you
could get false positives. gl_check_extension() avoids this case.
It's not clear whether this is really needed; maybe not. Surely the EGL
committee is aware of these practices (many GL clients do this, which is
why it's widely considered bad practice), and would avoid defining new
extension names which contain existing names as sub-strings, but
whatever.
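For illustration, the whole-token check amounts to something like this (mpv's gl_check_extension() is the real thing; this is just the idea):

#include <stdbool.h>
#include <string.h>

static bool has_extension(const char *exts, const char *name)
{
    size_t len = strlen(name);
    const char *p = exts;
    while ((p = strstr(p, name))) {
        bool starts_token = p == exts || p[-1] == ' ';
        bool ends_token = p[len] == '\0' || p[len] == ' ';
        if (starts_token && ends_token)
            return true;
        p += 1;  // substring match only; keep looking
    }
    return false;
}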
Adding an ifdef mess to deal with insufficient system headers is kind of
ugly. It's easier to just provide the definitions manually. This sucks
a bit too, but it's the approach we've been using with OpenGL headers in
general, and I think that worked pretty well.
On systems whose EGL headers do not define these extensions, the build
still failed due to missing ..._PLANE3_... defines. Although we supplied
missing EGL_LINUX_DMA_BUF_EXT defines manually, the PLANE3 ones are
actually from a separate extension, which explains why they were not
added to the fallback defines in the first place.
Add them, now it builds without the eglext.h include.
See #6838.
When frame-stepping with display-sync mode enabled in high framerate
video, the frame was sometimes not redrawn correctly. Only the first OSD
interaction (or something similar) made it visible.
In this case, the core schedules many frames as dropped (because it's
ignorant of pausing/frame-stepping, as in theory the player is _not_
paused during frame-stepping, only at the end of it). There's a race
between the VO rendering the queued frame, and the core calling
vo_set_paused() after it has queued the frame. If the latter happens
first, the existing logic to redraw the previous dropped frame does
things correctly. If the former happens, the frame is not redrawn
automatically, but will be redrawn on the next user input (or if OSD is
enabled, and the pause state change updates it, which leads to an
immediate redraw).
Fix this by never actually dropping a frame in paused mode. The request
by the core to drop it is simply ignored.
Maybe this could be done slightly nicer by updating the pause state with
the VO atomically. Then we wouldn't have the frame drop counter going up
either (it's actually dropped, but then redrawn; but I doubt any user,
or me in a few weeks, would understand this). But I'm not really
interested in polishing this by increasing the complexity of the
frame-step code.
To aid in discoverability, and to address the most common case
directly, I'm adding an 'auto' mode for the window controls. In
this case, we will show the controls if there is no window border
and hide them if there are borders. This also respects the option
being toggled at runtime.
To ensure that it works in the wayland case, I've also made sure
that the wayland code explicitly forces the option to false if
decoration support is missing.
Based on feedback, I've split the config in two, with one option
for whether controls are active, and one for alignment. These are
new enough that we can get away with ignoring compatibility.
This small regression was introduced by #7216. Previously, the wayland
backend used a trick which kept track of the previous fullscreen state
and used that logic for showing the cursor. Since vo_opts now keeps
track of the current fullscreen state, most of this stopped being
necessary.
However, there was one edge case where the cursor didn't
behave the same: passing a fullscreen flag for the initial window. The
cursor would initially be visible here, which is not desirable. This can
be remedied pretty easily by just setting the cursor visibility to false
if the pointer entry event occurs on fullscreen. The only thing we need
to do is to make sure that the autohide delay isn't completely disabled
(i.e. the cursor is always visible). Hence the need for the previous
commit.
The remaining legacy VOCTRLs are for the fullscreen and border
properties. For fullscreen, this is largely just replacing the private
state field with the vo option but there are small semantic
differences that we need to be careful of.
For the border setting, it's trivial as we don't have external
mechanisms for changing the state, but I also can't test it as
I'm not using a compositor that supports it.
I wanted to get this done quickly as I introduced the new VOCTRL
behaviour for minimize and maximize and it was immediately made
legacy, so best to purge it before anyone gets confused.
I did not sort out fullscreen as that's more involved and not something
I've educated myself about yet. But I did replace the VOCTRL_FULLSCREEN
usage with the new option change mechanism as that seemed simple
enough.
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
glx.h recursively includes gl.h, and there is no way to prevent this.
Old Mesa defines some GL symbols, but not all which mpv needs. In
particular, one user who was too lazy to update his ancient Ubuntu and
preferred to bother us with obscure bug reports, had Mesa headers which
did not define GL 3.2, so GLsync was not defined.
All in all I still think the idea of providing the GL API definitions
ourselves was a good idea; just GLX should have been isolated better.
But isolating GLX now is too much effort.
Not sure why I'm bothering with this at all.
Fixes: #7201 (unconfirmed)
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.
I think this should be useful to perform some automated tests, maybe.
Fixes: #7194
This function always expects the GL struct pointer to be a talloc
allocation. So far so bad. But the terrible thing is that _lots_ of code
in mpv didn't quite get this (including the code which introduced this
usage in the first place). For example, in context_glx.c you see this:
struct priv {
    GL gl;
    ...
GL is not a talloc allocation, but since it's at the start of a talloc
allocation, it works anyway. So far so bad. But the really terrible
thing is that mpgl_load_functions2() calls talloc_free_children() on the
GL pointer, which means that all of priv's talloc children would be freed
too. This would be unintentional and could create dangling pointers. And
this happens at about a dozen callers. I'm amazed it didn't break anywhere
yet.
Removing this anti-pattern by making GL "implicitly" a talloc
allocation would be too much effort at this point. So just manually free
the only allocation that the function attached to GL.
Should restore full functionality.
The initial state setting is a bit shoddy (instead of setting the
properties before map, we use the WM commands to change it after, so you
will see the normal window state for a moment; the WM commands do not
work on unmapped windows, so fixing this would require more code).
- remove VOCTRL_FULLSCREEN and VOCTRL_GET_FULLSCREEN
- have your own m_config_cache for the fullscreen option
(vo->opts_cache cannot be used because you lose per-option change
notifications, and it'd be a mess anyway)
- use VOCTRL_VO_OPTS_CHANGED to update it
(it's used for convenience)
- when updating it, check for the fullscreen option
(wasn't sure how to do it best; currently, it compares the raw
option pointers, but this could be changed)
- do not send VO_EVENT_FULLSCREEN_STATE on FS change
- instead write the option on FS change
(assign in opt. struct + m_config_cache_write_opt; see the sketch below)
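Roughly, the last three points come down to something like this (the m_config_cache calls are mpv's helpers as far as I can tell; the struct and the handler names are illustrative):

// on VOCTRL_VO_OPTS_CHANGED:
static void handle_vo_opts_changed(struct priv *p)
{
    void *opt;
    while (m_config_cache_get_next_changed(p->opts_cache, &opt)) {
        if (opt == &p->opts->fullscreen)
            apply_fullscreen(p, p->opts->fullscreen);
    }
}

// when the WM/compositor changes the FS state behind our back:
static void report_fs_changed(struct priv *p, bool fs)
{
    // do not send VO_EVENT_FULLSCREEN_STATE; write the option instead
    p->opts->fullscreen = fs;
    m_config_cache_write_opt(p->opts_cache, &p->opts->fullscreen);
}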
We primarily care about pseudo-decorations for wayland, where
the compositor may not support server-side decorations. So let's
implement the minimize and maximize commands and return the
maximized window state.
If we want to implement window pseudo-decorations via OSC, we need a
way to tell the vo to minimize and maximize the window. Today, we have
minimized as a read-only property, and no property for maximized.
Let's make minimized settable and add a maximized property to go with
it. In turn, that requires us to add VOCTRLs for minimizing or
maximizing a window, and an additional WIN_STATE to indicate a
maximized window.
At least with gnome-shell (I know, I know), the compositor does
not provide the old window size when leaving the maximized state.
Instead, we get a toplevel_config event with a 0x0 size and no
additional states.
Today, we already save the window geometry to restore it when leaving
the fullscreen state, so we just need a small change for it to
kick in for leaving the maximized state. If I read this correctly,
we'll still respect the size passed by a compositor that actually
provides the old size.
Rather than hard-coding the edge grab zone width, we can make it
user configurable. It seems worthwhile to have separate configs
for pointer and touch usage as the defaults should be different,
and a user might have both input methods in use.
Today, we support resizing wayland windows when we detect a touch
event in a defined grab zone. As part of implementing
pseudo-decorations, we should have equivalent functionality for
mouse input. And if we detect support for actual decorations we
will not activate the grab zone as the decorations will provide this.
Use x11->opts instead of vo->opts. This doesn't matter currently, and
x11->opts is actually set to vo->opts. However, there's a chance that
either option access changes, or that the way backends integrate with
struct vo changes. This is just a preemptive change to make this less of
a mess, and it's generally a good idea to reduce accesses to struct vo
anyway.
This is preparation to get rid of the option-to-property bridge
(mp_on_set_option). This is a pretty insane thing that redirects
accesses to options to properties. It was needed in the ever ongoing
transition from something to... something else.
A good example for the need of this bridge is applying profiles at
runtime. This obviously goes through the config parser, but should also
make all changes effective, for which traditionally the property layer
is used.
There isn't much left that needs this bridge. This commit changes a
bunch of options (which also have a property implementation) to use
option change notifications instead. Many of the properties are still
left, but perform unrelated functions like OSD formatting.
This should be mostly compatible. There may be some subtle behavior
changes. For example, "hwdec" and "record-file" do not check for changes
anymore before applying them, so writing the current value to them
suddenly does something, while it was ignored before.
DVB changes untested, but should work.
Handling the window with this function makes no sense, since windows
and kernels are not the same thing and don't share the same option list.
The only reason it's done is to make sure the char* points at the static
string rather than the dynamically allocated one, which we can do
manually in this function. Rewrite a bit for clarity/quality.
We should only be printing errors that occur when not probing, to
avoid creating the impression that something is wrong - and errors
during probing aren't a problem.
Surprisingly, we've managed to get this far without context_glx ever
adding the X11 display as a native resource. But with the recent change
to attempt to enable vdpau when using EGL, the hwdec now requires the
display to be added. So let's add it.
eglGetDisplay() is a broken API, since it takes a windowing specific
argument, yet is supposed to work for multiple APIs at the same time. On
Linux, it can take both a X11 "Display" and a "wl_display". Obviously
there is no way to specify what kind of display the argument is (it's
just a void*).
Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.
The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).
Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display to query the EGL version and extensions, and
you have a chicken-and-egg problem. It's very stupid. Maybe you could
jump through some more hoops to get something compatible, but fuck that.
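The EGL 1.5 call this boils down to, for the X11 case (Wayland would pass EGL_PLATFORM_WAYLAND_KHR and the wl_display):

#include <EGL/egl.h>
#include <EGL/eglext.h>

static EGLDisplay get_egl15_display(void *x11_display)
{
    // core API since EGL 1.5; no display needed to query anything first
    return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
}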
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).
I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().
Speculative fix for #7154.
(Rant about EGL follows. Actually I deleted it.)
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".
This can be reproduced by playing a HDTV video with --target-peak=99.
It's not clear how to fix this. Should pass_tone_map() be only invoked
if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.
Even if not correct, at least it gets rid of the blue shit.
Fixes: #7149
This makes the condition for including each init match the condition for
compiling the file that defines it.
It's possible to e.g. HAVE_GL and HAVE_VAAPI without HAVE_VAAPI_EGL,
which resulted in "undefined reference to `vaapi_gl_init'" with the old
code.
Probably. It's not like these pixel formats are formally specified -
FFmpeg added them because _some_ file format or decoder supports it, and
while that format/codec may define it precisely, the pixel format is
sort of disconnected and just a FFmpeg thing.
In any case, the yuva sample I had at hand uses the full range the
component data type can provide. The old code used the same "shifted"
range as for Y/U/V components, which must have been wrong.
This will not work correctly for packed YUVA formats, but fortunately
they matter even less.
This is fragile enough that it warrants getting "monitored".
This takes the commented test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.
Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.
This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.
The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.
I didn't know this. Nvidia didn't list this as extension in the EGL
context when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
For some reason, the first frame displayed on X11 with amdgpu and OpenGL
will be garbled. This is especially visible if the player starts,
displays a frame, but then still takes a while to properly start
playback.
With --interpolation, the behavior somehow changes (usually gets worse).
I'm not sure what exactly is going on, and the code in video.c is way
too abstruse. Maybe there is some slight possibility that a frame with
uncleared contents gets displayed, which somehow also corrupts another
frame that is displayed immediately after that.
If clear is unconditionally run, this somehow doesn't happen, and you
see a video frame. By any logic this shouldn't happen: a video frame
should always overwrite the background. So I can't exclude that this
isn't some sort of driver bug, or at least very obscure interaction.
Clearing should be practically free anyway, so always do it.
Fixes: #7105
They were used at some point, but then fell into disuse. In general,
these old flags are all a bit fuzzy, so it's a good idea to remove them
as much as possible.
The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was
deprecated at some point, and the flag was removed from "pseudo
paletted" formats. I think mpv at one point changed its own flag from
AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was
deprecated, and it became unnecessary to allocate a palette for
non-paletted formats. (The one who deprecated it in FFmpeg was me, if you
wonder.)
MP_IMGFLAG_PLANAR was used in command.c, use a relatively similar flag
as replacement.
This is slightly helpful for testing, and otherwise useless and without
consequence.
I'm not using the correct output format, and use IMGFMT_RGB0 as a
placeholder. This doesn't matter currently, as both sws and zimg support
this as output (and support any input for it). I'm doing this because
it's surprisingly tricky to get the correct output format at this point,
without digging deeper into x11 shit or refactoring parts of the VO. I
don't care enough about this.
The user can raise the number of tolerated hardware decoding errors. On
the other hand, we have a static limit on packets that are "saved" for
fallback handling (and that's a good idea to avoid unbounded memory
usage). In this case, it could happen that the start of a file was fine
after a fallback, but after that buffered amount of data, it would
suddenly skip.
It's more useful to skip buffering entirely if the number of tolerated
decoding errors exceeds the fixed buffer.
(And also, I'm sure nobody gives a shit about this feature.)
prepare_decoding() returned a bool that was supposed to tell whether
decoding could work, or if something was fucked. After recent changes to
the decoder loop, this did not work anymore, and caused an endless loop.
Redo it, so it makes more sense. avctx being NULL (software fallback
initialization failed) now signals EOF. hwdec_failed needs to be handled
on send_packet() only, where it probably never happens anyway.
(Who was the idiot who made libavcodec have two entrypoints for
decoding? Oh right, it was me. PEBKAC.)
Shovel the code around to make the data flow slightly simpler (?). At
least there's only one send_packet function now. The old code had the
problem that send_packet() could be called even if there were queued
packets; due to sending the queued packets in the receive_frame
function, this should not happen anymore (the code checking for this
case in send_packet should normally never be called).
Untested with actual full stream hw decoders (none available here); I
created a test case by making hwaccel decoding fail.
Forgotten in commit 5d5fdb7. This failed to return the error code
properly. In particular, if the decoder rejected the packet, this was
not properly detected. Normally, this mattered only in specific cases.
Fixes: #7115
Instead, use a YUV planar format. It doesn't matter, since we use the
format only internally and for "management" purposes. We're only
interested in the physical layout, not what colorspace FFmpeg "forcibly"
associates with it.
Also get rid of using the old and slightly sketchy mp_imgfmt_find()
function. Yep, the IMGFMT_RGB30 now "constructs" the planar format,
instead of using a pixfmt constant. Slightly inconvenient, tricky, and
fragile, but I like it, so bugger off.
This whole thing gets rid of some of the strange plane permutations that
were needed earlier.
According to the definition of the GL format, and the definition in
img_format.h, and the actual output by vo_gpu, the order of components
was probably wrong. It's exceedingly likely that the vo_drm format (for
which this was originally written) has the same layout, so this was
probably a bug from when the zimg wrapper code was refactored.
This was too hardcoded to libswscale. In particular, IMGFMT_RGB30 output
is only possible with the zimg wrapper, so the context needs to be taken
into account (since this depends on the --sws-allow-zimg option
dynamically). This is still slightly risky, because zimg currently will
still fall back to swscale in some cases, such as when it refuses to
initialize the particular color conversion that is requested.
f_autoconvert.c could actually handle this better, but I'm too fucking
lazy right now, and nobody cares anyway, so go away, OK?
This integrates it as "special" format, with no alpha component, as the
equivalent IMGFMT_RGB30 isn't meant to contain any.
Nothing can produce this format in the video chain yet, so the next
commits are needed to make this actually work.
This is mostly just because of the odd RGB default gamma issue, which
shouldn't have any real impact. This also sets allow_approximate_gamma,
which I hope is fine for normal use cases.
Normally, the Y plane can just be passed directly to zimg, and only the
chroma plane needs to be (de)interleaved. It still needs a copy if the Y
pointer is not aligned, though. (Whether this is actually a problem
depends on the CPU and probably zimg's compiler.)
This requires deciding per plane whether the plane should go through the
repack buffer or not. This logic is active in non-nv12 cases, because
not doing so would require extra code (maybe 2 lines or so).
repack_align is now always called, even if it's planar->planar with all
input aligned, but it won't actually do anything in that case. The
assumption is that zimg won't change behavior if you pass a callback
that does nothing versus passing NULL as callback.
This is for formats like nv12 (including p010, nv24, etc.). Might be
important for hardware decoding. Previously, this would have forced a
libswscale fallback.
The genericism makes this only slightly more complicated. The main
complication is due to the fact that mixing planar and packed stuff is
insane (thanks, Nvidia).
P010 output will actually happily set any of the 6 bit "padding" LSB,
that are normally supposed to be 0 (for unpadded data there is P016).
Scaling happens with 16 bit precision. Not going to bother adding an
extra packer which zeros them out, or with shifting them in
packing/unpacking. Lets just hope nobody notices.
This is similar to mp_imgfmt_find(), but probably a bit saner. Used by
the next commit. The previous commit is required to map this
unambiguously between all formats.
As the code comment says, this is needed to disambiguate FFmpeg formats.
This struct only describes the "physical" layout of a format, while
FFmpeg also attaches part of the colorspace information to the format.
We've set all planes to the same zmask. But for subsampled chroma, the
zmask obviously needs to be smaller. This could lead to out of bounds
memory read and write accesses.
Move the align repacker to a single function, since this is now more
convenient.
This probably covers all packed formats which have byte-aligned
components, no alpha, and no subsampling. Everything else needs more
imgfmt metadata, or something even more complicated. Alpha is primarily
not supported, because zimg requires a second scaler instance for it,
and handling packing/unpacking with it is an unacceptable mess.
Raise swscale and zimg default parameters. This restores screenshot
quality settings (maybe) unset in the commit before. Also expose some
more libswscale and zimg options.
Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.
The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.
Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
Lots of dumb crap to do... something. Instead of adding yet another dumb
helper, just use the "main" sws_utils API in both callers. (Which,
unfortunately, has been duplicated for glorious webp screenshots,
despite the fact that webp is crap.)
Good part: can enable zimg for screenshots (as far as needed).
Bad part: uses "default" swscale parameters instead of HQ now.
Purpose uncertain. I guess it's slightly better, maybe.
The move of the sws/zimg options from VO opts (vo_opt_list) to the
top-level option list is tricky. VO opts have some helper code in vo.c,
that sends VOCTRL_SET_PANSCAN to the VO on every VO opts change. That's
because updating certain VO options used to be this way (and not just
the panscan option). This isn't needed anymore for sws/zimg options, so
explicitly move them away.
By default utilizes the color space of the desktop on which the
swap chain is located. If a specific value is defined, it will be
utilized instead.
Enables configuration of the PQ color space (BT.2020 primaries,
PQ transfer function) for HDR.
Additionally, signals the swap chain color space to the renderer,
so that the render looks correct without having to specify
target-trc or target-prim manually.
Due to all of the APIs being Win10+ only, will only work starting
with Windows 10.
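A sketch of the color space selection; how the IDXGISwapChain3 pointer is obtained and the use_pq flag are outside the scope of this sketch:

#define COBJMACROS
#include <stdbool.h>
#include <windows.h>
#include <dxgi1_4.h>

static void select_colorspace(IDXGISwapChain3 *swapchain, bool use_pq)
{
    DXGI_COLOR_SPACE_TYPE csp = use_pq
        ? DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020  // PQ, BT.2020 primaries
        : DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709;    // plain SDR
    UINT support = 0;
    if (SUCCEEDED(IDXGISwapChain3_CheckColorSpaceSupport(swapchain, csp,
                                                         &support)) &&
        (support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT))
        IDXGISwapChain3_SetColorSpace1(swapchain, csp);
}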
This lets us set primaries, transfer function and the target peak
based on what the presenting layer would want us to have.
Now that this mechanism is available, warn if the user has
overridden values such as primaries or transfer function.
This seems to have been missed when the storable flag was added, since
all the other flags were logged here. It can be useful to know if an RA
format is storable, so log it as well.
Using e.g. --vf=format=0bgr showed obviously wrong colors with --vo=gpu.
The reason is that leading padding wasn't handled correctly.
Try to hack fix it. While the code in copy_image() is somewhat
reasonable, I can't tell what the fuck is going on with that HOOKED
shit. For some reason this HOOKED shit doesn't use copy_image() (???),
or uses it incorrectly. It affects debanding. --deband=no works
correctly. If it's enabled, the crap in hook_prelude() is needed.
I bet there are many more bugs with this. For example, the deband shader
will try to deband the alpha channel if the format abgr is used (because
the correct component order is only established later). This can be
tested by inserting a "color.x = 0;" at the end of the deband shader,
and using --vf=format=rgba vs. abgr.
I cannot comprehend why it doesn't just store explicitly which
components a texture contains, and why it doesn't just read the
components always in an uniform way.
There's a big chance this fix works only by coincidence. This shouldn't
have been so hard either. Time for a complete rewrite?
With hwdec copy modes, mp_image_copy_attributes() is used to transfer
metadata other than the image data when copying the image from the
hardware surface. It didn't copy the closed caption data.
Fix this. This makes closed captions in copy mode work.
Fixes: #6376
sdl_gamepad.c and vo_sdl.c both have their own event loops and run in
separate threads. They don't know of each other (and shouldn't). Since
SDL only has one global event loop (why didn't they fix this in SDL2?),
these obviously clash. The actual behavior is relatively subtle, with
events being randomly dispatched to either of the threads.
This is very regrettable. Very.
Work this around. "Fortunately" SDL exposes its global state to some
degree. SDL_WasInit() returns whether a "subsystem" was initialized, and
you could say the one who initialized it owns it. Both SDL_INIT_VIDEO
and SDL_INIT_GAMECONTROLLER implicitly enable SDL_INIT_EVENTS, and the
event loop is indeed the resource that cannot be shared.
Unfortunately, this is still racy, since SDL_InitSubSystem is a second
call, and succeeds if the subsystem is already initialized (increases a
refcount I think). But good enough. Blame SDL for everything.
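The workaround boils down to something like this on the gamepad side (error logging omitted):

#include <SDL.h>

static int gamepad_init(void)
{
    if (SDL_WasInit(SDL_INIT_EVENTS)) {
        // someone else in this process (e.g. vo_sdl) already owns SDL's
        // single global event loop; bail out instead of fighting over it
        return -1;
    }
    if (SDL_InitSubSystem(SDL_INIT_GAMECONTROLLER | SDL_INIT_EVENTS) < 0)
        return -1;
    return 0;
}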
(I think I made this commit message too long. Nobody cares even.)
Fixes: #7085
It seems some users try to use it (!). This VO was always an experiment,
and intended for low power devices. Whether this experiment succeeded or
not, it's a rather obscure VO. Recently I've seen a regrettable user,
who seemed to use this only because mpv was built without x11 support
(!). Add this warning, like other fallback VOs have it. (The message was
copied from vo_x11.)
Commit 5d5fdb77e9 changed details of the decoding control flow, and
called it a "high-risk" change. It turns out that this broke with
hwdec copy mode, where there is some sort of delay queue (supposedly
increases efficiency, but more likely worthless cargo-cult).
It simply used the wrong (basically inverted) condition for the draining
case.
This was the only case that did not work properly. Other tests,
including video/audio decoding errors, software decoding fallbacks,
etc., seemed to work well. Might still not be exhaustive, as there are
so many corner cases.
Also change two error code returns. These don't/shouldn't really matter,
though the second error code led it to return both a frame and
AVERROR_EOF, which is unexpected, and makes lavc_process() leak a frame.
But also see next commit.
Fixes: 5d5fdb77e9