This originally existed as a hack for weston. In certain scenarios, a
frame taking too long to render would cause vo_wayland_wait_frame to
timeout which would result in a ton of dropped frames. The naive
solution was just to add a slight delay to the time value. If a
frame took too long, it would likely still fall under the timeout value and
all was well. This was exposed to the user since the default delay
(1000) was completely arbitrary.
However, with presentation time, this doesn't appear to be necessary.
Fresh frames that take longer than the display's refresh rate (16.666 ms
in most cases) behave well in Weston. The other two main compositors
without presentation time (GNOME and Plasma) also do not experience any
ill effects. It's better not to overcomplicate things, so
this "feature" can be removed now.
Add support for setting window-minimized and window-maximized in
Windows. The minimized and maximized state can be set independently.
When the window is minimized, the value of window-maximized will
determine whether the window is restored to the maximized state or not.
Changing state is done with ShowWindow(), which has commands that change
the window state and activate it (e.g. SW_RESTORE) and commands that
change the window state without activating it (e.g. SW_SHOWNOACTIVATE).
It would be nice if we could use commands that don't activate the
window, so scripts could change the window state in the background
without bringing it to the foreground, but there are some problems with
that. There is no command to maximize a window without activating it, so
SW_MAXIMIZE is used instead. Also, restoring a window from minimize
without activating it seems buggy. On my Windows 10 1909 PC, it always
moves the window to the back of the z-order. SW_RESTORE is used instead
of SW_SHOWNOACTIVATE because of this.
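To make the activation trade-offs concrete, here is a minimal sketch
(apply_window_state is a hypothetical helper, not the actual w32_common.c
code):

    #include <windows.h>
    #include <stdbool.h>

    static void apply_window_state(HWND hwnd, bool minimized, bool maximized)
    {
        int cmd;
        if (minimized) {
            // Minimizing without activation works as expected.
            cmd = SW_SHOWMINNOACTIVE;
        } else if (maximized) {
            // There is no non-activating maximize command, so SW_MAXIMIZE.
            cmd = SW_MAXIMIZE;
        } else {
            // SW_SHOWNOACTIVATE misbehaves when un-minimizing; use SW_RESTORE.
            cmd = SW_RESTORE;
        }
        ShowWindow(hwnd, cmd);
    }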
This also changes the way the window is initially shown. Previously, the
window was made visible as a consequence of the SWP_SHOWWINDOW flag in
the first call to SetWindowPos. In order to set the initial minimized or
maximized state of the window, the window is shown with the ShowWindow
function instead, where the ShowWindow command is determined by whether
the window should be initially maximized or minimized.
Even when showing the window normally, we should still call ShowWindow
with the SW_SHOW command instead of using SetWindowPos, since the first
call a process makes to ShowWindow(SW_SHOW) has special behaviour
where it uses the show command in the process' STARTUPINFO instead of
the command passed to the function, which should fix #5724.
Note: While changes to window-minimized while in fullscreen mode should
work as expected, changing window-maximized while in fullscreen does not
work and won't result in the window changing state, even after leaving
fullscreen. For this to work correctly, the fullscreen logic needs to be
changed to apply the new maximized state on leaving fullscreen.
Fixes: #5724
Fixes: #7351
it was possible for mouse events to be triggered when the core was
already being shut down. to prevent this, properly close and remove the
window, and additionally remove the reference to the MPVHelper object.
this deprecates the old cocoa-backend-only option and moves it to the
general macOS options. add support for the new option in the cocoa-cb
layer creation and use the new option in the old cocoa backend.
Fixes #7272
due to the bundle config the icon is set automatically via the bundle
system mechanisms. this also makes it possible to set the icon to a
custom one with the standard macOS copy paste method via the file info
dialogue.
Fixes #6874
As documented on struct mp_hwdec_ctx, hw_imgfmt specifies the hardware
surface wrapper format for which supported_formats is valid. If this was
not set, f_hwtransfer ignored supported_formats, and assumed all formats
were supported.
Allow the --window-maximized and --window-minimized flags to actually
work when the player is started on wayland. If the compositor doesn't
support maximization or minimization, then these options just do
nothing.
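As an illustration, a hedged sketch of applying these options through
xdg-shell (struct and field names follow mpv's wayland_common.c, but are
assumptions here):

    // Requires the generated xdg-shell-client-protocol.h.
    static void apply_initial_window_state(struct vo_wayland_state *wl)
    {
        if (wl->vo_opts->window_maximized)
            xdg_toplevel_set_maximized(wl->xdg_toplevel);
        if (wl->vo_opts->window_minimized)
            xdg_toplevel_set_minimized(wl->xdg_toplevel);
        // Compositors without support for these requests ignore them.
    }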
Fixes #7345
There was a multitude of issues with cursor handling in wayland;
behavior seemed to vary for strange reasons across compositors, and bad
things were being done in wayland_common. The problem is complicated and
involved fullscreen states being set incorrectly in certain instances,
and so on. The best solution is to just remove most of the extra cruft.
In handle_toplevel_config, instead of automatically assuming
is_fullscreen and is_maximized are false, we should use whatever value is
currently set in vo_opts, which matters when we initially launch the
window.
hwdec_vaapi tries to probe all available surface formats in advance. For
that, we iterate over _all_ profiles in an attempt to collect possible
surface formats. This means we try profiles we normally wouldn't use for
decoding or filtering, and which could be "unrelated" services.
It seems some drivers report at least one profile, for which
vaQueryConfigEntrypoints() fails (because the profile is not supported;
not sure why it lists it, then). So turn the error message into a
verbose message to avoid confusing output.
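Roughly, the probing loop in question looks like this (a simplified
sketch, not the exact hwdec_vaapi code):

    #include <stdlib.h>
    #include <va/va.h>

    static void probe_profiles(VADisplay dpy)
    {
        int num_profiles = vaMaxNumProfiles(dpy);
        VAProfile *profiles = calloc(num_profiles, sizeof(*profiles));
        if (!profiles ||
            vaQueryConfigProfiles(dpy, profiles, &num_profiles) != VA_STATUS_SUCCESS)
            goto done;
        for (int n = 0; n < num_profiles; n++) {
            int num_ep = vaMaxNumEntrypoints(dpy);
            VAEntrypoint *ep = calloc(num_ep, sizeof(*ep));
            if (ep && vaQueryConfigEntrypoints(dpy, profiles[n], ep, &num_ep)
                          != VA_STATUS_SUCCESS) {
                // Expected for profiles the driver lists but can't actually
                // use: log at verbose level only and move on.
            }
            free(ep);
        }
    done:
        free(profiles);
    }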
Fixes: #7347
This shouldn't really matter, but it's probably best to avoid.
vo_wayland_control would execute set_cursor_visibility while wl->pointer
existed, but it didn't check whether wl->pointer_id existed. So
wl_pointer_set_cursor could be called with a null surface and an id of 0.
Instead, just wait until we have an actual, non-zero pointer id so that
the cursor is set with the correct, actual id and not a fictitious 0 id.
This ensures that the pointer isn't set until it enters the wl_surface
which is what we want.
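A minimal sketch of the guard (field names follow mpv's vo_wayland_state,
but are assumptions):

    static void set_cursor_visibility(struct vo_wayland_state *wl, bool on)
    {
        // Do nothing until the pointer has entered the surface and we have
        // a real (non-zero) enter serial to pass along.
        if (!wl->pointer || !wl->pointer_id)
            return;
        if (on) {
            wl_pointer_set_cursor(wl->pointer, wl->pointer_id,
                                  wl->cursor_surface,
                                  wl->cursor_image->hotspot_x,
                                  wl->cursor_image->hotspot_y);
        } else {
            wl_pointer_set_cursor(wl->pointer, wl->pointer_id, NULL, 0, 0);
        }
    }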
at the time of the initial dpi query the window is not instantiated yet.
we use a proper fallback in that case, e.g. the configured target screen
or the main screen if none is set.
also fix a weird oversight and add a small optimisation.
Theoretically possible (and quite unlikely due to the small texture
size). The code was originally written with the assumption that texture
allocations can't fail, and it was never updated out of laziness.
Untested.
It turns out that gnome wayland still has very serious issues that make
it unusable for playback with mpv. Other compositors mostly behave fine
(Plasma is just missing features but it's not seriously broken), so GNOME
gets the special honor of having a warning printed out. The only
solution for GNOME users at the time of writing is to either use the
Xorg session or use another wayland compositor.
In the distant past, the cuviddec-backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.
Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.
And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.
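A rough, self-contained sketch of that idea (not the exact mpv code; the
helper name is made up): create the CUDA hwdevice for the configured
device and hand it to libavcodec.

    #include <libavutil/hwcontext.h>

    // decode_device is the value of --cuda-decode-device as a string,
    // e.g. "1"; NULL selects the default GPU.
    static AVBufferRef *create_cuda_hwdevice(const char *decode_device)
    {
        AVBufferRef *dev = NULL;
        if (av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_CUDA,
                                   decode_device, NULL, 0) < 0)
            return NULL;
        return dev;
    }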
Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.
Fixes #7295.
In vaapi 1.1.0 (which confusingly is libva release 2.1.0), they
introduced a new surface export API that is more efficient, and
we've been supporting that and the old API ever since (Feb 2018).
If we drop support for the old API, we can do some fairly nice cleanup
of the code.
Note that the pkgconfig entries are explicitly versioned by the API
version and not the library version. I confirmed this against the
upstream pkgconfig files.
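For reference, the newer export path that remains after the cleanup, as a
hedged sketch (the exact flags mpv passes may differ):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    static int export_surface(VADisplay dpy, VASurfaceID id,
                              VADRMPRIMESurfaceDescriptor *desc)
    {
        // vaExportSurfaceHandle() (VA-API >= 1.1.0) replaces the old
        // vaDeriveImage()/vaAcquireBufferHandle() sequence.
        VAStatus st = vaExportSurfaceHandle(dpy, id,
                VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                VA_EXPORT_SURFACE_READ_ONLY | VA_EXPORT_SURFACE_SEPARATE_LAYERS,
                desc);
        return st == VA_STATUS_SUCCESS ? 0 : -1;
    }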
As we are less and less interested in vdpau, with nvdec and vaapi
being better choices in general on nvidia and AMD respectively, we
might consider removing direct_mode, where we bypass the vdpau
mixer and work directly with yuv textures. Normally, working with
yuv textures would be great, but vdpau built in an assumption that
all frames are delivered as separate fields, causing us to have
to re-interleave them.
nvidia then introduced a new OpenGL extension that can return the
yuv frames as frames, but we can't just unconditionally switch to
that as we'd want to keep supporting older hardware where the drivers
are no longer getting new features. The end result is that we
wouldn't be able to get rid of the old code paths.
Removing direct_mode means we always use the mixer, and work with
rgba frame textures. There are some theoretical limitations to
this, but in practice they probably don't matter much - unsupported
colourspaces don't matter because without 10bit decoding support,
we can't use them anyway, and apparently we're not doing separate
chroma scaling these days, so scaling the rgba doesn't really lose
anything (and the vdpau hq scaling option remains available).
GCC 9.2 warns about this. It was always a bit sketchy, so get rid of it.
VK_F10 generates WM_SYSKEYDOWN, so it only needs to be handled in the
WM_SYSKEYDOWN case.
in certain circumstances the video was not redrawn even when the size
or the backing scale factor changed. this could lead to a lower
resolution output than intended.
now it redraws the video when screen properties or the window size
changes.
Add an "auto-safe" mode, mostly triggered by Ubuntu's nonsense to force
hwdec=vaapi in the global config file in their mpv package. But to be
honest it's probably something more people want.
This is implemented as an explicit whitelist. On Windows, HEVC/Intel is
sometimes broken, but it's still whitelisted, and in theory we'd need a
detailed whitelist of device names etc. (like for example browsers tend
to do). On OSX, videotoolbox is a pretty bad choice, but unfortunately
the only one, so it's whitelisted too. There may be a larger number of
hwdec wrappers that work anyway, and I'm for example ignoring Android.
Apparently there are two different options for controlling which
screen an mpv window goes onto: --fs-screen and --screen. The former
explicitly only controls which screen a fullscreened window goes onto,
but it does not appear to actually be honored at runtime on X11, so
pressing f will always fullscreen to the screen mpv is currently on.
This means the option is of questionable usefulness for starters.
Making it worse, if you use --screen=1 --fs, mpv will actually fullscreen
on screen 0, because --fs-screen isn't set. Instead of doing that, fall
back to whatever --screen is set to.
(X11 does not support different per-screen DPI (or only via hacks), so
this is pretty simple. If other backends are going to implement this,
then they should send VO_EVENT_WIN_STATE if the DPI for the mpv window
changes by moving it to another screen or such.)
The size overflow check was inverted: instead of allowing reading only
the first dst_size bytes of the property, it allowed copying past the
property buffer (as returned by xlib). xlib doesn't return the size of
the buffer in bytes, so it has to be computed and checked manually.
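An illustrative sketch of the intended clamp (names are assumptions, not
the x11_common.c code):

    #include <string.h>

    static size_t copy_property(void *dst, size_t dst_size, const void *prop,
                                unsigned long nitems, int format)
    {
        // xlib reports an item count and a format (8/16/32 bits), not bytes,
        // and format-32 items are stored as C "long" in memory.
        size_t item = format == 32 ? sizeof(long) : (size_t)format / 8;
        size_t avail = nitems * item;
        size_t n = avail < dst_size ? avail : dst_size;  // never read past prop
        memcpy(dst, prop, n);
        return n;
    }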
Wouldn't it be great if C allowed me to write the overflow check in a
readable way, so it doesn't trick me into writing dumb security bugs?
Relying on X security is even dumber than creating an X security bug,
though, so this was not a real problem. But I found that one specific
call tried to read more than what the property provided, so reduce that.
Also, len*ib obviously can't overflow, so there's an additional layer of
dumb to this whole thing.
While we're at dumb things, why the hell does xlib use "long" for 32 bit
types. It's a god damn pain.
libavcodec's nvdec wrapper can return invalid frames, that do not have
any data fields set. This is not allowed by the API, but why would they
follow their own API?
Add a workaround to specifically detect this situation. In practice,
this should fall back to software decoding if it happens too often in a
row. (But single errors are still tolerated, because I don't know why.)
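A hedged sketch of the detection (names are assumptions, not the exact
vd_lavc code); a counter over consecutive broken frames would then trigger
the software fallback:

    #include <stdbool.h>
    #include <libavutil/frame.h>

    static bool frame_is_broken(const AVFrame *frame)
    {
        // A valid decoded frame must have at least one data plane set;
        // nvdec sometimes returns frames where all of them are NULL.
        for (int n = 0; n < AV_NUM_DATA_POINTERS; n++) {
            if (frame->data[n])
                return false;
        }
        return true;
    }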
Untested due to lack of hardware from the regrettable graphics company.
Better do this here than deal with the moronic project we unfortunately
depend on.
See: #7185
These all have been replaced recently.
There was a leftover in window.swift. It couldn't have done anything
useful in the current state of the code, so drop these lines.
* Instead of following VOCTRL_FULLSCREEN, check for option changes.
* Instead of signaling VO_EVENT_FULLSCREEN_STATE, update the cached
option structure and have it propagated to the origin.
Additionally, gets rid of all the straight usage of the VO options
structure.
Done in a similar style to the Wayland common file, where, when reading
the value, the "payload" from the cache is utilized.
In this combination, the [current-]window-scale properties still
incorrectly applied scaling.
For some reason, vo_calc_window_geometry2() handled this option
(basically ignored the dpi_scale parameter passed to it), but since the
DPI compensation for window-scale is implemented in x11_common.c, we
need to check and honor this option here too. (What a mess.)
"window-scale" is 1.0 by default; however, x11 implicitly set that to
2.0 on hidpi screens. This made the default 2.0, which was inconsistent
with the option. The "window-scale" property jumped from 1.0 to 2.0 when
a window was created.
Avoid this by factoring the DPI into the window-scale. This makes
UNFS_WINDOW_SIZE return a virtual size; since this value is used for the
window-scale property only, this is fine and has no further
consequences. (Originally, this was possibly meant to be used for other
purposes, but I'm perfectly fine with redoing this again should that
ever happen.)
This changes user-visible behavior, and it's as if setting window-scale
multiplies its argument by 2 suddenly. Hopefully no user will get angry.
This tries to deal with the crazy EGL situation. The summary is:
- using eglGetDisplay() with multiple windowing platforms doesn't really
work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks
This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).
I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop or not provide this
API.
Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)
X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.
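A hedged sketch of the selection logic (not mpv's exact code; e.g. called
as get_display(EGL_PLATFORM_WAYLAND_EXT, "EGL_EXT_platform_wayland", dpy)):

    #include <string.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static EGLDisplay get_display(EGLenum platform, const char *platform_ext,
                                  void *native_display)
    {
        // EGL 1.4: query the client extension string (this itself needs
        // EGL_EXT_client_extensions; NULL means it's not available).
        const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
        if (exts && strstr(exts, "EGL_EXT_platform_base") &&
            strstr(exts, platform_ext))
        {
            PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                    eglGetProcAddress("eglGetPlatformDisplayEXT");
            if (get_platform_display)
                return get_platform_display(platform, native_display, NULL);
        }
        // Fall back to the legacy, ambiguous entry point.
        return eglGetDisplay((EGLNativeDisplayType)native_display);
    }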
Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.
Why didn't they solve it like this:
    struct native_display {
        int platform_type;
        void *native_display;
    };
Could have kept eglGetDisplay() without all the obnoxious extension BS.
This assert() sometimes triggered (and still triggers) with lavc API
bugs. It tries to check that at least 1 plane is set to a non-NULL
value. Obviously, a valid frame returned by successful decoding should
never have all planes unset.
The problem is that some hwdecs use integer surface IDs cast to a
pointer. Recently, it happened that newer Intel drivers started using
surface ID 0 under certain circumstances (for unknown reasons), which
triggers this assert.
Just get rid of it.
For the sake of #7185, add an assert() specifically for nvdec. That
failure needs to be further analyzed, is probably an FFmpeg bug, and
without this assert() would just crash somewhere further down the video
chain.
Fixes: #7261
This code checked AVFrame.buf[0] instead of the decode return code to
see whether a frame was decoded. This is sort of suspicious; while I
think that the lavc API actually guarantees it, it's not intuitive
anyway. In addition, the code was unnecessarily roundabout.
Replace it with a proper error code check. Remove the other error return
(that was, or should have been, redundant before). The no-frame path is
now cleanly separated. Add an assert on the frame-returned path; if this
fails, lavc violated its own API.
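A minimal sketch of the resulting control flow (the helper name is
assumed):

    #include <assert.h>
    #include <libavcodec/avcodec.h>

    static int receive_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        int ret = avcodec_receive_frame(avctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;               // no frame available; not an error
        if (ret < 0)
            return ret;             // actual decode error
        assert(frame->buf[0]);      // success path: lavc must set the frame
        return 1;                   // got a frame
    }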
the old event tap has several problems, like no proper priority support
or having to set accessibility permissions for mpv or the terminal.
it is now replaced by the new MediaPlayer which has proper priority
support and isn't as greedy as before. this only includes Media Key
support and not any of the other features included in the MediaPlayer
framework, like proper Now Playing data (only dummy data is set for now).
this is only available on macOS 10.12.2 and higher.
also removes some unnecessary redefines.
Fixes #6389
this removes the direct access of the mp_vo_opts struct via the vo struct
and replaces it with the m_config_cache usage. this updates the
fullscreen and window-minimized property via m_config_cache_write_opt
instead of the old mechanism via VOCTRL and event flagging. also use the
new VOCTRL_VO_OPTS_CHANGED event for fullscreen and border changes.
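The pattern, as a hedged sketch (mpv internals; exact signatures and field
names may differ slightly):

    struct priv {
        struct m_config_cache *opts_cache;
        struct mp_vo_opts *opts;    // cached copy owned by opts_cache
    };

    static void report_fullscreen(struct priv *p, bool fs)
    {
        // Update the cached value and propagate it back to the option core
        // instead of signaling a VO_EVENT / handling a VOCTRL.
        p->opts->fullscreen = fs;
        m_config_cache_write_opt(p->opts_cache, &p->opts->fullscreen);
    }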