Since we divide by it in a couple of places and compositors can be crazy,
it's better to be safe than sorry.
Also check the cursor spawn during init (pointless since it is done again
on cursor entry, but it's more correct).
It seems the cursor's position was not properly adjusted when scaled.
Hence, bring back correct buffer scaling to make the cursor look fine.
Also, the cursor surface now gets created sooner, which is better.
Regression since ec6e8a31e0: removing the explicit else case meant that
the conversion to premultiplied alpha was always applied. We want to
scale with premultiplied alpha, but we don't want to multiply with alpha
again on top of that.
Fixes #4983, hopefully.
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
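For illustration, a minimal sketch (not mpv code) of the bit-level
float32 -> float16 conversion that the CPU-side half representation
implies; rounding, denormals and NaN handling are omitted:

    #include <stdint.h>
    #include <string.h>

    // Truncating float32 -> float16 conversion; illustrative only.
    static uint16_t f32_to_f16(float f)
    {
        uint32_t x;
        memcpy(&x, &f, sizeof(x));
        uint32_t sign = (x >> 16) & 0x8000;
        int32_t  exp  = (int32_t)((x >> 23) & 0xff) - 127 + 15; // rebias
        uint32_t mant = (x >> 13) & 0x3ff;                      // truncate
        if (exp <= 0)
            return (uint16_t)sign;            // flush tiny values to zero
        if (exp >= 0x1f)
            return (uint16_t)(sign | 0x7c00); // overflow -> infinity
        return (uint16_t)(sign | ((uint32_t)exp << 10) | mant);
    }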
With video paused, changing the brightness controls (or similar) would
sometimes not rerender the video frame. So the OSD would redraw, but the
video wouldn't change. This is caused by output caching, and a redraw
request is free to return the cached frame. Change it to invalidate the
cached frame if any of the options or the equalizer change.
In theory, gl_video_reset_surfaces() could be called if the equalizer
changes - this would apparently force interpolation to redraw all
frames. But this looks kind of crappy when changing the equalizer during
playback. It'll "eventually" use the correct settings anyway, and when
paused, interpolation is off.
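A minimal sketch of the dirty-flag idea described above; names are
illustrative, not mpv's actual ones:

    #include <stdbool.h>
    #include <stdio.h>

    struct renderer {
        bool output_frame_valid; // whether the cached output can be reused
    };

    static void render_frame(struct renderer *p) { puts("full re-render"); }
    static void blit_cached(struct renderer *p)  { puts("reuse cached frame"); }

    // Called on any option/equalizer change: the cached frame is stale, so
    // the next redraw request must re-render instead of blitting it.
    static void on_options_changed(struct renderer *p)
    {
        p->output_frame_valid = false;
    }

    static void redraw(struct renderer *p)
    {
        if (p->output_frame_valid) {
            blit_cached(p);
            return;
        }
        render_frame(p);
        p->output_frame_valid = true;
    }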
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for selecting the "UI" backend
anyway, since "x11" would force GLX, whereas people not on NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically, the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type would require
a separate mechanism. (Although in theory this separate mechanism could
be part of the --gpu-context option - in any case, someone would have to
implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could just remove
the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
Every compositor (including toy compositors) has had support for wl_output v2
since forever, so there's little point in supporting degraded output for
5-year-old releases (especially considering we require zxdg6, which is far
more recent).
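A hedged sketch of what requiring v2 looks like in the registry handler
(not mpv's actual code):

    #include <string.h>
    #include <wayland-client.h>

    static void handle_global(void *data, struct wl_registry *reg,
                              uint32_t name, const char *interface,
                              uint32_t version)
    {
        if (!strcmp(interface, wl_output_interface.name) && version >= 2) {
            // Bind at version 2 unconditionally; every compositor of the
            // last ~5 years advertises at least that.
            struct wl_output *output =
                wl_registry_bind(reg, name, &wl_output_interface, 2);
            (void)output; // store it and add a wl_output listener here
        }
        (void)data;
    }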
This adds symbol information to the generated SPIR-V, which shows up in
the SPIR-V assembly dump. It's also useful for potential RA backends
that use SPIRV-Cross, since the symbol information is used in the
generated shader source.
This should actually cover all of them, if you take into account that
some unchanged GPL source files include header files with such checks.
Also, this was already done for the libaf-derived code.
This is only for "safety" and to avoid misunderstandings.
It turns out compositors which do scaling scale the cursor as well,
so every single surface needs to get scaled too.
Also, 32 corresponds to the default size for both GTK+ and KDE.
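A hedged sketch of the idea (not mpv's actual code): load the theme at
32 times the output scale, and mark the cursor surface's buffer scale so
the compositor doesn't scale it a second time:

    #include <wayland-client.h>
    #include <wayland-cursor.h>

    static struct wl_cursor_theme *load_scaled_cursor(struct wl_shm *shm,
                                                      struct wl_surface *cursor_surface,
                                                      int scale)
    {
        // NULL loads the default cursor theme.
        struct wl_cursor_theme *theme =
            wl_cursor_theme_load(NULL, 32 * scale, shm);
        wl_surface_set_buffer_scale(cursor_surface, scale);
        return theme;
    }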
This new interface in libva2 offers a cleaner way to export surfaces
which can then be imported to EGL. In particular, this works with
the Mesa driver, so we can have proper playback without a pointless
download and upload on AMD cards.
This change does nothing with libva1, and will fall back to the
libva1 interface (vaDeriveImage() + vaAcquireBufferHandle()) if
vaExportSurfaceHandle() is not present.
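A hedged sketch of the new export path (error handling kept minimal):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    static int export_surface(VADisplay dpy, VASurfaceID surface,
                              VADRMPRIMESurfaceDescriptor *desc)
    {
        VAStatus status = vaExportSurfaceHandle(
            dpy, surface,
            VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
            VA_EXPORT_SURFACE_READ_ONLY | VA_EXPORT_SURFACE_SEPARATE_LAYERS,
            desc);
        if (status != VA_STATUS_SUCCESS)
            return -1; // caller falls back to the libva1 interface
        // desc->objects[].fd now hold dma-buf fds and desc->layers[]
        // describe the planes; these can be imported into EGL via
        // EGL_EXT_image_dma_buf_import. The caller must close the fds.
        return 0;
    }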
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to the render
callback in java. Performance also suffers with opengl-cb, due to the
overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
MediaCodec uses a fixed number of output buffers to hold frames, and
expects that output buffers will be released as soon as possible. Once
rendered, the underlying frame is automatically released and cannot be
reused or rerendered.
The new VO_CAP_NOREDRAW forces mpv to release frames immediately after
they are rendered or dropped, to ensure that the MediaCodec decoder does
not run out of buffers and stall out.
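A hedged sketch using the NDK API (not mpv's actual code) of why prompt
release matters:

    #include <media/NdkMediaCodec.h>

    static void present_one_frame(AMediaCodec *codec)
    {
        AMediaCodecBufferInfo info;
        // Stalls once the application holds all output buffers, hence
        // VO_CAP_NOREDRAW forcing immediate release.
        ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000);
        if (idx < 0)
            return; // no buffer ready (or format/buffers changed)
        // render=true pushes the frame to the attached Surface; either way
        // the buffer returns to the codec and cannot be rerendered later.
        AMediaCodec_releaseOutputBuffer(codec, (size_t)idx, true);
    }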
This commit:
- Implements output tracking (e.g. monitor plug/unplug)
- Creates the surface during registry (no other dependencies)
- Queues the callback immediately after surface creation
- Cleaner and better event handling (functions return directly)
- Better reconfigure handling (resizes reduced to 1 during init)
- Don't unnecessarily resize (if dimensions match)
Apart from that, this fixes 2 potential memory leaks (mime type and
window title) and 2 string ownership issues (output name and make need
to be dup'd), fixes some style issues (switches were indented), and
finally adds messages when disabling/enabling idle inhibition.
The callback setter function was unnecessary, so it was removed in
preparation for the commit which will use the frame event callback.
The VO code resets each flag individually, and it doesn't do it for this one.
Also make the prints use the struct names rather than the hardcoded ones;
this was forgotten in the last wayland_common commit.
The wayland code was written more than 4 years ago when wayland wasn't
even at version 1.0. This commit rewrites everything in a more modern way,
switches to using the new xdg v6 shell interface which solves a lot of bugs
and makes mpv tiling-friendly, adds support for drag and drop, adds support
for touchscreens, adds support for KDE's server decorations protocol,
and finally adds support for the new idle-inhibitor protocol.
It does not yet use the frame callback as a main rendering loop driver,
this will happen with a later commit.
The existing code in check_ext() avoided false positives due to
sub-strings, but allowed false negatives. Fix this with slightly better
search code, and make it available as a function to other source files.
(There are some cases of strstr() still around.)
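The improved search amounts to exact token matching in the
space-separated extension string; a minimal sketch:

    #include <stdbool.h>
    #include <string.h>

    static bool gl_check_extension(const char *exts, const char *ext)
    {
        size_t len = strlen(ext);
        const char *cur = exts;
        while (cur && (cur = strstr(cur, ext))) {
            // Require token boundaries on both sides, so "GL_foo" does
            // not match inside "GL_foo_bar".
            bool start_ok = cur == exts || cur[-1] == ' ';
            bool end_ok   = cur[len] == '\0' || cur[len] == ' ';
            if (start_ok && end_ok)
                return true;
            cur += len;
        }
        return false;
    }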
Unless FBOs are unsupported, this works. In particular, it's required to
get ICC profiles working in voluntary dumb mode. So instead of
blanket-disabling it, only disable it in the !have_fbo case.
Originally mpv vaapi support was based on the MPlayer-vaapi patches.
These were never merged in upstream MPlayer. The license headers
indicated they were GPL-only. Although the actual author agreed to
relicensing, the company employing him to write this code did not, so
the original code is unusable to us.
Fortunately, vaapi support was refactored and rewritten several times,
meaning little code is actually left. The previous commits removed or
moved that to GPL-only code. Namely, vo_vaapi.c remains GPL-only. The
other code went away or became unnecessary mainly because libavcodec
itself gained the ability to manage the hw decoder, and libavutil
provides code to manage vaapi surfaces. We also changed to mainly using
EGL interop, making any of the old rendering code unnecessary.
hwdec_vaglx.c is still GPL. It's possibly relicensable, because much of
it was changed, but I'm not too sure and further investigation would be
required. Also, this has been disabled by default for a while now, so
bothering with this is a waste of time. This commit simply disables it
at compile time as well in LGPL mode.
Done for license reasons. vo_vaapi.c is turned into some kind of
dumpster fire, and we'll remove it as soon as I'm mentally ready for
unkind users to complain about removal of this old POS.
Seems to be fixed upstream in the nvidia driver, so it's probably a good
idea to 1. force the layout and 2. remove the warning, as it now
actually works. Users with older drivers would run into errors, but they
can still use shaderc as a replacement. (And it's not like the old
status quo was any better.)
This was always set to the length of the VAO, but it should have been
set to the number of vertex attribs actually in use for this frame. No
idea how that managed to survive the test framework on nvidia/linux, but
ANGLE caught it.
This has several advantages:
1. no more redundant texcoords when we don't need them
2. no more arbitrary limit on how many textures we can bind
3. (that extends to user shaders as well)
4. no more arbitrary limits on tscale radius
To realize this, the VAO was moved from a hacky stateful approach
(gl_sc_set_vertex_attribs) - which always bothered me since it was
required for compute shaders as well even though they ignored it - to be
a proper parameter of gl_sc_dispatch_draw, and internally plumbed into
gl_sc_generate, which will make a (properly mangled) deep copy into
params.vertex_attribs.
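A purely illustrative sketch of the API shape this describes; the real
mpv signatures and types differ:

    #include <stddef.h>

    // Before: attribs were sticky state, set even for compute shaders
    // that ignored them:
    //     gl_sc_set_vertex_attribs(sc, attribs, num_attribs);
    //     ...
    //     dispatch(sc);
    // After: attribs travel with the draw call itself (hypothetical
    // signature), and compute dispatches simply pass none.
    struct gl_shader_cache;
    struct vertex_attrib { const char *name; int offset, size; };

    void gl_sc_dispatch_draw(struct gl_shader_cache *sc,
                             const struct vertex_attrib *attribs,
                             int num_attribs,
                             const void *vertices, size_t num_vertices);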
FlagBits is just the name of the enum. The actual data type representing
a combination of these flags follows the *Flags convention. (The
relevant difference is that the latter is defined to be uint32_t
instead of being left implicit.)
For consistency, use *Flags everywhere instead of randomly switching
between *Flags and *FlagBits.
Also fix a wrong type name on `stageFlags`, pointed out by @atomnuker.
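For reference, the Vulkan convention in question (abbreviated from
vulkan_core.h):

    #include <stdint.h>

    typedef enum VkShaderStageFlagBits {
        VK_SHADER_STAGE_VERTEX_BIT   = 0x00000001,
        VK_SHADER_STAGE_FRAGMENT_BIT = 0x00000010,
        VK_SHADER_STAGE_COMPUTE_BIT  = 0x00000020,
        // ...
    } VkShaderStageFlagBits;

    typedef uint32_t VkFlags;
    typedef VkFlags VkShaderStageFlags; // a combination of FlagBits values

    // So a field holding an OR of several bits (like stageFlags in
    // VkDescriptorSetLayoutBinding) is typed VkShaderStageFlags, not
    // VkShaderStageFlagBits, whose width as an enum is left implicit.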
Using renderpass layout transitions is more optimal and doesn't require
a redundant pipeline barrier.
Since our render passes are static and don't change throughout the
lifetime of a ra_renderpass, we unfortunately don't have much
flexibility here - so just hard-code SHADER_READ_ONLY_OPTIMAL as the
output format as this will be the most common case.
We also can't short-circuit the transition when we need to preserve the
framebuffer contents, since that depends on the current layout; so we
still use an explicit tex_barrier in this case. (Most optimal for this
scenario would be an input attachment anyway.)
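A hedged sketch of the resulting attachment description (format chosen
arbitrarily):

    #include <vulkan/vulkan.h>

    // The render pass itself performs the transition to
    // SHADER_READ_ONLY_OPTIMAL via finalLayout, so no separate pipeline
    // barrier is needed afterwards. When framebuffer contents must be
    // preserved, initialLayout/loadOp would have to match the previous
    // contents instead of discarding them.
    VkAttachmentDescription att = {
        .format         = VK_FORMAT_R16G16B16A16_UNORM,
        .samples        = VK_SAMPLE_COUNT_1_BIT,
        .loadOp         = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
        .storeOp        = VK_ATTACHMENT_STORE_OP_STORE,
        .stencilLoadOp  = VK_ATTACHMENT_LOAD_OP_DONT_CARE,
        .stencilStoreOp = VK_ATTACHMENT_STORE_OP_DONT_CARE,
        .initialLayout  = VK_IMAGE_LAYOUT_UNDEFINED,
        .finalLayout    = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
    };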
Now you need FFmpeg git, or something.
This also gets rid of the last real use of gpu_memcpy(). libavutil does
that itself. (vaapi.c still used it, but it was essentially unused,
because the code path isn't really in use anymore. It wasn't even
included due to the d3d-hwaccel dependency in wscript.)