This originally existed as a hack for weston. In certain scenarios, a
frame taking too long to render would cause vo_wayland_wait_frame to
time out, which would result in a ton of dropped frames. The naive
solution was to just add a slight delay to the time value. If a
frame took too long, it would likely fall under the timeout value and
all was well. This was exposed to the user, since the default delay
(1000) was completely arbitrary.
However, with presentation time, this doesn't appear to be necessary.
Fresh frames that take longer than the display's refresh rate (16.666 ms
in most cases) behave well in Weston. The other two main compositors
without presentation time (GNOME and Plasma) also show no ill
effects. It's better not to overcomplicate things, so
this "feature" can be removed now.
There are two stupid things here that need to be fixed. First of all,
vulkan wasn't actually using presentation time because somehow the
get_vsync function in context.c disappeared. Secondly, if the mpv window
was hidden, it was updating the ust time based on refresh_usec, but
really it should simply not feed any information to the vsync info
structure. So this adds some logic to guess whether or not a window is
hidden.
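As a rough sketch of what that hidden-window logic amounts to (these
names are approximations, not the exact mpv code):

    static void get_vsync(struct vo *vo, struct vo_vsync_info *info)
    {
        struct vo_wayland_state *wl = vo->wl;
        if (wl->hidden)
            return; // window assumed hidden: feed nothing to vsync info
        info->vsync_duration = wl->vsync_duration;
        info->skipped_vsyncs = wl->last_skipped_vsyncs;
        info->last_queue_display_time = wl->last_queue_display_time;
    }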
The old way of using wayland in mpv relied on an external renderloop for
semi-accurate timings. This had multiple issues though. Display sync
would break whenever the window was hidden (since the frame callback
stopped being executed) which was really annoying. Also the entire
external renderloop logic was kind of fragile and didn't play well with
mpv's internal structure (i.e. using presentation time in that old
paradigm breaks stats.lua).
Basically the problem is that swap buffers blocks on wayland which is
crap whenever you hide the mpv window since it locks up the entire
player. So you have to make swap buffers not block, but this has a
different problem. Timings will be terrible if you use the unblocked
swap buffers call.
Based on some discussion in #wayland, the trick here is relatively
simple and works well enough for our purposes. Instead we basically
build a way to block with a timeout in the wayland buffer swap
functions.
A bool is set in the frame callback function that indicates whether or
not mpv is waiting for a frame to be displayed. In the actual buffer
swap function, we enter into a while loop waiting for this flag to be
set. At the same time, the wl_display is polled to block the thread and
wake up if it receives any events from the compositor. This loop only
breaks if enough time has passed or if the frame callback bool is
set.
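A minimal sketch of that loop, assuming a frame_signalled flag set by
the frame callback and a hypothetical now_ms() timer helper:

    static void wayland_wait_frame(struct vo_wayland_state *wl,
                                   int timeout_ms)
    {
        struct pollfd fd = {
            .fd = wl_display_get_fd(wl->display),
            .events = POLLIN,
        };
        int64_t deadline = now_ms() + timeout_ms;
        while (!wl->frame_signalled) {
            int remaining = deadline - now_ms();
            if (remaining <= 0)
                break; // enough time has passed; don't stall the player
            wl_display_flush(wl->display);
            if (poll(&fd, 1, remaining) > 0)
                wl_display_dispatch(wl->display); // may run the callback
        }
        wl->frame_signalled = false; // re-arm for the next swap
    }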
In the near future, it would be better to set whether or not a frame has
been displayed in the presentation feedback. However, as a first pass,
doing it in the frame callback is more than good enough.
The "downside" is that we render frames that aren't actually shown on
screen when the player is hidden (it seems like wayland people don't
like that). But who cares. Accurate timings are way more important. It's
probably not too hard to add that behavior back in the player though.
We collect a 'vulkan-device' option today but then don't actually
pass it on, so it's useless. Once that's fixed, it can be used
to select a specific vulkan device by name.
Tested with the new nvidia offload feature to select between the
nvidia and intel GPUs.
This change introduces a vulkan interop path for the vaapi hwdec.
The basic principles are mostly the same as for EGL, with the
exported dma_buf being imported by Vulkan. The biggest difference
is that we cannot reuse the texture as we do with OpenGL - there's
no way to rebind a VkImage to a different piece of memory, as far
as I can see. So, a new texture is created on each map call.
I did not bother implementing a code path for the old libva API as
I think it's safe to assume any system with a working vulkan driver
will have access to a newer libva.
Note that we are using separate layers for the vaapi surface, just
as is done for EGL. This is because libplacebo doesn't support
multiplane images.
This change does not include format negotiation because no driver
implements the VK_EXT_image_drm_format_modifier extension that
would be required to do that. In practice, the two formats we care
about (nv12, p010) work correctly, so we are not blocked. A separate
change had to be made in libplacebo to filter out non-fatal validation
errors related to surface sizes due to the lack of format negotiation.
This commit rips out the entire mpv vulkan implementation in favor of
exposing lightweight wrappers on top of libplacebo instead, which
provides much of the same except in a more up-to-date and polished form.
This (finally) unifies the code base between mpv and libplacebo, which
is something I've been hoping to do for a long time.
Note: The ra_pl wrappers are abstract enough from the actual libplacebo
device type that we can in theory re-use them for other devices like
d3d11 or even opengl in the future, so I moved them to a separate
directory for the time being. However, the rest of the code is still
vulkan-specific, so I've kept the "vulkan" naming and file paths, rather
than introducing a new `--gpu-api` type. (Which would have ended up
with significantly more code duplication.)
Plus, the code and functionality is similar enough that for most users
this should just be a straight-up drop-in replacement.
Note: This commit excludes some changes; specifically, the updates to
context_win and hwdec_cuda are deferred to separate commits for
authorship reasons.
This option has been deprecated upstream for a long time, probably
doesn't even work anymore, and won't work moving forward as we replace
the vulkan code with libplacebo wrappers.
I haven't removed the option completely yet since in theory we could
still add support for e.g. a native glslang wrapper in the future. But
most likely the future of this code is deletion.
As an aside, fix an issue where the man page didn't mention d3d11.
By design, some vulkan implementations block until vsync during
vkAcquireNextImageKHR. Since mpv only considers the time that
`swap_buffers` spent blocking as constituting part of the vsync, we can
help it out a bit by pre-emptively calling this function here in order
to improve the accuracy of vsync jitter measurements on vulkan.
(If it fails, we just ignore the error and have the user call it a
second time later - maybe it will work then)
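Roughly, with error handling elided and the context fields assumed:

    uint32_t imgidx = 0;
    VkResult res = vkAcquireNextImageKHR(vk->dev, vk->swapchain,
                                         UINT64_MAX, vk->acquire_sem,
                                         VK_NULL_HANDLE, &imgidx);
    if (res == VK_SUCCESS)
        p->last_imgidx = imgidx; // reused by the next start_frame
    // on any failure, the acquire is simply retried next frame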
On my system this drops vsync-jitter from ~0.030 to ~0.007, an accuracy
of +/- 100μs. (Which *might* have something to do with the fact that
this is the polling interval for command polling)
Makes performance slightly better when using multiple queues by avoiding
unnecessary semaphores due to bad queue selection.
Also remove an aeons-old workaround for an nvidia bug that only ever
existed in the earliest beta vulkan drivers anyway.
I was inconsistent about this originally, as the functionality was
moved into the core spec in 1.1 and so both suffixed and unsuffixed
versions of everything exist and can be mixed together.
There's no reason to fail to build with 1.0.39+ so I'm fixing the
names.
This is arguably a little contrived, but in the case of CUDA interop,
we have to track additional state on the cuda side for each exported
buffer. If we want to be able to manage buffers with an ra_buf_pool,
we need some way to keep that CUDA state associated with each created
buffer. The easiest way to do that is to attach it directly to the
buffers.
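Pictured as a sketch (the exact ra structs differ; user_data and the
CUDA-side type are assumptions for illustration):

    struct ra_buf_params {
        // ...existing fields (size, usage flags, etc.)...
        void *user_data; // owned by the buffer's creator
    };

    // What the CUDA interop might stash there:
    struct cuda_buf_state {
        CUexternalMemory mem; // memory imported from the exported buffer
        CUdeviceptr ptr;      // device pointer mapped from it
    };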
The CUDA/Vulkan interop works on the basis of memory being exported
from Vulkan and then imported by CUDA. To enable this, we add a way
to declare a buffer as being intended for export, and then add a
function to do the export.
For now, we support the fd- and HANDLE-based exports on Linux and
Windows respectively. There are others, which we can support when
a need arises.
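For the Linux path, the plumbing looks roughly like this (a sketch;
the VK() checking macro and the surrounding allocation code are
assumed):

    // At allocation time, mark the memory as exportable:
    VkExportMemoryAllocateInfoKHR ext = {
        .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO_KHR,
        .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR,
    };
    // ...chain &ext into the VkMemoryAllocateInfo for vkAllocateMemory

    // The export itself:
    VkMemoryGetFdInfoKHR fd_info = {
        .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
        .memory = mem,
        .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT_KHR,
    };
    int fd = -1;
    VK(vkGetMemoryFdKHR(dev, &fd_info, &fd)); // CUDA then imports this fd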
Also note that this is just for exporting buffers, rather than
textures (VkImages). Image import on the CUDA side is supposed to
work, but it is currently buggy and waiting for a new driver release.
Finally, at least with my nvidia hardware and drivers, everything
seems to work even if we don't initialise the buffer with the right
exportability options. Nevertheless I'm enforcing it so that we're
following the spec.
Since the code just broke out of the loop on a match rather than jumping
straight to the end of the function body, it ended up hitting the code
path for when the end of the list was reached.
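In miniature, the bug looked like this:

    for (int i = 0; i < num_entries; i++) {
        if (is_match(&entries[i])) {
            result = &entries[i];
            break; // drops out of the loop...
        }
    }
    // ...but execution continues here, which also handled the
    // end-of-list case. The fix is to jump past this block on a match
    // (or to guard it with a check that nothing was found).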
This was based on the texture height, which was a mistake. In some cases
it could exceed the actual size of the buffer, leading to a vulkan API
error. This didn't seem to cause any problems in practice, since a
too-large synchronization is just bad for performance and shouldn't do
any harm internally, but either way, it was still undefined behavior to
submit a barrier outside of the buffer size.
Fix the calculation, thus fixing this issue.
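Schematically (field names approximate), the barrier now covers the
region actually written instead of pitch times texture height:

    VkBufferMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
        .buffer = buf->handle,
        .offset = region_offset,
        .size   = region_size, // was: row_pitch * tex->h, which could
                               // exceed the real size of the buffer
    };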
There is now a better way. Reading the front buffer was always a
hack. The new code via VOCTRL_SCREENSHOT renders it into an FBO, which
does not come with the disadvantages of reading the front buffer (like
not being supported by GLES, possibly black regions due to overlapping
windows on some systems).
For now keep VOCTRL_SCREENSHOT_WIN on the VO level, because there are
still some lesser VOs and backends that use it.
The vulkan validation layers warn you if you try requesting a query
result from a timer that hasn't even been started yet, so we have to do
a bit of extra work to keep track of which indices we've seen so far,
and avoid the queries on them.
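Something along these lines (a sketch; the index bookkeeping names are
made up):

    if (timer->idx_seen < timer->idx) {
        // this query index was never written; skip the readback
        timer->idx_seen = timer->idx;
    } else {
        uint64_t ts[2] = {0};
        vkGetQueryPoolResults(dev, timer->pool, timer->idx, 2,
                              sizeof(ts), ts, sizeof(uint64_t),
                              VK_QUERY_RESULT_64_BIT);
        timer->result = ts[1] - ts[0]; // elapsed ticks
    }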
Instead of enabling every feature under the sun, make an effort to just
whitelist the ones we actually might use. Turns out the extended storage
format support is needed for some of the storage formats we use, in
particular rgba16.
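The shape of the whitelist, sketched:

    VkPhysicalDeviceFeatures avail = {0}, enabled = {0};
    vkGetPhysicalDeviceFeatures(physd, &avail);
    // Only enable what we might actually use:
    enabled.shaderImageGatherExtended = avail.shaderImageGatherExtended;
    enabled.shaderStorageImageExtendedFormats =
        avail.shaderStorageImageExtendedFormats; // needed for e.g. rgba16
    // ...then pass &enabled as VkDeviceCreateInfo.pEnabledFeatures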
The queue family index and the queue info index are not necessarily the
same, so we're forced to do a check based on the queue family index
itself.
Fixes #5049
A vulkan validation layer update pointed out that this was wrong; we
still need to use the access type corresponding to the stage mask, even
if it means our code won't be able to skip the pipeline barrier (which
would be wrong anyway).
In addition to this, we're also not allowed to specify any source
access mask when transitioning from top_of_pipe, which doesn't make any
sense anyway.
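In other words (a sketch; access_mask_for() is a hypothetical helper
mapping a stage mask to its access flags):

    VkImageMemoryBarrier barrier = {
        .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
        // srcAccessMask must correspond to the source stage, except
        // for transitions out of TOP_OF_PIPE, which take none at all:
        .srcAccessMask = src_stage == VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT
                             ? 0 : access_mask_for(src_stage),
        .dstAccessMask = dst_access,
        .oldLayout = old_layout,
        .newLayout = new_layout,
    };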
Async compute in particular seems to cause problems on some drivers, and
even when supported, the benefits are not that massive from the tests I
have seen, so it's probably safe to keep off by default.
Async transfer on the other hand seems to work better and offers a more
substantial improvement, so it's kept on.
This gets confused by e.g. SPARSE_BINDING_BIT set alongside
TRANSFER_BIT, leading to
situations where "more specialized" is ambiguous and the logic breaks
down. So to fix it, only compare the subset we care about.
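Sketched:

    // Only these bits count towards "specialization"; stray bits like
    // VK_QUEUE_SPARSE_BINDING_BIT would otherwise make it ambiguous.
    const VkQueueFlags mask = VK_QUEUE_GRAPHICS_BIT |
                              VK_QUEUE_COMPUTE_BIT |
                              VK_QUEUE_TRANSFER_BIT;
    VkQueueFlags a = qfa->queueFlags & mask,
                 b = qfb->queueFlags & mask;
    // a is "more specialized" than b if its masked capabilities are a
    // proper subset of b's:
    bool a_wins = (a & b) == a && a != b;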
blit() implies scaling, copy() is the equivalent command to use when the
formats are compatible (same pixel size) and the rects have the same
dimensions.
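So the dispatch rule is roughly this (function names are stand-ins for
the actual ra entry points):

    bool same_size = mp_rect_w(src_rc) == mp_rect_w(dst_rc) &&
                     mp_rect_h(src_rc) == mp_rect_h(dst_rc);
    if (same_size && src->params.format->pixel_size ==
                     dst->params.format->pixel_size)
    {
        tex_copy(ra, dst, src, &src_rc); // compatible: cheap copy
    } else {
        tex_blit(ra, dst, src, &src_rc, &dst_rc); // scaling needed: blit
    }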
This allows RAs with support for non-opaque FBO formats to use a more
appropriate FBO format for the output tex, possibly enabling a more
efficient blit operation.
This requires distinguishing between real formats (which can be used to
create textures) and fake formats (e.g. ra_gl's FBO hack).
On AMD devices, we only get one graphics pipe but several compute pipes
which can (in theory) run independently. As such, we should prefer
compute shaders over fragment shaders in scenarios where we expect them
to be better for parallelism.
This is amusingly trivial to do, and actually improves performance even
in a single-queue scenario.
Instead of using a single primary queue, we generate multiple
vk_cmdpools and pick the right one dynamically based on the intent.
This has a number of immediate benefits:
1. We can use async texture uploads
2. We can use the DMA engine for buffer updates
3. We can benefit from async compute on AMD GPUs
Unfortunately, the major downside is that due to the lack of QF
ownership tracking, we need to use CONCURRENT sharing for all resources
(buffers *and* images!). In theory, we could try figuring out a way to
get rid of the concurrent sharing for buffers (which is only needed for
compute shader UBOs), but even so, the concurrent sharing mode doesn't
really seem to have a significant impact over here (nvidia). It's
possible that other platforms may disagree.
Our deadlock-avoidance strategy is stupidly simple: Just flush the
commands every time we need to switch queues, and make sure all
submission and callbacks happen in FIFO order. This required lifting the
cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some
functions died/got moved as a result, but that's a relatively minor
change.
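Conceptually, the pool selection reduces to something like this (names
made up for the sketch):

    static struct vk_cmdpool *choose_pool(struct mpvk_ctx *vk,
                                          VkQueueFlagBits intent)
    {
        switch (intent) {
        case VK_QUEUE_TRANSFER_BIT:
            if (vk->pool_transfer)
                return vk->pool_transfer; // dedicated DMA engine
            // fall through
        case VK_QUEUE_COMPUTE_BIT:
            if (vk->pool_compute)
                return vk->pool_compute;  // async compute if available
            // fall through
        default:
            return vk->pool_graphics;     // always exists
        }
    }
    // Switching pools flushes pending commands first, so submission
    // and callbacks stay in FIFO order.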
On my hardware this is a fairly significant performance boost, mainly
due to async transfers. (Nvidia doesn't expose separate compute queues
anyway). On AMD, this should be a performance boost as well due to async
compute.
This is especially interesting for vulkan since it allows completely
skipping the layout transition as part of the renderpass. Unfortunately,
that also means it needs to be put into renderpass_params, as opposed to
renderpass_run_params (unlike #4777).
Closes #4777.
This uses the new vk_signal mechanism to order all access to textures.
This has several advantages:
1. It allows real synchronization of image access across multiple frames
when using multiple queues for parallelism.
2. It allows using events instead of pipeline barriers, which is a
finer-grained synchronization primitive that allows for more
efficient layout transitions over longer durations.
This commit also restructures some of the implicit transition code for
renderpasses to be more flexible and correct. (Note: this technically
drops the ability to transition the image out of undefined layout when
not blending, but that was a bug anyway and needs to be done properly)
vo_gpu: vulkan: remove no-longer-true optimization
The change to the output_tex format makes this no longer true, and it
actually seems to hurt performance now as well. So just don't do it
anymore. I also realized it hurts performance when drawing an OSD, so
it's probably not a good idea anyway.
This combines VkSemaphores and VkEvents into a common umbrella
abstraction which can resolve to either.
We aggressively try to prefer VkEvents over VkSemaphores whenever the
conditions are met (1. we can unsignal the semaphore, i.e. it comes from
the same frame; and 2. it comes from the same queue).
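The umbrella type looks roughly like this (field names approximate):

    struct vk_signal {
        VkSemaphore semaphore;  // always safe: works across queues/frames
        VkEvent event;          // cheaper, but same-queue/same-frame only
        VkQueue event_source;   // queue the event was signalled on
    };
    // When consuming a signal: if it was signalled on the same queue
    // within the same frame, wait on the VkEvent; otherwise fall back
    // to the VkSemaphore.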
Instead of being submitted immediately, commands are appended into an
internal submission queue, and the actual submission is done once per
frame (at the same time as queue cycling). Again, the benefits are not
immediately obvious because nothing benefits from this yet, but it will
make more sense for an upcoming vk_signal mechanism.
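The queueing itself is tiny (a sketch; submit_cmd() is a stand-in for
the vkQueueSubmit plumbing):

    void vk_cmd_queue(struct mpvk_ctx *vk, struct vk_cmd *cmd)
    {
        vkEndCommandBuffer(cmd->buf);
        MP_TARRAY_APPEND(NULL, vk->cmds_queued, vk->num_cmds_queued, cmd);
    }

    // Called once per frame, at the same time as queue cycling:
    void vk_flush_commands(struct mpvk_ctx *vk)
    {
        for (int i = 0; i < vk->num_cmds_queued; i++)
            submit_cmd(vk, vk->cmds_queued[i]); // wraps vkQueueSubmit
        vk->num_cmds_queued = 0;
    }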
This also cleans up the way the ra_vk submission interacts with the
synchronization/callbacks from the ra_vk_ctx. Although currently, the
way the dependency is signalled is a bit hacky: normally it would be
associated with the ra_tex itself and waited on in the appropriate stage
implicitly. But that code is just temporary, so I'm keeping it in there
for a better commit order.
Instead of associating a single VkSemaphore with every command buffer
and allowing the user to ad-hoc wait on it during submission, make the
raw semaphores-to-signal array work like the raw semaphores-to-wait-on
array. Doesn't really provide a clear benefit yet, but it's required for
upcoming modifications.
1. No more static arrays (deps / callbacks / queues / cmds)
2. Allows safely recording multiple commands at the same time
3. Uses resources optimally by never over-allocating commands
ra_d3d11 uses the SPIR-V compiler to translate GLSL to SPIR-V, which is
then translated to HLSL. This means it always exposes the same GLSL
version that the SPIR-V compiler supports (4.50 for shaderc/glslang.)
Despite ra_d3d11 claiming to support GLSL 4.50, some features that are
tied to the GLSL version in OpenGL are not supported when targeting
legacy Direct3D feature levels.
This includes two features that mpv relies on:
- Reading from gl_FragCoord in the fragment shader (requires FL 10_0)
- textureGather from any texture component (requires FL 11_0)
These features have been exposed as new RA caps.
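If the cap names here match the actual RA flags (an assumption), a
renderer check looks like:

    if (!(ra->caps & RA_CAP_FRAGCOORD)) {
        // avoid generating shaders that read gl_FragCoord
    }
    if (!(ra->caps & RA_CAP_GATHER)) {
        // fall back to plain texture() instead of textureGather()
    }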
Backported from @haasn's change to libplacebo, except in the current RA,
there's nothing to indicate an ra_format can be bound as a storage
image, so there's no way to force all of these formats to have a
glsl_format. Instead, the layout qualifier will be removed if
glsl_format is NULL.
This is needed for the upcoming ra_d3d11 backend. In Direct3D 11, while
loading float values from unorm images often works as expected, it's
technically undefined behaviour, and in Windows 10, it will cause the
debug layer to spam the log with error messages. Also, apparently in
GLSL, the format name must match the image's format exactly (but in
Direct3D, it just has to have the same component type.)
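The resulting rule is simple; sketched here with a hypothetical emit()
string helper:

    if (fmt->glsl_format)
        emit("layout(%s) ", fmt->glsl_format); // e.g. "layout(rgba8) "
    // With glsl_format == NULL, no layout qualifier is emitted at all,
    // and the image is declared without a format.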