The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.
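A minimal sketch of that lookup, assuming the ra_get_native_resource()
helper that hwdec_vaapi.c uses (surrounding variable names are
illustrative):

    /* Fetch the X11 Display published by the GPU context as a "native
     * resource", instead of calling glXGetCurrentDisplay(). */
    Display *display = ra_get_native_resource(hw->ra, "x11");
    if (!display)
        return -1;  /* no X11 display in this context */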
I didn't know this. Nvidia didn't list this as an extension in the EGL
context back when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "broken". They can be easily fixed by passing
the display to mpv correctly.
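For reference, a hedged sketch of how a render API client passes the
display (handle, x11_display and my_get_proc_address are assumed to
exist; error handling omitted):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>
    #include <X11/Xlib.h>

    /* Hand mpv the X11 Display at render context creation, so hwdec
     * doesn't depend on mpv guessing it from the GL context. */
    mpv_opengl_init_params gl_init = {
        .get_proc_address = my_get_proc_address,
    };
    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
        {MPV_RENDER_PARAM_X11_DISPLAY, x11_display},  /* Display* */
        {0}
    };
    mpv_render_context *rctx;
    mpv_render_context_create(&rctx, handle, params);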
This integrates it as a "special" format, with no alpha component, as
the equivalent IMGFMT_RGB30 isn't meant to contain any.
Nothing can produce this format in the video chain yet, so the next
commits are needed to make this actually work.
There are two stupid things here that need to be fixed. First of all,
vulkan wasn't actually using presentation time because somehow the
get_vsync function in context.c disappeared. Secondly, if the mpv
window was hidden, it was updating the ust time based on refresh_usec,
when really it should just not feed any information to the vsync info
structure. So this adds some logic to guess whether or not a window is
hidden.
This was obviously missing from the recent commit, which probably broke
10 bit decoding. The original commit didn't test this for lack of
working hardware; this commit isn't tested either.
Fixes: a1c7d61393
2 years ago, ANGLE removed the old NV12-specific extension, and added
a new one that supports a number of formats, including P010. Actually
they just renamed it and removed their initial annoying and obvious
design error (bravo, Google).
Since it broke 2 years ago, nobody should give a shit about this code,
and it should just be removed. But for some reason I still dived into
the shit-tank (Windows development).
I guess Intel code monkeys can't write drivers (or maybe the issue is
that we're doing zero-copy, which possibly is not actually allowed by
D3D11 due to array textures, see --d3d11va-zero-copy), so the P010
path is completely untested. If it doesn't work, I'll delete all this
ANGLE hwdec code.
Fixes: #7054
The old way of using wayland in mpv relied on an external renderloop for
semi-accurate timings. This had multiple issues though. Display sync
would break whenever the window was hidden (since the frame callback
stopped being executed), which was really annoying. Also, the entire
external renderloop logic was kind of fragile and didn't play well with
mpv's internal structure (i.e. using presentation time in that old
paradigm breaks stats.lua).
Basically the problem is that swap buffers blocks on wayland, which is
crap whenever you hide the mpv window, since it locks up the entire
player. So you have to make swap buffers not block, but this has a
different problem: timings will be terrible if you use the unblocked
swap buffers call.
Based on some discussion in #wayland, the trick here is relatively
simple and works well enough for our purposes. Instead we basically
build a way to block with a timeout in the wayland buffer swap
functions.
A bool is set in the frame callback function that indicates whether or
not mpv is waiting for a frame to be displayed. In the actual buffer
swap function, we enter a while loop that waits on this flag. At the
same time, the wl_display is polled, so the thread blocks but wakes up
if it receives any events from the compositor. The loop only breaks
once enough time has passed or the frame callback has fired.
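A rough sketch of that loop; the state struct, frame_wait flag and
now_ms() helper are illustrative, not mpv's actual code:

    #include <poll.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <wayland-client.h>

    struct state {
        struct wl_display *display;
        bool frame_wait;        /* true while a frame is pending */
    };

    int64_t now_ms(void);       /* monotonic clock, defined elsewhere */

    /* Registered via wl_callback_add_listener() on the callback from
     * wl_surface_frame() each time a frame is committed. */
    static void frame_done(void *data, struct wl_callback *cb, uint32_t t)
    {
        struct state *st = data;
        wl_callback_destroy(cb);
        st->frame_wait = false; /* compositor displayed the frame */
    }

    static void swap_wait(struct state *st, int timeout_ms)
    {
        struct pollfd fd = { .fd = wl_display_get_fd(st->display),
                             .events = POLLIN };
        int64_t deadline = now_ms() + timeout_ms;
        while (st->frame_wait) {
            int remaining = (int)(deadline - now_ms());
            if (remaining <= 0)
                break;          /* timed out; don't stall the player */
            wl_display_flush(st->display);
            if (poll(&fd, 1, remaining) > 0)
                wl_display_dispatch(st->display); /* may run frame_done */
        }
    }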
In the near future, it would be better to set whether or not a frame
has been displayed in the presentation feedback. However, as a first
pass, doing it in the frame callback is more than good enough.
The "downside" is that we render frames that aren't actually shown on
screen when the player is hidden (it seems like wayland people don't
like that). But who cares. Accurate timings are way more important. It's
probably not too hard to add that behavior back in the player though.
This is the proper fix to the memory leak @wm4 pointed out. It turns out
that when you autoprobe opengl and vo_wayland_init returns false,
vo_wayland_uninit is never actually executed. So you have a leftover
pointer. The vulkan context does this correctly which was why my old,
dumb "fix" broke it.
This allows using drm hwaccels that require a hwdevice.
Tested with v4l2request hwaccel and cedrus driver on an allwinner device
running mpv with --vo=gpu --gpu-context=drm --hwdec=drm.
Check if eglGetPlatformDisplayEXT is available and try to use it to
obtain the display connection. Fall back to eglGetDisplay if
eglGetPlatformDisplayEXT is not available or fails.
From PR #5992
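Roughly the probing order in question (a sketch, e.g. for X11;
x11_display is whatever native display handle the context has):

    /* Prefer eglGetPlatformDisplayEXT, fall back to eglGetDisplay. */
    EGLDisplay display = EGL_NO_DISPLAY;
    const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
    if (exts && strstr(exts, "EGL_EXT_platform_base")) {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
            (void *)eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (get_platform_display)
            display = get_platform_display(EGL_PLATFORM_X11_EXT,
                                           x11_display, NULL);
    }
    if (display == EGL_NO_DISPLAY)  /* missing extension, or it failed */
        display = eglGetDisplay((EGLNativeDisplayType)x11_display);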
Useless garbage.
This was once added to test whether vdpau presentation feedback could be
used. Results were always unsatisfactory, and now vdpau is dead.
Extending the client-allocated mpv_opengl_drm_params struct
constituted an ABI break that could cause UB.
Create a clean break by deprecating "drm_params" and related structs
and enum values, and replacing them with "drm_params_v2".
Also fix some comments and code that wrongly assumed that open could
return negative numbers other than -1 for failure.
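For reference, the convention the fixed code now follows (a trivial
sketch):

    /* open(2) signals failure only as -1; any value >= 0, including 0,
     * is a valid fd, so compare against -1 exactly. */
    int fd = open(path, O_RDWR | O_CLOEXEC);
    if (fd == -1)
        return -errno;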
This commit updates the libmpv version to 1.104
Originally, vo_gpu/vo_opengl considered the case of Nvidia proprietary
drivers, which required vdpau/GLX, and Intel open source drivers, which
required vaapi/EGL. Since window creation and GPU context creation are
inseparable in mpv's internal API, it had to pick the correct API very
early, or hardware decoding wouldn't work. "x11probe" was introduced for
this reason. It created a GLX context (without showing the window yet),
and checked whether vdpau was available. If yes, it used GLX; if not, it
continued probing x11/EGL. (Obviously the plain GLX backend couldn't
just always fail when vdpau was missing, which is why a separate "probe"
backend was needed.)
Years passed, and now the situation is different. Vdpau is dead. Nvidia
drivers and libavcodec now provide CUDA interop, which requires EGL, and
fixes some of the vdpau problems. AMD drivers now provide vaapi, which
generally works better than vdpau. Intel didn't change.
In particular, vaapi provides working HEVC Main10 support. In theory, it
should work on vdpau too, with quality reduction (no 10 bit surfaces),
but I couldn't get it to work.
So always prefer EGL. And suddenly hardware decoding works. This is
actually rather important, because HEVC is unfortunately on the rise,
despite shitty encoders and unoptimized decoders. The latter may mean
that hardware decoding works better than libavcodec.
This should have been done a long, long time ago.
Mesa supports the EGL_CHROMIUM_sync_control extension, and it's
available out of the box with AMD drivers. In practice, this is exactly
the same as GLX_OML_sync_control, but for EGL. The extension
specification is separate from the GLX one though, and buried somewhere
in the Chromium code.
This appears to work, although I don't know if it really works.
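The extension adds a single entry point mirroring glXGetSyncValuesOML;
a sketch of querying it (the function pointer type is spelled out here
since eglext.h may not provide it):

    /* Query the (ust, msc, sbc) triple, as in GLX_OML_sync_control. */
    typedef EGLBoolean (*GetSyncValuesFn)(EGLDisplay, EGLSurface,
                                          int64_t *ust, int64_t *msc,
                                          int64_t *sbc);
    GetSyncValuesFn get_sync_values = (GetSyncValuesFn)
        eglGetProcAddress("eglGetSyncValuesCHROMIUM");
    int64_t ust, msc, sbc;
    if (get_sync_values &&
        get_sync_values(display, surface, &ust, &msc, &sbc)) {
        /* ust: time of the last vsync in microseconds, msc: vsync
         * counter, sbc: swap count for this surface */
    }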
In theory, this could be useful for other EGL targets. Support code for
it could have been added to egl_helpers.c to avoid some minor duplicated
glue code if another EGL target were to provide this extension. I didn't
bother with that. ANGLE on Windows can't support it, because the
extension spec. explicitly requires POSIX timers. ANGLE on Linux/OSX is
actively harmful for mpv and hopefully won't ever use it. Wayland uses
EGL, but has its own fancy presentation feedback stuff (and besides, I
don't think basic video player functionality works on Wayland at all).
context_drm_egl maybe? But I think DRM has its own stuff.
So the next commit can make EGL use it. EGL has a quite similar
function that works practically the same. Although it's relatively
trivial, it's still tricky, and probably shouldn't end up as duplicated
code.
There are no functional changes, except initialization, and how failure
of the glXGetSyncValues call is handled. Also, some comments mention the
EGL extension.
Note that there's no intention for this code to handle anything else
than the very specific OML sync extension (and its EGL equivalent). This
is just too weirdly specific to the weird idiosyncrasies of the
extension, and it makes no sense to extend it to handle anything else.
(Such as Wayland or DXGI presentation feedback.)
New releases of VDPAU support decoding 4:4:4 content, and that comes
back as NV24 when using 'direct mode' in OpenGL Interop. That means we
need to be a little bit smarter about how we set up the OpenGL
textures.
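Illustrative only (not the actual interop code): for NV24 the chroma
texture is allocated at full resolution, unlike NV12:

    /* NV24 = 1-component luma plane + full-resolution 2-component
     * chroma plane; no /2 on the chroma dimensions. */
    glBindTexture(GL_TEXTURE_2D, luma_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                 GL_RED, GL_UNSIGNED_BYTE, NULL);
    glBindTexture(GL_TEXTURE_2D, chroma_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, w, h, 0,  /* NV12: w/2, h/2 */
                 GL_RG, GL_UNSIGNED_BYTE, NULL);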
If the compositor sends a configure event before the surface is initially
mapped, resize gets called before the egl_window gets created, resulting
in a crash in wl_egl_window_resize.
This was fixed back in 618361c697, but was reintroduced when the wayland
code was rewritten in 68f9ee7e0b.
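The fix amounts to a guard of this shape (a sketch; names are
illustrative):

    /* Ignore configure-driven resizes until the surface is mapped and
     * the egl_window exists; wl_egl_window_resize(NULL, ...) crashes. */
    static void resize(struct vo_wayland_state *wl, int width, int height)
    {
        if (!wl->egl_window)
            return;
        wl_egl_window_resize(wl->egl_window, width, height, 0, 0);
    }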
While `ra` supports the concept of a texture as a storage
destination, it does not support the concept of a texture format
being usable for a storage texture. This can lead to us attempting
to create a texture from an incompatible format, with undefined
results.
So, let's introduce an explicit format flag for storage and use
it. In `ra_pl` we can simply reflect the `storable` flag. For
GL and D3D, we'll need to write some new code to do the compatibility
checks. I'm not going to do it here because it's not a regression;
we were already implicitly assuming all formats were storable.
Fixes #6657
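A sketch of the flag and the ra_pl mapping; PL_FMT_CAP_STORABLE is
libplacebo's real capability bit, the surrounding code is illustrative:

    /* ra.h: formats must now advertise storage-image support. */
    struct ra_format {
        /* ...existing fields... */
        bool storable;  /* usable as a storage texture (image2D) */
    };

    /* ra_pl: simply reflect libplacebo's capability bit. */
    fmt->storable = !!(plfmt->caps & PL_FMT_CAP_STORABLE);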
This was implemented by using OPT_STRING_VALIDATE for drm-mode,
instead of OPT_INT. Using a string here also prepares for future
additions to drm-mode that aim to allow specifying a mode by its
resolution.
It is useful when debugging to be able to force atomic off, or as a
workaround if atomic breaks for some user. Legacy modesetting is less
likely to break by virtue of being a less complex API.
The amount of code now present that's specific to Vulkan or OpenGL
has reached the point where we really want to split it out to
avoid a mess of #ifdefs.
At the same time, I'm moving the code to an API-neutral location.
This change updates the vulkan interop code to work with the
libplacebo based ra_vk, but also introduces direct VkImage
sharing to avoid the use of the intermediate buffer.
It is also necessary and desirable to introduce explicit
semaphore-based synchronisation for operations on the shared
images.
Synchronisation means we can safely reuse the same VkImage for every
mapped frame, by ensuring the frame is copied to the VkImage before
mapping the next frame.
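A sketch of the submit-side synchronisation described above (semaphore
and command buffer setup invented; layout transitions omitted):

    /* Each copy into the shared VkImage waits until the renderer has
     * consumed the previous frame, and signals when the new frame is
     * ready, so the same image can be reused every mapped frame. */
    VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_TRANSFER_BIT;
    VkSubmitInfo submit = {
        .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
        .waitSemaphoreCount = 1,
        .pWaitSemaphores = &frame_consumed, /* signaled by the renderer */
        .pWaitDstStageMask = &wait_stage,
        .commandBufferCount = 1,
        .pCommandBuffers = &copy_cmd,       /* frame -> shared VkImage */
        .signalSemaphoreCount = 1,
        .pSignalSemaphores = &frame_ready,  /* renderer waits on this */
    };
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);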
This functionality requires a 417.xx or newer nvidia driver, due to
bugs in the VkImage interop in the earlier 411 and 415 drivers.
It's definitely worth the effort, as the raw throughput is about
twice that of the implementation using an intermediate buffer.
This commit rips out the entire mpv vulkan implementation in favor of
exposing lightweight wrappers on top of libplacebo instead, which
provides much of the same except in a more up-to-date and polished form.
This (finally) unifies the code base between mpv and libplacebo, which
is something I've been hoping to do for a long time.
Note: The ra_pl wrappers are abstract enough from the actual libplacebo
device type that we can in theory re-use them for other devices like
d3d11 or even opengl in the future, so I moved them to a separate
directory for the time being. However, the rest of the code is still
vulkan-specific, so I've kept the "vulkan" naming and file paths, rather
than introducing a new `--gpu-api` type. (Which would have ended up
with significantly more code duplication.)
Plus, the code and functionality is similar enough that for most users
this should just be a straight-up drop-in replacement.
Note: This commit excludes some changes; specifically, the updates to
context_win and hwdec_cuda are deferred to separate commits for
authorship reasons.
Manual changes done:
* Merged the interface-changes under the already master'd changes.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
This allows context_drm_egl to use as many buffers as libgbm or the
swapchain_depth setting allows (whichever is smaller).
On pause and on still images (cover art etc.), the swapchain is drained
and reverts to working in a dual-buffered manner (equivalent to
swapchain-depth=1), to make sure that output does not lag behind user
input.
When possible (swapchain-depth>=2), the wait on the page flip event is
no longer done immediately after queueing, but is deferred to the next
invocation of swap_buffers, which should give us more CPU time between
invocations.
Although, since gbm_surface_has_free_buffers() can only tell us a
boolean value and not how many buffers we have left, we are forced to do
this contortionist dance where we first overshoot until
gbm_surface_has_free_buffers() reports 0, then immediately wait so we
can free a buffer, to get the deferred wait on page flips rolling.
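A sketch of that dance inside swap_buffers (queue helpers and event
handling invented; not the actual context_drm_egl code):

    /* Overshoot until GBM runs out of buffers, then block on one page
     * flip so a buffer can be released and the cycle keeps rolling. */
    eglSwapBuffers(egl_display, egl_surface);
    struct gbm_bo *bo = gbm_surface_lock_front_buffer(gbm_surface);
    enqueue_bo(bo);                  /* our swapchain bookkeeping */
    if (!gbm_surface_has_free_buffers(gbm_surface)) {
        wait_for_page_flip();        /* drmHandleEvent() until it lands */
        gbm_surface_release_buffer(gbm_surface, dequeue_bo());
    }
    queue_page_flip(front_bo());     /* drmModePageFlip(); wait deferred */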
With this commit we do not rely on the default vsync fences/latency
emulation of video/out/opengl/context.c, but supply our own, since the
places where we create and wait for the fences need to be somewhat
different for best performance.
Minor fixes:
* According to GBM documentation, all BOs obtained with
gbm_surface_lock_front_buffer must be released before
gbm_surface_destroy is called on the surface.
* We let the page flip handler function handle the waiting_for_flip flag.
I misunderstood how this extension works. If I understand it correctly
now, it's worse than I thought. The key thing is that the (ust, msc,
sbc) triple is not for a single swap event. Instead, (ust, msc) run
independently from sbc. Assuming a CFR display/compositor, this means
you can at best know the vsync phase and frequency, but not the exact
time a sbc changed value.
There is GLX_INTEL_swap_event, which might work as expected, but it has
no EGL equivalent (while GLX_OML_sync_control does, in theory).
Redo the context_glx sync code. Now it's either more correct or less
correct. I wanted to add proper skip detection (if a vsync gets skipped
due to rendering taking too long and other problems), but it turned out
to be too complex, so only some unused fields in vo.h are left of it.
The "generic" skip detection has to do.
The vsync_duration field is also unused by vo.c.
Actually this seems to be an improvement. In cases where the flip call
timing is off, but the real driver-level timing apparently still works,
this will not report vsync skips or higher vsync jitter anymore. I could
observe this with screenshots and fullscreen switching. On the other
hand, maybe it just introduces an A/V offset or so.
Why the fuck can't there be a proper API for retrieving these
statistics? I'm not even asking for much.
Use the extension to compute the (hopefully correct) video delay and
vsync phase.
This is very fuzzy, because the latency will suddenly be applied after
some frames have already been shown. This means there _will_ be "jumps"
in the time accounting, which can lead to strange effects at start of
playback (such as making initial "dropped" etc. frames worse). The only
reasonable way to fix this would be running a few dummy frame swaps at
start of playback until the latency is known. The same happens when
unpausing.
This only affects display-sync mode.
Correct function was not confirmed. It only "looks right". I don't have
the equipment to make scientifically correct measurements.
A potentially bad thing is that we trust the timestamps we're receiving.
Out of bounds timestamps could wreak havoc. On the other hand, this will
probably cause the higher level code to panic and just disable DS.
As a further caveat, this makes a bunch of assumptions about UST
timestamps. If there are delayed frames (i.e. we skipped one or more
vsyncs), the latency logic is mostly reset. There is no attempt to make
the vo.c skipped-vsync logic use this. Also, the latency computation
determines a vsync duration, and there's no effort to reconcile or share
the vo.c logic for determining vsync duration.
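The core arithmetic the extension enables looks like this (a sketch;
the real logic spread across vo.c and the context is more involved):

    /* Two (ust, msc) samples give the vsync duration; the remainder
     * locates an arbitrary timestamp t within the vsync cycle. */
    int64_t vsync_duration = (ust1 - ust0) / (msc1 - msc0);  /* us */
    int64_t phase = (t - ust1) % vsync_duration;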
This commit bumps the libmpv version to 1.102
drm-osd-plane -> drm-draw-plane
drm-video-plane -> drm-drmprime-video-plane
drm-osd-size -> drm-draw-surface-size
"draw plane", as in the plane that OpenGL draws to, whether it be
video + OSD or just OSD.
"drmprime video plane", as in the plane used for hwdec video imported
via drmprime.
"draw surface size", as in the size of the surface used for the draw plane
The new names are invariant whether or not hwdec_drmprime_drm is being
used or not. The original naming was very confusing, as when doing
regular rendering (swdec or vaapi) the video would be displayed on the
"OSD plane", and the "Video plane" would remain unused.
We are currently unnecessarily including vulkan headers even when
not building with vulkan support. I also guarded the GL header
inclusion even though this doesn't appear to break anything today.
Fixes #6330.
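The guard pattern in question (a sketch; HAVE_GL/HAVE_VULKAN are
mpv-style config defines):

    /* Only pull in the API headers the build actually enables. */
    #if HAVE_GL
    #include <GL/gl.h>
    #endif
    #if HAVE_VULKAN
    #include <vulkan/vulkan.h>
    #endif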