This was already returning true/false but the type was int. Also
simplify a few places in the wayland contexts where we can just return
the value of this function instead of doing redundant checks.
Instead of just returning true/false, it's better to have this function
clean up after itself. We can then eliminate some redundant uninit calls
elsewhere in the code.
This change enables mpv to work in the WSL2 (WSLg) environment where
OpenGL is implemented on top of D3D12.
This reverts commit 149d98d244.
The OpenGL implementation mentioned in the original change (GDI Generic)
is rejected by the version check anyway, so there is no need to handle
it manually.
Signed-off-by: Kacper Michajłow <kasper93@gmail.com>
The legacy DRM API adds some complexity to the DRM code. There
are only 4 drivers that do not support the DRM Atomic API:
1. radeon (early GCN amd cards)
2. gma500 (ancient intel GPUs)
3. ast (ASPEED SoCs)
4. nouveau
Going forward, new DRM drivers will be guaranteed to support the atomic
API so this is a safe removal.
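With the legacy path gone, atomic support becomes a hard requirement. A
minimal sketch of the gate, using the standard libdrm client cap (the
error handling here is illustrative):

    // Fails on the four legacy-only drivers above; everything else
    // advertises atomic support.
    if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) != 0) {
        fprintf(stderr, "drm: driver does not support the atomic API\n");
        return -1;
    }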
We've had some annoying names for interops, which we can't simply
rename because that would break config files and command lines. So we
need to put a little more effort in and add a concept of legacy names
that allow us to continue loading them, but with a warning (a sketch of
the mechanism follows the list below).
The ones I'm renaming here are:
* vaapi-egl -> vaapi (vaapi works with Vulkan too)
* drmprime-drm -> drmprime-overlay (actually describes what it does)
* cuda-nvdec -> cuda (cuda interop is not nvdec specific)
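A sketch of the legacy-name mechanism, with hypothetical struct and
function names (mpv's real tables and logging differ):

    #include <stdio.h>
    #include <string.h>

    struct interop_entry {
        const char *name;        // current, preferred name
        const char *legacy_name; // old name, accepted with a warning
    };

    static const struct interop_entry entries[] = {
        {"vaapi",            "vaapi-egl"},
        {"drmprime-overlay", "drmprime-drm"},
        {"cuda",             "cuda-nvdec"},
    };

    static const struct interop_entry *find_interop(const char *request)
    {
        for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
            if (!strcmp(request, entries[i].name))
                return &entries[i];
            if (entries[i].legacy_name &&
                !strcmp(request, entries[i].legacy_name)) {
                fprintf(stderr, "warning: '%s' is a legacy name, use '%s'\n",
                        request, entries[i].name);
                return &entries[i];
            }
        }
        return NULL;
    }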
The wayland presentation time code currently always assumes that only
CLOCK_MONOTONIC can be used. There is a naive attempt to ignore clocks
other than CLOCK_MONOTONIC, but the logic is actually totally wrong and
the timestamps would be used anyway. Fix this by checking a use_present
bool (similar to use_present in xorg) which is set to true if we receive
a valid clock in the clockid event. Additionally, allow
CLOCK_MONOTONIC_RAW as a valid clockid. In practice, it should be the
same as CLOCK_MONOTONIC for us (the ntp/adjtime difference wouldn't
matter). Since this is a linux-specific clock, add a define for it if it
is not found.
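A minimal sketch of the clockid handling; the use_present field name is
illustrative, the fallback define matches the clock's value on linux,
and the wayland types come from the generated protocol headers:

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #ifndef CLOCK_MONOTONIC_RAW
    #define CLOCK_MONOTONIC_RAW 4 // linux-specific; define if missing
    #endif

    static void presentation_handle_clock_id(void *data,
                                             struct wp_presentation *p,
                                             uint32_t clockid)
    {
        struct vo_wayland_state *wl = data;
        // Only trust presentation feedback if the compositor's clock
        // is one we can compare against.
        if (clockid == CLOCK_MONOTONIC || clockid == CLOCK_MONOTONIC_RAW)
            wl->use_present = true;
    }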
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
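Conceptually, the lookup becomes something like this (purely
illustrative names, not mpv's actual API):

    #include <stddef.h>

    struct hwdec {
        const char *name;
        int hw_imgfmt; // the metadata field this change fills in
    };

    static const struct hwdec *find_hwdec_by_imgfmt(
        const struct hwdec *const *list, int num, int imgfmt)
    {
        for (int i = 0; i < num; i++) {
            if (list[i]->hw_imgfmt == imgfmt)
                return list[i];
        }
        return NULL;
    }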
Facing down the multitude of ways to somehow wrangle the get_fn pointer
out of the GL environment and into libplacebo, I decided the easiest
approach is to just store it inside the GL struct itself.
The lifetime of this get_fn function is a bit murky, since it's not
clear on whether or not it survives the VO init call (especially in the
case of the GL render API, which explicitly states that parameters do
not need to survive the call they're passed to), so just add a
disclaimer.
(It's fine for us to use like this because `gpu_ctx_create` is still
part of VO init)
Closes https://code.videolan.org/videolan/libplacebo/-/issues/216
On S905X (meson) boards drmModeAtomicCommit called from
disable_video_plane in hwdec_drmprime_drm.c might still be running when
another call is made from queue_flip in context_drm_egl.c.
This causes an EBUSY error in queue_flip, which makes mpv hang.
This was introduced in 7fb972fd39 and
later revised in f5a094db04. Transparency
in EGL/X11 has been broken upstream for years in Mesa unfortunately.
The first commit claimed to have found a way to preserve transparency
by doing a trick with picking EGLConfigs (the second commit revises
this but keeps the logic in place). However, it doesn't appear that the
first commit actually fixes anything (transparency doesn't work on my
machine), and no one else seems to have reported it working. On the
other hand, if Mesa does ever actually fix this, transparency would
immediately be broken, since mpv would always set EGL_ALPHA_SIZE to 0.
Go ahead and remove this, since it doesn't seem to have any actual
utility and is technically a bit of a timebomb if upstream ever does
fix the bug (not that deleting two lines is a lot of work, but still).
This commit kind of mixes several related things together. The main
thing is to avoid calling any XPresent functions or internal functions
related to presentation when the feature is not auto-whitelisted or
enabled by the user. Internally, rework this so it all works off of a
use_present bool (have_present is eliminated, because having a non-zero
present_code covers exactly the same thing) and make sure it updates at
runtime. Finally, put some actual logging in here for whenever XPresent
is enabled/disabled. Fixes #10326.
This builds off of present_sync which was introduced in a previous
commit to support xorg's present extension in all of the X11 backends
(sans vdpau) in mpv. It turns out there is an Xpresent library that
integrates the xorg present extension with Xlib (which barely anyone
seems to use), so this can be added without too much trouble. The
workflow is to first set up the event by telling Xorg we would like to
receive PresentCompleteNotify (there are others in the extension but
this is the only one we really care about). After that, just call
XPresentNotifyMSC after every buffer swap with a target_msc of 0. Xorg
then returns the last presentation through its usual event loop and we
go ahead and use that information to update mpv's values for vsync
timing purposes. One theoretical weakness of this approach is that the
present event is put on the same queue as the rest of the XEvents. It
would be nicer for it to be placed somewhere else, so we could just wait
on that queue without having to deal with other possible events in
there. In theory, xcb could do that with special events, but it doesn't
really matter in practice.
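The two libXpresent calls involved, roughly (event-loop integration and
error checking omitted):

    #include <X11/extensions/Xpresent.h>

    // At init: ask for PresentCompleteNotify events on our window.
    XPresentSelectInput(display, window, PresentCompleteNotifyMask);

    // After every buffer swap; per the above, a target_msc of 0 makes
    // Xorg report the presentation through its normal event loop.
    XPresentNotifyMSC(display, window, 0 /* serial */,
                      0 /* target_msc */, 0 /* divisor */,
                      0 /* remainder */);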
Unsurprisingly, this doesn't work on NVIDIA. Well, NVIDIA does actually
receive presentation events, but for whatever reason the resulting
calculations make timings worse, which defeats the purpose. This works
perfectly fine on Mesa, however. Utilizing the previous commit that
detects Xrandr providers, we can enable this mechanism for users that
have Mesa and not NVIDIA (to avoid messing up anyone that has a
switchable graphics system or such). Patches welcome if anyone figures
out how to fix this on NVIDIA.
Unlike the EGL/GLX sync extensions, the present extension works with any
graphics API (good for vulkan since its timing extension has been in
development hell). NVIDIA also happens to have zero support for the
EGL/GLX sync extensions, so we can just remove it with no loss. Only
Xorg ever used it and other backends already have their own present
methods. vo_vdpau is a special case that has its own fancy timing code
in its flip_page. That presumably works well, and I have no way of
testing it, so just leave it as it is.
Wayland had some specific code that it used for implementing the
presentation time protocol. It turns out that xorg's present extension
is extremely similar, so it would be silly to duplicate this whole mess
again. Factor this out to separate, independent code and introduce the
mp_present struct which is used for handling the ust/msc values and some
other associated values. Also, add in some helper functions so all the
dirty details live specifically in present_sync. The only
wayland-specific part is actually obtaining the ust/msc values. Since
only wayland and xorg are expected to use this, add a conditional to
the build that only adds this file when either one of those is present.
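A sketch of the shared state; the field and helper names here are
illustrative of the idea, not necessarily mp_present's exact layout:

    #include <stdint.h>

    struct mp_present {
        int64_t current_ust; // time of the last presentation event
        int64_t current_msc; // frame counter at that event
    };

    // Backends only feed in ust/msc pairs...
    void present_update_sync_values(struct mp_present *present,
                                    uint64_t ust, uint64_t msc);

    // ...and the VO queries the derived vsync timing from here.
    void present_sync_get_info(struct mp_present *present,
                               struct vo_vsync_info *info);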
You may observe that sbc is completely omitted. This field existed in
wayland, but was completely unused (presentation time doesn't return
it). Xorg's present extension doesn't use it either, so just get rid of
it altogether. The actual calculation is slightly altered so it is
correct for our purposes. We want the presentation event of the last
frame that just occurred (this function executes right after the buffer
swap), so the adjustment is to just remove the vsync_duration
subtraction. Also, the overly complicated queue approach is removed. It
had no actual use in practice (on wayland or xorg): presentation
statistics are only ever used right after the immediately preceding
swap to update vsync timings, or thrown away.
Some wayland compositors (i.e. weston) get extremely picky about
committed buffer sizes not matching the configured state. In
particular, weston throws an error if you attempt to launch with
--window-maximized and use opengl (vo_vaapi_wayland actually errors as
well in this case, but that's a different issue). The culprit here is
actually wl_egl_window_create. This creates an initial buffer at the
size passed in its arguments, which is what weston doesn't like.
Instead, move the egl_window creation call to the resize function. This
ensures that mpv uses the size obtained via the toplevel event, which
should always be the buffer size we want.
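A sketch of the reordering; wl_egl_window_create/_resize are the real
libwayland-egl calls, while the struct and field names are illustrative:

    #include <wayland-egl.h>

    static void resize(struct vo_wayland_state *wl, int width, int height)
    {
        // width/height come from the xdg_toplevel configure event, so
        // the first buffer already matches the configured state.
        if (!wl->egl_window)
            wl->egl_window = wl_egl_window_create(wl->surface,
                                                  width, height);
        else
            wl_egl_window_resize(wl->egl_window, width, height, 0, 0);
    }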
This was actually always bugged, but we just got lucky that compositors
ignored it. The egl window was created only using wl->geometry's
coordinates but those do not include the scale factor. So technically,
the initial window creation always had the wrong size (off by whatever
the scaling factor is). The resize call later fixes it because that
correctly uses wl->scaling so in practice nothing bad was seen.
wlroots's master branch has started sending an error in this case,
however, and that is what exposed the bug. Fix it correctly by using
the scale factor. This is what cd3b4edea0 tried to fix (but was
incorrect).
These values and options were simply never looked at in the drm egl
context. This is pretty much just a copy and paste of what is in
vo_drm. Fixes #10157.
Because wayland is a special snowflake, mpv wound up incorporating a lot
of logic into its render loop where visibility checks are performed
before rendering anything (in the name of efficiency, of course). Only
wayland actually uses this, but there's no reason why other backends
(x11 in this commit) can't be smarter. It's far easier on xorg since we
can just query _NET_WM_STATE_HIDDEN directly and not have to do silly
callback dances.
The function vo_x11_check_net_wm_state_change already tracks net wm
changes, including _NET_WM_STATE_HIDDEN. There is an already existing
window_hidden variable, but that is actually just for checking if the
window was mapped and has nothing to do with this particular atom. mpv
also currently assumes that _NET_WM_STATE_HIDDEN is exactly the same as
being minimized, but according to the spec that's not necessarily true
(in practice, it's likely that these are the same though). Anyways,
just keep track of this state in a new variable (hidden) and use that
for determining if mpv should render or not.
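A condensed sketch of the atom check (standard Xlib; mpv's real handler
walks the _NET_WM_STATE property from its PropertyNotify handling):

    #include <stdbool.h>
    #include <X11/Xlib.h>
    #include <X11/Xatom.h>

    static bool window_is_hidden(Display *dpy, Window w)
    {
        Atom state = XInternAtom(dpy, "_NET_WM_STATE", False);
        Atom hidden = XInternAtom(dpy, "_NET_WM_STATE_HIDDEN", False);
        Atom type;
        int format;
        unsigned long nitems, bytes_left;
        unsigned char *data = NULL;
        bool found = false;
        if (XGetWindowProperty(dpy, w, state, 0, 1024, False, XA_ATOM,
                               &type, &format, &nitems, &bytes_left,
                               &data) == Success && data) {
            Atom *atoms = (Atom *)data;
            for (unsigned long i = 0; i < nitems; i++)
                found |= atoms[i] == hidden;
            XFree(data);
        }
        return found;
    }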
There is one catch though: this cannot work if a display sync mode is
used. This is why the previous commit is needed. The display sync modes
in mpv require a blocking vsync implementation since its render loop is
directly driven by vsync. In xorg, if nothing is actually rendered, then
there's nothing for eglSwapBuffers (or FIFO for vulkan) to block on so
it returns immediately. This, of course, results in completely broken
video. We just need to make sure that we aren't in a display sync mode
before trying to be smart about rendering. Display sync is
power inefficient anyways, so no one is really being hurt here. As an
aside, this happens to work in wayland because there's basically a
custom (and ugly) vsync blocking function + timeout but that's off
topic.
A bit of a personal pet peeve. vulkan, opengl, and wlshm all had
different methods for doing wayland's "check for visibility before
drawing" thing. The specific backend doesn't matter in this case and the
logic should all be shared. Additionally, the external swapchain that
the opengl code on wayland uses is done away with and it instead copies
vulkan by using a param. This keeps things looking more uniform across
backends and also makes it easier to extend to other platforms (see the
next couple of commits).
Previously on wayland, it would result in an egl config with only 2 alpha
bits, which technically matches what was requested, but is not very useful.
Fixes #9862
Variable Refresh Rate (VRR), aka FreeSync or Adaptive Sync, can be used
with DRM by setting the VRR_ENABLED property on a crtc if the
connector reports that it is VRR_CAPABLE. This is a useful feature
for us, as it is common to play 24/25/50 fps content on displays that
are nominally locked to 60Hz. VRR can allow this content to play at
its native framerate.
This is a simple change as we just need to check the capability
and set the enabled property if requested by the user. I've defaulted
it to disabled for now, but it might make sense to default to auto
in the long term.
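The atomic side is a single property write once the capability is
known. drmModeAtomicAddProperty is the real libdrm call; the capability
helper and option/ID names here are hypothetical:

    // connector_vrr_capable(): hypothetical helper that reads the
    // connector's VRR_CAPABLE property via drmModeObjectGetProperties().
    if (connector_vrr_capable(fd, connector_id) && opts->vrr_enabled)
        drmModeAtomicAddProperty(request, crtc_id,
                                 vrr_enabled_prop_id, 1);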
We've been assuming that the maximum number of compute group threads is
never less than the 1024 defined by the desktop GL spec. Given that we
haven't had working compute shaders for GLES, and I guess the Vulkan
spec defines at least as high a value, we've gotten away with it so
far.
But we should really look the value up and respect it.
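The lookup itself is a single query (GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS
is core in desktop GL 4.3 and GLES 3.1):

    GLint max_threads = 0;
    glGetIntegerv(GL_MAX_COMPUTE_WORK_GROUP_INVOCATIONS, &max_threads);
    // Clamp our compute work group sizes to max_threads instead of
    // assuming the desktop minimum of 1024.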
Some of the extension declarations did not include the ES version where
they became core functionality, and in some of these cases, there was
never actually an ES extension - it first appeared in core. We also had
a number of buggy version checks where ES versions were compared
against required desktop GL versions.
nvidia follows the OpenGL spec very strictly, with two particular
consequences:
* They will give you the exact context version that you ask for,
rather than the highest possible version that meets your request.
* They will hide extensions that the specs say require a higher
version than you request, even if it's technically possible to
provide the extension at lower versions.
In our case, we really want a variety of extensions, particularly
compute shaders that are only available in 4.2 or higher. That means
that we must explicitly include a high enough version in our list of
versions to check for us to be able to get a 'good' enough context.
As for which version? We restore the 4.4 version that we had in the
old version selection logic. This is the highest version we ever asked
for, and we have separate logic that clamps the GLSL version to 4.4,
so anything newer wouldn't make a difference.
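Roughly, the selection loop looks like this; the version entries are
illustrative of the idea (mpv's real list and creation flags differ),
and glXCreateContextAttribsARB is assumed to be resolved already:

    static const struct { int major, minor; } versions[] = {
        {4, 4}, {4, 0}, {3, 3}, {3, 0}, {2, 1},
    };

    GLXContext ctx = NULL;
    for (size_t i = 0;
         !ctx && i < sizeof(versions) / sizeof(versions[0]); i++)
    {
        int attrs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, versions[i].major,
            GLX_CONTEXT_MINOR_VERSION_ARB, versions[i].minor,
            None,
        };
        // nvidia returns exactly this version; Mesa may return higher.
        ctx = glXCreateContextAttribsARB(display, fbconfig, NULL,
                                         True, attrs);
    }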
This is a workaround for nvidia proprietary drivers. The authors of
those drivers interpret the spec such that eglMakeCurrent will not
reconfigure the read and draw buffers, so windows won't display
anything drawn by opengl. The nvidia authors refer to
https://www.khronos.org/registry/EGL/extensions/KHR/EGL_KHR_no_config_context.txt
specifically Issues 2/3, which reference eglMakeCurrent.
On mesa this is a non-issue, and the read/draw targets are assigned
with eglMakeCurrent.
The context must be made current in order to query OpenGL strings. An
earlier proposal to create the wayland window surface similarly to X11
during init was deemed inappropriate, so instead we manually set the
targets once we have created a window surface.
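The fix boils down to one standard EGL call once the window surface
exists (a no-op on mesa, which assigns the targets here anyway):

    // Bind the context against the real window surface so the driver
    // assigns the read/draw buffers.
    eglMakeCurrent(display, surface, surface, context);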
This case was added in 662c793a55
for use in vo_gpu_next as a visibility test before rendering a frame.
The OpenGL context doesn't have this, so it just returns true.
GLX_CONTEXT_PROFILE_MASK_ARB and related constants are provided by
GLX_ARB_create_context_profile but the check was for _create_context.
The former implies the latter (which we also need) so just replace
the checked extension.
It abstracts EGL 1.5, extension checks and other inconsistencies away.
This can be used in context code as the (preferred) alternative to
eglCreateWindowSurface().
Using a simple substring match for extension checks is considered bad
practice, because it's incorrect when one extension's name is a prefix
of another's. This will almost surely not make a difference in
practice, but do it for correctness anyway.
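A correct check matches whole space-separated tokens rather than raw
substrings; a sketch (mpv's helper may differ in detail):

    #include <stdbool.h>
    #include <string.h>

    static bool check_ext(const char *exts, const char *name)
    {
        size_t len = strlen(name);
        const char *p = exts;
        while ((p = strstr(p, name))) {
            // Accept only if bounded by start/space on the left and
            // space/end on the right, so "EGL_KHR_image" doesn't match
            // inside "EGL_KHR_image_base".
            if ((p == exts || p[-1] == ' ') &&
                (p[len] == ' ' || p[len] == '\0'))
                return true;
            p += len;
        }
        return false;
    }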
Although there are no known problems with this, using the helper should
be more portable. It will also prefer EGL 1.5's eglGetPlatformDisplay
over eglGetPlatformDisplayEXT if available.
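The preference, sketched: eglGetPlatformDisplay is core in EGL 1.5,
while the EXT variant must be fetched through eglGetProcAddress. The
have_egl_1_5 flag and native_display are assumed to come from elsewhere:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLDisplay display = EGL_NO_DISPLAY;
    if (have_egl_1_5) {
        display = eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR,
                                        native_display, NULL);
    } else {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_display_ext =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (get_display_ext)
            display = get_display_ext(EGL_PLATFORM_X11_EXT,
                                      native_display, NULL);
    }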
d2e8bc4499 was the commit that originally introduced the usage of this
bit. As the message states, the purpose was to force creating GLES 3
contexts on drivers that do not return a higher version context than
what was requested. With the recent opengl refactors, mpv's gl
selection has already moved away from such complicated queries. Perhaps
when that commit was added things were different, but nowadays it seems
like Mesa simply returns the highest driver version available that is
compatible with the request (i.e. requesting GLES 2 returns a GLES 3
context on my machine). In that case, let's just drop EGL_OPENGL_ES3_BIT
altogether, as it does break GLES 2 only machines. Fixes #9431.
The new GBM supporting nvidia drivers declare support for 10bit
surfaces using BGR ordering, rather than RGB, so add support for them.
We've also seen examples of hardware supporting BGR8888 but not
RGB8888 so let's support those too.
Of course, the nvidia EGL driver doesn't publish support for any 10bit
formats so you can't actually do 10bit display. Perhaps they'll
eventually fix that.
The GBM supporting nvidia driver doesn't support creating surfaces
without modifiers and using modifiers is more and more recommended as
the right way to do this.
Enumerating modifiers is painfully verbose, but necessary if we are to
allow the driver to pick the best possible one.
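The creation call itself is the real GBM entry point; gathering the
modifier list from the DRM plane's IN_FORMATS property is omitted here:

    // modifiers/num_modifiers come from enumerating the plane's
    // IN_FORMATS blob; the driver picks the best one from the list.
    struct gbm_surface *surface = gbm_surface_create_with_modifiers(
        gbm_device, width, height, GBM_FORMAT_XRGB8888,
        modifiers, num_modifiers);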
Regression from e13fe1299d. Apparently,
Broadcom's EGL does not support the EGL_CONTEXT_FLAGS_KHR attribute nor
EGL_CONTEXT_OPENGL_DEBUG_BIT_KHR. Just define these as EGL_NONE and 0
respectively if we do not have them.
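The fallback is exactly as described:

    #ifndef EGL_CONTEXT_FLAGS_KHR
    #define EGL_CONTEXT_FLAGS_KHR EGL_NONE
    #endif

    #ifndef EGL_CONTEXT_OPENGL_DEBUG_BIT_KHR
    #define EGL_CONTEXT_OPENGL_DEBUG_BIT_KHR 0
    #endif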
With the recent refactor and quick look against the GLX code path, it's
fairly obvious, and trivial, how to add support for debug contexts.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
Drop the gl3 suffix from the function name - it's no longer needed, with
the _old function gone.
Push the mpgl_min_required_gl_versions[] looping into the function,
reducing the identical glXGetProcAddress/glXQueryExtensionsString calls
while making the code neater.
v2:
- tabs -> spaces indentation
- mpgl_preferred_gl_versions -> mpgl_min_required_gl_versions
- 320 -> 300 (in glx code path)
v3:
- legacy code path is gone \o/
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
The old/legacy code-path isn't really needed for mpv use-cases.
In particular:
All Mesa drivers (even GL 2.1 ones like lima/vc4) work fine with the
"non-legacy" path.
As for proprietary/binary drivers: the vendor either does not support
desktop GL, or the drivers/HW are no longer actively supported.
Looking at the Nvidia HW - anything GeForce 7 and older is GL 2.1 and
lacks decent video acceleration. The latest official drivers are from
2017.
All newer Nvidia HW is GL 3.3+, thus it must have GLX_ARB_create_context
as required by the non-legacy path, while also having good acceleration,
albeit via VDPAU in some cases.
With the old path gone, provide a meaningful error message in the very
unlikely case that GLX_ARB_create_context is missing.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
With earlier commit f8e62d3d82 ("egl_helpers: fix create_context
fallback behavior") we added a fallback for creating OpenGL context
while EGL_KHR_create_context is missing.
While it looked correct at first, it is missing the eglMakeCurrent()
call after creating the EGL context, so calling glGetString() fails.
Instead of doing that, we can just remove some code - simply pass
EGL_CONTEXT_CLIENT_VERSION 2 in the attributes, which is honoured by
EGL regardless of the client API. This allows us to remove the special
case and drop some code.
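The resulting attribute list is plain EGL (per the above, EGL honours
this attribute regardless of the bound client API):

    static const EGLint attrs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE,
    };
    EGLContext ctx = eglCreateContext(display, config,
                                      EGL_NO_CONTEXT, attrs);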
v2:
- mpgl_preferred_gl_versions -> mpgl_min_required_gl_versions
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
The ra_gl_ctx_test_version() helper is quite clunky, in that it pushes a
simple check too deep into the call chain. As such it makes it hard to
reason, let alone have the GLX and EGL code paths symmetrical.
Introduce a simple helper, ra_gl_ctx_get_glesmode(), which returns the
current glesmode, so the platforms can clearly reason about what should
and should not be executed.
v2:
- mpgl_preferred_gl_versions -> mpgl_min_required_gl_versions
- 320 -> 300 (in glx code path)
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
As the documentation of the toggle says, the implementation can (and
will, if it follows the GLX/EGL spec) return a context version greater
than the one requested.
This happens with all the Mesa drivers I've tested, as well as the
Nvidia binary drivers.
This toggle seems like a workaround for buggy drivers, yet it's lacking
context about the vendor and version.
Remove it for now - I'll be happy to reinstate it (partially or in full)
as we get concrete details.
This allows us to simplify ra_gl_ctx_test_version() making the whole
context creation business easier to follow by mere mortals.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>