Commit Graph

19 Commits

Author SHA1 Message Date
Dudemanguy 6158bb5be2 x11: avoid wasteful rendering when possible
Because wayland is a special snowflake, mpv wound up incorporating a lot
of logic into its render loop where visibility checks are performed
before rendering anything (in the name of efficiency of course). Only
wayland actually uses this, but there's no reason why other backends
(x11 in this commit) can't be smarter. It's far easier on xorg since we
can just query _NET_WM_STATE_HIDDEN directly and not have to do silly
callback dances.

The function, vo_x11_check_net_wm_state_change, already tracks net wm
changes, including _NET_WM_STATE_HIDDEN. There is an already existing
window_hidden variable but that is actually just for checking if the
window was mapped and has nothing to do with this particular atom. mpv
also currently assumes that _NET_WM_STATE_HIDDEN is exactly the same
as being minimized, but according to the spec that's not necessarily
true (in practice, it's likely that these are the same though). Anyways,
just keep track of this state in a new variable (hidden) and use that
for determining if mpv should render or not.

There is one catch though: this cannot work if a display sync mode is
used. This is why the previous commit is needed. The display sync modes
in mpv require a blocking vsync implementation since its render loop is
directly driven by vsync. In xorg, if nothing is actually rendered, then
there's nothing for eglSwapBuffers (or FIFO for vulkan) to block on so
it returns immediately. This, of course, results in completely broken
video. We just need to check to make sure that we aren't in a display
sync mode before trying to be smart about rendering. Display sync is
power inefficient anyways, so no one is really being hurt here. As an
aside, this happens to work in wayland because there's basically a
custom (and ugly) vsync blocking function + timeout but that's off
topic.
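
(Illustrative sketch, not mpv's actual code: roughly what querying
_NET_WM_STATE for the hidden atom looks like in plain Xlib; the function
name is made up.)

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <stdbool.h>

static bool x11_window_is_hidden(Display *dpy, Window win)
{
    Atom state = XInternAtom(dpy, "_NET_WM_STATE", False);
    Atom hidden = XInternAtom(dpy, "_NET_WM_STATE_HIDDEN", False);
    Atom type;
    int format;
    unsigned long nitems, bytes_left;
    unsigned char *data = NULL;
    bool is_hidden = false;

    if (XGetWindowProperty(dpy, win, state, 0, 1024, False, XA_ATOM,
                           &type, &format, &nitems, &bytes_left,
                           &data) == Success && data) {
        // _NET_WM_STATE is a list of atoms; hidden is just one of them.
        Atom *atoms = (Atom *)data;
        for (unsigned long i = 0; i < nitems; i++) {
            if (atoms[i] == hidden)
                is_hidden = true;
        }
        XFree(data);
    }
    return is_hidden;
}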
2022-04-11 18:14:22 +00:00
sfan5 9afa41945f context_{wayland,x11egl}: use mpegl_create_window_surface() too
Again no functional difference, just uses better APIs when they're available.
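
(Sketch of the general pattern; the real mpegl_create_window_surface()
differs in detail. Prefer the platform-aware entry point from
EGL_EXT_platform_base when the driver advertises it, otherwise fall back
to the classic call.)

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <X11/Xlib.h>
#include <string.h>

static EGLSurface create_window_surface(EGLDisplay dpy, EGLConfig config,
                                        Window *xwin)
{
    const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
    if (exts && strstr(exts, "EGL_EXT_platform_base")) {
        PFNEGLCREATEPLATFORMWINDOWSURFACEEXTPROC create =
            (PFNEGLCREATEPLATFORMWINDOWSURFACEEXTPROC)
                eglGetProcAddress("eglCreatePlatformWindowSurfaceEXT");
        // The platform call takes a pointer to the Window...
        if (create)
            return create(dpy, config, xwin, NULL);
    }
    // ...while the legacy call takes the XID itself.
    return eglCreateWindowSurface(dpy, config, *xwin, NULL);
}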
2021-11-17 22:38:34 +01:00
Dudemanguy f5a094db04 vo_gpu: EGL: hack for alpha on different platforms
7fb972f fixed transparency on x11/EGL/Mesa but happened to also break it
for wayland and nvidia. Ideally on wayland, you should just be able to
pick the right EGLConfig that has alpha but this doesn't seem to work
because reasons. So just go back to setting the EGL_ALPHA_SIZE bit if
the user asks for alpha. Apparently this worked before for nvidia as
well. The hack is to just run an eglQueryString in the x11egl context.
If it picks up Mesa as the EGL_VENDOR, then force ctx->opts.want_alpha
to 0 and let pick_xrgba_config take care of the rest.
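
(Sketch of the workaround described above; want_alpha mirrors
ctx->opts.want_alpha from the text, and the helper name is made up.)

#include <EGL/egl.h>
#include <stdbool.h>
#include <string.h>

static bool should_request_egl_alpha(EGLDisplay dpy, bool want_alpha)
{
    const char *vendor = eglQueryString(dpy, EGL_VENDOR);
    if (vendor && strstr(vendor, "Mesa")) {
        // Mesa: don't request EGL_ALPHA_SIZE; pick_xrgba_config() selects
        // the RGBA visual on the X11 side instead.
        return false;
    }
    // Wayland / NVIDIA: requesting EGL_ALPHA_SIZE works, keep doing that.
    return want_alpha;
}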
2020-10-15 13:44:07 +00:00
wm4 e1586585b4 vo_gpu: opengl: make it work with EGL 1.4
This tries to deal with the crazy EGL situation. The summary is:

- using eglGetDisplay() with multiple windowing platforms doesn't really
  work, but Mesa had an awful hack for it
- this hack can be disabled at build time, and some distros sometimes
  accidentally or intentionally do so
- Mesa will probably eventually disable it by default
- we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
- the very regrettable graphics company (also known as Nvidia) ships
  drivers (for old hardware I think) that are EGL 1.4 only
- that means even though we "require" EGL 1.5 and link against it, the
  runtime EGL may be 1.4
- trying to run mpv there crashes in the dynamic linker
- so we have to go through some more awful compatibility hacks

This commit tries to do it "properly", but using EGL 1.4 as base. The
platform selection mechanism is a messy extension there, which got
elevated to core API in 1.5 (but OF COURSE in incompatible ways).

I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION)
is really needed, but if you ask me, it feels slightly saner not to rely
on an EGL 1.4 kludge forever. But maybe this is just an instance of
self-harm, since they will most likely never drop this API anyway.

Also, unlike before, we actually check the extension string for the
individual platform extensions, because who knows, some EGL
implementations might curse us if we pass unknown platform parameters.
(But actually, the more I think about this, the more bullshit it is.)

X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
so they're the only ones which are adjusted in this commit.

Unfortunately, correct function of this commit is unconfirmed. It's
possible that it crashes with the old drivers mentioned above.

Why didn't they solve it like this:

struct native_display {
    int platform_type;
    void *native_display;
};

Could have kept eglGetDisplay() without all the obnoxious extension BS.
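
(Rough illustration of the EGL 1.4-compatible path described above, going
through the EXT platform extension; not the exact mpv code, and the helper
name is made up.)

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <string.h>

static EGLDisplay get_x11_display(void *x11_display)
{
    // Client extensions can be queried without a display; a NULL return
    // means even EGL_EXT_client_extensions is missing.
    const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
    if (exts && strstr(exts, "EGL_EXT_platform_base") &&
        (strstr(exts, "EGL_EXT_platform_x11") ||
         strstr(exts, "EGL_KHR_platform_x11")))
    {
        PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (get_platform_display)
            return get_platform_display(EGL_PLATFORM_X11_EXT, x11_display, NULL);
    }
    // Last resort: the old call with its native-display guessing.
    return eglGetDisplay((EGLNativeDisplayType)x11_display);
}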
2019-12-16 00:25:51 +01:00
wm4 2c6d42e704 vo_gpu: x11egl: log EGL config ID
Somewhat useful for debugging.
2019-12-15 23:33:23 +01:00
wm4 cc746c9508 vo_gpu: x11egl: cleanup EGL correctly
...probably.

The EGL backend had a strange problem: when recreating the window, EGL
surface creation sometimes mysteriously failed. For example, keeping the
"_" key down (cycles video by default) destroys and recreates the window
in rapid succession, which will often enough show the "Could not create
EGL surface!" message.

This was puzzling because due to mpv's architecture, the X11 Window and
even the X11 Display were fully destroyed, the thread on which they ran
was destroyed, and then everything was recreated. There shouldn't have
been any state that could make subsequent EGL initialization fail.

It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
pretty crazy API (full of thread local and global state with weird
lifetime requirements), and for example it seems EGLDisplay cannot be
explicitly released, but apparently implicitly dies when the native
display is closed (at least EGL 1.5 claims eglTerminate() does _not_
invalidate the display, only certain objects linked to it). It appears
that Mesa still referenced at least EGLSurface in some form, and either
some pointer or some X11 ID was left dangling, and when it happened to
match on a later eglCreateWindowSurface() call, that call failed.

Fix this by calling eglTerminate(), which supposedly destroys (or rather
unreferences) contexts and surfaces created from the display (but
absurdly not the display itself).

Now why can't you just destroy the display? If it's implicitly
invalidated, why can't it just call eglTerminate() implicitly when this
happens? Did Mesa do something wrong when they somehow didn't
automatically remove the dangling object (so I could claim not to be
responsible for the bug)? Who the fuck knows, and I'm too tired to
figure this out (both because it's late, and because I'm tired of this
EGL crap API).

Still not sure if the code is correct now. I think EGL was designed to
maximize implementation and API-use complications. How else could you
possibly come up with something like the EGLDisplay life cycle? Or am I
just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.

Fixes: #7129
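
(The teardown order implied above, as a sketch; not the literal mpv code.)

#include <EGL/egl.h>

static void egl_uninit(EGLDisplay dpy, EGLContext ctx, EGLSurface surf)
{
    if (dpy == EGL_NO_DISPLAY)
        return;
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    if (ctx != EGL_NO_CONTEXT)
        eglDestroyContext(dpy, ctx);
    if (surf != EGL_NO_SURFACE)
        eglDestroySurface(dpy, surf);
    // Doesn't invalidate the EGLDisplay handle itself, but releases the
    // contexts/surfaces the implementation still references for it.
    eglTerminate(dpy);
    eglReleaseThread();
}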
2019-12-12 01:50:05 +01:00
wm4 aacc1942fb x11: require EGL 1.5 and use eglGetPlatformDisplay()
eglGetDisplay() is a broken API, since it takes a windowing-specific
argument, yet is supposed to work for multiple APIs at the same time. On
Linux, it can take both a X11 "Display" and a "wl_display". Obviously
there is no way to specify what kind of display the argument is (it's
just a void*).

Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.

The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).

Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display before you can query the EGL version
and extensions, so you have a chicken-and-egg problem. It's very stupid.
Maybe you could
jump through some more hoops to get something compatible, but fuck that.
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).

I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().

Speculative fix for #7154.

(Rant about EGL follows. Actually I deleted it.)
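
(The EGL 1.5 call in question, for reference; sketch only.)

#include <EGL/egl.h>
#include <EGL/eglext.h>   // EGL_PLATFORM_X11_KHR
#include <X11/Xlib.h>

static EGLDisplay get_display(Display *x11_display)
{
    // The platform enum makes the type of the native display explicit,
    // instead of guessing what hides behind a void*.
    return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
}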
2019-11-16 20:55:03 +01:00
wm4 cd8fd4b788 vo_gpu: context_x11egl: check eglGetConfigAttrib() for errors
Not sure why it assumes that it always succeeds (although generally it
won't fail).
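
(What the added check amounts to, schematically; the attribute shown is
just an example.)

#include <EGL/egl.h>

static int get_native_visual_id(EGLDisplay dpy, EGLConfig config)
{
    EGLint visual_id = 0;
    // eglGetConfigAttrib() returns an EGLBoolean that used to be ignored.
    if (!eglGetConfigAttrib(dpy, config, EGL_NATIVE_VISUAL_ID, &visual_id))
        return -1;   // caller decides how to log/handle the failure
    return visual_id;
}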
2019-11-08 21:22:49 +01:00
wm4 10a1b98082 vo_gpu: x11egl: support Mesa OML sync extension
Mesa supports the EGL_CHROMIUM_sync_control extension, and it's
available out of the box with AMD drivers. In practice, this is exactly
the same as GLX_OML_sync_control, but for EGL. The extension
specification is separate from the GLX one though, and buried somewhere
in the Chromium code.

This appears to work, although I don't know if it really works.

In theory, this could be useful for other EGL targets. Support code for
it could have been added to egl_helpers.c to avoid some minor duplicated
glue code if another EGL target were to provide this extension. I didn't
bother with that. ANGLE on Windows can't support it, because the
extension spec. explicitly requires POSIX timers. ANGLE on Linux/OSX is
actively harmful for mpv and hopefully won't ever use it. Wayland uses
EGL, but has its own fancy presentation feedback stuff (and besides, I
don't think basic video player functionality works on Wayland at all).
context_drm_egl maybe? But I think DRM has its own stuff.
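
(Sketch of how such an extension is typically probed and used; the wrapper
and the exact pointer types are illustrative, while the extension and entry
point names come from the spec mentioned above.)

#include <EGL/egl.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef EGLBoolean (*GetSyncValuesFn)(EGLDisplay, EGLSurface,
                                      int64_t *ust, int64_t *msc,
                                      int64_t *sbc);

static bool get_sync_values(EGLDisplay dpy, EGLSurface surf,
                            int64_t *ust, int64_t *msc, int64_t *sbc)
{
    const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
    if (!exts || !strstr(exts, "EGL_CHROMIUM_sync_control"))
        return false;
    GetSyncValuesFn fn =
        (GetSyncValuesFn)eglGetProcAddress("eglGetSyncValuesCHROMIUM");
    // ust/msc/sbc are the same 64-bit counters as in GLX_OML_sync_control.
    return fn && fn(dpy, surf, ust, msc, sbc);
}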
2019-09-08 23:23:43 +10:00
wm4 52dd38a48a client API: add a new way to pass X11 Display etc. to render API
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb API had nothing dedicated to this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).

This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.

Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.

The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).

To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.

The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
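
(Sketch of what this looks like from the API user's side; my_get_proc_address
and the surrounding function are stand-ins for the user's own code, while the
MPV_RENDER_PARAM_* names and mpv_render_context_create() are the real API.)

#include <mpv/client.h>
#include <mpv/render_gl.h>

static int setup_render_context(mpv_handle *mpv, void *x11_display,
                                void *(*my_get_proc_address)(void *, const char *),
                                mpv_render_context **out)
{
    mpv_opengl_init_params gl_init = {
        .get_proc_address = my_get_proc_address,
    };
    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
        // The new part: windowing system handles are passed directly,
        // instead of the old GL_MP_MPGetNativeDisplay pseudo-extension.
        {MPV_RENDER_PARAM_X11_DISPLAY, x11_display},
        {0}
    };
    return mpv_render_context_create(out, mpv, params);
}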
2018-03-26 19:47:08 +02:00
Niklas Haas 65979986a9 vo_opengl: refactor into vo_gpu
This is done in several steps:

1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
   (note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
   it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
   ra itself (and vo_gpu altered to compensate), but this was a stop-gap
   measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten

Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.

Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.

Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
2017-09-21 15:00:55 +02:00
wm4 c9d3a79187 vo_opengl: add a generic EGL function loader function
This is pretty trivial, but also quite annoying due to details like
mismatching eglGetProcAddress() function signature (most callers just
cast the function pointer), and ARM/Linux hacks. So move them all to one
place.
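
(Sketch of the kind of loader meant here; the dlsym() fallback reflects that
pre-1.5 eglGetProcAddress() is only guaranteed to resolve extension
functions. Names are illustrative.)

#define _GNU_SOURCE
#include <EGL/egl.h>
#include <dlfcn.h>

static void *egl_get_fn(const char *name)
{
    void *fn = (void *)eglGetProcAddress(name);
    if (!fn)
        fn = dlsym(RTLD_DEFAULT, name);   // core functions on older EGL
    return fn;
}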
2017-04-06 14:50:19 +02:00
wm4 b14fc38590 vo_opengl: x11egl: fix alpha mode
The way it should (probably) work is that selecting an RGBA framebuffer
format will simply make the compositor use the alpha. It works this way
on Wayland. On X11, this is... not done. Instead, both GLX and EGL
report two FB configs, which are exactly the same, except for the
platform-specific visual. Only the latter (non-default) points to a
visual that actually has alpha. So you can't make the pure GLX and EGL
APIs select alpha mode, and you have to override manually.

Or in other words, alpha was hacked violently into X11, in a way that
doesn't really make sense for the sake of compatibility, and forces API
users to wade through metaphorical cow shit to deal with it.

To be fair, some other platforms actually also require you to enable
alpha explicitly (rather than looking at the framebuffer type), but they
skip the metaphorical cow shit step.
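
(Sketch of the manual override: check whether the visual behind an EGLConfig
really carries alpha, roughly what the X11 code has to do via XRender; not
the literal implementation.)

#include <EGL/egl.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/Xrender.h>
#include <stdbool.h>

static bool config_has_alpha_visual(Display *x11, EGLDisplay dpy, EGLConfig cfg)
{
    EGLint visual_id = 0;
    if (!eglGetConfigAttrib(dpy, cfg, EGL_NATIVE_VISUAL_ID, &visual_id))
        return false;
    XVisualInfo vinfo = { .visualid = (VisualID)visual_id };
    int n = 0;
    XVisualInfo *vi = XGetVisualInfo(x11, VisualIDMask, &vinfo, &n);
    bool has_alpha = false;
    if (vi && n > 0) {
        XRenderPictFormat *fmt = XRenderFindVisualFormat(x11, vi[0].visual);
        has_alpha = fmt && fmt->direct.alphaMask > 0;
    }
    if (vi)
        XFree(vi);
    return has_alpha;
}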
2016-12-30 20:04:47 +01:00
wm4 6dc9280b58 vo_opengl: factor some EGL context creation code
Add a function to egl_helpers.c for creating an EGL context and make
context_x11egl.c use it. This is meant to be generic, and should work
with other windowing APIs as well. The other EGL-using code in mpv can
be switched to it.
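
(A minimal sketch of such a helper; the real egl_helpers code has more
fallbacks and error reporting.)

#include <EGL/egl.h>
#include <stdbool.h>

static bool create_gl_context(EGLDisplay dpy, EGLConfig *out_config,
                              EGLContext *out_ctx)
{
    eglBindAPI(EGL_OPENGL_API);
    EGLint attrs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE,
    };
    EGLConfig config;
    EGLint num = 0;
    if (!eglChooseConfig(dpy, attrs, &config, 1, &num) || num < 1)
        return false;
    EGLContext ctx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, NULL);
    if (ctx == EGL_NO_CONTEXT)
        return false;
    *out_config = config;
    *out_ctx = ctx;
    return true;
}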
2016-09-13 18:03:43 +02:00
wm4 fde784129f x11: stop using vo.event_fd
Instead let it do its own event loop wakeup handling.
2016-07-20 20:52:08 +02:00
wm4 788929e4e0 vo_opengl: use standard functions to retrieve display depth
Until now, we've used system-specific APIs (GLX, EGL, etc.) to retrieve
the depth of the default framebuffer. (We equate this with the display
depth and use the determined depth for dithering.)

We can actually retrieve this value through standard GL API, and it
works everywhere (except GLES 2 of course). This simplifies everything a
great deal.

egl_helpers.c is empty now. But I expect that some EGL boilerplate will
be moved to it, so don't remove it yet.
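
(The standard query meant above, as a sketch; assumes a desktop GL 3.x
context with the GL prototypes available. GLES 3 uses GL_BACK as the
attachment name instead.)

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

static int default_fb_component_depth(void)
{
    // Query the default framebuffer's back buffer directly through GL,
    // with no GLX/EGL involvement.
    GLint red_bits = 0;
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                                          GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE,
                                          &red_bits);
    return red_bits;   // e.g. 8 or 10; fed into the dither logic
}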
2016-06-14 10:35:43 +02:00
wm4 bea2e39721 vo_opengl: request core profile on X11/EGL too
This avoids some OpenGL implementations pinning the context version to 3.0.
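
(The kind of context attributes this implies, as a sketch; requires EGL 1.5
or EGL_KHR_create_context at runtime, and the version numbers are only an
example.)

#include <EGL/egl.h>

static EGLContext create_core_context(EGLDisplay dpy, EGLConfig config)
{
    EGLint attrs[] = {
        EGL_CONTEXT_MAJOR_VERSION, 3,
        EGL_CONTEXT_MINOR_VERSION, 3,
        EGL_CONTEXT_OPENGL_PROFILE_MASK, EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT,
        EGL_NONE,
    };
    return eglCreateContext(dpy, config, EGL_NO_CONTEXT, attrs);
}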
2016-06-10 20:31:28 +02:00
wm4 e4ec0f42e4 Change GPL/LGPL dual-licensed files to LGPL
Do this to make the license situation less confusing.

This change should be of no consequence, since LGPL is compatible with
GPL anyway, and making it LGPL-only does not restrict the use with GPL
code.

Additionally, the wording implies that this is allowed, and that we can
just remove the GPL part.
2016-01-19 18:36:34 +01:00
wm4 4cc1861378 vo_opengl: prefix per-backend source files with context_ 2015-12-19 14:14:12 +01:00