Commit Graph

41 Commits

Author SHA1 Message Date
wm4 0c0a06140c vo_opengl: restructure format setup
Instead of setting up a weird swizzle (which is linked to how the
internal renderer code works, rather than the generic format code), add
per-component mapping to gl_imgfmt_desc.

The renderer still computes the weird swizzle, but at least it's
confined to itself. Also, it appears the hwdec backends don't need this
anymore.

It's really nice that the messy init_format() goes away too.
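
A rough sketch of what the per-component mapping could look like
(names and layout here are illustrative, not the actual mpv
definitions):

    // Hypothetical shape of the mapping: for each plane, list which
    // logical component (1=R/Y, 2=G/Cb, 3=B/Cr, 4=A, 0=unused) each
    // texture channel carries.
    struct gl_imgfmt_desc {
        int num_planes;
        const struct gl_format *planes[4];
        uint8_t components[4][4]; // [plane][texture channel]
    };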
2017-06-30 17:07:55 +02:00
wm4 b6d0b57e85 Drop/move img_fourcc.h
This file is a leftover from when img_format.h was changed from using
the ancient FourCCs (based on Microsoft multimedia conventions) for
pixel formats to a simple enum. The remaining cases still inherently
used FourCCs for whatever reason.

Instead of worrying about residual copyrights in this file, just move it
into code we don't want to relicense (the ancient Linux TV code). We
have to fix some other code depending on it. For the most part, we just
replace the MP_FOURCC macro with libavutil's MKTAG (although the macro
definition is exactly the same). In demux_raw, we drop some pre-defined
FourCCs, but it's not like it matters. (Instead of
--demuxer-rawvideo-format use --demuxer-rawvideo-mp-format.)
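
For reference, libavutil defines MKTAG like this - byte for byte what
MP_FOURCC did:

    #define MKTAG(a,b,c,d) \
        ((a) | ((b) << 8) | ((c) << 16) | ((unsigned)(d) << 24))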
2017-06-18 15:13:45 +02:00
wm4 2b616c0682 vo_opengl: drop TLS usage
TLS is a headache. We should avoid it if we can.

The involved mechanism is unfortunately entangled with the awkward
libmpv API for returning pointers to host API objects. This has to be
kept until we change the API somehow.

Practically untested out of pure laziness. I'm sure I'll get a bunch of
reports if it's broken.
2017-05-11 17:47:33 +02:00
wm4 f59371de21 video: drop vaapi/vdpau hw decoding support with FFmpeg 3.2
This drops support for the old libavcodec APIs. Now FFmpeg 3.3 or FFmpeg
git is required. Libav has no release with the new APIs yet, so Libav
git as of a few weeks or months ago is required if you want to use
Libav.

Not much actually changes in hwdec_vaegl.c - some code is removed, but
the reindentation inflates the diff.
2017-04-23 16:07:03 +02:00
wm4 d8bf000d29 vo_opengl: hwdec_vaegl: use new format setup function
Plus add a helper.
2017-02-17 17:26:01 +01:00
wm4 3e474f4654 vo_opengl: hwdec_vaegl: fix potentially undefined memory access 2017-02-14 13:30:48 +01:00
wm4 443d3a91d3 vaapi: remove central lock around vaapi API calls
The lock was disabled recently. This commit gets rid of the dummied-out
calls. The main reason for removing it is that there is no apparent need
for it anymore, and the new FFmpeg vaapi code does not use or provide
such a lock (there are some places which we cannot control and which do
vaapi API calls, like frame destructors).
2017-01-28 18:27:30 +01:00
wm4 6c28824a92 vo_opengl: hwdec_vaegl: add a lie for compatibility
EGL rendering with the new decode API didn't work, because a certain
libva bug with sort-of legacy API use was hit again: it reports the
wrong vaapi pixel format. It's old code and always NV12 anyway, so stop
worrying about it.
2017-01-13 18:43:35 +01:00
wm4 812128bab7 vo_opengl, vaapi: properly probe 10 bit rendering support
There are going to be users who have a Mesa installation which does not
support 10 bit, but a GPU which can decode to 10 bit. So it's probably
better not to hardcode whether it is supported.

Introduce a more general way to signal supported formats from renderer
to decoder. Obviously this is imperfect, because it still isn't part of
proper format negotiation (for example, what if there's a vavpp filter,
which accepts anything?). Still slightly better than before.

I don't know any way to probe for vaapi dmabuf/EGL dmabuf support
properly (in particular testing specific formats, not just general
availability). So we stay with the current approach and try to create
and map dummy surfaces on init to probe for support. Overdo it and check
all formats that AVHWFramesConstraints reports, instead of only NV12 and
P010 surfaces.

Since we can support unknown formats now, add explicit checks to the
EGL/dmabuf mapper code to reject unsupported formats. I also noticed
that libavutil signals support for RGB0/BGR0, but couldn't get it to
work. Remove the DRM formats that are unused/didn't work the way I tried
to use them.

With this, 10 bit decoding + rendering should work, provided you have
a capable GPU and a patched Mesa. The required Mesa patch adds support
for the R16 and GR32 formats. It was sent by a Kodi developer to the
Mesa developer mailing list, but has not been accepted yet.
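
The probing amounts to something like the following sketch (error
handling trimmed; device_ref is an existing AVHWDeviceContext
reference, and try_egl_map()/mark_supported() are placeholders for the
actual dmabuf export/EGL import path - mpv's real code differs in
detail):

    AVHWFramesConstraints *cts =
        av_hwdevice_get_hwframe_constraints(device_ref, NULL);
    for (int i = 0; cts && cts->valid_sw_formats &&
                    cts->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++)
    {
        AVBufferRef *fref = av_hwframe_ctx_alloc(device_ref);
        AVHWFramesContext *fctx = (void *)fref->data;
        fctx->format = AV_PIX_FMT_VAAPI;
        fctx->sw_format = cts->valid_sw_formats[i];
        fctx->width = fctx->height = 128;
        fctx->initial_pool_size = 1;
        AVFrame *f = av_frame_alloc();
        // Create a dummy surface, then see whether mapping it works.
        if (av_hwframe_ctx_init(fref) >= 0 &&
            av_hwframe_get_buffer(fref, f, 0) >= 0 &&
            try_egl_map(f))
            mark_supported(fctx->sw_format);
        av_frame_free(&f);
        av_buffer_unref(&fref);
    }
    av_hwframe_constraints_free(&cts);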
2017-01-13 18:43:35 +01:00
wm4 88dfb9a5e7 vo_opengl: hwdec_vaegl: remove redundant vaapi surface format check
For surfaces allocated by libavutil, we assume that the sw_format
(i.e. hw_subfmt in mp_image_params) is always correct. The API guarantees
that it explicitly sets the equivalent vaapi format on surface
allocation.

For surfaces allocated by mpv's old vaapi code, we explicitly retrieve
the format right after decoding. Unless the driver magically changes the
format asynchronously, it will still be correct once the surface reaches
the renderer.

In both cases, checking the format again is obviously redundant. In
addition, dropping the check means we no longer have to maintain a
libva fourcc <-> mpfmt table and the va_fourcc_to_imgfmt() function.
This also unbreaks 10 bit rendering support (still disabled by default).
2017-01-13 10:28:58 +01:00
wm4 6de00d10c8 vo_opengl: hwdec_vaegl: fix terminology in comment
Bad idea to call a component "pixel" - that's true only for the Y plane.
2017-01-13 10:22:11 +01:00
Mark Thompson 14010e7cf6 vo_opengl: hwdec_vaegl: DRM_FORMAT_GR16 was renamed to DRM_FORMAT_GR32
Signed-off-by: wm4 <wm4@nowhere>
2017-01-13 10:18:54 +01:00
wm4 5174add43a vo_opengl: hwdec_vaegl: add experimental P010 support
This does not work, because Mesa has no support for the proposed
DRM_FORMAT_R16 and DRM_FORMAT_GR16 formats. It's also untested of
course.

As long as video/decode/vaapi.c doesn't hand down P010 surfaces, this is
fine anyway.

This can be tested by removing the code that disables P010 output:

diff --git a/video/decode/vaapi.c b/video/decode/vaapi.c
--- a/video/decode/vaapi.c
+++ b/video/decode/vaapi.c
@@ -55,13 +55,6 @@ static int init_decoder(struct lavc_ctx *ctx, int w, int h)

     assert(!ctx->avctx->hw_frames_ctx);

-    // If we use direct rendering, disallow 10 bit - it's probably not
-    // implemented yet, and our downstream components can't deal with it.
-    if (!p->own_ctx && required_sw_format != AV_PIX_FMT_NV12) {
-        MP_WARN(ctx, "10 bit surfaces are currently unsupported.\n");
-        return -1;
-    }
-
2017-01-12 14:01:09 +01:00
wm4 33c24b07e4 vo_opengl: vaegl: log more debugging info 2016-09-30 14:36:42 +02:00
wm4 ae94f329a9 vo_opengl: hwdec: reset hw_subfmt field
In theory, an mp_image_params with hw_subfmt set to non-0 while imgfmt
is not a hwaccel format is invalid. (It worked fine only because nothing
checks this yet.)
2016-07-15 13:04:17 +02:00
wm4 85488f6892 video: change hw_subfmt meaning
The hw_subfmt field roughly corresponds to the field
AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type
AVPixelFormat (instead of the underlying hardware format), so it's a
good idea to switch to this too, in preparation.

Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-specific
number. VDPAU and Direct3D11 already used mp_imgfmt, but Videotoolbox
and VAAPI had to be switched.

One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical
number.

(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
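
A sketch of the new convention (pixfmt2imgfmt() is mpv's existing
AVPixelFormat -> mp_imgfmt conversion; the surrounding code is
illustrative):

    // Before: hw_subfmt carried an API-specific value (e.g. a VA fourcc).
    // After: it carries an mp_imgfmt mirroring AVHWFramesContext.sw_format.
    AVHWFramesContext *fctx = (void *)frames_ref->data;
    params->hw_subfmt = pixfmt2imgfmt(fctx->sw_format);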
2016-07-15 13:04:17 +02:00
wm4 b0b01aa250 vo_opengl: refactor how hwdec interop exports textures
Rename gl_hwdec_driver.map_image to map_frame, and let it fill out a
struct gl_hwdec_frame describing the exact texture layout. This gives
more flexibility to what the hwdec interop can export. In particular, it
can export strange component orders/permutations and textures with
padded size. (The latter originating from cropped video.)

The way gl_hwdec_frame works is in the spirit of the rest of the
vo_opengl video processing code, which tends to put as much information
in immediate state (as part of the dataflow), instead of declaring it
globally. To some degree this duplicates the texplane and img_tex
structs, but until we somehow unify those, it's better to give the hwdec
state its own struct. The fact that changing the hwdec struct would
require changes and testing on at least 4 platform/GPU combinations
makes duplicating it almost a requirement to avoid pain later.

Make gl_hwdec_driver.reinit set the new image format and remove the
gl_hwdec.converted_imgfmt field.

Likewise, gl_hwdec.gl_texture_target is replaced with
gl_hwdec_plane.gl_target.

Split out a init_image_desc function from init_format. The latter is not
called in the hwdec case at all anymore. Setting up most of struct
texplane is also completely separate in the hwdec and normal cases.

video.c does not check whether the hwdec "mapped" image format is
supported. This should not really happen anyway, and if it does, the
hwdec interop backend must fail at creation time, so this is not an
issue.
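
In outline, the new interface looks roughly like this (simplified; the
real structs carry more fields):

    struct gl_hwdec_plane {
        GLuint gl_texture;
        GLenum gl_target;   // replaces gl_hwdec.gl_texture_target
        int tex_w, tex_h;   // possibly padded texture size
    };

    struct gl_hwdec_frame {
        struct gl_hwdec_plane planes[4];
    };

    // Formerly map_image; now fills out the frame description:
    // int (*map_frame)(struct gl_hwdec *hw, struct mp_image *hw_image,
    //                  struct gl_hwdec_frame *out_frame);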
2016-05-10 18:42:42 +02:00
wm4 46fff8d31a video: refactor how VO exports hwdec device handles
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.

The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)

This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.

Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.

The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.

This is a big refactor. Some things may be untested and could be broken.
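
Schematically (a sketch; the accessor shown is illustrative):

    struct mp_hwdec_devices;  // opaque now (was the open mp_hwdec_info)

    struct mp_hwdec_ctx {
        enum hwdec_type type;
        void *ctx;  // replaces the API-specific fields; e.g. VADisplay,
                    // or the D3D interface pointer (mp_d3d_ctx is gone)
    };

    // Mostly thread-safe accessors, e.g.:
    struct mp_hwdec_ctx *hwdec_devices_get(struct mp_hwdec_devices *devs,
                                           enum hwdec_type type);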
2016-05-09 20:03:22 +02:00
wm4 f56555b514 vo_opengl: EGL: fix hwdec probing
If ANGLE was probed before (but rejected), the ANGLE API can remain
"initialized", and eglGetCurrentDisplay() will return a non-NULL
EGLDisplay. If a native GL context is used afterwards, the ANGLE/EGL
API will (apparently) keep working alongside the native OpenGL API.
Since GL
objects are just numbers, they'll simply fail to interact, and OpenGL
will get invalid textures. For some reason this will result in black
textures.

With VAAPI-EGL, something similar could happen in theory, but didn't in
practice.
2016-05-05 13:38:08 +02:00
wm4 f5ff2656e0 vaapi: determine surface format in decoder, not in renderer
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).

In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.

Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.

This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of
subformats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).

Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.

In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
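
The per-surface query is essentially this (error handling trimmed;
display/surface/mpi are context, and set_subfmt() is a placeholder):

    // Query the actual format of a freshly decoded surface.
    VAImage img;
    if (vaDeriveImage(display, surface, &img) == VA_STATUS_SUCCESS) {
        set_subfmt(mpi, img.format.fourcc);  // e.g. VA_FOURCC_NV12
        vaDestroyImage(display, img.image_id);
    }
    // Failures are silently ignored - some drivers (the vdpau wrapper)
    // support neither vaDeriveImage nor EGL interop.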
2016-04-11 22:03:26 +02:00
wm4 05ffde6599 vo_opengl: hwdec: use IDs for API, and log which backend is used
Since there can be multiple backends for a single API (vaapi can use GLX
or EGL), not logging the exact backend name is annoying. So add it. At
the same time, there is no need to duplicate the name as used by the
--hwdec options, so replace it with using the numeric hwdec API ID.
2016-02-01 20:02:52 +01:00
wm4 4c25f49236 vo_opengl: vaapi: don't expect EGL exts. to be in common ext. string 2016-01-22 19:56:10 +01:00
wm4 03b50d4bf6 vo_opengl: vaapi: reorganize platform entrypoints as table 2016-01-21 13:32:29 +01:00
wm4 68366b05f2 vo_opengl: add KMS/DRM VAAPI hardware decoding interop
Just requires glueing it together with Bloat Super Glue (tm).
2016-01-20 19:41:29 +01:00
wm4 453ea2cb6c vaapi: replace VA_STR_FOURCC 2016-01-11 20:30:36 +01:00
wm4 e69d1f2780 vo_opengl: hwdec_vaegl: change license to LGPL 2.1
This file claims to be based on the "MPlayer VA-API patch", but this is
untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue
code was never in MPlayer or the MPlayer VA-API patch in any form; it
was instead part of the mpv-original way we do hardware decoding OpenGL
interop. The EGL interop method didn't exist at the time the MPlayer
VA-API patch was created either.
2016-01-07 10:46:15 +01:00
wm4 11888a9270 vo_opengl: never load vaapi GLX interop by default
Causes more harm than it helps. Will eventually be removed.

Also rename the "reject_emulated" field to "probing" - this is more
appropriate now.
2015-11-09 11:58:38 +01:00
wm4 0344abd67a vo_opengl: vaapi: fix compilation failure on older systems
Older systems have certain EGL extension definitions missing. We
redefine them to keep the build system simple, and because it's trivial.
But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope
it's the only missing one.)
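
The fix amounts to one more inline definition (the value comes from the
EGL_EXT_image_dma_buf_import spec):

    #ifndef EGL_LINUX_DMA_BUF_EXT
    #define EGL_LINUX_DMA_BUF_EXT 0x3270
    #endif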
2015-10-23 15:56:17 +02:00
Emmanuel Gil Peyrot a9ed9e329c vo_opengl: remove leftover variable from vaglx in vaegl
This was preventing compilation on systems without X11 headers.
2015-10-02 03:49:02 +01:00
wm4 8aa8417aa3 vo_opengl: vaapi: add Wayland support
Pretty trivial with the new EGL interop.

Fixes #478.
2015-09-27 21:38:45 +02:00
wm4 710872bc22 vaapi: remove dependency on X11
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.

The worst of this is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
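
The translation is roughly this sketch (the vaGetDisplay* entry points
are the real libva functions from va_x11.h, va_wayland.h and va_drm.h;
the surrounding function is illustrative):

    static VADisplay get_display(void *x11, void *wl, int drm_fd)
    {
        if (x11)
            return vaGetDisplay(x11);        // libva-x11
        if (wl)
            return vaGetDisplayWl(wl);       // libva-wayland
        if (drm_fd >= 0)
            return vaGetDisplayDRM(drm_fd);  // libva-drm
        return NULL;
    }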
2015-09-27 21:33:15 +02:00
wm4 2a7811176c vo_opengl: vaapi: provide symbols for missing extensions
We also could just check at build time, but since it's not much, just
redefine them inline if not present.
2015-09-27 16:25:03 +02:00
wm4 2e5df94f0f vo_opengl: vaapi: redo how EGL extensions are loaded
It looks like my hope that we can unconditionally include EGL headers in
the OpenGL code is not coming true, because OSX does not support EGL at
all. So I prefer loading the VAAPI EGL/GL specific extensions manually,
because it's less of a mess. Partially reverts commit d47dff3f.
2015-09-27 16:18:06 +02:00
wm4 e0cb65e8ac vo_opengl: vaapi: probe the surface format
Probe the surface format, and check whether it's really something we
support. This also does a complete check whether the EGL interop works
at all (the only way to find this out is actually running this code).
Also, support YV12. Under some circumstances, vaapi (with Intel
drivers) can be made to use this format.

Unfortunately, the Intel drivers show some very weird behavior, which
is hopefully a bug. insane_hack() provides a very evil workaround (see
comments). A proper solution might be passing the hw format as part of
mp_image_params, but as long as hw surfaces appear to be able to change
the format on the fly, attempting this is probably not worth the extra
complexity and likely fragility. The hack allows us to pretend that
there is sane behavior for now.
2015-09-26 20:52:10 +02:00
wm4 0a6c334b59 vo_opengl: vaapi: undo vaAcquireBufferHandle() correctly on error
Checking and resetting the VAImage.buf field is nonsense, even if it
happened to work out in the normal case. buf is actually freed when
vaDestroyImage() is called (not quite intuitive), and we need an extra
field to know whether vaReleaseBufferHandle() has to be called.
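
The corrected cleanup, schematically (buf_allocated is the "extra
field" mentioned above):

    // Release in the right order, tracking acquisition explicitly.
    if (p->buf_allocated)
        vaReleaseBufferHandle(display, p->image.buf);
    p->buf_allocated = false;
    if (p->image.image_id != VA_INVALID_ID)
        vaDestroyImage(display, p->image.image_id); // also frees image.buf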
2015-09-25 12:14:19 +02:00
wm4 993bee38ca vo_opengl: vaapi: handle YV12 correctly
This specific FourCC has its planes swapped compared to FFmpeg yuv420p.
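
Concretely, this means swapping the chroma plane offsets/pitches when
mapping (a sketch using the VAImage fields):

    // YV12 stores V before U; FFmpeg's yuv420p stores U before V.
    if (image.format.fourcc == VA_FOURCC_YV12) {
        MPSWAP(uint32_t, image.offsets[1], image.offsets[2]);
        MPSWAP(uint32_t, image.pitches[1], image.pitches[2]);
    }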
2015-09-25 12:07:20 +02:00
wm4 0b87bf9b72 vo_opengl: vaapi: document DRM fourcc upstream defines
Add the upstream symbolic names as comments. Normally, these should be
defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and
DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland
headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge
Linux kernel headers, so this will have to do.

Also, just for completeness, add fourccs for the 3 and 4 channel
formats. I didn't manage to test them, though.
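
The defines in question, as they appear (modulo comments) in the Linux
4.3 userland drm_fourcc.h:

    #define DRM_FORMAT_R8    fourcc_code('R', '8', ' ', ' ')  /* 8 bpp red */
    #define DRM_FORMAT_GR88  fourcc_code('G', 'R', '8', '8')  /* 16 bpp GR */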
2015-09-25 12:03:36 +02:00
wm4 456366b63b vo_opengl: vaapi: use dummy image to determine plane layout
Drastically reduces the number of hardcoded assumptions about the
layout. (Now adding yuv420 support would just be a matter of adjusting
an if, if you ignore the other problems, such as determining the hw
format early enough at all.)
2015-09-25 10:22:10 +02:00
wm4 ec356d3efe vo_opengl: vaapi: remove unnecessary loop
Not sure what I was thinking.
2015-09-25 10:03:17 +02:00
wm4 5786d07450 vo_opengl: vaapi: fix cleanup
Don't call eglDestroyImageKHR() on the same ID possibly more than once.

Clear the image reference on termination, or we would leak up to 1 image
per VO recreation.
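
The fix boils down to a guard plus clearing the handle (sketch):

    // Destroy each EGLImage at most once, and drop the reference so
    // the next map/unmap cycle starts clean.
    for (int n = 0; n < 4; n++) {
        if (p->images[n])
            p->eglDestroyImageKHR(p->display, p->images[n]);
        p->images[n] = 0;
    }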
2015-09-25 10:02:54 +02:00
wm4 8d8a2045bd vo_opengl: support new VAAPI EGL interop
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:

    --vo=opengl:backend=x11egl

Should it turn out that the new interop works well, we will try to
autodetect EGL by default.

This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)

This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCC values it uses for
EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does not
actually exist).
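
For the record, the import boils down to an attribute list like this
(the EGL_EXT_image_dma_buf_import pattern; variable names are
illustrative):

    EGLint attribs[] = {
        EGL_LINUX_DRM_FOURCC_EXT, fourcc,     // e.g. DRM_FORMAT_R8
        EGL_WIDTH, width,
        EGL_HEIGHT, height,
        EGL_DMA_BUF_PLANE0_FD_EXT, dmabuf_fd, // from vaAcquireBufferHandle()
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
        EGL_DMA_BUF_PLANE0_PITCH_EXT, pitch,
        EGL_NONE
    };
    EGLImageKHR image = eglCreateImageKHR(egl_display, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT, NULL,
                                          attribs);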

(This is the 3rd VAAPI GL interop that was implemented in this player.)
2015-09-25 00:26:19 +02:00