/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <stddef.h>
#include <string.h>
#include <assert.h>
#include <unistd.h>

#include <libavutil/hwcontext.h>
#include <libavutil/hwcontext_vaapi.h>

#include "config.h"

#include "video/out/hwdec/hwdec_vaapi.h"

#if HAVE_VAAPI_DRM
#include "libmpv/render_gl.h"
#endif
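
// The helpers below try to create a VADisplay from whichever windowing-system
// handle the active render backend exposes via ra_get_native_resource().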

#if HAVE_VAAPI_X11
#include <va/va_x11.h>

static VADisplay *create_x11_va_display(struct ra *ra)
{
    Display *x11 = ra_get_native_resource(ra, "x11");
    return x11 ? vaGetDisplay(x11) : NULL;
}
#endif

#if HAVE_VAAPI_WAYLAND
#include <va/va_wayland.h>

static VADisplay *create_wayland_va_display(struct ra *ra)
{
    struct wl_display *wl = ra_get_native_resource(ra, "wl");
    return wl ? vaGetDisplayWl(wl) : NULL;
}
#endif

#if HAVE_VAAPI_DRM
#include <va/va_drm.h>

static VADisplay *create_drm_va_display(struct ra *ra)
{
    mpv_opengl_drm_params_v2 *params = ra_get_native_resource(ra, "drm_params_v2");
    if (!params || params->render_fd == -1)
        return NULL;

    return vaGetDisplayDRM(params->render_fd);
}
#endif
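
// One entry per way of opening a native VA display: a name used for logging
// plus the callback that attempts to create the VADisplay from the ra handles.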
struct va_create_native {
    const char *name;
    VADisplay *(*create)(struct ra *ra);
};

static const struct va_create_native create_native_cbs[] = {
#if HAVE_VAAPI_X11
    {"x11", create_x11_va_display},
#endif
#if HAVE_VAAPI_WAYLAND
    {"wayland", create_wayland_va_display},
#endif
#if HAVE_VAAPI_DRM
    {"drm", create_drm_va_display},
#endif
};
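
// Walk the table above and return the first VADisplay that can be created
// with the resources the current ra provides, or NULL if none works.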
static VADisplay *create_native_va_display(struct ra *ra, struct mp_log *log)
{
    for (int n = 0; n < MP_ARRAY_SIZE(create_native_cbs); n++) {
        const struct va_create_native *disp = &create_native_cbs[n];
        mp_verbose(log, "Trying to open a %s VA display...\n", disp->name);
        VADisplay *display = disp->create(ra);
        if (display)
            return display;
    }
    return NULL;
}

static void determine_working_formats(struct ra_hwdec *hw);

static void uninit(struct ra_hwdec *hw)
{
    struct priv_owner *p = hw->priv;
    if (p->ctx)
        hwdec_devices_remove(hw->devs, &p->ctx->hwctx);
    va_destroy(p->ctx);
}
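
// NULL-terminated list of interop backends to try during init(); which
// entries exist depends on the build configuration (OpenGL and/or Vulkan).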
const static vaapi_interop_init interop_inits[] = {
#if HAVE_GL
    vaapi_gl_init,
#endif
#if HAVE_VULKAN
    vaapi_vk_init,
#endif
    NULL
};
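
// Initialize the VAAPI hwdec: pick an interop backend, open a native VA
// display, set up the libva/libavutil device, probe usable formats, and
// register the device for decoders.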
static int init(struct ra_hwdec *hw)
{
    struct priv_owner *p = hw->priv;

    for (int i = 0; interop_inits[i]; i++) {
        if (interop_inits[i](hw)) {
            break;
        }
    }

    if (!p->interop_map || !p->interop_unmap) {
        MP_VERBOSE(hw, "VAAPI hwdec only works with OpenGL or Vulkan backends.\n");
        return -1;
    }

    p->display = create_native_va_display(hw->ra, hw->log);
    if (!p->display) {
        MP_VERBOSE(hw, "Could not create a VA display.\n");
        return -1;
    }

    p->ctx = va_initialize(p->display, hw->log, true);
    if (!p->ctx) {
        vaTerminate(p->display);
        return -1;
    }
    if (!p->ctx->av_device_ref) {
        MP_VERBOSE(hw, "libavutil vaapi code rejected the driver?\n");
        return -1;
    }

    if (hw->probing && va_guess_if_emulated(p->ctx)) {
        return -1;
    }

    determine_working_formats(hw);
    if (!p->formats || !p->formats[0]) {
        return -1;
    }

    p->ctx->hwctx.supported_formats = p->formats;
    p->ctx->hwctx.driver_name = hw->driver->name;
    hwdec_devices_add(hw->devs, &p->ctx->hwctx);

    return 0;
}
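
// Undo mapper_map(): let the interop backend drop its mapping, close any
// dmabuf fds from the exported surface descriptor (libva >= 1.1), and release
// the buffer handle and derived VAImage if they are still held.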
static void mapper_unmap(struct ra_hwdec_mapper *mapper)
{
    struct priv_owner *p_owner = mapper->owner->priv;
    VADisplay *display = p_owner->display;
    struct priv *p = mapper->priv;
    VAStatus status;

    p_owner->interop_unmap(mapper);

#if VA_CHECK_VERSION(1, 1, 0)
    if (p->surface_acquired) {
        for (int n = 0; n < p->desc.num_objects; n++)
            close(p->desc.objects[n].fd);
        p->surface_acquired = false;
    }
#endif

    if (p->buffer_acquired) {
        status = vaReleaseBufferHandle(display, p->current_image.buf);
        CHECK_VA_STATUS(mapper, "vaReleaseBufferHandle()");
        p->buffer_acquired = false;
    }
    if (p->current_image.image_id != VA_INVALID_ID) {
        status = vaDestroyImage(display, p->current_image.image_id);
        CHECK_VA_STATUS(mapper, "vaDestroyImage()");
        p->current_image.image_id = VA_INVALID_ID;
    }
}

static void mapper_uninit(struct ra_hwdec_mapper *mapper)
{
    struct priv_owner *p_owner = mapper->owner->priv;
    if (p_owner->interop_uninit) {
        p_owner->interop_uninit(mapper);
    }
}
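
// Return whether the renderer reported the given image format as working
// during format probing.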
static bool check_fmt(struct ra_hwdec_mapper *mapper, int fmt)
{
    struct priv_owner *p_owner = mapper->owner->priv;
    for (int n = 0; p_owner->formats && p_owner->formats[n]; n++) {
        if (p_owner->formats[n] == fmt)
            return true;
    }
    return false;
}
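
// Per-mapper setup: the mapped image uses the hardware surface format
// (hw_subfmt, e.g. NV12) as its real format, and the interop backend gets a
// chance to create its per-mapper state.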
static int mapper_init(struct ra_hwdec_mapper *mapper)
{
    struct priv_owner *p_owner = mapper->owner->priv;
    struct priv *p = mapper->priv;

    p->current_image.buf = p->current_image.image_id = VA_INVALID_ID;

    mapper->dst_params = mapper->src_params;
    mapper->dst_params.imgfmt = mapper->src_params.hw_subfmt;
    mapper->dst_params.hw_subfmt = 0;

    struct ra_imgfmt_desc desc = {0};

    if (!ra_get_imgfmt_desc(mapper->ra, mapper->dst_params.imgfmt, &desc))
        return -1;

    p->num_planes = desc.num_planes;
    mp_image_set_params(&p->layout, &mapper->dst_params);

    if (p_owner->interop_init)
        if (!p_owner->interop_init(mapper, &desc))
            return -1;

    if (!p_owner->probing_formats && !check_fmt(mapper, mapper->dst_params.imgfmt))
    {
        MP_FATAL(mapper, "unsupported VA image format %s\n",
                 mp_imgfmt_to_name(mapper->dst_params.imgfmt));
        return -1;
    }

    return 0;
}
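
// Map the current VA surface so its planes can be accessed by the renderer.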
|
|
|
|
|
static int mapper_map(struct ra_hwdec_mapper *mapper)
{
    struct priv_owner *p_owner = mapper->owner->priv;
    struct priv *p = mapper->priv;
    VAStatus status;
    VADisplay *display = p_owner->display;

#if VA_CHECK_VERSION(1, 1, 0)
    if (p->esh_not_implemented)
        goto esh_failed;

    status = vaExportSurfaceHandle(display, va_surface_id(mapper->src),
                                   VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                                   VA_EXPORT_SURFACE_READ_ONLY |
                                   VA_EXPORT_SURFACE_SEPARATE_LAYERS,
                                   &p->desc);
    if (!CHECK_VA_STATUS_LEVEL(mapper, "vaExportSurfaceHandle()",
                               p_owner->probing_formats ? MSGL_V : MSGL_ERR)) {
        if (status == VA_STATUS_ERROR_UNIMPLEMENTED)
            p->esh_not_implemented = true;
        goto esh_failed;
    }

    vaSyncSurface(display, va_surface_id(mapper->src));
    // No need to error out if sync fails, but good to know if it did.
    CHECK_VA_STATUS(mapper, "vaSyncSurface()");
    p->surface_acquired = true;

    if (!p_owner->interop_map(mapper))
        goto esh_failed;

    if (p->desc.fourcc == VA_FOURCC_YV12)
        MPSWAP(struct ra_tex*, mapper->tex[1], mapper->tex[2]);

    return 0;

esh_failed:
    if (p->surface_acquired) {
        for (int n = 0; n < p->desc.num_objects; n++)
            close(p->desc.objects[n].fd);
        p->surface_acquired = false;
    }
#endif // VA_CHECK_VERSION

    if (p_owner->interop_map_legacy) {
        VAImage *va_image = &p->current_image;
        status = vaDeriveImage(display, va_surface_id(mapper->src), va_image);
        if (!CHECK_VA_STATUS(mapper, "vaDeriveImage()"))
            goto err;

        VABufferInfo buffer_info = {.mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME};
        status = vaAcquireBufferHandle(display, va_image->buf, &buffer_info);
        if (!CHECK_VA_STATUS(mapper, "vaAcquireBufferHandle()"))
            goto err;
        p->buffer_acquired = true;
        int drm_fmts[8] = {
            // 1 byte per component, 1-4 components
            MKTAG('R', '8', ' ', ' '),  // DRM_FORMAT_R8
            MKTAG('G', 'R', '8', '8'),  // DRM_FORMAT_GR88
            0,                          // untested (DRM_FORMAT_RGB888?)
            0,                          // untested (DRM_FORMAT_RGBA8888?)
            // 2 bytes per component, 1-4 components
            MKTAG('R', '1', '6', ' '),  // proposed DRM_FORMAT_R16
            MKTAG('G', 'R', '3', '2'),  // proposed DRM_FORMAT_GR32
            0,                          // N/A
            0,                          // N/A
        };

        if (!p_owner->interop_map_legacy(mapper, &buffer_info, drm_fmts))
            goto err;

        if (va_image->format.fourcc == VA_FOURCC_YV12)
            MPSWAP(struct ra_tex*, mapper->tex[1], mapper->tex[2]);

        return 0;
    } else {
        mapper_unmap(mapper);
        goto err;
    }

err:
    if (!p_owner->probing_formats)
        MP_FATAL(mapper, "mapping VAAPI EGL image failed\n");
    return -1;
}
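p_owner->interop_map() is the backend-specific half of the mapping. Roughly, an EGL/dmabuf implementation imports each layer of the exported p->desc (a VADRMPRIMESurfaceDescriptor) as an EGLImage and binds it to the GL texture backing mapper->tex[n]. The sketch below is only an illustration under stated assumptions: a single-plane layer, no chroma width/height adjustment, the GL headers available, and the extension entry points (eglCreateImageKHR, eglDestroyImageKHR, glEGLImageTargetTexture2DOES) already resolved via eglGetProcAddress; it is not mpv's actual interop code.

#include <EGL/egl.h>
#include <EGL/eglext.h>

static bool import_layer_egl(EGLDisplay egl_display, GLuint gl_texture,
                             const VADRMPRIMESurfaceDescriptor *desc, int n)
{
    // Describe layer n of the exported surface as a dmabuf for EGL.
    EGLint attribs[] = {
        EGL_WIDTH,                     (EGLint)desc->width,
        EGL_HEIGHT,                    (EGLint)desc->height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)desc->layers[n].drm_format,
        EGL_DMA_BUF_PLANE0_FD_EXT,     desc->objects[desc->layers[n].object_index[0]].fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)desc->layers[n].offset[0],
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)desc->layers[n].pitch[0],
        EGL_NONE,
    };
    EGLImageKHR image = eglCreateImageKHR(egl_display, EGL_NO_CONTEXT,
                                          EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
    if (image == EGL_NO_IMAGE_KHR)
        return false;
    // Attach the EGLImage to the GL texture; the texture keeps its own
    // reference, so the EGLImage handle can be dropped right away.
    glBindTexture(GL_TEXTURE_2D, gl_texture);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
    eglDestroyImageKHR(egl_display, image);
    return true;
}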
static bool try_format(struct ra_hwdec *hw, struct mp_image *surface)
{
    bool ok = false;
    struct ra_hwdec_mapper *mapper = ra_hwdec_mapper_create(hw, &surface->params);
    if (mapper)
        ok = ra_hwdec_mapper_map(mapper, surface) >= 0;
    ra_hwdec_mapper_free(&mapper);
    return ok;
}
static void determine_working_formats(struct ra_hwdec *hw)
{
struct priv_owner *p = hw->priv;
    int num_formats = 0;
    int *formats = NULL;

    p->probing_formats = true;

    AVHWFramesConstraints *fc =
        av_hwdevice_get_hwframe_constraints(p->ctx->av_device_ref, NULL);
    if (!fc) {
        MP_WARN(hw, "failed to retrieve libavutil frame constraints\n");
        goto done;
    }
    for (int n = 0; fc->valid_sw_formats[n] != AV_PIX_FMT_NONE; n++) {
        AVBufferRef *fref = NULL;
        struct mp_image *s = NULL;
        AVFrame *frame = NULL;
        fref = av_hwframe_ctx_alloc(p->ctx->av_device_ref);
        if (!fref)
            goto err;
        AVHWFramesContext *fctx = (void *)fref->data;
        fctx->format = AV_PIX_FMT_VAAPI;
        fctx->sw_format = fc->valid_sw_formats[n];
        fctx->width = 128;
        fctx->height = 128;
        if (av_hwframe_ctx_init(fref) < 0)
            goto err;
        frame = av_frame_alloc();
        if (!frame)
            goto err;
        if (av_hwframe_get_buffer(fref, frame, 0) < 0)
            goto err;
        s = mp_image_from_av_frame(frame);
        if (!s || !mp_image_params_valid(&s->params))
            goto err;
        if (try_format(hw, s))
            MP_TARRAY_APPEND(p, formats, num_formats, s->params.hw_subfmt);
    err:
        talloc_free(s);
        av_frame_free(&frame);
        av_buffer_unref(&fref);
    }
    av_hwframe_constraints_free(&fc);
done:
    MP_TARRAY_APPEND(p, formats, num_formats, 0); // terminate it
    p->formats = formats;
    p->probing_formats = false;

    MP_VERBOSE(hw, "Supported formats:\n");
    for (int n = 0; formats[n]; n++)
        MP_VERBOSE(hw, " %s\n", mp_imgfmt_to_name(formats[n]));
}
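The zero-terminated formats list built here is what the probing check near the top of this section (check_fmt()) presumably consults once real surfaces arrive. A minimal sketch of such a lookup, with a hypothetical name to avoid implying it is the real helper's definition:

static bool formats_contain(struct priv_owner *p_owner, int fmt)
{
    // p_owner->formats is the zero-terminated list assembled in
    // determine_working_formats() above.
    for (int n = 0; p_owner->formats && p_owner->formats[n]; n++) {
        if (p_owner->formats[n] == fmt)
            return true;
    }
    return false;
}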
const struct ra_hwdec_driver ra_hwdec_vaegl = {
    .name = "vaapi-egl",
    .priv_size = sizeof(struct priv_owner),
    .imgfmts = {IMGFMT_VAAPI, 0},
    .init = init,
    .uninit = uninit,
    .mapper = &(const struct ra_hwdec_mapper_driver){
        .priv_size = sizeof(struct priv),
        .init = mapper_init,
        .uninit = mapper_uninit,
        .map = mapper_map,
        .unmap = mapper_unmap,
    },
};