Hardware decoding/displaying with vo_opengl is done by replacing the
normal video textures with textures provided by the OpenGL interop code
for the hardware decoding API. This often changes the format (vaglx and
vdpau return RGBA, vda returns packed YUV).
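To picture the replacement, the interop side roughly exposes hooks like
the following (a minimal sketch; the struct and callback names here are
illustrative assumptions, not the actual interface):

    /* Sketch of a GL hwdec interop driver; all names are hypothetical. */
    struct gl_hwdec;             /* per-driver state (assumed) */
    struct mp_image;             /* decoded frame carrying the hw surface */
    typedef unsigned int GLuint; /* stand-in for the GL header typedef */

    struct gl_hwdec_driver_sketch {
        /* Set up the decoding-API/GL interop objects. */
        int (*create)(struct gl_hwdec *hw);
        /* Map a hw surface to GL textures, which then replace the normal
         * video textures in the renderer. */
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                         GLuint *out_textures);
        void (*destroy)(struct gl_hwdec *hw);
    };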
If the format is changed, there was a chance (or at least a higher
potential for bugs) that the shader generation code would be confused by
the mismatch of formats and create incorrect conversions.
Simplify this by requiring the hwdec interop driver to set the format it
will return to us. This affects all fields of the image parameters, not
just some of them (done by replacing the format with the value of the
converted_imgfmt field in init_format); in particular it affects fields
like colorlevels.
Currently, no hwdec interop driver does anything sophisticated, and the
win comes mostly from the mp_image_params_guess_csp() function, which
resets fields like colorlevels to the expected values when RGBA is used.
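Concretely, the change amounts to something like this (a sketch:
init_format, converted_imgfmt and mp_image_params_guess_csp() are taken
from the text above; everything else, including the IMGFMT constant, is
an assumption):

    /* Driver side: declare the format the mapped textures will really
     * have, so no bogus YUV conversion is generated for them. */
    static int init_format(struct gl_hwdec *hw, struct mp_image_params *params)
    {
        hw->converted_imgfmt = IMGFMT_RGBA;
        return 0;
    }

    /* VO side: overwrite the decoder-provided format and re-derive the
     * dependent fields; for RGBA this resets e.g. colorlevels to the
     * values expected for RGB. */
    static void fixup_hwdec_params(struct gl_hwdec *hw,
                                   struct mp_image_params *params)
    {
        params->imgfmt = hw->converted_imgfmt;
        mp_image_params_guess_csp(params);
    }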
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use it in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
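The shape of the generic path is roughly the following (a sketch; the
hook name and signature are assumptions made for illustration):

    struct mp_hwdec_ctx;
    struct mp_image;

    /* Hypothetical copy-back hook: convert a hw surface into a normal
     * system-memory image that e.g. the screenshot code can use. Each
     * backend (vaapi, vdpau, vda) supplies its own implementation; the
     * vdpau one is the complicated part mentioned above. */
    typedef struct mp_image *(*hwdec_download_image_fn)(
        struct mp_hwdec_ctx *ctx, struct mp_image *hw_image);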
Before this commit, each hw backend had its own specific struct types
for the context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
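In rough outline (only the names mp_hwdec_ctx and mp_hwdec_info come
from the text; the fields shown are assumptions):

    /* Generic wrapper passed around instead of per-API context types. */
    struct mp_hwdec_ctx {
        int type;    /* which backend: vdpau, vaapi, vda, ... */
        void *priv;  /* backend-specific context, e.g. a vdpau device */
    };

    /* mp_hwdec_info keeps only its delayed-loading role: the context is
     * filled in once the backend has actually been loaded on request. */
    struct mp_hwdec_info {
        struct mp_hwdec_ctx *hwctx;
    };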
This adds an API to libmpv that lets host applications use the mpv
OpenGL renderer. This is a more flexible (and possibly more portable)
alternative to foreign window embedding (via --wid).
This assumes that methods like context sharing and multithreaded OpenGL
rendering are infeasible, and that a way is needed to integrate it with
an application that uses a single thread to render everything.
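In terms of usage, the embedding pattern looks roughly like this (a
sketch based on the opengl_cb sub-API as it later stabilized in libmpv;
entry point names and signatures at the time of this commit may have
differed):

    #include <stddef.h>
    #include <mpv/client.h>
    #include <mpv/opengl_cb.h>

    /* Called by libmpv (possibly from any thread) when a new frame
     * should be drawn; the host schedules a redraw on its render thread. */
    static void on_mpv_redraw(void *ctx)
    {
        (void)ctx; /* e.g. wake up the application's render loop */
    }

    /* Resolve GL functions with the host's own loader (stubbed here). */
    static void *get_proc_address(void *ctx, const char *name)
    {
        (void)ctx; (void)name;
        return NULL; /* use the toolkit's GL function resolver instead */
    }

    static mpv_opengl_cb_context *setup_render(mpv_handle *mpv)
    {
        mpv_opengl_cb_context *gl =
            mpv_get_sub_api(mpv, MPV_SUB_API_OPENGL_CB);
        mpv_opengl_cb_init_gl(gl, NULL, get_proc_address, NULL);
        mpv_opengl_cb_set_update_callback(gl, on_mpv_redraw, NULL);
        return gl;
    }

    /* On the host's render thread, with its GL context current:
     *     mpv_opengl_cb_draw(gl, fbo, width, height);
     * and on shutdown:
     *     mpv_opengl_cb_uninit_gl(gl);
     * (The player also has to be told to use this renderer via the
     * corresponding vo setting.) */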
Add an example that does this with QtQuick/QML. The example is
relatively lazy, but it still shows how simple the integration is. The
FBO indirection could probably be avoided, but that would require more
work (and would probably lead to worse QtQuick integration, because it
would have to ignore transformations like rotation).
Because this makes mpv directly use the host application's OpenGL
context, there is no platform-specific code involved in mpv, except
for hw decoding interop.
main.qml is derived from some Qt example.
The following things are still missing:
- a way to do better video timing
- expose GL renderer options, allow changing them at runtime
- support for color equalizer controls
- support for screenshots
This wasn't done before because there was no advantage in "abstracting"
it. This changed, and putting this into its own files is better than
mixing it into gl_common.c/h.