Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug, this required a workaround to actually get the same format the
driver had chosen.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored, as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
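As a rough illustration (not mpv's exact code), probing the driver-chosen
format of a surface boils down to deriving a VAImage and reading its fourcc:

    #include <va/va.h>

    /* Sketch: ask the driver what format a surface actually has. */
    static uint32_t probe_surface_fourcc(VADisplay display, VASurfaceID surface)
    {
        VAImage img;
        if (vaDeriveImage(display, surface, &img) != VA_STATUS_SUCCESS)
            return 0;                         /* not supported by all drivers */
        uint32_t fourcc = img.format.fourcc;  /* e.g. VA_FOURCC_NV12 */
        vaDestroyImage(display, img.image_id);
        return fourcc;
    }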
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
This VA_FOURCC isn't even defined by the latest drivers, so I'm just
assuming it doesn't exist and never existed. For planar 4:2:0,
VA_FOURCC_YV12 is normally preferred, and there's even a VA_FOURCC_IYUV
for 4:2:0 with unswapped planes.
0.34 and 0.35 don't have the buffer API, such as vaAcquireBufferHandle.
This is only needed for the EGL interop, but why bother staying
compatible with such old versions (0.36 was released over a year ago)?
We can also drop some minor compatibility ifdeffery.
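For reference, the buffer API in question is used roughly like this to
export the buffer behind a derived VAImage as a dma-buf for EGL import
(a simplified sketch; the function name and error handling are made up):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    /* Sketch only: acquire a DRM PRIME handle for the image's buffer. */
    static int export_image_buffer(VADisplay display, VAImage *img,
                                   VABufferInfo *out)
    {
        out->mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;
        if (vaAcquireBufferHandle(display, img->buf, out) != VA_STATUS_SUCCESS)
            return -1;
        /* ... import out->handle into EGL here ... */
        vaReleaseBufferHandle(display, img->buf);
        return 0;
    }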
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst part of this is the decoder, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
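The translation is essentially just picking the right vaGetDisplay*()
variant for whatever native display we get handed; roughly (parameter
names are illustrative):

    #include <va/va_x11.h>
    #include <va/va_wayland.h>

    /* Sketch: pick the display constructor matching the native display. */
    static VADisplay get_va_display(Display *x11, struct wl_display *wl)
    {
        if (x11)
            return vaGetDisplay(x11);
        if (wl)
            return vaGetDisplayWl(wl);
        return NULL;
    }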
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than system memcpy if used on system
memory. We can't tell for sure which type of memory a given surface is
in, so we use the GPU memcpy in all cases.
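The idea is essentially a streaming-load copy. A minimal sketch (assuming
SSE 4.1 and buffers that are 16-byte aligned with a size that is a
multiple of 16, which the real code must not assume):

    #include <stddef.h>
    #include <smmintrin.h>   /* SSE4.1: _mm_stream_load_si128 */

    static void stream_copy(void *dst, void *src, size_t size)
    {
        __m128i *d = dst, *s = src;
        for (size_t i = 0; i < size / 16; i++) {
            /* non-temporal load; fast on write-combined (GPU) memory */
            __m128i v = _mm_stream_load_si128(&s[i]);
            _mm_store_si128(&d[i], v);
        }
    }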
Fixes #2317.
This can happen if the hw decoder allocates padded surfaces (e.g.
mod16), but the VPP output surface was allocated with the exact size.
Apparently VPP requires matching input and output sizes, or it will add
artifacts. In this case, it added mirrored pixels in the bottom few
rows.
Note that the previous commit should have fixed this. But it didn't
work, while this commit does.
Fixes #2320.
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature - it actually did
so in 0.34.0 or so, and <va/va_compat.h> defined a macro to map the old
signature to the new one.
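For reference, the post-0.34 call looks roughly like this (the old
variant took width/height/format/num_surfaces in a different order and
had no attribute list):

    VASurfaceID surfaces[16];
    VAStatus status = vaCreateSurfaces(display, VA_RT_FORMAT_YUV420,
                                       width, height,
                                       surfaces, 16, NULL, 0);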
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because usually mpv will be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
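Conceptually, the change amounts to passing a "probing" flag down and
demoting the log level, something like this (a sketch; "probing" and
"vdp_st" stand in for the actual variables):

    /* During probing, driver load failures are only logged at verbose level. */
    mp_msg(log, probing ? MSGL_V : MSGL_ERR,
           "Error when calling vdp_device_create_x11: %d\n", vdp_st);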
This pretty much goes through all the layers. We always treat loading of
hw backends for vo_opengl as "auto probed", even if a hw backend is
explicitly requested. In this case, vd_lavc will print a warning message
anyway (adjust this message a bit).
Absence of a license header implies LGPL, as mentioned in the "Copyright"
file. But vaapi.h contains some code taken from the mplayer-vaapi
patch, which was under the typical MPlayer license.
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
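Roughly, the shared context is just a small tagged struct along these
lines (field names are illustrative, not necessarily the exact ones):

    struct mp_hwdec_ctx {
        int type;                /* which API: vdpau, vaapi, vda, ... */
        void *priv;              /* API-specific state (e.g. wrapped VADisplay) */
        const char *driver_name; /* for logging */
    };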
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
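The locking itself is trivial; schematically, every libva call ends up
wrapped like this (a sketch, not mpv's actual helper names):

    #include <pthread.h>
    #include <va/va.h>

    struct va_lock_ctx {
        VADisplay display;
        pthread_mutex_t lock;
    };

    static VAStatus locked_sync_surface(struct va_lock_ctx *ctx, VASurfaceID s)
    {
        pthread_mutex_lock(&ctx->lock);
        VAStatus status = vaSyncSurface(ctx->display, s);
        pthread_mutex_unlock(&ctx->lock);
        return status;
    }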
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
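The check itself is as dumb as it sounds; roughly (the exact substring
matched against the vendor string is an assumption here):

    #include <stdbool.h>
    #include <string.h>
    #include <va/va.h>

    /* Sketch: guess whether the vaapi driver is the vdpau wrapper. */
    static bool va_is_emulated(VADisplay display)
    {
        const char *vendor = vaQueryVendorString(display);
        return vendor && strstr(vendor, "VDPAU backend");
    }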
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
It's not really needed to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
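The accessor is trivial; something along these lines, relying on the
convention that IMGFMT_VAAPI images carry the surface ID in one of the
plane pointers (a sketch, not necessarily the exact code):

    #include <stdint.h>
    #include <va/va.h>

    static VASurfaceID va_surface_id(struct mp_image *mpi)
    {
        if (!mpi || mpi->imgfmt != IMGFMT_VAAPI)
            return VA_INVALID_ID;
        return (VASurfaceID)(uintptr_t)mpi->planes[3];
    }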
Although I at first thought it would be better to have a separate
implementation for hwaccels, because the differences from software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
This ended up a little bit messy. In order to get a mp_log everywhere,
mostly make use of the fact that va_surface already references global
state anyway.
Don't allocate a VAImage and a mp_image every time. VAImages are cached
in the surfaces themselves, and for mp_image an explicit pool is
created. The retry loop runs only once for each surface now.
This also makes use of vaDeriveImage() if possible.
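For reference, the vaDeriveImage() mapping path looks roughly like this
(with a fallback to vaCreateImage()+vaGetImage() when deriving isn't
supported; a simplified sketch):

    VAImage img;
    if (vaDeriveImage(display, surface, &img) == VA_STATUS_SUCCESS) {
        void *data = NULL;
        if (vaMapBuffer(display, img.buf, &data) == VA_STATUS_SUCCESS) {
            /* planes are at data + img.offsets[i], stride img.pitches[i] */
            vaUnmapBuffer(display, img.buf);
        }
        vaDestroyImage(display, img.image_id);
    }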
Merged from pull request #246 by xylosper. Minor cosmetic changes, some
adjustments (compatibility with older libva versions), and manpage
additions by wm4.
Signed-off-by: wm4 <wm4@nowhere>
This is based on the MPlayer VA API patches. To be exact, it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.
This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).
On the other hand, some features not present in the original patch were
added, like screenshot support.
VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and is probably rather slow. In some cases, only one of OSD
and subtitles can be shown at the same time (because the OSD is drawn
first, it takes precedence).
Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
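For clarity, the only difference between the two conventions is whether
the color channels have already been scaled by alpha; converting straight
to premultiplied alpha per 8-bit channel is just:

    #include <stdint.h>

    /* straight -> premultiplied alpha, per 8-bit color channel */
    static inline uint8_t premultiply(uint8_t color, uint8_t alpha)
    {
        return (color * alpha + 127) / 255;
    }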
Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of
being accepted.)
Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on a nvidia system; however this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)