Often, we don't know whether hardware decoding will work until we've
tried. (This used to be different, but API changes and improvements in
libavcodec led to this situation.) We will often output that we're going
to use hardware decoding, and then print a fallback warning.
Instead, print the status once we have decoded a frame.
Some of the old messages are turned into verbose messages, which should
be helpful for debugging. Also add some new ones.
The fallback at initialization time was basically duplicated, maybe for
the sake of showing a different error message. This doesn't matter
anymore; not much can fail at initialization these days. Most meaningful
and common errors happen either at probing or in get_format (when the
actual hw decoder is initialized).
libavcodec does not support HEVC via VAAPI yet, so this won't work.
However, there is ongoing work to add HEVC support to VAAPI, and this
change might help with testing. (Or maybe not - but there is no harm in
this change.)
This affects vo_opengl_cb in particular: it'll most likely auto-load
VDA, and then the VideoToolbox decoder won't work. And everything fails.
This is mainly caused by FFmpeg using separate pixfmts for the _same_
thing (CVPixelBuffers), simply because libavcodec's architecture demands
that hwaccel backends are selected by pixfmts. (Which makes no sense,
but now we have the mess.)
So instead of duplicating FFmpeg's misdesign, just change the format to
our own canonical one on the image output by the decoder. Now the GL
interop code is exactly the same for VDA and VT, and we use the VT name
only.
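A rough sketch of the resulting fixup (the IMGFMT_* constants and the
mp_image_setfmt helper mirror mpv internals, but treat the exact names
here as assumptions):

    // Both pixfmts wrap the same CVPixelBufferRef, so relabel the
    // decoder's output as the one canonical format; the GL interop
    // then only ever has to handle IMGFMT_VIDEOTOOLBOX.
    static void canonicalize_vt_format(struct mp_image *img)
    {
        if (img->imgfmt == IMGFMT_VDA)
            mp_image_setfmt(img, IMGFMT_VIDEOTOOLBOX);
    }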
While the "old" libavcodec vdpau API is not deprecated (only the very-
old API is), it's still relatively complicated code that badly
duplicates the much simpler newer vdpau code. It exists only for the
sake of older FFmpeg releases; get rid of it.
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Revert "win32: more wchar_t -> WCHAR replacements"
Revert "win32: replace wchar_t with WCHAR"
Doing a "partial" port of this makes no sense anymore from my
perspective. Revert the changes, as they're confusing without
context, maintenance, and progress. These changes were a bit
premature anyway, and might actually cause other issues
(locale neutrality etc. as it was pointed out).
This was essentially missing from commit 0b52ac8a.
Since L"..." string literals have the type wchar_t[], we can't use them
for UTF-16 strings. Use C11 u"..." string literals instead. These have
the type char16_t[], but we simply assume char16_t is the same
underlying type as WCHAR. In practice, they're both unsigned short.
For this reason use -std=c11 on Windows. Since Windows is a "special"
environment (we require either MinGW or Cygwin), we don't need to worry
too much about compiler compatibility.
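To illustrate the assumption (SetWindowTextW is just an arbitrary
example of a W-suffixed API taking WCHAR strings):

    #include <uchar.h>      // char16_t (C11)
    #include <windows.h>

    // u"..." has type char16_t[]; we assume char16_t and WCHAR have
    // the same underlying representation (unsigned short), so the
    // cast is harmless.
    static const char16_t title[] = u"mpv";

    static void set_title(HWND window)
    {
        SetWindowTextW(window, (const WCHAR *)title);
    }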
Fixes problems with --vo=opengl:interpolation. The issue here is that
vo_opengl retains more surfaces than what was preallocated for the
decoder. Until now, we just explicitly failed to decode frames for which
no additional surfaces are available. Since modern drivers usually are
fine with not "registering" surfaces before the decoder is created, just
allow allocating additional surfaces if needed.
(We also could probably recreate the HW decoder, since the HW decoder
should be stateless. But let's try to avoid raising the overall
complexity of the code.)
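A hedged sketch of the new behavior (the pool and allocator helpers
here are illustrative names, not mpv's actual API):

    static struct mp_image *get_decoder_surface(struct priv *p)
    {
        // Try the preallocated pool first.
        struct mp_image *img = pool_get_free_surface(p->pool);
        if (!img) {
            // Modern drivers accept surfaces created after decoder
            // init, so allocate one more instead of failing the
            // decode.
            img = allocate_hw_surface(p->ctx, p->w, p->h);
            if (img)
                pool_register_surface(p->pool, img);
        }
        return img;
    }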
Sometime recently, hardware decoding started to fail if h264 with full
reference frames was decoded, and --vo=vaapi was used. VAAPI requires
registering all surfaces that the decoder will ever use in advance, so
if the playback chain uses more surfaces than originally allocated, we
fail and drop back to software decoding.
I'm not really sure why or when this started happening. Commit 7b9d7265
for one is not the cause - it can be reproduced with earlier commits. It
also seems to be timing dependent. Possibly it has to do with the way
vo.c retains previous surfaces, and the way they can be queued/unqueued
asynchronously.
Increasing the number of reserved additional surfaces by 1 fixes it.
(Though I have no idea where exactly all these surfaces are being used.
Or rather, _when_.)
Basically, we need to make sure to allocate enough data for the pretty
dumb copy_nv12 function. (It could be avoided by making the function
less dumb, but this fix is simpler.)
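For reference, the amount of data a plain NV12 copy needs (a worked
illustration, not the actual copy_nv12 code):

    #include <stddef.h>

    // NV12: a full-resolution Y plane plus a half-height interleaved
    // UV plane, i.e. 1.5 bytes per pixel at the aligned/padded size.
    static size_t nv12_buffer_size(size_t stride, size_t aligned_height)
    {
        return stride * aligned_height          // Y
             + stride * (aligned_height / 2);   // interleaved UV
    }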
mpv had refcounted frames before libav*, so we were not using
libavutil's facilities. Change this and drop our own code.
Since AVFrames themselves are not refcounted (only the image data
they reference is), the semantics change a bit. This affects mainly
mp_image_pool, which was operating on whole images instead of buffers.
While we could work on AVBufferRefs instead (and use AVBufferPool),
this doesn't work for use with hardware decoding, which doesn't
map cleanly to FFmpeg's reference counting. But it worked out. One
weird consequence is that we still need our custom image data
allocation function (for normal image data), because AVFrame's own
allocation uses multiple buffers.
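The model is roughly this (a sketch using plain libavutil calls):

    #include <libavutil/frame.h>

    // The AVFrame struct itself is not refcounted; av_frame_ref()
    // makes a new AVFrame that shares (and ref-bumps) the underlying
    // AVBufferRefs.
    static AVFrame *share_frame(const AVFrame *src)
    {
        AVFrame *dst = av_frame_alloc();
        if (dst && av_frame_ref(dst, src) < 0)
            av_frame_free(&dst);    // sets dst to NULL on failure
        return dst;
    }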
There also seems to be a timing-dependent problem with vaapi (the
pool appears to be "leaking" surfaces). I don't know if this is a new
problem, or whether the code changes just happened to cause it more
often. Raising the number of reserved surfaces seemed to fix it, but
since it appears to be timing dependent, and I couldn't find anything
wrong with the code, I'm just going to assume it's not a new bug.
This is basically a hack for drivers which prevent the mpv DXVA2 decoder
glue from working if OpenGL is in fullscreen mode.
Since it doesn't add any "hard" new API to the client API, since some
of the code would be required for a true zero-copy hw decoding pipeline
anyway, and since it isn't too much code after all, this is probably
acceptable.
Again. With the old OpenGL interop dropped, this probably works better
than vaapi-copy now. Last time we defaulted to vaapi-copy, because the
OpenGL interop could swap U/V planes and other stupid crap. We'll see.
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because mpv will usually be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
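The pattern boils down to something like this (MSGL_* and mp_msg match
mpv's logging conventions; the call site itself is made up):

    static void report_hwdec_failure(struct mp_log *log, bool probing,
                                     const char *name)
    {
        // A failed backend is expected during auto-probing, so demote
        // the message; an explicitly requested backend still gets a
        // real error.
        int level = probing ? MSGL_V : MSGL_ERR;
        mp_msg(log, level, "Could not load hwdec backend '%s'.\n", name);
    }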
When showing cover art, the decoding logic pretends that the source has
an infinite number of frames. This slightly simplifies dealing with
filter data flow. It was done by feeding the same packet repeatedly to
the decoder (each decode run produces new output).
Change this by decoding once at the video initialization. This is easier
to follow, and increases robustness in case of broken images. Usually,
we try to tolerate decoding errors, so decoding normally continues, but
in this case it would just burn the CPU for no reason.
Fixes #2056.
This must have been some nonsense in the original vaapi mplayer patch.
While I still have no good idea what this "direct mapping" business is
about, it appears to be pretty much pointless. Nothing can hold
additional "real" surface references (due to how the API and mpv/lavc
refcounting work), so removing the additional surfaces won't break
anything. It still could be that this was for achieving additional
buffering (not reusing surfaces as soon), but we buffer some additional
data anyway. Plus, the original intention of the vaapi mplayer code was
probably to increase the surface count just by 1 or 2, not to double
it, and/or it was a "trick" to get to the maximum count of 21 when h264
is in use.
gstreamer-vaapi uses "ref_frames + SCRATCH_SURFACES_COUNT" here, with
SCRATCH_SURFACES_COUNT defined to 4. It doesn't appear to check the
overlay attributes at all in the decoder.
In any case, remove this nonsense.
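What remains is roughly the following (the constant name is made up;
gstreamer-vaapi's equivalent of it is 4):

    #define ADDITIONAL_SURFACES 4   // scratch surfaces for the chain

    static int vaapi_surface_count(int ref_frames)
    {
        // The old code roughly doubled this; now it's just the
        // reference frames plus a fixed reserve.
        return ref_frames + ADDITIONAL_SURFACES;
    }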
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of fallback. We must go through the trouble of
determining the first actual sw format.
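A sketch of what that amounts to, using libavutil's format descriptors:

    #include <libavutil/pixdesc.h>

    static enum AVPixelFormat first_sw_format(const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[i];  // first entry that is not a hwaccel format
        }
        return AV_PIX_FMT_NONE;
    }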
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
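In essence, the decoder-side setup reduces to the new libavcodec
helper (shown with flags left at 0; error handling omitted):

    #include <libavcodec/vdpau.h>

    static int bind_vdpau(AVCodecContext *avctx, VdpDevice device,
                          VdpGetProcAddress *get_proc_address)
    {
        // Replaces filling an AVVDPAUContext and mapping the codec
        // profile by hand.
        return av_vdpau_bind_context(avctx, device, get_proc_address, 0);
    }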
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
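The alignment itself is trivial (a sketch; 16 is the usual macroblock
alignment for h264):

    #define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

    // Allocate surfaces at the coded (aligned) size rather than
    // trusting the hw decoding API to round up the cropped display
    // size for us.
    static void aligned_surface_size(int w, int h, int *out_w, int *out_h)
    {
        *out_w = ALIGN_UP(w, 16);
        *out_h = ALIGN_UP(h, 16);
    }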
There's not much of a reason to keep get_surface_hwdec() and
get_buffer2_hwdec() separate. Actually, the way the mpi->AVFrame
referencing is done makes this confusing. The separation is probably
an artifact of the pre-libavcodec-refcounting compatibility glue.
Most of hardware decoding is initialized lazily. When the first packet
is parsed, libavcodec will call get_format() to check whether hw or sw
decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
get_format() if hw decoder initialization failed. This caused the
avcodec_decode_video2() call to fail, which in turn let us trigger the
fallback. We didn't return a sw format from get_format(), because we
didn't want to continue decoding at all. (The reason being that full
reinitialization is more robust when continuing sw decoding.)
This has some disadvantages. libavcodec vomited some unwanted error
messages. Sometimes the failures are more severe, like it happened with
HEVC. In this case, the error code path simply acted up in a way that
was extremely inconvenient (and had to be fixed by myself). In general,
libavcodec is not designed to fall back this way.
Make it a bit less violent from the API usage point of view. Return a sw
format if hw decoder initialization fails. In this case, we let
get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
allowed to perform its own sw fallback. But once the decode function
returns, we do the full reinitialization we wanted to do.
The result is that the fallback is more robust, and doesn't trigger any
decoder error codepaths or messages either. Change our own fallback
message to a warning, since there are no other messages with error
severity anymore.
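The resulting control flow looks roughly like this (the context struct
and init_hw_decoder() are illustrative; first_sw_format() is the helper
sketched earlier):

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat get_format_hwdec(AVCodecContext *avctx,
                                               const enum AVPixelFormat *fmt)
    {
        struct lavc_ctx *ctx = avctx->opaque;

        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            if (fmt[i] != ctx->hwdec_pix_fmt)
                continue;
            if (init_hw_decoder(ctx, avctx) >= 0)
                return fmt[i];
            break;
        }
        // Hw init failed: return a sw format so libavcodec can do its
        // own sw fallback (get_buffer2 then defers to
        // avcodec_default_get_buffer2); we do the full reinit once the
        // decode call returns.
        ctx->hwdec_failed = true;
        return first_sw_format(fmt);
    }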
This is pretty much copy&pasted from Libav commit
a7e0380497306d9723dec8440a4c52e8bf0263cf.
Note that if FFmpeg was not compiled with HEVC DXVA2 support or your
video drivers do not support HEVC, the player will not fall back and will
simply fail to decode any video. This is because libavcodec appears not
to return an error in this case. The situation is made worse by the
fact that MSYS2 is on an ancient MinGW-w64 release, which does not
have the required headers for HEVC DXVA2 support.
The hardware always decodes to nv12, so using this image format causes
less CPU usage than uyvy (which we are currently using, since Apple
examples and other free software use it). The reduction in CPU usage can
add up to quite a bit, especially for 4K or high-fps video.
This needs an accompanying commit in libavcodec.
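For reference, nv12 corresponds to the biplanar 4:2:0 CVPixelBuffer
type (whether the full- or video-range variant is requested is a detail
of that patch):

    #include <CoreVideo/CoreVideo.h>

    // Packed 4:2:2 (uyvy), what we request today:
    static const OSType old_fmt = kCVPixelFormatType_422YpCbCr8;
    // Biplanar 4:2:0 ("nv12"), what the hardware actually produces:
    static const OSType new_fmt =
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;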
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't cycle automatically through the
color constants anymore (but who needs to do this).
This also changes the type of the mp_csp_names and related variables, so
they can directly be used with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
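After this change, the same effect is achieved through the filter, for
example (illustrative invocation):

    mpv --vf=format=colormatrix=bt.601 file.mkv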
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fallback to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only if
RPI support is compiled in (which it hopefully will not be on desktop
Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding). So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" in a
higher level.
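The derived timing then effectively becomes (an illustration of the
idea, not the actual demuxer/decoder code):

    #include <stdint.h>

    // With no container timestamps, synthesize them from what we do
    // know: the audio sample counter and the (assumed) video FPS.
    static double audio_pts(int64_t samples_played, int samplerate)
    {
        return (double)samples_played / samplerate;
    }

    static double video_pts(int64_t frames_shown, double fps)
    {
        return frames_shown / fps;
    }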
For some reason there were two points in the code where it warned
against non-monotonic video PTS. The one in video.c triggered on PTS
going backwards or making large jumps forwards, while dec_video.c
triggered on PTS going backwards or PTS not changing. Merge them into a
single check, which warns against all cases.
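The merged condition is essentially (a sketch; the jump threshold is
illustrative):

    #include <stdbool.h>

    // Warn if PTS goes backwards, repeats, or jumps too far ahead.
    static bool pts_is_suspicious(double prev, double cur)
    {
        const double MAX_FORWARD_JUMP = 60;     // seconds, illustrative
        return cur <= prev || cur - prev > MAX_FORWARD_JUMP;
    }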
This played e.g. a 1264x722 file as 1264x720. There was some code which
dropped the aspect ratio if the video (in original resolution) wasn't
scaled by more than 4 pixels. Commit 5f3c3f8c introduced this (although
I'm not really sure what the code replaced by it did).
Just remove this "feature".
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has causing us more problems than
it solved.
Unfortunately this also leads to copying if "--hwdec=auto --vo=vaapi" is
used, even though GLX is not involved in this case - but I don't care
enough to make the probe logic cleverer just for this. You can still get
the zero-copy path with --hwdec=vaapi.