This uses ID3D11VideoProcessor to convert the video to an RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind nv12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control over the
colorspace conversion, though it's still not good, and it's not available
in the MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, which AVFrame and mp_image have no concept of. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
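As a rough sketch of that per-frame path (not mpv's actual code; the
function, parameter names, and the lazily created processor/enumerator
setup are all assumptions), recreating the input view and blitting could
look roughly like this:

    #define COBJMACROS
    #include <d3d11.h>

    // Hedged sketch: create a fresh input view for the decoder surface and
    // blit it into the RGBA output view. Processor, enumerator and output
    // view are assumed to have been created (lazily) elsewhere.
    static HRESULT blit_frame(ID3D11VideoDevice *vdev, ID3D11VideoContext *vctx,
                              ID3D11VideoProcessor *vproc,
                              ID3D11VideoProcessorEnumerator *venum,
                              ID3D11Texture2D *decoder_tex, UINT array_slice,
                              ID3D11VideoProcessorOutputView *out_view)
    {
        D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC vdesc = {
            .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
            .Texture2D = { .ArraySlice = array_slice },
        };
        ID3D11VideoProcessorInputView *in_view = NULL;
        HRESULT hr = ID3D11VideoDevice_CreateVideoProcessorInputView(vdev,
                (ID3D11Resource *)decoder_tex, venum, &vdesc, &in_view);
        if (FAILED(hr))
            return hr;
        D3D11_VIDEO_PROCESSOR_STREAM stream = {
            .Enable = TRUE,
            .pInputSurface = in_view,
        };
        hr = ID3D11VideoContext_VideoProcessorBlt(vctx, vproc, out_view, 0, 1,
                                                  &stream);
        // The view is recreated on every frame, so release it right away.
        ID3D11VideoProcessorInputView_Release(in_view);
        return hr;
    }
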
For Mediacodec in particular, we don't care about the format. It can just
decode to whatever it wants. The only case we would care about is making
sure it does not return an opaque format when we don't have proper interop,
but libavcodec always returns non-opaque formats by default.
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.
With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.
The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
This is intended for cases when --hwdec needs to override the decoder
implementation in use, like for example on the RPI.
It does two things:
1. Allow the hwdec to indicate a decoder suffix. libavcodec by
convention adds a suffix to all wrapper decoders, and here we start
relying on it. While not necessarily the best idea, it's the only
thing we've got. libavcodec's hwaccel list is useless, because it only
has the codec ID, not the associated decoder's name. (A sketch of this
suffix lookup follows below.)
2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
probing should reliably work, even if a different decoder is selected
with --vd. The semantics of --hwdec should dictate that it overrides
the default decoder.
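A minimal sketch of the suffix lookup from point 1 (the helper name is
illustrative, not mpv's actual code):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>

    // Hypothetical helper: build "<codec>_<suffix>" (e.g. "h264_mmal") and
    // look it up among libavcodec's registered decoders.
    static const AVCodec *find_wrapper_decoder(const char *codec,
                                               const char *suffix)
    {
        char name[128];
        snprintf(name, sizeof(name), "%s_%s", codec, suffix);
        return avcodec_find_decoder_by_name(name);
    }
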
Until now, the presence of the process_image() callback was used to set
a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
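A sketch of what such a field could look like (illustrative only; the
surrounding struct members are not reproduced here):

    struct vd_lavc_hwdec {
        // ...
        // Number of decoded frames to buffer before output; 0 disables the
        // delay queue. Replaces the old "has process_image() => queue" rule.
        int delay_queue;
        // ...
    };
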
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing
copy-back).
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
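For illustration, reading per-stream parameters the codecpar way looks
roughly like this (a sketch, not mpv's code):

    #include <stdio.h>
    #include <libavformat/avformat.h>

    // Read basic video parameters via AVStream.codecpar instead of the
    // deprecated AVStream.codec.
    static void dump_video_streams(AVFormatContext *fmt)
    {
        for (unsigned i = 0; i < fmt->nb_streams; i++) {
            AVCodecParameters *par = fmt->streams[i]->codecpar;
            if (par->codec_type == AVMEDIA_TYPE_VIDEO)
                printf("stream %u: codec_id=%d, %dx%d\n",
                       i, par->codec_id, par->width, par->height);
        }
    }
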
This commit adds the d3d11va-copy hwdec mode using the FFmpeg d3d11va
API. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
The mp_set_av_packet()/mp_pts_from_av() functions check whether the
timebase is set at all (i.e. AVRational.num!=0), so there's no need to
fiddle with pointers.
Completely pointless abominations that FFmpeg refuses to remove. They
are an ancient, long-deprecated API which we can't use anymore. They
confused users as well.
Pretend that they don't exist. Due to the way --vd works, they can't
even be forced anymore. The older hack which explicitly rejects these
can be dropped as well.
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
    packet->buffer += ret;
    packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
Until now (and in mplayer traditionally), avi timestamps were handled
with a timestamp FIFO. AVI timestamps are essentially just strictly
increasing frame numbers and are not reordered like normal timestamps.
Limiting the FIFO is required because frames can be dropped. To make
it worse, frame dropping can't be distinguished from the decoder not
returning output while it builds up the buffering required for B-frames.
("Measuring" the buffering at playback start seems like an interesting
idea, but won't work as the buffering could be increased mid-playback.)
Another problem is skipped frames (packets with data, but which do
not contain a video frame).
Besides dropped and skipped frames, there is the problem that we can't
always know the delay. External decoders like MMAL are not going to
tell us. (And later perhaps others, like direct VideoToolbox usage.)
In general, this doesn't work well enough, so I prefer the solution of
passing through AVI timestamps as DTS. This is slightly incorrect,
because most decoders treat DTS as mpeg-style timestamps, which
already include a b-frame delay, and thus will be shifted by a few
frames. This means there will be a problem with A/V sync in some
situations.
Note that the FFmpeg AVI demuxer shifts timestamps by an additional
amount (which increases after the first seek!?!?), which makes the
situation worse. It works well with VfW-muxed Matroska files, though.
On RPI, the first X timestamps are broken until the MMAL decoder "locks
on".
fd339e3f53 introduced a regression that caused a segfault while
uninitializing the dxva2 decoder (and possibly vdpau too). The problem was
that it freed the avctx earlier, before calling the backend-specific uninit
which referenced it.
Revert some of the changes of that commit, and avoid calling flush by
checking whether the codec is open instead.
(Based on a PR by Kevin Mitchell.)
Signed-off-by: wm4 <wm4@nowhere>
It can be "dangerous". In particular, the decoder might have failed to
initialize, and is now in a broken state. avcodec_flush_buffers() is not
expected to be called in this state, and could trigger undefined
behavior.
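A minimal sketch of the guard this amounts to (the wrapper name is made
up; the libavcodec calls are real):

    #include <libavcodec/avcodec.h>

    // Only flush a codec context that was actually opened; flushing a
    // broken/unopened context is what the changes above try to avoid.
    static void flush_if_open(AVCodecContext *avctx)
    {
        if (avctx && avcodec_is_open(avctx))
            avcodec_flush_buffers(avctx);
    }
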
Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
if the input timestamp was NOPTS.
Don't do it for other decoders. Ideally, we will at some point in the
future switch to integer fractions for timestamps at least up until the
filter layer. But this would be a larger change, and for now I'd prefer
keeping the not-rounded demuxer timestamps (if we have them).
Commit b53cb8de added a delay queue for decoded frames. This is supposed
to be used with copy-back decoders like dxva2-copy and vaapi-copy.
Surfaces returned by them can't be referenced after uninitializing the
decoders, so they have to be released before destroying the decoder.
Move the flush_all() call above decoder uninit accordingly. Also move
the destruction of the AVFrame used for decoding (just to be
defensive - normally it doesn't hold any reference).
We just need to provide an entrypoint for it, and move the main init
code to a separate function. This gets rid of the messy video chain full
reinit in command.c, which completely destroyed and recreated the video
state for the purpose of mid-stream hw/sw switching.
Don't give the "software_fallback_decoder" field special meaning. Alwass
set it, and rename it to "decoder". Whether hw decoding is used is
determined by the "hwdec" field already.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
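For reference, the relation between the two, kept as an exact integer
fraction (the helper name is made up, not mpv's actual code):

    #include <libavutil/rational.h>

    // display aspect = sample aspect * width/height;
    // e.g. 1440x1080 with SAR 4:3 gives DAR 16:9.
    static AVRational dar_from_sar(int w, int h, AVRational sar)
    {
        return av_mul_q(sar, (AVRational){w, h});
    }
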
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
If reinit after a fallback from hardware fails, this field can be NULL.
The check in control() was broken due to a typo (found by Coverity), and
decode() lacked the check entirely.
Approximately reverts commit 3ccac74d. This failed with some avi files,
which do pseudo-VFR by sending packets with empty frames (or repeat
frames, depending on point of view). Specifically, these packets are not
0 bytes, so they don't get skipped by libavformat, as with the usual VFR
avi hack. Instead, the packet contains a VOP with vop_coded=0, so
libavcodec will just return no frame. We could probably distinguish such
skipped frames from delayed frames by explicitly measuring the codec
delay by counting how long it takes to get the very first frame (and
then treat skips as explicit drops), but we may as well simply reinstate
the old code.
To appease at least one semi-broken case, do not enable this logic on
the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL
itself is asynchronous).
Until now, we've relied on the following things:
- you can send flush packets to the decoder even if it's fully flushed,
- you can send new packets to a flushed decoder,
- you can send new packets to a partially flushed decoder.
("flushing" refers to sending flush packets to the decoder until the
decoder does not return new pictures, not avcodec_flush_buffers().)
All of these are questionable. The libavcodec API probably doesn't
guarantee that these work well or at all, even though most decoders have
no issue with these. But especially with hardware decoding wrappers
(like MMAL), real problems can be expected. Isolate us from these corner
cases by handling them explicitly.
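For clarity, "flushing" in the sense above means a drain loop roughly
like this, using the decode API of that time (a sketch, not mpv's code):

    #include <libavcodec/avcodec.h>

    // Keep feeding empty "flush" packets until the decoder stops returning
    // delayed pictures. This is distinct from avcodec_flush_buffers().
    static int drain_decoder(AVCodecContext *avctx, AVFrame *frame)
    {
        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;
        pkt.size = 0;
        int got_frame = 1;
        while (got_frame) {
            int ret = avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
            if (ret < 0)
                return ret;
            if (got_frame) {
                // ...hand the delayed frame to the rest of the pipeline...
                av_frame_unref(frame);
            }
        }
        return 0;
    }
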
A hw decoder might fail to decode a frame for multiple reasons, and not
always just because decoding is impossible. We can't generally
distinguish these reasons well. Make it more tolerant by accepting up
to 3 failed frames, but no more. The threshold can be adjusted by the
repurposed --vd-lavc-software-fallback option.
(This behavior was suggested much earlier in some PR, but at the time
the "proper" hwdec fallback was indistinguishable from decoding error.
With the current situation, "proper" fallback is still instantaneous.)
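One plausible shape of that tolerance logic (all names are illustrative,
and whether the real code resets the counter on success is an assumption):

    #include <stdbool.h>

    struct fallback_state {
        int hwdec_fails;
        int fallback_threshold; // from the repurposed --vd-lavc-software-fallback
    };

    // Returns true when the caller should fall back to software decoding.
    static bool register_hwdec_result(struct fallback_state *st, bool failed)
    {
        if (!failed) {
            st->hwdec_fails = 0; // assumption: a good frame resets the count
            return false;
        }
        return ++st->hwdec_fails >= st->fallback_threshold;
    }
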
The uninit() function was called twice if the init() function failed
(once by init(), once by vd_lavc.c code), which caused crashes due to
double-free. (This failure is a corner case, and all other hwdec
backends appear to handle this case gracefully.)
I do not think this code should have to deal with uninit() being
called more than once. Guarantee that it's called exactly once.
Fixes a linker failure. How did this ever work? Apparently it did most
of the time, but we just got the first case where it didn't.
Fixes #2433.
The previous commit moved the av_frame_unref() after the got_picture
check. This accidentally also deferred the software fallback
reinitialization until a software picture was decoded (instead of at the
exact time of the fallback), which is not ideal.
Just rely on the fact that calling av_frame_unref() on a frame is ok
even if nothing was decoded.
Commit 12cd48a8 started setting the hwdec_failed field even if hwdec was
not active, and because it also checked this field even if hwdec was not
active, broke decoding forever.
Fix this, and also avoid a memory leak or API misuse by releasing the
decoded picture. Passing an unreleased frame to the decoder has, as far
as I know, no defined effects.