Normally, audio decoders don't have a decoder delay, so the code was
fine. But FFmpeg supports multithreaded decoding for some audio codecs,
which introduces such a delay.
The delay means that we won't get decoded audio for the first few
packets, and that we need to do something to get the trailing audio
still buffered in the decoder when reaching EOF.
Two changes are needed to deal with the delay:
- If EOF is reached, pass a "flush" packet to the decoder to return the
buffered audio. Such a flush packet is automatically set up when
calling mp_set_av_packet() with a NULL packet.
- Use the PTS returned by the decoder, instead of the packet's. This is
important to get correct timestamps for decoded audio. Ignoring this
would result in offsetting the audio playback time by the decoder
delay. Note that we can still use the timestamp of the first packet
to get the timestamp for the start of the audio.
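For illustration, the two points roughly look like this with FFmpeg's
current send/receive decode API (mpv's real code goes through its own
wrappers such as mp_set_av_packet(); error handling is trimmed):

    #include <libavcodec/avcodec.h>

    static int decode_audio(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame)
    {
        // At EOF, pass NULL instead of a packet: this puts the decoder into
        // draining mode and makes it return the audio it still has buffered.
        int r = avcodec_send_packet(avctx, pkt);
        if (r < 0 && r != AVERROR_EOF)
            return r;
        while ((r = avcodec_receive_frame(avctx, frame)) >= 0) {
            // Use the PTS returned by the decoder, not the PTS of the packet
            // we just sent; with a decoder delay they belong to different frames.
            int64_t pts = frame->pts;
            (void)pts; // ...convert via the timebase and output the samples...
        }
        return (r == AVERROR(EAGAIN) || r == AVERROR_EOF) ? 0 : r;
    }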
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the somewhat awkward mp_get_av_frame_pkt_ts() function with
mp_pts_from_av(), which simply converts timestamps the way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
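As a rough sketch (the actual helper in mpv's av_common.c may differ in
naming and details), the conversion amounts to:

    #include <string.h>
    #include <libavutil/avutil.h>

    #define MP_NOPTS_VALUE (-1e300)   // assumption: mpv's "no timestamp" value

    static double mp_pts_from_av_sketch(int64_t av_pts, AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        if (tb && tb->num > 0 && tb->den > 0)
            return av_pts * av_q2d(*tb);    // proper timebase conversion
        // No timebase: fall back to reinterpreting the int64_t bit pattern
        // as an mpv-style double timestamp.
        double d;
        memcpy(&d, &av_pts, sizeof(d));
        return d;
    }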
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
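If we ever do that, it would boil down to something like this (the
variable names are illustrative only):

    // Give libavcodec the demuxer's timebase so it can interpret packet
    // timestamps; newer libavcodec exposes pkt_timebase as a plain field,
    // older versions used av_codec_set_pkt_timebase() instead.
    avctx->pkt_timebase = stream->time_base;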
Don't bother explaining the sample format naming scheme. The "ne" bit is
outdated anyway, and anyone who has to use this option will be able to
understand the naming scheme just by looking at the names.
This prevents waf from running test programs after compilation. A better
approach would be to only remove this option if the check actually errors,
but we are using this only for Lua anyway.
The vf_eq context contains a very large lookup table, and the method of
setting default values caused the vf_eq context to be included in the
compiled code.
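Roughly, the problem looks like this (struct and field names are made up
for illustration):

    struct vf_priv_s {
        unsigned char lut[256 * 256];   // large lookup table
        double gamma, contrast;
    };

    // Old pattern: a statically initialized "defaults" instance. Because it
    // is not all-zero, the entire object (including the zeroed LUT) is
    // emitted into the compiled object file instead of going to .bss.
    static const struct vf_priv_s vf_priv_dflt = {
        .contrast = 1.0,
    };

    // Setting the defaults at init time avoids emitting the table:
    static void set_defaults(struct vf_priv_s *p)
    {
        *p = (struct vf_priv_s){ .contrast = 1.0 };
    }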
vf_stereo3d now uses vf_lavfi, if mpv was compiled with libavfilter.
vf_swapuv is hereby undeprecated. It's too trivial to be worth wrapping
with libavfilter, and it's also so useless that even typing this commit
message is hardly worth the time spent on it.
Just in case someone expects these to be unchanged just because they're
not mentioned anywhere in changes.rst. Documenting all of these changes
would be too much work and not helpful either.
All filters now either use the generic option parser, or don't have
options. This finally finishes a transition started in 2003 (see git
commit 33b62af947).
Why are MPlayer devs so monumentally lazy? Sorry, but this takes the
cake. You had 10 years.
Mostly backwards compatible; we don't change much, because we just want
to get rid of the legacy option string handling.
You can't pass an aspect as first argument anymore.
Apparently you can get this with: stereo3d=ab[2]{l,r}:sbs[2]{l,r}
So it seems the filter is redundant and can be removed.
Also see FFmpeg commit 2f11aa141a01.
Unfortunately, this forces filtering both luma and chroma, because
otherwise we'd have to deal with libavfilter's vf_noise and its weird
handling of YUV vs. RGB formats. If we filtered luma only, it would
filter only red in RGB mode, because the filter works per component and
there's no way to distinguish YUV and RGB from the filter's options
alone.
Also remove the ability to disable deinterlacing at runtime. You can
still disable deinterlacing at runtime by using the ``D`` key and its
automatic filter insertion/removal.
This will allow old filters to run libavfilter instead by calling
vf_lw_set_graph(), which turns the filter into a wrapper, using a given
libavfilter graph.
Later commits use that to automatically "reroute" a bunch of filters to
libavfilter. We want to get rid of the old MPlayer filter code, because
it's bad and unmaintained, but we still don't want to force everyone to
use vf_lavfi, so this solution will do for a while.
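Sketch of how a converted filter would use it; the exact vf_lw_set_graph()
signature and the filter/option names here are assumptions rather than a
copy of the real code, and the mpv-internal types are not spelled out:

    static int vf_open(struct vf_instance *vf)
    {
        struct vf_priv_s *p = vf->priv;
        // Try to build an equivalent libavfilter graph; on success the filter
        // becomes a thin wrapper around vf_lavfi and the old code isn't used.
        if (vf_lw_set_graph(vf, p->lw_opts, "gradfun", "%f:%d",
                            p->strength, p->radius) >= 0)
            return 1;
        // Otherwise fall back to the old built-in implementation.
        return old_vf_open(vf);
    }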
The previous API worked under the assumption that download_image is always
called after map_image. In practice this is true, but it's better to have a
more generic API that doesn't depend on the order in which the functions are
called.
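To illustrate the shape of the change (the callback names come from this
message, but the signatures are assumptions, not the real mpv API):

    struct mp_image;
    struct mp_image_pool;
    struct hwdec_ctx;

    struct hwdec_driver {
        // Maps a hw surface so it can be rendered directly.
        struct mp_image *(*map_image)(struct hwdec_ctx *ctx,
                                      struct mp_image *hw_image);
        // More generic form: everything download_image() needs is passed in
        // explicitly, so it works regardless of whether map_image() was
        // called first.
        struct mp_image *(*download_image)(struct hwdec_ctx *ctx,
                                           struct mp_image *hw_image,
                                           struct mp_image_pool *swpool);
    };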