Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time. Until
now, decoders always returned the pts via the pkt_pts field, while the
pts field was used for encoding and libavfilter only. Recently, pkt_pts
was deprecated, and pts was switched to always carry the pts.
This means we have to be careful not to accidentally use the wrong
field, depending on the libavcodec version. We have to explicitly check
the version numbers. Of course the version numbers alone don't
identify the library, because FFmpeg and Libav idiotically use the same
pkg-config and library names, so we have to tell the two apart
explicitly as well.
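
A minimal sketch of the kind of check this needs (the version cutoff
below is illustrative, not the exact deprecation point):

    #include <libavcodec/avcodec.h>

    static int64_t frame_pts(const AVFrame *frame)
    {
    // FFmpeg sets the micro version to >= 100; that's the usual trick
    // for telling it apart from Libav despite the identical names.
    #if LIBAVCODEC_VERSION_MICRO >= 100 && \
        LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 60, 100)
        return frame->pts;      // new semantics: pts carries the pts
    #else
        return frame->pkt_pts;  // old semantics: decoders set pkt_pts
    #endif
    }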
pkt_timebase and time_base are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
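
Roughly like this (HAVE_AVCODEC_HAS_PKT_TIMEBASE is a made-up configure
check name):

    #include <libavcodec/avcodec.h>

    static void set_decoder_timebase(AVCodecContext *avctx, AVRational tb)
    {
    #if HAVE_AVCODEC_HAS_PKT_TIMEBASE
        // Unit of packet/frame timestamps when decoding. Don't touch
        // time_base - that one is for encoding, and some decoders
        // overwrite it with unrelated codec metadata anyway.
        avctx->pkt_timebase = tb;
    #else
        (void)avctx; (void)tb; // Libav: the field doesn't exist
    #endif
    }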
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
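
How the refinement might look (the threshold and scaling strategy are
illustrative, and MP_NOPTS_VALUE stands in for mpv's missing-timestamp
sentinel):

    #include <limits.h>
    #include <math.h>
    #include <libavutil/avutil.h>
    #include <libavutil/rational.h>

    #define MP_NOPTS_VALUE (-1e300)

    // Scale the timebase up until one tick is ~10us or less, so small
    // timestamp adjustments (start time rebasing, demux_timeline) don't
    // get rounded away.
    static AVRational refine_timebase(AVRational tb)
    {
        while (av_q2d(tb) > 1e-5 && tb.den <= INT_MAX / 2)
            tb.den *= 2;
        return tb;
    }

    // Convert a double timestamp in seconds to timebase units.
    static int64_t sec_to_ts(double pts, AVRational tb)
    {
        if (pts == MP_NOPTS_VALUE)
            return AV_NOPTS_VALUE;
        return llrint(pts / av_q2d(tb));
    }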
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
This commit adds an --audio-channels=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane, in the sense that it's so low-level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output: you pass it
a list of the channel layouts your A/V receiver supports.
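
For example (the layouts being whatever your receiver actually
supports):

    mpv --audio-channels=7.1,5.1,stereo video.mkv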
The libavcodec wmapro decoder will skip some bytes at the start of the
first packet and return on each call. It will not return any audio data in
this state.
Our own code as well as libavcodec's new API handling
(avcodec_send_packet() etc.) discard the PTS on the first return, which
means the PTS is never known for the first packet. This results in a
"Failed audio resync." message.
Fix it by remembering the PTS in next_pts. This field is used only if the
decoder outputs no PTS, and is updated after each frame - and thus
should be safe to set.
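
The fix is essentially this (mpkt/da are illustrative names for the
packet and the decoder context):

    // The decoder ate the packet without producing audio; remember the
    // packet PTS anyway. next_pts is only consulted when the decoder
    // outputs no PTS, and is overwritten after every frame, so this is
    // safe.
    if (mpkt && mpkt->pts != MP_NOPTS_VALUE)
        da->next_pts = mpkt->pts;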
(Possibly this should be fixed in libavcodec new API handling by not
setting the PTS to NOPTS as long as no real data has been output. It
could even interpolate the PTS if the timebase is known.)
Fixes the failure message seen in #3297.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a major
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong: it compared against the
segment start value instead of the end.
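
The corrected check, schematically (names made up; MP_NOPTS_VALUE is
mpv's missing-timestamp sentinel):

    #include <stdbool.h>

    #define MP_NOPTS_VALUE (-1e300)

    // The broken version compared frame_pts against the segment start.
    static bool past_segment_end(double frame_pts, double seg_end)
    {
        return seg_end != MP_NOPTS_VALUE && frame_pts >= seg_end;
    }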
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
Note that vd_lavc.c already does something similar, but originally for
other reasons.
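
The workaround amounts to this (illustrative):

    // A 0-sized packet would be taken as a drain request by
    // avcodec_send_packet() and lock the decoder into EOF, so drop it
    // before it reaches the decoder.
    if (pkt && pkt->size == 0)
        return 0; // pretend it was consumed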
Fixes #3106.
AVFormatContext.codec is deprecated now, and you're supposed to use
AVFormatContext.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
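
The mechanical part of the change looks roughly like:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    // Old: read st->codec directly (deprecated).
    // New: copy the demuxer-signalled parameters into our own context.
    static int setup_decoder_params(AVCodecContext *avctx, AVStream *st)
    {
        return avcodec_parameters_to_context(avctx, st->codecpar);
    }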
Fixes correctness_trimming_nobeeps.opus. One nasty thing is that this
mechanism interferes with the container-signalled mechanism with
AV_FRAME_DATA_SKIP_SAMPLES. So apply it only if that is apparently not
present. It's a mess, and it's still broken in FFmpeg CLI, so I'm sure
this will get fucked up later again.
I'm not quite sure what the FFmpeg AV_FRAME_DATA_SKIP_SAMPLES API
demands here. The code so far assumed that skipping can be more than a
frame, but not trimming. Extend it to trimming too.
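
For reference, the side data is a small blob with a documented layout
(u32le samples to skip from the start, u32le samples to trim from the
end, plus reason bytes); reading it looks about like this:

    #include <stdint.h>
    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    static void get_skip_trim(const AVFrame *frame,
                              uint32_t *skip, uint32_t *trim)
    {
        *skip = *trim = 0;
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);
        if (sd && sd->size >= 8) {
            *skip = AV_RL32(sd->data);     // drop from frame start
            *trim = AV_RL32(sd->data + 4); // drop from frame end
        }
    }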
This is actually already done by dec_audio.c. But if
AV_FRAME_DATA_SKIP_SAMPLES is applied, this happens too late here. The
problem is that this will slice off samples, and make it impossible for
later code to reconstruct the timestamp properly.
Missing timestamps can still happen with some demuxers, e.g. demux_mkv.c
with Opus tracks. (Although libavformat interpolates these itself.)
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
concatenating files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanism to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries (sketch below). This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
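
A rough illustration of the clipping, on a simplified frame type (mpv's
real code works on its own audio buffer structs and also has to drop
the sample data itself):

    // Simplified stand-in for a decoded audio frame.
    struct aframe {
        double pts;     // start time in seconds
        int samples;    // samples per channel
        int samplerate;
    };

    // Drop the parts of the frame outside [start, end).
    static void clip_to_segment(struct aframe *f, double start, double end)
    {
        if (f->pts < start) {
            int skip = (int)((start - f->pts) * f->samplerate + 0.5);
            if (skip > f->samples)
                skip = f->samples;
            f->samples -= skip;
            f->pts = start;
        }
        double f_end = f->pts + (double)f->samples / f->samplerate;
        if (f_end > end) {
            int trim = (int)((f_end - end) * f->samplerate + 0.5);
            if (trim > f->samples)
                trim = f->samples;
            f->samples -= trim;
        }
    }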
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful, and the code should be shared - but that will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
The code is shared with the --vd-lavc-threads option, so using 0 for
auto-detection just works.
But no, this is not useful. Just change it for orthogonality.
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
Seems useless.
This only helped in one case: one audio stream in the sample
av_find_best_stream_fails.ts had AC3 packets which couldn't be
decoded, and for which avcodec_decode_audio4() returned 0 forever. In
this specific case, playback will now not start, and you have to
deselect audio manually.
(If someone complains, the old behavior might be restored, but
differently.)
Also remove the stale "bitrate" field.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
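
In demux.c this normalization is a one-liner (sketch):

    // Guarantee sh->codec is never NULL for consumers.
    if (!sh->codec)
        sh->codec = "";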
This is another attempt at making files with sparse video frames work
better.
The problem is that you generally can't know whether a jump in video
timestamps is just a (very) long video frame, or a timestamp reset. Due
to the existence of files with sparse video frames (new frame only every
few seconds or longer), every heuristic will be arbitrary (in general,
at least).
But we can use the fact that if video is continuous, audio should also
be continuous. Audio discontinuities can be easily detected, and if that
happens, reset some of the playback state.
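
The detection itself is simple; schematically (the threshold is
illustrative):

    #include <math.h>
    #include <stdbool.h>

    // Audio is continuous if each frame starts where the previous one
    // ended; a large deviation suggests a timestamp reset.
    static bool audio_discontinuity(double prev_end_pts, double pts)
    {
        return fabs(pts - prev_end_pts) > 0.5; // illustrative threshold
    }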
The way the playback state is reset is rather radical (resets decoders
as well), but it's just better not to cause too much obscure stuff to
happen here. If the A/V sync code were to be rewritten, it should
probably strictly use PTS values (not this strange time_frame/delay
stuff), which would make it much easier to detect such situations and
to react to them.
Deal with jittering Matroska crap timestamps. This reuses the mechanism
that is needed for frames without PTS, and adds a heuristic to it. If
the interpolated timestamp is less than 1ms away from the real one, it
might be due to Matroska timestamp rounding (or other file formats with
such rounding, or files remuxed from Matroska).
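
In pseudo-C (names made up):

    #include <math.h>

    // If the container PTS is within 1ms of the PTS interpolated from
    // the previous frame, assume Matroska-style rounding and prefer the
    // smooth interpolated value.
    static double fix_pts_jitter(double container_pts, double interp_pts)
    {
        if (fabs(container_pts - interp_pts) < 0.001)
            return interp_pts;
        return container_pts;
    }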
While there actually isn't much of a need to do this (audio PTS
jittering by such a low amount doesn't negatively influence much), it
helps with identifying jitter from other sources.
Instead of requiring the decoder to set the PTS directly on the
dec_audio context (including handling absence of PTS etc.), transfer the
packet PTS to the decoded audio frame. Marginally simpler, and gives
more control to the generic code.
The previous commit handled not falling back to normal decoding if the
AO was reloaded (I think...), and this tries to re-engage spdif pass-
through if it was previously falling back to normal decoding (e.g.
because it temporarily switched to an audio device incapable of
passthrough).
MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
This provides a new method for enabling spdif passthrough. The old
method via --ad (--ad=spdif:ac3 etc.) is deprecated. The deprecated
method will probably stop working at some point.
This also supports PCM fallback. One caveat is that it will lose at
least 1 audio packet in doing so. (I don't care enough to prevent this.)
(This is named after the old S/PDIF connector, because it uses the same
underlying technology as far as the higher level protocol is concerned.
Also, the user should be reminded that passthrough is backwards.)
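
(The new option is --audio-spdif, which takes a comma-separated codec
list; for example:)

    mpv --audio-spdif=ac3,dts video.mkv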
This deprecates the --ad-spdif-dtshd option, and replaces it with a
pseudo decoder. This means ad_spdif will report two decoders, "dts" and
"dts-hd", of which the second simply enables what the option did.
The --ad-spdif-dtshd option will actually be deprecated in the next
commit.
Apparently some A/V receivers do not behave well if "normal" DTS is
passed through using the high bitrate spdif format normally used for
DTS-HD (other receivers are fine with it).
Parse the first packet passed to ad_spdif by decoding it with libavcodec
in order to get the profile. Ignore the --ad-spdif-dtshd option if it's not
DTS-HD. (If the codec profile changes midstream, the user is out of
luck. But this is probably an insignificant corner case.)
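
A hedged sketch of the probe, using the send/receive API and simplified
error handling:

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Decode one packet with the libavcodec DTS decoder and check
    // whether the detected profile is one of the DTS-HD variants.
    static bool packet_is_dts_hd(const AVPacket *pkt)
    {
        bool hd = false;
        const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_DTS);
        AVCodecContext *ctx = dec ? avcodec_alloc_context3(dec) : NULL;
        AVFrame *frame = av_frame_alloc();
        if (!ctx || !frame || avcodec_open2(ctx, dec, NULL) < 0)
            goto done;
        if (avcodec_send_packet(ctx, pkt) >= 0 &&
            avcodec_receive_frame(ctx, frame) >= 0)
        {
            hd = ctx->profile == FF_PROFILE_DTS_HD_HRA ||
                 ctx->profile == FF_PROFILE_DTS_HD_MA;
        }
    done:
        av_frame_free(&frame);
        avcodec_free_context(&ctx);
        return hd;
    }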
I thought about parsing the bitstream, but let's not. While it probably
wouldn't be that much effort, we are trying to keep codec details out
of this code - otherwise we could just do our own spdif framing instead
of using libavformat's spdif pseudo-muxer.
Another possibility, using the codec parameters signalled by
libavformat, is disregarded. Our builtin Matroska demuxer doesn't
provide them, and we also do not want to force the demuxer to decode
some packets just to retrieve codec parameters (as libavformat does).
Fixes #1949.
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
We've been preferring the libavcodec mp3 decoder for half a year now.
There is likely no benefit at all for using the libmpg123 one. It's just
a maintenance burden, and tricks users into thinking it's a required
dependency.
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" at a
higher level.
In commit 5f8b060e I blindly assumed that the packet sizes were in
pseudo-samples, but they were actually in bytes. Oops.
(The effect was that cutting the audio was a bit less precise than it
can be.)
Also remove the packet size from ad_spdif.c; it wasn't actually used -
the code simply takes what the spdif "muxer" returns.