The x264 hack requires reading the first video packet, which in turn we
handle with a hack in demux_mkv.c to get the packet without having to
add special crap to demux.c. Another useless MKV feature (which they
enabled by default at one point and which caused many demuxers to break
completely, only to disable it again when it was too late) conflicts
with this, because we would actually pass the raw block as packet
contents, instead of the data after "decompression".
Fix this by calling demux_mkv_decode().
This fixes resuming certain broken h264 files encoded by x264. See
FFmpeg commit 840b41b2a643fc8f0617c0370125a19c02c6b586 about the x264
bug itself.
Normally, the unregistered user data SEI (that contains the x264 version
string) is informational only. But libavcodec uses it to work around an
x264 bug, which was recently fixed in both libavcodec and x264. The fact
that both encoder and decoder were buggy is the reason that it was not
found earlier, and there are apparently a lot of files around created by
the broken encoder. If libavcodec sees the SEI, this bug can be worked
around by using the old behavior.
If you resume a file with mpv (i.e. seeking when the file loads),
libavcodec never sees the first video packet. Consequently it has to
assume the file is not broken, and never applies the workaround,
resulting in garbage being played.
Fix this by always feeding the first video packet to the decoder on
init, and then flushing the codec (to avoid outputting an unwanted
image). Flushing the codec does not remove info such as the x264
version. We also abuse the fact that the first avcodec_send_packet()
always pushes the packet into the decoder (so we don't have to trigger
the decoder by requesting an output frame).
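For illustration, the init sequence boils down to roughly this (the
helper name is made up; only the libavcodec calls are the real API):

    #include <libavcodec/avcodec.h>

    // Sketch: push the first video packet so the decoder sees the x264
    // version SEI, then flush so no frame from it is ever output.
    static void send_first_packet(AVCodecContext *avctx, AVPacket *first)
    {
        // The first avcodec_send_packet() always accepts the packet, so
        // there is no need to request an output frame to push it in.
        if (avcodec_send_packet(avctx, first) < 0)
            return;
        // Flushing drops any pending frame, but keeps state derived from
        // the SEI (such as the x264 version).
        avcodec_flush_buffers(avctx);
    }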
This will help with things like livestreams.
As a minor detail, subtitles are excluded, because they sometimes have
"unused" events after video and audio ends. To avoid this annoying
corner case, just ignore them.
Before this change and before the seekable stream cache became a thing,
we could possibly seek using the stream cache. But we couldn't know
whether the seek would succeed. We knew the available byte range, but
could in general not tell whether a demuxer would stay within the range
when trying to seek to a specific time position. We preferred to have
safe defaults, so seeks in streams that were detected as unseekable
were not honored. We allowed overriding this via --force-seekable=yes,
in which case it depended on your luck whether the seek would work, or
the player crapped its pants.
With the demuxer packet cache, we can tell exactly whether a seek will
work (at least if there's only 1 seek range). We can just let seeks go
through. Everything to allow this is already in place, and this commit
just moves around some minor things.
Note that the demux_seek() return value was not used before, because low
level (i.e. network level) seeks are usually asynchronous, and if they
fail, the state is pretty much undefined. We simply repurpose the return
value to signal whether cache seeking worked. If it didn't, we can just
resume playback normally, because demuxing continues unaffected, and no
decoders are reset.
This should be particularly helpful to people who for some reason stream
data into stdin via streamlink and such.
Caused by the relatively recent change to packet parsing. This time it
was probably triggered by lace type 0, which reduces the byte length of
a 0 sized packet to 3 (timestamp + flag) instead of 4 (lace count for
other lace types). The thing about laces is just my guess why it worked
for other 0 sized packets, though.
Also remove the redundant and now incorrect check below.
Fixes #5271.
This log line tells us why the demuxer is trying to read more, which is
useful when debugging queue overflows. Probably barely useful, but I
think keeping that flag separately also makes the code slightly easier
to understand.
This fixes weird behavior in the following case:
- open a file
- make sure the max. demuxer forward cache is smaller than the
file's video track
- make sure the max. readahead duration is larger than the file's
duration
- disable the audio track
- seek to the beginning of the file
- once the cache has filled enable the audio track
- a queue overflow warning should appear
(- looking at the seek ranges is also interesting)
The queue overflow warning happens because the packet queue for the
video track will use up the full quota set by --demuxer-max-bytes. When
the audio track is enabled, reading an audio packet would technically
overflow the packet cache by the size of whatever packet is read next.
This means the demuxer signals EOF to the decoder, and once playback has
consumed enough video packets so that audio packets can be read again,
the decoder resumes from EOF. This interacts badly with A/V
synchronization and the whole thing can randomly crap itself until audio
has fully recovered.
We didn't care about this so far, but we want to raise the readahead
duration to something very high, so that the demuxer cache is fully
used. This means this case can be hit quite quickly by switching audio
or subtitle tracks, and is not really an obscure corner case anymore.
Fix this by always losing all cache. Since the cache can't be used
anyway until the newly selected track has been read, this is not much of
a disadvantage. The only thing that could be brought up is that
unselecting the track again could resume operation normally. (Maybe this
would be useful if network died completely without chance of recovery.
Then you could watch the already buffered video anyway by deselecting
the audio track again.) But given the headaches, this seems like the
better solution.
Unfortunately this requires adding new strange fields and strangely
fragmenting state management functions again. I'm sure whoever works on
this in the future will hate me. Currently it seems like the lesser
evil, and much simpler and more robust than the other potential solutions.
In case this needs to be revisited, here is a reminder for readers from
the future what alternative solutions were considered, without those
disadvantages:
A first attempted solution allowed the demuxer to buffer some additional
packets on track switching. This would allow it to read enough data to
feed the decoder at least. But it was still awkward, as it didn't allow
the demuxer to continue prefetching the newly selected track. It also
barely worked, because you could make the forward buffer "over full" by
seeking back with seekable cache enabled, and then it couldn't read
packets anyway.
As an alternative solution, we could always demux and cache all tracks,
even if they're deselected. This would also not require a network-level
seek for the "refresh" logic (it's the thing that lets the video decoder
continue as if nothing happened, while actually seeking back in the
stream to get the missing audio packets, in the case of enabling a
previously disabled audio track). But it would also possibly waste
network and memory resources, depending on what the user actually wants.
A second solution would just account the queue sizes for each stream
separately. We could freely fill up the audio packet queue, even if the
video queue is full. Since the demuxer API returns interleaved packets
and doesn't let you predict which packet type comes next, this is not as
simple as it sounds, but it'd probably tie in nicely with the "refresh"
logic.
A third solution would be removing buffered video packets from the end
of the packet queue. Since the "refresh" logic gets these anyway, there
is no reason to keep them if they prevent the audio packet queue from
catching up with the video one. But this would require additional logic,
and would interact badly with a bunch of other corner cases. And as far as
the code goes, it's rather complex, because all the logic is written
with FIFO behavior in mind (including the fact that the packet queue is
a singly linked list with no backwards links, making removal from the
end harder).
It seems like there's nothing stopping sub-demuxers from keeping
packets in the cache, even if it's completely pointless. The top-most
demuxer (demux_timeline) already takes care of caching, so sub-demuxers
only waste space and time with this.
Add a function that can disable the packet cache even at runtime and
after packets are read. (It's not clear whether it really can happen
that packets are read before demux_timeline gets the sub-demuxers, but
there's no reason to make it too fragile.) Call it on all sub-demuxers.
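Roughly, the new function amounts to this (sketch only; field and
function names are approximations of the demux.c internals):

    // Turn off the seekable packet cache at runtime; safe to call even
    // after packets were read.
    void demux_disable_cache(struct demuxer *demuxer)
    {
        struct demux_internal *in = demuxer->in;
        pthread_mutex_lock(&in->lock);
        if (in->seekable_cache) {
            MP_VERBOSE(in, "disabling persistent packet cache\n");
            in->seekable_cache = false;
        }
        pthread_mutex_unlock(&in->lock);
    }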
For this to work, it seems we have to move the code for setting the
seekable_cache flag to before demux_timeline is potentially initialized,
because otherwise the cache would be reenabled if the demuxer triggering
timeline support is a timeline segment itself (e.g. ordered chapters).
This fixes missing audio when cycling through audio tracks with anything
that uses nested demuxers, such as demux_timeline, which is used for
EDL, --merge-files, ordered chapters, and youtube-dl pseudo DASH
support. When this bug happened, reenabling an audio track would lead to
silence for the duration of the readahead amount.
The underlying reason is the incorrectly updated buffered range on track
switch. It accidentally included the amount covered by the deselected
stream. But the cause of the observed effect was that demux_timeline
issued a refresh seek to the underlying slave demuxer, which in turn
thought it could do a cache seek, because the seek range still included
everything.
update_stream_selection_state() calls update_seek_ranges() to update the
seek ranges after a track switch. When reenabling the track, ds->eager
was set to false during update_seek_ranges(), which made it think the
stream was sparse, and thus it didn't restrict the current seek range
(making later code think everything was buffered). Fix this by moving
some code, so we first update the ds->eager flag, then the seek ranges.
Also verbose log the low level stream selection calls.
Always display the duration as "unknown" if the duration is unknown. Also
fix that at least demux_lavf reported unknown duration as 0 (fix by
setting the default to unknown in demux.c).
Remove the dumb _u formatter function, and use a different approach to
avoiding displaying "unknown" as playback time on playback start (set
last_seek_pts for that).
This gives the filename or URL to the libavformat probing logic, which
might use the file extension as a "help" to decide which format the file
is. This helps with mp3 files that have large id3v2 tags and prevents
the idiotic ffmpeg probing logic from thinking that an mp3 file is amr.
(What we really want is knowing whether we _really_ need to feed more
data to libavformat to detect the format. And without having to pre-read
excessive amounts of data for relatively normal streams.)
If the backbuffer is much larger than the forward buffer, and if you
join a small range with a large range (larger than the forward buffer),
then the seek issued to the end of the range after joining will overflow
the queue.
Normally, read_more will be false when the forward buffer is full, but
the resume seek after joining will set need_refresh to true, which
forces more reading and thus triggers the overflow warning.
Attempt to fix this by not setting read_more to true on refresh seeks.
Set prefetch_more instead. read_more will still be set if an A/V stream
has no data.
This doesn't help with the following problems related to using refresh
seeks for track switching:
- If the forward buffer is full, then enabling another track will
obviously immediately overflow the queue, and immediately lead to
marking the new track as having no more data (i.e. EOF). We could cut
down the forward buffer or so, but there's no simple way to implement
it. Another possibility would be dropping all buffers and trying to
resume again, but this would likely be complex as well.
- Subtitle tracks will not even show a warning (because they are sparse,
and we have no way of telling whether a packet is missing, or there's
just no packet near the current position). Before this commit,
enabling an empty subtitle track would probably have overflown the
queue, because ds->refreshing was never set to true. Possibly this
could be solved by determining a demuxer read position, which would
reflect until which PTS all subtitle packets should have been demuxed.
The forward buffer limit was intended as a last safeguard to avoid
excessive memory usage against badly interleaved files or decoders going
crazy (up to reading the whole file into memory and OOM'ing the user's
system). It's not good at all to limit prefetch. Possible solutions
include having another smaller limit for prefetch, or maybe having only
a total buffer limit, and discarding back buffer if more data has to be
read. The current solution is making the forward buffer larger than the
forward duration (--cache-secs) would require, but of course this
depends on the stream's bitrate.
The option for enabling it now has an "auto" choice, which is the
default, and which will enable it if the media appears to be played via
network or if the stream cache is enabled (same logic as --cache-secs).
Also bump the --cache-secs default from 10 to 120.
Some back buffer is required to make the immediate forward range
seekable. This is because the back buffer limit is strictly enforced.
Just set a rather high back buffer by default. It's not used if
--demuxer-seekable-cache is disabled, so this is without risk.
Limit the number of cached ranges to MAX_SEEK_RANGES, which is the same
as the maximum number of exported seek ranges. It makes no sense to keep
them, as the user won't see them anyway. Remove the smallest ones to
enforce the limit if the number grows too high.
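In pseudo-C, the enforcement looks about like this (type and field
names approximate; remove_range() is a hypothetical helper that frees
the range and its packets):

    static void prune_excess_ranges(struct demux_internal *in)
    {
        while (in->num_ranges > MAX_SEEK_RANGES) {
            // Find the smallest range that is not currently demuxed into.
            struct demux_cached_range *smallest = NULL;
            for (int n = 0; n < in->num_ranges; n++) {
                struct demux_cached_range *r = in->ranges[n];
                if (r == in->current_range)
                    continue;
                if (!smallest || r->seek_end - r->seek_start <
                                 smallest->seek_end - smallest->seek_start)
                    smallest = r;
            }
            if (!smallest)
                break;
            remove_range(in, smallest);
        }
    }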
Helps a little bit, I guess. But in general, t(h)rashing the cache kills
us anyway.
This has a fixed number of index entries. Entries are added/removed as
packets go through the packet queue. Only keyframes after index_distance
seconds are added. If there are too many keyframe packets, the existing
index is reduced by half, and index_distance is doubled. This should
provide somewhat even spacing between the entries.
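Rough illustration of the add/reduce logic (standalone sketch, not the
actual code; kf_seek_pts stands in for the keyframe's seek PTS):

    #define MAX_INDEX_ENTRIES 16

    struct index {
        struct demux_packet *entries[MAX_INDEX_ENTRIES];
        int num_entries;
        double index_distance;  // min. seconds between indexed keyframes
    };

    // Called for every keyframe packet that goes through the queue.
    static void index_add(struct index *idx, struct demux_packet *dp)
    {
        if (idx->num_entries > 0) {
            struct demux_packet *last = idx->entries[idx->num_entries - 1];
            // Skip keyframes closer than index_distance to the last entry.
            if (dp->kf_seek_pts - last->kf_seek_pts < idx->index_distance)
                return;
        }
        if (idx->num_entries == MAX_INDEX_ENTRIES) {
            // Full: drop every other entry and double the distance, which
            // keeps the spacing between entries roughly even.
            for (int n = 0; n < MAX_INDEX_ENTRIES / 2; n++)
                idx->entries[n] = idx->entries[n * 2];
            idx->num_entries = MAX_INDEX_ENTRIES / 2;
            idx->index_distance *= 2;
        }
        idx->entries[idx->num_entries++] = dp;
    }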
The packet queue is sorted, so we can stop the search if we have found a
packet, and the next packet in the queue has a higher PTS than the seek
PTS (for the sake of SEEK_FORWARD, we still consider the first packet
with a higher PTS).
Also, as a mostly cosmetic change, but which might be "faster", check
target for NULL, instead of target_diff for a magic float value.
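Simplified version of the search (function name made up; packet fields
reduced to what's needed to show the early exit):

    static struct demux_packet *find_seek_target(struct demux_packet *head,
                                                 double seek_pts, int flags)
    {
        struct demux_packet *target = NULL;
        for (struct demux_packet *dp = head; dp; dp = dp->next) {
            if (!dp->keyframe || dp->kf_seek_pts == MP_NOPTS_VALUE)
                continue;
            if (dp->kf_seek_pts > seek_pts) {
                // Sorted queue: nothing later can be closer. For
                // SEEK_FORWARD, still accept this first packet past the
                // target if nothing before it matched.
                if (!target && (flags & SEEK_FORWARD))
                    target = dp;
                break;
            }
            target = dp;  // best so far; replaces the magic-float diff check
        }
        return target;
    }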
Subtitle streams are sparse, and no overlap is required to correctly
join two cached ranges. This was not correctly handled iff the two
ranges had different subtitle packets close to the join point.
demux_add_packet() must completely ignore any packets that are added
while a queued seek is not initiated yet.
The main issue is that after setting in->seeking==true, the central lock
is released, and it can take "a while" until it's reacquired on the
demux thread and the seek is actually initiated. During that time,
packets could be read and added, that have nothing to do with the new
state.
If subtitles are part of the stream, determining the seekable range
becomes harder. Subtitles are sparse, and can have packets in irregular
intervals, or even completely lack packets. The usual logic of computing
the seek range by the min/max packet timestamps fails.
Solve this by making the only assumption we can make: subtitle packets
are implicitly demuxed along with other packets. We also assume perfect
interleaving for this, but you really can't do anything with sparse
packets that makes sense without this assumption.
One special case is if we prune sparse packets within the current
seekable range. Obviously this should limit the seekable range to after
the pruned packet.
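The special case boils down to something like this at prune time
(sketch; names approximate):

    // A sparse (subtitle) packet was dropped from the back buffer; the
    // current range must not claim to be seekable to before it anymore.
    static void account_pruned_sparse_packet(struct demux_cached_range *range,
                                             struct demux_packet *dp)
    {
        if (dp->pts != MP_NOPTS_VALUE && dp->pts > range->seek_start)
            range->seek_start = dp->pts;
    }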
Instead of weirdly deciding this on every packet read and having the
code far away from where it's actually needed, just run it where it's
actually needed.
A typical idiom for calling functions that remove something from the
array being iterated, but it's not needed here. I have no idea why this
was ever done.
Setting ds->refreshing for unselected streams could lead to a
nonsensical queue overflow warning, because read_packet() took it as a
reason to continue reading.
Also add some more information to the queue overflow warning (even if
that one doesn't have anything to do with this bug - it was for
unselected streams only).
This fixes an endless loop with threading disabled, such as for example
when playing a file with an external subtitle file, and seeking to the
beginning. Something will set in->seeking, but the seek is never
executed, which made demux_read_packet() loop endlessly. (External
subtitles will use non-threaded mode for whatever reasons.)
Fix this by making the unthreaded code execute the worker thread body,
which reduces the difference in logic.
Until now, the demuxer cache was limited to a single range. Extend this
to multiple ranges. Should be useful for slow network streams.
This commit changes a lot in the internal demuxer cache logic, so
there's a lot of room for bugs and regressions. The logic without
demuxer cache is mostly untouched, but also involved with the code
changes. Or in other words, this commit probably fucks up shit.
There are two things which make multiple cached ranges rather hard:
1. the need to resume the demuxer at the end of a cached range when
seeking to it
2. joining two adjacent ranges when the lower range "grows" into it (and
resuming the demuxer at the end of the new joined range)
"Resuming" the demuxer means that we perform a low level seek to the end
of a cached range, and properly append new packets to it, without adding
packets multiple times or creating holes due to missing packets.
Since audio and video never line up exactly, there is no clean "cut"
possible, at which you could resume the demuxer cleanly (for 1.) or
which you could use to detect that two ranges are perfectly adjacent
(for 2.). The way the demuxer interleaves multiple streams is also
unpredictable. Typically you will have to expect that it randomly allows
one of the streams to be ahead by a bit, and so on.
To deal with this, we have heuristics in place to detect when one packet
equals or is "behind" a packet that was demuxed earlier. We reuse the
refresh seek logic (used to "reread" packets into the demuxer cache when
enabling a track), which checks for certain packet invariants.
Currently, it observes whether either the raw packet position, or the
packet DTS is strictly monotonically increasing. If neither of them
holds, we discard old ranges when creating a new one.
This heavily depends on the file format and the demuxer behavior. For
example, not all file formats have DTS, and the packet position can be
unset due to libavformat not always setting it (e.g. when parsers are
used).
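Conceptually, the per-packet check is something like this (sketch; the
per-stream bookkeeping fields are assumptions):

    // Returns true if a freshly demuxed packet is one we already have
    // cached, i.e. the demuxer hasn't caught up to the resume point yet.
    static bool packet_already_cached(struct demux_stream *ds,
                                      struct demux_packet *dp)
    {
        // last_pos/last_dts: position and DTS of the newest packet
        // already cached for this stream.
        if (dp->pos >= 0 && ds->last_pos >= 0 && dp->pos <= ds->last_pos)
            return true;
        if (dp->dts != MP_NOPTS_VALUE && ds->last_dts != MP_NOPTS_VALUE &&
            dp->dts <= ds->last_dts)
            return true;
        // If neither position nor DTS is usable, the packet can't be
        // identified as already-known; in that case the caller discards
        // the other cached ranges rather than trying to resume into them.
        return false;
    }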
At the same time, we must deal with all the complicated state used to
track prefetching and seek ranges. In some complicated corner cases, we
just give up and discard other seek ranges, even if the previously
mentioned packet invariants are fulfilled.
To handle joining, we're being particularly dumb, and require a small
overlap to be confident that two ranges join perfectly. (This could be
done incrementally with as little overlap as 1 packet, but corner cases
would eat us: each stream needs to be joined separately, and the cache
pruning logic could remove overlapping packets for other streams again.)
Another restriction is that switching the cached range will always
trigger an asynchronous low level seek to resume demuxing at the new
range. Some users might find this annoying.
Dealing with interleaved subtitles is not fully handled yet. It will
clamp the seekable range to where subtitle packets are.
libavcodec can't deal with them, because its API doesn't distinguish
between 0 sized packets and sending EOF. As such, keeping them doesn't
do any good, ever. This actually fixes some obscure mkv sample (see
previous commit).
Fixes some obscure sample that uses fixed size laces with 0-sized lace
size. Some broken shit. (Maybe the decoder wouldn't care about these
packets, but the demuxer attempted to resync after these packet reading
errors, even though they were perfectly recoverable. But I don't care
enough about this.)
Sample link: https://samples.ffmpeg.org/Matroska/switzler084d_dl.mkv
This directly reads individual mkv sub-packets (block laces) into
dedicated AVBufferRefs, which can be directly used for creating packets
without an additional copy of the packet data. This also means we switch
parsing of block header fields and lacing metadata to read directly from
the stream, instead of a memory buffer.
This could have been much easier if libavcodec didn't require padding
the packet data with zero bytes. We could just have each packet
reference a slice of the block data. But as it is, the only way to get
padding without a copy is to read the laces into individually allocated
(and padded) memory blocks, which required a larger rewrite.
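The core of the new read path is roughly this (simplified sketch;
read_lace() is made up, stream_read() stands for mpv's existing stream
API):

    #include <string.h>
    #include <libavutil/buffer.h>
    #include <libavcodec/avcodec.h>   // AV_INPUT_BUFFER_PADDING_SIZE

    // Read one lace of 'size' bytes straight from the stream into its
    // own padded buffer, so the resulting packet needs no extra copy.
    static AVBufferRef *read_lace(stream_t *s, int size)
    {
        AVBufferRef *buf =
            av_buffer_alloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!buf)
            return NULL;
        if (stream_read(s, (char *)buf->data, size) != size) {
            // Overread past the block; the caller decides about resync.
            av_buffer_unref(&buf);
            return NULL;
        }
        memset(buf->data + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
        return buf;  // the packet uses only the first 'size' bytes
    }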
This probably makes recovering from broken mkv files slightly worse if
the transport is unseekable. We just read, and then check if we've
overread. But I think that shouldn't be a real concern.
No actual measurable performance change. Potential for some
regressions, as this is quite intrusive, and touches weird obscure shit
like mkv lacing. Still keeping it because I like how it removes some
redundant EBML parsing functions.
This adds a bunch of stuff (mostly unused or redundant) as preparation
for supporting multiple seek ranges. Actual support is probably still
far away.
One change that messes deeper with the actual code is that we account
for total buffered bytes instead of just the backwards bytes now. This
way, prune_old_packets() doesn't have to iterate over all seek ranges to
determine whether something needs pruning.
The main purpose of this commit is avoiding any hidden O(n^2) algorithms
in the code for pruning the demuxer cache, and for determining the
seekable boundaries of the cache. The old code could loop over the whole
packet queue on every packet pruned in certain corner cases.
There are two ways to reach the goal:
1) commit a cardinal sin
2) do everything incrementally
The cardinal sin is adding an extra field to demux_packet, which caches
the determined seekable range for a keyframe range. demux_packet is a
rather general data structure and thus shouldn't have any fields that
are not inherent to its use, and are only needed as an implementation
detail of code using it. But what are you gonna do, sue me?
In the future, demux.c might have its own packet struct though. Then the
other existing cardinal sin (the "next" field, from MPlayer times) could
be removed as well.
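For reference, the idea is just an extra cached field on the packet
(illustrative declaration, not the full struct):

    struct demux_packet {
        double pts, dts;
        bool keyframe;
        struct demux_packet *next;  // the other old sin, from MPlayer times
        // The "cardinal sin": lowest PTS across the keyframe range this
        // packet starts, cached so pruning and seek range computation
        // don't have to rescan the queue. NOPTS if not determined.
        double kf_seek_pts;
        // (the real struct has many more fields)
    };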
This commit also changes slightly how the seek end is determined. There
is a note on the manpage in case anyone finds the new behavior
confusing. It's somewhat cleaner and might be needed for supporting
multiple ranges (although that's unclear).
The demuxer cache seeking mechanism looks at keyframe ranges to
determine the earliest PTS of a packet. Instead of looping over all
packets twice (once to find the next keyframe, a second time to find the
seek PTS), do it in one go.
For that basically turn recompute_keyframe_target_pts() into an
iteration function. Functionality should be unchanged with this commit.
The base_ts field is used to guess the decoder position, and when set to
NOPTS, it just reads ahead arbitrarily. Also, demux_add_packet() sets
base_ts to the new timestamp when appending a packet, which would also
make it read ahead by too large an amount.
Fix this by setting base_ts after a seek. This assumes that normally, a
cached seek target will always have the timestamp set. This is actually
not quite clear (as it calls recompute_keyframe_target_pts(), which
looks at multiple packets), but maybe it works well enough.
Don't do any of the extra work related to pruning the backbuffer if
demuxer cache seeking is disabled.
As a small extra, always prune packets with no timestamps immediately,
or queue heads that are not keyframes. (Both of them would be pruned
from the backbuffer by the normal logic anyway.)
If fulfilling --demuxer-readahead-secs requires more memory than allowed
by --demuxer-max-bytes, the queue obviously overflows. But the warning
is normally only for the case when trying to find the next video or
audio packet fails, and decoding can't continue.
Use a separate variable for determining whether to prefetch, and if the
queue has overflown, skip the message. In fact, skip the EOF setting and
waking up the decoder thread as well, because the EOF flag should not be
(have been) set in this situation, and waking up the reader thread helps
only if the EOF state changed.
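Schematically (very simplified; field names approximate, the real logic
is spread over the reader code):

    // Distinguish "a decoder is starved" (read_more) from "we would
    // merely like more readahead" (prefetch_more). Only the former
    // warrants the overflow warning, EOF signaling, and waking up the
    // decoder.
    static bool want_read(struct demux_internal *in)
    {
        bool read_more = false, prefetch_more = false;
        for (int n = 0; n < in->num_streams; n++) {
            struct demux_stream *ds = in->streams[n]->ds;
            read_more |= ds->eager && !ds->reader_head;
            prefetch_more |= ds->selected &&
                             ds->fw_duration < in->opts->min_secs;
        }
        if (in->fw_bytes >= in->max_bytes) {
            if (read_more)
                MP_WARN(in, "Too many packets in the demuxer packet queue.\n");
            return false;  // stop reading; no EOF/wakeup for prefetching
        }
        return read_more || prefetch_more;
    }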
In a shit show of subtle corner case interactions, making the demuxer
cache buffer the entire file can display a small buffered time if
subtitles are enabled. The reason is that some subtitle decoders may
read in advance infinitely, i.e. they read the entire subtitle stream.
Then, since the other streams (audio/video) have logically reached EOF,
the subtitle stream is the only one left with ds->active==true, so it
alone determines the displayed buffered time. This will have to be fixed
properly later to account for buffering in subtitle-only files (another
corner case) correctly, but for now this is less annoying.