This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)
It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).
This is still somewhat fishy, but in parts due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED we hope that FFmpeg will restore
the correct metadata when the first packet after the seek is returned.
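Roughly, the handshake with lavf looks like this (the event flags are
real lavf API; the snapshot logic is just a sketch):

    #include <libavformat/avformat.h>

    // After av_read_frame(), check whether lavf flagged a metadata
    // change, and if so, snapshot the metadata so it can travel with
    // the packet through the demuxer cache.
    static void update_timed_metadata(AVFormatContext *fmt, AVStream *st)
    {
        if (fmt->event_flags & AVFMT_EVENT_FLAG_METADATA_UPDATED) {
            fmt->event_flags &= ~AVFMT_EVENT_FLAG_METADATA_UPDATED;
            // snapshot fmt->metadata here (e.g. with av_dict_copy())
        }
        if (st && (st->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED)) {
            st->event_flags &= ~AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
            // snapshot st->metadata here
        }
    }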
If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically, ICY metadata sucks.
Some complexity is due to a microoptimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.
One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().
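A minimal sketch of the new wiring (the setup code here is illustrative,
not the exact playback core code):

    #include "filters/filter.h"  // mpv's mp_filter / mp_pin framework

    // The decoder wrapper is now an ordinary filter; its output pin is
    // simply connected to the filter chain's input pin.
    static void wire_up_video(struct mp_pin *dec_out,
                              struct mp_pin *chain_in)
    {
        // mp_pin_connect(dst, src): the framework moves frames from src
        // to dst on demand, so video.c needs no decode calls anymore.
        mp_pin_connect(chain_in, dec_out);
    }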
Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.
Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.
I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
This directly reads individual mkv sub-packets (block laces) into
dedicated AVBufferRefs, which can be used directly for creating packets
without an additional copy of the packet data. This also means we switch
parsing of block header fields and lacing metadata to read directly from
the stream, instead of a memory buffer.
This could have been much easier if libavcodec didn't require padding
the packet data with zero bytes. We could just have each packet
reference a slice of the block data. But as it is, the only way to get
padding without a copy is to read the laces into individually allocated
(and padded) memory blocks, which required a larger rewrite.
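The core of the new reading path looks roughly like this (stream_read()
is mpv's stream API; the rest is a simplified sketch):

    #include <string.h>
    #include <libavutil/buffer.h>
    #include <libavcodec/avcodec.h>  // AV_INPUT_BUFFER_PADDING_SIZE
    #include "stream/stream.h"       // mpv's stream_read()

    // Read one lace into its own padded, refcounted buffer, so a packet
    // can reference it directly instead of copying the data.
    static AVBufferRef *read_lace(struct stream *s, int size)
    {
        AVBufferRef *buf =
            av_buffer_alloc(size + AV_INPUT_BUFFER_PADDING_SIZE);
        if (!buf)
            return NULL;
        if (stream_read(s, (char *)buf->data, size) != size) {
            av_buffer_unref(&buf);  // overread: truncated/broken file
            return NULL;
        }
        memset(buf->data + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);
        buf->size = size;  // consumers see the lace, not the padding
        return buf;
    }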
This probably makes recovering from broken mkv files slightly worse if
the transport is unseekable. We just read, and then check if we've
overread. But I think that shouldn't be a real concern.
No actual measurable performance change. Potential for some
regressions, as this is quite intrusive, and touches weird obscure shit
like mkv lacing. Still keeping it because I like how it removes some
redundant EBML parsing functions.
The main purpose of this commit is avoiding any hidden O(n^2) algorithms
in the code for pruning the demuxer cache, and for determining the
seekable boundaries of the cache. The old code could loop over the whole
packet queue on every packet pruned in certain corner cases.
There are two ways to reach this goal:
1) commit a cardinal sin
2) do everything incrementally
The cardinal sin is adding an extra field to demux_packet, which caches
the determined seekable range for a keyframe range. demux_packet is a
rather general data structure and thus shouldn't have any fields that
are not inherent to its use, and are only needed as an implementation
detail of code using it. But what are you gonna do, sue me?
In the future, demux.c might have its own packet struct though. Then the
other existing cardinal sin (the "next" field, from MPlayer times) could
be removed as well.
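Heavily trimmed, the struct now looks about like this (a sketch; only
the fields relevant here):

    #include <stdbool.h>
    #include <stddef.h>

    struct demux_packet {
        double pts, dts;
        unsigned char *buffer;
        size_t len;
        bool keyframe;
        // Cardinal sin #1 (this commit): cached seek end for the
        // keyframe range starting at this packet, updated incrementally
        // on append and prune instead of rescanning the whole queue
        // (the old O(n^2) behavior).
        double kf_seek_pts;
        // Cardinal sin #2 (MPlayer-era): intrusive queue linkage.
        struct demux_packet *next;
    };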
This commit also changes slightly how the seek end is determined. There
is a note on the manpage in case anyone finds the new behavior
confusing. It's somewhat cleaner and might be needed for supporting
multiple ranges (although that's unclear).
The new_segment field was used to signal timeline boundaries to the
decoder data flow handler; timelines are used for ordered chapters etc.
(anything that sets demuxer_desc.load_timeline). This broke seeking with
the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
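As a sketch (the names are stand-ins for the actual per-packet segment
information):

    #include <stdbool.h>

    struct segment_info {
        double start, end;
        const void *codec;  // codec config (e.g. extradata) of the segment
    };

    // A segment switch is now detected by comparing the segment
    // information attached to packets by value, instead of relying on a
    // per-packet new_segment flag that the demuxer cache couldn't
    // preserve correctly.
    static bool same_segment(const struct segment_info *a,
                             const struct segment_info *b)
    {
        return a->start == b->start && a->end == b->end &&
               a->codec == b->codec;
    }

The decoder reinitializes whenever same_segment() is false for an
incoming packet; packets replayed from the cache compare equal and are
handled correctly.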
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, nor of the timestamp clamping that in theory needs to be
performed to account for packets containing frames that segment handling
always clips off. This can be fixed later.
If a packet uses segmentation, the codec field must be set. Copying the
codec field was simply overlooked, which is why this crashed. It showed
up only now, because demux_copy_packet() was not used in the main demux
path until recently.
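The shape of the fix, as a stripped-down sketch (the real demux_packet
and copy routine do more):

    #include <stdbool.h>

    struct mp_codec_params;  // real definition lives elsewhere

    struct demux_packet {
        bool segmented;
        struct mp_codec_params *codec;  // must be set if segmented
        double start, end;
        // data, timestamps, etc. omitted
    };

    static void copy_segment_fields(struct demux_packet *dst,
                                    const struct demux_packet *src)
    {
        dst->segmented = src->segmented;
        dst->codec = src->codec;  // the forgotten line
        dst->start = src->start;
        dst->end = src->end;
    }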
Fixes #5027.
All contributors have agreed. In 3a43f13fce, someone who potentially
disagreed reverted a commit by someone else (restoring the original
state). This shouldn't matter for Copyright, and all of the affected
code was rewritten/removed anyway.
It's all explained in the DOCS changes, although this option was always
kind of obscure and pointless. Until it is removed, the only reason for
setting it would be to raise the static default limit, so change its
default to INT_MAX so that it does nothing by default.
The FFmpeg versions we support all have the APIs we were checking for.
Only Libav lacked them. Simplify this by explicitly checking for FFmpeg
in the code, instead of trying to detect the presence of the API.
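One possible shape of the check (FFmpeg's libraries report micro
versions >= 100, Libav's stay below; the macro name is illustrative):

    #include <libavcodec/version.h>

    // A single compile-time test replaces the per-API configure probes.
    #if LIBAVCODEC_VERSION_MICRO >= 100
    #define USING_FFMPEG 1
    #else
    #error "This code now requires FFmpeg (Libav lacks the needed APIs)"
    #endif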
This uses a different method to piece segments together. The old
approach basically changed to a new file (with a new start offset) any
time a segment ended. This meant waiting for audio and video to reach
the segment end, and then switching to the new segment all at once. It
had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
concatenation of files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to use decoder mechanisms to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
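The clipping rule itself is trivial; as a sketch (hypothetical helper,
the real logic lives in the decoder wrappers):

    #include <stdbool.h>

    // A decoded frame's timestamp must fall inside the segment's
    // [start, end) range, or the frame is dropped/trimmed before the
    // playback core ever sees it.
    static bool frame_within_segment(double pts, double seg_start,
                                     double seg_end)
    {
        return pts >= seg_start && pts < seg_end;
    }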
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
Makes it somewhat more uniform, and breaks up the awfully deep nesting.
This implicitly changes multiple small details, rather than only moving
code around. In particular, this computes the packet fields first and
parses them afterwards, which is needed for the next commit.
This mechanism was introduced for Opus, and allows correct skipping of
"preroll" data, as well as discarding trailing audio if the file's
length isn't a multiple of the audio frame size.
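For reference, the trailing trim is simple arithmetic (Matroska stores
DiscardPadding in nanoseconds; the helper name is illustrative):

    #include <stdint.h>

    // Convert the trailing padding to a sample count that the audio
    // path can trim from the last decoded frame.
    static int64_t padding_to_samples(int64_t discard_padding_ns,
                                      int samplerate)
    {
        return discard_padding_ns * samplerate / INT64_C(1000000000);
    }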
Not sure how to handle seeking. I don't understand the purpose of the
SeekPreRoll element.
This was tested with correctness_trimming_nobeeps.opus, remuxed to mka
with mkvmerge v7.2.0. It seems to be correct, although the reported file
duration is incorrect (maybe a mkvmerge issue).
This is a simplification, because it lets us use the AVPacket
functions, instead of handling the details manually.
It also allows the libavcodec rawvideo decoder to use reference
counting, so it doesn't have to memcpy() the full image data. The change
in av_common.c enables this.
This change is somewhat risky, because we rely on the following AVPacket
implementation details and assumptions:
- av_packet_ref() doesn't access the input padding, and just copies the
data. By the API, AVPacket is always padded, and we violate this. The
lavc implementation would have to go out of its way to make this a
real problem, though.
- We hope that the way we make the AVPacket refcountable in av_common.c
is actually supported API usage. It's hard to tell whether it is.
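The rough idea is this kind of wrapping (a simplified sketch, not the
exact av_common.c code; it assumes the buffer was allocated with
AV_INPUT_BUFFER_PADDING_SIZE bytes of zeroed padding, as our demux
packets are):

    #include <libavcodec/avcodec.h>
    #include <libavutil/buffer.h>

    static void free_cb(void *opaque, uint8_t *data)
    {
        // The demux_packet owns the memory; nothing to free here.
        (void)opaque; (void)data;
    }

    // Wrap already-padded packet data in an AVBufferRef, so that
    // av_packet_ref() shares the buffer instead of copying it.
    static int make_refcountable(AVPacket *pkt, uint8_t *data, int size)
    {
        pkt->buf = av_buffer_create(data,
                                    size + AV_INPUT_BUFFER_PADDING_SIZE,
                                    free_cb, NULL, AV_BUFFER_FLAG_READONLY);
        if (!pkt->buf)
            return AVERROR(ENOMEM);
        pkt->data = data;
        pkt->size = size;
        return 0;
    }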
Of course we still use our own "old" demux_packet struct, just so that
libav* API usage is somewhat isolated.