The old implementation didn't work for the OGG case. Discard the old
shit code (instead of fixing it), and write new shit code. The old code
was already over a year old, so it's about time to rewrite it for no
reason anyway.
While it's true that the old code appears to be broken, the main reason
to rewrite this is to make it simpler. While the amount of code seems to
be about the same, both the concept and the actual tag handling are
simpler. The result is probably a bit more correct.
The packet struct shrinks by 8 bytes. The fact that it wasted 8 bytes
per packet for a rather obscure use case was the reason I started this
at all (and when I found that OGG updates didn't work). While these 8
bytes aren't going to hurt, the packet struct was getting too bloated.
If you buffer a lot of data, these extra fields will add up. Still quite
some effort for 8 bytes. Fortunately, it's not like there are any
managers that need to be convinced whether it's worth doing. The freedom
to waste time on dumb shit.
The old implementation attached the current metadata to each packet.
When the decoder read the packet, the packet's metadata was made
current. The new implementation stores metadata as a separate list, and
requires that the player frontend tell it the current playback time,
which will be used to find the currently valid metadata. In both cases,
the objective was to correctly update metadata even if a lot of data is
buffered ahead (and to update them correctly when seeking within the
demuxer cache).
The new implementation is actually slightly more correct, because it
uses the playback time for the metadata lookup. Consider if you have an
audio filter which buffers 15 seconds (unfortunately such a filter
exists), then the old code would update the current title 15 seconds too
early, while the new one does it correctly.
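Roughly, the new lookup works like this (a sketch; these are not the
actual struct or function names):

    struct mp_tags;

    struct metadata_entry {
        double pts;            // time from which this snapshot is valid
        struct mp_tags *tags;  // metadata snapshot
    };

    // Return the last entry with pts <= playback_pts, or NULL if none.
    static struct metadata_entry *lookup_metadata(struct metadata_entry *list,
                                                  int num, double playback_pts)
    {
        struct metadata_entry *res = NULL;
        for (int n = 0; n < num && list[n].pts <= playback_pts; n++)
            res = &list[n];
        return res;
    }
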
The new code also simplifies mixing the 3 metadata sources (global, per
stream, ICY). We assume these aren't mixed in a meaningful way. The old
code tried to be a bit more "exact". I didn't bother to look how the old
code did this, but the new code simply always "merges" with the previous
metadata, so if a newer tag removes a field, it's going to stick around
anyway.
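The merge rule as a sketch (merge_metadata() itself is made up;
mp_tags_set_str() and struct mp_tags are mpv's real tag helpers):

    #include "common/tags.h"
    #include "mpv_talloc.h"

    static struct mp_tags *merge_metadata(void *ta_parent,
                                          struct mp_tags *prev,
                                          struct mp_tags *update)
    {
        struct mp_tags *res = talloc_zero(ta_parent, struct mp_tags);
        // Start from the previous snapshot...
        for (int n = 0; prev && n < prev->num_keys; n++)
            mp_tags_set_str(res, prev->keys[n], prev->values[n]);
        // ...then overwrite/add fields from the newer tags. Nothing is
        // ever removed, which is why removed fields stick around.
        for (int n = 0; update && n < update->num_keys; n++)
            mp_tags_set_str(res, update->keys[n], update->values[n]);
        return res;
    }
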
I tried to keep it simple. Other approaches include making metadata a
special sh_stream with metadata packets. This would have been
conceptually clean, but the implementation would probably have been
unnatural (and doesn't match well with libavformat's API anyway). It
would have been nice to make the metadata updates chapter points (makes
a lot of sense for the intended use case, web radio current song
information), but I don't think it would have been a good idea to make
chapters suddenly so dynamic. (Still an idea to keep in mind; the new
code actually makes it easier to work towards this.)
You could mention that subtitles are timed metadata too, and are
actually implemented as sparse packet streams in some formats. mp4
implements chapters as a special subtitle stream, AFAIK. (Ironically,
this is not ideal for files. It would be useful for streaming like web radio,
but mp4 is extremely bad for streaming by design for other reasons.)
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory; other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted on Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the ICY/mp3
streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps are the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low-level details), there's not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
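What the linearization boils down to, as a sketch (the struct priv
fields and the 1 second tolerance are assumptions, not the actual
demux_lavf.c code; demux_packet and MP_NOPTS_VALUE are mpv's real ones):

    #include "common/common.h"
    #include "demux/packet.h"

    struct priv {
        double last_ts;    // last raw timestamp seen (MP_NOPTS_VALUE at init)
        double ts_offset;  // accumulated correction added to all timestamps
    };

    static void linearize_ts(struct priv *p, struct demux_packet *pkt)
    {
        double ts = pkt->dts != MP_NOPTS_VALUE ? pkt->dts : pkt->pts;
        if (ts != MP_NOPTS_VALUE) {
            // A timestamp jumping backwards "too much" is assumed to be a
            // reset; fold the jump into the offset so output timestamps
            // continue where they left off.
            if (p->last_ts != MP_NOPTS_VALUE && ts + 1.0 < p->last_ts)
                p->ts_offset += p->last_ts - ts;
            p->last_ts = ts;
        }
        if (pkt->pts != MP_NOPTS_VALUE)
            pkt->pts += p->ts_offset;
        if (pkt->dts != MP_NOPTS_VALUE)
            pkt->dts += p->ts_offset;
    }
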
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
Probably doesn't change anything, other than looking slightly better. In
theory, the common function has some stuff that makes it more likely
that timestamps round-trip through conversions properly, but I didn't
confirm that.
This may call memmove() with size==0 and a NULL data pointer. In
addition to this being UB with memmove(), I think it's UB to do
arithmetic on a NULL pointer too. Of course, this doesn't matter in
practice at all, and is just stupidity to torture programmers.
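The fix is as trivial as it sounds; a sketch (names made up):

    #include <stddef.h>
    #include <string.h>

    // memmove() with a NULL pointer is UB even with size == 0, and so is
    // pointer arithmetic on NULL, so skip the degenerate case entirely.
    static void move_bytes(unsigned char *dst, unsigned char *src, size_t size)
    {
        if (size > 0)
            memmove(dst, src, size);
    }
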
Remove the duplicated creation of the first range. Explicitly destroy
ranges, including the last one on final deinit.
It looks like this also fixes a leak of removed range structs, which was
never noticed because they're so small, and were freed on final deinit
due to having the demuxer as talloc parent.
This improves upon the previous commit too (that change should have
been part of it I guess). Sub-demuxers (demux_timeline only) now
automatically don't use the cache (as intended by the previous
commit). The cache is "initialized" (or disabled) last in the recursive
call chain, which is messy, but this sub demuxer stuff FUCKING SUCKS, as
mentioned in the previous commit message. This would be no problem if
the caching layer and actual demuxer implementations were separate.
Most of this change has no purpose. It might make (de-)initialization of
further cache experiments simpler.
It seems the so called demuxer cache wasn't really disabled for
sub-demuxers (timeline stuff). This was relatively harmless, since the
actual packet data was shared anyway via refcounting. But with the
addition of a mmap cache backend, this may change a lot.
So strictly disable any caching for sub-demuxers. This assumes that
users of sub-demuxers (only demux_timeline.c by now?) strictly use
demux_read_any_packet(), since demux_read_packet_async() will require
some minor read-ahead if a low level packet read returned a packet for a
different stream.
This requires some awkward messing with this fucking heap of trash. The
thing that is really wrong here is that the demuxer API mixes different
concepts, and sub-demuxers get the same API as decoders, and use the
cache code.
The demuxer cache tries to track the number of bytes allocated for the
cache. In addition to the packet queue, the seek index is another data
structure that roughly grows with the number of packets cached. So the
index size should somehow be part of the total byte accounting.
Until now, this was handled with KF_SEEK_ENTRY_WORST_CASE, basically a
shitty heuristic. It was a guess (and probably rather an upper bound
than a lower bound). The implementation details made it annoying, and it
was conceptually inaccurate too.
Change this, and instead simply add the index size to the total cache
size. This essentially makes it part of the backbuffer. It's nice that
this cleanly decouples it from the packet size tracking itself.
Since it's part of the backbuffer number of bytes now, packet pruning
can't necessarily free enough space in the backbuffer anymore. Before
this commit, the backbuffer consisted of packets only, so it was
possible to reduce its size to 0 by pruning all packets until the
decoder reader position, at which point a packet was accounted as
forward buffered. Now the index is added to this, and it can't be
pruned. Replace the assert() because of this changed invariant.
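Sketched with made-up field names, the new accounting is just:

    #include <stddef.h>
    #include <stdint.h>

    struct index_entry { double pts; void *pkt; };
    struct demux_queue {
        uint64_t packet_bytes;  // sum of all cached packet sizes
        size_t num_index;       // number of seek index entries
    };

    // Total bytes attributed to a cached range: the packet data itself,
    // plus the seek index, now simply counted as part of the backbuffer.
    static uint64_t total_cached_bytes(struct demux_queue *queue)
    {
        return queue->packet_bytes
             + queue->num_index * sizeof(struct index_entry);
    }
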
The string conversion the pretty printer returns is sometimes used on
the OSD. I think it's pretty odd that quantities below 1 KB are shown as
a bare number without a suffix. So use "B" for them.
For orthogonality, allow the same for parsing. (Although strictly
speaking, this is not a requirement of the option API. Option parsers
don't need to accept pretty-printed strings.)
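As a sketch (the actual cutoffs and suffixes of the pretty printer may
differ):

    #include <stdio.h>
    #include <stdint.h>

    static void format_size(char *buf, size_t buf_size, uint64_t v)
    {
        if (v < 1024) {
            snprintf(buf, buf_size, "%d B", (int)v);  // was: bare number
        } else if (v < 1024 * 1024) {
            snprintf(buf, buf_size, "%.3f KiB", v / 1024.0);
        } else {
            snprintf(buf, buf_size, "%.3f MiB", v / (1024.0 * 1024.0));
        }
    }
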
Someone who rams a knife into his own hand just to see what happens is
normally put in a psychiatric ward. But in software, this is acceptable
behavior. Programs are not supposed to crash just because a user did
something unreasonably dumb.
Switching tracks during backward playback is such a thing. It triggered
an assertion because the newly enabled stream was not properly
initialized for backward playback. Fix this, and make it actually work
(mostly; it still takes a "while" until playback recovers fully).
This actually makes some aspects of initialization slightly cleaner.
Track switching doesn't run reset_playback_state(), so a track enabled
at runtime during backward playback would lead to a messed up state.
This commit just does a bad code monkey fix to this. It feels like there
needs to be a much better way to propagate this state.
Not sure if this is bug-free. You _always_ make bugs when writing a
binary search from scratch (and such is the curse of C, though if I did
this in C++ it would probably end in blood). It seems to work though,
checking against the normal linear search.
It's slightly faster. Not much.
I wonder if the termination condition can be written in a nicer/elegant
way. I guess the fact that it's not a == predicate makes this slightly
messier?
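For reference, the predicate is "last entry with pts <= target", not
"==", so the loop ends up lower-bound style (a sketch; entry layout
assumed):

    struct index_entry { double pts; };

    // Return the position of the last entry with e[i].pts <= target,
    // or -1 if every entry is after the target.
    static int find_seek_entry(struct index_entry *e, int num, double target)
    {
        int lo = 0, hi = num;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (e[mid].pts <= target) {
                lo = mid + 1;   // candidate; anything better is to the right
            } else {
                hi = mid;       // too late; the answer is to the left
            }
        }
        return lo - 1;
    }
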
The purpose of the seek index is to avoid having to iterate over the
full linked list of cached packets, which should speed up seeking. Until
now, there was an excuse of a seek index, that didn't really work.
The idea of the old index was nice: have a fixed number of entries (no
need to worry about exceeding memory requirements), which are
"stretched" out as the cache gets bigger. The size of it was 16 entries,
which in theory should speed up seeking by a factor of 16, given evenly
spaced out entries. To achieve this even spacing, it attempted to "thin
out" the index by half once the index was full (see e.g. index_distance
field). In my observations this didn't really work, and the distribution
of the index entries was very uneven. Effectively, this did nothing. It
probably worked once and I can't be assed to debug my own shit code.
Writing new shit code is more fun.
Write new shit code for fun. This time it's a complete index. It's kept
in a ringbuffer (for easier LIFO style appending/removing), which is
resized with realloc if it becomes too small.
Actually, the index is not completely complete; it's still "thinned
out" by a hardcoded time value (INDEX_STEP_SIZE). This should help with
things like audio or crazy subtitle tracks (do they still create
those?), where we can just iterate over a small part of the linked
packet list to get the exact seek position. For example, for AAC audio
tracks with typical samplerates/framesizes we'd iterate about 50 packets
in the linked list.
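The append path as a sketch (using a flat growing array instead of the
actual ringbuffer to keep it short; the INDEX_STEP_SIZE value is made
up):

    #include <stdlib.h>

    #define INDEX_STEP_SIZE 0.5  // assumed: min. pts distance of entries

    struct demux_packet;
    struct index_entry { double pts; struct demux_packet *pkt; };
    struct seek_index { struct index_entry *e; int num, size; };

    static void index_append(struct seek_index *idx, struct demux_packet *pkt,
                             double pts)
    {
        // "Thin out": skip entries too close to the previous one in time.
        if (idx->num > 0 && pts < idx->e[idx->num - 1].pts + INDEX_STEP_SIZE)
            return;
        if (idx->num == idx->size) {
            idx->size = idx->size ? idx->size * 2 : 64;
            idx->e = realloc(idx->e, idx->size * sizeof(*idx->e));
            if (!idx->e)
                abort();  // see the apologetic comment mentioned below
        }
        idx->e[idx->num++] = (struct index_entry){pts, pkt};
    }
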
The results are good. Seeking in large caches is much faster now,
apparently at least 1 or 2 orders of magnitude. Part of this is because
we don't need to touch every damn packet in a huge linked list (bad
cache behavior - the index is a linear memory region instead), but
"thinning" out the search space also helps. Both aspects can be easily
tested (setting INDEX_STEP_SIZE to 0, and replacing e->pts with
e->pkt->kf_seek_pts in find_seek_target()).
This does use more memory of course. In theory, we could tolerate memory
allocation failures (the index is optional and only for performance),
but I didn't bother, and inserted an apologetic comment instead (have
fun with the shit code). The memory usage doesn't seem to be that bad,
though. Due to INDEX_STEP_SIZE it's bounded by the file duration too.
Try to account for the additional memory usage with an approximation
(see KF_SEEK_ENTRY_WORST_CASE). It's still a bit different, because the
index needs a single, potentially large allocation.
The search was slightly more complicated and slower than it had to be. It
didn't assume that the packet list was sorted, which is responsible for
much of this. (I think the search code was borrowed from demux_mkv.c,
which does not sort index entries.)
There was a half-hearted attempt to make it exit early, but it was
mostly ineffective.
Simplify the code based on the assumption that the list is sorted. This
will exit the search loop once the worst case candidate entry was
checked.
Mostly about the packet queue and the subtitle handling of it.
(This mess sure sounds like a good argument to give up the separate
stream queues and use a single packet queue per cached range.)
Until now, this usually passed a single audio frame to the decoder, and
then did a backstep operation (cache seek + frame search) again. This is
probably not very efficient, especially considering it has to search the
packet queue from the "start" every time again.
Also, with most audio codecs, an additional "preroll" frame was passed
first. In these cases, the preroll frame would make up 50% of audio
decoding time. Also not very efficient.
Attempt to fix this by returning multiple frames at once. This reduces
the number of backstep operations and the ratio of preroll frames. In
theory, this should help efficiency. I didn't test it though; why would
I do this? It's just a pain. Set it to an unscientific 10 frames.
(Actually, these are 10 keyframes, so it's much more for codecs like
TrueHD. But I don't care about TrueHD.)
This commit changes some other implementation details. Since we can
return more than 1 non-preroll keyframe to the decoder, some new state
is needed to remember how much. The resume packet search is adjusted to
find N ("total") keyframe packets in general, not just preroll frames.
I'm removing the special case for 1 preroll packet; audio used this, but
doesn't anymore, and it's premature optimization anyway.
Expose the new mechanism with 2 new options. They're almost completely
pointless, since nobody will try them, and if they do, they won't
understand what these options truly do. And if they actually do, they
most likely would be capable of editing the source code, and we could
just hardcode the parameters. Just so you know that I know that the
added options are pointless.
The following two things are truly unrelated to this commit, and more
like general refactoring, but fortunately nobody can stop me.
Don't set back_seek_pos in dequeue_packet() anymore. This was sort of
pointless, since it was set in find_backward_restart_pos() anyway (using
some of the same packets). The latter function tries to restrict this to
the first keyframe range though, which is an optimization that in theory
might break with broken files (duh), but in these cases a lot of other
things would be broken anyway.
Don't set back_restart_* in dequeue_packet(). I think this is an
artifact of the old restart code (cf. ad9e473c55). It can be done
directly in find_backward_restart_pos() now. Although this adds another
shitty packet search loop, I prefer this, because it's clearer what's
actually happening.
The size of all forward buffered packets is used to control maximum
buffering.
Until now, this size was incrementally adjusted, but had to be
recomputed on seeks within the cache. Doing this was actually pretty
expensive. It iterates over a linked list of separate memory allocations
(which are probably spread all over the heap due to the allocation
behavior), and the demux_packet_estimate_total_size() call touches a lot
of further memory locations. I guess this affects the cache rather
negatively. In an unscientific test, the recompute_buffers() function
(which contained this loop) was responsible for roughly half of the time
seeking took.
Replace this with a way that computes the buffered size between 2
packets in constant time. The demux_packet.cum_pos field contains the
summed sizes of all previous packets, so subtracting cum_pos between two
packets yields the size of all packets in between. We can do this
because we never remove packets from the middle of the queue. We only
add packets to the end, or remove packets at the beginning.
The tail_cum_pos field is needed because we don't store the end position
of a packet, so the last packet's position would be unknown. We could
recompute the "estimated" packet size, or store the estimated size in
the packet struct, but I just didn't like this.
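The whole point in code form (a sketch; the real structs have more
fields):

    #include <stdint.h>

    struct demux_packet {
        uint64_t cum_pos;  // summed sizes of all packets before this one
        // ... (actual packet fields omitted)
    };

    // Bytes buffered from 'first' to the end of the queue. tail_cum_pos
    // is the end position of the last packet, which the packets
    // themselves don't store. O(1), because packets are only appended at
    // the tail and removed at the head, so cum_pos never needs updating.
    static uint64_t buffered_bytes(struct demux_packet *first,
                                   uint64_t tail_cum_pos)
    {
        return first ? tail_cum_pos - first->cum_pos : 0;
    }
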
This also removes the cached fw_bytes fields. It's slightly nicer to
just recompute them when needed. Maintaining them incrementally was
annoying. total_size stays though, since recomputing it isn't that cheap
(would need to loop over all ranges every time).
I'm always using uint64_t for sizes. This is certainly needed (a stream
could easily burn through more than 4GB of data, even if much less of
that is cached). The actual cached amount should always fit into size_t,
so it's cast to size_t for printfs (yes, I hate the way you specify
stdint.h types in printfs, the less I have to use that crap, the
better).
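The printf part looks like this (a sketch):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static void print_buffered(uint64_t fw_bytes)
    {
        // Cast to size_t: the cached amount fits, and "%zu" avoids the
        // PRIu64 macro dance from <inttypes.h>.
        printf("buffered: %zu bytes\n", (size_t)fw_bytes);
    }
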
In ancient times, the number of packets was used to limit excessive
read-ahead. This was completely replaced by tracking the size in bytes.
The number of packets was used in debugging output only.
In one case (packet got demuxed and is added to a queue), only log
whether there were packets on this stream before. (Unknown whether it's
useful.)
In another case (queue overflow), actually count the number of packets.
It's vaguely useful, and the message with the number of packets is shown
only once after a seek reset, so it doesn't matter whether it's slow.
Some state wasn't reset when decoding was started without a seek reset
before it. The code used to rely on reset_decoder() resetting this
state, but since the commit referenced below, reset_decoder() does less
than reset().
Fix this by explicitly calling reset() on initialization.
Fixes: "f_decoder_wrapper: avoid full reset on timeline switch etc."
Some files don't start with keyframe packets. Normally, this is not
sane, but the sample file which triggered this was a cut TV capture
transport stream. And a corner case like this shouldn't break playback
anyway.
Introduce a further heuristic: if the last seek target was before the
start of the cached data, and the start of the cache is marked as BOF
(beginning of file), then we won't find anything better. This is
possibly a bit shaky, because both seek_start and back_seek_pos weren't
made for this purpose. But I can't come up with situations where this
would actually break. (Leave this to shitty broken files I hit later.)
I also considered finding the first packet in the cache that is marked
as keyframe, i.e. the first actual seek target, and comparing it to
"first", but I didn't like it much. Well whatever.
It's a bit silly that this caused a hard freeze (and similar issues
still will). The problem is that the demuxer holds the lock and has no
reason to release it. And in general, there's a single lock for the
entire demuxer cache. Finer grained locking would probably not make much
sense. In theory status of available data and maybe certain commands to
the demuxer could be moved to separate locks, but it would raise
complexity, and you'd probably still need to get the central lock in
some cases, which would deadlock you anyway.
It would still be nice if some minor corner case in the wonderfully
terrible and complex backward demuxer state machine couldn't lock up the
player. As a hack, unlock and then immediately lock again. Depending on
the OS mutex implementation, this may give other waiters a chance to
grab the lock. This is not a guarantee (some OSes may for example not
wake up other waiters until the next time slice or something), but works
well on Linux.
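The hack itself is two lines (a sketch, plain pthreads):

    #include <pthread.h>

    // Briefly drop the lock so other waiters get a chance to grab it.
    // This is only a hint: whether a waiter actually wakes up here
    // depends on the OS mutex implementation (works well on Linux).
    static void yield_lock(pthread_mutex_t *lock)
    {
        pthread_mutex_unlock(lock);
        pthread_mutex_lock(lock);
    }
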
The step_backwards function set reader_head to the start of the current
cache range. This was completely unnecessary and made it _much_ slower.
Remove the code that adjusts reader_head. Merge the rest of the code
into the only caller and remove the function.
The comment on the removed code was quite right. It was "inefficient".
Removing it delegates going to an early position to the normal seek
code, triggered by find_backward_restart_pos() incremental back seek
logic. I suppose especially audio benefits from this, because this
happens for every single audio packet (except maybe freaky bullshit like
TrueHD, which has "keyframes").
The blabla about performance in the removed comments is still true, but
now applies to the seek code itself only.
Fixes "mpv file.mkv --cache --demuxer-cache-wait --play-dir=backward",
and other situations where the demuxer cache contains the entire file,
and playback is to start from the end. It also can be triggered when
starting playback normally with --cache, and once everything is in the
cache, enabling backward playback and seeking past EOF.
In all cases, the cache seek will set reader_head=NULL (because you
seeked to/past EOF). Then the code (the one modified by this commit)
sees that ds->queue->is_bof==true, and thinks we've reached BOF
(beginning of file) while searching for a useful packet, i.e. we found
nothing and playback really can only end.
Obviously this is nonsense: we've truly found nothing only if we
actually searched from the beginning, not from some "random" reader_head
(== first) value that does not cover the entire cache. That means the
condition should trigger only if the start of the search (the "first"
variable) points to the beginning of the cache (ds->queue->head).
Not taking this if means we'll seek to an earlier position and retry.
Also, a seek before the beginning of the cache will always end up with
reader_head==ds->queue->head, i.e. we'll terminate properly.
Before this commit, there was a single process_decoded_frame() function.
It handled various aspects of dealing with a newly decoded frame. Move
some of these to a separate process_output_frame() function.
This new function is called in the order the frames are returned to the
playback core. Some correct_audio_pts() (was process_audio_frame())
becomes slightly less awkward due to this, and the timestamp smoothing
can actually work in backward playback mode now (thus moving p->pts out
of reset_decoder()).
Behavior for normal playback also changes subtly. This shouldn't matter
in sane cases, but if you mix broken files, --no-correct-pts, and
timeline stuff, differences in behavior might be visible.
Timeline clipping (EDL/ordered chapters) works now, because it's done
before "transforming" the timestamps. Audio timestamp smoothing happens
after it, which is a behavior change, but should be more correct. This
still runs crazy_video_pts_stuff() before everything else. On the other
hand, --no-correct-pts or missing timestamp processing is done last. But
these things didn't really work with timeline before.
Slightly cleaner. We don't need to awkwardly backup "some" state on
backwards playback. Due to not resetting last_format, normal timeline
switches don't unconditionally trigger recomputing of certain image
parameters. Also probably doesn't reset framedrop parameters, although I
don't care about that part.
This could lead to nonsense when backward playback is involved. Better
reduce the possible interactions. Besides, it's better to fully reset
things on seeks in general.
The only exception is has_broken_packet_pts, which enables hr-seek if
everything looks good. It's intended to trigger at the second hr-seek or
so if the file is normal, and to disable it if the file is broken. It
tries to avoid enabling the hr-seek logic before it can know about
whether things are "good", so resetting it on seeks would obviously
never enable it. Document it as explicit exception.
And add simpler aliases for the modes.
I'm not sure how to name things, and the option list is in general full
of different conventions. Some names are shortened, some are explicit
and long.
I guess options that have a chance to be used normally (i.e. not obscure
tuning or debugging) should have a short and convenient names.
In this specific case, play-direction is like a mixture of both. It
should be either playback-direction or play-dir; shortening one word but
not the other is inconsistent.
The convenience aliases are because I got sick of typing out "backward".
I guess "back" would also do it, but there's no proper antonym (and
maybe it's "wrong" in the strict sense of the word).
Together with the previous commit, this seems to make backward playback
work in files with vorbis and mp3 audio codecs.
For Vorbis (with libavcodec's decoder, didn't test libvorbis), the first
packet was just always completely discarded. This happened even though
we tell libavcodec that we do discarding of padding manually. It simply
happened inside the codec, not libavcodec's general initial padding
handling. In addition, the first output decoded frame seems to contain
partial data. (Unlike the opus decoder, it doesn't report any padding at
all.)
The Opus decoder (again libavcodec only tested) reports an initial
padding, but it appears to be too small, and it sounds right only with 2
packets discarded. So its status doesn't change.
I'm not sure why I need 2 frames for mp3, but with that value I had
success on the samples I tested.
Some audio codecs will discard or cut the first frames when starting
decoding. While some of that works through well-defined mechanisms (like
initial padding), it's in general very codec/decoder specific, and not
really predictable. In addition, libavcodec isn't very good with
reporting "dropped" frames (and our internal interface reflects this).
It seems our only chance to handle this is through timestamps.
In theory, it would be best to discard frames that have timestamps
before the "resume" position. But since video has reordered timestamps,
we'd need to put some effort into finding this position. The first video
packet doesn't necessarily contain this timestamp. (In theory, we could
just do this in the demuxer with some trivial additional work, and set
it on the packet's kf_seek_pts field. Although this field is supposed to
contain just this value, the field is considered demuxer-internal, and I
didn't want to make matters worse by reusing it for the interface to the
decoder. With some more effort and buffering, we could calculate this
value within the decoder, but fuck that.)
The approach chosen in this commit is setting the timestamp to NOPTS.
This will break in some obscure situations, but backward playback is a
pretty obscure feature to begin with, so I considered this a reasonable
implementation choice.
Before passing a preroll packet to the decoder, its timestamps are set
to NOPTS. Frames that are returned from the decoder and have the NOPTS
timestamp are considered preroll and are discarded. This happens only
during "preroll" mode (preroll_discard==true), so it doesn't affect
normal forward playback. It's disabled on the first packet with a
timestamp, so it can tolerate some crap even in backward playback mode.
We don't check the dts fields out of laziness (decoded audio frames
don't even have this field).
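Both halves of the mechanism as a sketch (the state and function names
are made up, not the actual decoder wrapper code; MP_NOPTS_VALUE is
mpv's real "no timestamp" value):

    #include <stdbool.h>

    #define MP_NOPTS_VALUE (-0x1p+63)  // as in mpv's common.h

    struct state { bool preroll_discard; };

    // Feeding side: wipe the timestamps of preroll packets.
    static void prepare_packet(struct state *st, double *pts, double *dts)
    {
        if (st->preroll_discard)
            *pts = *dts = MP_NOPTS_VALUE;
    }

    // Output side: drop NOPTS frames; the first timestamped frame ends
    // preroll mode, which also tolerates some crap in backward mode.
    static bool keep_frame(struct state *st, double frame_pts)
    {
        if (!st->preroll_discard)
            return true;
        if (frame_pts == MP_NOPTS_VALUE)
            return false;
        st->preroll_discard = false;
        return true;
    }
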
I considered an approach using the EDL clipping infrastructure (as
mentioned in the last two paragraphs of the commit message of commit
"demux_lavf: implement bad hack for backward playback of wav"). This
didn't work, and I blamed timestamp rounding within mpv for it. But the
problem was actually due to Matroska-rounded timestamps. Since the audio
frame size isn't exactly aligned to 1ms, there will be an overlap (or
gap) in the timestamps. This overlap is much smaller than 1ms, since
it's just the sub-millisecond remainder part of the audio frame size.
This makes the timestamps discontinuous and unreliable for the purpose
we wanted to use it. We can't just smooth the timestamps in the demuxer
either.
Matroska has this weird concept of "lacing", which are really sub-blocks
packed into a larger actual block. They are demuxed as individual
packets, because that's what the decoder needs. Basically they're a
Matroska hack to reduce per-packet muxing overhead.
One problem is that these sub-blocks don't have timestamps. The
timestamps need to be created from the "default duration". If this
default duration isn't in the file header (or if we drop it when it has
a known broken value), the resulting packets won't have a timestamp.
This is an unusual and weird situation that may confuse the demuxer layer
(and backward playback in particular). Fix this by not setting the
keyframe flag for these.
This affects only audio packets. Subtitle lacing is explicitly not
supported (it would at least lack a way to specify durations), and video
won't usually use it for real video codecs (not every frame is a
"keyframe", and there's timestamp reordering).
Clarify existing semantics for the --start/--end/--length options.
De-emphasize the difference between absolute and relative timestamps,
since they've not been different by default since mpv 0.14.
Document a bug that also happens to work as a feature: if the option
value begins with spaces, the code for checking for relative timestamps
is inactive, and they're always considered absolute. The check is done
on the first character of the string, so even a negative timestamp will
be treated as absolute.
Yes, this is useful in extremely rare situations, such as when you
really want to send a specific timestamp (even a negative one) to the
demuxer.
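The described check boils down to this (a sketch of the behavior, not
the actual parser code):

    #include <stdbool.h>

    // " -10" starts with a space, so it's never classified as relative,
    // and the (negative) timestamp passes through as absolute.
    static bool looks_relative(const char *s)
    {
        return s[0] == '+' || s[0] == '-';
    }

So something like --start=" -10" (note the leading space) would hit the
absolute path, per the described behavior.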
Another shitty obscure feature that usually nobody notices.
Unsurprisingly, it doesn't go well with backward playback mode.
If you use --keep-open in forward playback mode, and seek past the end
of the file, it tries to seek to the very last frame. The demuxer will
seek to the last "keyframe" before the end (i.e. some frames to go in
most cases), and trying to hr-seek to the file duration often won't cut
it, so this requires some special code. The function at hand seeks
"close" to the end, and then stops hr-seek when the last frame us
encountered (simple enough and very effective).
In backward playback mode, start and end are reversed, and we need to
seek "close" to the start of the file instead. Simple enough to do, and
it works.
One problem is that command.c has some weird logic to make going beyond
the last chapter either end playback (--keep-open=no), or jump to the
last frame. Now this will jump to the first frame, which is weird, but
let's ignore this.
Another problem is that seeking before playback start position hits EOF
in backward playback mode, which is a demuxer bug, and has nothing to do
with this code. But it triggers this code, so seeking before the start
will show the "last" frame. (My description is a mess with directions.
Figure it out yourself.)
Obviously should seek back to the end of the file when it loops.
Also remove some minor code duplication around start times. This isn't
the correct solution by the way. Rather than hoping we know a reasonable
start/end time, this stuff should instruct the demuxer to seek to the
exact location. It'll work with 99% of all normal files, but add an
appropriate comment (that basically says the function is bullshit) to
get_start_time() anyway.
This changes the behavior of the --ab-loop-a/b options. In addition, it
makes it work with backward playback mode.
The most obvious change is that both the A and B points need to be
set now before any looping happens. Unlike before, unset points don't
implicitly use the start or end of the file. I think the old behavior
was a feature that was explicitly added/wanted. Well, it's gone now.
This is because of 2 reasons:
1. I never liked this feature, and it always got in my way (as user).
2. It's inherently annoying with backward playback mode.
In backward playback mode, the user wants to set A/B in the wrong order.
The ab-loop command will first set A, then B, so if you use this command
during backward playback, A will be set to a higher timestamp than B.
If you switch back to forward playback mode, the loop would stop
working. I want the loop to just continue to work, and the chosen
solution conflicts with the removed feature.
The order issue above _could_ be fixed by also switching the AB-loop
user option values around on direction switch. But there are no other
instances of option changes magically affecting other options, and doing
this would probably lead to unexpected misery (dying from corner cases
and such).
Another solution is sorting the A/B points by timestamps after copying
them from the user options. Then A/B options set in backward mode will
work in forward mode. This is the chosen solution. If you sort the
points, you don't know anymore whether the unset point is supposed to
signify the end or the start of the file.
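The chosen solution as a sketch (modeled on the description above, not
the actual code):

    // Copy the A/B points from the user options and sort them, so points
    // set during backward playback keep working after a direction switch.
    static void get_ab_times(double opt_a, double opt_b,
                             double *a, double *b)
    {
        *a = opt_a;
        *b = opt_b;
        if (*a > *b) {
            double tmp = *a;  // mpv would use its MPSWAP() helper here
            *a = *b;
            *b = tmp;
        }
    }
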
The AB-loop code is slightly better abstracted now, so it should be easy
to restore the removed feature. It would still require coming up with a
solution for backwards playback, though.
A minor change is that if one point is set and the other is unset, I'm
rendering both the chapter markers and the marker for the set point.
Why? I don't know. My test file had chapters, and I guess I decided this
looked better.
This commit also fixes some subtle and obvious issues that I already
forgot about when I wrote this commit message. It cleans up some minor
code duplication and nonsense too.
Regarding backward playback, the code uses an unsanitary mix of internal
("transformed") and user timestamps. So the play_dir variable appears
more than usual.
To mention one unfixed issue: if you set an AB-loop that is completely
past the end of the file, it will get stuck in an infinite seeking loop
once playback reaches the end of the file. Fixing this reliably seemed
annoying, so the fix is "just don't do this". It's not a hard freeze
anyway.
This code attempts to seek to the last frame by seeking close to the
end, and then decoding until the last frame has been reached. To do so
it sets hrseek_lastframe, which for video enables some logic to "catch"
this last frame, and completely ignores hrseek_pts. But audio may still
use hrseek_pts.
I don't know what the original author (me) was thinking, if anything,
when setting this variable to 1e99, essentially a random number. It's
very large, and a timestamp like this will never happen, so it does its
job. But it's random.
Use INFINITY instead. It will skip all audio samples in the audio code
correctly. This change doesn't fix anything, but it does get rid of the
random looking number.
The get_play_start_pts() function was supposed to return "rebased"
(relative to 0) timestamps. This was roundabout, because one of 2
callers just added the offset back, and the other caller actually
expected an absolute timestamp.
Change rel_time_to_abs() (whose return value get_play_start_pts()
returns without further changes) to return absolute times.
This should fix that absolute and relative times passed to --start and
--end were treated the same, which can't be right. It probably also
fixes --end if --rebase-start-time=no is used (which can't have been
correct either).
All in all I'm not sure why --rebase-start-time=no or absolute vs.
relative times in --start/--end even exist, when they were incorrectly
implemented for years.
Untested, because no sample file and I don't care. However, if anyone
cares, and I got it wrong, I hope it's simple to fix.
Has been deprecated for almost 3 years. Manpage didn't mention the
deprecation, but CLI and release notes did. It wouldn't be much effort
to keep this option working, but I just don't see the damn point.
--start/--end can specify chapters using special syntax, which is
equivalent.
We need to transform the timestamp returned by get_play_end_pts().
I considered making it return the transformed timestamp directly. There
are 4 callers; 2 need transformed timestamps, 2 don't. So I guess it
doesn't matter.