The demuxer cache benefits slightly from knowing where the current file
or stream begins. For example, seeking "left most" when the start is
cached would not trigger a low level seek (which would be followed by
messy range joining when it notices that the newly demuxed packets
overlap with an existing range).
Unfortunately, since multimedia is so crazy (or actually FFmpeg in its
quite imperfect attempt to be able to demux anything), it's hard to tell
where a file starts. There is no feedback whether a specific seek went
to the start of the file. Packets are not tagged with a flag indicating
they were demuxed from the start position. There is no index available
that could be used to cross-check this (even if the file contains a full
and "perfect" index, like mp4). You could go by the timestamps, but who
says streams start at 0? Streams can start at extremely high
timestamps (transport streams like to do that), or they could start
at negative times (e.g. files with audio pre-padding will do that), and
maybe some file formats simply allow negative timestamps and could start
at any negative time. Even if the affected file formats don't allow it
in theory, they may in practice. In addition, FFmpeg exports a
start_time field, which may or may not be useful. (mpv's internal mkv
demuxer also exports such a field, but doesn't bother to set it for
efficiency and robustness reasons.)
Anyway, this is all a huge load of crap, so I decided that if the user
performs a seek command to time 0 or earlier, we consider the first
packet demuxed from each stream to be at the start of the file. In
addition, just trust the start_time field. This is the "shitty" part of
this commit.
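
For illustration, the heuristic boils down to something like this (a
minimal sketch with made-up struct and field names, not the actual
demux.c code):

    #include <stdbool.h>

    // Illustrative stand-ins, not the actual demux.c data structures.
    struct sketch_packet { double pts; bool marks_file_start; };
    struct sketch_stream { bool saw_first_packet_after_seek; };

    // A seek to time 0 or earlier is assumed to land at the file start.
    static bool seek_targets_file_start(double seek_pts)
    {
        return seek_pts <= 0;
    }

    // The first packet demuxed from each stream after such a seek is
    // treated as being at the start of the file.
    static void mark_packet(struct sketch_stream *ds,
                            struct sketch_packet *pkt,
                            bool seeked_to_start)
    {
        if (seeked_to_start && !ds->saw_first_packet_after_seek) {
            pkt->marks_file_start = true;
            ds->saw_first_packet_after_seek = true;
        }
    }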
One common case of negative timestamps is audio pre-padding. Demuxers
normally behave sanely, and will treat 0 as the start of the file, and
the first packets demuxed will have negative timestamps (since they
contain data to discard), which doesn't break our assumptions in this
commit. (Although, unfortunately, they do break some other demuxer
cache assumptions, and the first cached range will be shown as starting
at a negative time.)
Implementation-wise, this is quite simple. Just split the existing
initial_state flag into two, since we want to deal with two separate
aspects. In addition, this avoids the refresh seek on track switching
when it happens right after a seek, instead of only after opening the
demuxer.
There were 3 packet reading functions: the "old" demux_read_packet()
that blocked (leftover from MPlayer times, but was still used until
recently by some obscure code), the "new" demux_read_packet_async(), and
the special demux_read_any_packet(), that is used by pseudo-demuxers
like demux_edl.
The first two could be used both in threaded and un-threaded mode. This
made 5 cases in total. Some bits of logic were spread across all of them.
Unify the logic. A recent commit made demux_read_packet() private, and
the code for it in threaded mode disappears. The difference between
threaded and un-threaded mode is minimized.
It's possible that this commit causes random regression. Enjoy.
This is a regression. According to git bisect, commit 4b2cbefc0 broke
this. I'm not sure why; the commit in question should have been a
refactor with no functional changes. The commit message mentions brain
damage, though, and this is where I stopped my investigation.
Since with --no-demuxer-thread, there is no read-ahead, the concept of
an underrun makes no sense. We might block the playback thread long
enough that the user perceives an interruption, but we don't know about
this, nor do we care. Only when the demuxer is threaded can it happen
that we can't immediately return a packet to the user, which is an
underrun.
(The underrun detection is a bit janky and roundabout, to be honest.
Probably could be simpler by setting a per-stream underrun flag simply
when no packet/EOF is available when the decoder reads a packet.)
It's interesting that the regression existed for half a year, yet nobody
cared. I guess nobody uses --no-demuxer-thread.
Fixes: 4b2cbefc0d
There are 3 packet reading functions in the demux API, which all
function completely differently. One of them, demux_read_packet(), has
only 1 caller, which is in dec_sub.c. Change this caller to use
demux_read_packet_async() instead. Since it really wants to do a
blocking call, set up some proper waiting. This uses mp_dispatch_queue,
because even though it's overkill, it needs the least code.
In practice, waiting actually never happens. This code is only called on
code paths where everything is already read into memory (libavformat's
subtitle demuxers simply behave this way). It's still a bit of a
"coincidence", so implement it properly anyway.
If subtitle decoder init fails, we still need to unset the demuxer
wakeup callback. Add a sub_destroy() call to the failure path. This also
happens to fix a missed pthread_mutex_destroy() call (in practice this
was a nop, or a memory leak on BSDs).
I'm not sure about this, but it looks like a bug. If a stream didn't
have packets, but the joined range does, the stream should obviously
read the packets added by the joined range. Until now, due to
reader_head being NULL, reading was only resumed if a _new_ packet was
added by actual demuxing (in add_packet_locked()), which means the
stream would suddenly skip ahead, past the original end of the joined
range.
Change it so that it will pick up the new range.
Also, clear the skip_to_keyframe flag. Nothing useful can come from this
flag being set; in the first place, the first packet of a range (that
isn't the current range) should start with a keyframe. Some code
probably enforced it (although it's fuzzy).
Completely untested.
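
The change amounts to roughly the following (an illustrative sketch;
identifiers are stand-ins for the real demux.c structures):

    #include <stdbool.h>
    #include <stddef.h>

    struct sketch_packet { struct sketch_packet *next; };
    struct sketch_queue  { struct sketch_packet *head; };
    struct sketch_stream {
        struct sketch_packet *reader_head;
        bool skip_to_keyframe;
    };

    static void on_ranges_joined(struct sketch_stream *ds,
                                 struct sketch_queue *joined)
    {
        if (!ds->reader_head) {
            // The stream had no packets before, but the joined range does:
            // resume reading from the joined range instead of waiting for
            // newly demuxed packets (which would skip past the old end of
            // the joined range).
            ds->reader_head = joined->head;
            // The first packet of a non-current range should start with a
            // keyframe anyway, so this flag could only drop useful packets.
            ds->skip_to_keyframe = false;
        }
    }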
When doing a seek to the end of the cache, ds->skip_to_keyframe can be
set to true. Then some packets passed to add_packet_locked() may have to
be skipped. In some aspects, the skipped packet was still treated as if
it was going to be returned to the reader.
It almost doesn't matter though: it only caused a redundant wakeup_ds()
call, and could pass the packet to the stream recorder. Fix it anyway.
This fixes a weird delay ("buffering") that occurred when seeking into
the last part of a seekable range. The exact case which triggers it is
when SEEK_FORWARD is used, and the seek pts is after the second-last
keyframe, but before the end of the range. In that case,
find_seek_target() returned NULL, and the cache layer waited for the
underlying demuxer to return the _next_ keyframe before resuming
playback.
find_seek_target() returned NULL, because the last keyframe had
kf_seek_pts unset. This field contains the lowest PTS in the packet
range from the keyframe until the next keyframe (or EOF). For normal
seeks, this is needed because keyframes don't necessarily have the
minimum PTS in the packet range, so it needs to be computed by waiting
for all packets until the next keyframe (or EOF).
Strictly speaking, this behavior was correct, but it meant that the
caller would set ds->skip_to_keyframe, which waits for the next newly
demuxed keyframe. No packets were returned to the decoder until this
happened, usually resulting in the frontend entering "buffering" mode.
What it really needs to do is to return the last keyframe in the cache.
In this situation, the seek target points in the middle of the last
completely cached packet range (as delimited by keyframes), and
SEEK_FORWARD is supposed to skip to the next keyframe. This is in line
with the basic assumptions the packet cache makes (e.g. the keyframe
flag means it's possible to start decoding, and the frames decoded from
it and following packets will strictly have PTS values above the
previous keyframe range). This means in this situation the kf_seek_pts
value doesn't matter either.
So fix this situation by explicitly detecting it and then returning the
last cached keyframe.
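
Simplified, the fixed search looks roughly like this (a sketch only;
the real find_seek_target() differs in details, and SKETCH_NOPTS stands
in for MP_NOPTS_VALUE):

    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define SKETCH_NOPTS (-HUGE_VAL)   // stand-in for MP_NOPTS_VALUE

    // Illustrative stand-ins, not the actual demux.c structures.
    struct sketch_packet {
        struct sketch_packet *next;
        bool keyframe;
        double kf_seek_pts;  // lowest PTS of the keyframe's range, or NOPTS
    };

    // Pick the seek target: for SEEK_FORWARD the first keyframe at or after
    // pts, otherwise the last keyframe at or before pts. The fix: if no such
    // keyframe exists because the last range's kf_seek_pts is still unset,
    // but pts is within the cached range, return the latest cached keyframe.
    static struct sketch_packet *
    find_seek_target_sketch(struct sketch_packet *head, double pts,
                            bool forward, double queue_seek_end)
    {
        struct sketch_packet *target = NULL, *keyframe_latest = NULL;
        for (struct sketch_packet *pkt = head; pkt; pkt = pkt->next) {
            if (!pkt->keyframe)
                continue;
            keyframe_latest = pkt;
            if (pkt->kf_seek_pts == SKETCH_NOPTS)
                continue;   // range not yet delimited by the next keyframe
            if (forward) {
                if (pkt->kf_seek_pts >= pts && !target)
                    target = pkt;
            } else {
                if (pkt->kf_seek_pts <= pts)
                    target = pkt;
            }
        }
        if (!target && keyframe_latest && pts <= queue_seek_end)
            target = keyframe_latest;
        return target;
    }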
Should the search loop look at all packets, instead of only keyframe
ones? This would mean it can know that it's within the last keyframe
range (without looking at queue->seek_end). Maybe this would be a bit
more natural for the SEEK_FORWARD case, but due to PTS reordering it
doesn't sound like a useful thing to do.
Should skip_to_keyframe be checked by the code that sets kf_seek_pts to
a known value? This wouldn't help too much; the frontend would still go
into "buffering" mode for no reason until the packet range is completed,
although it would resume from the correct range.
Should a NULL return always unconditionally use keyframe_latest? This
makes sense because the seek PTS is usually already in the cached range,
so this is the only case that should happen. But there are scary special
cases, like sparse subtitle streams, or other uses of find_seek_target()
which could be out of range now or in future. Basically, don't "risk"
it.
One other potential problem with this is that the "adjust seek target"
code will be disabled in this case. It checks kf_seek_pts, and if it's
unset, the adjustment is not done. Maybe this could be changed to use
the queue's seek_end time, but I'm not sure if this is fully kosher. On
the other hand, I think the main use for this adjustment is with
backwards seeks, so this shouldn't matter.
A previous commit dealing with audio/video stream merging mentioned how
seeking forward entered "buffering" mode for unknown reasons; this
commit fixes this issue.
demux_timeline doesn't do any transport accesses itself. The slave
demuxers do this (these will actually access the stream layer and
perform e.g. network accesses). As a consequence, demux_timeline always
reported 0 bytes read, and network speed display didn't work.
Fix this by awkwardly reporting the amount of read bytes upwards. This
is not very nice, and requires explicit calls whenever the slave "might"
have read data.
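
Conceptually, the upward reporting is just delta forwarding, roughly
like this (hypothetical names; the real code uses different identifiers
and locking):

    #include <stdint.h>

    // Whenever the slave might have read data, the delta of its byte
    // counter is forwarded to the parent demuxer's counter.
    struct sketch_demuxer {
        int64_t bytes_read;   // total bytes this (sub-)demuxer has read
    };

    static void report_slave_bytes(struct sketch_demuxer *parent,
                                   struct sketch_demuxer *slave,
                                   int64_t *last_seen)
    {
        int64_t now = slave->bytes_read;
        parent->bytes_read += now - *last_seen;   // forward only new bytes
        *last_seen = now;
    }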
Due to the way the reporting is done, it only works if the slaves do not
run demuxer threads, which makes things even less nice. (Fortunately
they don't anyway, because it would be a waste of resources.) Some
identifiers contain the word "hack" as a warning.
Some of the stupidity comes from the fact that demux.c itself resets the
stats randomly in order to calculate the bytes_per_second value, which
is useless for a slave, but of course is still done, because demux.c
itself is not aware of whether it's on the slave or top-level layer.
Unfortunately, this will have to do.
In theory, the demuxer thread/cache layer should be separated from
demuxer implementations. This would get rid of all the awkwardness and
nonsense. For example, the only threading involved would be the caching
layer, completely separate from demuxers themselves. It'd be the only
thing that calculates speed rates for the player frontend, too (instead
of
doing it for each demuxer, even if unused).
It was an ugly hack, and the next commit will make it even uglier.
Slightly reduce the ugliness to prevent death of too many brain cells,
though it's still an ugly hack.
The cleanup is really minor, but I guess the following commit would be
much worse otherwise. In particular, this commit checks accesses
(instead of having a public field with evil access rules), which should
avoid misunderstandings and incorrect use. Strictly speaking, the added
field is redundant, but the next commit complicates it a bit.
Bug caused by a recent refactor. The function is not supposed to unlock
the mutex anymore, but the unlock was forgotten on an obscure code path.
This could sometimes lead to crashes, depending on libc behavior (when
an unlocked mutex is unlocked again, or a thread tries to unlock a
mutex locked by a different thread).
None of those fancy debug tools could tell me that. I found it by
switching to an error checking mutex and wrapping pthread calls into a
macro that checks their return values.
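
The debugging setup amounts to something like this (a minimal sketch of
the idea, not the actual macro used):

    #include <assert.h>
    #include <pthread.h>

    // Abort immediately if any pthread call reports an error.
    #define PT_CALL(call) do { int e_ = (call); assert(e_ == 0); } while (0)

    static pthread_mutex_t sketch_mutex;

    static void init_error_checking_mutex(void)
    {
        pthread_mutexattr_t attr;
        PT_CALL(pthread_mutexattr_init(&attr));
        // PTHREAD_MUTEX_ERRORCHECK makes unlocking an unlocked mutex, or a
        // mutex locked by another thread, return an error instead of
        // silently invoking undefined behavior.
        PT_CALL(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK));
        PT_CALL(pthread_mutex_init(&sketch_mutex, &attr));
        PT_CALL(pthread_mutexattr_destroy(&attr));
    }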
If the number of chapters is 0, the chapter list can be NULL. clang
complains that we pass NULL to qsort(). This is yet another pointless UB
that exists for no reason other than wasting your time.
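
The fix is essentially a guard around the call, roughly like this
(illustrative names):

    #include <stdlib.h>

    struct sketch_chapter { double pts; };

    static int cmp_chapter(const void *a, const void *b)
    {
        const struct sketch_chapter *c1 = a, *c2 = b;
        return (c1->pts > c2->pts) - (c1->pts < c2->pts);
    }

    static void sort_chapters(struct sketch_chapter *chapters, size_t num)
    {
        // Passing NULL to qsort() is undefined behavior even with a count
        // of 0, which is what clang's UBSan flags. Just skip the call.
        if (!chapters || num == 0)
            return;
        qsort(chapters, num, sizeof(*chapters), cmp_chapter);
    }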
Evil shit due to a huge ownership/lifetime mess of various objects.
demux_close_stream() caused 1. the mp_cancel handle to be lost (fix by
explicitly preserving it on the new dummy stream), and 2. a dangling
stream pointer to be passed to the timeline "demuxer" (use
demuxer->stream, which will never be a dangling pointer).
As one defensive programming measure, stop accessing the "stream"
variable in open_given_type(), even where it would still work. This
includes removing a redundant line of code, and removing the peek call,
which should not be needed anymore, as the remaining demuxers do this
mostly correctly.
The only thing left is the notification for track switching. Just get
rid of that.
There's probably no real reason to get rid of control(), but why not. I
think I was actually trying to do some real work but fuck that.
Subtitles (and a few other file types, like playlists) are not streamed,
but fully read on opening. This means keeping the file handle or network
socket open is a waste of resources and could cause other weird
behavior. This is why there's a hack to close them after opening.
Change this hack to make the demuxer itself do this, which is less
weird. (Until recently, demuxer->stream ownership was more complex,
which is why it was done this way.)
I always wanted to get rid of this, because it makes the ownership rules
for the stream pointer really awkward. demux_edl.c was the only
remaining user of this. Replace it with a semi-clever idea: the init
segment shit can be used to pass the "file" contents as a memory block,
and "memory://" itself provides an empty stream. I have no idea if this
actually works, because I didn't immediately find a test stream (would
have to be some youtube DASH shit).
Instead of going through those weird DEMUXER_CTRLs, query this
information directly. I'm not sure which kind of brain damage made me
use CTRLs for these. Since there are no other DEMUXER_CTRLs that make
sense for the frontend, remove the remaining infrastructure for them
too.
The stream size return was the only thing that still required doing
STREAM_CTRLs from frontend through the demuxer layer. This can be done
much easier, so rip it out. Also rip out the now unused infrastructure
for STREAM_CTRLs via demuxer layer.
Apparently this was so that when playing a video file from a .rar file,
it would load external subtitles with the same name (instead of looking
for mpv's rar:// mangled URL). This was requested on github almost 5
years ago. Seems like a shit feature, and why should I give a fuck? Drop
it, because it complicates some in-progress change.
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
I don't ever use them, so kill them.
Linux TV is excessively complex, and whenever I attempted to use it, it
didn't work well or would have required some major work to update it.
(For example, when I tried to use a webcam-type device with tv://, it
worked badly; even the libavdevice garbage worked better.)
The "program" property was rather complex and rather obscure. I didn't
ever use it. Should there ever be a proper use for it (maybe HLS stream
selection?), it should be rewritten anyway.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
This will enable the player core to terminate the demuxers in a "nicer"
way without having to block on network. If it just used demux_free(), it
would either have to block on network, or like currently, essentially
kill all I/O forcefully.
The API is slightly awkward, because demuxer lifetime is bound to its
allocation. On the other hand, changing that would also be awkward, and
introduce weird in-between states that would have to be handled in tons
of places.
Currently unused, to be used later.
Always give each demuxer its own mp_cancel instance. This makes
management of the mp_cancel things much easier. Also, instead of having
add/remove functions for mp_cancel slaves, replace them with a simpler
to use set_parent function. Remove cancel_and_free_demuxer(), which had
mpctx as parameter only to check an assumption. With this commit,
demuxers have their own mp_cancel, so add demux_cancel_and_free() which
makes use of it.
Them being separate is just dumb. Replace them with a single
demux_free() function, and free its stream by default. Not freeing the
stream is only needed in 1 special case (demux_disc.c), so use a special
flag to not free the stream in this case.
The properties/commands touched in this commit are all for obscure
special inputs (BD/DVD/DVB/TV), and they all block on the demuxer/stream
layer. For network streams, this blocking is very unwelcome. They will
affect playback and probably introduce pauses and frame drops. The
player can even freeze fully, and the logic that tries to make playback
abortable even if frozen complicates the player.
Since the mentioned accesses are not needed for network streams, but
they will block on network streams even though they're going to fail,
add a flag that coarsely enables/disables these accesses. Essentially it
establishes a whitelist of demuxers/streams which support them.
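
Conceptually, the flag works like this (a hedged sketch; the real field
name and call sites differ):

    #include <stdbool.h>

    // Only disc/TV-style demuxers set this; it stays false for network
    // streams, so the blocking accesses fail fast instead of freezing.
    struct sketch_demuxer {
        bool allow_blocking_controls;
    };

    static bool run_blocking_control(struct sketch_demuxer *d)
    {
        if (!d->allow_blocking_controls)
            return false;
        // ... perform the potentially blocking demuxer/stream access ...
        return true;
    }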
In theory you could access BD/DVD images over network (or add such
support; I don't think it's a thing in mpv). In these cases these
controls still can block and could even "freeze" the player completely.
Writing to the "program" and "cache-size" properties still can block
even for network streams. Just don't use them if you don't want freezes.
It seems a bit inappropriate to have dumped this into stream.c, even if
it's roughly speaking its main user. At least it made its way somewhat
unfortunately to other components not related to the stream or demuxer
layer at all.
I'm too greedy to give this weird helper its own file, so dump it into
thread_tools.c.
Probably a somewhat pointless change.
If a stream starts later than the others at the start of the file, it
shouldn't restrict the seek range to the time stamp where it begins.
This is similar to the previous commit, just for the other end.
Normally, the seek range is the minimum overlap of the cached ranges of
each stream. But if one of the streams ends earlier, this leads to the
seek range getting cut off, even if you could seek there.
Change it so that EOF streams cannot restrict the end of the seek range.
They can only extend it. This is the opposite from not-EOF streams, so
they need to be handled separately. In particular, they get excluded from
normal end range calculation, but when full EOF is reached, all streams
are EOF, and the maximum end time can be used to set the seek end time.
(In theory we could also take the max with the demuxer signaled total
file duration, but let's not for now.)
Also, if a stream is completely empty, essentially skip it, instead of
considering the range unseekable. (Also, we don't need to mess with
seek_start in this case, because it will be NOPTS and is skipped
anyway.)
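
As a rough sketch of the resulting rules (made-up structures; the real
update_seek_ranges() logic is more involved):

    #include <math.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define SKETCH_NOPTS (-HUGE_VAL)   // stand-in for MP_NOPTS_VALUE

    // Illustrative stand-in for a per-stream cached packet queue.
    struct sketch_queue { double range_end; bool is_eof; bool is_empty; };

    // Non-EOF streams restrict the seek range end (minimum wins); EOF
    // streams are excluded, and only once every stream is EOF does the
    // maximum end time apply. Completely empty streams are skipped.
    static double compute_seek_end(struct sketch_queue *queues, size_t n)
    {
        double end = SKETCH_NOPTS, eof_end = SKETCH_NOPTS;
        bool all_eof = true;
        for (size_t i = 0; i < n; i++) {
            struct sketch_queue *q = &queues[i];
            if (q->is_empty)
                continue;
            if (q->is_eof) {
                eof_end = fmax(eof_end, q->range_end);
            } else {
                all_eof = false;
                end = end == SKETCH_NOPTS ? q->range_end
                                          : fmin(end, q->range_end);
            }
        }
        return all_eof ? eof_end : end;
    }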
When the current packet queue was completely empty, and EOF was reached,
the queue->is_eof flag was not correctly set to true. Change this by
reading ds->eof to check whether the stream is considered EOF. We also
need to make sure update_seek_ranges() is called in this case, so change
the code to simply call it when queue->is_eof changes.
Also, read_packet() needs to call adjust_seek_range_on_packet() if
ds->eof changes. In that case, the decoder also needs to be notified
about EOF. So both of these should be called when ds->eof changes to
true. (Other code outside of this function deals with the case when
ds->eof is changed to false.)
In addition, this code was kind of shoddy about calling wakeup_ds()
correctly. It looks like there was an inverted condition, which sent a
wakeup to the decoder only when ds->eof was already true, which is
obviously bogus. The final EOF case tried to be somehow clever about
checking in->last_eof for notifying the codec, which is sort of OK, but
seems to be strictly worse than just checking whether ds->eof changed.
Fix these things.
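
Put together, the intended behavior is roughly the following
(hypothetical helper names, heavily simplified compared to the real
demux.c code):

    #include <stdbool.h>
    #include <stddef.h>

    struct sketch_queue  { bool is_eof; };
    struct sketch_stream { bool eof; struct sketch_queue *queue; };

    static void update_seek_ranges_sketch(struct sketch_queue *q) { (void)q; }
    static void wakeup_ds_sketch(struct sketch_stream *ds) { (void)ds; }

    // Apply a new per-stream EOF state. The decoder is woken up exactly
    // when ds->eof changes to true, and the seek ranges are recomputed
    // whenever the queue's EOF state flips, even if the queue is empty.
    static void set_stream_eof(struct sketch_stream *ds, bool eof)
    {
        if (ds->eof != eof) {
            ds->eof = eof;
            if (eof)
                wakeup_ds_sketch(ds);
        }
        if (ds->queue->is_eof != ds->eof) {
            ds->queue->is_eof = ds->eof;
            update_seek_ranges_sketch(ds->queue);
        }
    }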
Fixes several issues playing back mpegts with video streams marked
as having "still images". For example, see this video which has
frames only every 6s: https://s3.amazonaws.com/tmm1/music-choice.ts
Changes include:
- start playback right away, without waiting for first video frame
- do not consider the sparse video stream in demuxer underrun detection
- do not require multiple video frames for the VO
- use audio as the master stream for demuxer metadata events
- use audio stream for playback time
Signed-off-by: Aman Gupta <aman@tmm1.net>
This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)
It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).
This is still somewhat fishy, but in parts due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED we hope that FFmpeg will correctly
restore the correct metadata when the first packet is returned.
If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically ICY metadata sucks.
Some complexity is due to a microoptimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
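
The microoptimization is essentially a cheap per-packet check that
avoids any locking in the common case, along the lines of this generic
sketch (not the actual mpv data structures):

    #include <stdatomic.h>
    #include <stdbool.h>

    // Packets pay only for one relaxed atomic load; the more expensive
    // metadata lookup happens only if timed metadata exists at all.
    struct sketch_metadata_state {
        atomic_int num_timed_updates;   // stays 0 in the common case
    };

    static bool packet_needs_metadata_check(struct sketch_metadata_state *st)
    {
        return atomic_load_explicit(&st->num_timed_updates,
                                    memory_order_relaxed) > 0;
    }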
This fixes an issue where captions stop rendering after an
in-demuxer-cache seek, because the demuxer keeps waiting to find
a keyframe (ds->skip_to_keyframe set to true in execute_cache_seek).
When this happens, network calls are forcibly aborted (more or less),
but demuxers might keep going, as most of them do not check for forced
exits properly. This can possibly lead to broken packets being added.
Also do not attempt to read more packets in this situation.
Also do not print a stream open failed message if opening was aborted
anyway.
Since the demuxer cache addition, ds->queue->head can actually be set to
non-NULL, but the decoder can still be at EOF (with no packets to come).
This made it report an unknown buffered size, instead of 0. Fix this by
checking the decoder part of the packet queue instead.
Probably doesn't matter much, but fixes an annoying "???" on the CLI
status line in some situations.
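
Conceptually, the fix looks like this (illustrative field names, not the
real ones):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>

    struct sketch_stream {
        void    *queue_head;   // all cached packets, incl. already-read ones
        void    *reader_head;  // packets still to be returned to the decoder
        bool     eof;
        int64_t  fw_bytes;     // bytes queued ahead of the decoder
    };

    // Report the buffered amount based on the decoder's part of the queue.
    // With the demuxer cache, queue_head can be non-NULL while the decoder
    // is at EOF, which previously made the size look "unknown" instead of 0.
    static int64_t buffered_bytes(struct sketch_stream *ds)
    {
        if (!ds->reader_head)
            return ds->eof ? 0 : -1;   // -1: unknown / still waiting
        return ds->fw_bytes;
    }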