The only thing left is the notification for track switching. Just get
rid of that.
There's probably no real reason to get rid of control(), but why not. I
think I was actually trying to do some real work but fuck that.
Subtitles (and a few other file types, like playlists) are not streamed,
but fully read on opening. This means keeping the file handle or network
socket open is a waste of resources and could cause other weird
behavior. This is why there's a hack to close them after opening.
Change this hack to make the demuxer itself do this, which is less
weird. (Until recently, demuxer->stream ownership was more complex,
which is why it was done this way.)
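Roughly, the idea is something like this (a sketch with made-up names, not the actual code):
    #include <stdbool.h>
    #include <stddef.h>
    struct stream;                      // opaque; owned by the demuxer now
    void free_stream(struct stream *s); // assumed stream destructor
    struct demuxer {
        struct stream *stream;
        bool fully_read;    // set by demuxers that slurp the whole file (subs, playlists)
    };
    // Called once right after opening: if everything was read already, the
    // stream handle/socket is dead weight, so the demuxer drops it itself.
    static void maybe_close_stream(struct demuxer *d)
    {
        if (d->fully_read && d->stream) {
            free_stream(d->stream);
            d->stream = NULL;   // demuxer keeps serving packets from memory
        }
    }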
I always wanted to get rid of this, because it makes the ownership rules
for the stream pointer really awkward. demux_edl.c was the only
remaining user of this. Replace it with a semi-clever idea: the init
segment shit can be used to pass the "file" contents as a memory block,
and "memory://" itself provides an empty stream. I have no idea if this
actually works, because I didn't immediately find a test stream (would
have to be some youtube DASH shit).
Instead of going through those weird DEMUXER_CTRLs, query this
information directly. I'm not sure which kind of brain damage made me
use CTRLs for these. Since there are no other DEMUXER_CTRLs that make
sense for the frontend, remove the remaining infrastructure for them
too.
The stream size return was the only thing that still required doing
STREAM_CTRLs from frontend through the demuxer layer. This can be done
much easier, so rip it out. Also rip out the now unused infrastructure
for STREAM_CTRLs via demuxer layer.
Apparently this was so that when playing a video file from a .rar file,
it would load external subtitles with the same name (instead of looking
for mpv's rar:// mangled URL). This was requested on github almost 5
years ago. Seems like a shit feature, and why should I give a fuck? Drop
it, because it complicates some in-progress change.
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but anything you're receiving.
I don't ever use them, so kill them.
Linux TV is excessively complex, and whenever I attempted to use it, it
didn't work well or would have required some major work to update it.
(For example, when I tried to use a webcam-type device with tv://, it
worked badly; even the libavdevice garbage worked better.)
The "program" property was rather complex and rather obscure. I didn't
ever use it. Should there ever be a proper use for it (maybe HLS stream
selection?), it should be rewritten anyway.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
This will enable the player core to terminate the demuxers in a "nicer"
way without having to block on network. If it just used demux_free(), it
would either have to block on network, or like currently, essentially
kill all I/O forcefully.
The API is slightly awkward, because demuxer lifetime is bound to its
allocation. On the other hand, changing that would also be awkward, and
introduce weird in-between states that would have to be handled in tons
of places.
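In rough terms the shape of the API and its use in the player core is something like this (simplified; the function names here are made up):
    #include <stdbool.h>
    struct demuxer;
    void demux_start_free(struct demuxer *d);   // begin async shutdown; unblocks network
    bool demux_free_is_done(struct demuxer *d); // poll whether shutdown has finished
    void demux_free(struct demuxer *d);         // final, blocking free of the allocation
    void run_playloop_once(void);               // stand-in for the player's event loop
    // Player core sketch: keep the core responsive while the demuxer winds
    // down, and only do the (now cheap) blocking free at the very end.
    static void terminate_demuxer(struct demuxer *d)
    {
        demux_start_free(d);
        while (!demux_free_is_done(d))
            run_playloop_once();
        demux_free(d);
    }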
Currently unused, to be used later.
Always give each demuxer its own mp_cancel instance. This makes
management of the mp_cancel things much easier. Also, instead of having
add/remove functions for mp_cancel slaves, replace them with a simpler
to use set_parent function. Remove cancel_and_free_demuxer(), which had
mpctx as parameter only to check an assumption. With this commit,
demuxers have their own mp_cancel, so add demux_cancel_and_free() which
makes use of it.
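The intended semantics, as a sketch (simplified; not the exact declarations):
    struct mp_cancel;   // opaque cancellation token
    // Triggering the parent also triggers every child linked to it, so the
    // player core can abort all demuxers by triggering a single root token.
    void mp_cancel_set_parent(struct mp_cancel *child, struct mp_cancel *parent);
    struct demuxer {
        struct mp_cancel *cancel;   // now always created and owned by the demuxer
    };
    // Triggers d->cancel to unblock any pending network I/O, then frees d.
    void demux_cancel_and_free(struct demuxer *d);
    // Usage sketch: hook a freshly created demuxer into the player's root token.
    //     mp_cancel_set_parent(demuxer->cancel, root_cancel);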
Them being separate is just dumb. Replace them with a single
demux_free() function, and free its stream by default. Not freeing the
stream is only needed in 1 special case (demux_disc.c), so use a special
flag to not free the stream in that case.
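A sketch of the resulting function (field and helper names are illustrative, not the real ones):
    #include <stdbool.h>
    #include <stddef.h>
    struct stream;
    struct demuxer {
        struct stream *stream;
        bool keep_stream_on_free;   // illustrative flag; only demux_disc.c-style users set it
    };
    void free_stream(struct stream *s);             // assumed stream destructor
    void demux_free_internal(struct demuxer *d);    // stand-in for the actual teardown
    void demux_free(struct demuxer *d)
    {
        if (!d)
            return;
        if (d->stream && !d->keep_stream_on_free)
            free_stream(d->stream);     // default: the stream dies with the demuxer
        demux_free_internal(d);         // packets, threads, the allocation itself
    }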
The properties/commands touched in this commit are all for obscure
special inputs (BD/DVD/DVB/TV), and they all block on the demuxer/stream
layer. For network streams, this blocking is very unwelcome. They will
affect playback and probably introduce pauses and frame drops. The
player can even freeze fully, and the logic that tries to make playback
abortable even if frozen complicates the player.
Since the mentioned accesses are not needed for network streams, but
they will block on network streams even though they're going to fail,
add a flag that coarsely enables/disables these accesses. Essentially it
establishes a whitelist of demuxers/streams which support them.
In theory, you could access BD/DVD images over a network (or add such
support, I don't think it's a thing in mpv). In these cases these
controls still can block and could even "freeze" the player completely.
Writing to the "program" and "cache-size" properties still can block
even for network streams. Just don't use them if you don't want freezes.
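Conceptually the flag works like this (sketch; the real name and return codes differ):
    #include <stdbool.h>
    struct demuxer {
        // Whitelist flag: only set by the BD/DVD/DVB/TV demuxers/streams,
        // i.e. the inputs where these blocking controls actually make sense.
        bool allows_blocking_controls;
    };
    // Property/command handlers bail out early instead of blocking on network.
    static int handle_disc_property(struct demuxer *d)
    {
        if (!d->allows_blocking_controls)
            return -1;  // "property unavailable" for normal/network inputs
        // ... perform the potentially blocking BD/DVD/DVB/TV access ...
        return 0;
    }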
It seems a bit inappropriate to have dumped this into stream.c, even if
it's, roughly speaking, its main user. At the least, it has somewhat
unfortunately made its way into other components not related to the stream
or demuxer layer at all.
I'm too greedy to give this weird helper its own file, so dump it into
thread_tools.c.
Probably a somewhat pointless change.
If a stream starts later than the others at the start of the file, it
shouldn't restrict the seek range to the time stamp where it begins.
This is similar to the previous commit, just for the other end.
Normally, the seek range is the minimum overlap of the cached ranges of
each stream. But if one of the streams ends earlier, this leads to the
seek range getting cut off, even if you could seek there.
Change it so that EOF streams cannot restrict the end of the seek range.
They can only extend it. This is the opposite of not-EOF streams, so
they need to be handled separately. In particular, they get excluded from
normal end range calculation, but when full EOF is reached, all streams
are EOF, and the maximum end time can be used to set the seek end time.
(In theory we could also take the max with the demuxer signaled total
file duration, but let's not for now.)
Also, if a stream is completely empty, essentially skip it, instead of
considering the range unseekable. (Also, we don't need to mess with
seek_start in this case, because it will be NOPTS and is skipped
anyway.)
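In pseudocode-ish C, the end calculation now works roughly like this (simplified; struct and field names are made up, and NOPTS stands in for MP_NOPTS_VALUE):
    #include <stdbool.h>
    #define NOPTS (-1e18)
    struct stream_range {
        bool eof;           // this stream has reached EOF
        double queue_end;   // end of its cached range, NOPTS if nothing cached
    };
    static double calc_seek_end(struct stream_range *s, int num)
    {
        bool all_eof = true;
        double seek_end = NOPTS, eof_max = NOPTS;
        for (int n = 0; n < num; n++) {
            if (s[n].queue_end == NOPTS)
                continue;                       // empty stream: skip it entirely
            if (s[n].eof) {
                if (eof_max == NOPTS || s[n].queue_end > eof_max)
                    eof_max = s[n].queue_end;   // EOF streams can only extend
            } else {
                all_eof = false;
                if (seek_end == NOPTS || s[n].queue_end < seek_end)
                    seek_end = s[n].queue_end;  // normal streams restrict (minimum)
            }
        }
        if (all_eof && eof_max != NOPTS)
            seek_end = eof_max;                 // full EOF: take the maximum end
        return seek_end;
    }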
When the current packet queue was completely empty, and EOF was reached,
the queue->is_eof flag was not correctly set to true. Change this by
reading ds->eof to check whether the stream is considered EOF. We also
need to make sure update_seek_ranges() is called in this case, so change
the code to simply call it when queue->is_eof changes.
Also, read_packet() needs to call adjust_seek_range_on_packet() if
ds->eof changes. In that case, the decoder also needs to be notified
about EOF. So both of these should be called when ds->eof changes to
true. (Other code outside of this function deals with the case when
ds->eof is changed to false.)
In addition, this code was kind of shoddy about calling wakeup_ds()
correctly. It looks like there was an inverted condition that sent a
wakeup to the decoder only when ds->eof was already true, which is
obviously bogus. The final EOF case tried to be somehow clever about
checking in->last_eof for notifying the codec, which is sort of OK, but
seems to be strictly worse than just checking whether ds->eof changed.
Fix these things.
Fixes several issues playing back mpegts with video streams marked
as having "still images". For example, see this video which has
frames only every 6s: https://s3.amazonaws.com/tmm1/music-choice.ts
Changes include:
- start playback right away, without waiting for first video frame
- do not consider the sparse video stream in demuxer underrun detection
- do not require multiple video frames for the VO
- use audio as the master stream for demuxer metadata events
- use audio stream for playback time
Signed-off-by: Aman Gupta <aman@tmm1.net>
This makes ICY title changes show up at approximately the correct time,
even if the demuxer buffer is huge. (It'll still be wrong if the stream
byte cache contains a meaningful amount of data.)
It should have the same effect for mid-stream metadata changes in e.g.
OGG (untested).
This is still somewhat fishy, but in parts due to ICY being fishy, and
FFmpeg's metadata change API being somewhat fishy. For example, what
happens if you seek? With FFmpeg AVFMT_EVENT_FLAG_METADATA_UPDATED and
AVSTREAM_EVENT_FLAG_METADATA_UPDATED, we hope that FFmpeg will restore
the correct metadata when the first packet is returned.
If you seek with ICY, we're out of luck, and some audio will be
associated with the wrong tag until we get a new title through ICY
metadata update at an essentially random point (it's mostly inherent to
ICY). Then the tags will switch back and forth, and this behavior will
stick with the data stored in the demuxer cache. Fortunately, this can
happen only if the HTTP stream is actually seekable, which it usually is
not for ICY things. Seeking doesn't even make sense with ICY, since you
can't know the exact metadata location. Basically, ICY metadata sucks.
Some complexity is due to a micro-optimization: I didn't want additional
atomic accesses for each packet if no timed metadata is used. (It
probably doesn't matter at all.)
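The rough mechanism, as a sketch (struct and function names are invented for illustration):
    #include <stdatomic.h>
    #include <stddef.h>
    struct mp_tags;             // mpv's tag dictionary, opaque here
    struct timed_metadata {
        double pts;             // timestamp from which these tags apply
        struct mp_tags *tags;   // snapshot of the metadata at that point
    };
    struct metadata_list {
        // Fast path: if no timed metadata was ever recorded, skip the
        // per-packet lookup entirely (the micro-optimization mentioned above).
        atomic_bool have_timed;
        struct timed_metadata **events;
        int num_events;
    };
    // Roughly what happens when a packet is handed to the decoder: pick the
    // newest metadata snapshot that is not newer than the packet itself.
    static struct timed_metadata *lookup_timed_metadata(struct metadata_list *l,
                                                        double pkt_pts)
    {
        if (!atomic_load(&l->have_timed))
            return NULL;
        struct timed_metadata *best = NULL;
        for (int n = 0; n < l->num_events; n++) {
            if (l->events[n]->pts <= pkt_pts)
                best = l->events[n];
        }
        return best;
    }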
This fixes an issue where captions stop rendering after an
in-demuxer-cache seek, because the demuxer keeps waiting to find
a keyframe (ds->skip_to_keyframe set to true in execute_cache_seek).
When this happens, network calls are forcibly aborted (more or less),
but demuxers might keep going, as most of them do not check for forced
exits properly. This can possibly lead to broken packets being added.
Also do not attempt to read more packets in this situation.
Also do not print a stream open failed message if opening was aborted
anyway.
Since the demuxer cache addition, ds->queue->head can actually be set to
non-NULL, but the decoder can still be at EOF (with no packets to come).
This made it report an unknown buffered size, instead of 0. Fix this by
checking the decoder part of the packet queue instead.
Probably doesn't matter much, but fixes an annoying "???" on the CLI
status line in some situations.
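The check boils down to something like this (sketch; field names only approximate the real ones):
    #include <stdbool.h>
    #include <stddef.h>
    struct demux_packet;
    struct demux_queue { struct demux_packet *head; };        // cache side
    struct demux_stream {
        struct demux_queue *queue;
        struct demux_packet *reader_head;                     // decoder side
        bool eof;
    };
    // Base the "how much is buffered" answer on the decoder side of the queue,
    // not on queue->head (which may hold packets only for cache seeking).
    static bool decoder_has_buffered_data(struct demux_stream *ds)
    {
        return ds->reader_head != NULL;     // false => report 0, not "unknown"
    }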
It's a mess: mp3 files have user tags as global metadata (because the
id3v2 tag is global and there is only 1 stream), while OGG files have it
per-track (because it's per-stream on the lowest level). mpv needs to
try to make something nice out of the mess.
It did so by trying to detect audio-only OGG files, and then copying the
per-stream metadata to the global metadata. Make the heuristic for
detecting this slightly more clever, so it works for files with extra,
unrelated streams, like the awful libavformat cover art hack.
Fixes #5577.
Reduce backward/forward from 400MB/400MB to 50MB/150MB. Too many
complaints about high memory usage.
Note that external tracks (like ytdl DASH with external audio tracks)
will double the amounts, because an external track uses its own demuxer
and cache.
This is supposed to help making data flow easier and wakeup handling
more efficient. Once that change is done, reading a packet on any
stream won't have to wakeup and poll all decoders (which helps reducing
the mess even if all decoders are on the same thread).
This also improves the accuracy of wakeups by tracking better whether
a wakeup is needed.
And use it for 2 demuxer options. It could be used for more options
later. (Though the --cache options cannot use this, because they use KB
as base unit.)
It was actually already implemented as ta_dup_ptrtype(), but that seems
like a clunky name. Also we still use the talloc_ names throughout the
source, and I'd rather use an old name instead of mixing inconsistent
naming conventions.
If you play a video with an external audio track, and do backwards
keyframe seeks, then audio can be missing. This is because a backwards
seek can end up way before the seek target (this is just how this seek
mode works). The audio file will be seeked at the correct seek target
(since audio usually has a much higher seek granularity), which results
in silence being played until the video reaches the originally intended
seek target.
There was a hack in audio.c to deal with this. Replace it with a
different hack. The new hack probably works about as well as the old
hack, except it doesn't add weird crap to the audio resync path (which
is some of the worst code here, so this is some nice preparation for
rewriting it). As a more practical advantage, it doesn't discard the
audio demuxer packet cache. The old code did, which probably ruined
seeking in youtube DASH streams.
A non-hacky solution would be handling external files in the demuxer
layer. Then chaining the seeks would be pretty easy. But we're pretty
far from that, because it would either require intrusive changes to the
demuxer layer, or wouldn't be flexible enough to load/unload external
files at runtime. Maybe later.
Similar to 1eec7d2315, but for the beginning of the stream (named BOF in
this commit).
We can know this only if demuxing actually started from the beginning.
If there is a seek to the beginning (even if you use --start=-1000), we
don't know in general whether the demuxer truly returns the start of the
file. We could probably make a heuristic with assuming that this is what
happens if the seek target is before the start time or so, but this is
not included in this commit.
libavformat's cover art hack (aka attached pictures) breaks the ability
of the demuxer cache to keep multiple seek ranges. This happens because
the cover art packet has neither position nor timestamp, and libavformat
gives us the packet even though we intended to drop it.
The cover art hack works by adding the cover art packet to the read
packet stream once when demuxing starts (or after seeks). mpv treats
cover art in a similar way internally, but we have to compensate for
libavformat's shortcomings, and add the cover art packet ourselves when
we need it. So we don't want libavformat to return the packet.
We normally prevent this in demux_lavf.c/select_tracks() and explicitly
disable cover art streams. (We add it in dequeue_packet() instead.) But
libavformat will actually add the cover art packet even if we disable
the cover art stream, because it adds it at initialization time, and
does not bother to check again in av_read_frame() (apparently). The
packet is actually read, and upsets the demuxer cache logic. In
addition, this also means we probably decoded the cover art picture
twice in some situations.
Fix this by explicitly checking/discarding this in yet another place.
(Screw this hack...)
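The check itself is tiny (sketch; the real code uses mpv's stream/packet types, and the helper here is a stand-in):
    #include <stdbool.h>
    #include <stddef.h>
    struct demux_packet;
    struct sh_stream {
        bool attached_picture;  // track is a cover art / attached picture stream
    };
    void free_demux_packet(struct demux_packet *pkt);   // stand-in for the real free
    // In the libavformat packet-read path: if we get a packet for a cover art
    // stream anyway, drop it instead of letting it enter the demuxer cache.
    static bool drop_cover_art_packet(struct sh_stream *sh, struct demux_packet *pkt)
    {
        if (sh && sh->attached_picture) {
            free_demux_packet(pkt);
            return true;    // discarded; mpv adds the cover art packet itself
        }
        return false;
    }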
The impact was that you couldn't exactly seek to the join point with a
keyframe seek, even though there was a keyframe. This commit fixes it by
preserving the necessary metadata that got lost on cached range joining.
This is so absurdly obscure that it gets a longer code comment.
This warning was printed when the demuxer cache tried to join two
adjacent seek ranges, but failed if the last keyframe in the second
range was within the (overlapping) first range. This is a weird corner
case that would probably not be worth supporting.
So this code just printed a warning and discarded the second range. As
it turns out, this can happen relatively often if you seek a lot, and
the seek ranges are very tiny (such as consisting of only 1 keyframe).
Dropping the second range in these cases is OK and probably cheaper than
trying to actually join them. Change the warning to verbose level.
(It seems this could actually be "supported", because if keyframe_latest
is not set, there will be no other keyframes, so it could just be unset,
with the exception that q1->keyframe_latest in the code below must not
be overwritten. But still, too much trouble for a special case that
likely does not matter, and it would have to be tested too.)
This means if the user tries to seek past EOF, and we know EOF was seen
already, then use a cached seek, instead of triggering a low level seek.
This requires some annoying tracking, but seems pretty simple otherwise.
One advantage of doing this is that if the user tries to do this kind of
seek, there's no unnecessary waiting for a reaction by network (and in
most cases, redundant downloading of data just to discard it again).
Another is that this avoids creating overlapping seek ranges: previously, the
low level seek would naturally create a new range. Then it would read and add
data from the end of the stream due to the low level demuxer not being able to
seek to the target and selecting the last seek point before the end of the
stream. Consequently, this new range would overlap with the previous cached
range. But since the cache joining code is written such that you join the
current range with the _next_ range (instead of the previous, as would be
needed in this case), the overlapping ranges were left alone, until seeking back
to the previous range. That was ugly, sort of harmless, and could happen in
other cases, but this avoidable case was pretty easy to trigger.
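The tracking itself is simple; the decision amounts to roughly this (sketch with made-up field names, NOPTS standing in for MP_NOPTS_VALUE):
    #include <stdbool.h>
    #define NOPTS (-1e18)
    struct cache_state {
        bool eof_seen;          // the low level demuxer has hit EOF at some point
        double last_eof_pts;    // highest timestamp known when that happened
    };
    // If the seek target is at/past the known end of the stream, serve it from
    // the cache (clamped to the end) instead of doing a low level/network seek.
    static bool seek_past_eof_in_cache(struct cache_state *in, double target)
    {
        return in->eof_seen && in->last_eof_pts != NOPTS &&
               target >= in->last_eof_pts;
    }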
Export them as explicitly undocumented debugging fields for the
"demuxer-cache-state" property.
Should be somewhat helpful to debug "wtf is the demuxer doing"
situations better, especially when seeking. It also becomes visible how
long the demuxer is blocked on an "old" seek when you keep seeking while
the first seek hasn't finished.
update_seek_ranges() has some special code that attempts to correctly
adjust seek ranges for subtitle tracks. (Subtitles are a nightmare for
seek ranges, because they are sparse, so using the packet list is not
enough to reliably determine the valid cached range.)
This had code like this inside the modified if statement:
range->seek_start = MP_PTS_MAX(range->seek_start, <something>);
If seek_start is NOPTS, then seek_start will be set to <something>,
breaking some other code that checks seek_start for NOPTS to see if it's
empty. Fix this by explicitly checking whether seek_start is NOPTS
before adjusting it.
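In other words, the fix is essentially:
    if (range->seek_start != MP_NOPTS_VALUE)
        range->seek_start = MP_PTS_MAX(range->seek_start, <something>);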
The crash happened in prune_old_packets() because the range was marked
as non-empty, yet there was no packet in it to prune. This was with
files with muxed subtitles, when seeking back to the start. This should
not happen anymore with the change. Also add an assert() to
check_queue_consistency() that checks for this specific case.
There's still some mess. In theory, subtitle tracks could be completely
empty, yet their seek range would span the entire file. Seek range
tracking of subtitle files is slightly broken (even before this change).
Some of this should probably be revisited later, including not just
using seek_start to determine whether a seek range should be pruned due
to being empty.
This will help with things like livestreams.
As a minor detail, subtitles are excluded, because they sometimes have
"unused" events after video and audio ends. To avoid this annoying
corner case, just ignore them.
Before this change and before the seekable stream cache became a thing,
we could possibly seek using the stream cache. But we couldn't know
whether the seek would succeed. We knew the available byte range, but
could in general not tell whether a demuxer would stay within the range
when trying to seek to a specific time position. We preferred to have
safe defaults, so seeking in streams that were detected as unseekable
was not honored. We allowed overriding this via --force-seekable=yes,
in which case it depended on your luck whether the seek would work, or
the player crapped its pants.
With the demuxer packet cache, we can tell exactly whether a seek will
work (at least if there's only 1 seek range). We can just let seeks go
through. Everything to allow this is already in place, and this commit
just moves around some minor things.
Note that the demux_seek() return value was not used before, because low
level (i.e. network level) seeks are usually asynchronous, and if they
fail, the state is pretty much undefined. We simply repurpose the return
value to signal whether cache seeking worked. If it didn't, we can just
resume playback normally, because demuxing continues unaffected, and no
decoders are reset.
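On the player side, the use of the repurposed return value looks roughly like this (sketch; the flags and helper names are simplified/made up):
    #include <stdbool.h>
    struct demuxer;
    // Now returns whether the seek could be satisfied at all (e.g. by the cache).
    bool demux_seek(struct demuxer *d, double seek_pts, int flags);
    void reset_decoders_and_resync(void);   // stand-in for the normal seek path
    static void player_try_seek(struct demuxer *d, double target)
    {
        if (demux_seek(d, target, 0)) {
            // Cache seek worked: proceed as with any other successful seek.
            reset_decoders_and_resync();
        } else {
            // Unseekable stream and target not cached: nothing was touched,
            // demuxing continues unaffected, so just resume normal playback.
        }
    }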
This should be particularly helpful to people who for some reason stream
data into stdin via streamlink and such.
This log line tells us why the demuxer is trying to read more, which is
useful when debugging queue overflows. Probably barely useful, but I
think keeping that flag separately also makes the code slightly easier
to understand.
This fixes weird behavior in the following case:
- open a file
- make sure the max. demuxer forward cache is smaller than the
file's video track
- make sure the max. readahead duration is larger than the file's
duration
- disable the audio track
- seek to the beginning of the file
- once the cache has filled, enable the audio track
- a queue overflow warning should appear
(- looking at the seek ranges is also interesting)
The queue overflow warning happens because the packet queue for the
video track will use up the full quota set by --demuxer-max-bytes. When
the audio track is enabled, reading an audio packet would technically
overflow the packet cache by the size of whatever packet is read next.
This means the demuxer signals EOF to the decoder, and once playback has
consumed enough video packets so that audio packets can be read again,
the decoder resumes from EOF. This interacts badly with A/V
synchronization and the whole thing can randomly crap itself until audio
has fully recovered.
We didn't care about this so far, but we want to raise the readahead
duration to something very high, so that the demuxer cache is fully
used. This means this case can be hit quite quickly by switching audio
or subtitle tracks, and is not really an obscure corner case anymore.
Fix this by always discarding the entire cache on track switches. Since the cache can't be used
anyway until the newly selected track has been read, this is not much of
a disadvantage. The only thing that could be brought up is that
unselecting the track again could resume operation normally. (Maybe this
would be useful if network died completely without chance of recovery.
Then you could watch the already buffered video anyway by deselecting
the audio track again.) But given the headaches, this seems like the
better solution.
Unfortunately this requires adding new strange fields and strangely
fragmenting state management functions again. I'm sure whoever works on
this in the future will hate me. Currently it seems like the lesser
evil, and much simpler and robust than the other potential solutions.
In case this needs to be revisited, here is a reminder for readers from
the future what alternative solutions were considered, without those
disadvantages:
A first attempted solution allowed the demuxer to buffer some additional
packets on track switching. This would allow it to read enough data to
feed the decoder at least. But it was still awkward, as it didn't allow
the demuxer to continue prefetching the newly selected track. It also
barely worked, because you could make the forward buffer "over full" by
seeking back with seekable cache enabled, and then it couldn't read
packets anyway.
As alternative solution, we could always demux and cache all tracks,
even if they're deselected. This would also not require a network-level
seek for the "refresh" logic (it's the thing that lets the video decoder
continue as if nothing happened, while actually seeking back in the
stream to get the missing audio packets, in the case of enabling a
previously disabled audio track). But it would also possibly waste
network and memory resources, depending on what the user actually wants.
A second solution would just account the queue sizes for each stream
separately. We could freely fill up the audio packet queue, even if the
video queue is full. Since the demuxer API returns interleaved packets
and doesn't let you predict which packet type comes next, this is not as
simple as it sounds, but it'd probably tie in nicely with the "refresh"
logic.
A third solution would be removing buffered video packets from the end
of the packet queue. Since the "refresh" logic gets these anyway, there
is no reason to keep them if they prevent the audio packet queue from
catching up with the video one. But this would require additional logic
and would interact badly with a bunch of other corner cases. And as far as
the code goes, it's rather complex, because all the logic is written
with FIFO behavior in mind (including the fact that the packet queue is
a singly linked list with no backwards links, making removal from the
end harder).
It seems like there's nothing stopping sub-demuxers from keeping
packets in the cache, even if it's completely pointless. The top-most
demuxer (demux_timeline) already takes care of caching, so sub-demuxers
only waste space and time with this.
Add a function that can disable the packet cache even at runtime and
after packets are read. (It's not clear whether it really can happen
that packets are read before demux_timeline gets the sub-demuxers, but
there's no reason to make it too fragile.) Call it on all sub-demuxers.
For this to work, it seems we have to move the code for setting the
seekable_cache flag to before demux_timeline is potentially initialized,
because otherwise the cache would be reenabled if the demuxer triggering
timeline support is a timeline segment itself (e.g. ordered chapters).
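The new call is conceptually very small (sketch; the internals are simplified and the exact pruning behavior differs):
    #include <stdbool.h>
    struct demux_internal {
        bool seekable_cache;    // whether packets are retained for cache seeking
    };
    struct demuxer { struct demux_internal *in; };
    // Safe to call even after packets were already read: it only stops the
    // cache from retaining anything further; whatever was already kept gets
    // dropped by the normal cache pruning later.
    void demux_disable_cache(struct demuxer *d)
    {
        d->in->seekable_cache = false;
    }
    // demux_timeline then calls this on every sub-demuxer it wraps, e.g.:
    //     for (int n = 0; n < num_sources; n++)
    //         demux_disable_cache(sources[n]);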
This fixes missing audio when cycling through audio tracks with anything
that uses nested demuxers, such as demux_timeline, which is used for
EDL, --merge-files, ordered chapters, and youtube-dl pseudo DASH
support. When this bug happened, reenabling an audio track would lead to
silence for the duration of the readahead amount.
The underlying reason is the incorrectly updated buffered range on track
switch. It accidentally included the amount covered by the deselected
stream. But the cause of the observed effect was that demux_timeline
issued a refresh seek to the underlying slave demuxer, which in turn
thought it could do a cache seek, because the seek range still included
everything.
update_stream_selection_state() calls update_seek_ranges() to update the
seek ranges after a track switch. When reenabling the track, ds->eager
was set to false during update_seek_ranges(), which made it think the
stream was sparse, and thus it didn't restrict the current seek range
(making later code think everything was buffered). Fix this by moving
some code, so we first update the ds->eager flag, then the seek ranges.
Also verbose log the low level stream selection calls.
Always display the duration as "unknown" if the duration is not known. Also
fix that at least demux_lavf reported unknown duration as 0 (fix by
setting the default to unknown in demux.c).
Remove the dumb _u formatter function, and use a different approach to
avoiding displaying "unknown" as playback time on playback start (set
last_seek_pts for that).