These macros explicitly honor MP_NOPTS_VALUE, instead of implicitly
relying on the fact that this value is the lowest allowed value.
In addition, this changes one case to use MP_NOPTS_VALUE instead of
INFINITY, also a cosmetic change.
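Roughly, the idea looks like this (a sketch of the macro shape, not
necessarily the literal code):

    /* Treat MP_NOPTS_VALUE as "unset" explicitly, instead of relying on
     * it being the lowest allowed value. */
    #define MP_PTS_MIN(a, b)                    \
        ((a) == MP_NOPTS_VALUE ? (b) :          \
         (b) == MP_NOPTS_VALUE ? (a) :          \
         ((a) < (b) ? (a) : (b)))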
Make most of the demuxer options runtime-changeable. This includes the
cache options and stream recording. The manpage documents some of the
possibly weird issues related to this.
In particular, the disk cache isn't shuffled around if the setting
changes at runtime.
Or at least I hope it's theoretical. This function is supposed to unset
any old listeners for the given cache, and the code works only if
there's at most 1. Add a defensive break to avoid UB if there's more than
one, and add an assert() to check the assumption that there's at most
one.
The added comment is unrelated.
Although this is not useful in general, it makes --stream-record work
with a certain video streaming service by a large dystopian company.
In the general case, this fails because normal muxing can, quite
obviously, not handle the segmented metadata in the packets. (There
isn't even a file format which could handle these, except possibly mp4.)
On the other hand, ytdl merely uses timeline/EDL to emulate DASH
streaming (unfortunately), which does not use the segmented stuff, and
stream recording will actually work.
If this is used, the runtime expects that wmain() instead of main() is
defined. This caused me severe problems in a certain now irrelevant
case. I think it's a good idea to avoid this special case.
We can just use main() and call GetCommandLineW() instead. This function
returns a single string, so use CommandLineToArgvW() to split it, and
hope it has the same semantics. Should this ever return NULL, hope that
it leaves argc at 0.
Untested, I think.
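A minimal sketch of what that looks like (illustrative only, not the
actual osdep code):

    #include <windows.h>
    #include <shellapi.h>

    int main(int argc_, char **argv_)
    {
        int argc = 0;
        wchar_t **argv = CommandLineToArgvW(GetCommandLineW(), &argc);
        if (!argv)
            argc = 0; // hope: failure leaves us with an empty argument list
        /* ... convert argv[0..argc-1] from UTF-16 and call the normal
         * entry point ... */
        LocalFree(argv);
        return 0;
    }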
_LARGEFILE_SOURCE and _LARGEFILE64_SOURCE are not required for 64 bit
off_t, only _FILE_OFFSET_BITS.
See somewhere on: https://wiki.musl-libc.org/faq.html
I didn't test this anywhere except 64 bit Linux. It's probably a good
idea to test on Windows and all Android versions.
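A minimal compile-time check of the claim (an assumed sketch, not part
of the build system):

    #define _FILE_OFFSET_BITS 64   // the only define that should be needed
    #include <sys/types.h>
    #include <assert.h>

    static_assert(sizeof(off_t) == 8, "off_t is not 64 bit");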
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.
It seems much more promising to have something like this:
https://github.com/vapoursynth/vapoursynth/issues/386
This would solve the build distribution problem by relaxing the Python
dependency, and/or allow a Lua backend to be included without pain.
This filter wasn't referenced anywhere and thus was dead code. It should
have been in the audio filter list in user_filters.c. This was intended
as a compatibility wrapper (to avoid breaking old command lines and config
files), and has no real use. Apparently I forgot to add it to the filter
list (did I even test this shit?), and so it was rotting around for 1.5
years doing nothing (just like myself).
Note that users can just use the libavfilter provided filter to force
resampling, just that it has a different name and different options.
There's also af_format to force inserting auto conversion through the
internal f_swresample filter.
Useless garbage.
This was once added to test whether vdpau presentation feedback could be
used. Results were always unsatisfactory, and now vdpau is dead.
The readahead time should be interesting for latency vs. underruns
(which idiot protocols like HLS suffer from).
The total byte usage is less interesting than I hoped; maybe the
frequency at which it samples should be reduced. (Kind of dumb - you
want high frequency for the readahead field, but much lower for byte
usage.)
Of course, the code was copy&pasted from the DS ratio/jitter stuff. Some
of the choices may not make any sense for the new code.
It prints "Unexpected end of file (no clusters found)" when opening a
webm init fragment. The warning is correct, but unwanted in this case.
Add tons of kludges to avoid it.
(Actually it prints that twice, for audio and video each.)
Also, suppress another warning about a seek head entry that points
exactly to the end of the file. This is a MATROSKA_ID_CUES, which is
harmless, and, very strangely, doesn't point at any cues when you
concatenate the init fragment with a media fragment. No idea what that
crap is supposed to be.
Retarded webshit streaming protocols (well, DASH) chop a stream into
small fragments, and move unchanging header parts to an "init" fragment
to save some bytes (in the case at hand about 300 bytes for each
fragment that is 100KB-200KB, sure was worth it, fucking idiots).
Since mpv uses an even more retarded hack to inefficiently emulate DASH
through EDL, it opens a new demuxer for every fragment. Thus the
fragment needs to be virtually concatenated with the init fragment. (To
be fair, I'm not sure whether the alternative, reusing the demuxer and
letting it see a stream of byte-wise concatenated fragments, would
actually be saner.)
demux_lavc.c contained a hack for this. Unfortunately, a certain shitty
streaming site by an evil company, that will bestow dystopia upon us soon
enough, sometimes serves webm based DASH instead of the expected mp4
DASH. And libavformat's mkv demuxer can't handle the init fragment, or
rejects it for some reason. Since I'd rather eat mushrooms grown in
Chernobyl than debug, hack, or (god no) contribute to FFmpeg, and since
Chernobyl is so far away, make it work
with our builtin mkv demuxer instead.
This is not hard. We just need to copy the hack in demux_lavf.c to
demux_mkv.c. Since I'm not _that_ much of a dumbfuck as to actually do
this, remove the shitty gross demux_lavf.c hack, and replace it by a
slightly less bad generic implementation (stream_concat.c from the
previous commit), and use it on all demuxers. Although this requires
much more code, this frees demux_lavf.c from a hack, and doesn't require
adding a duplicated one to demux_mkv.c, so to the naive eye this seems
to be a much better outcome.
Regarding the code: for some reason, stream_memory_open() is never
supposed to fail, while stream_concat_open() can fail in extremely
obscure situations (though currently not in this case), but we handle
its failure anyway.
Yep.
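The generic approach looks roughly like this (the signatures here are
assumptions based on the description above, not the exact prototypes):

    /* Wrap the init fragment in a memory stream, then concatenate it with
     * the media fragment's stream before the demuxer sees it. */
    static struct stream *concat_with_init(struct mpv_global *global,
                                           void *init_data, size_t init_size,
                                           struct stream *media)
    {
        struct stream *init = stream_memory_open(global, init_data, init_size);
        struct stream *parts[2] = {init, media};
        struct stream *s = stream_concat_open(global, parts, 2);
        if (!s) {
            // extremely obscure failure case; handled anyway
            free_stream(init);
            return NULL;
        }
        return s;
    }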
Timelines (demux_timeline, for EDL and mkv ordered chapters) are a
mess, because they're the only nested demuxer case. Part of the mess
comes from
shared struct stream pointers. This makes no sense, because the wrapper
(demux_timeline) doesn't have any business setting it.
Try to lessen it by not passing down streams. Instead, pass down NULL.
This prevents unintended interference, and tightens the ownership rules.
Now a demuxer always owns its stream.
On the other hand, demuxer->stream can now be NULL. This was never the
case before, and consequently there will be new bugs. At least they will
be spotted, because they've been bugs before.
struct stream is also used to access stream properties (such as whether
something is considered a network stream). Most of these have been
mirrored in struct demuxer (because the frontend has been forbidden to
access struct stream because of threading). But struct stream was still
used during initialization, so introduce an awkward struct
parent_stream_info, which
unifies these.
Commit e0419fb181 changed demux_is_network_cached() to use
demuxer->stream->streaming instead of demuxer->is_network. To enable
timeline stuff to use the cache anyway, change it so that both flags can
contribute to it. The stream NULL-check is obviously due to changes in
this commit.
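Roughly what the check becomes (a sketch, not the literal code):

    bool demux_is_network_cached(demuxer_t *demuxer)
    {
        // Either flag is enough now, and demuxer->stream may legitimately
        // be NULL for nested demuxers after this commit.
        return demuxer->is_network ||
               (demuxer->stream && demuxer->stream->streaming);
    }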
Instead of having to rely on the protocol matching, make a function that
creates a stream from a stream_info_t directly. Instead of going through
a weird indirection with STREAM_CTRL, add a direct argument for non-text
arguments to the open callback. Instead of creating a weird dummy
mpv_global, just pass an existing one from all callers. (The latter one
is just an artifact from the past, where mpv_global wasn't available
everywhere.)
Actually I just wanted a function that creates a stream without any of
that bullshit. This goal was slightly missed, since you still need this
heavy "constructor" just to setup a shitty struct with some shitty
callbacks.
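The resulting entry point has roughly this shape (names and argument
order are assumptions, not the exact prototype):

    struct stream *stream_create_instance(const stream_info_t *sinfo,
                                          const char *url,
                                          int flags,
                                          void *open_arg,  // non-text argument for the open callback
                                          struct mpv_global *global);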
Raise it from 8KB to 512KB.
Do this because ytdl_hook.lua generated a 40KB EDL file (from 80KB
youtube-dl JSON output), and putting it into a .m3u file for easier
debugging failed due to the size limit.
I think I repeated this inline in some places (or maybe not), and some
experimental but discarded code used it. Add it anyway, maybe it'll be
useful. Or it'll give someone a chance to get a contribution into mpv by
removing an unused macro.
The previous commit broke audio playback (maybe this is what 4. was
about?). But it wasn't the fault of the commit; it just exposed
pre-existing issues.
If the packet queue search couldn't get all packets, it checked
queue->is_bof to see whether there could be previous packets. But
obviously, is_bof can be set, even if the search start packet wasn't the
first one.
This was especially observable with audio, because audio by default
needs preroll packets, and plays artifacts if they're missing.
Fix by using the BOF playback condition for this purpose too.
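Illustratively, a check of roughly this shape is what's needed (names
are made up; this is not the literal fix):

    static bool no_earlier_packets(struct demux_queue *queue,
                                   struct demux_packet *search_start)
    {
        // Having seen BOF is not enough; the search must also have started
        // at the first packet of the queue.
        return queue->is_bof && search_start == queue->head;
    }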
In theory, backward demuxing does way too much work by doing a full
cache seek every time you need to step back through a packet queue. In
theory, it would be exceedingly more efficient to just iterate backwards
through the queue, but this is not possible because I'm too stingy to
add 8 bytes per packet for backlinks. (In theory, you could probably
come up with some sort of deque, that'd allow efficient iteration into
any direction, but other requirements make this tricky, and I'm
currently too dumb/lazy to do this. For example, the queue can grow to
millions of elements, all while packet pointers need to stay valid.)
Another possibility is to "locally" seek the queue. This still has less
overhead than a full seek.
Also, it just so happens that, as a side effect, this avoids performing
range merging, which commit f4b0e7e942 "broke". That wasn't a bug at
all, but since range joining is relatively slow, avoiding it is good.
This is really just a coincidental side effect, I'm not even quite sure
why it happens this way.
There are 4 ugly things about this change:
1. To get a keyframe "before" a certain one, we recompute the target
PTS, and then subtract 0.001 as arbitrary number to "fudge" it. This
isn't the first place where this is done, and hey, it wasn't my damn
idea that MPlayer should use floats for timestamps. (At first, it even
used 32 bit timestamps.)
2. This is the first time reader_head is reset to an earlier position
outside of the seek code. This might cause conceptual problems since
this code is now "duplicated".
3. In theory, find_backward_restart_pos() needs to be run again after
the backstep. Normally, the seek code calls it explicitly. We could call
it right in the new code, but then the damn function would be recursive.
We could shuffle the code around to make it a loop, but even then
there'd be an off chance of running into an unexpected corner case (aka
subtle bug), where it would loop forever. To avoid refactoring the code
and having to think too hard about it, make it deferred - add some new
state and the check_backward_seek() function for this. Great, even more
subtle mutable state for this backwards shit.
4. I forgot this one, but I can assure you, it's bad.
Without doubt someone will have to clean up this slightly in the future
(or rip it out), but it won't be me.
This code tries to determine the "current" position, which is used as
base for the seek target when it needs to seek back more (the point is
to prevent seeking back too far). But compute_keyframe_times() almost
computes the same thing, so use that. Unfortunately needs a forward
declaration.
("Almost", because it differs in some details that should not really
matter.)
Subtitle packets with a timestamp before the seek target may overlap
with the seek target anyway. This is why this subtitle preroll crap
exists: it needs to return packets before the seek target to ensure that
the subtitle is displayed at the seek target.
This didn't always work. Maybe it's a regression, but it must have been
an old one. The breakage is triggered by a heuristic that is supposed
to prevent excessive queuing of packets in garbage files (this heuristic
apparently
became immediately necessary when this preroll mechanism was
implemented).
If a video keyframe packet was found, but no audio packet yet, then
subtitle_preroll was set to 0, and since a_skip_to_keyframe was still 0,
the subtitle packet was discarded. The dumb thing is that subtitle and
video seeking is finished at this point, so the preroll crap should not
be applied at all.
Fix this by moving the preroll overflow code into the block that handles
preroll.
Normally I use the OSC like this: not at all, but have a key binding
that does "cycle osc" to show it. And in that case, I don't really want
it to overlap the damn video.
I could use the zoom/pan options to move the video out of the way, but
this is also sort of annoying. Likewise, you could write a script or so
which does this automatically if the OSC appears, but that's still
annoying, and computing values for these options such that the video is
moved correctly is tricky.
So I added a bunch of options that set explicit video borders (previous
commit), and an option for the OSC to use them (this commit).
Disabled by default, since I'm afraid this is too awkward and
unpolished, especially with OSC default settings.
I'm also using "osc-visibility=always". Effectively, making the OSC
appear will box the video, and making it disappear (by unloading
osc.lua) will restore the video back to normal.
The semantics are a bit questionable. This is done for the OSC (next
commit), and a comment added to the manpage explicitly states this.
Meaning this is probably garbage and needs to be revisited when the OSC
changes and/or
someone wants to use this margin feature for something else.
Not sure about the subtitle thing. It's imaginable that someone uses
these options to create empty borders for subtitles on the bottom, so
subtitles should be located there. On the other hand, this gives a
rather unpolished user experience when using the (later added) OSC
feature to not overlap with the video. There's not much of a point if
the OSC still overlaps the video. However, I'm too lazy to think about
this, so it stays like it is.
--video-margin-ratio-left=0.2 --video-margin-ratio-right=0.9 (added in
the next commit) will set f_w to inf, resulting in some garbage
being propagated. Later, the OSD margins are computed from values before
various sanity clamping is applied, which makes libass suffer from
bullshit values.
I'm fairly sure it's OK and more correct to compute the OSD margins
using the later values, but not 100% sure.
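The arithmetic behind the failure, roughly (variable names are made up,
not the actual ones):

    #include <math.h>

    // --video-margin-ratio-left=0.2 --video-margin-ratio-right=0.9:
    double remaining = 1.0 - 0.2 - 0.9;        // -0.1: the margins overlap
    double w_ratio   = fmax(0.0, remaining);   // clamped to 0
    double f_w       = 1.0 / w_ratio;          // inf, which then propagates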
demux_packet.next should not be used outside of demux.c, and in this
case it's a packet that was just passed to demux.c from the outside.
demux_packet.stream is already set by the demuxer, and this is assured
by the add_packet_locked() caller.
We don't care much about this case, because backward playback can fail
terribly without a good way to detect it, so this was fine.
However, this froze in certain situations. Reading from a subtitle file
for which backward demuxing failed could make it get stuck in
demux_read_packet_async() in unthreaded mode. (That we don't support
backwards subtitle decoding anyway doesn't matter for this.)
So aggressively disable backward demuxing to prevent worse in these
situations. The behavior will still be awful, because the frontend is
still in backwards playback mode, but at least it won't freeze.
Somewhat similar to the old --cache-file, except for the demuxer cache.
Instead of keeping packet data in memory, it's written to disk and read
back when needed.
The idea is to reduce main memory usage, while allowing fast seeking in
large cached network streams (especially live streams). Keeping the
packet metadata on disk would be rather hard (would use mmap or so, or
rewrite the entire demux.c packet queue handling), and since it's
relatively small, just keep it in memory.
Also for simplicity, the disk cache is append-only. If you're watching
really long livestreams, and need pruning, you're probably out of luck.
This still could be improved by trying to free unused blocks with
fallocate(), but since we're writing multiple streams in an interleaved
manner, this is slightly hard.
Some rather gross ugliness in packet.h: we want to store the file
position of the cached data somewhere, but on 32 bit architectures, we
don't have any usable 64 bit members for this, just the buf/len fields,
which add up to 64 bit - so the shitty union aliases this memory.
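Rough sketch of the aliasing (not the literal packet.h layout):

    #include <stdint.h>

    union {
        struct {
            uint8_t *buf;            // 32 bit pointer on 32 bit targets
            uint32_t len;            // 32 bit length
        } mem;                       // used for in-memory packet data
        uint64_t cached_data_pos;    // file position in the disk cache
    } storage;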
Error paths untested. Side data (the complicated part of trying to
serialize ffmpeg packets) untested.
Stream recording had to be adjusted. Some minor details change due to
this, but probably nothing important.
The change in attempt_range_joining() is because packets in cache
have no valid len field. It was a useful check (heuristically
finding broken cases), but not a necessary one.
Various other approaches were tried. It would be interesting to list
them and to mention the pros and cons, but I don't feel like it.
Supposed to follow the standard function.
The standard function is not standard, but a GNU extension. Adding some
ifdef mess is pointless too - it has no advantages other than having a
mess, and not spotting implementation bugs in the emulation due to
running it only on "obscure" platforms (like Windows, so most computers
actually, except the developer's platform).
There is mkstemp(), which at least is in POSIX 2008. But it's 100%
useless, except in some obscure cases: it doesn't set O_CLOEXEC, nor
can you pass that flag to it. Without O_CLOEXEC, we'd leak the temporary
file to
all child processes. (The fact that the file, which is expected to reach
double or triple digit GB sizes, will be deleted only once all
processes unreference the FD, makes this sort of a big deal. You could
ftruncate() it, but that doesn't fix all the other problems.)
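For reference, a minimal sketch of an mkostemp()-style emulation (not
the actual wrapper, just the idea - extra open() flags, O_CLOEXEC in
particular, are applied atomically at creation time):

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>

    static int sketch_mkostemp(char *tmpl, int flags)
    {
        size_t len = strlen(tmpl);
        if (len < 6 || strcmp(tmpl + len - 6, "XXXXXX") != 0)
            return -1;
        static const char chars[] = "abcdefghijklmnopqrstuvwxyz0123456789";
        for (int attempt = 0; attempt < 100; attempt++) {
            for (size_t i = len - 6; i < len; i++)
                tmpl[i] = chars[rand() % (sizeof(chars) - 1)];
            int fd = open(tmpl, O_RDWR | O_CREAT | O_EXCL | flags, 0600);
            if (fd >= 0)
                return fd;   // the file is created with the flags already set
        }
        return -1;
    }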
Why did POSIX standardize mkstemp() and O_CLOEXEC apparently at the same
time, but provided no way to pass O_CLOEXEC to mkstemp()? With the
introduction of O_CLOEXEC, they acknowledged that there's a need to
atomically set the FD_CLOEXEC flag when creating file descriptors.
(FD_CLOEXEC was standard before that, but setting it with fcntl() is
racy.) You're much more likely to need a temp file that is CLOEXEC
rather than the opposite, and even if they were somehow opposed to
CLOEXEC by default (such as for compat. reasons), surely POSIX could
have standardized mkostemp() too or instead.
And then there's the fact that this whole O_CLOEXEC mess is stupid.
Surely there would have been a better way to handle this, instead of
requiring adding O_CLOEXEC to almost ALL instances of open() in all code
that has been written ever. The justification for this is that the
historic default was wrong, and you can't change it (e.g. this won't
work: changing the behavior of exec() to not inherit FDs to the child
process, unless a hypothetical O_KEEP_EXEC flag is set).
But on the other hand, surely you could have introduced an exec()
variant which does close all FDs, except a whitelist of FDs passed to
it. Let's call it execve2(). In fact, I'm going to argue that exec()
call sites are the most aware of whether (and which) FDs to inherit.
Some programs even tried to explicitly iterate over all opened FDs and
explicitly close "unwanted" FDs (which of course was problematic for
other reasons), and such an execve2() call would have been the ideal
solution.
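The hypothetical interface argued for here (purely illustrative, it does
not exist anywhere):

    int execve2(const char *path, char *const argv[], char *const envp[],
                const int *inherit_fds, size_t num_inherit_fds);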
Maybe this proposed solution would have had problems too. But surely
revisiting and reviewing every exec*() call would have been simpler than
reviewing every open() call. And more importantly, simpler than having
to extend every damn library function that either calls open() or
creates FDs in some other way, like mkstemp().
What arguments are there going to be against this? That there will be
library code that can't keep working correctly with processes that use
the "old" exec? Well, what about all my legacy library code that uses
open() incorrectly, and that will break no matter what?
Well, I'm not going to claim that I can come up with better solutions
than POSIX (generally or in this case), but this situation is ABSOLUTELY
ATROCIOUS. It makes win32 programming look attractive compared to POSIX,
that standard pandering to dead people from the past. (Note: not trying
to insult dead people.)
I'm not sure what POSIX is even doing. Anything useful? Doesn't look
like it to me. Are they paid? Why? They didn't even fix the locale mess,
nor do they intend to. I bet they're proud of discussing compatibility
to 70s code day in and day out without ever producing anything useful.
What a load of crap. They've seriously got to do better than this.
Oh, and my wrapper is probably buggy. Fortunately that doesn't matter.
Also I'm dumping this into io.h. Originally, io.h was just supposed to
replace broken implementations of standard functions in MinGW (and then
in Android), but whatever, it can just serve as a dumping ground for
shit code.
Backwards demuxing usually seeks back by a "random" amount (set by
a user option) when it needs new preceding packets. It turns out a past
change made these backwards seek amounts add up when it didn't need to
(i.e. subtracting the amount from the seek pos without properly
resetting it), which could possibly slow down playback as it went on.
The reason for this was that back_seek_pos was set for every stream on
every seek. This made the reset not affect other streams (in particular
streams which weren't used and never were reset, or which didn't reset
that often). But as the commit adding it showed, this is needed only to
set the initial position. So do that.
Fixes: "demux: fix initial backward demuxing state in some cases"
When packet appending sets the start of the range, it adjusts the range
by seek_preroll. Do this when packets are pruned from the start of the
range too.
(Yeah, seek_preroll handling is probably broken in some other cases. It
was halfhearted to begin with.)
The main thing this commit does is removing demux_packet.kf_seek_pts. It
gets rid of 8 bytes per packet. Which doesn't matter, but whatever.
This field was involved with much of seek range updating and pruning,
because it tracked the canonical seek PTS (i.e. start PTS) of a packet
range. We have to deal with timestamp reordering, and assume the start
PTS is the lowest PTS across all packets (not necessarily just the first
packet). So knowing this PTS requires looping over all packets of a
range (no, the demuxer isn't going to tell us, that would be too sane).
Having this as packet field was perfectly fine. I'm just removing it
because I started hating extra packet fields recently.
Before this commit, this value was cached in the kf_seek_pts field (and
computed "incrementally" when adding packets). This commit computes the
value on demand (compute_keyframe_times()) by iterating over the packet
list. There is some similarity with the state before 10d0963d85,
where I introduced the kf_seek_pts field - maybe I'm just moving in
circles. The commit message claims something about quadratic complexity,
but if the code before that had this problem, this new commit doesn't
reintroduce it, at least. (See below.)
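What such an on-demand computation does, roughly (assumed signature and
field names, not the literal demux.c code):

    /* Scan one keyframe range and return the lowest and highest timestamps
     * found in it - due to reordering, the start PTS is not necessarily on
     * the first packet. */
    static void compute_keyframe_times(struct demux_packet *pkt,
                                       double *out_start, double *out_end)
    {
        double start = MP_NOPTS_VALUE, end = MP_NOPTS_VALUE;
        for (struct demux_packet *p = pkt; p; p = p->next) {
            if (p != pkt && p->keyframe)
                break;                       // the next range starts here
            double ts = p->pts == MP_NOPTS_VALUE ? p->dts : p->pts;
            if (ts == MP_NOPTS_VALUE)
                continue;
            if (start == MP_NOPTS_VALUE || ts < start)
                start = ts;
            if (end == MP_NOPTS_VALUE || ts > end)
                end = ts;
        }
        *out_start = start;
        *out_end = end;
    }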
The pruning logic is simplified (I think?) - there is no "incremental"
cached pruning decision anymore (next_prune_target is removed), and
instead it simply prunes until the next keyframe like it's supposed to.
I think this incremental stuff was only there because of very old code
that got refactored away before. I don't even know what I was thinking
there, it just seems complex. Now the seek range is updated when a
keyframe packet is removed.
Instead of using the kf_seek_pts field, queue->seek_start is used to
determine the stream with the lowest timestamp, which should be pruned
first. This is different, but should work well. Doing the same as the
previous code would require compute_keyframe_times(), which would
introduce quadratic complexity.
On the other hand, it's fine to call compute_keyframe_times() when the
seek range is recomputed on pruning, because this is called only once
per removed keyframe packet. Effectively, this will iterate over the
packet list twice instead of once, and with some locality. The same
happens when packets are appended - it loops over the recently added
packets once again. (And not more often, which would go above linear
complexity.)
This introduces some "cleverness" with avoiding calling
update_seek_ranges() even when keyframe packets added/removed, which is
not really tightly coupled to the new code, and could have been in a
separate commit.
Removing next_prune_target achieves the same as commit b275232141,
which is hereby reverted (stale is_bof flags prevent seeking before the
current range, even if the beginning of the file was pruned). The seek
range is now strictly computed after at least one packet was removed,
and stale state should not be possible anymore.
Range joining may over-allocate the index a little. It tried hard to
avoid this before by explicitly freeing the old index before creating a
new one. Now it iterates over the old index while adding the entries to
the new one, which is simpler, but may allocate twice the memory in the
worst case. It's not going to matter for anything, though.
Seeking will be slightly slower. It needs to compute the seek PTS values
across all packets in the vicinity of the seek target. The previous code
also iterated over these packets, but now it iterates one packet range
more.
Another minor detail is that the special seeking code for SEEK_FORWARD
goes away. The seeking code will now iterate over the very last packet
range too, even if it's incomplete (i.e. packets are still being
appended to it). It's fine that it touches the incomplete range, because
the seek_end fields prevent that anything particularly incorrect can
happen. On the other hand, SEEK_FORWARD can now consider this as seek
target, which the deleted code had to do explicitly, as kf_seek_pts was
unset for incomplete packet ranges.
Obviously doesn't make sense in this order. The git history shows that this
comment was touched multiple times, without ever fixing it. It was
originally added in 2016, where the "for" was missing. Later, the "for"
was added, but to the wrong position.
What the fuck?
The old implementation didn't work for the OGG case. Discard the old
shit code (instead of fixing it), and write new shit code. The old code
was already over a year old, so it's about time to rewrite it for no
reason anyway.
While it's true that the old code appears to be broken, the main reason
to rewrite this is to make it simpler. While the amount of code seems to
be about the same, both the concept and the actual tag handling are
simpler. The result is probably a bit more correct.
The packet struct shrinks by 8 bytes. The fact that it wasted 8 bytes
per packet for a rather obscure use case was the reason I started this
at all (and when I found that OGG updates didn't work). While these 8
bytes aren't going to hurt, the packet struct was getting too bloated.
If you buffer a lot of data, these extra fields will add up. Still quite
some effort for 8 bytes. Fortunately, it's not like there are any
managers that need to be convinced whether it's worth doing. The freedom
to waste time on dumb shit.
The old implementation attached the current metadata to each packet.
When the decoder read the packet, the packet's metadata was made
current. The new implementation stores metadata as separate list, and
requires that the player frontend tells it the current playback time,
which will be used to find the currently valid metadata. In both cases,
the objective was to correctly update metadata even if a lot of data is
buffered ahead (and to update them correctly when seeking within the
demuxer cache).
The new implementation is actually slightly more correct, because it
uses the playback time for the metadata lookup. Consider if you have an
audio filter which buffers 15 seconds (unfortunately such a filter
exists), then the old code would update the current title 15 seconds too
early, while the new one does it correctly.
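The lookup is conceptually just this (names are hypothetical, not the
actual implementation):

    /* Metadata updates are kept as a pts-ordered list; the frontend's
     * playback time selects the newest entry that is not in the future. */
    struct timed_metadata {
        double pts;
        struct mp_tags *tags;
    };

    static struct timed_metadata *lookup_metadata(struct timed_metadata **list,
                                                  int num, double playback_pts)
    {
        struct timed_metadata *cur = NULL;
        for (int n = 0; n < num; n++) {
            if (list[n]->pts == MP_NOPTS_VALUE || list[n]->pts <= playback_pts)
                cur = list[n];
        }
        return cur;
    }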
The new code also simplifies mixing the 3 metadata sources (global, per
stream, ICY). We assume these aren't mixed in a meaningful way. The old
code tried to be a bit more "exact". I didn't bother to look how the old
code did this, but the new code simply always "merges" with the previous
metadata, so if a newer tag removes a field, it's going to stick around
anyway.
I tried to keep it simple. Other approaches include making metadata a
special sh_stream with metadata packets. This would have been
conceptually clean, but the implementation would probably have been
unnatural (and doesn't match well with libavformat's API anyway). It
would have been nice to make the metadata updates chapter points (makes
a lot of sense for the intended use case, web radio current song
information), but I don't think it would have been a good idea to make
chapters suddenly so dynamic. (Still an idea to keep in mind; the new
code actually makes it easier to work towards this.)
You could mention how subtitles are timed metadata, and actually are
implemented as sparse packet streams in some formats. mp4 implements
chapters as special subtitle stream, AFAIK. (Ironically, this is very
not-ideal for files. It would be useful for streaming like web radio,
but mp4 is extremely bad for streaming by design for other reasons.)
bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla
Some OGG web radio streams use timestamp resets when a new song starts
(you can find those in Xiph's directory - other streams there don't show
this behavior). Basically, the OGG stream behaves like concatenated OGG
files, and "of course" the timestamps will start at 0 again when the
song changes. This is very inconvenient, and breaks the seekable demuxer
cache. In fact, any kind of seeking will break.
This is more time wasted on Xiph's bullshit. No, having timestamp resets
by design is not reasonable, and fuck you. I much prefer the awful
ICY/mp3 streaming mess, even if that's lower quality and awful. Maybe it
wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET
HAPPENS. But it doesn't, and the randomly changing timestamps is the
only thing we get from its API.
At this point, demux_lavf.c is like 90% hacks. But well, if libavformat
applies this strange mixture of being clever for us vs. giving us
unfiltered garbage (while pretending it abstracts everything, and hiding
_useful_ implementation/low level details), not much we can do.
This timestamp linearizing would, in general, probably be better done
after the decoder, because then we wouldn't need to deal with timestamp
resets. But the main purpose of this change is to fix seeking within the
demuxer cache, so we have to do it on the lowest level.
This can probably be applied to other containers and video streams too.
But that is untested. Some further caveats are explained in the manpage.
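Illustratively, the linearization amounts to something like this (names
and the threshold are made up, not the actual demux_lavf.c logic):

    /* When timestamps jump backwards far enough, assume a reset happened
     * and bump a running offset so output timestamps keep increasing. */
    static double linearize_ts(double pts, double *ts_offset, double *last_pts)
    {
        if (pts == MP_NOPTS_VALUE)
            return pts;
        if (*last_pts != MP_NOPTS_VALUE && pts + *ts_offset + 1.0 < *last_pts)
            *ts_offset = *last_pts - pts;   // heuristic reset detection
        pts += *ts_offset;
        *last_pts = pts;
        return pts;
    }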
Probably doesn't change anything, other than looking slightly better. In
theory, the common function has some stuff that makes it more likely
that timestamps round-trip through conversions properly, but I didn't
confirm that.
This may call memmove() with size==0 and a NULL data pointer. In
addition to this being UB with memmove(), I think it's UB to do
arithmetic on a NULL pointer too. Of course, this doesn't matter in
practice at all, and is just stupidity to torture programmers.
Remove the duplicated creation of the first range. Explicitly destroy
ranges, including the last one on final deinit.
It looks like this also fixes a leak of removed range structs, which was
never noticed because they're so small, and were freed on final deinit
due to having the demuxer as talloc parent.
This improves upon the previous commit too (that change should have
been part of it I guess). Sub-demuxers (demux_timeline only) now
automatically don't use the cache (like it was intended by the previous
commit). The cache is "initialized" (or disabled) last in the recursive
call chain, which is messy, but this sub demuxer stuff FUCKING SUCKS, as
mentioned in the previous commit message. This would be no problem if
the caching layer and actual demuxer implementations were separate.
Most of this change has no purpose. Might make (de-)initialization of
further cache experiments simpler.
It seems the so called demuxer cache wasn't really disabled for
sub-demuxers (timeline stuff). This was relatively harmless, since the
actual packet data was shared anyway via refcounting. But with the
addition of a mmap cache backend, this may change a lot.
So strictly disable any caching for sub-demuxers. This assumes that
users of sub-demuxers (only demux_timeline.c by now?) strictly use
demux_read_any_packet(), since demux_read_packet_async() will require
some minor read-ahead if a low level packet read returned a packet for a
different stream.
This requires some awkward messing with this fucking heap of trash. The
thing that is really wrong here is that the demuxer API mixes different
concepts, and sub-demuxers get the same API as decoders, and use the
cache code.
The demuxer cache tries to track the number of bytes allocated for the
cache. In addition to the packet queue, the seek index is another data
structure that roughly depends on the amount of packets cached. So the
index size should somehow be part of the total number of bytes tracking.
Until now, this was handled with KF_SEEK_ENTRY_WORST_CASE, basically a
shitty heuristic. It was a guess (and probably rather an upper bound
than a lower bound). The implementation details made it annoying, and it
was conceptually inaccurate too.
Change this, and instead simply add the index size to the total cache
size. This essentially makes it part of the backbuffer. It's nice that
this cleanly decouples it from the packet size tracking itself.
Since it's part of the backbuffer number of bytes now, packet pruning
can't necessarily free enough space in the backbuffer anymore. Before
this commit, the backbuffer consisted of packets only, so it was
possible to reduce its size to 0 by pruning all packets until the
decoder reader position, at which point a packet was accounted as
forward buffered. Now the index is added to this, and it can't be
pruned. Replace the assert() because of this changed invariant.