Bug caused by a recent refactor. The function is not supposed to unlock
the mutex anymore, but removing the unlock was forgotten on an obscure
code path.
This could sometimes lead to crashes, depending on libc behavior (when
an already unlocked mutex was unlocked again, or when a mutex locked by
a different thread was unlocked).
None of those fancy debug tools could tell me that. I found it by
switching to an error checking mutex and wrapping pthread calls into a
macro that checks their return values.
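A minimal sketch of the trick (not the actual mpv code; macro and
function names are made up):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Abort loudly if a pthread call fails. With PTHREAD_MUTEX_ERRORCHECK,
     * a double unlock or an unlock from the wrong thread returns EPERM
     * instead of silently invoking undefined behavior. */
    #define PT_CHECK(call) do {                                   \
            int err_ = (call);                                    \
            if (err_) {                                           \
                fprintf(stderr, "%s:%d: %s failed: %d\n",         \
                        __FILE__, __LINE__, #call, err_);         \
                abort();                                          \
            }                                                     \
        } while (0)

    static pthread_mutex_t mtx;

    static void init_mutex(void)
    {
        pthread_mutexattr_t attr;
        PT_CHECK(pthread_mutexattr_init(&attr));
        PT_CHECK(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK));
        PT_CHECK(pthread_mutex_init(&mtx, &attr));
        PT_CHECK(pthread_mutexattr_destroy(&attr));
    }

Then every lock/unlock goes through PT_CHECK(pthread_mutex_unlock(&mtx))
etc., so the stray unlock aborts immediately with a file/line instead of
corrupting state somewhere down the road.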
If the number of chapters is 0, the chapter list can be NULL. clang
complains that we pass NULL to qsort(). This is yet another piece of
pointless UB that exists for no reason other than wasting your time.
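The fix is roughly this (chapters/num_chapters/cmp_chapter are stand-ins
for whatever the real code uses):

    /* qsort(NULL, 0, ...) is UB even though zero elements are accessed,
     * so only call it when there is actually something to sort. */
    if (num_chapters > 0)
        qsort(chapters, num_chapters, sizeof(chapters[0]), cmp_chapter);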
Seems to happen often with ytdl pseudo-DASH streams, so whatever. I
couldn't reproduce it to check what triggers it; I just remember seeing
the error message and finding it annoying.
This warns about player/audio.c:253 with gcc 8.2.0. Although this
warning could be useful to check the worst case estimation, the compiler
doesn't explain how it gets its dumb, bogus result, so this is useless.
You'd just end up trying to make the compiler happy for no reason.
struct stream used to include the stream buffer, including peek buffer,
inline in the struct. It could not be resized, which means the maximum
peek size was set in stone. This meant demux_lavf.c could peek only so
much data.
Change it to use a dynamic buffer. Because it's possible, keep the
inline buffer for default buffer sizes (which are basically always used
outside of file opening). It's unknown whether it really helps with
anything. Probably not.
This is also the fallback plan in case we need something like the old
stream cache in order to deal with mp4 + unseekable http: the code can
now be easily changed to use any buffer size.
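The idea, as a rough sketch (the real code uses different names and
allocation functions):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define INLINE_BUF_SIZE 2048  /* stand-in for the default size */

    struct stream {
        /* ... */
        uint8_t *buffer;          /* points at inline_buffer by default */
        size_t buffer_size;       /* starts as INLINE_BUF_SIZE */
        uint8_t inline_buffer[INLINE_BUF_SIZE];
    };

    /* Grow the buffer for unusually large peeks; the inline buffer still
     * covers the common case without a heap allocation. */
    static bool stream_resize_buffer(struct stream *s, size_t new_size)
    {
        if (new_size <= s->buffer_size)
            return true;
        uint8_t *nbuf = malloc(new_size);
        if (!nbuf)
            return false;
        memcpy(nbuf, s->buffer, s->buffer_size);
        if (s->buffer != s->inline_buffer)
            free(s->buffer);
        s->buffer = nbuf;
        s->buffer_size = new_size;
        return true;
    }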
I misunderstood how this extension works. If I understand it correctly
now, it's worse than I thought. The key thing is that the (ust, msc,
sbc) triple is not for a single swap event. Instead, (ust, msc) run
independently from sbc. Assuming a CFR display/compositor, this means
you can at best know the vsync phase and frequency, but not the exact
time at which sbc changed value.
There is GLX_INTEL_swap_event, which might work as expected, but it has
no EGL equivalent (while GLX_OML_sync_control does, in theory).
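For reference, a sketch of what the extension actually hands you
(assuming the function pointer is loaded via glXGetProcAddress in the
usual way):

    #include <GL/glx.h>
    #include <GL/glxext.h>
    #include <stdbool.h>
    #include <stdint.h>

    static PFNGLXGETSYNCVALUESOMLPROC GetSyncValues;

    /* (ust, msc) describe the vsync clock: msc increments every vsync,
     * ust is the timestamp of that increment. sbc counts completed swaps,
     * but nothing ties a given sbc value to a specific (ust, msc) pair -
     * which is exactly the problem described above. */
    static bool query_sync(Display *dpy, GLXDrawable win,
                           int64_t *ust, int64_t *msc, int64_t *sbc)
    {
        return GetSyncValues && GetSyncValues(dpy, win, ust, msc, sbc);
    }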
Redo the context_glx sync code. Now it's either more correct or less
correct. I wanted to add proper skip detection (if a vsync gets skipped
due to rendering taking too long and other problems), but it turned out
to be too complex, so only some unused fields in vo.h are left of it.
The "generic" skip detection has to do.
The vsync_duration field is also unused by vo.c.
Actually this seems to be an improvement. In cases where the flip call
timing is off, but the real driver-level timing apparently still works,
this will not report vsync skips or higher vsync jitter anymore. I could
observe this with screenshots and fullscreen switching. On the other
hand, maybe it just introduces an A/V offset or something.
Why the fuck can't there be a proper API for retrieving these
statistics? I'm not even asking for much.
I encountered a stream that fails with "Could not demux init fragment.".
It turns out this is a regression from the recent change to that code.
The assumption was that demux_lavf.c would treat this as a concatenated
stream - which it does, but not for probing.
Doing this transparently is hard without doing it properly. Doing it
properly would mean creating some sort of stream_concat (reminiscent of
that FFmpeg security bug). I probably don't want to go there, and I
think libavformat should just support this directly, so whatever.
Hack-fix this with the knowledge that the init segment will always
contain the headers.
The redundancy here always annoyed me. Back then I didn't change it
because it's hard to test and I had just fixed something. This doesn't
matter anymore, so simplify it, without testing and with the risk that
something breaks (why care).
This happened with a .flac file inside an archive. It tried to seek
beyond the end of the archive entry in a format where seeking isn't
supported. stream_libarchive handles these situations by skipping data.
But when the end of the archive is reached, archive_read_data() returns
0. While libarchive didn't bother to fucking document this, they do say
it's supposed to work like read(), so I guess a return value of 0 really
means EOF. So change the "< 0" to "<= 0". Also add some error logging.
The same file actually worked without out-of-bounds reads when
extracted, so there still might be something very wrong.
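The skip loop, roughly (a sketch with made-up names; the real code logs
through mpv's log functions rather than stderr):

    #include <archive.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* archive_read_data() follows read() semantics: >0 = bytes read,
     * 0 = EOF, <0 = error. Treating only <0 as the end is what broke. */
    static bool skip_data(struct archive *a, int64_t len)
    {
        char buf[4096];
        while (len > 0) {
            size_t chunk = len < (int64_t)sizeof(buf) ? len : sizeof(buf);
            la_ssize_t r = archive_read_data(a, buf, chunk);
            if (r < 0)
                fprintf(stderr, "libarchive: %s\n", archive_error_string(a));
            if (r <= 0)
                return false; /* r == 0: hit EOF before skipping 'len' */
            len -= r;
        }
        return true;
    }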
Evil shit due to a huge ownership/lifetime mess of various objects.
demux_close_stream() caused 1. the mp_cancel handle to be lost (fixed by
explicitly preserving it on the new dummy stream), and 2. a dangling
stream pointer to be passed to the timeline "demuxer" (fixed by using
demuxer->stream, which will never be a dangling pointer).
As one defensive programming measure, stop accessing the "stream"
variable in open_given_type(), even where it would still work. This
includes removing a redundant line of code, and removing the peek call,
which should not be needed anymore, as the remaining demuxers do this
mostly correctly.
The only thing left is the notification for track switching. Just get
rid of that.
There's probably no real reason to get rid of control(), but why not. I
think I was actually trying to do some real work but fuck that.
Subtitles (and a few other file types, like playlists) are not streamed,
but fully read on opening. This means keeping the file handle or network
socket open is a waste of resources and could cause other weird
behavior. This is why there's a hack to close them after opening.
Change this hack to make the demuxer itself do this, which is less
weird. (Until recently, demuxer->stream ownership was more complex,
which is why it was done this way.)
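In sketch form (parse_subtitles is made up; stream_read_complete and
free_stream are the real helpers, give or take their exact signatures):

    /* A demuxer that slurps the whole file on open can drop the stream
     * itself, instead of the caller closing it behind the demuxer's back. */
    static int demux_open_sub(struct demuxer *d)
    {
        bstr data = stream_read_complete(d->stream, d, 100000000);
        if (!data.start)
            return -1;
        parse_subtitles(d, data);      /* hypothetical */
        free_stream(d->stream);        /* file handle/socket closed here */
        d->stream = NULL;              /* assumes callers tolerate NULL */
        return 0;
    }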
I always wanted to get rid of this, because it makes the ownership rules
for the stream pointer really awkward. demux_edl.c was the only
remaining user of this. Replace it with a semi-clever idea: the init
segment shit can be used to pass the "file" contents as a memory block,
and "memory://" itself provides an empty stream. I have no idea if this
actually works, because I didn't immediately find a test stream (would
have to be some youtube DASH shit).
Instead of going through those weird DEMUXER_CTRLs, query this
information directly. I'm not sure which kind of brain damage made me
use CTRLs for these. Since there are no other DEMUXER_CTRLs that make
sense for the frontend, remove the remaining infrastructure for them
too.
The stream size return was the only thing that still required routing
STREAM_CTRLs from the frontend through the demuxer layer. This can be
done much more easily, so rip it out. Also rip out the now unused
infrastructure for STREAM_CTRLs via the demuxer layer.
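The pattern, roughly (struct and field names are approximations):

    /* Old: everything squeezed through a generic dispatcher:
     *     demux_control(demuxer, DEMUXER_CTRL_GET_READER_STATE, &state);
     * New: a plain typed function the frontend calls directly. */
    struct demux_reader_state {
        bool eof;               /* reader hit EOF */
        int64_t fw_bytes;       /* bytes buffered ahead of playback */
    };

    void demux_get_reader_state(struct demuxer *demuxer,
                                struct demux_reader_state *state);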
Apparently this was so that when playing a video file from a .rar file,
it would load external subtitles with the same name (instead of looking
for mpv's rar:// mangled URL). This was requested on github almost 5
years ago. Seems like a shit feature, and why should I give a fuck? Drop
it, because it complicates some in-progress change.
This code set pkt->stream to a value that I'm not sure was correct. A
recent commit overwrote it with a value that is definitely correct.
There appears to be an off-by-one error. No fucking clue whether this
was somehow correct, but applying an apparent fix does not seem to break
anything, so whatever.
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
In theory, this could be easily done with custom I/O. In practice, all
the halfassed garbage in FFmpeg shits itself and fucks up like there's
no tomorrow. There are several problems:
1. FFmpeg pretends you can do custom I/O, but in reality there's a lot
that custom I/O can't do. hls.c even contains explicit checks to disable
important things if custom I/O is used! In particular, you can't use the
HTTP keepalive functionality (needed for somewhat decent HLS
performance), because some cranky asshole in the cursed FFmpeg dev.
community blocked it.
2. The implementation of nested I/O callbacks (io_open/io_close) is
bogus and halfassed (like everything in FFmpeg, really). It will call
io_open on some URLs without ever calling io_close. Instead, it'll call
avio_close() on the context directly. From what I can tell, avio_close()
is incompatible with custom I/O anyway (overwhelmed by their own
garbage, the FFmpeg devs created the io_close callback for this reason,
because
they couldn't fix their own fucking garbage). This commit adds some
shitty workaround for this (technically triggers UB, but with that
garbage heap of a library we depend on it's not like it matters).
3. Even then, you can't proxy I/O contexts (see 1.), but we can just
keep track of the opened nested I/O contexts, as in the sketch below.
The bytes_read field is documented as not public, but reading it is
literally the only way to get what we want.
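The tracking part looks roughly like this (a sketch against the public
libavformat API; the UB workaround for the missing io_close calls is not
shown, and the fixed-size list is arbitrary):

    #include <libavformat/avformat.h>

    struct io_stats {
        AVIOContext *opened[64];  /* tracked nested contexts */
        int num_opened;
        int64_t closed_bytes;     /* bytes from already-closed contexts */
    };

    static int my_io_open(struct AVFormatContext *s, AVIOContext **pb,
                          const char *url, int flags, AVDictionary **opts)
    {
        struct io_stats *st = s->opaque;
        int r = avio_open2(pb, url, flags, &s->interrupt_callback, opts);
        if (r >= 0 && st->num_opened < 64)
            st->opened[st->num_opened++] = *pb;
        return r;
    }

    static void my_io_close(struct AVFormatContext *s, AVIOContext *pb)
    {
        struct io_stats *st = s->opaque;
        for (int n = 0; n < st->num_opened; n++) {
            if (st->opened[n] == pb) {
                st->opened[n] = st->opened[--st->num_opened];
                break;
            }
        }
        st->closed_bytes += pb->bytes_read; /* the "not public" field */
        avio_close(pb);
    }

Total bytes received is then closed_bytes plus the sum of bytes_read
over the still-open contexts in opened[]. The callbacks get assigned
with s->io_open = my_io_open and s->io_close = my_io_close.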
A more reasonable approach would probably be using curl. It could
transparently handle the keep-alive thing, as well as propagating
cookies etc. (which doesn't work with the FFmpeg approach if you use
custom I/O). Of course it would be even better if there were an
independent HLS implementation anywhere. FFmpeg's HLS support is so
embarrassingly pathetic that it just goes to show they belong in the
past (multimedia from 2000-2010) and should either modernize or fuck
off. With FFmpeg's shit-crusted structures, toxic communities, and
retarded assholes denying progress, probably the latter. Did I already
mention that FFmpeg is a shit-fucked steaming pile of garbage shit?
And all just to get some basic I/O stats that any proper HLS consumer
requires in order to implement adaptive streaming correctly (i.e.
browser-based players, and nothing FFmshit-based).
Use the extension to compute the (hopefully correct) video delay and
vsync phase.
This is very fuzzy, because the latency will suddenly be applied after
some frames have already been shown. This means there _will_ be "jumps"
in the time accounting, which can lead to strange effects at start of
playback (such as making initial "dropped" etc. frames worse). The only
reasonable way to fix this would be running a few dummy frame swaps at
start of playback until the latency is known. The same happens when
unpausing.
This only affects display-sync mode.
Correct functioning was not confirmed. It only "looks right". I don't have
the equipment to make scientifically correct measurements.
A potentially bad thing is that we trust the timestamps we're receiving.
Out of bounds timestamps could wreak havoc. On the other hand, this will
probably cause the higher level code to panic and just disable DS.
As a further caveat, this makes a bunch of assumptions about UST
timestamps. If there are delayed frames (i.e. we skipped one or more
vsyncs), the latency logic is mostly reset. There is no attempt to make
the vo.c skipped-vsync logic use this. Also, the latency computation
determines a vsync duration, and there's no effort to reconcile or share
the vo.c logic for determining vsync duration.
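What the computation boils down to, in sketch form (UST is assumed to be
in microseconds here, which the spec does not actually guarantee, and
'now' is assumed to be in the same time base and >= ust1):

    #include <math.h>
    #include <stdint.h>

    /* Two (ust, msc) samples give the vsync clock; 'now' then gives the
     * phase. None of this says when a particular swap (sbc) finished. */
    static void vsync_from_oml(int64_t ust0, int64_t msc0,
                               int64_t ust1, int64_t msc1, int64_t now_us,
                               double *vsync_duration, double *phase)
    {
        *vsync_duration = 0;
        *phase = 0;
        if (msc1 <= msc0)
            return; /* no vsync progress; can't estimate anything */
        *vsync_duration = (ust1 - ust0) / 1e6 / (msc1 - msc0);
        *phase = fmod((now_us - ust1) / 1e6, *vsync_duration);
    }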
I don't ever use them, so kill them.
Linux TV is excessively complex, and whenever I attempted to use it, it
didn't work well or would have required some major work to update it.
(For example, when I tried to use a webcam-type device with tv://, it
worked badly; even the libavdevice garbage worked better.)
The "program" property was rather complex and rather obscure. I didn't
ever use it. Should there ever be a proper use for it (maybe HLS stream
selection?), it should be rewritten anyway.
The demuxer cache is the only cache now. Might need another change to
combat seeking failures in mp4 etc. The only bad thing is the loss of
cache-speed, which was sort of nice to have.
FFmpeg is retarded enough not to give us any indication whether it is
(unless we query fields not in the ABI/API). I bet FFmpeg developers
love it when library users have to litter their code with duplicated
information.
Quitting is slightly asynchronous, so the status line can be updated
again. Normally, that's fine, but if quitting comes with a message (such
as with quit_watch_later), it will print the status line again after the
message, which looks annoying. So flush and clear the status message if
it's updated during quitting.
If anyone happened to build with GL disabled, this could lead to option
changes not always refreshing the screen. Since vo_gpu is always enabled
now (just not necessarily any backend for it), we can drop the #if
completely.
(The way this works is a bit idiotic - the option cache exists only to
grab the change notification, which will trigger a redraw and make
vo_gpu update its own second copy of them. But at least it avoids some
layering issues for now.)
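The mechanism, roughly (mpv's real API is the m_config_cache family;
exact signatures approximated):

    /* vo_gpu side: keep a private snapshot of the option group and get
     * notified when anything in it changes. */
    struct m_config_cache *cache =
        m_config_cache_alloc(vo, vo->global, &gl_video_conf);

    /* In the update path: returns true if any option changed since the
     * last call, in which case a redraw picks up the new values. */
    if (m_config_cache_update(cache))
        vo->want_redraw = true;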