Just rearranging things. Setting SEEK_HR for backstep seeks doesn't have
much meaning by itself, but it disables the weird audio snapping of
"keyframe" seeks. (I don't know, it's late.)
See manpage additions. This is a huge hack. You can bet there are plenty
of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
thing can easily crash and burn. (Although it shouldn't literally crash;
that would be a bug. It possibly _could_ "start a fire" by entering some
sort of endless loop: not a literal fire, just a state where it keeps
trying to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but there
are a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bug/mess breaks
correct slicing of segments, so EDL/ordered chapter playback is broken
in the backward direction).
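As a minimal sketch of the "reorder the decoder output" idea (toy types
only, not the actual decoder wrapper code; the real thing operates on the
decoder wrapper's frame objects and keyframe-delimited batches):
    // Toy stand-in for a decoded frame; the real code passes the decoder
    // wrapper's own frame objects around.
    struct frame { double pts; };

    // Decode one batch forward (roughly keyframe to keyframe), collect
    // the results, then hand them out in reverse order, so the rest of
    // the player sees time running backwards.
    static void emit_batch_reversed(struct frame **batch, int num_frames,
                                    void (*emit)(struct frame *))
    {
        for (int i = num_frames - 1; i >= 0; i--)
            emit(batch[i]);
    }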
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterate the usual demuxer output backward. But this just
doesn't fit into our code, so the result is a Cthulhu-grade nightmare.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note,
backward playback disables prefetching, which would otherwise lead to
losing new live video, but this could be avoided).
The mess also meant that I didn't bother trying to support subtitles.
Subtitles are a problem because they're "sparse" streams. They need to
be "passively" demuxed: you don't try to read a subtitle packet; you
demux audio and video, and then check whether a subtitle packet turned
up. This means that to get subtitles for a time range, you need to know
that you demuxed video and audio over that range, which becomes pretty
messy when audio and video are demuxed backwards separately.
Backward display is the weirdest (and potentially buggiest) part. To
avoid having to touch a LOT of timing code, we negate all timestamps.
The basic idea is that, due to the negation, all comparisons and
subtractions of timestamps keep working, and you don't need to touch
every single one of them to "reverse" them.
E.g.:
    bool before = pts_a < pts_b;
would need to become:
    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;
or:
    bool before = pts_a * dir < pts_b * dir;
Or, as it's implemented now, you just do this after decoding:
    pts_a *= dir;
    pts_b *= dir;
and then the normal timing/renderer code can keep using:
    bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing the places where values are
passed between the positive and negative "domains". For example, seeking
and timestamp display to the user always use positive timestamps. The
main mess is that it's not obvious which domain a given variable should
use, or does use.
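For illustration, a hedged sketch of such a domain crossing (the helper
names here are hypothetical, not actual mpv functions):
    // Hypothetical helpers: user-visible timestamps (seeking, OSD
    // display) are always positive; internal timestamps are negated
    // during backward playback. play_dir is +1 forward, -1 backward.
    static double user_to_internal_pts(double user_pts, int play_dir)
    {
        return user_pts * play_dir;
    }

    static double internal_to_user_pts(double internal_pts, int play_dir)
    {
        return internal_pts * play_dir;
    }

    // Internal timing code can keep using plain comparisons like
    //     bool before = pts_a < pts_b;
    // because both operands live in the same (possibly negated) domain.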
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past the decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there quite likely are
subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
This is a minor, benign hack that reorders the MPV_FORMAT_NODE output.
The order of members is not supposed to matter, but it is the order in
which the OSD shows them when rendering the raw property output.
Normally this isn't used, but demuxer-cache-state is a "prominent" case.
Moving the seek ranges to the end avoids the more important other fields
being cut off at the bottom of the screen.
Also output the seek ranges in reverse. The order doesn't matter either
(as declared by input.rst). Currently, the demuxer orders them by least
recent use. Reversing it makes the most recently used range (the current
range) show up on top.
In other words, this commit does basically nothing but fudge stuff in a
cosmetic way to make debugging easier for me, and you've wasted your
time reading this commit message and the diff. Good.
This will properly notify observed properties if the player hasn't
started actual playback yet, such as with --demuxer-cache-wait.
This also happens to run the main loop more often, which triggers
MPV_EVENT_IDLE, and fixes the OSC display. (See the previous commit
message.)
The OSC's (osc.lua) event handling is fundamentally broken. It waits for
MPV_EVENT_TICK to update the UI, and MPV_EVENT_TICK has become entirely
meaningless, except as a hack for the OSC. There are many situations
where the OSC doesn't properly update because the TICK event it expects
isn't sent.
Fix one of them: it doesn't update the cache state if the VO window is
forced and --demuxer-cache-wait is used. Make it so that the tick event
is sent even if playback initialization is taking time.
This is still slightly broken, because it works only if the main loop is
actually run, which depends on random circumstances (such as moving the
mouse over the VO window). The next commit will add another such
circumstance, which will make it appear to work, although it's still
conceptually broken. If we "fixed" it properly and strictly woke up the
player whenever the idle timer ran out, we'd send tick events all the
time, even if nothing is going on, which we don't want. It's a shitshow.
This is just redundant and slightly annoying, at least for normal
command line usage. If there are multiple entries, still print it
(because you want to know where you are). Also still print it if the
player was redirected (because you want to know where you got redirected
to).
Remove the singly linked list hack, replace it with a slightly more
proper data structure. This probably gets rid of a few minor bugs along
the way, caused by the awkward nonsensical sharing/duplication of some
fields.
Another change (because I'm touching everything related to the timeline
anyway) is that I'm removing the special semantics of parts[num_parts].
This index is now strictly out of bounds, and instead of using the start
time of the next/beyond-last part, there is now an explicit end time
field.
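A hedged sketch of the resulting shape (struct and field names here are
illustrative, not necessarily the actual timeline structs):
    // Illustrative only: previously, parts[num_parts].start doubled as
    // the end time of the last part; now the end is stored explicitly
    // and parts[num_parts] is strictly out of bounds.
    struct timeline_part_sketch {
        double start;               // start time of this part
        // ...source, per-part offsets, etc.
    };

    struct timeline_sketch {
        struct timeline_part_sketch *parts;
        int num_parts;
        double end_time;        // replaces the old parts[num_parts] hack
    };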
Unfortunately, this also requires touching the code for cue and mkv
ordered chapters. From some superficial testing, they still seem to
mostly work.
One observable change is that the "no_chapters" header is per-stream
now, which is arguably more correct; getting the old behavior back would
require adding code to handle it as a special case, so just adjust
ytdl_hook.lua to the new behavior.
I noticed that some ytdl streams have a start time other than 0. There's
currently no mechanism in the EDL code that determines this start time
correctly, so if the start time is high, demux_timeline.c can end up
clipping off the entire video and audio, resulting in a failure of
playback.
As a countermeasure, use the no_clip header, which entirely disables
clipping against time ranges in demux_timeline.c. (It's basically a
hack.)
E.g. "mpv null:// --demuxer=rawvideo" will "hang" by waiting for video
EOF forever. It's not signalled correctly because of the last-frame
corner case, which attempts to wait until the current frame is finally
displayed (which is signalled by whether a new frame can be queued, see
commit 1a339fa09d for some details). If no frame was ever queued, the VO
is not configured, and vo_is_ready_for_frame() never returns true.
Fix this by using vo_has_frame(), which seems to be exactly the correct
thing we need.
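A toy sketch of the shape of this (vo_has_frame() and
vo_is_ready_for_frame() are the real function names mentioned above; the
stand-in declarations and the wrapper are made up for illustration):
    #include <stdbool.h>
    #include <stdint.h>

    struct vo;                              // stand-in for the VO handle
    bool vo_has_frame(struct vo *vo);       // a frame was ever queued
    bool vo_is_ready_for_frame(struct vo *vo, int64_t next_pts);

    // If no frame was ever queued, the VO is not configured, so waiting
    // for it to become "ready for a frame" would never terminate; in
    // that case EOF can be signalled right away.
    static bool can_signal_video_eof(struct vo *vo)
    {
        if (!vo_has_frame(vo))
            return true;
        return vo_is_ready_for_frame(vo, -1);
    }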
Init fragments are not a necessity for DASH, but this code assumed so.
Maybe the check was there to prevent worse. But using normal EDL here
leads to very bad behavior: it tries to open hundreds or thousands of
fragments, each with its own demuxer and HTTP connection. (This behavior
is fine for normal uses of EDL, but completely unacceptable when
emulating fragmented streaming protocols. I'm not sure why the normal
EDL code is needed here, but I think someone claimed some obscure sites
just need it.)
This happens in the same situation as the one described in the previous
commit.
Otherwise we'd just use the base URL as media URL, which would fail with
a 404 error.
Not sure if there's a deeper reason why the audio path was explicitly
different from the video one. But this actually works now for a video
that returned fragmented DASH audio with the default format selection.
(This affects streams on that well-known site run by a big evil Silicon
Valley company. It typically happens after a live stream gets converted
to a normal video; after some time passes, the fragmented version is
deleted and replaced by a non-fragmented one. I've observed this several
times, and it seems to be the "normal" behavior.)
This merges separate audio and video tracks into one virtual stream,
which helps the mpv caching layer. See previous EDL commit for details.
It's apparently active for most of the evil Silicon Valley giant's
streaming videos.
Initial tests seem to work fine, except it happens pretty often that
playback goes into buffering immediately even when seeking within a
cached range, because there is not enough forward cache data yet to
fully restart playback. (Or something like this.)
The audio stream title used to be derived from track.format_note; this
commit stops doing so. It seemed pointless anyway. If really necessary,
it could be restored by adding new EDL headers.
Note that we explicitly don't do this with subtitle tracks. Subtitle
tracks still have a chance with on-demand loading or loading in the
background while video is already playing; merging them with EDL would
prevent this. Currently, subtitles are still added in a "blocking"
manner, but in theory this could be loosened. For example, the Lua API
already provides a way to run processes asynchronously, which could be
used to add subtitles during playback. EDL will probably never be
flexible enough to provide this. Also, subtitles are downloaded all at
once, rather than streamed like audio and video.
Still missing: disabling EDL's pointless chapter generation, and
propagating download speed statistics through the EDL wrapper.
The ytdl wrapper can resolve web links to playlists. The resulting
playlist is passed as a big memory:// blob and will contain further,
quite normal, web links. When playback of one of these playlist entries
starts, ytdl is called again and resolves the web link to a media URL
again.
This didn't work if playlist entries resolved to EDL URLs. Playback was
rejected with a "potentially unsafe URL from playlist" error. This was
completely weird and unexpected: using the playlist entry directly on
the command line worked fine, and there isn't a reason why it should be
different for a playlist entry (both are resolved by the ytdl wrapper
anyway). Also, if the only EDL URL was added via audio-add or sub-add,
the URL was accessed successfully.
The reason this happened is because the playlist entries were marked as
STREAM_SAFE_ONLY, and edl:// is not marked as "safe". Playlist entries
passed via command line directly are not marked, so resolving them to
EDL worked.
Fix this by making the ytdl hook set load-unsafe-playlists while the
playlist is parsed. (After the playlist is parsed, and before the first
playlist entry is played, file-local options are reset again.) Further,
extend the load-unsafe-playlists option so that the playlist entries are
not marked while the playlist is loaded.
Since playlist entries are already verified, this should change nothing
about the actual security situation.
There are now 2 locations which check load_unsafe_playlists. The old one
is a bit redundant now. In theory, the playlist loading code might not
be the only code which sets these flags, so keeping the old check is
somewhat justified (and in any case it doesn't hurt to keep it).
In general, the security concept sucks (and always did). For example, I
can't answer the question whether you can "break" this mechanism with
various combinations of archives, EDL files, playlist files, compromised
sites, and so on. You probably can, and I'm fully aware that it's
probably possible, so don't blame me.
struct stream used to include the stream buffer, including the peek
buffer, inline in the struct. It could not be resized, which means the
maximum peek size was set in stone. This meant demux_lavf.c could peek
only so much data.
Change it to use a dynamic buffer. Because it's possible, keep the
inline buffer for default buffer sizes (which are basically always used
outside of file opening). It's unknown whether it really helps with
anything. Probably not.
This is also the fallback plan in case we need something like the old
stream cache in order to deal with mp4 + unseekable http: the code can
now be easily changed to use any buffer size.
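Roughly the shape of it, as a hedged sketch (constant and field names
are illustrative, not the actual struct stream members):
    #include <stddef.h>
    #include <stdint.h>

    #define SKETCH_INLINE_BUF_SIZE (64 * 1024)  // illustrative default

    struct stream_sketch {
        uint8_t *buffer;        // points at inline_buffer by default
        size_t buffer_size;     // can now be grown past the inline size
        size_t buf_start, buf_end;
        // The inline storage covers the default size, so normal playback
        // needs no extra allocation; larger peeks during file opening
        // can switch the buffer to a bigger heap allocation.
        uint8_t inline_buffer[SKETCH_INLINE_BUF_SIZE];
    };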
Instead of going through those weird DEMUXER_CTRLs, query this
information directly. I'm not sure what kind of brain damage made me use
CTRLs for these. Since there are no other DEMUXER_CTRLs that make sense
for the frontend, remove the remaining infrastructure for them too.
The stream size query was the only thing that still required doing
STREAM_CTRLs from the frontend through the demuxer layer. This can be
done much more easily, so rip it out. Also rip out the now unused
infrastructure for STREAM_CTRLs via the demuxer layer.
Apparently this was so that, when playing a video file from a .rar file,
it would load external subtitles with the same name (instead of looking
for mpv's mangled rar:// URL). This was requested on github almost 5
years ago. It seems like a weird feature, and I don't care. Drop it,
because it complicates an in-progress change.
The "program" property could switch between TS programs. It was rather
complex and rather obscure (even if you deal with TS captures, you
usually don't need it). If anyone actually needs it (did anyone ever
attempt to even use it?), it should be rewritten. The demuxer should
export a program list, and the frontend should handle the "cycling"
logic.
Linux analog TV support (via tv://) was excessively complex, and
whenever I attempted to use it (with cameras or loopback devices), it
either didn't work well or would have required major work to update it.
It's very much stuck in the analog past (my favorite is the frequency
tables in frequencies.c for analog TV channels which don't exist
anymore).
Cameras and the like in particular work fine with libavdevice, and
better than with tv://, for example:
  mpv av://v4l2:/dev/video0
(adding --profile=low-latency --untimed even makes it mostly realtime)
Adding a new input layer that targets such "modern" uses would be
acceptable, if anyone is interested in it. The old TV code is just too
focused on actual analog TV.
DVB is rather obscure, but has an active maintainer, so don't remove it.
However, the demux/stream ctrl layer must go, so remove controls for
channel switching. Most of these could be reimplemented by using the
normal method for option runtime changes.
This removes anything related to DVD/BD/CD that negatively affected the
core code. That includes trying to rewrite timestamps (since DVDs and
Blu-rays do not set packet stream timestamps to playback time, and can
even have resets mid-stream), export of chapters, stream languages,
export of title/track lists, and all that.
Only basic seeking is supported. It is very much possible that seeking
completely fails on some discs (on some parts of the timeline), because
timestamp rewriting was removed.
Note that I don't give a shit about optical media. If you want to watch
them, rip them. Keeping some bare support for DVD/BD is the most I'm
going to do to appease the type of lazy, obnoxious users who will care.
There are other players which are better at optical discs.
Semantics changes are the same as in 548ef078.
Also, the previous C implementation returned a string for the `stdout`
value, but the stdout of the subprocess command is MPV_FORMAT_BYTE_ARRAY,
which js previously didn't support; support it too (in pushnode) by
returning it as a string, the same as the Lua code does.
There were some cases where a js number (double) was blindly cast to int
or uint64, which can be undefined behavior (out of range for int) or
wrong (negative to uint).
Now the code throws a js error if the value is out of range.
Additionally, commit ec625266 added these checks for the new hooks API,
but incorrectly tested int64 range rather than uint64. Fix this too.
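Conceptually, the checks look something like this (a hedged sketch, not
the actual player/javascript.c code):
    #include <stdint.h>

    // Casting an out-of-range double to an integer type is undefined
    // behavior in C, so validate first; the caller throws a js error on
    // failure. 0x1p63 and 0x1p64 are exactly 2^63 and 2^64 as doubles.
    static int check_int64(double d, int64_t *out)
    {
        if (!(d >= INT64_MIN && d < 0x1p63))
            return 0;
        *out = (int64_t)d;      // in range: conversion is well defined
        return 1;
    }

    static int check_uint64(double d, uint64_t *out)
    {
        if (!(d >= 0 && d < 0x1p64))
            return 0;
        *out = (uint64_t)d;
        return 1;
    }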
Adds the negation missed in 8816e1117e when moving from a
positive-is-active to a positive-is-idle variable.
This leads to proper updates of properties such as "eof-reached", and
also fixes screensaver state updates.
Separately found and fixed by avih and wnoun.
Co-authored-by: wnoun <wnoun@outlook.com>
Manual changes done:
* Merged the interface-changes entries under the changes already in
  master.
* Moved the hwdec-related option changes to video/decode/vd_lavc.c.
As stated in the original commit message, if the demuxer set the start
time to the first subtitle packet, the subtitles would be shifted
incorrectly. It appears that this is the case for external PGS
subtitles.
This reverts commit 520fc74036.
Fixes #5485
Merge file-size/file-format and audio channel-count/format into one line
respectively. This fixes stats overflowing the screen in larger than
19:6 aspect ratios. In this case a problem was reported for ~21:9 which
should be common enough for us to "support" it.
Idle handlers used to not be executed when timers were active. Now they
are executed:
* After all expired timers have been executed.
* After all events have been processed (same as when there are no
  timers).
--record-file is nice, but only sometimes. If you watch some sort of
livestream which you want to record, it's actually much nicer not to
record what you're currently "seeing", but everything you're receiving.
The buffer is written to in `audio_config_to_str_buf` using `snprintf`
with a `%s` formatting of a 128-byte buffer. This can exceed the 80-byte
target buffer, causing truncated output.
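For illustration, a hedged sketch of the issue (the buffer sizes are the
ones described above; apart from snprintf, the names are made up):
    #include <stdio.h>

    // snprintf() never overruns its destination, but it truncates:
    // writing up to 128 bytes of channel/format text through "%s" into
    // an 80-byte buffer silently cuts the output off. The destination
    // therefore has to be at least as large as the worst-case string.
    static void sketch(void)
    {
        char channels[128] = "example channel layout string";
        char buf[80];                   // too small: gets truncated
        snprintf(buf, sizeof(buf), "%s", channels);
    }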