m_config has an m_config_option array, which is used for all option
access. The code maintaining shadow copies also tried to make use of it,
and did so by "cleverly" assigning each m_sub_options run a slice of
that array. But actually it's much simpler to, you know, directly access
the damn options.
This helps separating m_config and the general option code slightly.
Still seems to work after a superficial test, good enough.
This is good because a private thing is not so public anymore, and it's
also preparation for further changes.
Some tricky memory management issues: m_config_data (i.e. config->data)
now depends on m_config_shadow, instead of m_config. In particular,
free_option_data() accesses the m_config_shadow.groups array. Obviously
it must be freed before m_config_shadow.
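In sketch form, the resulting teardown order (variable names here are
illustrative, assuming ta/talloc-style destructors, not the actual fields):

    // free_option_data() is the destructor of m_config_data and walks
    // shadow->groups, so all data must go away while the shadow is alive.
    talloc_free(config_data);    // m_config_data; destructor reads shadow->groups
    talloc_free(config_shadow);  // m_config_shadow; safe only after the above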
Unused now. The old stream cache used it, but it was removed.
On a side note, the demuxer cache uses mp_mkostemps(). It looks like our
Windows open() emulation handles this correctly by using CREATE_NEW, so
no functionality gets lost by the "new" approach. On the other hand, the
demuxer cache does not set FILE_FLAG_DELETE_ON_CLOSE, but instead tries
to delete the file after opening (POSIX style), which probably won't
work on Windows. But I'm not sure how to make it use the DELETE_ON_CLOSE
flag, so whatever.
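For reference, a sketch of the POSIX-style pattern mentioned above (create
the temp file, then immediately delete the name so the data disappears when
the fd is closed), using glibc's mkostemps() in place of mpv's mp_mkostemps()
wrapper; whether this maps onto Windows semantics is exactly the open
question:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    // Returns an fd for an anonymous-ish temp file, or -1 on error.
    static int open_unlinked_tmp(const char *dir)
    {
        char path[4096];
        snprintf(path, sizeof(path), "%s/mpv-cache-XXXXXX.dat", dir);
        // mkostemps(template, suffixlen, flags): ".dat" is the 4-char suffix.
        int fd = mkostemps(path, 4, O_CLOEXEC);
        if (fd < 0)
            return -1;
        unlink(path); // POSIX: the file lives on until the last fd is closed
        return fd;
    }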
Move the comments documenting exported functions to the header. It looks
like the header is the preferred place for that (although I don't really
appreciate headers where you lose the overview because of all the
documentation comments). Add comments to some undocumented prototypes.
This was one of those "shouldn't exist" kinds of functions: it could
access internals that were supposed to be isolated away, but some code
needed to access them anyway.
It looks like the last use of it went away in 2016, shortly after it was
introduced.
Dear diary,
today I fixed a shitty bug that was all my fault because I made a
horrible mess. (Except it was a horrible mess before I even touched
this shit, but let's not blame others.)
Sometimes, updates to VO options that control video sizing (like panscan)
didn't update the screen correctly. They were delayed until the next
option change or so.
It turns out that if the option update happens at the "same" time as a
VOCTRL, update_opts() doesn't actually notify the vo_driver of the
change. This in turn happened because run_control() called
m_config_cache_update(). The latter function returns true if the options
changed since the last call, and update_opts() also calls it (on the
same config cache) for the same purpose. The update_opts() call, which
is triggered by a third mechanism, comes later, but the cache update
call will return false (as it should). Basically, given the config API,
you can't have multiple places poll the same config cache and expect each
of them to see the change. The skipped handling in update_opts() meant
that the notification required to apply the changed option wasn't run.
Fix this by simply calling update_opts() directly instead. Now there's
only 1 m_config_cache_update() call on this specific instance. Fix the
call in run_reconfig() too, so the previous sentence isn't a lie (but it
probably doesn't make a difference in practice due to certain details).
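A self-contained toy model of the failure mode, with a mocked cache_update()
standing in for m_config_cache_update() (true exactly once per change, per
cache instance):

    #include <stdbool.h>
    #include <stdio.h>

    // Mock of the relevant semantics: returns true only on the first call
    // after a change, per cache instance.
    struct cache { bool changed; };

    static bool cache_update(struct cache *c)
    {
        bool r = c->changed;
        c->changed = false;
        return r;
    }

    static struct cache vo_cache = { .changed = true };

    static void update_opts(void)
    {
        if (cache_update(&vo_cache))
            printf("notify VO driver of changed options\n");
    }

    static void run_control_broken(void)
    {
        // Polls the same cache: this consumes the change, so the later
        // update_opts() call sees "no change" and skips the notification.
        if (cache_update(&vo_cache)) { /* sync legacy VOCTRL state */ }
    }

    static void run_control_fixed(void)
    {
        // The fix: route through update_opts(), the only place that polls.
        update_opts();
    }

    int main(void)
    {
        run_control_broken();      // eats the change
        update_opts();             // prints nothing -> the observed bug
        vo_cache.changed = true;   // another option change arrives
        run_control_fixed();       // notification happens as expected
        return 0;
    }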
I'm not sure how I even ran into this sort-of race condition. The VOCTRL
that messed up the option update was VOCTRL_UPDATE_PLAYBACK_STATE, which
happens semi-regularly.
Why this config cache shit and all the other shit? Rediscovering this
crap wasn't pleasant. It's a bunch of hacks that became necessary when
the ancient MPlayer architecture made it hard to move the VO to a
separate thread.
All the VO code typically accesses vo->opts (whose fields all used to be
global variables in MPlayer). The frontend changes these on user input.
Putting locking around all the options would be a nightmare, and keeping
a copy of the options in the thread was much simpler. You need a way to
propagate option changes, notify the thread, and update the local copy
too. And the result of these thoughts was the config cache mechanism.
In this specific case, the relevant cache update call in update_opts()
triggers a VOCTRL_SET_PANSCAN to the VO driver, which isn't related to
its former function anymore. Instead, it causes the VO driver to update
the video sizing/placing options, which the generic VO code can't do.
(Mostly because the VO driver includes the windowing stuff and is
responsible for resizing etc. itself.)
VOCTRLs sent by the frontend are even worse. MPlayer had no real runtime
option change mechanism. Some options were vaguely duplicated by
properties, so you could effectively change those options at runtime.
Each of these options had its own VOCTRL, which still exist today, e.g.
VOCTRL_FULLSCREEN, or VOCTRL_ONTOP. I tried to make all options runtime
changeable, and to unify properties with options. But I couldn't be
bothered with updating all VO drivers to listen to option changes
directly, because that would be pretty tedious. So the property code is
still all there and sends the old VOCTRLs. But of course you need to
sync up the options, which is why the run_control() code did that.
(Unrelated: VO_EVENT_FULLSCREEN_STATE is the worst shithack of them all.
Currently, only the frontend can actually write to options (for awful
reasons), so if the fullscreen state changes due to outside interaction,
the VO driver can't update the corresponding option fields. So the VO
notifies the frontend with said VO_EVENT_, and the frontend then sends
VOCTRL_GET_FULLSCREEN, and updates the global copy of the option with
the value returned by that. I still like to think the situation is not
that bad considering the monstrous effort of converting single-threaded
code that had hundreds of options in global variables to multi-threaded
code with no global variables at all.)
Helper for the ab-loop-dump-cache command, see manpage additions.
This is kind of shit. Not only is this a very "special" feature, but it
also vomits more messy code into the big and already bloated demux.c,
and the implementation is sort of duplicated with the dump-cache code.
(Except it's different.) In addition, the results sort of depend on what a
video player would do with the dump-cache output, or what the user wants
(for example, a user might be more interested in the range of output
audio, instead of the video).
But hey, I don't actually need to justify it. I'm only justifying it for
fun.
This is the muxer used by all 3 stream recording features (why are there
so many?). It tried hard to avoid writing broken files. In particular,
it buffered packets until it knew there was a keyframe packet (which, in
mpv's/FFmpeg's definition, means a seek point from which decoding can
resume), or final EOF. The danger that was probably considered here was
that due to video frame reordering, not muxing some trailing, missing
packets of a keyframe range could lead to broken decoding or skipped
frames, so better discard packets belonging to an incomplete range.
Sounds like a good idea so far.
Unfortunately, this will drop an entire keyframe range even if the
current packet run is complete and mp_recorder_mark_discontinuity() is
called, simply because recorder.c can not know that the next packet
would have been a keyframe.
It seems better to mux all packets to avoid losing valid data, even if
it means that sometimes packets/frames will be missing from the file. It
benefits especially the dump-cache command, which will call the function
to signal a discontinuity after every range. Before this commit, it
discarded the last packets, even if they were perfectly fine.
(An alternative solution for dump-cache would have been a second
discontinuity marker function, that communicates that the current packet
range is complete. But this commit's solution is simpler and overall
more robust, at the danger of producing more semi-broken files.)
This may make some of the complex buffering/waiting logic in recorder.c
pointless.
Untested (in this final form).
But don't tell the reader which APIs those are. Hope the user will just
search for "async" in the Lua section (lua.rst). But of course, nobody
will ever care about anything related to this.
That's right, and it's probably not the end of it. I'll just claim that
I have no idea how to create a proper user interface for this, so I'm
creating multiple partially-orthogonal mechanisms, some of which may work
better in their particular use cases.
Until now, there was --record-file. You get relatively good control
over what is muxed, and it can use the cache. But it sucks that it's
bound to playback. If you pause while it's set, muxing stops. If you
seek while it's set, the output will be sort-of trashed, and that's by
design.
Then --stream-record was added. This is a bit better (especially for
live streams), but you can't really control well when muxing stops or
ends. In particular, it can't use the cache (it just dumps whatever the
underlying demuxer returns).
Today, the idea is that the user should just be able to select a time
range to dump to a file, and it should not be affected by the user seeking
around in the cache. In addition, the stream may still be running, so
there's some need to continue dumping, even if it's redundant to
--stream-record.
One notable thing is that it uses the async command shit. Not sure
whether this is a good idea. Maybe not, but whatever. Also, a user can
always use the "async" prefix to pretend it doesn't.
Much of this was barely tested (especially the reinterleaving crap),
let's just hope it mostly works. I'm sure you can tolerate the one or
other crash?
The screenshot command has this weird behavior that it shows messages
both on terminal and OSD by default, but that a command prefix can be
used to disable the OSD message.
Move this mechanism to common code, and make this available to other
commands too (although as of this commit only the screenshot commands
use it).
This gets rid of the weird screenshot_ctx.osd field too, which was sort
of set on a command, and sometimes inconsistently restored after the
command.
It makes some slight sense and helps with one of the following commits.
Also rename that other function to make it sound less similar to
find_seek_target().
Always set max_bytes_bw to 0 if the seekable cache is disabled, instead of at
the place of its use. This is the only use of it, so the commit should
not change any behavior.
(Alternatively, this could drop the max_bytes_bw variable, use the
option directly, and keep the old code that resets it at the point of use
if the cache is disabled.)
Until now, the following could happen: if you set a 1GB forward cache,
and a 1GB backward cache, and you opened a 2GB file, it would prune away
the data cached at the start as playback progressed past the 50% mark.
With this commit, nothing gets pruned, because the total memory usage
will still be 2GB, which equals the total allowed memory usage of 1GB +
1GB.
There are no explicit buffers (every packet is malloc'ed and put into a
linked list), so it all comes down to buffer size computations. Both
reader and prune code use these sizes to decide whether a new packet
should be read / an old packet discarded. So just add the remaining free
"space" from the forward buffer to the available backward buffer. Still
respect if the back buffer is set to 0 (e.g. unseekable cache where it
doesn't make sense to keep old packets).
We need to make sure that the forward buffer can always append, as long
as the forward buffer doesn't exceed the set size, even if the back
buffer "borrows" free space from it. For this reason, always keep 1 byte
free, which is enough to allow it to read a new packet. Also, it's now
necessary to call pruning when adding a packet, to get back "borrowed"
space that may need to be free'd up after a packet has been added.
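A sketch of that accounting, with illustrative names (not the actual
demux.c variables): the back buffer may borrow the forward buffer's unused
space, minus the 1 byte kept free so forward reads can always proceed.

    #include <stdbool.h>
    #include <stddef.h>

    struct cache_limits {
        size_t max_bytes_fw;   // configured forward limit
        size_t max_bytes_bw;   // configured backward limit (0 => keep nothing)
        size_t fw_bytes;       // current forward buffer usage
        size_t bw_bytes;       // current backward buffer usage
    };

    // Space the back buffer may use: its own limit plus whatever the forward
    // buffer isn't using right now, keeping 1 byte so reads never stall.
    static size_t bw_allowed(const struct cache_limits *c)
    {
        if (c->max_bytes_bw == 0)
            return 0; // e.g. unseekable cache: old packets are worthless
        size_t fw_free = c->max_bytes_fw > c->fw_bytes
                            ? c->max_bytes_fw - c->fw_bytes - 1 : 0;
        return c->max_bytes_bw + fw_free;
    }

    // Must be re-checked after appending a packet too, to reclaim space the
    // back buffer "borrowed" before the forward buffer needed it.
    static bool should_prune_bw(const struct cache_limits *c)
    {
        return c->bw_bytes > bw_allowed(c);
    }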
I refrained from doing the same for forward caching (making forward
cache use unused backward cache). This would work, but has a
disadvantage. Assume playback starts paused. Demuxing will stop once the
total allowed cache size is reached. When unpausing, the
forward buffer will slowly move to the back buffer. That alone will not
change the total buffer size, so demuxing remains stopped. Playback
would need to pass over data of the size of the back buffer until
demuxing resumes; I consider this unacceptable. Live playback would break
(or rather, would fail to resume in unintuitive ways), even normal streaming
may break if the server invalidates the URL due to inactivity. As an
alternative implementation, you could prune the back buffer immediately,
so the forward buffer can grow, but then the back buffer would never
grow. Also makes no sense.
As far as the user interface is concerned, the idea is that the limits
on their own aren't really meaningful, the purpose is merely to vaguely
restrict the cache memory usage. There could be just a single option to
set the total allowed memory usage, but the separate backward cache
controls the default ratio of backward/forward cache sizes. From that
perspective, it doesn't matter if the backward cache uses more of the
total buffer than assigned, as long as the forward buffer is complete.
The last_eof field is the last known EOF state from the underlying
demuxer. Normally, seeks reset it, because obviously if you seek back into
the middle of the file, you don't want last_eof to have a "wrong" value
for a short time window (until a packet is read, which would reset the
field to its correct value).
This reset shouldn't happen during cache seeks, because those don't touch
the underlying demuxer state.
At first, I made this change because some other work in progress
required it. It turned out that it was unnecessary, but keep the change
anyway, since it's still correct and makes the logic cleaner.
m_geometry_apply() will read and modify the dummy variable. It's not
actually used for anything, but valgrind will still warn about
uninitialized data. I'm not sure whether this was UB, but in any case
it's annoying when running valgrind.
Determining how much memory something uses is very hard, especially in
high level code (yes we call code using malloc high level). There's no
way to get an exact amount, especially since the malloc arena is shared
with the entire process anyway. So the demuxer packet cache tries to get
by with an estimate using a number of rough guesses.
It seems this wasn't quite good. In some ways, it was too optimistic, in
others it seemed to account for too much data. Try to get it closer to
what malloc and ta probably do. In particular, talloc adds some
significant overhead (using talloc for mass data was a mistake, and it's
even my fault). The result appears to match better with measured memory
usage. This is still extremely dependent on malloc implementation and so
on.
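A hypothetical version of such an estimate (the constants and names are
made up for illustration; the point is accounting for allocator and talloc
header overhead per allocation, not being exact):

    #include <stddef.h>

    // Rough per-allocation cost model: every malloc'ed block carries
    // allocator bookkeeping and rounding; talloc adds its own header on top.
    #define ALLOC_OVERHEAD   16   // guessed malloc header + rounding
    #define TALLOC_OVERHEAD  64   // guessed talloc header (it is not small)

    struct fake_packet { size_t data_len; };

    static size_t estimate_packet_size(const struct fake_packet *pkt)
    {
        size_t size = 0;
        size += sizeof(struct fake_packet) + ALLOC_OVERHEAD + TALLOC_OVERHEAD;
        size += pkt->data_len + ALLOC_OVERHEAD;   // the packet payload itself
        return size;
    }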
The effect is that you may need to adjust the demuxer cache limits to
cache as much data as it did before this commit. In any case, seems to
be better for me.
If the disk cache is used, the AVPacket is not used anymore and is
completely deallocated when the packet is written to disk. As a minor
bug, the AVPacket allocation itself was not freed (although it wasn't a
memory leak, since talloc still automatically freed it when the entire
demux_packet was freed). For very large caches, this could easily add up
to over a hundred MB, so actually free the unneeded allocation.
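In sketch form, assuming FFmpeg's av_packet_free() and a hypothetical
demux_packet field layout (not the exact mpv code):

    #include <libavcodec/avcodec.h>

    struct demux_packet_like {
        AVPacket *avpacket;   // illustrative field name
    };

    // After the payload has been written to the disk cache, neither the
    // packet data nor the AVPacket allocation itself is needed anymore.
    static void packet_written_to_disk(struct demux_packet_like *dp)
    {
        av_packet_free(&dp->avpacket);   // frees data + struct, NULLs pointer
    }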
--hwdec=auto-copy was preferring vdpau over vaapi. In the HEVC 10 bit
case, this also led to hardware decoding not being enabled. (Probably
because the probing can't start over after enabling hw decoding fails at
runtime, or something like that.)
Possible that this subtly breaks on some setups. You can't always win.
During probing on a system with AMD GPU, mpv used to output the
following messages if hardware decoding was enabled:
[ffmpeg] AVHWFramesContext: Failed to create surface: 2 (resource
allocation failed).
[ffmpeg] AVHWFramesContext: Unable to allocate a surface from internal
buffer pool.
This commit removes the messages, with hopefully no other side effects.
Long explanations follow, better don't read them, it's just tedious
drivel about the details. People should learn to write concise commit
messages, not drone on and on endlessly all while they have no fucking
point.
The code probes supported hardware pixel formats, and checks whether they
can be mapped as textures. av_hwdevice_get_hwframe_constraints() returns
a list of hardware pixel formats in the valid_sw_formats field (the "sw"
means software, but they're still hardware pixel formats, makes sense).
This contained the format yuv420p, even though this is not a valid
hardware format. Trying to create a surface of this type results in VA
surface creation failure, upon which FFmpeg prints the error messages
above. We'd be fine with this, except FFmpeg has a global log callback,
and there's no way to suppress these messages without creating other
issues.
It turns out that FFmpeg's vaapi implementation returns all formats from
vaQueryImageFormats() if no "hwconfig" is provided. This list includes
yuv420p, which is probably supported for surface upload/download, but
not as native format. Following FFmpeg's logic, it should not appear in
the valid_sw_formats list, because formats for transfers are returned by
another roundabout API.
Idiotically, there doesn't seem to be any vaapi call that determines
whether a format is a valid surface format. All mechanisms to do this
are bound to a VAConfigID (= video codec or video processor), all while
the actual surface creation API strangely does not take a VAConfigID (a
big WTF).
Also, calling the vaCreateSurfaces() API ourselves for probing is out of
the question, because that function is utterly and idiotically complex.
Look at the FFmpeg code and how much effort it requires to setup a
complete set of attributes - we can't duplicate this.
So the only way left to do this is the most idiotic and tedious way:
enumerating all VAProfile (and VAEntrypoints) to create all possible
VAConfigIDs. Each of the VAConfigIDs is associated with a list of
formats, which FFmpeg can return (by passing the ID along with the
"hwconfig"), and which is probed separately.
Note that VAConfigID actually refers to a dynamic instance of something,
and creating a VAConfigID takes not only the VAProfile and the
VAEntrypoint, but also an arbitrary attribute array. In theory, this
means our attempt to get to know all possible configurations cannot
work, but in practice this attribute array seems to be pointless for
decoding and video processing, and FFmpeg doesn't use it (though the
encoding path does use it). This probably just makes it _barely_ OK to
do it this way.
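Roughly, the enumeration looks like this (a sketch, not mpv's actual code;
error handling and the actual texture-mapping probe are left out):

    #include <va/va.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_vaapi.h>
    #include <libavutil/mem.h>

    // Probe hardware surface formats per VAConfigID, since there is no
    // global "is this a valid surface format" query in vaapi.
    static void probe_formats(VADisplay dpy, AVBufferRef *hw_device_ref)
    {
        int num_profiles = 0;
        VAProfile *profiles =
            av_malloc_array(vaMaxNumProfiles(dpy), sizeof(*profiles));
        if (!profiles ||
            vaQueryConfigProfiles(dpy, profiles, &num_profiles) != VA_STATUS_SUCCESS)
            goto done;

        for (int p = 0; p < num_profiles; p++) {
            int num_ep = 0;
            VAEntrypoint *eps =
                av_malloc_array(vaMaxNumEntrypoints(dpy), sizeof(*eps));
            if (!eps ||
                vaQueryConfigEntrypoints(dpy, profiles[p], eps, &num_ep) != VA_STATUS_SUCCESS) {
                av_free(eps);
                continue;
            }
            for (int e = 0; e < num_ep; e++) {
                VAConfigID config = VA_INVALID_ID;
                if (vaCreateConfig(dpy, profiles[p], eps[e], NULL, 0,
                                   &config) != VA_STATUS_SUCCESS)
                    continue;

                // FFmpeg wants the config ID wrapped in a malloc'ed struct,
                // just to pass a single integer.
                AVVAAPIHWConfig *hwconfig = av_hwdevice_hwconfig_alloc(hw_device_ref);
                if (hwconfig) {
                    hwconfig->config_id = config;
                    AVHWFramesConstraints *cons =
                        av_hwdevice_get_hwframe_constraints(hw_device_ref, hwconfig);
                    // valid_sw_formats may be NULL; otherwise it's an
                    // AV_PIX_FMT_NONE-terminated list to probe for mapping.
                    if (cons && cons->valid_sw_formats) {
                        for (int i = 0; cons->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
                            /* try mapping cons->valid_sw_formats[i] here */
                        }
                    }
                    av_hwframe_constraints_free(&cons);
                    av_free(hwconfig);
                }
                vaDestroyConfig(dpy, config);
            }
            av_free(eps);
        }
    done:
        av_free(profiles);
    }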
Could we discard all this probing shit, and somehow do it another way?
Probably not. The EGL API for mapping surfaces doesn't even seem to
provide a way to enumerate supported formats, we may not even know
whether DRM/dmabuf interop is actually supported (AFAIR the EGL
extensions are present even if they don't work), nor do we know whether
the VAAPI driver supports this interop (not sure). So actually trying is
the only way.
Further, mpv initializes the decoder on another thread, where you
can't just access OpenGL state. This suckage is mostly to be blamed on
OpenGL itself and its crazy thread boundedness. In theory, this could be
done anyway (see how software decoding "direct rendering" tries to get
around this). But to make it worse, the decoder never cares about the
list of supported formats determined by this code; instead,
f_autoconvert.c tries to deal with it and insert a video processor
(well, good luck with this crap, I bet it doesn't even work). So this
whole endeavor might be pointless, other than the fact that failed
probing can disable use of vaapi (which is correct and necessary). But
if you have a shovel, you don't use it to smash the flat end on the heap
of shit that's piled up before you, or do you?
While this method probably works, it's still orgasmically tedious. It
was tedious before: we had to create a real surface, create a GL
texture, map the surface with it, then destroy everything again. But the
added code is tedious on its own. Highlights include the need to malloc
a FFmpeg struct just to pass a single damn integer, the need to
enumerate "entrypoints" for each VA profile, even though all profiles
have exactly 1 entrypoint, and the kind of obnoxious way how vaapi
requires you to preallocate arrays for returned things, even though they could
for example reasonably be returned as immutable arrays or have some
other simpler API.
The main grand fuckup is of course that vaapi requires a VAConfigID to
query surface properties, but not for creating surfaces. This
awkwardness even affected the FFmpeg API design, which has a "hwconfig"
concept that is only used by vaapi (vaapi is only 1 out of 10 hardware
decoding APIs supported by the FFmpeg hwcontext stuff). Maybe I'm just
missing something. It's as if vaapi required setting radioactive shit on
fire. Look how clean the native D3D11 code is instead. (Even the ANGLE
code manages to avoid being this fucked up. Or the VDPAU code, despite
supporting multiple mapping methods.)
Another only barely related change is that the valid_sw_formats field
can be NULL, and the API explicitly documents this. Technically, the mpv
code was buggy for not checking this, although so far the FFmpeg
implementation could not actually return NULL as long as we passed NULL
for the hwconfig parameter.
No functional changes, just preparation for the next commit. Split the
probing into multiple functions. Prepare for the yet unused possibility
to pass AVVAAPIHWConfig to probing. try_format_pixfmt() now assumes it
can be called multiple times with the same format, so it filters out
duplicates.
The format probing is now something like O(n^2) for n formats, but n
will most likely remain something under 50 or so.
These macros explicitly honor MP_NOPTS_VALUE, instead of implicitly
relying on the fact that this value is the lowest allowed value.
In addition, this changes one case to use MP_NOPTS_VALUE instead of
INFINITY, also a cosmetic change.
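A sketch of what such macros can look like (the general idea, not
necessarily the exact definitions added): if one side is MP_NOPTS_VALUE,
the other side wins, instead of relying on the magic value sorting first.

    #define MP_NOPTS_VALUE (-1e300)   // "no timestamp" marker

    #define MPMIN(a, b) ((a) < (b) ? (a) : (b))
    #define MPMAX(a, b) ((a) > (b) ? (a) : (b))

    // Explicitly treat MP_NOPTS_VALUE as "unset": fall back to the other
    // operand instead of letting the magic value win the comparison.
    #define MP_PTS_OR_DEF(p, def) ((p) == MP_NOPTS_VALUE ? (def) : (p))
    #define MP_PTS_MIN(a, b) MPMIN(MP_PTS_OR_DEF(a, b), MP_PTS_OR_DEF(b, a))
    #define MP_PTS_MAX(a, b) MPMAX(MP_PTS_OR_DEF(a, b), MP_PTS_OR_DEF(b, a))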
Make most of the demuxer options runtime-changeable. This includes the
cache options and stream recording. The manpage documents some of the
possibly weird issues related to this.
In particular, the disk cache isn't shuffled around if the setting
changes at runtime.
Or at least I hope it's theoretical. This function is supposed to unset
any old listeners for the given cache, and the code works only if
there's at most 1. Add a defensive break to avoid UB if there's more than
one, and add an assert() to check the assumption that there's at most
one.
The added comment is unrelated.
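A sketch of the defensive pattern (the listener array and its fields are
illustrative, not the actual demux.c names):

    #include <assert.h>

    struct listener { void *cache; /* ... */ };

    struct state {
        struct listener *listeners;
        int num_listeners;
    };

    // Unset any old listener registered for this cache. The design assumes
    // there is at most one; the break avoids walking an array that was just
    // mutated, and the assert verifies the at-most-one assumption.
    static void remove_listener(struct state *st, void *cache)
    {
        for (int n = 0; n < st->num_listeners; n++) {
            if (st->listeners[n].cache == cache) {
                st->listeners[n] = st->listeners[st->num_listeners - 1];
                st->num_listeners -= 1;
                break; // defensive: don't keep iterating the mutated array
            }
        }
        for (int n = 0; n < st->num_listeners; n++)
            assert(st->listeners[n].cache != cache); // at most 1 existed
    }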
Although this is not useful in general, it makes --stream-record work
with a certain video streaming service by a large dystopian company.
In the general case, this fails because normal muxing can, quite
obviously, not handle the segmented metadata in the packets. (There
isn't even a file format which could handle these, except possibly mp4.)
On the other hand, ytdl merely uses timeline/EDL to emulate DASH
streaming (unfortunately), which does not use the segmented stuff, and
stream recording will actually work.
If this is used, the runtime expects that wmain() instead of main() is
defined. This caused me severe problems in a certain now irrelevant
case. I think it's a good idea to avoid this special case.
We can just use main() and call GetCommandLineW() instead. This function
returns a single string, so use CommandLineToArgvW() to split it, and
hope it has the same semantics. Should this ever return NULL, hope that
it leaves argc at 0.
Untested, I think.
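A sketch of the main()-based approach, assuming CommandLineToArgvW() splits
the command line with the same semantics wmain() would have provided (link
against shell32):

    #include <windows.h>
    #include <shellapi.h>

    int main(int argc_, char **argv_)
    {
        // Ignore the ANSI argv entirely; re-derive arguments as UTF-16.
        (void)argc_;
        (void)argv_;

        int argc = 0;
        // GetCommandLineW() returns the whole command line as one wide
        // string; CommandLineToArgvW() splits it into an argv-style array.
        wchar_t **argv = CommandLineToArgvW(GetCommandLineW(), &argc);
        if (!argv)
            argc = 0; // hope a split failure behaves like "no arguments"

        // ... convert argv[0..argc-1] to UTF-8 and call the real entry point ...

        if (argv)
            LocalFree(argv);
        return 0;
    }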
_LARGEFILE_SOURCE and _LARGEFILE64_SOURCE are not required for 64 bit
off_t, only _FILE_OFFSET_BITS.
See somewhere on: https://wiki.musl-libc.org/faq.html
I didn't test this anywhere except 64 bit Linux. It's probably a good
idea to test on Windows and all Android versions.
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.
It seems much more promising to have something like this:
https://github.com/vapoursynth/vapoursynth/issues/386
This would solve the build distribution problem by relaxing the
Python dependency, and/or allow a Lua backend to be included without
pain.
This filter wasn't referenced anywhere and thus was dead code. It should
have been in the audio filter list in user_filters.c. This was intended
as a compatibility wrapper (to avoid breaking old command lines and config
files), and has no real use. Apparently I forgot to add it to the filter
list (did I even test this shit?), and so it was rotting around for 1.5
years doing nothing (just like myself).
Note that users can just use the libavfilter provided filter to force
resampling, just that it has a different name and different options.
There's also af_format to force inserting auto conversion through the
internal f_swsresample filter.
Useless garbage.
This was once added to test whether vdpau presentation feedback could be
used. Results were always unsatisfactory, and now vdpau is dead.
The readahead time should be interesting for latency vs. underruns
(which idiot protocols like HLS suffer from).
The total byte usage is less interesting than I hoped; maybe the
frequency at which it samples should be reduced. (Kind of dumb - you
want high frequency for the readahead field, but much lower for byte
usage.)
Of course, the code was copy&pasted from the DS ratio/jitter stuff. Some
of the choices may not make any sense for the new code.
It prints "Unexpected end of file (no clusters found)" when opening a
webm init fragment. The warning is correct, but unwanted in this case.
Add tons of kludges to avoid it.
(Actually it prints that twice, for audio and video each.)
Also, suppress another warning about a seek head entry that points
exactly to the end of the file. This is a MATROSKA_ID_CUES, which is
harmless, and, very strangely, doesn't point at any cues when you
concatenate the init fragment with a media fragment. No idea what that
crap is supposed to be.
Retarded webshit streaming protocols (well, DASH) chop a stream into
small fragments, and move unchanging header parts to an "init" fragment
to save some bytes (in the case at hand about 300 bytes for each
fragment that is 100KB-200KB, sure was worth it, fucking idiots).
Since mpv uses an even more retarded hack to inefficiently emulate DASH
through EDL, it opens a new demuxer for every fragment. Thus the
fragment needs to be virtually concatenated with the init fragment. (To
be fair, I'm not sure whether the alternative, reusing the demuxer and
letting it see a stream of byte-wise concatenated fragments, would
actually be saner.)
demux_lavf.c contained a hack for this. Unfortunately, a certain shitty
streaming site by an evil company that will bestow dystopia upon us soon
enough, sometimes serves webm based DASH instead of the expected mp4
DASH. And for some reason, libavformat's mkv demuxer can't handle the
init fragment, or rejects it. Since I'd rather eat mushrooms grown in
Chernobyl than debug, hack, or (god no) contribute to FFmpeg, and since
Chernobyl is so far away, make it work
with our builtin mkv demuxer instead.
This is not hard. We just need to copy the hack in demux_lavf.c to
demux_mkv.c. Since I'm not _that_ much of a dumbfuck to actually do
this, remove the shitty gross demux_lavf.c hack, and replace it by a
slightly less bad generic implementation (stream_concat.c from the
previous commit), and use it on all demuxers. Although this requires
much more code, this frees demux_lavf.c from a hack, and doesn't require
adding a duplicated one to demux_mkv.c, so to the naive eye this seems
to be a much better outcome.
Regarding the code, for some reason stream_memory_open() is never meant
to fail, while stream_concat_open() can fail in extremely obscure
situations (though currently not in this case), but we handle its failure
anyway.
Yep.
Timeline (demux_timeline, for EDL and mkv ordered chapters) are a mess,
because it's the only nested demuxer case. Part of the mess comes from
shared struct stream pointers. This makes no sense, because the wrapper
(demux_timeline) doesn't have any business setting it.
Try to lessen it by not passing down streams. Instead, pass down NULL.
This prevents unintended interference, and tightens the ownership rules.
Now a demuxer always owns its stream.
On the other hand, demuxer->stream can now be NULL. This was never the
case before, and consequently there will be new bugs. At least they will
be spotted, because they've been bugs before.
struct stream is also used to access stream properties (such as whether
something is considered a network stream). Most of these have been
mirrored in struct demuxer (because the frontend has been forbidden to
access struct stream because of threading). But during initialization the
stream was still used, so introduce an awkward struct parent_stream_info, which
unifies these.
Commit e0419fb181 changed demux_is_network_cached() to use
demuxer->stream->streaming instead of demuxer->is_network. To enable
timeline stuff to use the cache anyway, change it so that both flags can
contribute to it. The stream NULL-check is obviously due to changes in
this commit.
Instead of having to rely on the protocol matching, make a function that
creates a stream from a stream_info_t directly. Instead of going through
a weird indirection with STREAM_CTRL, add a direct argument for non-text
arguments to the open callback. Instead of creating a weird dummy
mpv_global, just pass an existing one from all callers. (The latter one
is just an artifact from the past, where mpv_global wasn't available
everywhere.)
Actually I just wanted a function that creates a stream without any of
that bullshit. This goal was slightly missed, since you still need this
heavy "constructor" just to setup a shitty struct with some shitty
callbacks.
Raise it from 8KB to 512KB.
Do this because ytdl_hook.lua generated a 40KB EDL file (from 80KB
youtube-dl JSON output), and putting it into a .m3u file for easier
debugging failed due to the size limit.
I think I repeated this inline in some places (or maybe not), and some
experimental but discarded code used it. Add it anyway, maybe it'll be
useful. Or it'll give someone a chance to get a contribution into mpv by
removing an unused macro.
The previous commit broke audio playback (maybe this is what 4. was
about?). But it wasn't the fault of the commit; it just exposed
pre-existing issues.
If the packet queue search can't get all packets, it checked
queue->is_bof to see whether there could be previous packets. But
obviously, is_bof can be set, even if the search start packet wasn't the
first one.
This was especially observable with audio, because audio by default
needs preroll packets, and plays artifacts if they're missing.
Fix by using the BOF playback condition for this purpose too.
In theory, backward demuxing does way too much work by doing a full
cache seek every time you need to step back through a packet queue. In
theory, it would be exceedingly more efficient to just iterate backwards
through the queue, but this is not possible because I'm too stingy to
add 8 bytes per packet for backlinks. (In theory, you could probably
come up with some sort of deque, that'd allow efficient iteration into
any direction, but other requirements make this tricky, and I'm
currently too dumb/lazy to do this. For example, the queue can grow to
millions of elements, all while packet pointers need to stay valid.)
Another possibility is to "locally" seek the queue. This still has less
overhead than a full seek.
Also, it just so happens that, as a side effect, this avoids performing
range merging, which commit f4b0e7e942 "broke". That wasn't a bug at
all, but since range joining is relatively slow, avoiding it is good.
This is really just a coincidental side effect, I'm not even quite sure
why it happens this way.
There are 4 ugly things about this change:
1. To get a keyframe "before" a certain one, we recompute the target
PTS, and then subtract 0.001 as an arbitrary number to "fudge" it. This
isn't the first place where this is done, and hey, it wasn't my damn
idea that MPlayer should use floats for timestamps. (At first, it even
used 32 bit timestamps.)
2. This is the first time reader_head is reset to an earlier position
outside of the seek code. This might cause conceptual problems since
this code is now "duplicated".
3. In theory, find_backward_restart_pos() needs to be run again after
the backstep. Normally, the seek code calls it explicitly. We could call
it right in the new code, but then the damn function would be recursive.
We could shuffle the code around to make it a loop, but even then
there'd be an off chance of running into an unexpected corner case (aka
subtle bug), where it would loop forever. To avoid refactoring the code
and having to think too hard about it, make it deferred - add some new
state and the check_backward_seek() function for this. Great, even more
subtle mutable state for this backwards shit.
4. I forgot this one, but I can assure you, it's bad.
Without doubt someone will have to clean up this slightly in the future
(or rip it out), but it won't be me.
This code tries to determine the "current" position, which is used as
base for the seek target when it needs to seek back more (the point is
to prevent seeking back too far). But compute_keyframe_times() almost
computes the same thing, so use that. Unfortunately, this needs a forward
declaration.
("Almost", because it differs in some details that should not really
matter.)