4e5d996c3a added this as part of a series
of patches written to avoid wasteful sub redraws when playing a still
image with subs. The is_animated special case was specifically for ASS
subtitles that have animations/effects and would need repeated redraws
in the still image case. This check was done unconditionally for all ASS
subtitles, but for very big ASS subtitles, this text parsing can get a
bit expensive.
Because this function call is only ever needed for the weird edge case
of ASS subtitles over a still image, some additional logic can be added
to avoid calling is_animated in the vast majority of cases. The animated
field in demux_packet can be changed to a tristate instead where -1
indicates "unknown" (the default state). In update_subtitle, we can look
at the current state of the video tracks and decide whether or not it is
necessary to perform is_animated and pass that knowledge to sd_ass
before any subtitle packets are decoded and thus save us from doing this
potentially expensive call.
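Roughly, the idea can be sketched like this (simplified, hypothetical names; the real field lives in demux_packet and the decision is made in update_subtitle/sd_ass):

    #include <stdbool.h>

    // Simplified sketch of the tristate, not the actual mpv structs.
    enum { ANIM_UNKNOWN = -1, ANIM_NO = 0, ANIM_YES = 1 };

    struct packet {
        int animated;            // defaults to ANIM_UNKNOWN
        const char *event_text;  // ASS event payload
    };

    // Only pay for the (potentially expensive) text scan when the caller
    // decided it matters, e.g. when playing a still image.
    static bool packet_is_animated(struct packet *pkt, bool check_needed,
                                   bool (*is_animated)(const char *))
    {
        if (!check_needed)
            return false;
        if (pkt->animated == ANIM_UNKNOWN)
            pkt->animated = is_animated(pkt->event_text) ? ANIM_YES : ANIM_NO;
        return pkt->animated == ANIM_YES;
    }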
92a9f11a0b added locking for dec_sub.
At that time, because the lock was exposed to the outside world,
a recursive mutex was used. However, this is no longer true after
e9e883e3b2, when the public locking
functions were removed. This means that the lock is now private.
Unlike input.c, dec_sub already enforces a strict call hierarchy, so
combined with the aforementioned change, the lock is only ever taken
once and never recursively. Thus the lock can be converted to
a normal mutex.
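The mechanical part of the change is small; with plain pthreads (mpv uses its own wrappers) it amounts to something like:

    #include <pthread.h>

    static pthread_mutex_t sub_lock;

    static void init_sub_lock(void)
    {
        // Previously a PTHREAD_MUTEX_RECURSIVE mutex was needed, because
        // the lock was public and could be re-acquired through the API.
        // Now it is private and never taken recursively, so a default
        // (non-recursive) mutex suffices.
        pthread_mutex_init(&sub_lock, NULL);
    }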
The previous commits optimized sub redrawing on still images/terminal so
mpv wouldn't redraw so much. There is a gap, though: it assumes only
static subtitles. Since ASS can be animated, those types of subtitles
will always need redraws so we need to build in specific detection for
this. We need to build a whitelist of events in ASS that are considered
animations and then flag the packet. Additionally, there's a bunch of
annoying bookkeeping that has to be done since packets can be dropped on
seeks and so on.
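A sketch of what such a whitelist check can look like (illustrative only; the actual tag list used in sd_ass may differ):

    #include <stdbool.h>
    #include <string.h>

    // Scan an ASS event's text for override tags that imply animation.
    static bool event_is_animated(const char *text)
    {
        static const char *const tags[] = {"\\t(", "\\move(", "\\fad(", "\\fade("};
        for (size_t i = 0; i < sizeof(tags) / sizeof(tags[0]); i++) {
            if (strstr(text, tags[i]))
                return true;
        }
        return false;
    }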
This only affects two special cases: printing subtitles to the terminal
and printing subtitles on a still picture. Previously, mpv was very dumb
here and spammed this logic on every single loop. For terminal
subtitles, this isn't as big of a deal, but for the image case this is
pretty bad. The entire VO constantly redrew even when there was no need
to which can be very expensive depending on user settings.
Instead, let's rework sub_read_packets so that it also tells us whether
or not the subtitle packets update in some way in addition to telling us
whether or not to read more. Since we cache all packets thanks to the
previous commit, we can leverage this information to make a guess
whether or not the current subtitle packet is supposed to be visible on
the screen. Because the redraw now only happens when it is needed, the
mp_set_timeout_hack can be removed.
Previously, mpv only saved a strict limit of two packets while decoding
subtitles, but this was a bit incomplete. Firstly, it's certainly possible
for there to be more than two subtitles visible at once. And also, it
did not take into account preloading. Rework this mechanism so that it
is a growable array that can store as many packets as we want. Note that
in this commit, the packets are only ever discarded on reset or destroy,
so in theory it could grow forever. Some discarding logic will be added
in the next commit since it is inherently tied to other things.
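The mechanism is just a growable array; mpv uses its talloc array helpers for this, but in plain C the idea is roughly:

    #include <stdlib.h>

    struct packet;

    struct packet_cache {
        struct packet **packets;
        int num_packets;
        int capacity;
    };

    // Append with doubling growth; discarding is handled separately.
    static void cache_append(struct packet_cache *c, struct packet *pkt)
    {
        if (c->num_packets == c->capacity) {
            c->capacity = c->capacity ? c->capacity * 2 : 8;
            c->packets = realloc(c->packets, c->capacity * sizeof(*c->packets));
        }
        c->packets[c->num_packets++] = pkt;
    }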
Over the years, we've accumulated several secondary subtitle related
options and properties, but the implementation was not really consistent
and it wasn't clear what the right process for adding more should be. So
to make things nicer, let's refactor all of the subtitle options with
secondary variants (sub-delay, sub-pos, and sub-visibility) and split
them off to a new, separate struct. All of the underlying values are
stored in an array instead for simplicity. Additionally, the
implementation of some secondary-sub-* properties were slightly changed
so there would be less redundancy.
Add --secondary-sub-delay option and decouple --sub-delay from secondary
subtitles. This produces desirable behavior in most cases as secondary
and primary subtitles tracks tend to be timed independently of one
another.
This feature is implemented by turning the sub_delay field in
mp_subtitle_opts into an array of 2 floats. From here the track index is
either passed around or derived when sub_delay is needed. There are some
cases in dec_sub.c where it is possible for dec_sub.order (equivalent to
track index) to be -1. In these cases, sub_delay is inferred as 0.
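In other words, something along these lines (simplified sketch, not the actual option structs):

    struct sub_opts {
        float sub_delay[2];   // [0] = primary subtitles, [1] = secondary
    };

    // dec_sub.order doubles as the track index; -1 means "no track",
    // in which case the delay is treated as 0.
    static float get_sub_delay(const struct sub_opts *opts, int order)
    {
        return order < 0 ? 0.0f : opts->sub_delay[order];
    }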
Since 062104d16e, we started saving cached
packets for subtitles. However, these can point to the same address as
what is stored in sub->new_segment. When a segment is updated, the
packet is potentially freed. Later during decoding, this can lead to a
double free, since the cached packet will naturally try to free itself
on update. Fix this by simply always making sub->new_segment a full
copy of the packet so its lifetime doesn't have to be tied to the cached
packet stuff.
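Conceptually the fix is just deep-copying instead of aliasing the cached packet; a generic sketch (not the exact mpv packet API):

    #include <stdlib.h>
    #include <string.h>

    struct packet {
        unsigned char *data;
        size_t size;
    };

    // new_segment gets its own copy, so freeing the cached packet later
    // can never free the same buffer twice.
    static struct packet *packet_copy(const struct packet *src)
    {
        struct packet *dst = malloc(sizeof(*dst));
        dst->size = src->size;
        dst->data = malloc(src->size);
        memcpy(dst->data, src->data, src->size);
        return dst;
    }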
UPDATE_SUB_HARD causes all of the ass objects to reset in order to apply
the new style. UPDATE_SUB_FILT doesn't actually reset the sd, but it
should in order to update the actual filters so that was added here.
Doing this causes the current subtitle to be dropped. In the paused
case, this coincidentally works because command.c forces a video refresh
which then reloads the subtitle essentially. But while playing, the
subtitle will be dropped and you won't get anything until the next one
appears.
Instead of using video refreshes, what we can do is just always save the
last two subtitle packets in a cache and redecode them if needed. This
is much easier and also allows us to get rid of all the video refresh
logic in command.c. Fixes #12386.
The stream selection state wasn't improved. I didn't realize this messed
with caches. All in all, just not a good idea. Back to drawing board I
guess.
This reverts commit f40bbfec4f.
This replaces the previous commit and makes more sense. The internal
demux marked tracks as eager depending on their type, and for subtitles
it would always read them lazily unless there happened to be no
available av stream. However, we want the sub stream to be eager if the
player is paused. The existing subtitle is still preserved on the
screen, but if the user changes tracks that's when the problem occurs.
So to handle this case, propagate the mpctx->paused down to the stream
selection logic. This modifies both demuxer_refresh_track and
demuxer_select_track to take that boolean value. A few other parts of
the player use this, but we can just assume false there (no change in
behavior from before) since they should never be related to subtitles.
The core player code is aware of its own state naturally, and can always
pass the appropriate value, so go ahead and do so. When we change the
pause state, a refresh seek is done on all existing subtitle tracks to
make sure their eager state is the appropriate value (i.e. so it's not
still set to eager after a pause and a track switch). Slightly invasive
change, but it works with the existing logic instead of going around it
so ultimately it should be a better approach. We can additionally remove
the old force boolean from sub_read_packets since it is no longer
needed.
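The shape of the interface change, approximately (the exact signatures in demux.h may differ slightly):

    #include <stdbool.h>

    struct demuxer;
    struct sh_stream;

    // The extra flag lets the demuxer mark a newly selected subtitle
    // stream as eager while the player is paused; unrelated callers
    // simply pass false.
    void demuxer_select_track(struct demuxer *d, struct sh_stream *stream,
                              double ref_pts, bool selected, bool paused);
    void demuxer_refresh_track(struct demuxer *d, struct sh_stream *stream,
                               double ref_pts, bool paused);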
Actually, I thought of a better way of handling this shortly after
merging this. Revert it and redo it in the next commit.
This reverts commit c2c157ebec.
a323dfae42 almost fixed subtitle tracks
disappearing when paused but it actually missed one part: the behavior
of demux_read_packet_async_until. It's a bit unintuitive, but for
subtitle streams, that function would only return the very first packet
regardless of whatever pts you pass to it. So the previous commit worked
on the very first subtitle, but not actually any of the others (oops).
This is because subtitle streams are never marked as eager and thus never
actually read farther ahead. While the video is playing, this is OK, but
if we're paused and switching subtitle tracks then the stream should be
eagerly read. Luckily, the logic is already there in the function for
this. All we have to do is add an extra argument to
demux_read_packet_async_until to force the stream to be read eagerly and
then it just works. Be sure to unset the eager flag when we're done.
Actually fixes the bug for real this time.
First of all, this never worked. Or if it ever did, it was in some
select few scenarios. c9474dc9ed is what
originally added support for the auto choice. However, that commit
worked by propagating a value to a fake option used internally. This
shouldn't have ever worked because the underlying m_config_cache was
never updated so the value shouldn't have been preserved when accessed
in sd_lavc. And indeed with some testing, the value there is always 0
unsurprisingly.
This was later rewritten in ba7cc07106
along with a lot of other sub changes, but with that, it was still
mostly broken. The reason is that one of the key conditions for hitting
this logic (prefer_forced) required `--no-subs-with-matching-audio`
to be set. If the audio language matches the subtitle language (the
requirement also excludes forced subs), the option makes no subtitle
selection in the first place so pick->forced_only_def is not set to true
and nothing even happens. Another way around this would be to attempt to
change your OS language (like with the LANG environment variable) so
that the subtitle track gets selected, but then audio_matches mistakenly
becomes false because it compares the OS language to the audio language,
which then makes prefer_forced 0, so nothing happens. I don't think
there's a scenario where pick->forced_only_def is actually set to true
(thus meaning `auto` is useless), but maybe someone could contrive
something very strange. Regardless, it's definitely not something even
remotely common.
fbe8f99194 changed track selection again
but didn't consider this particular case. The net result is that DVD/PGS
subs become equivalent to --sub-forced-only being yes, so this is a change
in behavior and probably not a good one. Note that I wasn't able to
actually observe any difference in a PGS sample. It still displayed
subtitles fine but that sample probably didn't have the right flags to
hit the sub-forced-only logic.
Anyways, the auto feature is extremely questionable at best and in my
view, not actually worth it. It is meant to be used with
`--no-subs-with-matching-audio` to display forced pictures in subtitle
tracks that are not marked as forced, but that contradicts that
particular option's purpose and description in the manual (secretly
selecting a track under certain conditions even though it says not to).
Instead of trying to shove all this logic into select_default_track
which is already insanely complicated as it is, recognize that this is a
trivial lua script. If you absolutely want to turn --sub-forced-only on
under these certain conditions (DVD/PGS subtitles, matching audio and
subtitle languages, etc.), just look at the current-tracks property and
do your thing. The very, very niche behavior that this option tried to
accomplish basically never worked, no user even knows what this option
does, and, well, it's just not worth supporting in core mpv code. Drop
all this code for sanity's sake and change --sub-forced-only back to a
bool.
Internal subtitles were not shown when switching between tracks while
mpv was paused. The reason for this is simply because the demuxer data
isn't available yet when the track switch happens. Fixing it is
basically just retrying until the packet is actually available when the
player is paused. Fixes#8311.
Upon an option update with an UPDATE_SUB_HARD flag,
the ass_track that stores all the decoded
subtitle packets/events is destroyed and recreated, which means
the packets need to be read and decoded again to refill
the ass_track. This caused issues (no subs displayed) in 2 cases:
1. external sub files
Previously, external sub files were read and decoded only
once when loaded. Since this meant all decoded events were lost
forever when recreating the ass_track, we need to change this
and trigger a new preload during sub reinits.
2. converted subs (non-ASS text subs like srt)
For converted subs, we maintain a list of previously
seen packets to avoid decoding and adding duplicate events
to the ass_track. Previously this list wasn’t synchronized with
the corresponding ass_track, so the sub decoder would reject
any previously seen sub packets, usually meaning only subs sometime
after the current pts would be displayed after sub reinits.
Fix this by resetting the list upon ass_track recreation.
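For case 2, the bookkeeping fix is basically just clearing the duplicate-detection state together with the track; as a sketch (hypothetical names):

    #include <stdint.h>

    // State that remembers which converted-sub packets were already fed
    // to the ass_track, so duplicates are not added twice.
    struct seen_packets {
        int64_t *ids;
        int num_ids;
    };

    // Whenever the ass_track is destroyed and recreated, this list must
    // be reset too, otherwise old packets are rejected as "already seen"
    // even though the new track is empty.
    static void on_ass_track_recreated(struct seen_packets *seen)
    {
        seen->num_ids = 0;
    }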
Previously, the sub-visibility option changed the visibility of all
subtitles including secondary ones. This meant that it was not possible
to only display secondary subtitles while hiding the primary ones. This
modifies the sub-visibility option so that it only affects primary
subtitles which allows only secondary subtitles to be displayed.
This reverts commit 04f0b0abe4.
It's not a good idea to unify the names only for visibility, while
keeping secondary-* for everything else.
This needs a bit more thought before we allow secondary sub to be
visible on its own.
Adds --sub-visibility choices 'primary-only' for only displaying the
primary subtitle track, and 'secondary-only' for only displaying
secondary subtitle track.
Removes --secondary-sub-visibility and displays a message telling the
user to use --sub-visibility=yes/primary-only instead.
These changes make it so that the default 'sub-visibility' bind 'v'
cycles through all the 'sub-visibility' choices, 'no', 'yes',
'primary-only', and 'secondary-only'.
29e15e6248 prefixed youtube-dl's subs url with an edl prefix to not
download them until they're selected, which is useful when there are
many sub languages. But this prefix broke the alignment of secondary
subs, which would overlap the primary subs instead of always being
placed at the top. This can be tested with
--sub-file='edl://!no_clip;!delay_open,media_type=sub;secondary_sub.srt'
When a sub is added, sub.c:reinit_sub() is called. This calls in
init_subdec() -> dec_sub.c:sub_create() -> init_decoder() ->
sd_ass:init(). Then reinit_sub() calls
sub_control(track->d_sub, SD_CTRL_SET_TOP, &(bool){!!order}) which sets
sd_ass_priv.on_top = true for secondary subs.
But for EDL subs the real sub is initialized again when
is_new_segment() returns true in dec_sub.c:sub_read_packets() and
update_segment() is called, or when sub_get_bitmaps() calls
update_segment(). update_segment() then calls init_decoder(), which
calls sd_ass:init(), so sd_ass_priv is reinitialized, and its on_top
property is left false. This commit sets it to true again.
For URLs that need to be downloaded it seems that the update_segment()
call that reinitializes sd_ass_priv is always the one in
sub_read_packets(), but with local subs sub_get_bitmaps() is usually
called earlier (though there shouldn't be a reason to use the EDL URL
for local subs), so I added the order parameter to sub_create(), rather
than adding it to all of update_segment(), sub_read_packets() and
sub_get_bitmaps().
Also removes the cast to bool in the other sub_control call, since
sub/sd_ass.c:control already casts arg to bool when cmd is
SD_CTRL_SET_TOP.
See manpage additions. This was requested, sort of. Although what has
been requested might be something completely different. So this is
speculative.
This also changes sub_get_text() to return an allocated copy, because
the buffer shit was too damn messy.
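The ownership change itself is trivial; sketched with plain strdup() (mpv allocates the copy with talloc instead):

    #include <stdlib.h>
    #include <string.h>

    // Return a heap copy the caller owns and frees, instead of a pointer
    // into an internal buffer that may be reused or resized.
    static char *get_text_copy(const char *internal_buf)
    {
        return internal_buf ? strdup(internal_buf) : NULL;
    }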
Making OSD/subtitle bitmaps refcounted was planned a longer time ago,
e.g. the sub_bitmaps.packed field (which refcounts the subtitle bitmap
data) was added in 2016. But nothing benefited much from it, because
struct sub_bitmaps was usually stack allocated, and there was this weird
callback stuff through osd_draw().
Make it possible to get actually refcounted subtitle bitmaps on the OSD
API level. For this, we just copy all subtitle data other than the
bitmaps with sub_bitmaps_copy(). At first, I had planned some fancy
refcount shit, but when that was a big mess and hard to debug and just
boiled down to emulating malloc(), I made it a full allocation+copy. This
affects mostly the parts array. With crazy ASS subtitles, this parts
array can get pretty big (thousands of elements or more), in which case
the extra alloc/copy could become performance relevant. But then again
this is just pure bullshit, and I see no need to care. In practice, this
extra work most likely gets drowned out by libass murdering a single
core (while mpv is waiting for it) anyway. So fuck it.
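The copy in question is essentially this (heavily simplified; the real sub_bitmaps struct has more fields, and the packed image data is refcounted rather than duplicated):

    #include <stdlib.h>
    #include <string.h>

    struct sub_part { int x, y, w, h; };

    struct sub_bitmaps_sketch {
        struct sub_part *parts;   // per-event rectangles; can be thousands
        int num_parts;
        void *packed;             // refcounted bitmap data, shared by reference
    };

    static void sub_bitmaps_copy_sketch(struct sub_bitmaps_sketch *dst,
                                        const struct sub_bitmaps_sketch *src)
    {
        *dst = *src;  // shares the refcounted bitmap data
        dst->parts = malloc(src->num_parts * sizeof(*dst->parts));
        memcpy(dst->parts, src->parts, src->num_parts * sizeof(*dst->parts));
    }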
I just wanted this so draw_bmp.c requires only a single call to render
everything. VOs also can benefit from this, because the weird callback
shit isn't necessary anymore (simpler code), but I haven't done anything
about it yet. In general I'd hope this will work towards simplifying the
OSD layer, which is prerequisite for making actual further improvements.
I haven't tested some cases such as the "overlay-add" command. Maybe it
crashes now? Who knows, who cares.
In addition, it might be worthwhile to reduce the code duplication
between all the things that output subtitle bitmaps (with repacking,
image allocation, etc.), but that's orthogonal.
Using mpv without libass isn't really supported, since it's not only
used to display ASS subtitles, but all text subtitles, and even OSD.
At least 1 user complained that the player printed a warning if built
without libass. Avoid trying to create the impression that using this
software without libass is in any way supported or desirable, and make
it fully mandatory.
(As far as making dependencies optional goes, I'd rather make ffmpeg
optional, which is an oversized and bloated library, rather than
something tiny like libass.)
Setting demux_set_stream_wakeup_cb() will make all sh_stream (i.e.
track) specific wakeups go to this callback. But the callback takes care
of only the sub_preload() case (where it tries to pre-load subtitles
from already parsed and memory-present subtitles in a blocking way).
The old code assumed that the normal demuxer wakeup callback is called.
This was disregarded when the newer code was added. (And actually, the
original plan was to make _all_ per-sh_stream wakeups go to specialized
callbacks to avoid wasted work. dec_sub really should set the callback
always, and propagate wakeups to the playloop code. But it's too far
into the night to write coherent code.)
I couldn't actually observe any manifestation of this bug. Normally, the
playloop wakes up for other reasons (such as driving audio and video
decoding), so the lost wakeups rarely matter.
A negative subtitle delay means that subtitles from the future should be
shown earlier. With muxed subtitles, subtitle packets are demuxed along
with audio and video packets. But since they are demuxed "lazily",
nothing guarantees that subtitle packets from the future are available
in time.
Typically, the user-observed effect is that subtitles do not appear at
all (or too late) with large negative --sub-delay values, but that using
--cache might fix this.
Make this behave better. Automatically extend read-ahead to as much as
needed by the subtitles. It seems it's the easiest to pass the subtitle
render timestamp to the demuxer in order to guarantee that everything is
read. This timestamp based approach might be fragile, so disable it if
no negative sub-delay is used.
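The gating logic amounts to something like this (illustrative names and arithmetic; only active when the delay is actually negative):

    #include <math.h>
    #include <stdbool.h>

    // Compute the pts up to which the demuxer should read ahead so the
    // subtitle renderer can look "into the future" with a negative delay.
    static bool sub_readahead_target(double sub_delay, double video_pts,
                                     double *out_target)
    {
        if (sub_delay >= 0 || isnan(video_pts))
            return false;                       // timestamp hack disabled
        *out_target = video_pts - sub_delay;    // delay < 0, so target > pts
        return true;
    }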
As far as the player frontend part is concerned, this makes use of the
code path for external subtitles, which are not lazily demuxed, and may
already trigger waiting.
Fixes: #7484
Until now, filter_sdh was simply a function that was called by sd_ass
directly (if enabled).
I want to add another filter, so it's time to turn this into a somewhat
more general subtitle filtering infrastructure.
I pondered whether to reuse the audio/video filtering stuff - but better
not. Also, since subtitles are horrible and tend to refuse proper
abstraction, it's still messed into sd_ass, instead of working on the
dec_sub.c level. Actually mpv used to have subtitle "filters" and even
made subtitle converters part of it, but it was fairly horrible, so
don't do that again.
In addition, make runtime changes possible. Since this was supposed to
be a quick hack, I just decided to put all subtitle filter options into
a separate option group (=> simpler change notification), to manually
push the change through the playloop (like it was sort of before for OSD
options), and to recreate the sub filter chain completely in every
change. Should be good enough.
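So the update path is deliberately crude, along the lines of (hypothetical helper names):

    #include <stdbool.h>

    struct filter_chain;

    // Hypothetical stand-ins for the real setup/teardown code.
    void destroy_sub_filters(struct filter_chain *chain);
    void create_sub_filters(struct filter_chain *chain);

    // Recreate the whole chain whenever the sub-filter option group
    // reports a change; no attempt at incremental reconfiguration.
    static void refresh_sub_filters(struct filter_chain *chain, bool opts_changed)
    {
        if (opts_changed) {
            destroy_sub_filters(chain);
            create_sub_filters(chain);
        }
    }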
One strangeness is that due to prefetching and such, most subtitle
packets (or those some time ahead) are actually done filtering when we
change, so the user still needs to manually seek to actually refresh
everything. And since subtitle data is usually cached in ASS_Track (for
other terrible but user-friendly reasons), we also must clear the
subtitle data, but of course only on seek, since otherwise all subtitles
would just disappear. What a fucking mess, but such is life. We could
trigger a "refresh seek" to make this more automatic, but I don't feel
like it currently.
This is slightly inefficient (lots of allocations and copying), but I
decided that it doesn't matter. Could matter slightly for crazy ASS
subtitles that render with thousands of events.
Not very well tested. Still seems to work, but I didn't have many test
cases.
See manpage additions. This is a huge hack. You can bet there are shit
tons of bugs. It's literally forcing square pegs into round holes.
Hopefully, the manpage wall of text makes it clear enough that the whole
shit can easily crash and burn. (Although it shouldn't literally crash.
That would be a bug. It possibly _could_ start a fire by entering some
sort of endless loop, not a literal one, just something where it tries
to do work without making progress.)
(Some obvious bugs I simply ignored for this initial version, but
there's a number of potential bugs I can't even imagine. Normal playback
should remain completely unaffected, though.)
How this works is also described in the manpage. Basically, we demux in
reverse, then we decode in reverse, then we render in reverse.
The decoding part is the simplest: just reorder the decoder output. This
weirdly integrates with the timeline/ordered chapter code, which also
has special requirements on feeding the packets to the decoder in a
non-straightforward way (it doesn't conflict, although a bugmessmass
breaks correct slicing of segments, so EDL/ordered chapter playback is
broken in backward direction).
Backward demuxing is pretty involved. In theory, it could be much
easier: simply iterating the usual demuxer output backward. But this
just doesn't fit into our code, so there's a cthulhu nightmare of shit.
To be specific, each stream (audio, video) is reversed separately. At
least this means we can do backward playback within cached content (for
example, you could play backwards in a live stream; on that note, it
disables prefetching, which would lead to losing new live video, but
this could be avoided).
The fuckmess also meant that I didn't bother trying to support
subtitles. Subtitles are a problem because they're "sparse" streams.
They need to be "passively" demuxed: you don't try to read a subtitle
packet, you demux audio and video, and then look whether there was a
subtitle packet. This means to get subtitles for a time range, you need
to know that you demuxed video and audio over this range, which becomes
pretty messy when you demux audio and video backwards separately.
Backward display is the most weird (and potentially buggy) part. To
avoid that we need to touch a LOT of timing code, we negate all
timestamps. The basic idea is that due to the negation, all
comparisons and subtractions of timestamps keep working, and you don't
need to touch every single of them to "reverse" them.
E.g.:

    bool before = pts_a < pts_b;

would need to be:

    bool before = forward
        ? pts_a < pts_b
        : pts_a > pts_b;

or:

    bool before = pts_a * dir < pts_b * dir;

or if you, as it's implemented now, just do this after decoding:

    pts_a *= dir;
    pts_b *= dir;

and then in the normal timing/renderer code:

    bool before = pts_a < pts_b;
Consequently, we don't need many changes in the latter code. But some
assumptions inherently true for forward playback may have been broken
anyway. What is mainly needed is fixing places where values are passed
between positive and negative "domains". For example, seeking and
timestamp user display always uses positive timestamps. The main mess is
that it's not obvious which domain a given variable should or does use.
Well, in my tests with a single file, it suddenly started to work when I
did this. I'm honestly surprised that it did, and that I didn't have to
change a single line in the timing code past decoder (just something
minor to make external/cached text subtitles display). I committed it
immediately while avoiding thinking about it. But there really likely
are subtle problems of all sorts.
As far as I'm aware, gstreamer also supports backward playback. When I
looked at this years ago, I couldn't find a way to actually try this,
and I didn't revisit it now. Back then I also read talk slides from the
person who implemented it, and I'm not sure if and which ideas I might
have taken from it. It's possible that the timestamp reversal is
inspired by it, but I didn't check. (I think it claimed that it could
avoid large changes by changing a sign?)
VapourSynth has some sort of reverse function, which provides a backward
view on a video. The function itself is trivial to implement, as
VapourSynth aims to provide random access to video by frame numbers (so
you just request decreasing frame numbers). From what I remember, it
wasn't exactly fluid, but it worked. It's implemented by creating an
index, and seeking to the target on demand, and a bunch of caching. mpv
could use it, but it would either require using VapourSynth as demuxer
and decoder for everything, or replacing the current file every time
something is supposed to be played backwards.
FFmpeg's libavfilter has reversal filters for audio and video. These
require buffering the entire media data of the file, and don't really
fit into mpv's architecture. It could be used by playing a libavfilter
graph that also demuxes, but that's like VapourSynth but worse.
There are 3 packet reading functions in the demux API, which all
function completely differently. One of them, demux_read_packet(), has
only 1 caller, which is in dec_sub.c. Change this caller to use
demux_read_packet_async() instead. Since it really wants to do a
blocking call, setup some proper waiting. This uses mp_dispatch_queue,
because even though it's overkill, it needs the least code.
In practice, waiting actually never happens. This code is only called on
code paths where everything is already read into memory (libavformat's
subtitle demuxers simply behave this way). It's still a bit of a
"coincidence", so implement it properly anyway.
If subtitle decoder init fails, we still need to unset the demuxer
wakeup callback. Add a sub_destroy() call to the failure path. This also
happens to fix a missed pthread_mutex_destroy() call (in practice this
was a nop, or a memory leak on BSDs).
Remove them from the big MPOpts struct and move them to their sub
structs. In the places where their fields are used, create a private
copy of the structs, instead of accessing the semi-deprecated global
option struct instance (mpv_global.opts) directly.
This actually makes accessing these options finally thread-safe. They
weren't, even though they should have been for years. (Including some potential
for undefined behavior when e.g. the OSD font was changed at runtime.)
This is mostly transparent. All options get moved around, but most users
of the options just need to access a different struct (changing sd.opts
to a different type changes a lot of uses, for example).
One thing which has to be considered and could cause potential
regressions is that the new option copies must be explicitly updated.
sub_update_opts() takes care of this for example.
Another thing is that writing to the option structs manually won't work,
because the changes won't be propagated to other copies. Apparently the
only affected case is the implementation of the sub-step command, which
tries to change sub_delay. Handle this one explicitly (osd_changed()
doesn't need to be called anymore, because changing the option triggers
UPDATE_OSD, and updates the OSD as a consequence). The way the option
value is propagated is rather hacky, but for now this will do.
It was split at least across osd.c and sd_ass.c/sd_lavc.c. sd_lavc.c
actually ignored most of the more obscure subtitle timing things.
There's no reason for this - just move it all to dec_sub.c (mostly from
sd_ass.c, because it has some of the most complex stuff).
Now timestamps are transformed as they enter or leave dec_sub.c.
There appear to have been some subtle mismatches about how subtitle
timestamps were transformed, e.g. sd_functions.accepts_packet didn't
apply the subtitle speed to the timestamp. This patch should fix them,
although it's not clear if they caused actual misbehavior.
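The transform itself boils down to applying delay and speed in one direction and undoing both in the other; a sketch of one plausible arrangement (the exact composition in mpv may differ, and the real code also handles NOPTS values and more state):

    struct sub_timing { double delay, speed; };

    // Video/global pts -> subtitle-domain pts (applied on entry to dec_sub).
    static double pts_to_subtitle(const struct sub_timing *t, double pts)
    {
        return (pts - t->delay) / t->speed;
    }

    // Subtitle-domain pts -> video/global pts (applied on exit).
    static double pts_from_subtitle(const struct sub_timing *t, double pts)
    {
        return pts * t->speed + t->delay;
    }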
The semantics of SD_CTRL_SUB_STEP are slightly changed, which is the
reason for the changes in command.c and sd_lavc.c.
The new_segment field was used to track the decoder data flow handler of
timeline boundaries, which are used for ordered chapters etc. (anything
that sets demuxer_desc.load_timeline). This broke seeking with the
demuxer cache enabled. The demuxer is expected to set the new_segment
field after every seek or segment boundary switch, so the cached packets
basically contained incorrect values for this, and the decoders were not
initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
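The comparison is by value rather than by flag, roughly like this (fields reduced to the essentials):

    #include <stdbool.h>

    struct segment_info {
        double start, end;     // segment boundaries
        const void *codec;     // per-segment codec/init data
    };

    // A decoder reinit is triggered when the segment content differs,
    // instead of trusting a new_segment flag stored in cached packets.
    static bool same_segment(const struct segment_info *a,
                             const struct segment_info *b)
    {
        return a->start == b->start && a->end == b->end && a->codec == b->codec;
    }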
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, and the timestamp clamping that needs to be performed in
theory to account for the fact that a packet might contain a frame that
is always clipped off by segment handling. This can be fixed later.
The previous commit means subtitles were reinitialized on every seek
(even within a segment). This commit restores the old behavior.
To check whether the segment changed at all, we don't reset the current
start/end values. This assumes the decoder wrapper is always fed by a
stream which doesn't mix segment and non-segment packets, which is
currently always true.
The accepts_packet callback is supposed to deal with subtitle
decoders which have only a small queue of current subtitle events (i.e.
sd_lavc.c), in case feeding it too many packets would discard events
that are still needed.
Normally, the number of subtitles that need to be preserved is estimated
by the rendering pts (get_bitmaps() argument). Rendering lags behind
decoding, so normally the rendering pts is smaller than the next video
frame pts, and we simply discard all subtitle events until the rendering
pts.
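In the normal case the pruning is simple (sketch; the seek-related hacks below complicate the choice of render_pts, not this loop):

    struct sub_event { double start, end; };

    // Keep only events that can still be rendered at or after render_pts;
    // everything that ended earlier can safely be discarded.
    static int prune_events(struct sub_event *events, int num, double render_pts)
    {
        int kept = 0;
        for (int i = 0; i < num; i++) {
            if (events[i].end >= render_pts)
                events[kept++] = events[i];
        }
        return kept;
    }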
This breaks down in some annoying corner cases. One of them is seeking
backwards: the VO will still try to render the old PTS during seeks,
which passes a high PTS to the subtitle renderer, which in turn would
discard more subtitles than it should. There is a similar issue with
forward seeks. Add hacks to deal with those issues.
There should be a better way to deal with the essentially unknown
"rendering position", which is made worse by screenshots or rendering
with vf_sub. At the very least, we could handle seeks better, and e.g.
either force the VO not to re-render subs after seeks (ugly), or
introduce seek sequence numbers to distinguish attempts to render
earlier subtitles when a seek is done.
The intention is to let mp_ass_packer_pack() produce different output
for the RGBA and LIBASS formats. VOs (or whatever generates the OSD)
currently do not signal a preferred format, and this mechanism just
exists to switch between RGBA and LIBASS formats correctly, preferring
LIBASS if the VO supports it.