Secondary subtitle streams (shown at the top of the screen, alongside
the main subtitle stream) were rendered with normal alignment. This is
because we tell libass to override the alignment style (a relatively
recent change, see commit 2f1eb49e). Old libass versions would behave
differently here as well.
To escape the mess, just set the alignment explicitly with an override
tag instead of modifying the style.
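A minimal sketch of the tag approach (event_text, track, start_ms and
duration_ms are assumed context, not mpv's actual identifiers):

    // {\an8} forces top-center alignment for this event only, regardless
    // of style overrides or libass version differences.
    char buf[4096];
    snprintf(buf, sizeof(buf), "{\\an8}%s", event_text);
    ass_process_chunk(track, buf, strlen(buf), start_ms, duration_ms);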
Remove the wrapper properties for OSD and video position updates, and
use the new mechanism for them: the options can be marked directly.
Update behavior will work for more options (since I casually marked more
affected options than the old, less direct mechanism covered).
If --blend-subtitles=yes is given, vo_opengl will call osd_draw()
multiple times, once for subtitles, and once for OSD. This meant that
the want_redraw flag was reset before the OSD was rendered, which in
turn meant that update_osd() was never called. It seems like removing
the per-OSD object want_redraw wasn't such a good idea. Fix it by
reintroducing such a flag for OSDTYPE_OSD only.
Also, the want_redraw flag is now unused, so kill it.
Another regression caused by commit 9c9cf125. Fixes #3535.
This could in theory lead to missed updates if subtitles were switched,
or if external OSD overlays (via overlay-add) were updated. While the
change IDs of each of those sources were consistent on their own,
switching between two separate OSD sources is not, so we have to trigger
a change explicitly.
Regression since commit 9c9cf125. The new code is actually better,
because we do exactly what is needed, and don't just mess with the
update ID for libass-based OSD.
Remove the per-part force_redraw flags, and instead make the difference
between flagging dirty state and returning it to the player frontend
more explicit. The big issue is that:
1. the OSD needs to know the dirty state, and it should be cleared
strictly when it is re-rendered (force_redraw flag), and
2. the player core needs to be notified once, and the notification must
be reset (want_redraw flag).
The call in loadfile.c is replaced by making osd_set_sub() set the
change flag. Increasing the change flag on dirty state (the force_redraw
check in render_object()) should not be needed, because OSD part
renderers set it correctly (at least now).
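In sketch form, the intended split looks like this (a sketch only; the
names follow osd.c conventions, but the real code differs in detail):

    #include <stdbool.h>

    struct osd_state {
        bool force_redraw; // dirty; cleared strictly when the OSD is re-rendered
        bool want_redraw;  // the player core must be notified once
    };

    static void osd_changed_unlocked(struct osd_state *osd)
    {
        osd->force_redraw = osd->want_redraw = true;
    }

    // The frontend-facing query resets only the notification;
    // re-rendering is what clears force_redraw.
    bool osd_query_and_reset_want_redraw(struct osd_state *osd)
    {
        bool r = osd->want_redraw;
        osd->want_redraw = false;
        return r;
    }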
Doing this just because someone pointed this out.
This is a bug fix, and the text alignment functionality probably got
lost sometime along the way.
For ASS subtitles this could have unintended consequences and is hard to
get right, so it's not applied to them.
For other text subtitles, this should be fine, though. It still works on
ASS subtitles as promised by the manpage if --no-sub-ass is used.
Whitelisting supported codecs is (probably) still better than just
allowing everything, given the weird FFmpeg API. I'm also assuming
Libav doesn't even have the codec ID, but I didn't check.
Also add a --teletext-page option, since otherwise it decodes every
teletext page and shows them in succession.
And yes, we can't use av_opt_set_int() - instead we have to set it as a
string. Because FFmpeg's option system is terrible.
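Roughly like this (a sketch; 'teletext_page' stands in for the option
value, and "txt_page" is - as far as I can tell - the libzvbi decoder's
option name):

    #include <stdio.h>
    #include <libavutil/opt.h>

    char page[4];
    snprintf(page, sizeof(page), "%d", teletext_page);
    // Must go through the string setter; AV_OPT_SEARCH_CHILDREN lets it
    // reach the decoder's private options.
    av_opt_set(avctx, "txt_page", page, AV_OPT_SEARCH_CHILDREN);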
Like it's done for audio and video. Just to be uniform.
I'm sorry for deleting the anti-ffmpeg vitriol. It's still all true, but
since we decided to always set the timebase, the crappiness is isolated
to FFmpeg internals.
The accepts_packet callback is supposed to deal with subtitle
decoders which have only a small queue of current subtitle events (i.e.
sd_lavc.c), in case feeding it too many packets would discard events
that are still needed.
Normally, the number of subtitles that need to be preserved is estimated
by the rendering pts (get_bitmaps() argument). Rendering lags behind
decoding, so normally the rendering pts is smaller than the next video
frame pts, and we simply discard all subtitle events until the rendering
pts.
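In sketch form (names approximate sd_lavc.c's internals; this is not the
literal code):

    static bool accepts_packet(struct sd *sd, double min_pts)
    {
        struct sd_lavc_priv *priv = sd->priv;
        int available = 0;
        for (int n = 0; n < MAX_QUEUE; n++) {
            struct sub *s = &priv->subs[n];
            // A slot is reusable if it's unused, or its event ends before
            // the earliest pts the VO might still render.
            if (!s->valid || (min_pts != MP_NOPTS_VALUE &&
                              s->endpts != MP_NOPTS_VALUE &&
                              min_pts >= s->endpts))
                available++;
        }
        return available > 0;
    }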
This breaks down in some annoying corner cases. One of them is seeking
backwards: the VO will still try to render the old PTS during seeks,
which passes a high PTS to the subtitle renderer, which in turn would
discard more subtitles than it should. There is a similar issue with
forward seeks. Add hacks to deal with those issues.
There should be a better way to deal with the essentially unknown
"rendering position", which is made worse by screenshots or rendering
with vf_sub. At the very least, we could handle seeks better, and e.g.
either force the VO not to re-render subs after seeks (ugly), or
introduce seek sequence numbers to distinguish attempts to render
earlier subtitles when a seek is done.
There are two reasons for this:
1. I tend to add new fields to this metadata, and every time I've done
so I've forgotten to update some of the dozens of places in which this
colorimetry metadata might end up getting used. While most usages don't
really care about most of the metadata, sometimes the intent was simply
to “copy” the colorimetry metadata from one struct to another. With this
being inside a substruct, those lines of code can now simply read
a.color = b.color without having to care about added or removed fields.
2. It makes the type definitions nicer for upcoming refactors.
In going through all of the usages, I also expanded a few where I felt
that omitting the “young” fields was a bug.
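The result looks roughly like this (a sketch; the exact field set is
illustrative):

    struct mp_colorspace {
        enum mp_csp space;          // e.g. BT.601 vs. BT.709 matrix
        enum mp_csp_levels levels;  // limited vs. full range
        enum mp_csp_prim primaries;
        enum mp_csp_trc gamma;
    };

    struct mp_image_params {
        // ...dimensions and format as before...
        struct mp_colorspace color; // copied wholesale: a.color = b.color;
    };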
vo_vaapi is the only thing which can't scale RGBA on the GPU. (Other
cases of RGBA scaling are handled in draw_bmp.c for some reason.)
Move this code and get rid of the osd_conv_cache thing.
Functionally, nothing changes.
This affects VOs (or other code which renders OSD) which do not support
the LIBASS format, but only RGBA. Instead of having a converter stage in
osd.c, make mp_ass_packer_pack() output directly in RGBA.
In general, this is work towards refcounted subtitle images.
Although we could keep the "converter" design, doing it this way seems
simpler, at least considering the current situation with only 2 OSD
formats. It also prevents copying & packing the data twice, which will
lead to better performance. (Although I guess this case is not important
at all.)
It also fixes --force-rgba-osd-rendering when used with vo_opengl,
vo_vdpau, and vo_direct3d.
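For reference, the inner loop of such a conversion looks roughly like
this (a sketch, not the packer's actual code; libass color is RGBA with
inverse alpha in the low byte, and the output channel order depends on
the target format):

    #include <stdint.h>
    #include <stddef.h>

    static void ass_to_rgba(const uint8_t *src, int w, int h,
                            ptrdiff_t src_stride, uint8_t *dst,
                            ptrdiff_t dst_stride, uint32_t color)
    {
        unsigned r = color >> 24,         g = (color >> 16) & 0xff;
        unsigned b = (color >> 8) & 0xff, a = 0xff - (color & 0xff);
        for (int y = 0; y < h; y++) {
            uint32_t *out = (uint32_t *)(dst + y * dst_stride);
            for (int x = 0; x < w; x++) {
                unsigned v = src[y * src_stride + x] * a / 255;
                // Premultiplied alpha: color channels are scaled by v too.
                out[x] = v << 24 | (r * v / 255) << 16 |
                         (g * v / 255) << 8 | (b * v / 255);
            }
        }
    }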
The intention is to let mp_ass_packer_pack() produce different output
for the RGBA and LIBASS formats. VOs (or whatever generates the OSD)
currently do not signal a preferred format, and this mechanism just
exists to switch between RGBA and LIBASS formats correctly, preferring
LIBASS if the VO supports it.
Change all producers of libass images to pack the bitmaps into a
single larger bitmap directly when they're output. This is supposed to
help working towards refcounted sub bitmaps.
This will reduce performance for VOs like vo_xv, but not for vo_opengl.
vo_opengl simply will pick up the pre-packed sub bitmaps, and skip
packing them again. vo_xv will copy and pack the sub bitmaps
unnecessarily - but if we want sub bitmap refcounting, they'd have to be
copied anyway.
The packing code cannot be removed yet from vo_opengl, because there are
certain corner cases that still produce unpacked sub bitmaps.
Actual refcounting will also require more work.
Since there are not many sub-rectangles, this doesn't cost too much. On
the other hand, it avoids frequent warnings with vo_xv.
Also, the second copy in mp_blur_rgba_sub_bitmap() can be dropped.
The previous few commits changed sd_lavc.c's output to packed RGBA
sub-images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
them again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some more
work is required to reach this point. The plan is to pack libass images
as well. Since this implies a copy, this will make it easy to refcount
the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
The sub-bitmaps get extended by --sub-gauss, so we have to compute the
bounding box on the original subs. Not sure if this is really
equivalent to what the code did before, and I don't have the sample
anymore. (But this approach sure is a _shitty_ hack.)
Implement it directly in sd_lavc.c as well. Blurring requires extending
the size of the sub-images by the blur radius. Since we now want
sub_bitmaps to be packed into a single image, and we don't want to
repack for blurring, we add some extra padding to each sub-bitmap in the
initial packing, and then extend their size later. This relies on the
previous bitmap_packer commit, which always adds the padding in all
cases.
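In sketch form (identifiers illustrative; bitmap_packer's padding field
is what the previous commit made unconditional):

    // Reserve blur headroom at pack time so no repacking is needed.
    priv->packer->padding = pad; // pad derives from the --sub-gauss radius
    // ... pack and decode into the padded slots ...
    for (int n = 0; n < num_parts; n++) {
        struct sub_bitmap *sb = &parts[n];
        // Grow each rectangle into its padding; the data pointer has to
        // be moved back by the same amount (omitted here).
        sb->x -= pad;     sb->y -= pad;
        sb->w += 2 * pad; sb->h += 2 * pad;
    }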
Since blurring is now done on parts of a large bitmap, the data pointers
can become unaligned, depending on their position. To avoid shitty
libswscale printing a dumb warning, allocate an extra image, so that the
blurring pass is done on two newly allocated images. (I don't find this
feature important enough to waste more time on it.)
The previous refactor accidentally broke this feature due to a logic bug
in osd.c. It didn't matter before it happened to break, and doesn't
matter now since the code paths are different.
Until now, subtitle renderers could export SUBBITMAP_INDEXED, an 8 bit
per pixel format with palette. sd_lavc.c was the only renderer doing
this, and the result was converted to RGBA in every use-case (except
maybe when the subtitles were hidden).
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
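The conversion itself is simple (a sketch; with current FFmpeg the
AVSubtitleRect provides the index plane in data[0] and a 32-bit
AARRGGBB palette in data[1]):

    for (int y = 0; y < h; y++) {
        const uint8_t *idx = rect->data[0] + y * rect->linesize[0];
        uint32_t *out = (uint32_t *)(rgba + y * rgba_stride);
        for (int x = 0; x < w; x++)
            out[x] = ((const uint32_t *)rect->data[1])[idx[x]];
    }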
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive
side-effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and for the libass
format the same will be done. The plan is to remove manual packing from
the VOs which need single images entirely.
Older ffmpeg releases don't have ffmpeg git commit
50401f5fb7d778583b03a13bc4440f71063d319d, which fixes ffmpeg's
pkt_timebase check to reject its default "unset" timebase as invalid.
The consequence was that all non-PGS bitmap subtitle timestamps were
forced to 0.
Of course this hit _only_ shitty distros using outdated/badly maintained
ffmpeg releases, so this is not worth working around. I've already
wasted a lot of time on analyzing this dumb issue, and it could be
useful for bisecting, so don't drop pre-3.0 ffmpeg just yet.
Fixes #3109.
libass's ass_process_chunk expects long long int for the timecode and
duration arguments, so it should use llrint instead of lrint.
This does not cause any problems on most platforms, but on cygwin, it
causes strange subtitle behaviour, like subtitles not showing, getting
stuck or old subtitles showing at the same time as new subtitles.
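The change amounts to something like this (a sketch; pts and duration
are in seconds, converted to milliseconds the way sd_ass does):

    // Where long is 32 bits, lrint() truncates the 64-bit millisecond
    // values; llrint() returns long long and doesn't.
    ass_process_chunk(track, text, strlen(text),
                      llrint(pts * 1000), llrint(duration * 1000));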
--sub-ass=no / --ass=no still work, but --ass-style-override=strip is
preferred now. With this change, --ass-style-override can control all
the types of style overriding.
There is an obscure feature which requires essentially reordering PTS
from different packets.
Unfortunately, libavcodec introduced a ridiculously shitty API for
this, which works very much unlike the audio/video API. Instead of
simply passing through the PTS, it wants to fuck with it for no reason,
and even worse, fucks with other fields and changes their semantics
(??????). This affects AVSubtitle.end_display_time. This probably will
cause issues for us, and I have no desire to find out whether it will.
Since only PGS requires this, and it happens not to use
end_display_time, do it for PGS only.
Fixes #3016.
Basically, this information is useless, because some muxers (hurr
libavformat) write bogus information anyway. This means if we e.g. see
PGS packets in mkv with duration explicitly set to 0, we must not trust
that value anyway. (The FFmpeg API problem is leaking into files, how
nice.)
In particular, this prevents subtitle packets from building up in the
subtitle queue if e.g. --vo=null is used. In this situation,
sub_get_bitmaps() is never called, and thus segments were never switched.
This also seems to help with flickering at segment switch boundaries (if
subs are supposed to be visible at the transition points).
In theory, this could trigger a switch too early, but the way VO and
subtitle renderer interact wrt. timing is a bit iffy anyway.
Glitches when resizing are still possible, but are reduced. Other VOs
could support this too, but don't need to do so.
(Totally avoiding glitches would be much more effort, and probably not
worth the trouble. How about you just watch the video the player is
playing, instead of spending your time resizing the window.)
No need to have them everywhere. The only exception/annoyance is
MAX_OSD_PARTS, which is now basically duplicated (and at runtime
initialization is checked with an assert()).
Until now, there was only 1 global ASS overlay that could be set by all
scripts. This was often perceived as a bug when multiple scripts tried to
set their own ASS overlay.
This was kind of hard to solve because the script could set its own ASS
PlayResX/Y, which makes it impossible to share a single ASS_Renderer for
multiple scripts. The OSC unfortunately makes use of this feature (and
unfortunately can't be fixed because it's a POS), so we're stuck with
this complication.
Implement the worst-case solution and fix this by creating separate ASS
track and renderer objects for each script that wants to set an ASS
overlay.
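Per overlay, that amounts to something like this (a sketch; names are
illustrative, error handling omitted):

    #include <stdlib.h>
    #include <ass/ass.h>

    struct ass_overlay {
        ASS_Track *track;     // carries the script's own PlayResX/Y
        ASS_Renderer *render; // can't be shared because of that
    };

    static struct ass_overlay *overlay_create(ASS_Library *lib)
    {
        struct ass_overlay *o = calloc(1, sizeof(*o));
        o->track = ass_new_track(lib);
        o->render = ass_renderer_init(lib);
        return o;
    }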
The z-order is decided by the order in which the scripts first set their
text. This is essentially random, unless you do it at script init and
load the scripts in a specific order. Script initialization is currently
serialized (as a feature), so the first loaded script gets the lowest
z-order.
The Lua script API interestingly remains the same. (And also will remain
undocumented, unsupported, and potentially volatile.)
Instead of passing an explicit cache to the function, the res parameter
is used. Also, instead of replacing its contents, sub bitmaps are now
appended to it (all assuming the format doesn't actually change).
This is preparation for the following commits.
Do not scale OSD mouse input to the ASS OSD script resolution. The
original idea of this mechanism was that the user doesn't have to care
about the actual resolution of anything, and can just use the OSD
resolution consistently. But this made things worse.
Remove the implicit scaling, and always use the screen resolution.
(Except with --vo=xv, where additional scaling is forced upon
everything.)
Drop get_osd_resolution(). There is no replacement. Rename
get_screen_size() and get_screen_margins() to use "osd" instead of
"screen". For anything but --vo=xv these are equivalent, but with
--vo=xv the OSD resolution has additional implicit scaling.
Add code to osc.lua which emulates the old behavior.
Note that none of the changed functions were public API, so implicit
breakage of scripts which used them is just going to happen.
Subtitles can be preloaded, which means they're fully read and copied
into ASS_Track. This in turn is mainly for the sake of being able to do
subtitle seeking (when it comes down to it, subtitle seeking is the
cause for most trouble here).
Commit a714f8e92 broke preloaded subtitles which have events with
unknown duration, such as some MicroDVD samples. The event list gets
cleared on every seek, so the property of being preloaded obviously gets
lost.
Fix this by moving most of the preloading logic to dec_sub.c. If the
subtitle list gets cleared, they are not considered preloaded anymore,
and the logic for demuxed subtitles is used.
As another minor thing, preloading subtitles neither disabled the demux
stream, nor discarded packets. Thus you could get queue overflows in
theory (harmless, but annoying). Fix this by explicitly discarding
packets in preloaded mode.
In summary, the only differences between preloaded and normal demuxing
now are:
1. a seek is issued, and all packets are read on start
2. during playback, packets are discarded instead of being fed to the
subtitle decoder
This is still pretty annoying. It would be nice if the subtitle index
(and maybe a subtitle packet cache for instant subtitle presentation
when seeking back) could be maintained by the demuxer instead. Half of
all file formats with interleaved subtitles have this anyway (mp4, and
mkv muxed with newer mkvmerge).