This field is used only in a special VO draining edge case.
Switching to an atomic reduces osd->lock contention
between the mpv core (in write_video) and VO threads
that manage OSD rendering manually (such as vo_rpi).
Signed-off-by: Aman Karmani <aman@tmm1.net>
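A minimal sketch of the idea (field and function names here are illustrative,
not the actual vo.c code):

    #include <stdatomic.h>
    #include <stdint.h>

    // Illustrative stand-in for the VO-side state; the real field lives in
    // vo.c's internals and uses mpv's own atomic wrappers.
    struct vo_osd_pts_sketch {
        _Atomic int64_t osd_pts_us;   // OSD PTS in microseconds
    };

    // Called from the core (write_video) without holding the osd lock.
    static void publish_osd_pts(struct vo_osd_pts_sketch *s, int64_t pts_us)
    {
        atomic_store_explicit(&s->osd_pts_us, pts_us, memory_order_relaxed);
    }

    // Called from a VO thread that renders the OSD manually.
    static int64_t read_osd_pts(struct vo_osd_pts_sketch *s)
    {
        return atomic_load_explicit(&s->osd_pts_us, memory_order_relaxed);
    }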
Should be somewhat helpful. (All VOs are full of code trying to
compensate for this, more or less, and this will allow simplifying
some code later. Maybe.)
The screen size is mostly for robustness checks.
Making OSD/subtitle bitmaps refcounted was planned a long time ago,
e.g. the sub_bitmaps.packed field (which refcounts the subtitle bitmap
data) was added in 2016. But nothing benefited much from it, because
struct sub_bitmaps was usually stack allocated, and there was this weird
callback stuff through osd_draw().
Make it possible to get actually refcounted subtitle bitmaps on the OSD
API level. For this, we just copy all subtitle data other than the
bitmaps with sub_bitmaps_copy(). At first, I had planned some fancy
refcount shit, but since that turned into a big mess that was hard to
debug and just boiled down to emulating malloc(), I made it a full
allocation+copy. This
affects mostly the parts array. With crazy ASS subtitles, this parts
array can get pretty big (thousands of elements or more), in which case
the extra alloc/copy could become performance relevant. But then again
this is just pure bullshit, and I see no need to care. In practice, this
extra work most likely gets drowned out by libass murdering a single
core (while mpv is waiting for it) anyway. So fuck it.
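Roughly, the copy amounts to the following sketch (the types are simplified
stand-ins for struct sub_bitmaps and friends; the real code uses talloc and
mp_image refcounting, and error checking is omitted):

    #include <stdlib.h>
    #include <string.h>

    // The pixel data lives in one shared, refcounted "packed" buffer; the
    // parts array only holds metadata pointing into it.
    struct packed_image { int refcount; void *pixels; };
    struct part { int x, y, w, h; size_t src_offset; };
    struct bitmaps {
        struct packed_image *packed;  // shared between copies
        struct part *parts;           // per-copy allocation
        int num_parts;
    };

    static struct bitmaps *bitmaps_copy(const struct bitmaps *src)
    {
        struct bitmaps *dst = malloc(sizeof(*dst));
        *dst = *src;                  // copy the scalar fields
        dst->parts = malloc(src->num_parts * sizeof(src->parts[0]));
        memcpy(dst->parts, src->parts, src->num_parts * sizeof(src->parts[0]));
        src->packed->refcount++;      // pixel data is not copied, only shared
        return dst;
    }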
I just wanted this so draw_bmp.c requires only a single call to render
everything. VOs also can benefit from this, because the weird callback
shit isn't necessary anymore (simpler code), but I haven't done anything
about it yet. In general I'd hope this will work towards simplifying the
OSD layer, which is a prerequisite for making actual further improvements.
I haven't tested some cases such as the "overlay-add" command. Maybe it
crashes now? Who knows, who cares.
In addition, it might be worthwhile to reduce the code duplication
between all the things that output subtitle bitmaps (with repacking,
image allocation, etc.), but that's orthogonal.
Add infrastructure for collecting performance-related data, use it in
some places, and add rendering of the results to stats.lua.
There were two main goals: minimal impact on the normal code and on
normal playback. So all these stats_* function calls either happen only
during initialization, or return immediately if no stats collection is
going on. That's also why stats entries are added lazily, etc. (A first
iteration made each stats entry an API object, instead of just a single
stats_ctx, but I thought that was getting too intrusive in the "normal"
code, even if it makes everything worse inside of stats.c.)
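For illustration, the call pattern looks roughly like this (a simplified
sketch; names are modeled on stats.c, but the real signatures and storage
differ):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    struct stats_entry { char name[64]; long long start_ns, total_ns; };
    struct stats_ctx_sketch {
        bool active;                      // true only while stats are shown
        struct stats_entry entries[64];   // lazily filled; bounds checks omitted
        int num_entries;
    };

    static long long now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static struct stats_entry *get_entry(struct stats_ctx_sketch *ctx,
                                         const char *name)
    {
        for (int n = 0; n < ctx->num_entries; n++) {
            if (strcmp(ctx->entries[n].name, name) == 0)
                return &ctx->entries[n];
        }
        // lazily add the entry the first time it's used
        struct stats_entry *e = &ctx->entries[ctx->num_entries++];
        snprintf(e->name, sizeof(e->name), "%s", name);
        return e;
    }

    static void stats_time_start_sketch(struct stats_ctx_sketch *ctx,
                                        const char *name)
    {
        if (!ctx->active)     // normal playback: bail out immediately
            return;
        get_entry(ctx, name)->start_ns = now_ns();
    }

    static void stats_time_end_sketch(struct stats_ctx_sketch *ctx,
                                      const char *name)
    {
        if (!ctx->active)
            return;
        struct stats_entry *e = get_entry(ctx, name);
        e->total_ns += now_ns() - e->start_ns;
    }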
You could get most of this information from various profilers (including
the extremely primitive --dump-stats thing in mpv), but this makes it
easier to see the most important information at once (at least in
theory), partially because we know the context of various things best.
Not very happy with this. It's all pretty primitive and dumb. At this
point I just wanted to get it over with, without necessarily having to
revisit it later, while still getting my stupid statistics.
Somehow the code feels terrible. There are a lot of meh decisions in
there that could be better or worse (but mostly could be better), and it
just sucks but it's also trivial and uninteresting and does the job. I
guess I hate programming. It's so tedious and the result is always shit.
Anyway, enjoy.
This is more or less a minimal hack to make _some_ text measurement
functionality available to scripts. Since libass does not support such a
thing, this simply uses the bounding box of the rendered text.
This is far from ideal. Problems include:
- using a bitmap bounding box
- additional memory waste and/or flushing caches
- dependency on window size
- odd small deviations with different window sizes (run osd-test.lua and
resize the window after each timer update; the bounding boxes aren't
adjusted in an overly useful way)
- inability to query the size _after_ actual rendering
But I guess it's a start. Since I'm aware that it's crap, add a threat
to the manpage that this may be changed/removed again. For now, I'm
interested in whether anyone will have use for it in its current form,
as it's an often requested feature.
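Concretely, the measurement boils down to something like this sketch against
the public libass API (error handling omitted):

    #include <limits.h>
    #include <ass/ass.h>

    struct text_bbox { int x0, y0, x1, y1; };

    // Render the track at the given time and take the union of the output
    // image rectangles. This is exactly the "bitmap bounding box" limitation
    // mentioned above: the result depends on the renderer's frame size.
    static struct text_bbox measure_rendered(ASS_Renderer *r, ASS_Track *t,
                                             long long pts_ms)
    {
        struct text_bbox bb = {INT_MAX, INT_MAX, INT_MIN, INT_MIN};
        for (ASS_Image *img = ass_render_frame(r, t, pts_ms, NULL); img;
             img = img->next)
        {
            if (img->w <= 0 || img->h <= 0)
                continue;
            if (img->dst_x < bb.x0) bb.x0 = img->dst_x;
            if (img->dst_y < bb.y0) bb.y0 = img->dst_y;
            if (img->dst_x + img->w > bb.x1) bb.x1 = img->dst_x + img->w;
            if (img->dst_y + img->h > bb.y1) bb.y1 = img->dst_y + img->h;
        }
        return bb;   // x0 > x1 means nothing was rendered
    }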
Lua scripting has an undocumented mp.set_osd_ass() function, which is
used by osc.lua and console.lua. Apparently, 3rd party scripts also use
this. It's probably time to make this a public API.
The Lua implementation just bypassed the libmpv API. To make it usable
by any type of client, turn it into a command, "osd-overlay".
There's already a "overlay-add". Ignore it (although the manpage admits
guiltiness). I don't really want to deal with that old command. Its main
problem is that it uses global IDs, while I'd like to avoid that scripts
mess with each others overlays (whether that is accidentally or
intentionally). Maybe "overlay-add" can eventually be merged into
"osd-overlay", but I'm too lazy to do that now.
Scripting now uses the commands. There is a helper to manage OSD
overlays. The helper is very "thin"; I only want to force script authors
to use the ID allocation, which may help with putting multiple scripts
into a single .lua file without causing conflicts (basically, avoiding
singletons within a script's environment). The old set_osd_ass() is
emulated with the new API.
The JS scripting wrapper also provides a set_osd_ass() function, which
calls the internal mpv API. Comment out that part (to keep it compiling);
I'm leaving it to @avih to finish the change.
Remove them from the big MPOpts struct and move them to their sub
structs. In the places where their fields are used, create a private
copy of the structs, instead of accessing the semi-deprecated global
option struct instance (mpv_global.opts) directly.
This actually makes accessing these options finally thread-safe. They
weren't, even though they should have been for years. (This included
some potential for undefined behavior when e.g. the OSD font was changed
at runtime.)
This is mostly transparent. All options get moved around, but most users
of the options just need to access a different struct (changing sd.opts
to a different type changes a lot of uses, for example).
One thing which has to be considered and could cause potential
regressions is that the new option copies must be explicitly updated.
sub_update_opts() takes care of this for example.
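The update pattern looks roughly like this (a sketch; it uses the
m_config_cache API this is built on, but the struct and option-group names
here are made up):

    #include "options/m_config.h"   // mpv-internal header
    #include "common/global.h"      // mpv-internal header

    // Each consumer keeps its own private copy of the option group and
    // refreshes it explicitly before use.
    struct sub_render_state_sketch {
        struct m_config_cache *opts_cache;
        struct mp_sub_opts_sketch *opts;   // private, thread-safe copy
    };

    static void init_opts(struct sub_render_state_sketch *p,
                          struct mpv_global *global)
    {
        // "sub_opts_group" stands in for the real option group definition.
        p->opts_cache = m_config_cache_alloc(p, global, &sub_opts_group);
        p->opts = p->opts_cache->opts;
    }

    static void refresh_opts(struct sub_render_state_sketch *p)
    {
        // Pull in any option changes; returns whether something changed.
        if (m_config_cache_update(p->opts_cache)) {
            // re-derive anything computed from the options (e.g. ASS styles)
        }
    }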
Another thing is that writing to the option structs manually won't work,
because the changes won't be propagated to other copies. Apparently the
only affected case is the implementation of the sub-step command, which
tries to change sub_delay. Handle this one explicitly (osd_changed()
doesn't need to be called anymore, because changing the option triggers
UPDATE_OSD, and updates the OSD as a consequence). The way the option
value is propagated is rather hacky, but for now this will do.
Lua scripts can call osd_set_external() early (before the VO window is
created and obj->vo_res is filled), in which case the PlayResX field
would be set to nonsense, and libass would print a pointless warning.
There's an easy fix and a hard fix: either just go on and pass dummy
values to libass (basically like before, just clamped to avoid the values
that make libass print the warning), or attempt to update the PlayRes
fields to correct values at rendering time (since at rendering time, the
screen size, and thus the correct values, are always known). Do the latter.
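The render-time fixup is conceptually tiny; something like this sketch (the
real code also keeps the aspect handling consistent):

    #include <ass/ass.h>

    // Once the real screen size is known at render time, overwrite the dummy
    // PlayRes values the track was created with.
    static void fixup_playres(ASS_Track *track, int screen_w, int screen_h)
    {
        if (screen_w > 0 && screen_h > 0) {
            track->PlayResX = screen_w;
            track->PlayResY = screen_h;
        }
    }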
Since various things use PlayRes for scaling, this might still not be
fully ideal. This is a general problem with the async scripting
interface.
If a VO is created, but no video is playing (i.e. --force-window is
used), then until now no subtitles were shown. This is because VO
subtitle display normally depends on video frame timing. If there are no
video frames, there can be no subtitles.
Change this and add some code to handle this situation specifically. Set
a subtitle PTS manually and request VO redrawing manually, which gets
the subtitles rendered somehow.
This is kind of shaky. The subtitles are essentially sampled at
arbitrary times (such as when new audio data is decoded and pushed to
the AO, or on user interaction). To make it slightly more consistent,
force a completely arbitrary minimum FPS of 10.
Other solutions (such as creating fake video) would be more intrusive or
would require VO-level API changes.
Fixes #3684.
If --blend-subtitles=yes is given, vo_opengl will call osd_draw()
multiple times, once for subtitles, and once for OSD. This meant that
the want_redraw flag was reset before the OSD was rendered, which in
turn meant that update_osd() was never called. It seems like removing
the per-OSD object want_redraw wasn't such a good idea. Fix it by
reintroducing such a flag for OSDTYPE_OSD only.
Also, the global want_redraw flag is now unused, so kill it.
Another regression caused by commit 9c9cf125. Fixes #3535.
Remove the per-part force_redraw flags, and instead make the difference
between flagging dirty state and returning it to the player frontend
more explicit. The big issue is that 1. the OSD needs to know the dirty
state, and it should be cleared strictly when it is re-rendered
(force_redraw flag), and 2. the player core needs to be notified once,
and the notification must be reset (want_redraw flag).
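In other words, a simplified sketch of the two flags (the real struct and
functions look different):

    #include <stdbool.h>

    struct osd_dirty_sketch {
        bool force_redraw;   // OSD state is dirty; cleared when re-rendered
        bool want_redraw;    // core must be notified; cleared when queried
    };

    static void mark_changed(struct osd_dirty_sketch *osd)
    {
        osd->force_redraw = true;
        osd->want_redraw = true;
    }

    // Called by the player frontend; returns the notification exactly once.
    static bool query_and_reset_want_redraw(struct osd_dirty_sketch *osd)
    {
        bool r = osd->want_redraw;
        osd->want_redraw = false;
        return r;
    }

    // Called by the renderer; force_redraw is cleared only here.
    static void on_rendered(struct osd_dirty_sketch *osd)
    {
        osd->force_redraw = false;
    }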
The call in loadfile.c is replaced by making osd_set_sub() set the
change flag. Increasing the change flag on dirty state (the force_redraw
check in render_object()) should not be needed, because OSD part
renderers set it correctly (at least now).
Doing this just because someone pointed this out.
This affects VOs (or other code that renders the OSD) which do not support
the LIBASS format, but only RGBA. Instead of having a converter stage in
osd.c, make mp_ass_packer_pack() output directly in RGBA.
In general, this is work towards refcounted subtitle images.
Although we could keep the "converter" design, doing it this way seems
simpler, at least considering the current situation with only 2 OSD
formats. It also prevents copying & packing the data twice, which will
lead to better performance. (Although I guess this case is not important
at all.)
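For reference, the conversion itself is the usual
libass-alpha-to-premultiplied-RGBA loop, roughly like this sketch (the real
packer uses lookup tables and writes into the packed output image; the exact
byte order depends on the target format):

    #include <stdint.h>
    #include <ass/ass.h>

    // "out" is a destination buffer with out_stride measured in pixels.
    static void ass_image_to_rgba(const ASS_Image *img, uint32_t *out,
                                  int out_stride)
    {
        uint32_t r = (img->color >> 24) & 0xff;
        uint32_t g = (img->color >> 16) & 0xff;
        uint32_t b = (img->color >>  8) & 0xff;
        uint32_t a = 0xff - (img->color & 0xff);   // libass stores transparency

        for (int y = 0; y < img->h; y++) {
            const unsigned char *src = img->bitmap + y * img->stride;
            uint32_t *dst = out + y * out_stride;
            for (int x = 0; x < img->w; x++) {
                uint32_t v = src[x] * a / 255;     // effective per-pixel alpha
                dst[x] = (v << 24) |               // A
                         ((r * v / 255) << 16) |   // R (premultiplied)
                         ((g * v / 255) << 8)  |   // G (premultiplied)
                         (b * v / 255);            // B (premultiplied)
            }
        }
    }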
It also fixes --force-rgba-osd-rendering when used with vo_opengl,
vo_vdpau, and vo_direct3d.
Change all producers of libass images to pack the bitmaps into a
single larger bitmap directly when they're output. This is supposed to
help working towards refcounted sub bitmaps.
This will reduce performance for VOs like vo_xv, but not for vo_opengl.
vo_opengl simply will pick up the pre-packed sub bitmaps, and skip
packing them again. vo_xv will copy and pack the sub bitmaps
unnecessarily - but if we want sub bitmap refcounting, they'd have to be
copied anyway.
The packing code cannot be removed from vo_opengl yet, because there are
certain corner cases that still produce other, unpacked sub bitmaps.
Actual refcounting will also require more work.
No need to have them everywhere. The only exception/annoyance is
MAX_OSD_PARTS, which is now basically duplicated (and checked with an
assert() at runtime initialization).
Until now, there was only 1 global ASS overlay that could be set by all
scripts. This was often perceived as a bug when multiple scripts tried to
set their own ASS overlay.
This was kind of hard to solve because the script could set its own ASS
PlayResX/Y, which makes it impossible to share a single ASS_Renderer for
multiple scripts. The OSC unfortunately makes use of this feature (and
can't be fixed because it's a POS), so we're stuck with
this complication.
Implement the worst-case solution and fix this by creating separate ASS
track and renderer objects for each script that wants to set an ASS
overlay.
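The per-script state is essentially this (a simplified sketch using the
public libass API):

    #include <ass/ass.h>

    // Each external ASS overlay owner gets its own track and renderer, so
    // conflicting PlayResX/Y settings can't clash.
    struct external_ass_sketch {
        char *owner;              // script/client name
        ASS_Track *track;
        ASS_Renderer *renderer;
    };

    static void init_external_ass(struct external_ass_sketch *e,
                                  ASS_Library *lib, char *owner)
    {
        e->owner = owner;
        e->track = ass_new_track(lib);
        e->renderer = ass_renderer_init(lib);
    }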
The z-order is decided by the order in which the scripts first set their
text. This is essentially random, unless you do it at script init and
load the scripts in a specific order. Script initialization is currently
serialized (as a feature), so the first loaded script gets the lowest
z-order.
The Lua script API interestingly remains the same. (And also will remain
undocumented, unsupported, and potentially volatile.)
Instead of passing an explicit cache to the function, the res parameter
is used. Also, instead of replacing its contents, sub bitmaps are now
appended to it (all assuming the format doesn't actually change).
This is preparation for the following commits.
Reduces memory usage and startup times. The implementation is a bit
weird, because both OSD parts have conflicting requirements on the used
ASS styles.
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
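The duplicated escaping is small; roughly this sketch (buffer handling
simplified; the real code also neutralizes backslashes and other corner
cases):

    #include <stddef.h>

    // Escape stripped plaintext so it can be put into a dummy ASS_Event's
    // Text field without being interpreted as ASS markup.
    static void escape_into_ass(const char *text, char *out, size_t out_size)
    {
        if (!out_size)
            return;
        size_t pos = 0;
        for (; *text && pos + 3 < out_size; text++) {
            if (*text == '{') {          // don't start an override block
                out[pos++] = '\\';
                out[pos++] = '{';
            } else if (*text == '\n') {  // ASS forced line break
                out[pos++] = '\\';
                out[pos++] = 'N';
            } else {
                out[pos++] = *text;
            }
        }
        out[pos] = '\0';
    }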
Remove all the unnecessary crap from the rest of the code.
Nobody wanted to restore this, so it gets the boot.
If anyone still wants to volunteer to restore menu support, this would
be welcome. (I might even try it myself if I feel masochistic and like
wasting a lot of time for nothing.) But if it does get restored, it
should be done differently. There were many stupid things about how it
was done. For example, it somehow tried to pull mp_nav_events through
all the layers (including needing to "buffer" them in the demuxer),
which was needlessly complicated. It could be done simpler.
This code was already inactive, so this commit actually changes nothing.
Also keep in mind that normal DVD/BD playback still works.
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases where subtitles have such
scrolls or axis-aligned movement, modern libass rarely signals this kind
of change. Possibly this is because of sub-pixel handling and such, which
breaks it.
As such, it's a worthless optimization and just introduces additional
complexity and subtle bugs (especially in cases where libass does the
opposite and incorrectly signals a position-only change, which has
happened before). Remove this optimization, and rename bitmap_pos_id to
change_id.
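After the rename, a VO consumes the counter like this sketch (the private
struct is made up; struct sub_bitmaps is the real one from sub/osd.h, which
is where change_id now lives):

    struct vo_priv_sketch { int last_change_id; /* plus textures, etc. */ };

    static void draw_osd_part(struct vo_priv_sketch *p,
                              struct sub_bitmaps *imgs)
    {
        if (imgs->change_id != p->last_change_id) {
            // the image data itself changed: re-upload/re-pack everything
            p->last_change_id = imgs->change_id;
        }
        // positions are always (re)applied; no "position only" fast path
    }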
Let the VOs draw the OSD on their own, instead of making OSD drawing a
separate VO driver call. Further, let it be the VO's responsibility to
request subtitles with the correct PTS. We also basically allow the VO
to request OSD/subtitles at any time.
OSX changes untested.
Do two things:
1. add locking to struct osd_state
2. make struct osd_state opaque
While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses
lots of osd_state (and osd_object) members. To make sure everything is
accessed synchronously, I prefer making osd_state opaque, even if it
means adding pretty dumb accessors.
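The accessor pattern is the obvious one; a sketch with an example field (the
real osd_state hides many members behind such accessors):

    #include <pthread.h>

    struct osd_state_sketch {
        pthread_mutex_t lock;
        double progbar_value;   // example member, formerly accessed directly
    };

    void osd_set_progbar_value(struct osd_state_sketch *osd, double value)
    {
        pthread_mutex_lock(&osd->lock);
        osd->progbar_value = value;
        pthread_mutex_unlock(&osd->lock);
    }

    double osd_get_progbar_value(struct osd_state_sketch *osd)
    {
        pthread_mutex_lock(&osd->lock);
        double r = osd->progbar_value;
        pthread_mutex_unlock(&osd->lock);
        return r;
    }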
All of this is meant to allow running VOs in their own threads.
Eventually, VOs will request OSD on their own, which means osd_state
will be accessed from foreign threads.