Add an infrastructure for collecting performance-related data and use it
in some places. Add rendering of the collected data to stats.lua.
There were two main goals: minimal impact on the normal code and on
normal playback. So all these stats_* function calls either happen only
during initialization, or return immediately if no stats collection is
going on. That's why stats entries are added lazily, and so on. (A first
iteration made each stats entry an API object, instead of just a single
stats_ctx, but I thought that was getting too intrusive in the "normal"
code, even if everything gets worse inside of stats.c.)
You could get most of this information from various profilers (including
the extremely primitive --dump-stats thing in mpv), but this makes it
easier to see the most important information at once (at least in
theory), partially because we know best about the context of various
things.
Not very happy with this. It's all pretty primitive and dumb. At this
point I just wanted to get it over with, without necessarily having to
revisit it later, but while still getting my stupid statistics.
Somehow the code feels terrible. There are a lot of meh decisions in
there that could be better or worse (but mostly could be better), and it
just sucks but it's also trivial and uninteresting and does the job. I
guess I hate programming. It's so tedious and the result is always shit.
Anyway, enjoy.
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
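For illustration only (the option name, field, and values below are made
up, and the real macros differ in detail), the change looks roughly like
this:

    /* Old style: the OPT_* macro expanded to the entire m_option
     * initializer, with everything passed as macro arguments: */
    OPT_INT("frobnicate", frobnicate, 0),

    /* New style: the declaration starts with the name; the macro fills
     * in .offset and .type only, and everything else is a plain
     * designated initializer: */
    {"frobnicate", OPT_INT(frobnicate), .min = 0, .max = 100},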
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). This change fixes that as well.
check_obj_resize() in sub/osd.c calls mp_client_broadcast_event(), which
calls notify_property_events(). This is pretty unexpected, because
check_obj_resize() may be called from the VO thread. While that's sort
of awful, it seems to be OK locking-wise. But it breaks an assumption in
notify_property_events() that the core doesn't need to be woken up,
which could possibly lead to a missed/delayed property update (although
rather unlikely).
Fix this by explicitly waking up the core when it's called from the OSD
code.
libass had an API to configure this since 2013. mpv always used
ASS_FONTPROVIDER_AUTODETECT, because usually there's little reason to
use anything else. The intention of the newly added option is to allow
users to disable the use of system fonts.
I didn't consider it worth the trouble to add the coretext and
directwrite enum items from ASS_DefaultFontProvider. The "auto" choice
will have the same effect if they're available. Also, the part of the
code which defines the option does not necessarily have libass available
(it's still optional!), so defining all enum items as choices is icky. I
still added fontconfig, since that may be nice to emulate a nostalgic
2010 feeling of mpv freezing on fontconfig.
The option for OSD is even less useful. (But you get it for free, and
why pass up a chance to add yet another useless option?)
This is not quite what was requested in #6947, but as close as it gets.
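Roughly, the option value just selects which ASS_DefaultFontProvider is
passed to ass_set_fonts(); a sketch (the choice constants and the
surrounding function are made up, while the libass call and enum values
are real):

    #include <ass/ass.h>

    // Sketch: map the option's choice value to libass' provider enum.
    static void set_libass_fonts(ASS_Renderer *renderer, int choice)
    {
        int provider = ASS_FONTPROVIDER_AUTODETECT;   // the "auto" choice
        if (choice == 1)
            provider = ASS_FONTPROVIDER_NONE;         // disable system fonts
        else if (choice == 2)
            provider = ASS_FONTPROVIDER_FONTCONFIG;   // for the nostalgia
        ass_set_fonts(renderer, NULL, "sans-serif", provider, NULL, 1);
    }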
Remove them from the big MPOpts struct and move them to their sub
structs. In the places where their fields are used, create a private
copy of the structs, instead of accessing the semi-deprecated global
option struct instance (mpv_global.opts) directly.
This actually makes accessing these options finally thread-safe. They
weren't, even though they should have been for years. (Including some
potential for undefined behavior when e.g. the OSD font was changed at
runtime.)
This is mostly transparent. All options get moved around, but most users
of the options just need to access a different struct (changing sd.opts
to a different type changes a lot of uses, for example).
One thing which has to be considered and could cause potential
regressions is that the new option copies must be explicitly updated.
sub_update_opts() takes care of this for example.
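Roughly, each user holds a private copy of the option struct (via the
m_config_cache mechanism) and refreshes it explicitly; a simplified
sketch (struct layout and field names approximate):

    // Sketch: explicit update of a private option copy. Writing to the
    // copy directly would not propagate anywhere, which is why e.g. the
    // sub-step command needs special handling.
    struct sd {
        struct m_config_cache *opts_cache;
        struct mp_sub_opts *opts;   // approximate type name
    };

    static void sub_update_opts_sketch(struct sd *sd)
    {
        if (m_config_cache_update(sd->opts_cache)) {
            sd->opts = sd->opts_cache->opts;
            // ... re-apply anything derived from the options ...
        }
    }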
Another thing is that writing to the option structs manually won't work,
because the changes won't be propagated to other copies. Apparently the
only affected case is the implementation of the sub-step command, which
tries to change sub_delay. Handle this one explicitly (osd_changed()
doesn't need to be called anymore, because changing the option triggers
UPDATE_OSD, and updates the OSD as a consequence). The way the option
value is propagated is rather hacky, but for now this will do.
Subtitle timing handling was split at least across osd.c and sd_ass.c/sd_lavc.c. sd_lavc.c
actually ignored most of the more obscure subtitle timing things.
There's no reason for this - just move it all to dec_sub.c (mostly from
sd_ass.c, because it has some of the most complex stuff).
Now timestamps are transformed as they enter or leave dec_sub.c.
There appear to have been some subtle mismatches about how subtitle
timestamps were transformed, e.g. sd_functions.accepts_packet didn't
apply the subtitle speed to the timestamp. This patch should fix them,
although it's not clear if they caused actual misbehavior.
The semantics of SD_CTRL_SUB_STEP are slightly changed, which is the
reason for the changes in command.c and sd_lavc.c.
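Conceptually, the transformation at the dec_sub.c boundary folds the
timestamp offset, subtitle delay and subtitle speed into one place; a
sketch (field names are approximate, and the exact combination in the
real code may differ):

    // Sketch only: map video timestamps to subtitle time and back.
    static double pts_to_subtitle(struct dec_sub *sub, double pts)
    {
        if (pts != MP_NOPTS_VALUE)
            pts = (pts - sub->video_offset - sub->sub_delay) / sub->sub_speed;
        return pts;
    }

    static double pts_from_subtitle(struct dec_sub *sub, double pts)
    {
        if (pts != MP_NOPTS_VALUE)
            pts = pts * sub->sub_speed + sub->video_offset + sub->sub_delay;
        return pts;
    }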
If a VO-area option changes, gl_video_resize() is called
unconditionally. This function does something even if the size does not
change (at least it discards buffered frames for interpolation), which
can lead to stutter when you keep firing option change events during
playback.
Check for an actual resize, and if nothing changes, exit early.
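The check amounts to an early return when the source/target rectangles
and the OSD size are unchanged; a sketch (the struct members are
approximate):

    // Sketch of the early exit: if nothing changed, don't discard the
    // buffered frames used for interpolation.
    void gl_video_resize_sketch(struct gl_video *p,
                                struct mp_rect *src, struct mp_rect *dst,
                                struct mp_osd_res *osd)
    {
        if (mp_rect_equals(&p->src_rect, src) &&
            mp_rect_equals(&p->dst_rect, dst) &&
            osd_res_equals(p->osd_rect, *osd))
            return;

        p->src_rect = *src;
        p->dst_rect = *dst;
        p->osd_rect = *osd;
        // ... the actual resize work ...
    }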
All contributors of the code used for these files agreed to the LGPL
relicensing.
There are some unaccounted contributors, but all of their code was
completely removed before. (The only exception is one contributor whose
only line left was "#include <string.h>". I don't know if that's
copyrightable, but it wasn't needed anyway, so just remove it.)
These files started out as libvo/sub.* (renamed to sub/sub.*, then
renamed again to sub/osd.*). They used to contain code for rendering
the OSD (as in, actual pixel manipulation and text layouting). But
later all this code was dropped, and libass was used to render the OSD
instead. Actual subtitle rendering was reimplemented in other files
(the old subtitle rendering path is completely gone).
One potential problem are the option declarations, which makes this
harder, as these options involve more history. But it turns out most of
them were reimplemented since 80270218cb, rather than taken from old
code. (Although not all - but the rest covered by relicensing
agreements.)
This also affects osd_state.h, which was apparently incorrectly implied
to be LGPL.
To make it easier on the eyes, multi-line subtitles should
be left-justified (for most languages).
This adds an option to define how subtitles are to be justified
independently of how they are aligned.
Also add an option to allow --sub-justify to be applied to ASS subtitles.
If a VO is created, but no video is playing (i.e. --force-window is
used), then until now no subtitles were shown. This is because VO
subtitle display normally depends on video frame timing. If there are no
video frames, there can be no subtitles.
Change this and add some code to handle this situation specifically. Set
a subtitle PTS manually and request VO redrawing manually, which gets
the subtitles rendered somehow.
This is kind of shaky. The subtitles are essentially sampled at
arbitrary times (such as when new audio data is decoded and pushed to
the AO, or on user interaction). To make it slightly more consistent,
force a completely arbitrary minimum FPS of 10.
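Roughly, the core's sleep time is clamped so the subtitle state gets
resampled often enough (illustrative only; the names below are made up):

    // Illustrative: clamp how long the playloop may sleep while only
    // subtitles (no video frames) are being displayed, so they get
    // resampled at least ~10 times per second.
    #define SUB_ONLY_MIN_FPS 10.0

    static double limit_sleep_for_sub_only(double sleeptime)
    {
        double max_sleep = 1.0 / SUB_ONLY_MIN_FPS;
        return sleeptime < max_sleep ? sleeptime : max_sleep;
    }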
Other solutions (such as creating fake video) would be more intrusive or
would require VO-level API changes.
Fixes #3684.
Remove wrapper properties for OSD and video position updates, use the
new mechanism for them. We can mark the options directly. Update
behavior will work for more options (since I've casually marked more
affected options than the old less direct mechanism covered).
If --blend-subtitles=yes is given, vo_opengl will call osd_draw()
multiple times, once for subtitles, and once for OSD. This meant that
the want_redraw flag was reset before the OSD was rendered, which in
turn meant that update_osd() was never called. It seems like removing
the per-OSD object want_redraw wasn't such a good idea. Fix it by
reintroducing such a flag for OSDTYPE_OSD only.
Also, the want_redraw flag is now unused, so kill it.
Another regression caused by commit 9c9cf125. Fixes #3535.
This could in theory lead to missed updates if subtitles were switched
or external OSD overlays (via overlay-add) were updated. While the
change IDs of each of those were consistent, switching between two
separate OSD sources is not, and we have to explicitly trigger a change.
Regression since commit 9c9cf125. The new code is actually better,
because we do exactly what is needed, and don't just mess with the
update ID for libass-based OSD.
Remove the per-part force_redraw flags, and instead make the difference
between flagging dirty state and returning it to the player frontend
more explicit. The big issue is that 1. the OSD needs to know the dirty
state, and it should be cleared strictly when it is re-rendered
(force_redraw flag), and 2. the player core needs to be notified once,
and the notification must be reset (want_redraw flag).
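In other words, two separate flags with distinct reset points; a sketch
(names follow the description above, the structure is simplified):

    struct osd_object_sketch {
        bool force_redraw;  // "dirty"; cleared only when re-rendered
        bool want_redraw;   // "notify core once"; cleared when queried
    };

    static void mark_changed(struct osd_object_sketch *obj)
    {
        obj->force_redraw = true;
        obj->want_redraw = true;
    }

    static bool query_and_reset_want_redraw(struct osd_object_sketch *obj)
    {
        bool r = obj->want_redraw;
        obj->want_redraw = false;
        return r;
    }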
The call in loadfile.c is replaced by making osd_set_sub() set the
change flag. Increasing the change flag on dirty state (the force_redraw
check in render_object()) should not be needed, because OSD part
renderers set it correctly (at least now).
Doing this just because someone pointed this out.
This affects VOs (or other code which render OSD) which does not support
the LIBASS format, but only RGBA. Instead of having a converter stage in
osd.c, make mp_ass_packer_pack() output directly in RGBA.
In general, this is work towards refcounted subtitle images.
Although we could keep the "converter" design, doing it this way seems
simpler, at least considering the current situation with only 2 OSD
formats. It also prevents copying & packing the data twice, which will
lead to better performance. (Although I guess this case is not important
at all.)
It also fixes --force-rgba-osd-rendering when used with vo_opengl,
vo_vdpau, and vo_direct3d.
The intention is to let mp_ass_packer_pack() produce different output
for the RGBA and LIBASS formats. VOs (or whatever generates the OSD)
currently do not signal a preferred format, and this mechanism just
exists to switch between RGBA and LIBASS formats correctly, preferring
LIBASS if the VO supports it.
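The selection itself is trivial; a sketch (the flag indicating what the
consumer supports is illustrative, while SUBBITMAP_LIBASS/SUBBITMAP_RGBA
are the existing format identifiers):

    // Prefer the LIBASS format if the consumer can take it, otherwise
    // fall back to packed RGBA.
    static int pick_packer_format(bool supports_libass)
    {
        return supports_libass ? SUBBITMAP_LIBASS : SUBBITMAP_RGBA;
    }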
Implement it directly in sd_lavc.c as well. Blurring requires extending
the size of the sub-images by the blur radius. Since we now want
sub_bitmaps to be packed into a single image, and we don't want to
repack for blurring, we add some extra padding to each sub-bitmap in the
initial packing, and then extend their size later. This relies on the
previous bitmap_packer commit, which always adds the padding in all
cases.
Since blurring is now done on parts of a large bitmap, the data pointers
can become unaligned, depending on their position. To avoid shitty
libswscale printing a dumb warning, allocate an extra image, so that the
blurring pass is done on two newly allocated images. (I don't find this
feature important enough to waste more time on it.)
The previous refactor accidentally broke this feature due to a logic bug
in osd.c. It didn't matter before it happened to break, and doesn't
matter now since the code paths are different.
Until now, subtitle renderers could export SUBBITMAP_INDEXED, which is
an 8-bit-per-pixel paletted format. sd_lavc.c was the only renderer
doing this, and the result was converted to RGBA in every use-case
(except maybe when the subtitles were hidden.)
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side-
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and for the libass
format the same will be done. The plan is to remove manual packing from
the VOs which need single images entirely.
Glitches when resizing are still possible, but are reduced. Other VOs
could support this too, but don't need to do so.
(Totally avoiding glitches would be much more effort, and probably not
worth the trouble. How about you just watch the video the player is
playing, instead of spending your time resizing the window.)
No need to have them everywhere. The only exception/annoyance is
MAX_OSD_PARTS, which is now basically duplicated (and at runtime
initialization is checked with an assert()).
Until now, there was only 1 global ASS overlay that could be set by all
scripts. This was often perceived as bug when multiple scripts tried to
set their own ASS overlay.
This was kind of hard to solve because the script could set its own ASS
PlayResX/Y, which makes it impossible to share a single ASS_Renderer for
multiple scripts. The OSC unfortunately makes use of this feature (and
unfortunately can't be fixed because it's a POS), so we're stuck with
this complication.
Implement the worst-case solution and fix this by creating separate ASS
track and renderer objects for each script that wants to set an ASS
overlay.
The z-order is decided by the order in which the scripts first set their
text. This is essentially random, unless you do it at script init and
pass the scripts in a specific order. Script initialization is currently
serialized (as a feature), so the first loaded script gets lowest
Z-order.
The Lua script API interestingly remains the same. (And also will remain
undocumented, unsupported, and potentially volatile.)
Do not scale OSD mouse input to the ASS OSD script resolution. The
original idea of this mechanism was that the user doesn't have to care
about the actual resolution of anything, and can just use the OSD
resolution consistently. But this made things worse.
Remove the implicit scaling, and always use the screen resolution.
(Except with --vo=xv, where additional scaling is forced upon
everything.)
Drop get_osd_resolution(). There is no replacement. Rename
get_screen_size() and get_screen_margins() to use "osd" instead of
"screen". For anything but --vo=xv these are equivalent, but with
--vo=xv the OSD resolution has additional implicit scaling.
Add code to osc.lua which emulates the old behavior.
Note that none of the changed functions were public API, so implicit
breakage of scripts which used them is just going to happen.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
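For reference, converting a display aspect ratio to a sample (pixel)
aspect ratio is just a reduced integer fraction; a standalone example
using libavutil's av_reduce():

    #include <stdint.h>
    #include <limits.h>
    #include <libavutil/rational.h>

    // Given storage size w x h and display aspect d_w:d_h, the sample
    // aspect ratio is (d_w * h) : (d_h * w). For example, 720x576 with
    // a 16:9 display aspect gives 16*576 : 9*720 = 64:45.
    static void dar_to_sar(int w, int h, int d_w, int d_h,
                           int *sar_num, int *sar_den)
    {
        av_reduce(sar_num, sar_den, (int64_t)d_w * h, (int64_t)d_h * w,
                  INT_MAX);
    }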
Reduces memory usage and startup times. The implementation is a bit
weird, because both OSD parts have conflicting requirements on the used
ASS styles.
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
Remove all the unnecessary crap from the rest of the code.
Use the demux_set_ts_offset() added in the previous commit to base each
timeline segment to use timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
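A sketch of the rebasing (struct and field names approximate):

    // Approximate sketch: give each segment's demuxer a timestamp
    // offset so packets come out already positioned on the overall
    // timeline, instead of fixing up every timestamp afterwards.
    for (int n = 0; n < timeline->num_parts; n++) {
        struct timeline_part *part = &timeline->parts[n];
        demux_set_ts_offset(part->source, part->start - part->source_start);
    }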
If the aspect ratio of the video resolution and the subtitle resolution
(the implied subtitle coordinate system) mismatch, the subtitles
obviously can't be overlaid over the video perfectly. Either you get
video that can't be covered by subtitles, or the subtitles could go
beyond the video. We don't want to stretch the subtitle to compensate
for the aspect ratio, because it would look terrible.
Until now, mpv used to fit the subtitle rectangle into the video
rectangle (letterboxing/pillarboxing). This looks odd with some sample
files whose subtitle canvas is wider than the video. Also, mpc-hc
displays them in a better way. vlc stretches them, which looks bad.
While you probably can't win this game with all those broken files
around, pick the mpc-hc method to handle this.
Nobody wanted to restore this, so it gets the boot.
If anyone still wants to volunteer to restore menu support, this would
be welcome. (I might even try it myself if I feel masochistic and like
wasting a lot of time for nothing.) But if it does get restored, it
should be done differently. There were many stupid things about how it
was done. For example, it somehow tried to pull mp_nav_events through
all the layers (including needing to "buffer" them in the demuxer),
which was needlessly complicated. It could be done simpler.
This code was already inactive, so this commit actually changes nothing.
Also keep in mind that normal DVD/BD playback still works.
This has a number of user-visible changes:
1. A new flag blend-subtitles (default on for opengl-hq) to control this
behavior.
2. The OSD itself will not be color managed or affected by
gamma controls. To get subtitle CMS/gamma, blend-subtitles must be
used.
3. When enabled, this will make subtitles be cleanly interpolated by
:interpolation, and also dithered etc. (just like the normal output).
Signed-off-by: wm4 <wm4@nowhere>
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases subtitles have such scrolls
or axis-aligned movement, modern libass rarely signals this kind of
change. Possibly this is because of sub-pixel handling and such, which
break this.
As such, it's a worthless optimization and just introduces additional
complexity and subtle bugs (especially in cases libass does the
opposite: incorrectly signaling a position change only, which happened
before). Remove this optimization, and rename bitmap_pos_id to
change_id.
You can set in which "corner" the OSD and subtitles are shown. I'd
prefer it a bit more general (so you could set the alignment using
a factor), but the libass API does not provide this.
Until now, they used exactly the same defaults for the styling options.
The defaults were shared, so it was impossible to have different
defaults. Change this. This requires duplicating the full default
struct, even for settings that are the same. The list of options is
still shared, though.
Getting subtitle scaling and positioning right even if there are video
filters, which completely change the image (like cropping), doesn't seem
to have a single, correct solution. To some degree, the results are
arbitrary, so we may as well do what is most useful to the user.
In this case, if the PGS resolution aspect ratio and the video output
aspect ratio mismatch, letter-box it, instead of stretching the subs
over the video frame. (This will require additional fixes, should it
turn out that there are PGS subtitles which are stretched by design.)
Fixes #1205.
Simple fix for issue #1137.
Since all sub-bitmaps are packed on a larger texture, there's still a
"fall off" on the border due to the linear scaling. This could be
fixed by constraining each sub-bitmap to its own texture, or by
clamping on the shader level, but I don't care for now.
Merges pull request #1094, with some minor changes. mpv expects IEEE,
and IEEE allows divisions by 0 for floats, so these shouldn't actually
be a problem, but do it anyway for the sake of clang.
Signed-off-by: wm4 <wm4@nowhere>
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
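After this change, callers are expected to check for a NULL result and
degrade gracefully; a simplified caller-side sketch (mpv headers
assumed):

    // Allocation failure no longer aborts, so a NULL result must be
    // checked and the frame (or screenshot, etc.) skipped.
    struct mp_image *filter_output_sketch(int imgfmt, int w, int h)
    {
        struct mp_image *img = mp_image_alloc(imgfmt, w, h);
        if (!img)
            return NULL;   // caller treats this as skipped output / EOF
        // ... fill and return the image ...
        return img;
    }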