Until now, bitmap subtitles were decoded at "some" point, and then
simply replaced the old subtitle. Although the subtitle is selected
by time (PTS), it could happen that a subtitle was replaced too early.
One consequence is that this might lead to flicker even if the
subtitles are timed to follow each other without a gap (although most
subtitles are explicitly timed to introduce such a gap). With this
commit the past 4 subtitles are kept (instead of 1), so that the
correct one can be picked by time. This should fix the aforementioned
cases, but more importantly will allow demuxing/decoding and video
display to be somewhat asynchronous.
Still missing: somehow making sure the correct range of decoded
subtitles is available, instead of just passing along whatever comes
from the demuxer, and hoping that 4 queued subtitles are enough. But it
should certainly be good enough for now.
This removes a check that resets the subtitles if the PTS is 5 minutes
before the end of the current subtitle; this is probably not needed.
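A minimal sketch of the idea, with hypothetical names (not mpv's actual
code): keep a small fixed-size queue of decoded subtitles and select by
PTS, instead of overwriting the single current subtitle:

    #include <string.h>

    #define MAX_SUB_QUEUE 4

    struct sub_frame {
        double pts_start, pts_end;   // display interval of this subtitle
        void *bitmap;                // decoded subtitle image
    };

    struct sub_queue {
        struct sub_frame frames[MAX_SUB_QUEUE];
        int num_frames;
    };

    // Drop the oldest entry if full, then append the new subtitle.
    static void sub_queue_push(struct sub_queue *q, struct sub_frame f)
    {
        if (q->num_frames == MAX_SUB_QUEUE) {
            memmove(&q->frames[0], &q->frames[1],
                    (MAX_SUB_QUEUE - 1) * sizeof(q->frames[0]));
            q->num_frames -= 1;
        }
        q->frames[q->num_frames++] = f;
    }

    // Pick the subtitle whose interval contains the video PTS, so a
    // subtitle is never replaced before its end time has passed.
    static struct sub_frame *sub_queue_get(struct sub_queue *q, double pts)
    {
        for (int n = q->num_frames - 1; n >= 0; n--) {
            struct sub_frame *f = &q->frames[n];
            if (pts >= f->pts_start && pts < f->pts_end)
                return f;
        }
        return NULL;
    }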
This fixes the build on platforms where the atomic calls can't be turned
into lock-free instructions and thus need the external libatomic library
(e.g. mips).
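The usual way to detect this is to try linking a tiny test program once
without and once with -latomic; a sketch of such a test (not the literal
configure check):

    #include <stdatomic.h>

    int main(void)
    {
        // Compiles everywhere, but on targets without lock-free 64-bit
        // atomics (e.g. mips) it only links if libatomic supplies the
        // out-of-line implementation.
        _Atomic unsigned long long x = 0;
        atomic_fetch_add(&x, 1);
        return (int)x;
    }

If the plain link fails but the -latomic link succeeds, the build adds
the library.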
Make sure every video filter has valid parameters for input and output.
(This also ensures we don't take possibly invalid decoder output, or
feed invalid decoder/filter output to VOs.)
Also, the updated image size check now (almost) works like the
corresponding check in FFmpeg.
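Roughly, the kind of size check now applied at every filter boundary
(a sketch with hypothetical names; the real check also validates the
image format):

    #include <stdbool.h>
    #include <stdint.h>
    #include <limits.h>

    static bool image_size_valid(int w, int h)
    {
        if (w <= 0 || h <= 0)
            return false;
        // Cap the total pixel count, in the same spirit as FFmpeg's
        // av_image_check_size(), so later stride/size computations
        // can't overflow.
        return (int64_t)w * h <= INT_MAX / 8;
    }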
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics (a short usage example
follows the list):
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
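So callers of these functions now have to do something like this
(mp_image_alloc is from the list above; what exactly to do on failure
is up to the caller):

    struct mp_image *img = mp_image_alloc(imgfmt, w, h);
    if (!img)
        return NULL;  // skip output; for filters, the core treats the
                      // missing output as EOF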
The error log callback was not thread-safe and not library-safe. And
apparently there were some other details that made it not library-safe,
such as a global lcms plugin registry.
Switch to the thread-safe API provided by lcms2 starting with 2.6.
Remove our approximate thread-safety hacks.
Note that lcms basically provides 2 APIs now, the old functions, and
the thread-safe alternatives whose names end with THR. Some functions
don't change, because they already have a context of some sort. Care
must be taken not to accidentally use old APIs.
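For illustration, the general pattern with the 2.6 API looks like this
(error handling omitted):

    #include <lcms2.h>

    static void lcms_error_cb(cmsContext ctx, cmsUInt32Number code,
                              const char *msg)
    {
        // Per-context user data instead of global state.
        void *priv = cmsGetContextUserData(ctx);
        // ... route msg to the caller's log via priv ...
    }

    void example(void *priv, const void *icc_data, cmsUInt32Number size)
    {
        cmsContext ctx = cmsCreateContext(NULL, priv);
        cmsSetLogErrorHandlerTHR(ctx, lcms_error_cb);
        cmsHPROFILE profile = cmsOpenProfileFromMemTHR(ctx, icc_data, size);
        // ... create transforms with cmsCreateTransformTHR(ctx, ...) ...
        cmsCloseProfile(profile);
        cmsDeleteContext(ctx);
    }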
When the reader is out of data, it tries to wake up the cache thread to
get more data. In theory, there's a small race condition, which could
cause the cache to miss the wakeup and sit idle instead of reacting.
This most certainly didn't cause real issues: even if this extremely
unlikely race condition happens, the cache won't idle for longer than
1 second (the hardcoded cache idle time).
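The race-free pattern is to set the flag and signal while holding the
same mutex the cache thread checks under; a generic sketch:

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;
    static bool need_data;

    // Reader side: request more data.
    static void reader_wakeup(void)
    {
        pthread_mutex_lock(&lock);
        need_data = true;
        pthread_cond_signal(&wakeup);  // can't be missed: the cache
                                       // checks need_data under lock
        pthread_mutex_unlock(&lock);
    }

    // Cache side: even a lost wakeup would idle for at most 1 second.
    static void cache_wait(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += 1;  // the hardcoded cache idle time
        pthread_mutex_lock(&lock);
        while (!need_data) {
            if (pthread_cond_timedwait(&wakeup, &lock, &ts))
                break;  // timed out; re-check state anyway
        }
        need_data = false;
        pthread_mutex_unlock(&lock);
    }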
OSD used to be not thread-safe at all, so a trick was used to get it
redrawn. This mostly reverts commit 6a2a8880, because OSD not being
thread-safe was the non-trivial part of it.
Mostly untested, because this code path is used on OSX only, and I don't
have OSX.
The main issue was actually that the OSD callback locked the subtitle
decoder, which does not necessarily work, because the OSD code is
already allowed to lock it. The state was already protected by
unsetting the callback (which involves the OSD lock). So, in summary,
this is probably just a cleanup.
We certainly don't want to maintain and improve the internal converter,
but we still need the internal one for Libav. (In the Libav case,
demux_subreader.c will be used to read the MicroDVD file.)
Let the VOs draw the OSD on their own, instead of making OSD drawing a
separate VO driver call. Further, let it be the VO's responsibility to
request subtitles with the correct PTS. We also basically allow the VO
to request OSD/subtitles at any time.
OSX changes untested.
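Schematically, a VO's page flip now looks something like this (all names
hypothetical; only the ordering and ownership matter):

    struct vo;  // stand-in for the VO instance

    void draw_frame(struct vo *vo);               // render the video frame
    void draw_osd_at(struct vo *vo, double pts);  // VO picks the PTS
    void present(struct vo *vo);                  // swap buffers / commit

    void flip_page(struct vo *vo, double frame_pts)
    {
        draw_frame(vo);
        // No separate OSD driver call from the playback core anymore;
        // the VO requests OSD/subtitles for the PTS it is displaying,
        // and may do so again at any time (e.g. to redraw on expose).
        draw_osd_at(vo, frame_pts);
        present(vo);
    }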
Subsurfaces are only used by the wayland VO, so it makes sense to move
all OSD- and subsurface-specific parts to vo_wayland.c.
Also, destroy the subsurfaces and the subcompositor properly.
Not all hardware supports kCGLPFASupportsAutomaticGraphicsSwitching
(apparently Mid-2010 and earlier MacBooks do not work with it), so fall
back to not asking for this attribute in the GL pixel format.
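A sketch of the fallback, using the CGL C API for illustration (the
actual Cocoa code differs in detail):

    #include <OpenGL/OpenGL.h>

    static CGLPixelFormatObj choose_pixel_format(void)
    {
        CGLPixelFormatAttribute attrs[] = {
            kCGLPFADoubleBuffer,
            kCGLPFASupportsAutomaticGraphicsSwitching,
            (CGLPixelFormatAttribute)0,
        };
        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || !pix) {
            // Retry without the attribute on hardware that rejects it.
            attrs[1] = (CGLPixelFormatAttribute)0;
            CGLChoosePixelFormat(attrs, &pix, &npix);
        }
        return pix;
    }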
Well, not sure if this really improves anything, but at least it's less
of a WTF to the playback core than always returning the same timestamp
for every frame.
There's really no need to convert this to float and then back. This is
mostly cosmetic; double precision was probably enough to avoid rounding
issues anyway.
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.
Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases, the name stays the same, even
though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
affects the user-visible names too; this might cause confusion.
These suffixes are annoying when they're redundant, so strip them
automatically. On little endian machines, always strip the "le" suffix,
and on big endian machines vice versa (although I don't think anyone
ever tried to run mpv on a big endian machine).
Since pixel format strings are returned by a certain function and we
can't just change static strings, use a trick to pass a stack buffer
transparently. But this also means the string can't be permanently
stored by the caller, so vf_dlopen.c has to be updated. There seems
to be no other case where this is done, though.
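The trick, roughly (names hypothetical): a macro passes a compound
literal as the output buffer, so each call site gets its own temporary
stack buffer, and the returned pointer must not escape the caller:

    #include <stddef.h>

    const char *imgfmt_to_name_buf(char *buf, size_t size, int fmt);

    // (char[32]){0} is a distinct automatic buffer at every call site.
    #define imgfmt_to_name(fmt) imgfmt_to_name_buf((char[32]){0}, 32, (fmt))

    // OK:    printf("%s\n", imgfmt_to_name(fmt));
    // Wrong: storing the returned pointer for later use (it points
    //        into stack memory that goes away).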
Not sure how this symbol becomes visible in glibc (probably accidental
or mandatory recursive inclusion via the other standard or Linux-
specific headers), but normally this include file is needed to get the
symbol.
The "classic" sub-option stuff is not really needed anymore. The only
remaining use can be emulated in a simpler way. But note that this
breaks the --screenshot option (instead of the "flat" options like
--screenshot-...). This was undocumented and discouraged, so it
shouldn't affect anyone.
Instead of abusing m_option to store the property list, introduce a
separate type for properties. m_option is still used to handle data
types. The property declaration itself now never contains the option
type, and instead it's always queried with M_PROPERTY_GET_TYPE. (This
was already done with some properties, now all properties use it.)
This also fixes that the function signatures did not match the function
type with which these functions were called. They were called as:
int (*)(const m_option_t*, int, void*, void*)
but the actual function signatures were:
int (*)(m_option_t*, int, void*, MPContext *)
Two arguments were mismatched.
This adds one line per property implementation. Together with the
reordering of the parameters, this accounts for most of the changes in
this commit.
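A hedged sketch of the separation (field names illustrative): the
property list gets its own entry type, so handlers are declared and
called with one exactly-matching signature:

    struct m_property {
        const char *name;
        // Every property handler has exactly this type; no more calling
        // through a mismatched (undefined-behavior) function pointer.
        int (*call)(void *ctx, struct m_property *prop,
                    int action, void *arg);
    };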
This means use of the min/max fields can be dropped for the flag option
type, which makes some things slightly easier. I'm also not sure if the
client API handled the case of a flag not being 0 or 1 correctly, and this
change gets rid of this concern.
Also clarify the semantics.
It seems --idx didn't do anything. Possibly it used to change how the
now removed legacy demuxers like demux_avi used to behave. Or maybe
it was accidental.
--forceidx basically becomes --index=force. It's possible that new
index modes will be added in the future, so I'm keeping it
extensible, instead of e.g. creating --force-index.
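For illustration, the extensible shape of the new option's values (a
sketch, not the actual option table):

    enum index_mode {
        INDEX_MODE_DEFAULT,  // use the file's index as the demuxer sees fit
        INDEX_MODE_FORCE,    // --index=force (the old --forceidx)
        // future index modes can be added here without new options
    };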
Probably "needed" to get the correct alignment, although I'm not aware
of actual breakages or performance issues.
In fact we should probably always just allocate AVPackets, but for now
use the simple fix.
FFmpeg requires a bullshit padding after each input buffer, and they
just increased that padding without warning and without ABI or API bump.
We need this only in one file (although mp_image hardcodes something
similar, for which no FFmpeg API define is available), so drop our own
define.
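What the padding requirement means in practice (a minimal sketch;
FF_INPUT_BUFFER_PADDING_SIZE is the FFmpeg define in question, renamed
to AV_INPUT_BUFFER_PADDING_SIZE in later FFmpeg versions):

    #include <stdlib.h>
    #include <string.h>
    #include <libavcodec/avcodec.h>

    // Every buffer fed to an FFmpeg decoder needs this much zeroed
    // padding after the data, or the decoder may read past the end.
    static uint8_t *alloc_decoder_input(size_t size)
    {
        uint8_t *buf = malloc(size + FF_INPUT_BUFFER_PADDING_SIZE);
        if (buf)
            memset(buf + size, 0, FF_INPUT_BUFFER_PADDING_SIZE);
        return buf;
    }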