So that the video FPS is not required at initialization, and can be set
later.
(As for whether this MicroDVD crap is worth the trouble of handling
"correctly": MicroDVD files are unfortunately still around, and in at
least one case using the video FPS did indeed seem to help.)
Always preroll by default if the cue (index) information indicates
overlapping subtitles.
Increase the maximum amount of data it will skip to get such subtitles
to 10 seconds. Since the index information can reliably tell whether
reading earlier is needed, the maximum should rarely actually be used,
thus we can set it high. On the other hand, the "old" prerolling
mechanism always has to skip the maximum amount of data; thus the method
using the index gets its own option to control the maximum amount of
data to skip.
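For reference, the relevant knobs as they would appear in mpv.conf;
treat the exact option names and defaults as an approximation of the
manual rather than as authoritative:

    demuxer-mkv-subtitle-preroll=index          # preroll when the cue index indicates overlap
    demuxer-mkv-subtitle-preroll-secs=1         # cap for the old, always-skipping mechanism
    demuxer-mkv-subtitle-preroll-secs-index=10  # cap for the index-based mechanism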
(As more and more files are muxed with newer mkvtoolnix versions, and
with this new and hopefully sane default established, these options can
probably be removed in the future.)
Untested, but should be fine. Broken by commit 0a0bb905.
Also fix the include statement in context_rpi.c, which caused another
compilation failure. Also untested. (Because I'm lazy.)
Fixes #2638.
Keeping ASS_Renderers around for a potentially large number of subtitle
tracks could lead to excessive memory usage, especially since the libass
cache is broken (caches even unneeded data), and might consume up to
~500MB of memory for no reason.
This includes the case of switching ordered chapter boundaries. It will
now be recreated on each timeline part switch. This shouldn't be much of
a problem with modern libass. (Older libass versions use fontconfig for
memory fonts, and will be very slow to reinitialize memory fonts.)
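As a rough sketch of the lifetime change, in terms of the public libass
API (the wrapper function and font settings are made up; this is not the
actual mpv code):

    #include <ass/ass.h>

    /* Recreate the renderer on each timeline part switch instead of keeping
     * one ASS_Renderer (and its cache) per subtitle track around forever. */
    static ASS_Renderer *recreate_ass_renderer(ASS_Library *lib, ASS_Renderer *old)
    {
        if (old)
            ass_renderer_done(old);     /* also drops the cached data */
        ASS_Renderer *r = ass_renderer_init(lib);
        if (r)
            ass_set_fonts(r, NULL, "sans-serif", 1, NULL, 1); /* slow on old libass */
        return r;
    }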
Since commit 6d9cb893, subtitle state doesn't survive timeline switches
(ordered chapters etc.). So there is no point in caching the state per
sh_stream anymore (which would be required to deal with multiple
segments). Move the cache to struct track.
(Whether it's worth caching the subtitle state just for the situation
when subtitle tracks get reselected is questionable. But for now, it's
nice to have the subtitles immediately show up when reselecting a
subtitle.)
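In other words, the cached state now hangs off the track rather than
the stream, roughly like this (the field names are assumptions):

    struct sh_stream;   /* demuxer stream, opaque here */
    struct dec_sub;     /* subtitle decoder state, opaque here */

    struct track {
        struct sh_stream *stream;
        /* cached subtitle decoder state; kept only so that reselecting the
         * track shows subtitles immediately, not to handle multiple segments */
        struct dec_sub *d_sub;
    };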
For files with only 1 chapter, the "cycle" command was ignored. Reenable
it, but don't let it terminate playback of the file.
For the full story, see #2550.
Fixes #2550.
OK, this made --sub-paths and --audio-file-paths synonyms, which is not
what we wanted. Actually restrict the type of file loaded as well.
Really fixes #2632.
Requested. It works like --sub-paths. This will also load audio files
from an "audio" sub-directory in the config directory (because the same
code as for subtitles is used, and it also had such a feature).
Fixes #2632.
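For completeness, a hedged usage example; the option name is taken from
this commit, the directory name is made up, and it works analogously to
--sub-paths:

    # mpv.conf: also look for external audio in an "audio" directory
    # next to the media file
    audio-file-paths=audio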
When crossing timeline boundaries (such as switching to a new segment or
chapter with ordered chapters), clear the internal text subtitle list.
This breaks the sub-seek command, but is otherwise not too harmful.
Fixes Sub-OC-test-final7.mkv. (The internal text subtitle list is
basically a cache to make subtitles show up at the right time when
seeking back.)
I suspect this was caused by 76fcef61. The sample file times subtitles
slightly before the video frame on which they should appear. This is to
avoid problems with subtitles showing up a frame later than intended. It
also
means that a subtitle which is supposed to show up on the start of a
timeline part boundary actually might first be shown in a different
part. Since we now manipulate the packet timestamps, instead of
manipulating timestamps after the subtitle decoder, such a subtitle
event would have 2 different timestamps, which our code of course does
not handle.
If the two parts come one after another, this would actually work (since
the subtitle would have the same timestamps in the old and new part),
but it breaks if the new part (which follows the old part in the
physical file) has a completely different start time in the timeline.
Essentially, the trick used to time subtitles correctly is incompatible
with the way we cache subtitles (to make them survive seeks).
The simple solution is just clearing the cached subtitles when crossing
chapter boundaries.
See #2609:
"When eof is reached it would be shown on the OSD and in the console.
Next try seeking to the middle. Seeking to the middle of the file will
only result in the OSD message being updated. Lua seems to fail to
observe the change in the property until the video is unpaused."
Merge blend_src8_alpha and blend_src16_alpha into blend_src_alpha, and
the same for blend_const_alpha.
One thing that changes is that the vertical loop is now shared for both
code paths.
I think this is slightly easier to read, and it's a bit shorter as well.
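Schematically, the merged function looks something like the sketch
below; the row helpers and parameter names are illustrative, not the
exact mpv code:

    #include <stdint.h>

    /* per-row blend kernels (8 and 16 bit per component), defined elsewhere */
    void blend_src8_row_alpha(void *dst, const void *src, const uint8_t *a, int w);
    void blend_src16_row_alpha(void *dst, const void *src, const uint8_t *a, int w);

    /* Shared vertical loop; only the per-row blend depends on the bit depth. */
    static void blend_src_alpha(void *dst, int dst_stride,
                                const void *src, int src_stride,
                                const uint8_t *src_a, int src_a_stride,
                                int w, int h, int bytes)
    {
        for (int y = 0; y < h; y++) {
            void *d = (char *)dst + y * dst_stride;
            const void *s = (const char *)src + y * src_stride;
            const uint8_t *a = src_a + y * src_a_stride;
            if (bytes == 2)
                blend_src16_row_alpha(d, s, a, w);
            else
                blend_src8_row_alpha(d, s, a, w);
        }
    }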
This removes the need to define IMGFMT_GBRAP, which fixes compilation
with the current Libav release.
This also makes it automatically pick up a GBRP format with the same bit
width. (Unfortunately, it seems libswscale does not support conversion
to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing
precision for video covered by subtitles in cases where this code is
used.)
Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
Whether this is good or bad, it fixes behavior with alpha, although I'm
not sure if the alpha range is really correct ([0,2^16-1] vs.
[0,255*256]). Keep in mind that libswscale doesn't even agree with the
way we do it.
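The range question is just about how an 8 bit alpha value gets widened;
for illustration (not mpv code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t a8 = 255;
        unsigned full    = a8 * 257u;  /* 255 -> 65535, i.e. [0,2^16-1]              */
        unsigned shifted = a8 * 256u;  /* same as a8 << 8: 255 -> 65280, [0,255*256] */
        printf("%u %u\n", full, shifted);
        return 0;
    }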
This actually treats destination alpha correctly, and gives much better
results than before. I don't know if this is perfectly correct yet,
though. A slight difference from vo_opengl's behavior suggests it might
not be.
Note that this does not affect VOs with true alpha support. vo_opengl
does not use this code at all, and does the alpha calculations in OpenGL
instead.
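For reference, "treating destination alpha correctly" usually means the
textbook Porter-Duff over operator; whether the new code matches this
exactly is the open question above (this is an illustration, not the
mpv code):

    /* Porter-Duff "over" with destination alpha, values normalized to [0,1]. */
    static void blend_over(float sc, float sa, float dc, float da,
                           float *out_c, float *out_a)
    {
        float oa = sa + da * (1.0f - sa);
        *out_c = oa > 0 ? (sc * sa + dc * da * (1.0f - sa)) / oa : 0;
        *out_a = oa;
    }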
Commit 1a6f3c56 added a fallback for the case when C11 TLS was not
available, but GCC TLS was. But it forgot to enable VAAPI EGL interop in
the build system in this case.
Just remove the build system check. Should someone find a compiler that
works on Linux and does not support GCC extensions or C11, it will still
compile and just fail to init at runtime.
Actually fixes #2631 (hopefully).
The demuxer infrastructure was originally single-threaded. To make it
suitable for multithreading (specifically, demuxing and decoding on
separate threads), some sort of triple-buffering was introduced. There
are separate "struct demuxer" allocations. The demuxer thread sets the
state on d_thread. If anything changes, the state is copied to d_buffer
(the copy is protected by a lock), and the decoder thread is notified.
Then the decoder thread copies the state from d_buffer to d_user (again
while holding a lock). This avoids the need for locking in the
demuxer/decoder code itself (only demux.c needs an internal, "invisible"
lock).
Remove the streams/num_streams fields from this triple-buffering
scheme. Move them to the internal struct, and protect them with the
internal lock. Use accessors for read access outside of demux.c.
Other than replacing all field accesses with accessors, this separates
allocating and adding sh_streams. This is needed to avoid race
conditions. Before this change, this was awkwardly handled by first
initializing the sh_stream, and then sending a stream change event. Now
the stream is allocated, then initialized, and then declared as
immutable and added (at which point it becomes visible to the decoder
thread immediately).
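A minimal sketch of the resulting pattern; the struct layout and
function name are assumptions based on the description, not necessarily
the real demux.c code:

    #include <pthread.h>

    struct sh_stream { const char *codec; };

    struct demux_internal {
        pthread_mutex_t lock;           /* the internal, "invisible" lock */
        struct sh_stream **streams;
        int num_streams;
    };

    struct demuxer { struct demux_internal *in; };

    /* Read access from outside demux.c goes through an accessor that takes
     * the internal lock, instead of reading a field copied into d_user. */
    int demux_get_num_streams(struct demuxer *d)
    {
        pthread_mutex_lock(&d->in->lock);
        int n = d->in->num_streams;
        pthread_mutex_unlock(&d->in->lock);
        return n;
    }

    /* Adding a stream: the demuxer implementation allocates and initializes
     * the sh_stream privately; an add call then takes the lock, appends it
     * to streams[], and from that point on the stream is immutable and
     * visible to the decoder thread. */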
This change is useful for PR #2626. And eventually, we should probably
get rid of the triple buffering entirely, and this makes a nice first
step.
The "script-binding" command is used by the Lua scripting wrapper to
register key bindings on the fly. It's also the only way to get fine-
grained information about key events (such as separate key up/down
events). This information is sent via a "key-binding" message when the
state of a key changes.
Extend it to send the name of the mapped key itself. Previously, it was
assumed that the user just uses a unique identifier for the binding's
name, so this wasn't needed. With this change, a user can map exactly the
same command to multiple keys, which is useful especially with the next
commit.
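For example, with this change a user can put something like the
following into input.conf and let the script tell the two keys apart via
the reported key name (the binding name is made up):

    x script-binding myscript/do-thing
    y script-binding myscript/do-thing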
Part of #2612.
gcc 4.8 does not support C11 thread local storage. This is a bit
annoying, so add a hack to use the gcc specific __thread extension if
C11 TLS is not available.
(This is used for the extremely silly mpv-internal way hwdec modules
access some platform specific handles. Disabling it simply made
hwdec_vaegl.c always fail initialization.)
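The fallback boils down to a small preprocessor switch; a hedged
sketch, assuming a configure-time HAVE_C11_TLS check (the macro and
variable names are made up):

    #if HAVE_C11_TLS
    #define MP_TLS _Thread_local    /* C11 thread-local storage */
    #elif defined(__GNUC__)
    #define MP_TLS __thread         /* GCC extension, covers gcc 4.8 */
    #else
    #error "no TLS support"
    #endif

    static MP_TLS void *hwdec_platform_handle;  /* per-thread platform handle */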
Fixes #2631.
Add a "blend-tiles" choice to the "alpha" sub-option. This is pretty
simplistic and uses the GL raster position to derive the tiles. A weird
consequence is that using --vo=opengl and --vo=opengl-hq gives different
scaling behavior for the tiles (16x16 tiles in screenspace pixels vs.
source video pixels), but it seems we don't have easy access to the original
texture coordinates. Using the rasterpos is probably simpler.
Make this option the default.
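The tile pattern itself is just a 16x16 checkerboard derived from the
pixel position, roughly like this (the gray levels and function name are
made up):

    /* Pick one of two background grays based on which 16x16 tile (x, y)
     * falls into. */
    static float tile_background(int x, int y)
    {
        int tile = (x / 16 + y / 16) & 1;
        return tile ? 0.8f : 0.6f;
    }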
Github will display a link to it when a user wants to create an issue or
pull request.
Also make some minor adjustments to DOCS/contribute.md, which is
developer oriented, and for which I see no reason to merge it with
the new file.
window-scale is now mapped to Alt+0 etc. by default (although these
bindings just use "set", not "cycle-values").
colormatrix can't be cycled anymore (would require using vf_format).
Make sure that subtraction of performance counters is done correctly.
Follow the *exact* instructions for converting the performance counter
to something comparable to the QPCPosition returned by
IAudioClock::GetPosition:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370889%28v=vs.85%29.aspx
Also make sure that the subtraction of unsigned integers is stored in a
signed integer to avoid nastiness. Also be more careful about overflow
in the conversion of the device position into a number of samples.
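Concretely, the recipe converts a QueryPerformanceCounter reading into
the same 100 ns units as the QPC position before subtracting, and keeps
the difference signed; a sketch (error handling omitted):

    #include <windows.h>

    static INT64 qpc_diff_100ns(UINT64 qpc_position)
    {
        LARGE_INTEGER freq, now;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&now);
        UINT64 c = now.QuadPart, f = freq.QuadPart;
        /* split the conversion to avoid overflow in the multiplication */
        UINT64 now_100ns = (c / f) * 10000000ULL + (c % f) * 10000000ULL / f;
        /* difference of unsigned values, stored in a signed integer */
        return (INT64)(now_100ns - qpc_position);
    }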
Avoid casting mp_time_us() to a double, and use llrint to convert the
double precision delay_us back to integer for ao_read_data.
Finally, actually check the return value of ao_read_data and add a verbose
message if it is not the expected value. Unfortunately,
there is no way to tell WASAPI when this happens since the frame_count in
ReleaseBuffer must match GetBuffer.
This is for the sake of command.c and the "deinterlace" option/property.
Instead of forcing certain "better" defaults when inserting yadif,
change the actual "yadif" defaults.
I pondered not changing vf_yadif, and instead adding a trivial "yadif-
auto" wrapper filter, which would merely have different defaults. But
thinking about it, it doesn't make any sense for "deinterlace" to have
different defaults from vf_yadif, with vf_yadif having the "worse"
defaults. If someone wants the old behavior, the old behavior can be
forced in a backward- and forward-compatible way by setting the
suboptions.
Fixes #2539 (kind of).