This is another attempt at making files with sparse video frames work
better.
The problem is that you generally can't know whether a jump in video
timestamps is just a (very) long video frame, or a timestamp reset. Due
to the existence of files with sparse video frames (new frame only every
few seconds or longer), every heuristic will be arbitrary (in general,
at least).
But we can use the fact that if the video is continuous, the audio
should be continuous as well. Audio discontinuities are easy to detect,
and when one is found, some of the playback state is reset.
The way the playback state is reset is rather radical (resets decoders
as well), but it's just better not to cause too much obscure stuff to
happen here. If the A/V sync code were to be rewritten, it should
probably strictly use PTS values (not this strange time_frame/delay
stuff), which would make it much easier to detect such situations and
to react to them.
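Roughly, the check amounts to something like this (a sketch with made-up
names and an arbitrary threshold, not the actual code):

    #include <math.h>
    #include <stdbool.h>

    #define AUDIO_DISCONT_THRESHOLD 0.1  /* seconds; assumed tolerance */

    /* Audio PTS can be predicted from the number of decoded samples
     * (expected_pts = last_pts + decoded_samples / samplerate), so a large
     * mismatch with the packet PTS indicates a timestamp reset. */
    static bool audio_discontinuity(double expected_pts, double packet_pts)
    {
        return fabs(packet_pts - expected_pts) > AUDIO_DISCONT_THRESHOLD;
    }

On a positive result, the player resets the relevant playback state
(including the decoders) instead of trying to guess anything from the
video timestamps alone.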
PT_RELOAD_FILE is a somewhat obscure case when using DVB or when
switching Matroska editions. Both cases were broken, because the
asynchronous playback abort mechanism was still triggered. This
mechanism is used to force the demuxer and stream layers to exit
immediately (instead of blocking on I/O possibly forever), and
is normally disabled on playback start. The reopen path is a bit
strange, and needs to reset it manually.
Pointed out in #2568.
If you do "mpv /bla/", then branch out into sub-directories using
playlist navigation, and then use quit-and-watch-later, playing the same
directory again did not resume from the previous point. This was because
resuming is based on the path hash, so a path prefix can't be detected
when resuming the parent directory.
Solve this by also writing a resume entry for each path prefix when
playing directories is involved. (This includes all parent paths, so
interestingly, "mpv /" would also resume in the above example.)
Something like this was requested multiple times, and I want it too.
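The mechanism boils down to something like the following sketch (the
helper name is hypothetical; the real code writes proper watch-later
redirect entries keyed by the path hash):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical: writes a watch-later redirect entry keyed by the hash
     * of "path". */
    void write_resume_redirect(const char *path);

    /* Also mark every parent directory of the played path, so that resuming
     * "mpv /bla/" (or even "mpv /") finds it by hash again. */
    static void write_parent_redirects(const char *path)
    {
        char buf[4096];
        snprintf(buf, sizeof(buf), "%s", path);
        for (char *end = strrchr(buf, '/'); end; end = strrchr(buf, '/')) {
            *end = '\0';
            write_resume_redirect(buf[0] ? buf : "/");
            if (!buf[0])
                break;      /* reached the root */
        }
    }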
When using --start with timeline/ordered chapters, the
timeline_switch_to_time() function looks at playback_initialized to
decide whether to reselect the currently selected streams on the demuxer
level.
So we need to set this field to true at an earlier stage during
initialization, and in particular before the code for --start is called.
Slightly change how it is decided when a new packet should be read.
Switch to demux_read_packet_async(), and let the player "wait properly"
until required subtitle packets arrive, instead of blocking everything.
Move distinguishing the cases of passive and active reading into the
demuxer, where it belongs.
Just simplify by removing parts that are not needed anymore. This
includes merging dec_sub allocation and initialization (since the things
that made initialization complicated are gone), and dropping the format
support queries (it now simply tries to create a decoder, and if that
fails, tries the next one).
So that the video FPS is not required at initialization, and can be set
later.
(As for whether this MicroDVD crap is worth the trouble of handling it
"correctly": MicroDVD timestamps are frame numbers, so converting them
to time needs the FPS. MicroDVD files are unfortunately still around,
and in at least one case using the video FPS did indeed seem to help.)
Keeping ASS_Renderers around for a potentially large number of subtitle
tracks could lead to excessive memory usage, especially since the libass
cache is broken (caches even unneeded data), and might consume up to
~500MB of memory for no reason.
This includes the case of crossing ordered chapter boundaries: the
ASS_Renderer will now be recreated on each timeline part switch. This
shouldn't be much of a problem with modern libass. (Older libass
versions use fontconfig for memory fonts, and will be very slow to
reinitialize them.)
Since commit 6d9cb893, subtitle state doesn't survive timeline switches
(ordered chapters etc.). So there is no point in caching the state per
sh_stream anymore (which would be required to deal with multiple
segments). Move the cache to struct track.
(Whether it's worth caching the subtitle state just for the situation
when subtitle tracks get reselected is questionable. But for now, it's
nice to have the subtitles immediately show up when reselecting a
subtitle.)
For files with only 1 chapter, the "cycle" command was ignored. Reenable
it, but don't let it terminate playback of the file.
For the full story, see #2550.
Fixes #2550.
OK, this made --sub-paths and --audio-file-paths synonyms, which is not
what we wanted. Actually restrict the type of files loaded as well.
Really fixes #2632.
Requested. It works like --sub-paths. This will also load audio files
from an "audio" sub-directory in the config directory (because the same
code as for subtitles is used, and it also had such a feature).
Fixes #2632.
When crossing timeline boundaries (such as switching to a new segment or
chapter with ordered chapters), clear the internal text subtitle list.
This breaks the sub-seek command, but is otherwise not too harmful.
Fixes Sub-OC-test-final7.mkv. (The internal text subtitle list is
basically a cache to make subtitles show up at the right time when
seeking back.)
I suspect this was caused by 76fcef61. The sample file times subtitles
slightly before the video frame on which they should show up. This is
done to avoid problems with subtitles showing up a frame later than
intended. It also means that a subtitle which is supposed to show up at
the start of a timeline part might actually first be shown in a different
part. Since we now manipulate the packet timestamps, instead of
manipulating timestamps after the subtitle decoder, this means this
subtitle event would have 2 timestamps, which our code of course does
not handle.
If the two parts come one after another, this would actually work (since
the subtitle would have the same timestamps in the old and new part),
but it breaks if the new part (which follows the old part in the
physical file) has a completely different start time in the timeline.
Essentially, the trick used to time subtitles correctly is incompatible
with the way we cache subtitles (to make them survive seeks).
The simple solution is just clearing the cached subtitles when crossing
chapter boundaries.
See #2609:
"When eof is reached it would be shown on the OSD and in the console.
Next try seeking to the middle. Seeking to the middle of the file will
only result in the OSD message being updated. Lua seems to fail to
observe the change in the property until the video is unpaused."
The demuxer infrastructure was originally single-threaded. To make it
suitable for multithreading (specifically, demuxing and decoding on
separate threads), some sort of triple-buffering was introduced. There
are three separate "struct demuxer" allocations. The demuxer thread sets the
state on d_thread. If anything changes, the state is copied to d_buffer
(the copy is protected by a lock), and the decoder thread is notified.
Then the decoder thread copies the state from d_buffer to d_user (again
while holding a lock). This avoids the need for locking in the
demuxer/decoder code itself (only demux.c needs an internal, "invisible"
lock.)
Remove the streams/num_streams fields from this triple-buffering
scheme. Move them to the internal struct, and protect them with the
internal lock. Use accessors for read access outside of demux.c.
Other than replacing all field accesses with accessors, this separates
allocating and adding sh_streams. This is needed to avoid race
conditions. Before this change, this was awkwardly handled by first
initializing the sh_stream, and then sending a stream change event. Now
the stream is allocated, then initialized, and then declared as
immutable and added (at which point it becomes visible to the decoder
thread immediately).
This change is useful for PR #2626. And eventually, we should probably
get rid of the triple buffering entirely, and this makes a nice first step.
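In rough outline, the pattern looks like this (a simplified sketch, not
the actual demux.c code; names and signatures are approximations):

    #include <pthread.h>
    #include <stdlib.h>

    struct sh_stream { int index; /* immutable once added */ };

    struct demux_internal {
        pthread_mutex_t lock;        /* the internal, "invisible" lock */
        struct sh_stream **streams;
        int num_streams;
    };

    /* Add a fully initialized stream; from this point on it is immutable and
     * immediately visible to the decoder thread. (Error handling omitted.) */
    static void add_stream(struct demux_internal *in, struct sh_stream *sh)
    {
        pthread_mutex_lock(&in->lock);
        sh->index = in->num_streams;
        in->streams = realloc(in->streams,
                              (in->num_streams + 1) * sizeof(*in->streams));
        in->streams[in->num_streams++] = sh;
        pthread_mutex_unlock(&in->lock);
    }

    /* Accessors used outside of demux.c instead of touching the fields. */
    static int get_num_streams(struct demux_internal *in)
    {
        pthread_mutex_lock(&in->lock);
        int n = in->num_streams;
        pthread_mutex_unlock(&in->lock);
        return n;
    }

    static struct sh_stream *get_stream(struct demux_internal *in, int index)
    {
        pthread_mutex_lock(&in->lock);
        struct sh_stream *sh =
            index >= 0 && index < in->num_streams ? in->streams[index] : NULL;
        pthread_mutex_unlock(&in->lock);
        return sh;
    }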
The "script-binding" command is used by the Lua scripting wrapper to
register key bindings on the fly. It's also the only way to get fine-
grained information about key events (such as separate key up/down
events). This information is sent via a "key-binding" message when the
state of a key changes.
Extend it to send the name of the mapped key itself. Previously, it was
assumed that the user just uses a unique identifier for the binding's
name, so it wasn't needed. With this change, a user can map exactly the
same command to multiple keys, which is useful especially with the next
commit.
Part of #2612.
This is for the sake of command.c and the "deinterlace" option/property.
Instead of forcing certain "better" defaults when inserting yadif,
change the actual "yadif" defaults.
I pondered not changing vf_yadif, and instead adding a trivial "yadif-
auto" wrapper filter, which would merely have different defaults. But
thinking about it, it doesn't make any sense for "deinterlace" to have
different defaults from vf_yadif, with vf_yadif having the "worse"
defaults. If someone wants the old behavior, the old behavior can be
forced in a backward and forward compatible way by setting the
suboptions.
Fixes #2539 (kind of).
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except --video-aspect option/property).
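For reference, the two conventions are related by plain math (this is
general background, not a specific mpv function): the display aspect
ratio follows from the sample aspect ratio and the encoded size.

    #include <stdio.h>

    static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

    int main(void)
    {
        int w = 720, h = 576;           /* storage size (PAL 4:3 example) */
        int sar_num = 16, sar_den = 15; /* sample (pixel) aspect ratio */
        /* DAR = (w * sar_num) : (h * sar_den) */
        int dn = w * sar_num, dd = h * sar_den;
        int g = gcd(dn, dd);
        printf("DAR = %d:%d\n", dn / g, dd / g); /* prints "DAR = 4:3" */
        return 0;
    }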
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
Apparently, this was replaced by the dimensions set via
SD_CTRL_SET_VIDEO_PARAMS. But I can't find out when this happened -
possibly, these
fields were never used by sd_lavc.c, and only by the (long removed)
MPlayer dvdsub decoder.
Until now, feeding packets to the decoder in advance was done for text
subtitles only. This was possible because libass buffers all subtitle
data anyway (in ASS_Track). sd_lavc, responsible for bitmap subs, does
not do this. But it can buffer a small number of subtitle frames ahead.
Enable this.
Repurpose sub_accept_packets_in_advance(). Instead of "can take all
packets" it now means "can take 1 packet". (The old meaning is still
needed locally in dec_sub.c; keep it there.) It asks the decoder whether
there is room for at least 1 subtitle packet. sd_lavc implements it and
returns true if its internal fixed-size subtitle queue still has a free
slot. (The implementation of this in dec_sub.c isn't entirely clean.
For one, decode_chain() ignores this mechanism, so it's implied that
bitmap subtitles do not use the subtitle filter chain in any advanced
way.)
Also fix 2 bugs in the sd_lavc queue handling. Subtitles must be checked
in reverse, because the first entry will often have endpts==NOPTS, which
would always match. alloc_sub() must cycle the queue buffer, because it
reuses memory allocations (like sub.imgs) by design.
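For illustration, the queue logic amounts to roughly the following (a
loose sketch, not the real sd_lavc.c; names and the queue size are made
up):

    #include <stdbool.h>

    #define MAX_QUEUE 4
    #define NOPTS (-1e300)

    struct sub {
        bool valid;
        double pts, endpts; /* endpts == NOPTS: end time unknown */
        /* decoded bitmaps; allocations are reused when a slot is recycled */
    };

    struct sub_queue { struct sub subs[MAX_QUEUE]; };

    /* "Accepts a packet" now just means: at least one slot is still free. */
    static bool accepts_packet(struct sub_queue *q)
    {
        int used = 0;
        for (int i = 0; i < MAX_QUEUE; i++)
            used += q->subs[i].valid;
        return used < MAX_QUEUE;
    }

    /* Pick the entry with the latest start time covering pts. An open-ended
     * entry (endpts == NOPTS) must not shadow later entries - that is what
     * checking the queue in the right order (or, as here, explicitly
     * preferring the latest start) guarantees. */
    static struct sub *find_sub(struct sub_queue *q, double pts)
    {
        struct sub *best = NULL;
        for (int i = 0; i < MAX_QUEUE; i++) {
            struct sub *s = &q->subs[i];
            if (!s->valid || pts < s->pts)
                continue;
            if (s->endpts != NOPTS && pts >= s->endpts)
                continue;
            if (!best || s->pts > best->pts)
                best = s;
        }
        return best;
    }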
Helps with files that have occasional broken timestamps. For larger
discontinuities, e.g. caused by actual timestamp resets, we still want
to realign audio.
(I guess in general, this should be removed and replaced by a more
general resync-on-desync logic, but not now.)
This makes no sense, because the client is obligated to react to this
event.
This also happens to fix a deadlock with JSON IPC clients sending
"disable_event all", because MPV_EVENT_SHUTDOWN was used to stop the
thread driving the socket connection (fixes #2558).
Requested. Don't overwrite permanent OSD text set with e.g. --osd-msg1.
Instead, append the OSD message to it (on the next line).
Note that with --osd-msg1, seeking will still overwrite the OSD with the
playback status for a while. If you do not want this, use --osd-msg3
--osd-level=3 instead.
At least I hope so.
Deriving the duration from the pts was not really correct. It doesn't
include speed adjustments, and becomes completely wrong if the user e.g.
changes the playback speed by a huge amount. Pass through the accurate
duration value by adding a new vo_frame field.
The value for vsync_offset was not correct either. We don't need the
error for the next frame, but the error for the current one. This wasn't
noticed because it makes no difference in symmetric cases, like 24 fps
on 60 Hz.
I'm still not entirely confident in the correctness of this, but it sure
is an improvement.
Also, remove the MP_STATS() calls - they're not really useful to debug
anything anymore.
This was just converting back and forth between int64_t/microseconds and
double/seconds. Remove this stupidity. The pts/duration fields are still
in microseconds, but they have no meaning in the display-sync case (also
drop printing the pts field from opengl/video.c - it's always 0).
Instead of periodically trying to enable it again. There are two cases
that can happen:
1. A random discontinuity messed everything up,
2. Things are just broken and will desync all the time
Until now, it tried to deal with case 1 - but maybe this is really rare,
and we don't really need to care about it. On the other hand, case 2 is
kind of hard to diagnose if the user doesn't use the terminal.
Seeking will reenable display-sync, so you can fix playback if case 1
happens, but still get predictable behavior in case 2.
This is simply the average refresh rate. Including "bad" samples is
actually an advantage, because the property exists only for
informational purposes, and will reflect problems such as the driver
skipping a vsync.
Also export the standard deviation of the vsync frame duration
(normalized to the range 0-1) as vsync-jitter property.
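One plausible way to derive these two values (a sketch with assumed
names; the actual averaging/windowing and normalization details differ):

    #include <math.h>

    #define NUM_SAMPLES 120

    /* intervals[] holds the last measured vsync durations in seconds. */
    static double estimate_display_fps(const double intervals[NUM_SAMPLES])
    {
        double sum = 0;
        for (int i = 0; i < NUM_SAMPLES; i++)
            sum += intervals[i];
        return NUM_SAMPLES / sum;   /* average rate, bad samples included */
    }

    static double estimate_vsync_jitter(const double intervals[NUM_SAMPLES])
    {
        double mean = 0, var = 0;
        for (int i = 0; i < NUM_SAMPLES; i++)
            mean += intervals[i];
        mean /= NUM_SAMPLES;
        for (int i = 0; i < NUM_SAMPLES; i++)
            var += (intervals[i] - mean) * (intervals[i] - mean);
        var /= NUM_SAMPLES;
        /* Standard deviation normalized by the mean interval (assumed
         * normalization); values near 0 mean a stable vsync. */
        return sqrt(var) / mean;
    }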
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
Remove all the unnecessary crap from the rest of the code.
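The escaping in question is roughly of this kind (a simplified sketch;
the details in sd_ass.c differ):

    #include <stddef.h>

    /* Turn stripped plaintext into something safe to put into a dummy
     * ASS_Event: '{' would start an override block, and newlines become
     * the ASS hard line break "\N". */
    static void escape_plaintext(const char *in, char *out, size_t outsize)
    {
        size_t pos = 0;
        for (; *in && pos + 3 < outsize; in++) {
            if (*in == '{') {
                out[pos++] = '\\';
                out[pos++] = '{';
            } else if (*in == '\n') {
                out[pos++] = '\\';
                out[pos++] = 'N';
            } else {
                out[pos++] = *in;
            }
        }
        out[pos] = '\0';
    }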
Use the demux_set_ts_offset() function added in the previous commit to
rebase each timeline segment's timestamps according to its relative
position within the overall timeline. As a consequence, the rest of the
player doesn't need to care about per-segment timestamps anymore, and
everything becomes simpler.
(Another minor but delicious nugget of sanity.)
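At its core, the rebasing means every packet leaving the demuxer gets
the segment's offset applied, along these lines (hypothetical types; the
real mechanism is demux_set_ts_offset() inside demux.c):

    #define NOPTS (-1e300)

    struct packet { double pts, dts; };

    /* Applied once per packet, so the rest of the player only ever sees
     * timeline-relative timestamps. */
    static void apply_ts_offset(struct packet *pkt, double ts_offset)
    {
        if (pkt->pts != NOPTS)
            pkt->pts += ts_offset;
        if (pkt->dts != NOPTS)
            pkt->dts += ts_offset;
    }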
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts of the API, such as the lack of a seek command that
seeks relative to the start time. All these things are now transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
closes #2399
If the player sends a frame with duration==0 to the VO, it can trivially
underrun. Don't panic, but keep the correct time.
Also, returning the absolute time from vo_get_next_frame_start_time()
just to turn it into a float with relative time was silly. Rename it and
make it return what the caller needs.