This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
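
A rough standalone sketch of the general idea behind such a plaintext
path (not mpv's actual fill_plaintext(); the function and names below
are invented for illustration): drop "{...}" override tags and escape a
stray brace so libass renders the remainder as plain text.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustration only: strip "{...}" ASS override tags and escape a stray
     * '{' so libass would treat the result as plain text. */
    static char *plaintext_from_ass(const char *in)
    {
        size_t len = strlen(in);
        char *out = malloc(2 * len + 1); /* worst case: every char escaped */
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            if (in[i] == '{') {
                const char *end = strchr(in + i, '}');
                if (end) {
                    i = end - in; /* skip the whole override tag */
                    continue;
                }
                out[o++] = '\\'; /* escape an unterminated '{' */
            }
            out[o++] = in[i];
        }
        out[o] = '\0';
        return out;
    }

    int main(void)
    {
        char *s = plaintext_from_ass("{\\b1}Hello{\\b0} world");
        printf("%s\n", s); /* prints "Hello world" */
        free(s);
        return 0;
    }
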
Remove all the unnecessary crap from the rest of the code.
Use the demux_set_ts_offset() added in the previous commit to make each
timeline segment use timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
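
For illustration, the rebasing boils down to a per-segment offset
(struct and field names below are hypothetical, not mpv's timeline
structs): the offset handed to demux_set_ts_offset() is just the
difference between where a segment starts globally and where it starts
inside its source.

    #include <stdio.h>

    /* Hypothetical segment description, only for showing the arithmetic. */
    struct segment {
        double global_start; /* start position in the overall timeline */
        double source_start; /* start position inside the source file  */
    };

    /* source-local pts + offset == global timeline pts */
    static double ts_offset_for(const struct segment *s)
    {
        return s->global_start - s->source_start;
    }

    int main(void)
    {
        struct segment seg = { .global_start = 600.0, .source_start = 0.0 };
        printf("offset: %f\n", ts_offset_for(&seg)); /* 600.0 */
        return 0;
    }
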
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relatively to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
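
For reference, the Windows API in question is ITaskbarList3; a minimal,
hedged sketch of standard COM usage (not mpv's actual w32 code; it
assumes COM is already initialized on the calling thread and that the
build links the usual ole32/uuid libraries):

    #define COBJMACROS
    #include <windows.h>
    #include <shobjidl.h>

    /* Sketch: show playback progress (0-100) on the taskbar button. */
    static void set_taskbar_progress(HWND hwnd, int percent)
    {
        ITaskbarList3 *tb = NULL;
        HRESULT hr = CoCreateInstance(&CLSID_TaskbarList, NULL,
                                      CLSCTX_INPROC_SERVER, &IID_ITaskbarList3,
                                      (void **)&tb);
        if (FAILED(hr))
            return;
        ITaskbarList3_SetProgressState(tb, hwnd, TBPF_NORMAL);
        ITaskbarList3_SetProgressValue(tb, hwnd, percent, 100);
        ITaskbarList3_Release(tb);
    }
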
closes #2399
We always let audio slowly desync until a threshold is reached, and then
push it back by applying a maximum compensation speed. Refine what
comes afterwards: instead of playing with the nominal video speed, use
the actual required audio speed for keeping sync as measured by the A/V
difference. (The "actual" speed is the ideal speed with A/V differences
added.)
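
Very roughly, and with made-up names and constants (this is not the
actual code in the player), the idea looks like this: take the speed
the video ideally needs, add a correction derived from the measured A/V
difference, and clamp the result to the allowed compensation range.

    #include <stdio.h>

    /* Hypothetical illustration of "ideal speed plus A/V difference". */
    static double audio_speed(double ideal_speed, double av_diff,
                              double window_s, double max_change)
    {
        double speed = ideal_speed + av_diff / window_s;
        double lo = ideal_speed * (1 - max_change);
        double hi = ideal_speed * (1 + max_change);
        if (speed < lo) speed = lo;
        if (speed > hi) speed = hi;
        return speed;
    }

    int main(void)
    {
        /* 20 ms of drift, measured over roughly 10 frames at 24 fps: */
        printf("%f\n", audio_speed(1.001, 0.020, 10.0 / 24.0, 0.01));
        return 0;
    }
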
Although this works in theory, it's somewhat questionable how well it
works in practice. The ideal time value is actually not exact, but is
the time at which the frame is scheduled (could be compensated by using
the time_left calculations in handle_display_sync_frame()). It doesn't
account for speed changes or catastrophic discontinuities. It uses only
10 past frames.
This is very "illustrative", unlike the video-speed-correction
property, and thus useful. It can also be used to observe scheduling
errors, which are not detected by the core. (These happen due to
rounding errors; possibly not even our fault, but coming from
files with rounded timestamps and so on.)
Get rid of get_past_frame_durations(), which was a bit too messy. Add
a past_frames array, which contains the same information in a more
reasonable way. This also means that we can get the exact current and
past frame durations without going through awful stuff. (The main
problem is that vo_pts_history contains future frames as well, which is
needed for frame backstepping etc., but gets in the way here.)
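
A simplified, hypothetical version of what such a past_frames array
gives you (not the real code, names invented): recent durations stored
newest-first, so both the current frame's duration and an average over
the last N frames are directly available.

    #include <stdio.h>

    #define MAX_PAST_FRAMES 100

    struct frame_stats {
        double duration[MAX_PAST_FRAMES]; /* [0] is the newest frame */
        int num;
    };

    static void push_frame(struct frame_stats *s, double duration)
    {
        int n = s->num < MAX_PAST_FRAMES ? s->num + 1 : MAX_PAST_FRAMES;
        for (int i = n - 1; i > 0; i--)
            s->duration[i] = s->duration[i - 1];
        s->duration[0] = duration;
        s->num = n;
    }

    static double average_duration(const struct frame_stats *s, int max_frames)
    {
        int n = s->num < max_frames ? s->num : max_frames;
        if (!n)
            return 0;
        double sum = 0;
        for (int i = 0; i < n; i++)
            sum += s->duration[i];
        return sum / n;
    }

    int main(void)
    {
        struct frame_stats st = {0};
        push_frame(&st, 1.0 / 24);
        push_frame(&st, 1.0 / 24);
        printf("current: %f, avg: %f\n",
               st.duration[0], average_duration(&st, 10));
        return 0;
    }
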
Also disable the automatic disabling of display-sync if the frame
duration changes, and extend the frame durations allowed for display
sync. To allow arbitrarily high durations, vo.c needs to be changed
to pause and potentially redraw OSD while showing a single frame, so
they're still limited.
In an attempt to deal with VFR, calculate the overall speed using the
average FPS. The frame scheduling itself does not use the average FPS,
but the duration of the current frame. This does not work too well,
but provides a good base for further improvements.
Where this commit actually helps a lot is dealing with rounded
timestamps, e.g. if the container framerate is wrong or unknown, or
if the muxer wrote incorrectly rounded timestamps. While the rounding
errors apparently can't be eliminated completely in the general case,
this is still much better than e.g. disabling display-sync completely
just because some frame durations go out of bounds.
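
A small self-contained demonstration of why averaging helps with
rounded timestamps (numbers only; nothing here is mpv code): even if a
muxer rounds every timestamp to whole milliseconds, the FPS recovered
from the average over a window of frames stays very close to the true
24000/1001.

    #include <stdio.h>

    int main(void)
    {
        const double true_fps = 24000.0 / 1001.0;
        double prev_ms = 0, sum = 0;
        for (int i = 1; i <= 100; i++) {
            /* timestamp rounded to whole milliseconds by the muxer */
            double ts_ms = (double)(long long)(i * 1000.0 / true_fps + 0.5);
            sum += (ts_ms - prev_ms) / 1000.0; /* jitters between 41 and 42 ms */
            prev_ms = ts_ms;
        }
        printf("true fps: %f, fps from rounded timestamps: %f\n",
               true_fps, 100 / sum);
        return 0;
    }
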
Discontinuities (like toggling fullscreen) can cause multiple frames to
be dropped in succession, which sounds very weird. It's better to drop
some video frames instead to compensate for larger desyncs.
We roughly base it on the maximum allowed speed changes (audio change is
"additional" to the video change to account for deviations when playing
at max. video speed change).
Sigh... After the recent changes, another regression appeared. This
time, the VO window wasn't cleared when changing from video to a non-
video file (such as audio-only with no cover art). Fix this by properly
taking the handle_force_window() bool parameter into account.
Also, the info message could be printed twice, which is harmless but
ugly. So just remove the message.
Also, do some more minor cleanups (like fixing the comment, which was
completely outdated).
The previous commit was incomplete (and I didn't notice due to a broken
test procedure).
The annoying part is that actually creating the VO was separate; redo
this and merge that code into handle_force_window() as well.
This will also make implementing proper reaction to runtime option
changes easier. (Only the part for actually listening to option changes
is missing.)
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
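
The 1/1000 figure comes straight from the ratio of the two rates:

    #include <stdio.h>

    int main(void)
    {
        double video_fps = 24000.0 / 1001.0; /* "23.976" */
        double display_hz = 24.0;
        /* 24 / (24000/1001) = 1001/1000 = 1.001, i.e. about +0.1% speed */
        printf("speed factor: %f\n", display_hz / video_fps);
        return 0;
    }
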
This is different from the default sync mode, which will sync video to
audio, with the consequence that video might skip or repeat a frame once
in a while to make video keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
For video sync, we want separate playback speed controls for user-
requested speed and the "correction" speed for video timing. Further, we
use this separation to make sure that only a resampler is inserted if
the playback speed is changed solely for video sync correction.
As of this commit, this is basically inactive code. It's just
preparation for the video sync code (the following commit).
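
As a purely illustrative sketch of the separation (names invented, not
mpv's option/filter code): the speed the audio chain runs at is the
product of the user speed and the tiny video-sync correction, and the
correction alone can be handled by resampling.

    #include <stdio.h>
    #include <stdbool.h>

    struct playback_speed {
        double user_speed;        /* what the user asked for */
        double video_correction;  /* small factor from display sync */
    };

    static double total_audio_speed(const struct playback_speed *s)
    {
        return s->user_speed * s->video_correction;
    }

    /* If only the correction deviates from 1.0, plain resampling suffices. */
    static bool resampler_only(const struct playback_speed *s)
    {
        return s->user_speed == 1.0 && s->video_correction != 1.0;
    }

    int main(void)
    {
        struct playback_speed s = { .user_speed = 1.0,
                                    .video_correction = 1.001 };
        printf("speed %f, resampler only: %d\n",
               total_audio_speed(&s), resampler_only(&s));
        return 0;
    }
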
Additionally to taking the average, this tries to use the demuxer FPS to
eliminate jitter, and applies some other heuristics to check if the
result is sane.
This code will also be used for the display sync code (it will actually
make use of the require_exact parameter).
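
The jitter elimination can be pictured like this (hypothetical helper,
not the actual estimator): if the measured average duration is close
enough to what the container's nominal FPS predicts, snap to the
nominal value, otherwise keep the measurement.

    #include <stdio.h>
    #include <math.h>

    static double refine_duration(double measured, double container_fps,
                                  double tolerance)
    {
        if (container_fps > 0) {
            double nominal = 1.0 / container_fps;
            if (fabs(measured - nominal) < tolerance)
                return nominal; /* treat the jitter as rounding noise */
        }
        return measured;
    }

    int main(void)
    {
        /* measured durations hover around 41-42 ms, container says 23.976: */
        printf("%f\n", refine_duration(0.0417, 24000.0 / 1001.0, 0.001));
        return 0;
    }
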
(The value of doing this over keeping the simpler demux_mkv hack is
somewhat questionable. But at least it allows us to deal with other
container formats that use jittery timestamps, such as mp4 remuxed
from mkv.)
Nobody wanted to restore this, so it gets the boot.
If anyone still wants to volunteer to restore menu support, this would
be welcome. (I might even try it myself if I feel masochistic and like
wasting a lot of time for nothing.) But if it does get restored, it
should be done differently. There were many stupid things about how it
was done. For example, it somehow tried to pull mp_nav_events through
all the layers (including needing to "buffer" them in the demuxer),
which was needlessly complicated. It could be done simpler.
This code was already inactive, so this commit actually changes nothing.
Also keep in mind that normal DVD/BD playback still works.
Instead of calling it "future frames" and adding or subtracting 1 from
it, always call it "requested frames". This simplifies it a bit.
MPContext.next_frames had 2 added to it; this was mainly to ensure a
minimum size of 2. Drop it and assume VO_MAX_REQ_FRAMES is at least 2;
together with the other changes, this can be the exact size of the
array.
mp_seek_chapter() had only 1 caller. Also the code was rather
roundabout; the entire function can be compressed to 5 lines of code.
(The new code is functionally the same - "mpctx->last_chapter_seek =
-2;" was effectively a dead assingment.)
Each subtitle track gets its own decoder instance (sd_ass). But they use
a shared ASS_Renderer. This is done mainly because of fontconfig.
Initializing fontconfig is very slow when using it with memory fonts, so
there's a practical need to cache this memory font state, which is done
by not creating separate ASS_Renderers. This is very dirty and very
evil, but we probably can't get rid of it any time soon.
The shared ASS_Renderer was not properly synchronized. While the program
logic guarantees that only one sd_ass instance is visible at a time,
there are other interactions that require synchronization. In
particular, I suspect concurrent execution of mp_ass_configure_fonts()
and sd_ass.get_bitmaps causes issues in a newer libass development
branch.
So here's a shitty hack that hopefully fixes things, hopefully only
until libass becomes less dependent on fontconfig.
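
The hack is essentially a single lock around the shared renderer; a
hypothetical standalone version of the pattern (not the actual
sd_ass/ass_mp code):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t ass_lock = PTHREAD_MUTEX_INITIALIZER;
    static int shared_renderer_state; /* stands in for the shared ASS_Renderer */

    static void configure_fonts_locked(void)
    {
        pthread_mutex_lock(&ass_lock);
        shared_renderer_state++; /* think mp_ass_configure_fonts() */
        pthread_mutex_unlock(&ass_lock);
    }

    static void render_frame_locked(void)
    {
        pthread_mutex_lock(&ass_lock);
        /* think get_bitmaps(): may not run concurrently with font setup */
        printf("render with state %d\n", shared_renderer_state);
        pthread_mutex_unlock(&ass_lock);
    }

    int main(void)
    {
        configure_fonts_locked();
        render_frame_locked();
        return 0;
    }
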
The final goal is making opening the demuxer and opening the stream the
same operation.
Stream dumping is a rather uninteresting feature, but has a small
number of vocal users, and it's easy to keep.
Now the VO can request a number of future frames with the last parameter
of vo_set_queue_params(). This will be helpful to fix the interpolation
code.
Note that the first frame (after playback start or seeking) will usually
not have any future frames (to make seeking fast). Near the end of the
file, the number of future frames will become lower as well.
At least Matroska files have a "forced" flag (in addition to the
"default" flag). Export this flag. Treat it almost like the default
flag, but with slightly higher priority.
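
Conceptually the priority change amounts to something like the
following scoring (invented example, not mpv's real track selection
code): a forced track outranks a default-only track, everything else
being equal.

    #include <stdio.h>
    #include <stdbool.h>

    struct track_flags {
        bool is_default;
        bool is_forced;
    };

    static int track_score(struct track_flags t)
    {
        int score = 0;
        if (t.is_default) score += 1;
        if (t.is_forced)  score += 2; /* slightly higher priority than default */
        return score;
    }

    int main(void)
    {
        struct track_flags default_only = { .is_default = true };
        struct track_flags forced_only  = { .is_forced = true };
        printf("default-only: %d, forced-only: %d\n",
               track_score(default_only), track_score(forced_only));
        return 0;
    }
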
Adding an external audio track before loading the main file didn't work
right. For one, mp_switch_track() assumes it is called after the main
file is loaded. (The difference is that decoders are only initialized
once the main file is loaded, and we avoid doing this before that for
whatever reason.)
To avoid further messiness, just allow mp_switch_track() to be called at
any time. Also make it do what mp_mark_user_track_selection() did, since
the latter requires current_track to be set. (One could probably simply
allow current_track to be set at this point, but it'd interfere with
default track selection anyway and thus would be pointless.)
Fixes #1984.
mp_find_config_file() will print the filename lookup and its result in
verbose mode. This is wanted, but gets inconvenient when it is done for
every playlist entry (for resuming).
Look up the watch_later subdir only once and cache the result instead.
This drops the logic for loading the resume file from other locations,
which should generally be unnecessary, though might lead to confusion if
the user has mixed old and new config paths (which the user shouldn't).
Also add a mp_find_user_config_file() function for a more
straightforward and reliable way to get actual local config paths,
instead of possibly global and unwritable locations.
Also, for symmetry, check the resume option in mp_load_playback_resume()
just like mp_check_playlist_resume() does.
And split the Cocoa and Unix cases. Simplify the Cocoa case slightly by
calling mpv_main directly, instead of passing a function pointer. Also
add a comment explaining why Cocoa needs a special case at all.
Should help with debugging, and might be slightly more user-friendly.
Note that this is called manually in multiple entry-points, instead of
the functions doing the actual work (like mp_remove_track()). This is
done so that exiting the player or calling the sub_reload command won't
print redundant in-between states.
Move the command line parsing and some other things to the common init
routine shared between command line player and client API. This means
they're using almost exactly the same code now.
The main intended side effect is that the client API will load mpv.conf;
though still only if config loading is enabled.
(The cplayer still avoids creating an extra thread, passes a command
line, and prints an exit status to the terminal. It also has some
different defaults.)
Commit f54220d9 attempted to improve this, but it got worse. Now there
was a crash when ytdl_hook.lua added external tracks. This happened
because close_unused_demuxers() assumed that sources[0] was the main
demuxer (so that it didn't close it). This assumption failed, because
the ytdl script can add external tracks before the main file is loaded.
The easy fix would have been to check for master_demuxer, and not i==0.
But instead give up on the old idea, make some stricter assumptions how
demuxers and external tracks map, and simplify the code.
These functions do blocking work on a separate thread, but wait until
they return. So they are not async or non-blocking. But they do react to
user input and client API accesses, which makes them reentrant.
Instead of accessing MPContext in player/timeline/*, create a separate
context struct, which the timeline loaders fill out. It turns out that
there's not much in the way-too-big MPContext that these need to access.
One major PITA is managing (and closing) the set of open demuxers. The
problem is that we need a list of all demuxers to make sure no unneeded
streams are enabled.
This adds a callback to the demuxer_desc struct, with the intention of
leaving it to the demuxer to call the right loader, instead of
explicitly checking the demuxer type and dispatching manually in common
code. I also considered making the timeline part of the demuxer state,
but decided against: it's too much of a mess wrt. memory management and
threading, and also doesn't make it clear who owns the child demuxers.
With the struct timeline decoupled from the demuxer state, it's at least
somewhat clear that the child demuxers are independent from the "main"
demuxer.
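
To give a sense of the shape of such a decoupled context (field names
here are guesses for illustration, not the actual struct timeline): the
loader fills out a list of child demuxers plus the parts that map
global timeline positions to positions inside those sources.

    /* Hypothetical sketch of a timeline context, independent of MPContext. */
    struct demuxer; /* opaque here; only referenced through pointers */

    struct timeline_part_sketch {
        double start;           /* position in the global timeline */
        double source_start;    /* corresponding position in the source */
        struct demuxer *source; /* child demuxer covering this part */
    };

    struct timeline_sketch {
        struct demuxer *demuxer;   /* the "main" demuxer */
        struct demuxer **sources;  /* all opened child demuxers */
        int num_sources;
        struct timeline_part_sketch *parts;
        int num_parts;
    };

    int main(void)
    {
        struct timeline_sketch tl = { .num_sources = 0, .num_parts = 0 };
        (void)tl;
        return 0;
    }
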
The actual changes to player/timeline/* are separated in the following
commits, because they're quite verbose. Some artifacts will be removed
later as soon as there's only 1 timeline loading mechanism.
These commands are counterparts of sub_add/sub_remove/sub_reload, but
work for external audio files.
Signed-off-by: wm4 <wm4@nowhere>
(minor simplification)
mpctx->audio_delay always has the same value as opts->audio_delay. (This
was not the case a long time ago, when the audio-delay property didn't
actually write to opts->audio_delay. I think.)
This is for the ordered chapters case only. In theory this could have
resulted in initial audio, video or subs missing, although it didn't
happen in practice (because no streams were selected, thus the demuxer
thread didn't actually try to read anything). It's still better to make
this explicit.
Also, timeline_set_part() can be private to loadfile.c.