Instead of adding "no-"-prefixed aliases to the internal option list,
which will act like normal options, do it in the parsing stage. This
turns out to be simpler (and cheaper), and avoids adding aliased
options.
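Roughly, the parsing-stage handling amounts to this (a minimal sketch;
strip_no_prefix() and the surrounding lookup are illustrative, not the
actual m_config code):

    #include <stdbool.h>
    #include <string.h>

    // Recognize a "no-" prefix while parsing, instead of registering a
    // separate "no-foo" alias for every flag option.
    static const char *strip_no_prefix(const char *name, bool *negated)
    {
        *negated = strncmp(name, "no-", 3) == 0;
        return *negated ? name + 3 : name; // look up the unprefixed option
    }
    // If *negated is set, the option value is then forced to "no".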
This is basically dead code, and even the commit that added this code 4
years ago said that this should be for debugging only.
(Though it is possible that the clamp callback was used for something
else, and then became unused again. Also, some of the clamping code
remains and
is used for internal checking, e.g. clamp_double().)
Before this commit, all VOs had to toggle the option flag themselves;
now command.c does it.
I can't really comprehend why it required every VO to do this manually.
Maybe it was for rejecting the property/option change if the VO didn't
support a specific capability. But then it could have checked the VOCTRL
result. In any case, I don't care, and successfully changing the
property without doing anything (with some VOs) is fine too. Many things
work this way now, and it's simpler overall.
This change will be useful for cleaning up VO option handling.
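A hedged sketch of the new flow in command.c (the property and option
names are just an example):

    // Notify the VO, then flip the option flag centrally. The VOCTRL
    // result is deliberately ignored; a VO that does nothing simply
    // means the property changes without a visible effect.
    vo_control(vo, VOCTRL_FULLSCREEN, NULL);
    opts->fullscreen = !opts->fullscreen; // previously every VO did this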
Just a minor refactor as part of the planned option change. This commit will
make it easier to update (i.e. copy) the VO options without copying
_all_ options. For now, behavior should be equivalent, though.
(The VO options were put into a separate struct quite early - when all
global variables were removed from the source code. It wasn't clear
whether the separate struct would have any actual purpose, but it seems
it will now. Awesome, huh.)
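For illustration, the layout looks roughly like this (field names
abridged and not necessarily the real ones):

    struct mp_vo_opts { int fullscreen; int ontop; /* ... */ };
    struct MPOpts {
        struct mp_vo_opts *vo_opts; // VO options in their own sub-struct
        /* ...all other options... */
    };

    // Updating just the VO options is now a single sub-struct copy,
    // instead of duplicating all of MPOpts:
    *new_vo_opts = *opts->vo_opts;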
More appropriate. Originally it really was for automatically added
options, but by now it even needed some stupid comments to indicate
that it was used simply for hiding options.
It's actually redundant with whether m_option_type.free is set. Some
option types were flagged inconsistently. Its only use was for running
an additional sanity check without any real functionality.
Some files not only use rounded timestamps, but they also do it
incorrectly. They may jitter among up to 4 specific frame durations.
In this case, I found a file that mostly used 41ms and 42ms, but also
had 40ms and 43ms outliers (often but not always following each other).
This breaks the assumption of the framerate estimation code that the
frame duration can deviate up to 1ms. If it jitters around 4 possible
frame durations, the maximum deviation is 3ms. Increase it accordingly.
The change might make playback of "true VFR" video via display-sync mode
worse, but it's not like it was particularly good in the first place.
Also, the check whether to use the container FPS needs to be stricter.
In the worst case, num_dur is 1, which doesn't really indicate any
evidence that the framerate is correct. Only with "enough" frames does
the deviation check become meaningful. 16 is an arbitrary value that I
have designated as "enough".
Also output the frame duration values for --dump-stats.
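As a sketch, the adjusted acceptance check looks something like this
(constant and variable names are hypothetical):

    #define FPS_MAX_DEVIATION 0.003 // was 0.001; 4 rounded durations
                                    // can span 3ms (e.g. 40..43ms)
    #define FPS_MIN_SAMPLES   16    // arbitrarily declared "enough"

    bool trust_estimate = num_dur >= FPS_MIN_SAMPLES &&
                          max_duration - min_duration <= FPS_MAX_DEVIATION;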
It seems like many GL implementations (including Mesa) choke on this,
while others are fine. We still think that this use of the GL API is
allowed by the standard (at least in the Mesa case), so to reduce
confusion, explicitly check the "controversial" calls, and use an
appropriate error message.
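The check itself is conceptually simple (a generic sketch; the actual
contested calls live in the GL renderer):

    glGetError();                      // clear any stale error state
    /* ...the "controversial" GL call... */
    GLenum err = glGetError();
    if (err != GL_NO_ERROR) {
        // Point at the driver limitation instead of failing later in
        // some obscure way.
        MP_ERR(p, "GL driver rejected the call (error 0x%x).\n", err);
    }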
Like it's done for audio and video. Just to be uniform.
I'm sorry for deleting the anti-ffmpeg vitriol. It's still all true, but
since we decided to always set the timebase, the crappiness is isolated
to FFmpeg internals.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
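A hedged sketch of the guarded assignment (checking
LIBAVCODEC_VERSION_MICRO is one common way to tell FFmpeg from Libav;
the timebase value is just an example):

    #if LIBAVCODEC_VERSION_MICRO >= 100 // FFmpeg; field absent in Libav
        // Tell the decoder the unit of packet/frame timestamps.
        avctx->pkt_timebase = av_make_q(1, 90000); // e.g. MPEG-TS units
    #endif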
Normally, OSD can be disabled with --osd-level=0. But this also disables
terminal OSD, and some users want _only_ the terminal OSD. Add
--video-osd=no, which essentially disables the video OSD.
Ideally, it should probably be possible to control terminal and video
OSD levels independently, but that would require separate OSD timers
(and other state) for both components, so don't do it. But since the
current situation isn't ideal, add a threat to the manpage that this
behavior might be changed in the future.
Fixes #3387.
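In essence this is a plain flag option plus one check in the video OSD
path (a sketch; the exact macro and field names may differ):

    OPT_FLAG("video-osd", video_osd, 0), // in the option table

    // When rendering OSD onto the video:
    if (!opts->video_osd)
        return; // terminal OSD is unaffected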
This is for text subtitles. libavformat currently always reads text
subtitles completely on init. This means the underlying stream is
useless and will consume resources for various reasons (network
connection, file handles, cache memory).
Take care of this by closing the underlying stream if we think the
demuxer has read everything. Since libavformat doesn't export whether it
did (or whether it may access the stream again in the future), we rely
on a whitelist. Also, instead of setting the stream to NULL or so, set
it to an empty dummy stream. This way we don't have to litter the code
with NULL checks.
demux_lavf.c needs extra changes, because it tries to do clever things
for the sake of subtitle charset conversion.
The main reason we keep the demuxer etc. open is that we fell for
libavformat being so generic, and we tried to remove corresponding
special-cases in the higher-level player code. Some of this is forced
due to ass/srt mkv/mp4 demuxing being very similar to external text
files. In the future it might be better to do this in a more
straight-forward way, such as reading text subtitles into libass and
then discarding the demuxer entirely, but for aforementioned reasons
this could be more of a mess than the solution introduced by this
commit.
Probably fixes #3456.
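The core of the mechanism, sketched (the whitelist check and the
dummy-stream helper are hypothetical names):

    // Once a whitelisted demuxer has read all subtitle packets, drop
    // the real stream and substitute an empty dummy instead of NULL,
    // so the rest of the code can keep calling stream functions
    // unchecked.
    if (demuxer_fully_read_whitelisted(demuxer)) {
        free_stream(demuxer->stream);
        demuxer->stream = open_empty_dummy_stream();
    }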
Cleaner and makes it easier to change the underlying stream.
mp_property_stream_capture() still accesses it directly via
demux_run_on_thread(). This is evil, but still somewhat sane, and
doesn't get in the way here.
Not sure if I got all field accesses.
It's just wasted memory.
One corner case is when a file grows during playback, but this is rare
and usually happens on-disk only. The cache size was generally limited
before this change already, so no reason to care.
As an unrelated change, move the cache size info to the resize_cache()
function. There's really no reason not to do this, and it's slightly
more informative if the user changes the cache size at runtime.
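The limiting itself is trivial (names illustrative):

    // Never allocate more cache than the file can fill.
    int64_t file_size = stream_get_size(stream); // < 0 if unknown
    if (file_size >= 0 && file_size < cache_size)
        cache_size = file_size;
    resize_cache(s, cache_size); // now also logs the resulting size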
Because VOCTRL_CHECK_EVENTS is processed asynchronously (as of 088a007),
the GUI thread no longer gets regular wakeups, so the old check that
made sure the video window matched the parent window's size in --wid
embedding mode did not run very often. This made --wid embedding not
very usable.
Instead of polling for window size changes, use Windows hooks to react
to them when they happen. When the parent window is owned by the same
process as the video window, use a WH_CALLWNDPROC hook. When the parent
window is not owned by the same process, WinEvents must be used, which
are not as smooth, but still work for this purpose.
Since neither SetWindowsHookEx nor SetWinEventHook take a context
parameter to send data to the hook function, the hook functions must
find the child window by its class instead, so there are a few changes
to ensure this is fast and the class is unique.
This also fixes up the logic to handle window destruction. When a parent
window is destroyed, its children are also destroyed, so this gives us a
way to react to parent window destruction without polling.
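The hook selection boils down to this (condensed; wndproc_hook and
winevent_proc are the hook callbacks, error handling omitted):

    DWORD pid = 0;
    DWORD tid = GetWindowThreadProcessId(parent, &pid);
    if (pid == GetCurrentProcessId()) {
        // Same process: a thread-local WH_CALLWNDPROC hook sees the
        // parent's size/destroy messages synchronously.
        hook = SetWindowsHookExW(WH_CALLWNDPROC, wndproc_hook, NULL, tid);
    } else {
        // Foreign process: fall back to (less smooth) WinEvents.
        winevent = SetWinEventHook(EVENT_OBJECT_LOCATIONCHANGE,
                                   EVENT_OBJECT_LOCATIONCHANGE, NULL,
                                   winevent_proc, pid, tid,
                                   WINEVENT_OUTOFCONTEXT);
    }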
If the video has the same size as the screen, starting with --fs and
then leaving fullscreen doesn't actually leave fullscreen.
The reason is that mpv tries to restore the previous window size if
necessary (otherwise, you'd end up with a window of nearly the same size
as the screen with some WMs). It will typically restore with the
rectangle set exactly to the screen if no other position or size is
forced. This triggers pre-EWMH fullscreen mode, which WMs detect using
various heuristics.
Apparently we triggered this with mutter (but strangely no other WMs).
It's possible that pre-EWMH fullscreen mode actually requires removing
decorations, and that mutter, unlike other WMs, ignores this. But this
is speculation and I haven't checked.
Work this around by reducing the requested size by 1 pixel if it
happens.
This was observed with mutter 3.18.2.
Fixes #2072.
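The workaround itself is a one-liner (simplified):

    // If the restored rectangle would exactly match the screen, shrink
    // it by one pixel so the WM's pre-EWMH fullscreen heuristic doesn't
    // kick in.
    if (rc_w == screen_w && rc_h == screen_h)
        rc_w -= 1;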
If spdif is enabled, the channel layout has no meaning other than
setting the number of channels. The number of channels must be fixed to
achieve the exact bitrate required.
Fixes #3445.
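Conceptually (assuming mpv's mp_chmap_from_channels() helper;
surrounding code simplified):

    // With spdif, derive a default layout purely from the channel
    // count, so the passthrough bitrate comes out exactly right.
    struct mp_chmap layout;
    mp_chmap_from_channels(&layout, num_channels);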
It doesn't necessarily have to mean anything bad.
We're still too lazy to provide any more detailed information (e.g.
whether this happened due to likely bad interleaving, an excessive
amount of packets like with some ASS subs, or the readahead user option
being limited by the packet queue size).
This should actually be rather safe - we already check whether the
estimated value jitters less than the (possibly untrustworthy) nominal
one. Remove a "safety" check that disabled this code for small
deviations, and make it trigger sooner into playback. Also lower the log
level of messages about using the estimated display FPS down to verbose.
Normally there's another mechanism for smoothing out minor estimation
differences, but that is not good enough here.
This possibly improves behavior as reported in #3433, which can be
reproduced with --vo=null:fps=48.426 --display-fps=48 (though it doesn't
consider the jitter introduced by a real VO).
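The acceptance condition is roughly (names hypothetical):

    // Trust the estimate once it jitters less than the (possibly
    // untrustworthy) nominal value reported by the system.
    if (est_fps_deviation < nominal_fps_deviation)
        display_fps = estimated_fps;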
Doing this required synchronizing with the VO thread, which could lead
to audio dropouts if the VO was frozen (which can happen in practice if
e.g. an opengl_cb user is not doing what the API demands).
Add a way to send asynchronous VOCTRLs, and use that for the playback
state. In theory, it would be better to make this status update a
separate function and to "merge" several queued updates, but that would be
slightly more effort/code, and the update is so infrequent that the
merging would never happen anyway.
The change to vo_destroy() is to make sure all queued asynchronous
requests are finished before making the VO thread exit.
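From the caller's perspective (a sketch; treat the exact request name
as an example):

    // Queue the request on the VO thread and return immediately; a
    // frozen VO can no longer stall the caller and cause audio
    // dropouts.
    vo_control_async(vo, VOCTRL_PAUSE, NULL);

    // vo_destroy() now drains this queue before the VO thread exits.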
Even though it's only used on MS Windows, it's run on any platform with
any VO, which makes this worse.
run_control() dereferences a uint32_t as int. Whether this is allowed
depends on what uint32_t is typedefed to (dereferencing an unsigned int
as int should be fine). Fix it by always using int. The uint32_t type
never really made sense.
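Reduced to its essence, the bug looks like this (illustrative):

    // Before: caller passes a uint32_t, run_control() reads it as int.
    uint32_t events = 0;
    vo_control(vo, VOCTRL_CHECK_EVENTS, &events); // *(int *)data inside

    // After: plain int everywhere; the type-pun is gone.
    int events_fixed = 0;
    vo_control(vo, VOCTRL_CHECK_EVENTS, &events_fixed);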