If there's a command that uses the OSD by default, then always print the
associated message (or a fallback made of name + value), even if the
command has an associated OSD bar.
This means volume, gamma, panscan, etc. all show both a message and an
OSD bar.
Also, add a '%' to the volume message. The extra_msg thing is not needed
anymore.
See issue #1103.
It's just confusing; users are encouraged to edit input.conf instead
(changing the argument to the "add" command).
Update input.conf to keep the old behavior.
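For illustration (a hypothetical binding, not necessarily one of the
defaults), the step is simply the argument of "add":

    0 add volume 5
    9 add volume -5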
When pausing after a frame was just dropped, we're logically at the
dropped frame, and thus should redraw the dropped frame. This was
implemented, but didn't work after unpausing for the second time,
because of a minor logic bug.
For incomprehensible reasons, AV_PIX_FMT_GRAY8 (and some others) have a
palette. This literally makes no sense and this issue has bitten us
before, but it is how it is.
This also caused a crash with vo_direct3d: this mapped a texture as
IMGFMT_Y8 (i.e. AV_PIX_FMT_GRAY8), and when copying this, it tried to
copy the non-existent palette.
Fixes #1113.
vo_vdpau uses its own framedrop code, mostly for historic reasons. It
has some tricky heuristics; I'm not sure how they work, or whether
they have any effect at all, but in any case, I want to keep this code
for now. One day it might get fully ported to the vo.c framedrop code,
or just removed.
But improve its interaction with the user-visible framedrop controls.
Make --framedrop actually enable and disable the vo_vdpau framedrop
code, and increment the number of dropped frames correctly.
The code path for other VOs should be equivalent. The vo_vdpau behavior
should, except for the improvements mentioned above, be mostly
equivalent as well. One minor change is that frames "shown" during
preemption are always counted as dropped.
Remove the statement from the manpage that vo_vdpau is the default; this
hasn't been the case for a while.
vc->vsync_interval and vsync_interval should be the same value, but
actually vc->vsync_interval was updated after vsync_interval was
initialized. This was probably not intended. Fix this by removing the
duplicate local variable. There were probably no bad effects.
When compiling semaphore_osx.c on win32, the following error happened:
/usr/i686-w64-mingw32/include/semaphore.h:160:6: error: unknown type name 'mode_t'
This is because this system header references symbols that are not
defined anywhere. This is clearly a bug in pthreads-w32, but has
been known and unfixed since 2012, so add a hack to fix it.
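As a rough sketch of such a hack (not necessarily the exact one used
here), the missing type can simply be provided before the broken header
is included:

    #ifdef __MINGW32__
    // pthreads-w32's semaphore.h uses mode_t without defining it;
    // on mingw-w64, sys/types.h provides it.
    #include <sys/types.h>
    #endif
    #include <semaphore.h>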
We build semaphore_osx.c this way because it saves us an extra configure
check. On win32, Linux, etc. it contains nothing but
"#include <semaphore.h>".
Should fix #1108.
Be less clever, and restore the volume state even with AOs like pulse,
which have per-application audio.
Before this commit we didn't do this, because the volume is global (even
if per-application), so the volume will persist between invocations. But
to me it looks like always restoring is less tricky and makes for
easier-to-understand semantics.
Also, don't always unmute on exit. Unmuting was done even with ao_pulse,
and interfered with user expectations (see #1107).
This might annoy some users, because mpv will change the volume all the
time. We will see.
Fixes#1107.
Follow-up to the previous commit.
This is probably confusing from a user point of view, since this field
shouldn't show up normally anymore. (Before this commit, it could show
up sporadically when a slow operation was performed during playback,
such as switching fullscreen.)
Normally, feeding a packet to the decoder should always return a frame
_if_ we received a frame before. So while we can't know exactly whether
a frame was dropped, at least the normal case is easily detectable.
This means we display something closer to the actual framedrop count,
instead of a bad guess.
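In pseudocode, the heuristic amounts to something like this (a
simplified sketch with made-up names, not the actual decoder code):

    #include <stdbool.h>

    struct dec_state {
        bool received_a_frame;  // a frame was decoded at some point
        int dropped_frames;     // best guess at the framedrop count
    };

    // Called once per fed packet; got_frame says whether the decoder
    // returned a picture for it.
    static void update_drop_count(struct dec_state *s, bool got_frame)
    {
        if (s->received_a_frame && !got_frame)
            s->dropped_frames++;        // the normal "dropped" case
        if (got_frame)
            s->received_a_frame = true;
    }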
This is the "old" framedropping mode (derived from MPlayer). At least in
the mplayer2/mpv source base, it stopped working properly years ago (or
maybe it never worked properly). For one, it depends on the video
framerate, and assumes that the framerate is constant. Another problem
was that it could lead to a frozen video display: video could fall so
far behind that it couldn't recover from framedropping.
Make some small changes to improve this.
Don't use the current audio position to check how much we are behind.
Instead, use the last known A/V difference. last_av_difference is
updated only when a video frame is scheduled for display. This means we
can stop dropping once we're done catching up, even if video is
technically still behind. What helps us here is that this forces a video
frame to be displayed after a while. Likewise, we reset the
dropped_frames count only when scheduling a new frame for display as
well.
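A rough sketch of the resulting decision (hypothetical names; the real
code is spread over the playloop):

    #include <stdbool.h>

    struct drop_state {
        double last_av_difference; // updated only when a frame is scheduled
        int dropped_frames;        // reset only when a frame is scheduled
    };

    // Should the next frame be dropped instead of displayed?
    // frame_duration is derived from the demuxer-reported FPS.
    static bool should_drop(const struct drop_state *s, double frame_duration)
    {
        // last_av_difference is frozen while we drop, and dropped_frames
        // keeps growing, so this eventually turns false and forces a
        // frame to be displayed (which updates/resets both fields).
        return s->last_av_difference > (s->dropped_frames + 1) * frame_duration;
    }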
Some inspiration was taken from earlier work by xnor (see issue #620),
although the implementation turned out quite different.
This still uses the demuxer-reported (possibly broken) FPS value. It
also doesn't account for filters changing FPS. We can't do much about
this, because without decoding _and_ filtering, we just can't know how
long a frame is. In theory, you could derive that from the raw packet
timestamps and the filter chain contents, but actually doing this is
too involved. Fortunately, the main thing the FPS affects is actually
the displayed framedrop count.
Sometimes, --af=hrtf produces heavy artifacts or silence. It's possible
that this commit fixes these issues. My theory is that usually, the
uninitialized coefficients quickly converge to sane values as more audio
is filtered, which would explain why there are often artifacts on init,
with normal playback after that. It's also possible that sometimes, the
uninitialized values were NaN or inf, so that the artifacts (or silence)
would never go away.
Fix this by initializing the coefficients to 0. I'm not sure if this is
correct, but certainly better than before.
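The change itself boils down to something like this (a sketch; the
actual layout in af_hrtf differs):

    #include <string.h>

    struct hrtf_state { float coeff_l[128], coeff_r[128]; /* hypothetical */ };

    static void init_state(struct hrtf_state *s)
    {
        // Start from silence instead of whatever happened to be in memory
        // (which might even be NaN/inf and then never decays).
        memset(s->coeff_l, 0, sizeof(s->coeff_l));
        memset(s->coeff_r, 0, sizeof(s->coeff_r));
    }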
See issue #1104.
Lets us set a different rate and delay.
Needed for the following commit where we set rate and delay reported by weston.
But only if the option native-keyrepeat is set.
Uses the new mechanism introduced in the previous commit.
Depending on the actual filter, this distributes CPU load more evenly
over time, although it probably doesn't matter.
Consider a filter which turns 1 frame into 2 frames (such as a
deinterlacer). Until now, we forced filters to produce all output frames
at once. This was done for simplicity.
Change the filter API such that a filter can produce frames
incrementally.
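To illustrate the difference with a toy model (made-up names, not the
real vf API): a "frame doubler" that hands out its two output frames one
call at a time, instead of both at once:

    #include <stdbool.h>

    struct toy_filter {
        int pending;    // output frames still queued
        int last_input; // the frame (just an int here) being duplicated
    };

    static void toy_feed(struct toy_filter *f, int frame)
    {
        f->last_input = frame;
        f->pending = 2;          // 1 input frame -> 2 output frames
    }

    static bool toy_output(struct toy_filter *f, int *out)
    {
        if (f->pending == 0)
            return false;        // no output yet; feed more input
        *out = f->last_input;
        f->pending--;
        return true;
    }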
Rename video_decode_and_filter to video_filter, and add a new
video_decode_and_filter function. This function now calls the decoder.
This is done so that we can check filters a second time after decoding,
which avoids a useless playloop iteration.
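Schematically, the new call structure is something like this (simplified;
"decode_one_frame" is a made-up stand-in for the decoding step):

    #include <stdbool.h>

    bool video_filter(void *mpctx);      // run the filter chain only
    bool decode_one_frame(void *mpctx);  // feed the decoder

    static bool video_decode_and_filter(void *mpctx)
    {
        if (video_filter(mpctx))
            return true;                 // filters already had a frame
        if (!decode_one_frame(mpctx))
            return false;
        return video_filter(mpctx);      // check the filters a second time
    }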
(This and the previous commits are really just microoptimizations, which
simply reduce the number of times the playloop has to recheck
everything.)
Move the check to a function. Run the check a second time after
decoding/filtering. This second check is strictly speaking redundant
(which is why it wasn't done until now), but it avoids a useless
playloop iteration.
Move this code below the code that "shifts" the newly filtered frame.
This allows us to skip a useless playloop iteration later, because
obviously we need to filter a new frame after the previous frame has
been "shifted", and not before that.
Until now, you could override only level 3 with --osd-status-msg. Extend
this, and add --osd-msg1 to --osd-msg3 (one for each OSD level). OSD
level 0 always means disable OSD, so that isn't included.
--osd-msg3 corresponds to --osd-status-msg, but they're not exactly the
same. To allow more customization, --osd-msgN do not include the OSD
symbol. The symbol can be manually added with "${osd-sym-cc}". We keep
the "old" option for some short-term compatibility.
--osd-msg1 should be particularly useful; for example you could do:
--osd-msg1='${?pause==yes:${osd-sym-cc}}'
to display a "paused" symbol when paused, and nothing during normal
playback. (Although admittedly, the syntax is quite a bit of work.)
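In the same spirit, a level-3 line could re-add the symbol by hand, e.g.
(an untested sketch; assumes the time-pos property):
--osd-msg3='${osd-sym-cc} ${time-pos}'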
We don't allow this by default, because it would be silly if random
external data (like filenames or file tags) could accidentally trigger
them.
Add a property that magically disables this ASS tag escaping.
Note that malicious input could still disable ASS tag escaping by
itself. This would be annoying but harmless.
Pausing/unpausing while the audio device can't be reopened, and then
unpausing again when the device is finally reopened, can hang the
player for a while.
This happens because p->prepause_samples grows without bounds each
time the player is unpaused while the device is lost. On unpause,
ao_oss plays prepause_samples of silence to compensate for A/V timing
issues due to the partially lost buffer (we can't pause the device at
an arbitrary sample position, and the current period will be lost).
This in turn will make the player appear to be frozen if too much
audio is queued. (Normally, play() must never block, but here it
happens because more data is written than get_space() reports. A
better implementation would never let prepause_samples grow larger
than the period size.)
The unbounded growth happens because get_space() always reports that
the device can be written to while the device is lost. So limit it to
200ms. (A better implementation would limit it to the period size.)
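A sketch of the kind of clamp meant here (hypothetical names, and
assuming we track how much was written while the device is lost):

    // While the device is lost, report at most ~200ms of free space, so
    // prepause_samples (and the silence played on unpause) stays bounded.
    static int get_space_while_lost(int samplerate, int written_while_lost)
    {
        int max_samples = samplerate / 5;   // 200ms worth of samples
        int space = max_samples - written_while_lost;
        return space > 0 ? space : 0;
    }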
Also see #1080.
There's no reason to let the core wait until the frame is done
displaying. In practice, the core normally didn't need this additional
wakeup, and the VO was quick enough to fetch the new frame, before the
core even attempted to queue a new frame. But it wasn't entirely clean,
and the correct wakeup handling might matter in some cases.
With default settings, this allows you to hit the 100% mark (with
default --softvol-max in the middle) even if you've reached min or max
volume before. This is because 50 is not divisible by 3 (old default)
but by 2 (new default).
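To spell out the arithmetic: stepping by 3 from 0 gives 0, 3, ..., 48,
51, ... and skips 50, while stepping by 2 gives 0, 2, ..., 48, 50 and
lands on it exactly.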
Not really sure why there still can be issues with higher --softvol-max
and --volstep=1, but this is where I stop caring.
If --write-filename-in-watch-later-config is used, and the filename
contains newline characters (as generally allowed on Unix), then the
newline will be written to the resume file literally, and the parts
after the newline character are interpreted as options.
This is possibly security relevant.
Change newline characters (and in fact any other special characters)
to '_'.
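As a sketch of the idea (the exact set of characters treated as special
is an assumption here):

    // Replace anything that could break the resume-file syntax (newlines
    // and other control characters) with '_', in place.
    static void sanitize_filename(char *s)
    {
        for (; *s; s++) {
            unsigned char c = *s;
            if (c < 32 || c == 127)
                *s = '_';
        }
    }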
Reported as #1099 (this commit is a reimplementation of the proposed
pull request).
CC: @mpv-player/stable