Add the --cache-secs option, which simply overrides the value of
--demuxer-readahead-secs if the stream cache is active. The default
value is quite high (10 seconds), which means it can act as a network
cache.
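For example, something like the following would keep roughly 30
seconds of demuxed data buffered while the stream cache is enabled
(the URL is purely illustrative):

    mpv --cache=yes --cache-secs=30 http://example.com/stream.mkv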
Remove the old behavior of trying to pause once the byte cache runs
low. Instead, do something similar with the demuxer cache. The nice
thing is that we can estimate how many seconds of video it has cached,
which allows better decisions. But for now, apply a relatively
naive heuristic: if the cache holds less than 0.5 seconds, pause, and
wait until at least 2 seconds are available.
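A minimal sketch of that heuristic; cached_secs(), pause_playback()
and resume_playback() are hypothetical stand-ins for the player
internals:

    #include <stdbool.h>

    #define CACHE_LOW_SECS  0.5   // pause below this
    #define CACHE_HIGH_SECS 2.0   // resume once this much is buffered

    double cached_secs(void);     // estimated seconds of cached video
    void pause_playback(void);
    void resume_playback(void);

    static bool buffering;

    static void update_cache_pausing(void)
    {
        double secs = cached_secs();
        if (!buffering && secs < CACHE_LOW_SECS) {
            buffering = true;
            pause_playback();
        } else if (buffering && secs >= CACHE_HIGH_SECS) {
            buffering = false;
            resume_playback();
        }
    }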
Note that due to timestamp reordering, the estimated cached duration
of video might be inaccurate, depending on the file format. If the
file format provides DTS, the estimate is easy; otherwise, the
duration will seemingly jump back and forth.
Add a new parameter 'p' to the gaussian filter. The new formula uses
a different base, taken from the fmtconv plugin, so that the new
parameter is exactly the same as the one used in Avisynth and
VapourSynth.
The new default value is 2 / log(2) * 10; with this default, the
filter conforms to the original kernel taken from vector-agg.
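The exact expression isn't shown here, but assuming the Avisynth-style
base-2 form, the kernel amounts to something like:

    #include <math.h>

    // Assumed fmtconv-style shape of the kernel; 'p' is the new parameter.
    static double gaussian_weight(double x, double p)
    {
        return pow(2.0, -(p / 10.0) * x * x);
    }
    // With the default p = 2 / log(2) * 10 (about 28.85), the exponent
    // works out to exp(-2 * x * x), i.e. the original vector-agg kernel.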
Add two new options that make it possible for the user to set the
radius for some of the filters with no fixed radius.
Also add three new filters that support the new radius parameter.
The little Lua snippet in #488 as well as the actual implementation
seems to indicate that not expanding properties is indeed the correct
behavior. Document that.
Signed-off-by: wm4 <wm4@nowhere>
--demuxer-readahead-secs now controls how many seconds the demuxer
should read ahead. This is based on the raw packet timestamps, and is
not always very exact. For example, h264 in Matroska does not store
any linear timestamps (only PTS values, which are going to be
reordered by the decoder), so this heuristic is usually off by
several hundred milliseconds.
The decision whether to read ahead is basically OR-ed with the
existing --demuxer-readahead-packets options. Change the manpage
descriptions to subtly convey these semantics.
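For example, with something like the following, the demuxer keeps
reading ahead until both limits are satisfied (values purely
illustrative):

    mpv --demuxer-readahead-secs=2 --demuxer-readahead-packets=64 file.mkv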
Since the display FPS is currently detected on X11 only (and even there
it's known to be wrong on certain setups), it seems like a good idea to
make this user-configurable.
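A user on a 59.94 Hz display could then override the detected value
with something like:

    mpv --display-fps=59.94 file.mkv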
I'm not sure about the merit, though it does print nice numbers if debug
output is enabled.
Basically, this tries to achieve results similar to the glFinish()
business, but again it entirely depends on the drivers whether this
does anything meaningful, or whether it's actively harmful.
It seems that at least on nvidia systems with compositing disabled, we
can get it to block deterministically on the actual vsync event, which
should improve framedropping.
This mostly uses the same idea as with vo_vdpau.c, but much simplified.
On X11, it tries to get the display framerate with XF86VM, and limits
the frequency of new video frames against it. Note that this is an old
extension, and is confirmed not to work correctly with multi-monitor
setups. But we're using it because it was already around (it is also
used by vo_vdpau).
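For reference, a rough sketch of how the refresh rate can be derived
from the XF86VM mode line (error handling trimmed; as noted, this
breaks on multi-monitor setups):

    #include <X11/Xlib.h>
    #include <X11/extensions/xf86vmode.h>

    static double get_display_fps(Display *dpy, int screen)
    {
        int dotclock;               // pixel clock in kHz
        XF86VidModeModeLine mode;
        if (!XF86VidModeGetModeLine(dpy, screen, &dotclock, &mode))
            return 0;
        if (mode.htotal <= 0 || mode.vtotal <= 0)
            return 0;
        // refresh rate in Hz = pixel clock / pixels per frame
        return dotclock * 1000.0 / (mode.htotal * mode.vtotal);
    }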
This attempts to predict the next vsync event by using the time of the
last frame and the display FPS. Even if that goes completely wrong,
the results are still relatively good.
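A sketch of the prediction (names illustrative): given the time of the
last displayed frame and the vsync interval (1 / display FPS), round
up to the next vsync boundary.

    #include <math.h>

    static double predict_next_vsync(double now, double last_flip,
                                     double vsync_interval)
    {
        double intervals = ceil((now - last_flip) / vsync_interval);
        return last_flip + intervals * vsync_interval;
    }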
On other systems, or if the X11 code doesn't return a display FPS, a
framerate of 1000 is assumed. This is infinite for all practical
purposes, and means that only frames which are definitely too late are
dropped. This probably has worse results, but is still useful.
"--framedrop=yes" is basically replaced with "--framedrop=decoder". The
old framedropping mode is kept around, and should perhaps be improved.
Dropping on the decoder level is still useful if decoding itself is too
slow.
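Users who relied on the old behavior can still select it explicitly:

    mpv --framedrop=decoder file.mkv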
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
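Conceptually, the handoff looks roughly like this (all names are
illustrative stand-ins, not the actual internal API):

    #include <stdint.h>

    struct vo;
    struct mp_image;

    int64_t mp_time_us(void);     // current time in microseconds
    void vo_queue_frame(struct vo *vo, struct mp_image *img, int64_t pts);

    static void hand_off_frame(struct vo *vo, struct mp_image *img,
                               int64_t frame_delay_us)
    {
        int64_t pts = mp_time_us() + frame_delay_us;  // realtime target
        vo_queue_frame(vo, img, pts);  // non-blocking handoff; the VO
                                       // thread sleeps until 'pts',
                                       // then displays the frame
    }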
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and certain other features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As a consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.
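If someone does ask, it can be triggered with something like the
following (assuming the option maps to libavcodec's skip_frame
values):

    mpv --vd-lavc-framedrop=all file.mkv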
Split the options into the following sections:
* Playback Control
* Program Behaviour
* Video
* Audio
* Subtitles
* Window
* Disc Devices
* Equalizer
* Demuxer
* Input
* OSD
* Screenshot
* Software Scaler
* Terminal
* TV
* Cache
* Network
* DVB
* PVR
* Miscellaneous
Most options are sorted by usefulness, how often they're used, and how
important they are.
This makes finding the right options easier and adds some sort of structure.
The client API exports this state via events already, but maybe it's
better to explicitly provide this property in order to facilitate its
use in OSD and similar cases.
This is probably nicer. The actual version number doesn't change (other
than the minor being incremented).
The "| 0UL" is to make the type unsigned long int, like it was before.
Handle --term-playing-msg at a better place.
Move MPV_EVENT_TICK hack into a separate function. Also add some words
to the client API that you shouldn't use it. (But better leave breaking
it for later.)
Handle --frames and frame_step differently. Remove the mess from the
playloop, and do it after frame display. Give up on the weird semantics
for audio-only mode (they didn't make sense anyway), and adjust the
manpage accordingly.
Almost nothing was left of it.
The only thing this commit actually removes is support for reading
input commands from stdin. But you can emulate this via:
--input-file=/dev/stdin --input-terminal=no
However, this won't work on Windows. Just use a named pipe.
Some ALSA plugins take non-interleaved audio, but treat it as
interleaved, which results in various funny bugs. Users keep hitting
this issue, and it just doesn't seem worth the trouble.
CC: @mpv-player/stable
Add an option that enables using native PulseAudio auto-updated timing
information, instead of the manual calculations added in mplayer2 times.
You can use --ao=pulse:no-latency-hacks to enable the new code. The code
is almost the same as the code that was removed with commit de435ed5,
but I didn't re-add some bits I didn't understand. Likewise, the option
will disable the code added with that commit.
In my tests this seemed to work well, though the A/V sync display looks
funny when seeking.
The default is still the old behavior.
See issue #959.