Apparently making ESC exit fullscreen mode is the more popular
convention compared to ESC quitting the program.
It was also concluded that ESC should do nothing when the window is
already in the normal state.
See discussion in #973.
This means they get special handling for asynchronously aborting
playback, even if the player is "stuck".
Also document "stop". It seems somewhat useful for client API users
(although that will be implemented properly only in the following
commits).
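As a reference for client API users, a hedged minimal sketch of issuing
"stop" through libmpv (assumes an initialized mpv_handle; the helper
name is made up):

    #include <mpv/client.h>

    // "stop", like "quit", aborts playback asynchronously, even if the
    // demuxer is stuck on a dead network stream.
    static int send_stop(mpv_handle *ctx)
    {
        const char *cmd[] = {"stop", NULL};
        return mpv_command(ctx, cmd);
    }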
This makes the player wait until each script is loaded. Do this to give
the script a chance to set up all its event handlers. It might also be
useful to allow a script to change options that matter for playback.
While waiting for a script to be loaded, the player actually accepts
input. This is needed because the scripts can execute player commands
anyway while they are being "loaded". The player won't react to most
commands though: it can't quit or navigate the playlist in this state.
For deciding whether a script is finally loaded, we use a cheap hack: if
mpv_wait_event() is called, it's considered loaded. Let's hope this is
good enough. I think it's better than introducing explicit API for this.
Although I'm sure this will turn out to be too simplistic at some point
in the future, the same would probably happen with a more explicit API.
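For illustration, a hedged sketch of the event loop a client/script
effectively ends up in; the first call to mpv_wait_event() is what the
player treats as "the script is loaded":

    #include <mpv/client.h>

    static void script_event_loop(mpv_handle *ctx)
    {
        while (1) {
            // Blocks until the next event; entering this call for the
            // first time is what marks the script as loaded.
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
            // ... dispatch the event to the script's handlers ...
        }
    }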
This was kept in the codebase because it is slightly faster than --vo=opengl
on really old Intel cards (from the GMA era). Time to kill it, and let it rest.
Fixes #1061
The Windows version of tmpfile is actually pretty broken. It tries to
create the file in the root directory of the current drive, which means
on Vista and up, it normally fails due to insufficient permissions.
Replace it with a version that uses GetTempPath.
Also remove the Windows-specific note about automatic deletion of the
cache file. FILE_FLAG_DELETE_ON_CLOSE is available in NT, and it should
be pretty reliable.
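A hedged sketch of the replacement idea (the function name is
illustrative, not the actual mpv helper):

    #include <windows.h>
    #include <io.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>

    static FILE *win32_tmpfile(void)
    {
        wchar_t dir[MAX_PATH + 1], path[MAX_PATH + 1];
        if (!GetTempPathW(MAX_PATH + 1, dir))
            return NULL;
        if (!GetTempFileNameW(dir, L"mpv", 0, path))
            return NULL;
        // The OS removes the file when the last handle is closed.
        HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               CREATE_ALWAYS,
                               FILE_ATTRIBUTE_TEMPORARY |
                               FILE_FLAG_DELETE_ON_CLOSE, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return NULL;
        int fd = _open_osfhandle((intptr_t)h, 0);
        if (fd < 0) {
            CloseHandle(h);
            return NULL;
        }
        FILE *fp = _fdopen(fd, "w+b");
        if (!fp)
            _close(fd);
        return fp;
    }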
--hls-bitrate=min/max lets you select the min or max bitrate. That's it.
Something more sophisticated might be possible, but is probably not even
worth the effort.
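Usage looks like this (the URL is just a placeholder):

    mpv --hls-bitrate=min http://example.com/stream.m3u8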
Until now, you had to use --load-unsafe-playlists or --playlist to get
playlists loaded. Change this and always load playlists by default.
This still attempts to reject unsafe URLs. For example, trying to invoke
the libavdevice pseudo-demuxer is explicitly prevented. Local paths and any
http links (and some more) are always allowed.
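A hedged sketch of the kind of check meant here (not the actual list or
function; the real one allows more protocols):

    #include <stdbool.h>
    #include <string.h>

    static bool playlist_entry_allowed(const char *url)
    {
        // Plain local paths (no "proto://" prefix) and http(s) links are
        // fine; pseudo-demuxers like libavdevice are rejected.
        if (!strstr(url, "://"))
            return true;
        return !strncmp(url, "http://", 7) ||
               !strncmp(url, "https://", 8);
    }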
In particular, use the note markup. The issue about rounded timestamps
is mostly with respect to Matroska (which usually rounds them to
milliseconds), which somewhat adds to the reliability problem.
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, and vf_stereo3d's "in" and "out" modes,
are all the same might be an oversimplification - we'll see.
See issue #1045.
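A heavily hedged sketch of the decision (illustrative stand-ins, not
mpv's real types or names):

    #include <stdbool.h>

    enum stereo_mode { STEREO_MONO, STEREO_SBS_LR, STEREO_TB_LR /* ... */ };

    struct vid_params { enum stereo_mode stereo3d; };

    // Would the VO need help? Here we pretend it only handles mono.
    static bool vo_supports_3d(enum stereo_mode m) { return m == STEREO_MONO; }

    // Insert an automatic stereo3d conversion filter only if the params
    // carry a 3D mode the VO can't present directly.
    static bool need_stereo3d_filter(const struct vid_params *p)
    {
        return p->stereo3d != STEREO_MONO && !vo_supports_3d(p->stereo3d);
    }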
Since we have to be portable, our options for creating temporary files
are somewhat limited. tmpfile() happens to be available everywhere, so
use that. This function doesn't allow having a "visible" filename or
location, so we use the magic string "TMP" for this.
A (hopefully) temporary hack to make stream switching delays tolerable.
It's not clear how this should be handled (either executing a precise
seek on track switching, or always enabling all streams), so get this
issue out of the way for now by picking a rather low value.
Add the --cache-secs option, which literally overrides the value of
--demuxer-readahead-secs if the stream cache is active. The default
value is very high (10 seconds), which means it can act as a network
cache.
Remove the old behavior of trying to pause once the byte cache runs
low. Instead, do something similar with the demuxer cache. The nice
thing is that we can guess how many seconds of video it has cached,
and we can make better decisions. But for now, apply a relatively
naive heuristic: if the cache is below 0.5 secs, pause, and wait
until at least 2 secs are available.
Note that due to timestamp reordering, the estimated cached duration
of video might be inaccurate, depending on the file format. If the
file format has DTS, it's easy, otherwise the duration will seemingly
jump back and forth.
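A hedged sketch of the heuristic (thresholds from above, names made up):

    #include <stdbool.h>

    static const double CACHE_LOW_SECS  = 0.5; // pause below this
    static const double CACHE_HIGH_SECS = 2.0; // resume at this readahead

    // cached_secs: estimated seconds of demuxed data ahead of playback.
    // Returns the new "paused for cache" state.
    static bool update_cache_pause(bool paused_for_cache, double cached_secs)
    {
        if (!paused_for_cache && cached_secs < CACHE_LOW_SECS)
            return true;   // underrun: pause and let the cache refill
        if (paused_for_cache && cached_secs >= CACHE_HIGH_SECS)
            return false;  // enough buffered again: resume
        return paused_for_cache;
    }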
Add a new parameter 'p' to the gaussian filter. The new formula uses
a different base taken from the fmtconv plugin, so that the new
parameter is exactly the same as the one used in Avisynth and
Vapoursynth.
The new default value is 2 / log(2) * 10; with this default, the kernel
conforms to the original one taken from vector-agg.
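A hedged sanity check of that default (assuming the Avisynth-style base
2^(-p * x^2 / 10)): with p = 2 / log(2) * 10, the exponent becomes
-2 * x^2 / log(2) in base 2, which equals exp(-2 * x^2), i.e. the
original vector-agg kernel.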
Add two new options to make it possible for the user to set the radius
for some of the filters that have no fixed radius.
Also add three new filters that support the new radius parameter.
The little lua snippet at #488 as well as the actual implementation
seems to indicate that not expanding properties is indeed the correct
behavior. Document that.
Signed-off-by: wm4 <wm4@nowhere>
--demuxer-readahead-secs now controls how many seconds the demuxer
should read ahead. This is based on the raw packet
timestamps. It's not always very exact. For example, h264 in Matroska
does not store any linear timestamps (only PTS values which are going
to be reordered by the decoder), so this heuristic is usually off by
several hundred milliseconds.
The decision whether to read ahead is basically OR-ed with the other
--demuxer-readahead-packets options. Change the manpage descriptions
to subtly convey these semantics.
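Roughly (a hedged sketch, not the actual code):

    #include <stdbool.h>

    // Keep reading ahead as long as *either* limit is not reached yet.
    static bool want_readahead(double cached_secs, int cached_packets,
                               double opt_readahead_secs,
                               int opt_readahead_packets)
    {
        return cached_secs < opt_readahead_secs ||
               cached_packets < opt_readahead_packets;
    }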
Since the display FPS is currently detected on X11 only (and even there
it's known to be wrong on certain setups), it seems like a good idea to
make this user-configurable.
I'm not sure about the merit, though it does print nice numbers if debug
output is enabled.
Basically, this tries to achieve results similar to the glFinish()
business, but again it entirely depends on the drivers whether this
does anything meaningful, or whether it's actively harmful.
It seems that at least on nvidia systems with compositing disabled, we
can get it to block deterministically on the actual vsync event, which
should improve framedropping.
This mostly uses the same idea as with vo_vdpau.c, but much simplified.
On X11, it tries to get the display framerate with XF86VM, and limits
the frequency of new video frames against it. Note that this is an old
extension, and is confirmed not to work correctly with multi-monitor
setups. But we're using it because it was already around (it is also
used by vo_vdpau).
This attempts to predict the next vsync event by using the time of the
last frame and the display FPS. Even if that goes completely wrong,
the results are still relatively good.
On other systems, or if the X11 code doesn't return a display FPS, a
framerate of 1000 is assumed. This is infinite for all practical
purposes, and means that only frames which are definitely too late are
dropped. This probably has worse results, but is still useful.
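A hedged sketch of the prediction (names and details made up; the real
code has to deal with more corner cases):

    // Estimate the first vsync after `now`, given when the last frame
    // was displayed and the display FPS (1000 if unknown).
    static double predict_next_vsync(double now, double last_flip, double fps)
    {
        double interval = 1.0 / fps;
        double periods = (now - last_flip) / interval;
        // Round up to the next whole vsync period after `now`.
        return last_flip + ((long long)periods + 1) * interval;
    }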
"--framedrop=yes" is basically replaced with "--framedrop=decoder". The
old framedropping mode is kept around, and should perhaps be improved.
Dropping on the decoder level is still useful if decoding itself is too
slow.
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and certain other features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As a consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
Completely useless, and could accidentally be enabled by cycling
framedrop modes. Just get rid of it.
But still allow triggering the old code with --vd-lavc-framedrop, in
case someone asks for it. If nobody does, this new option will be
removed eventually.