I was unhappy with the old way of handling buffers, especially resizing. But my
original plan to use wl_shm_pool_resize wasn't as good as I initially thought.
I might get back to it.
With the new buffer pools, it is now possible to select triple
buffering. The buffer pools are also needed for the upcoming
subsurfaces for OSD and subtitles.
I hope this change was worth it.
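For illustration, the rough shape of such a pool (a sketch only; the
buffer count, sizes, and the shm setup are assumed, not copied from
the actual code):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <wayland-client.h>

    // One pool sized for three buffers is what enables triple buffering.
    static void create_triple_buffers(struct wl_shm *shm, int width,
                                      int height, int stride,
                                      struct wl_buffer *bufs[3])
    {
        int size = 3 * stride * height;
        int fd = memfd_create("mpv-shm", 0); // Linux-only; shm_open works too
        ftruncate(fd, size);
        struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
        for (int i = 0; i < 3; i++) {
            bufs[i] = wl_shm_pool_create_buffer(pool, i * stride * height,
                                                width, height, stride,
                                                WL_SHM_FORMAT_ARGB8888);
        }
        wl_shm_pool_destroy(pool); // the buffers keep the storage alive
        close(fd);
    }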
Use of these is "discouraged", but they're there to select these
special cases with the "aspect" property. They really should use some
sort of choice option type, but since it would take some work to make
choices work with float values, the simple and dumb alternative was
picked.
Return the error Lua-style, instead of raising it as a Lua error.
This is better, because raising errors is reserved for more "fatal"
conditions.
Pretending they're exceptions and trying to do exception-style error
handling will just lead to pain in this language.
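On the C side the convention looks roughly like this (a sketch;
check_command() is a made-up helper, not a real mpv function):

    #include <lua.h>
    #include <lauxlib.h>

    extern const char *check_command(const char *cmd); // hypothetical

    // On failure, return nil plus an error string to the caller,
    // instead of calling lua_error()/luaL_error().
    static int script_run_command(lua_State *L)
    {
        const char *cmd = luaL_checkstring(L, 1);
        const char *err = check_command(cmd); // NULL on success
        if (err) {
            lua_pushnil(L);
            lua_pushstring(L, err);
            return 2; // Lua side: local ok, err = run_command(...)
        }
        lua_pushboolean(L, 1);
        return 1;
    }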
Print a warning if a library has mismatched compile-time and
link-time versions.
Refuse to work if the compile-time and link-time versions are a mix of
ffmpeg and libav. We print an error message and call exit(). Since
we'd randomly crash anyway, I think this is ok.
This doesn't catch the case where you e.g. use an ffmpeg libavcodec
and a libav libavformat, which would of course crash just as quickly,
but I think this checks enough already.
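A sketch of how such a check can work, relying on the convention that
ffmpeg sets the micro version to >= 100 while libav keeps it below
(the real code may differ in details):

    #include <stdio.h>
    #include <stdlib.h>
    #include <libavcodec/avcodec.h>

    static void check_avcodec_version(void)
    {
        unsigned compiled = LIBAVCODEC_VERSION_INT; // compile time
        unsigned linked = avcodec_version();        // link/run time
        if (compiled != linked)
            fprintf(stderr, "warning: libavcodec version mismatch "
                    "(compiled %u, linked %u)\n", compiled, linked);
        // The micro version is the lowest 8 bits of the version int.
        if (((compiled & 0xFF) >= 100) != ((linked & 0xFF) >= 100)) {
            fprintf(stderr, "fatal: mixing ffmpeg and libav libraries\n");
            exit(1);
        }
    }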
This library will export the client API functions.
Note that this doesn't allow compiling the command line player to
link against this library yet. The reason is that there's lots of
weird stuff required to set up the execution environment (mostly
Windows and OSX specifics), as well as things which are out of scope
of the client API and which every application has to do on its own.
However, since the mpv command line player basically reuses functions
from the mpv core to implement these things, it's not very easy to
separate the command line player from the mpv core.
This is partial only, and it still accesses some MPContext internals.
Specifically, chapter and track lists are still read directly, and OSD
access is special-cased too.
The OSC seems to work fine, except for the fast-forward/backward
buttons. These buttons behave differently, because the OSC code makes
certain assumptions about how often its update code is called.
The Lua interface changes slightly.
Note that this has the odd property that the Lua script and the video
start at the same time, asynchronously. If this becomes an issue,
explicit synchronization could be added.
Add a client API, which is intended to be a stable API to get some rough
control over the player. Basically, it reflects what can be done with
input.conf commands or the old slavemode. It will replace the old
slavemode (and enable the implementation of a new slave protocol).
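A minimal client along these lines (a sketch of intended usage; error
handling mostly elided):

    #include <mpv/client.h>

    int main(int argc, char *argv[])
    {
        if (argc != 2)
            return 1;
        mpv_handle *ctx = mpv_create();
        if (!ctx || mpv_initialize(ctx) < 0)
            return 1;
        const char *cmd[] = {"loadfile", argv[1], NULL};
        mpv_command(ctx, cmd);
        while (1) {
            mpv_event *event = mpv_wait_event(ctx, -1); // block for events
            if (event->event_id == MPV_EVENT_SHUTDOWN)
                break;
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }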
This avoids trouble if another mpv instance is initialized in the same
process.
Since timeBeginPeriod/timeEndPeriod are hereby not easily matched
anymore, use an atexit() handler to call timeEndPeriod, so that we
can be sure these calls are matched, even if we allow multiple
initializations later when introducing the client API.
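Roughly (a sketch; the guard against repeated registration is
assumed):

    #include <windows.h>
    #include <stdlib.h>

    static void restore_timer_resolution(void)
    {
        timeEndPeriod(1);
    }

    static void request_timer_resolution(void)
    {
        static int once;
        if (once)
            return;
        once = 1;
        timeBeginPeriod(1); // 1 ms timer resolution; needs winmm
        atexit(restore_timer_resolution); // matched at process exit
    }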
It's quite possible to overflow the calculation by setting the timeout
to high values. Limit it to INT_MAX, which should be safe. The issue is
mainly the secs variable.
timespec.tv_sec will normally be 64 bit on sane systems, and we assume
it can't overflow by adding INT_MAX to it.
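The clamping looks about like this (a sketch with assumed names):

    #include <limits.h>
    #include <time.h>

    static struct timespec timeout_to_abstime(double timeout_secs)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        if (timeout_secs > INT_MAX)
            timeout_secs = INT_MAX; // keep the int conversion below safe
        int secs = (int)timeout_secs;
        ts.tv_sec += secs; // 64 bit on sane systems; adding INT_MAX is fine
        ts.tv_nsec += (long)((timeout_secs - secs) * 1e9);
        if (ts.tv_nsec >= 1000000000L) {
            ts.tv_sec += 1;
            ts.tv_nsec -= 1000000000L;
        }
        return ts;
    }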
This skipped all audio packets before the first video key frame was
found. I'm not really sure why this would be needed; most likely it
isn't. So get rid of it. Even if audio packets are returned to the
player too soon, the player will sync the audio start to the video
start by decoding and discarding audio data.
Note that although the removed code was just added in the previous
commit, it merely preserved the old semantics that demux_mkv always
followed. This commit removes these special semantics.
v_skip_to_keyframe is set to true while non-keyframe video packets are
skipped. Until now, audio packets were also skipped when doing this. I
can't see any good reason why this would be done, but for now I want to
keep the old logic when audio+video seeks are done.
However, for audio-only mode, do proper seeking, which also fixes
behavior when trying to seek past the end of the file: playback is
terminated properly, instead of starting playback at the start of the
last cluster.
Note that a_no_timecode_check is used only for audio+video seek. I'm
not sure what this is needed for, but it might influence A/V sync after
seeking.
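The resulting filtering, very roughly (a sketch; the struct and the
names are assumed, not the real demux_mkv internals):

    #include <stdbool.h>

    struct seek_state {
        bool v_skip_to_keyframe; // skipping until the next video keyframe
        bool a_skip_to_keyframe; // old behavior, audio+video seeks only
    };

    static bool drop_packet(struct seek_state *st, bool is_video,
                            bool keyframe)
    {
        if (is_video && st->v_skip_to_keyframe && !keyframe)
            return true; // drop non-keyframe video packets
        if (!is_video && st->v_skip_to_keyframe && st->a_skip_to_keyframe)
            return true; // audio is no longer skipped in audio-only mode
        return false;
    }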
This sometimes happened when changing playback speed (=
reinitializing audio) after seeking or at playback start. The
assertion in audio.c:441 was triggered, because
buffer_playable_samples wasn't reset correctly when the audio buffer
was cleared or shortened. The assertion is correct, and should hold at
any time.
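In other words, something along these lines was missing (a sketch
with assumed names, not the actual audio.c code):

    struct buffer {
        int num_samples;              // how much data is buffered
        int buffer_playable_samples;  // must never exceed num_samples
    };

    static void buffer_shrink(struct buffer *buf, int new_samples)
    {
        buf->num_samples = new_samples;
        if (buf->buffer_playable_samples > new_samples)
            buf->buffer_playable_samples = new_samples; // keep invariant
    }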
I could not see any difference whatsoever, but for usage with a 3DLUT
there's zero performance difference, so we might as well follow the
spec to the letter.
On Windows, no ANSI control sequences are available, so we can't easily
clear lines, move the cursor, etc. It's yet to be decided how this
should be handled (emulate ANSI escapes in osdep/terminal-win.c, or
provide abstracted terminal API functions to unify the Linux and Windows
code).
For now, this fixes the regression that was introduced earlier by the
status line rewrite. It doesn't fix all aspects of status line and
terminal OSD handling, as can be clearly seen by the unconditional use
of terminal_erase_to_end_of_line further down the changed code.
Fixes github issue #499 (sort of).
Trying to set a non-existent flag (like +keepside on Libav) causes
libavutil to print an incomprehensible warning (something about eval;
probably the overengineered libavutil option parser tripping over the
'+' normally used for flags, and trying to interpret it as a formula).
There's apparently no easy way to check for the existence of a flag,
so add some more ifdeffery to shut it up.
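The guard amounts to something like this (a sketch; telling the forks
apart via the micro version is one way to do it, the real check may
differ):

    #include <libavformat/avformat.h>
    #include <libavutil/opt.h>

    static void set_demuxer_flags(AVFormatContext *avfc)
    {
    #if LIBAVFORMAT_VERSION_MICRO >= 100
        // ffmpeg-only flag; on Libav the option parser would print its
        // incomprehensible "eval" warning instead.
        av_opt_set(avfc, "fflags", "+keepside", 0);
    #endif
    }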
The code removed from handle_input_and_seek_coalesce() did two things:
1. If there's a queued seek, stop accepting non-seek commands, and delay
them to the next playloop iteration.
2. If a seek is executing (i.e. the seek was unqueued, and now it's
trying to decode and display the first video frame), stop accepting
seek commands (and in fact all commands that were queued after the
first seek command). This logic is disabled if seeking started longer
than 300ms ago. (To avoid starvation.)
I'm not sure why 1. would be needed. It's still possible that a command
immediately executed after a seek command sees a "seeking in progress"
state, because it affects queued seeks only, and not seeks in progress.
Drop this code, since it can easily lead to input starvation, and I'm
not aware of any disadvantages.
The logic in 2. is good to make seeking behave much better, as it
guarantees that the video display is updated frequently. Keep the core
idea, but implement it differently. Now this logic is applied to seeks
only. Commands after the seek can execute freely, and like with 1., I
don't see a reason why they couldn't. However, in some cases, seeks are
supposed to be executed instantly, so queue_seek() needs an additional
parameter to signal the need for immediate update.
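The changed entry point, roughly (signature assumed; the enum values
are illustrative only):

    #include <stdbool.h>

    struct MPContext; // mpv core, opaque here

    enum seek_type { MPSEEK_RELATIVE, MPSEEK_ABSOLUTE }; // illustrative

    // immediate=true forces the seek to execute on the next playloop
    // iteration; immediate=false lets rapid successive seeks coalesce.
    void queue_seek(struct MPContext *mpctx, enum seek_type type,
                    double amount, bool exact, bool immediate);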
One nice thing is that commands like sub_seek automatically profit
from the seek delay logic. On the other hand, hitting chapter seek
multiple times still does not update the video on chapter boundaries
(as it should).
Note that the main goal of this commit is actually simplification of the
input processing logic and to allow all commands to be executed
immediately.
Legacy GL context creation (glXCreateContext) explicitly requires an
X visual, while the modern one (glXCreateContextAttribsARB) for some
reason does not. So fail only on the legacy code path if we don't find
a visual. Note that vo_x11_config_vo_window() will select a default
visual if a NULL visual is passed to it.
This fixes issue #504. For some reason, glXChooseFBConfig() will
return an fbconfig with no associated visual. (I'm not sure if this is
allowed. They don't always have a visual, but since GLX_X_RENDERABLE
is set and GLX_DRAWABLE_TYPE is (implicitly) set to GLX_WINDOW_BIT,
why would there be no visual?)
Even worse, a test program seems to show that a 16 bit fbconfig is
selected (instead of 24/32 bit), which doesn't sound nice at all. Since
there _are_ better fbconfigs available, glXChooseFBConfig() should
normally sort them by quality, and return the better ones first. It's
worth noting that this function should also prefer GLX_TRUE_COLOR
over anything else, although this comes last in the sort order.
Whatever is going on, requesting GLX_X_VISUAL_TYPE with GLX_TRUE_COLOR
seems to fix it.
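The fix boils down to adding the attribute to the request (a sketch):

    #include <GL/glx.h>

    static GLXFBConfig *choose_fbconfig(Display *dpy, int screen, int *n)
    {
        int attribs[] = {
            GLX_X_RENDERABLE, True,
            GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
            GLX_X_VISUAL_TYPE, GLX_TRUE_COLOR, // avoids the bogus pick
            GLX_RED_SIZE, 8,
            GLX_GREEN_SIZE, 8,
            GLX_BLUE_SIZE, 8,
            None
        };
        return glXChooseFBConfig(dpy, screen, attribs, n);
    }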
There is some logic to discard packets from streams that are not
selected. Run the metadata update code before this, just to make 100%
sure that no metadata updates can be lost when streams are deselected.
(I'm not sure why this logic would be needed, since both libavformat and
the generic demuxer code do this already. But a quick test shows that
av_read_frame() can return a packet from a stream even if the stream has
AVStream.discard set to AVDISCARD_ALL. This happened after stream
switching. Maybe libavformat doesn't discard already queued packets.)
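The ordering described above, roughly (a sketch; update_metadata() is
a hypothetical hook, and the surrounding demuxer loop is simplified):

    #include <libavformat/avformat.h>

    extern void update_metadata(AVFormatContext *avfc); // hypothetical

    static int read_one_packet(AVFormatContext *avfc, AVPacket *pkt)
    {
        int r = av_read_frame(avfc, pkt);
        if (r < 0)
            return r;
        update_metadata(avfc); // before the discard logic below
        if (avfc->streams[pkt->stream_index]->discard == AVDISCARD_ALL) {
            av_packet_unref(pkt);
            return 0; // packet from a deselected stream, dropped
        }
        return 1; // packet passed on
    }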