On my Windows system this allows smoothmotion to also work perfectly in
windowed mode. There's no real right or wrong here, the only goal being to
always hit the next vsync. However, in cases where vsync timing is jittery (as
can happen with DWM), this patch aims at the middle of the vsync cycle, so it
is affected as little as possible by such jitter.
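A minimal sketch of the aiming idea, with made-up names (this is not the
patch's actual code): rather than scheduling right at a vsync edge, the target
lands in the middle of the vsync cycle, so moderate jitter in the reported
vsync times doesn't push a frame into a neighboring cycle.

    #include <stdint.h>

    // Hypothetical helper, not the actual patch code: compute a display target
    // in the middle of the vsync cycle that contains the frame's ideal display
    // time (assumes ideal_display_us >= last_vsync_us).
    int64_t mid_cycle_target_us(int64_t last_vsync_us, int64_t vsync_interval_us,
                                int64_t ideal_display_us)
    {
        int64_t cycles = (ideal_display_us - last_vsync_us) / vsync_interval_us;
        // Aiming at the cycle middle keeps up to half an interval of jitter
        // harmless, since the target never sits right next to a vsync edge.
        return last_vsync_us + cycles * vsync_interval_us + vsync_interval_us / 2;
    }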
Add 1 vsync interval of "slack" before deciding to drop the first frame. This
should help in cases of timing jitter (sleep duration, container timestamps,
compositor vsync timing, etc.). Once the drop threshold has been crossed, keep
dropping until timing is perfectly aligned again. This prevents crossing the
drop threshold back and forth repeatedly, and therefore makes the behavior
more resilient against frame drops.
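A rough sketch of the slack-plus-hysteresis behavior described above
(hypothetical names and state, not the real vo.c logic):

    #include <stdbool.h>
    #include <stdint.h>

    struct drop_state {
        bool dropping;
    };

    // Hypothetical check, not mpv's code: frame_end_us is when the current
    // frame's display window ends, next_vsync_us is the vsync we could hit.
    bool should_drop_frame(struct drop_state *st, int64_t frame_end_us,
                           int64_t next_vsync_us, int64_t vsync_interval_us)
    {
        int64_t lateness = next_vsync_us - frame_end_us;
        if (!st->dropping) {
            // Tolerate up to one extra vsync interval of lateness ("slack").
            st->dropping = lateness >= vsync_interval_us;
        } else {
            // Once dropping has started, keep dropping until the timing is
            // perfectly aligned again.
            st->dropping = lateness > 0;
        }
        return st->dropping;
    }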
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which will hopefully be disabled on
desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding.) So I can live with this compromise
of having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
This caused complaints because the fps was effectively rounded to
microsecond boundaries via the vsync interval (it seemed convenient to
store only the vsync interval). So store the fps as float too, and let
the "display-fps" property return it directly.
Requested change in behavior.
Note that we set the assumed "infinite" display_fps to 1e6, which
conveniently lets vo_get_vsync_interval() return a dummy value of 1,
which can be easily checked against, and still avoids doing math with
float INFs.
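For illustration only (the real vo_get_vsync_interval() takes a struct vo
pointer, not an fps value): with a microsecond timebase, the assumed display
FPS of 1e6 makes the computed interval degenerate to 1.

    #include <stdint.h>
    #include <stdio.h>

    int64_t vsync_interval_us(double display_fps)
    {
        // 1e6 / 1e6 == 1 for the assumed "infinite" display FPS, so callers
        // can test for the dummy value 1 without doing math on float INFs.
        return (int64_t)(1e6 / display_fps);
    }

    int main(void)
    {
        printf("%lld\n", (long long)vsync_interval_us(1e6)); // prints 1
        return 0;
    }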
I'm not comfortable with VOCTRL_GET_DISPLAY_FPS being called every
frame.
This requires the VO to set VO_EVENT_WIN_STATE if the FPS could have
changed. At least the X11 backend does this.
With mf://, rather long frame durations are common. By default, one
frame takes 1 second. This causes the if branch changed with this commit
to always be taken, which in turn leads to the player not being woken
up correctly. (As a consequence, it "freezes" by waiting for events that
never come, and moving the mouse cursor over the window will wake it up
again and advance video.)
Obviously, the code should account for how long the video frame takes.
The code is probably still not fully correct, but for now this fixes the
issue at hand.
Fixes #1521.
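A hedged sketch of the underlying idea (the names are made up, and the real
playloop logic is more involved): the sleep timeout must never extend past the
end of the current frame, no matter how long that frame is.

    // Hypothetical helper, not mpv's playloop code: cap the playloop sleep so
    // the player wakes up no later than the end of the currently shown frame.
    double playloop_timeout(double now, double frame_start, double frame_duration,
                            double default_timeout)
    {
        double remaining = (frame_start + frame_duration) - now;
        if (remaining < 0)
            remaining = 0;
        return remaining < default_timeout ? remaining : default_timeout;
    }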
Usually, a VO must react to VOCTRL_REDRAW_FRAME in order to redraw the
current screen correctly if video is paused (this is done to update
OSD). But if it's not supported, we can just draw the current image
again in the generic vo.c code.
Unfortunately, this turned out pretty useless, because the VOs which
would benefit from this need to redraw even if there is no image, in
order to draw a black screen in --idle --force-window mode. The way
redrawing is handled in the X11 common code and in vo_x11 and vo_xv is
in the way, and I'm not sure what exactly vo_wayland requires. Other VOs
have a non-trivial implementation of VOCTRL_REDRAW_FRAME, which
(probably) makes redrawing slightly more efficient, e.g. by skipping
texture upload. So for now, no VO uses this new functionality, but since
it's trivial, commit it anyway.
The vo_driver->untimed case is for always forcibly disabling redraw for
vo_lavc and vo_image.
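A simplified illustration of the fallback (all types and names here are
stand-ins, not mpv's actual definitions):

    #include <stdbool.h>
    #include <stddef.h>

    struct mp_image;
    struct vo;

    struct vo_driver {
        bool untimed;                                  // e.g. vo_lavc, vo_image
        int (*control)(struct vo *vo, int request, void *data);
        void (*draw_image)(struct vo *vo, struct mp_image *img);
    };

    enum { VOCTRL_REDRAW_FRAME = 1, VO_TRUE = 1 };

    struct vo {
        const struct vo_driver *driver;
        struct mp_image *current_image;                // last frame, refcounted
    };

    // If the VO handles VOCTRL_REDRAW_FRAME, let it redraw (and update OSD)
    // itself; otherwise just push the current image through draw_image() again.
    void vo_redraw(struct vo *vo)
    {
        if (vo->driver->untimed)
            return;
        if (vo->driver->control(vo, VOCTRL_REDRAW_FRAME, NULL) == VO_TRUE)
            return;
        if (vo->current_image)
            vo->driver->draw_image(vo, vo->current_image);
    }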
At the time screenshot support was added, images weren't refcounted yet,
so screenshots required specialized implementations in the VOs. But now
we can handle these things much more simply. Also see commit 5bb24980.
If there are VOs in the future which can't do this (e.g. they need to
write to the image passed to vo_driver->draw_image), this still could be
disabled on a per-VO basis etc., so we lose no potential performance
advantages.
vo.c queried the VO at initialization whether it wants to be updated on
every display frame, or every video frame. If the smoothmotion option
was changed at runtime, the rendering mode in vo.c wasn't updated.
Just let vo_opengl set the mode directly. Abuse the existing
vo_set_flip_queue_offset() function for this.
Also add a comment suggesting the use of --display-fps to the manpage,
which doesn't have anything to do with the rest of this commit, but is
important to make smoothmotion run well.
The logic disabled framedropping if the frame was interpolated (i.e. the
render call is only done to interpolate between the previous frame, and
the frame before that).
It seems doing this wasn't even necessary, and broke framedrop in
smoothmotion mode. In fact, this code did nothing for display with video
fps below display fps. It did prevent the framedrop counter from going
up, though. So change it so that dropped interpolated frames are never
reported. (Doing so can give confusing results, such as dropping 1000s
of frames on slow operations like video start or changing filters.)
SmoothMotion is a way to time and blend frames made popular by MadVR. Its
intended behaviour is to remove stuttering caused by mismatches between the
display refresh rate and the video fps, while preserving the video's original
artistic qualities (no soap opera effect). It's supposed to make 24fps video
playback on 60hz monitors as close as possible to a 24hz monitor.
Instead of drawing a frame once its pts has passed the vsync time, we
redraw at the display refresh rate, and if we detect that the vsync falls
between two frames, we interpolate them (depending on their position relative
to the vsync). We interpolate as few frames as possible, to keep the resulting
blur effect to a minimum. For example, if we were to play back a 1fps video on
a 60hz monitor, we would blend on at most 1 vsync for each frame (while the
other 59 vsyncs would be rendered as is).
Frame interpolation is always done before scaling, and in linear light when
possible (i.e. when an ICC profile or :srgb is used).
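Conceptually, the blend weight for a given vsync looks roughly like the sketch
below (the real mixing happens on the GPU, before scaling and in linear light
where possible; this is not the renderer's actual code):

    // Hypothetical weight computation: 0.0 shows the previous frame as-is,
    // 1.0 shows the next frame as-is, anything in between blends the two.
    double interpolation_weight(double vsync_time, double prev_pts,
                                double next_pts)
    {
        if (next_pts <= prev_pts || vsync_time <= prev_pts)
            return 0.0;
        if (vsync_time >= next_pts)
            return 1.0;
        return (vsync_time - prev_pts) / (next_pts - prev_pts);
    }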
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This mechanism was
supposed to reduce conversions if many filters were used (with many
incompatible pixel formats), and also to prefer the VO's natively
supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but also the way vf_lavfi and libavfilter work mostly break
the way the old MPlayer mechanism worked. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
We still need to send the VO a duration in these cases. Disabling
framedrop has logically absolutely nothing to do with these cases; it
was overlooked in commit 918b06c4.
So we always send the frame duration (or a guess for it), and check
whether framedropping is actually enabled in the VO code. (It would
be cleaner to send framedrop as a flag, but I don't care about that
right now.)
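A minimal sketch of that split (the names and fields are assumptions, not
mpv's real ones): the playloop always supplies a duration, real or guessed,
and only the VO side consults the framedrop option.

    #include <stdbool.h>

    struct queued_frame { double pts, duration; };

    // Hypothetical VO-side check: a frame whose display window already ended
    // before it can be shown is a drop candidate -- but only if dropping is
    // actually enabled.
    bool vo_should_drop(bool framedrop_enabled, const struct queued_frame *f,
                        double earliest_display_time)
    {
        return framedrop_enabled &&
               f->pts + f->duration < earliest_display_time;
    }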
The last video frame is another case that has a separate code path,
although it's pretty similar to the one in commit 73e5aa87. Fix this
in a different way, which also takes care of the last frame case,
although without context the code becomes slightly more tricky.
As further cleanup, move the decision about framedropping itself to
the same place, so the check in vo.c becomes much simpler. The check
for the vo->driver->encode flag, which is removed completely, was
redundant too.
Fixes #1480.
Don't use vo_control() for sending VOCTRL_RESET when starting a seek.
This means vo_seek_reset() won't wait until the VO actually processed
VOCTRL_RESET. It happens asynchronously instead.
The impact of this change should be minimal, unless the VO is somehow
too busy (like blocking on vsync).
There are currently 568 pixel formats (actually fewer, but the namespace
is this big), and for each format elaborate synchronization was done to
call the format query synchronously on the VO. This is completely unnecessary, and we
can do with just a single call.
Involve detection of software renderers in the probing properly. Other
VOs could also handle probing more gracefully, and e.g. produce less
noise if an API is unavailable. (Although other than the OpenGL VOs,
only vo_wayland will.)
Now the "sw" suboption for vo_opengl[_old] is strictly speaking not
needed anymore. Doing "--vo=opengl" disables the probing logic, and will
always force it, if possible.
Includes some simplifications as well.
Commit d38bc531 is incorrect: the 50ms queue-ahead value and the flip
queue offset have different functions. The latter is about calling
flip_page in advance, so the change attempted to show video frames 50ms
in advance on all VOs.
The change was for vo_opengl_cb, but that can be handled differently.
This adds API to libmpv that lets host applications use the mpv opengl
renderer. This is a more flexible (and possibly more portable) alternative to
foreign window embedding (via --wid).
This assumes that methods like context sharing and multithreaded OpenGL
rendering are infeasible, and that a way is needed to integrate it with
an application that uses a single thread to render everything.
Add an example that does this with QtQuick/qml. The example is
relatively lazy, but still shows how relatively simple the integration
is. The FBO indirection could probably be avoided, but would require
more work (and would probably lead to worse QtQuick integration, because
it would have to ignore transformations like rotation).
Because this makes mpv directly use the host application's OpenGL
context, there is no platform specific code involved in mpv, except
for hw decoding interop.
main.qml is derived from some Qt example.
The following things are still missing:
- a way to do better video timing
- expose GL renderer options, allow changing them at runtime
- support for color equalizer controls
- support for screenshots
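A rough sketch of host-side usage, based on the opengl_cb API as it later
stabilized (the exact entry points changed slightly across libmpv versions, so
treat names and signatures as an approximation); host_get_gl_proc_address() is
an assumed helper provided by the host toolkit:

    #include <mpv/client.h>
    #include <mpv/opengl_cb.h>

    extern void *host_get_gl_proc_address(const char *name); // hypothetical

    static void *get_proc_address(void *fn_ctx, const char *name)
    {
        (void)fn_ctx;
        return host_get_gl_proc_address(name);
    }

    // Create the mpv core, select the opengl-cb VO, and initialize the GL
    // renderer with the host's GL context current on this thread.
    int host_setup_mpv_gl(mpv_handle **out_mpv, mpv_opengl_cb_context **out_gl)
    {
        mpv_handle *mpv = mpv_create();
        if (!mpv || mpv_initialize(mpv) < 0)
            return -1;
        mpv_set_option_string(mpv, "vo", "opengl-cb");
        mpv_opengl_cb_context *gl = mpv_get_sub_api(mpv, MPV_SUB_API_OPENGL_CB);
        if (!gl || mpv_opengl_cb_init_gl(gl, NULL, get_proc_address, NULL) < 0)
            return -1;
        *out_mpv = mpv;
        *out_gl = gl;
        return 0;
    }

    // Called from the host's render thread whenever it repaints, with its GL
    // context current; fbo may be 0 or an FBO the host wants mpv to draw into.
    void host_render_frame(mpv_opengl_cb_context *gl, int fbo, int w, int h)
    {
        mpv_opengl_cb_draw(gl, fbo, w, h);
    }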
Add a generic mechanism to the VO to relay "extra" events from VO to
player. Use it to notify the core of window resizes, which in turn will
be used to mark all affected properties ("window-scale" in this case) as
changed.
(I refrained from hacking this in as an internal command in input_ctx, or
from polling the state change, etc. - but in the end, maybe it would be best
actually pass the client API context directly to the places where events
can happen.)
Especially with other components (libavcodec, OSX stuff), the thread
list can get quite populated. Setting the thread name helps when
debugging.
Since this is not portable, we check the OS variants in waf configure.
old-configure just gets a special-case for glibc, since doing a full
check here would probably be a waste of effort.
When the VO was moved to its own thread, responsibility for redrawing
was given to the VO thread itself. So if there was a condition that
indicated that redrawing was required, like expose events or certain
VOCTRLs, the VO thread was redrawing itself.
This worked fine, but there are some corner cases where this works
rather badly. E.g. if I fullscreen the player and hit panscan controls
with mpv's default autorepeat rate, playback stops. This happens because
the VO redraws itself after every panscan change command. Running each
(repeated) command takes so long due to redrawing and (involuntary)
waiting on vsync, that it never leaves the input processing loop while
the key is held down. I suspect that in my case, redrawing in fullscreen
mode just gets slow enough that it takes 2 vsyncs instead of 1 on
average, and the processing time gets larger than the autorepeat delay.
Fix this by taking redraw control from the VO, and instead let the
playloop issue a "real" redraw command to the VO if needed. This
basically reverts redraw handling to what it was before moving the VO to
a thread.
CC: @mpv-player/stable
When pausing after a frame was just dropped, we're logically at the
dropped frame, and thus should redraw the dropped frame. This was
implemented, but didn't work after unpausing for the second time,
because of a minor logic bug.
vo_vdpau uses its own framedrop code, mostly for historic reasons. It
has some tricky heuristics, of which I'm not sure how they work, or if
they have any effect at all, but in any case, I want to keep this code
for now. One day it might get fully ported to the vo.c framedrop code,
or just removed.
But improve its interaction with the user-visible framedrop controls.
Make --framedrop actually enable and disable the vo_vdpau framedrop
code, and increment the number of dropped frames correctly.
The code path for other VOs should be equivalent. The vo_vdpau behavior
should, except for the improvements mentioned above, be mostly
equivalent as well. One minor change is that frames "shown" during
preemption are always counted as dropped.
Remove the statement from the manpage that vo_vdpau is the default; this
hasn't been the case for a while.
There's no reason to let the core wait until the frame is done
displaying. In practice, the core normally didn't need this additional
wakeup, and the VO was quick enough to fetch the new frame, before the
core even attempted to queue a new frame. But it wasn't entirely clean,
and the correct wakeup handling might matter in some cases.
This was kept in the codebase because it is slightly faster than --vo=opengl
on really old Intel cards (from the GMA era). Time to kill it, and let it rest.
Fixes #1061
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
If duration<0, it means the duration is unknown. Disable framedropping,
because end_time makes no sense in this case.
Also, strictly never drop the first frame.
This fixes weird behavior with the cover-art case (for the 100th time).
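A minimal sketch of the resulting rule (not the actual code, and the parameter
names are made up):

    #include <stdbool.h>

    // Frames with unknown duration are never drop candidates (end_time would
    // be meaningless), and neither is the very first frame.
    bool frame_may_be_dropped(double duration, long frames_shown, bool late)
    {
        if (duration < 0)
            return false;
        if (frames_shown == 0)
            return false;
        return late;
    }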
This could be used by VO implementations to report a recent vsync time
to the generic VO code, which in turn will use it and the display FPS
to estimate at which point in time the next vsync will happen.
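A hypothetical illustration of what the generic code can do with such a report
(not the actual implementation): extrapolate forward from the last known vsync
by whole intervals derived from the display FPS.

    #include <stdint.h>

    int64_t estimate_next_vsync_us(int64_t last_vsync_us, double display_fps,
                                   int64_t now_us)
    {
        int64_t interval = (int64_t)(1e6 / display_fps + 0.5);
        if (interval <= 0 || now_us < last_vsync_us)
            return now_us;
        // Step forward by whole vsync intervals until we pass "now".
        int64_t cycles = (now_us - last_vsync_us) / interval + 1;
        return last_vsync_us + cycles * interval;
    }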