Currently, the wayland backend needs extra work to avoid drawing more
often than the wayland frame callback allows. (This is not ideal, but
will be fixed at a later time.)
Unify this with the start_frame callback added for cocoa. Some details
change for the better. For example, if a frame is dropped, and a redraw
is done afterwards, the correct frame is actually redrawn, instead of
whatever was in the textures from before the dropped frame.
Increase the default queue size. This helps with "missed" frames due to
the asynchronous nature of the API. All the other VOs are synchronous,
so if rendering and displaying takes a while, the common code in vo.c
will be blocked until it can continue. But with opengl-cb, vo.c might
immediately push the next ready frame, which causes the current frame
to be dropped _if_ it wasn't rendered yet.
One could fix this by making vo.c wait a while (until the API user calls
the render function, which pulls the frame). But setting the default
queue size to 2 seems much simpler: instead of dropping the frame, it
will be pushed to the API user once the next renderer call finishes.
(This is still a bit strange, and will hopefully be cleaned up when
video scheduling is redone, but for now this appears to deliver
relatively good results.)
Unlike other VOs, this rendered the OSD even while no VO was created
(because the renderer lives as long as the API user wants). Change this,
and refactor the code so that the OSD object is accessible only while
the VO is created.
(There is a short time where the OSD can still be accessed even after VO
destruction - this is not a race condition, though it's inelegant and
unfortunately unavoidable.)
This replaces the old smoothmotion code by a more flexible tscale
option, which essentially allows any scaler to be used for interpolating
frames. (The actual "smoothmotion" scaler, which behaves identically to
the old code, does not currently exist, but it will be re-added in a
later commit.)
The only odd thing is that larger filters require a larger queue size
offset, which is currently set dynamically, since a fixed offset
introduces some issues when pausing or framestepping. Filters with a
lower radius are not affected as much, so the behavior is identical to
the old smoothmotion if the
smoothmotion interpolator is used.
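For illustration, the interpolation filter would then be selected like
any other scaler suboption (assuming vo_opengl's usual key=value
suboption syntax; "triangle" is just one of the existing scalers):

    mpv --vo=opengl:tscale=triangle video.mkv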
The basic idea is to use dynamically generated shaders instead of a
single monolithic file + a ton of ifdefs. Instead of having to set up
every aspect of it separately (like compiling shaders, setting uniforms,
performing the actual rendering steps, the GLSL parts), we generate the
GLSL on the fly, and perform the rendering at the same time. The GLSL
is regenerated every frame, but the actual compiled OpenGL-level shaders
are cached, which makes it fast again. Almost all logic can be in a
single place.
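A highly simplified sketch of the caching idea (the names and the linear
lookup are illustrative, not mpv's actual internals):

    #include <string.h>
    #include <GL/gl.h>

    struct sc_entry {
        char *glsl_source;   /* the generated fragment shader text */
        GLuint gl_program;   /* the compiled and linked GL program */
    };

    /* The GLSL is rebuilt every frame, but if the resulting source text
       matches a cached entry, compilation is skipped entirely. */
    static GLuint lookup_program(struct sc_entry *cache, int num_entries,
                                 const char *glsl_source)
    {
        for (int i = 0; i < num_entries; i++) {
            if (strcmp(cache[i].glsl_source, glsl_source) == 0)
                return cache[i].gl_program; /* cache hit */
        }
        return 0; /* cache miss: compile, link, and add a new entry */
    }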
The new code is significantly more flexible, which allows us to improve
code clarity and performance, and to add more features easily.
This commit is incomplete. It drops almost all previous code, and
re-adds only the most important things (some of them actually buggy).
The next commit will complete it - it's separate to preserve authorship
information.
mpv_opengl_cb_render() is supposed to clear the screen with the
background color if there's no video... I think.
It didn't do this, because although uninit() requested gl_video_config()
to be called, this didn't happen, because this function checks whether
the VO is set - and it's unset after uninit() releases the lock.
Also call the user wakeup callback in this situation, so the user
actually redraws immediately.
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
At the time screenshot support was added, images weren't refcounted yet,
so screenshots required specialized implementations in the VOs. But now
we can handle these things much more simply. Also see commit 5bb24980.
If there are VOs in the future which can't do this (e.g. they need to
write to the image passed to vo_driver->draw_image), this still could be
disabled on a per-VO basis etc., so we lose no potential performance
advantages.
Use different VOCTRLs for "window" and normal screenshot modes. The
normal one will probably be removed, and replaced by generic code in
vo.c, and this commit is preparation for this. (Doing it the other way
around would be slightly simpler, but I haven't decided yet about the
second one, and touching every VO is needed anyway in order to remove
the unneeded crap. E.g. has_osd has been unused for a long time.)
SmoothMotion is a way to time and blend frames made popular by MadVR. Its
intended behaviour is to remove stuttering caused by mismatches between the
display refresh rate and the video fps, while preserving the video's original
artistic qualities (no soap opera effect). It's supposed to make 24fps video
playback on 60hz monitors as close as possible to a 24hz monitor.
Instead of drawing a frame once its pts has passed the vsync time, we
redraw at the display refresh rate, and if we detect that the vsync falls
between two frames, we interpolate them (depending on their position
relative to the vsync).
We actually interpolate as few frames as possible to avoid a blur effect as
much as possible. For example, if we were to play back a 1fps video on a 60hz
monitor, we would blend on at most 1 vsync for each frame (while the other 59
vsyncs would be rendered as is).
Frame interpolation is always done before scaling and in linear light when
possible (an ICC profile is used, or :srgb is used).
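The blend weight itself is just the vsync's relative position between
the two source frames; a minimal sketch of the idea (the function name
is illustrative):

    /* Given two frames with pts0 < vsync_time <= pts1, returns the
       interpolation coefficient: 0.0 shows frame 0 unchanged, 1.0 shows
       frame 1 unchanged, values in between blend the two. */
    static double inter_coeff(double pts0, double pts1, double vsync_time)
    {
        double c = (vsync_time - pts0) / (pts1 - pts0);
        return c < 0 ? 0 : c > 1 ? 1 : c;
    }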
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
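The rough shape of the new struct, as a sketch (the field names here are
illustrative; see hwdec.h for the real definition):

    struct mp_hwdec_ctx {
        int type;    /* identifies the backend (vdpau, vaapi, vda, ...) */
        void *priv;  /* backend-specific context, owned by the backend */
    };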
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This
mechanism was supposed to reduce conversions if many filters are used
(with many incompatible pixel formats), and also to prefer the VO's
natively supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but the way vf_lavfi and libavfilter work also mostly breaks
the way the old MPlayer mechanism worked. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
Support for taking screenshots when doing hardware decoding needs to be
added later.
This takes the last image queued to the VO, which is logically the image
the player thinks is on screen (so e.g. subtitles will match).
forget_frames() does not clear this, because seeking does not remove the
current image from the screen (until the next one is drawn).
The previous implementation of opengl-cb kept only the latest flipped
frame. This could cause massive frame drops, because rendering is done
asynchronously and only the latest frame can be rendered.
This commit introduces a frame queue and related options to opengl-cb.
frame-queue-size: the maximum size of the frame queue (1-100, default: 1)
frame-drop-mode: behavior when the frame queue is full (pop, clear, default: pop)
The frame queue holds delayed frames, and drops frames as follows when it
overflows (see the sketch below):
'pop' mode: drops the oldest frames until the new frame fits.
'clear' mode: drops all frames in the queue and clears it.
With the default options (frame-queue-size=1:frame-drop-mode=pop),
opengl-cb effectively behaves in the same way as the previous
implementation.
For frame-queue-size > 1, opengl-cb tries to call update() without
waiting for the next flip_page() in order to consume queued frames.
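A sketch of the two drop modes (illustrative types and helpers, not the
actual implementation):

    struct mp_image;                            /* opaque video frame */
    static void drop_frame(struct mp_image *f); /* unrefs the frame (assumed) */

    enum drop_mode { MODE_POP, MODE_CLEAR };

    struct frame_queue {
        struct mp_image *frames[100];   /* 100 = maximum queue size */
        int len;
        int max_size;                   /* frame-queue-size */
        enum drop_mode mode;            /* frame-drop-mode */
    };

    /* Removes and returns the oldest queued frame. */
    static struct mp_image *queue_pop_oldest(struct frame_queue *q)
    {
        struct mp_image *f = q->frames[0];
        for (int i = 1; i < q->len; i++)
            q->frames[i - 1] = q->frames[i];
        q->len -= 1;
        return f;
    }

    static void queue_push(struct frame_queue *q, struct mp_image *f)
    {
        if (q->len >= q->max_size) {
            /* 'pop' keeps the newest frames; 'clear' empties the queue. */
            int keep = q->mode == MODE_POP ? q->max_size - 1 : 0;
            while (q->len > keep)
                drop_frame(queue_pop_oldest(q));
        }
        q->frames[q->len++] = f;
    }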
Signed-off-by: wm4 <wm4@nowhere>
This was always supposed to work.
Just add the option declaration. Normally I'm not a fan of duplicating
such things, but in this case it's (still) harmless.
This makes vo_opengl_cb respond to controls like "gamma" and
"brightness". The commit includes an awkward refactor for vo_opengl to
make it easier for vo_opengl_cb.
One problem is a logical race condition. The set of supported controls
depends on the pixelformat, which in turn is set by reconfig(). But the
actual reconfig() call (on the renderer) happens asynchronously on the
renderer thread. At the time it happens, the player most likely already
tried to set some controls for command line options (see init_vo() in
video.c). So setting these command line options will fail most of the
time, though it could randomly succeed. This can't be fixed directly,
because the player can't wait on the renderer thread, because the
renderer thread might already wait on the player.
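For illustration, a host application sets such a control through the
normal client API (whether the call succeeds depends on the timing
described above):

    mpv_set_property_string(mpv, "gamma", "10");
    mpv_set_property_string(mpv, "brightness", "-5");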
Until now, calling mpv_opengl_cb_uninit_gl() at a "bad moment" could
make the whole thing explode. The API user was asked to avoid such
situations by calling it only in "good moments". But this was probably a
bit too subtle and could easily be overlooked.
Integrate the approach the qml example uses directly into the
implementation. If the OpenGL context is to be uninitialized, forcefully
disable video, and block until this is done.
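A sketch of the resulting tear-down sequence on the host's render thread
(make_current/destroy_context stand in for the host toolkit's own calls):

    make_current(gl_context);          /* the GL context must be current */
    mpv_opengl_cb_uninit_gl(mpv_gl);   /* blocks until video is disabled */
    destroy_context(gl_context);       /* now safe to destroy the context */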
GL_ARB_debug_output provides a logging callback, which can be used to
diagnose problems etc. in case the driver supports it. It's enabled only
if the vo_opengl "debug" suboption is set.
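The general pattern for using the extension looks like this (a sketch of
standard GL_ARB_debug_output usage, not mpv's exact code; in practice the
function pointer is loaded via the platform's GetProcAddress mechanism):

    #include <stdio.h>
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    static void GLAPIENTRY on_gl_message(GLenum source, GLenum type,
                                         GLuint id, GLenum severity,
                                         GLsizei length,
                                         const GLchar *message,
                                         const void *userParam)
    {
        fprintf(stderr, "GL: %s\n", message); /* forward to the logger */
    }

    static void enable_gl_debug(void)
    {
        glDebugMessageCallbackARB(on_gl_message, NULL);
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
    }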
Now it accepts the same renderer arguments as vo_opengl. This also
disables debug checks by default, and reverts the background color
override. Both can now be controlled by the host application.
Commit d38bc531 is incorrect: the 50ms queue-ahead value and the flip
queue offset have different functions. The latter is about calling
flip_page in advance, so the change attempted to show video frames 50ms
in advance on all VOs.
The change was for vo_opengl_cb, but that can be handled differently.
I think that's expected; mpv shouldn't draw anything while no video is
active. This doesn't blend transparently, though.
Also document the vo_opengl_cb thing.
This adds API to libmpv that lets host applications use the mpv opengl
renderer. This is a more flexible (and possibly more portable)
alternative to foreign window embedding (via --wid).
This assumes that methods like context sharing and multithreaded OpenGL
rendering are infeasible, and that a way is needed to integrate it with
an application that uses a single thread to render everything.
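A minimal sketch of the setup sequence from the host application's side
(error handling omitted; get_proc_address is supplied by the host
toolkit):

    #include <mpv/client.h>
    #include <mpv/opengl_cb.h>

    /* Resolves OpenGL functions; provided by the host toolkit (assumed). */
    static void *get_proc_address(void *ctx, const char *name);

    static mpv_opengl_cb_context *setup_render(mpv_handle *mpv)
    {
        mpv_opengl_cb_context *mpv_gl =
            mpv_get_sub_api(mpv, MPV_SUB_API_OPENGL_CB);
        /* Must be called with the host's GL context current: */
        mpv_opengl_cb_init_gl(mpv_gl, NULL, get_proc_address, NULL);
        mpv_set_option_string(mpv, "vo", "opengl-cb");
        return mpv_gl;
    }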
Add an example that does this with QtQuick/qml. The example is
relatively lazy, but still shows how simple the integration is. The
FBO indirection could probably be avoided, but would require
more work (and would probably lead to worse QtQuick integration, because
it would have to ignore transformations like rotation).
Because this makes mpv directly use the host application's OpenGL
context, there is no platform specific code involved in mpv, except
for hw decoding interop.
main.qml is derived from some Qt example.
The following things are still missing:
- a way to do better video timing
- expose GL renderer options, allow changing them at runtime
- support for color equalizer controls
- support for screenshots