/*
 * Copyright (C) Aaron Holtzman - Aug 1999
 *
 * Strongly modified, most parts rewritten: A'rpi/ESP-team - 2000-2001
 * (C) MPlayer developers
 *
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef MPLAYER_VIDEO_OUT_H
#define MPLAYER_VIDEO_OUT_H

#include <inttypes.h>
#include <stdbool.h>

#include "video/img_format.h"
#include "common/common.h"
#include "options/options.h"

enum {
    // VO needs to redraw
    VO_EVENT_EXPOSE = 1 << 0,
    // VO needs to update state to a new window size
    VO_EVENT_RESIZE = 1 << 1,
    // The ICC profile needs to be reloaded
    VO_EVENT_ICC_PROFILE_CHANGED = 1 << 2,
    // Some other window state changed (position, window state, fps)
    VO_EVENT_WIN_STATE = 1 << 3,
    // The ambient light conditions changed and need to be reloaded
    VO_EVENT_AMBIENT_LIGHTING_CHANGED = 1 << 4,
    // Special mechanism for making resizing with Cocoa react faster
    VO_EVENT_LIVE_RESIZING = 1 << 5,
    // For VOCTRL_GET_HIDPI_SCALE changes.
    VO_EVENT_DPI = 1 << 6,
    // Special thing for encode mode (vo_driver.initially_blocked).
    // Part of VO_EVENTS_USER to make vo_is_ready_for_frame() work properly.
    VO_EVENT_INITIAL_UNBLOCK = 1 << 7,

    // Set of events the player core may be interested in.
    VO_EVENTS_USER = VO_EVENT_RESIZE | VO_EVENT_WIN_STATE | VO_EVENT_DPI |
                     VO_EVENT_INITIAL_UNBLOCK,
};
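
/*
 * Illustrative sketch (not part of this header): the VO_EVENT_* values are bit
 * flags, so a backend typically ORs pending events into an accumulator and the
 * core later tests individual bits. The pending_events field and the handler
 * names below are assumptions made up for the example, not API defined here.
 *
 *     // in the backend, when the window system reports a resize:
 *     pending_events |= VO_EVENT_RESIZE;      // hypothetical accumulator
 *
 *     // in the core, after fetching the accumulated mask:
 *     if (events & VO_EVENT_RESIZE)
 *         handle_resize();                    // hypothetical handler
 *     if (events & VO_EVENTS_USER)
 *         notify_player_core(events);         // hypothetical handler
 */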

enum mp_voctrl {
    /* signal a device reset seek */
    VOCTRL_RESET = 1,
    /* Handle input and redraw events, called by vo_check_events() */
    VOCTRL_CHECK_EVENTS,
    /* signal a device pause */
    VOCTRL_PAUSE,
    /* start/resume playback */
    VOCTRL_RESUME,

    VOCTRL_SET_PANSCAN,
    VOCTRL_SET_EQUALIZER,

    // Triggered by any change to mp_vo_opts. This is for convenience. In
    // theory, you could install your own listener.
    VOCTRL_VO_OPTS_CHANGED,

    /* private to vo_gpu */
    VOCTRL_LOAD_HWDEC_API,

    // Redraw the image previously passed to draw_image() (basically, repeat
    // the previous draw_image call). If this is handled, the OSD should also
    // be updated and redrawn. Optional; emulated if not available.
    VOCTRL_REDRAW_FRAME,

    // Only used internally in vo_opengl_cb
    VOCTRL_PREINIT,
    VOCTRL_UNINIT,
    VOCTRL_RECONFIG,

    VOCTRL_UPDATE_WINDOW_TITLE,         // char*
    VOCTRL_UPDATE_PLAYBACK_STATE,       // struct voctrl_playback_state*

    VOCTRL_PERFORMANCE_DATA,            // struct voctrl_performance_data*

    VOCTRL_SET_CURSOR_VISIBILITY,       // bool*

    VOCTRL_KILL_SCREENSAVER,
    VOCTRL_RESTORE_SCREENSAVER,

    // Return or set window size (not-fullscreen mode only - if fullscreened,
    // these must access the not-fullscreened window size only).
    VOCTRL_GET_UNFS_WINDOW_SIZE,        // int[2] (w/h)
    VOCTRL_SET_UNFS_WINDOW_SIZE,        // int[2] (w/h)

    // char *** (NULL terminated array compatible with CONF_TYPE_STRING_LIST)
    //          names for displays the window is on
    VOCTRL_GET_DISPLAY_NAMES,

    // Retrieve window contents. (Normal screenshots use vo_get_current_frame().)
    // Deprecated for VOCTRL_SCREENSHOT with corresponding flags.
    VOCTRL_SCREENSHOT_WIN,              // struct mp_image**

    // A normal screenshot - VOs can react to this if vo_get_current_frame() is
    // not sufficient.
    VOCTRL_SCREENSHOT,                  // struct voctrl_screenshot*

    VOCTRL_UPDATE_RENDER_OPTS,

    VOCTRL_GET_ICC_PROFILE,             // bstr*
    VOCTRL_GET_AMBIENT_LUX,             // int*
    VOCTRL_GET_DISPLAY_FPS,             // double*
    VOCTRL_GET_HIDPI_SCALE,             // double*

    VOCTRL_GET_PREF_DEINT,              // int*

    /* private to vo_gpu */
    VOCTRL_EXTERNAL_RESIZE,
};

#define VO_TRUE true
#define VO_FALSE false
#define VO_ERROR -1
#define VO_NOTAVAIL -2
#define VO_NOTIMPL -3
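
/*
 * Illustrative sketch (not part of this header): a driver's control() callback
 * usually switches on the VOCTRL_* request and answers with the VO_* codes
 * above. The priv struct and the cached values are assumptions made up for the
 * example; the data pointer types follow the comments in enum mp_voctrl.
 *
 *     static int control(struct vo *vo, uint32_t request, void *data)
 *     {
 *         struct priv *p = vo->priv;              // assumed driver-private state
 *         switch (request) {
 *         case VOCTRL_SET_CURSOR_VISIBILITY:
 *             p->cursor_visible = *(bool *)data;  // bool* per the enum comment
 *             return VO_TRUE;
 *         case VOCTRL_GET_DISPLAY_FPS:
 *             *(double *)data = p->display_fps;   // hypothetical cached value
 *             return p->display_fps > 0 ? VO_TRUE : VO_NOTAVAIL;
 *         }
 *         return VO_NOTIMPL;                      // unhandled requests
 *     }
 */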

// VOCTRL_UPDATE_PLAYBACK_STATE
struct voctrl_playback_state {
    bool taskbar_progress;
    bool playing;
    bool paused;
    int percent_pos;
};

// VOCTRL_PERFORMANCE_DATA
#define VO_PERF_SAMPLE_COUNT 256

struct mp_pass_perf {
    // times are all in nanoseconds
    uint64_t last, avg, peak;
    uint64_t samples[VO_PERF_SAMPLE_COUNT];
    uint64_t count;
};

#define VO_PASS_PERF_MAX 64

struct mp_frame_perf {
    int count;
    struct mp_pass_perf perf[VO_PASS_PERF_MAX];
    // The owner of this struct does not have ownership over the names, and
    // they may change at any time - so this struct should not be stored
    // anywhere or the results reused
    char *desc[VO_PASS_PERF_MAX];
};

struct voctrl_performance_data {
    struct mp_frame_perf fresh, redraw;
};
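
/*
 * Illustrative sketch (not part of this header): a consumer that received a
 * filled voctrl_performance_data (e.g. via VOCTRL_PERFORMANCE_DATA) can walk
 * the per-pass entries like this; the printf() logging is just a placeholder.
 *
 *     static void log_passes(const struct mp_frame_perf *f)
 *     {
 *         for (int i = 0; i < f->count; i++) {
 *             const struct mp_pass_perf *p = &f->perf[i];
 *             // last/avg/peak are in nanoseconds; desc[i] is not owned by us
 *             printf("%s: last=%llu ns avg=%llu ns peak=%llu ns\n",
 *                    f->desc[i], (unsigned long long)p->last,
 *                    (unsigned long long)p->avg, (unsigned long long)p->peak);
 *         }
 *     }
 */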

struct voctrl_screenshot {
    bool scaled, subs, osd, high_bit_depth;
    struct mp_image *res;
};
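
/*
 * Illustrative sketch (not part of this header): in control(), a VO that
 * supports VOCTRL_SCREENSHOT reads the request flags and stores the result
 * image in ->res. The render_window_to_image() helper is hypothetical.
 *
 *     case VOCTRL_SCREENSHOT: {
 *         struct voctrl_screenshot *args = data;
 *         // honor the requested variant: scaled window size, with/without
 *         // subtitles/OSD, and possibly high bit depth output
 *         args->res = render_window_to_image(vo, args->scaled, args->subs,
 *                                            args->osd, args->high_bit_depth);
 *         return VO_TRUE;   // args->res may remain NULL if it failed
 *     }
 */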

enum {
    // VO does handle mp_image_params.rotate in 90 degree steps
    VO_CAP_ROTATE90 = 1 << 0,
    // VO does framedrop itself (vo_vdpau). Untimed/encoding VOs never drop.
    VO_CAP_FRAMEDROP = 1 << 1,
    // VO does not allow frames to be retained (vo_mediacodec_embed).
    VO_CAP_NORETAIN = 1 << 2,
};

#define VO_MAX_REQ_FRAMES 10

struct vo;
struct osd_state;
struct mp_image;
struct mp_image_params;

struct vo_extra {
    struct input_ctx *input_ctx;
    struct osd_state *osd;
    struct encode_lavc_context *encode_lavc_ctx;
    void (*wakeup_cb)(void *ctx);
    void *wakeup_ctx;
};

struct vo_frame {
    // If > 0, realtime when frame should be shown, in mp_time_us() units.
    // If 0, present immediately.
    int64_t pts;
    // Approximate frame duration, in us.
    int duration;
    // Estimated realtime duration between 2 vsync events.
    double vsync_interval;
    // "ideal" display time within the vsync
    double vsync_offset;
    // "ideal" frame duration (can be different from num_vsyncs*vsync_interval
    // up to a vsync) - valid for the entire frame, i.e. not changed for repeats
    double ideal_frame_duration;
    // how often the frame will be repeated (does not include OSD redraws)
    int num_vsyncs;
    // Set if the current frame is repeated from the previous. It's guaranteed
    // that the current is the same as the previous one, even if the image
    // pointer is different.
    // The repeat flag is set if exactly the same frame should be rendered
    // again (and the OSD does not need to be redrawn).
    // A repeat frame can be redrawn, in which case repeat==redraw==true, and
    // OSD should be updated.
    bool redraw, repeat;
    // The frame is not in movement - e.g. redrawing while paused.
    bool still;
    // Frames are output as fast as possible, with implied vsync blocking.
    bool display_synced;
    // Dropping the frame is allowed if the VO is behind.
    bool can_drop;
    // The current frame to be drawn.
    // Warning: When OSD should be redrawn in --force-window --idle mode, this
    //          can be NULL. The VO should draw a black background, OSD on top.
    struct mp_image *current;
    // List of future images, starting with the current one. This does not
    // care about repeated frames - it simply contains the next real frames.
    // vo_set_queue_params() sets how many future frames this should include.
    // The actual number of frames delivered to the VO can be lower.
    // frames[0] is current, frames[1] is the next frame.
    // Note that some future frames may never be sent as current frame to the
    // VO if frames are dropped.
    int num_frames;
    struct mp_image *frames[VO_MAX_REQ_FRAMES];
    // ID for frames[0] (== current). If current==NULL, the number is
    // meaningless. Otherwise, it's a unique ID for the frame. The ID for
    // a frame is guaranteed not to change (instant redraws will use the same
    // ID), and frames[n] has the ID frame_id+n; this holds even across frame
    // drops or reconfigs.
    // The ID is never 0 (unless num_frames==0). IDs are strictly monotonic.
    uint64_t frame_id;
};
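
/*
 * Illustrative sketch (not part of this header): a draw_frame() implementation
 * can use the repeat/redraw flags to avoid re-uploading identical frames. The
 * priv struct, the cached frame_id, and the upload/render helpers are
 * assumptions made up for the example.
 *
 *     static void draw_frame(struct vo *vo, struct vo_frame *frame)
 *     {
 *         struct priv *p = vo->priv;              // assumed driver-private state
 *         if (!frame->current) {
 *             render_black_with_osd(vo);          // --idle --force-window case
 *             return;
 *         }
 *         // frame->repeat with an unchanged frame_id means the exact same
 *         // content; skip the upload and just re-render (the OSD may differ
 *         // if frame->redraw is also set).
 *         if (!frame->repeat || frame->frame_id != p->last_frame_id)
 *             upload_video_frame(vo, frame->current);
 *         p->last_frame_id = frame->frame_id;
 *         render_with_osd(vo);
 *     }
 */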

// Presentation feedback. See get_vsync() for how backends should fill this
// struct.
struct vo_vsync_info {
    // mp_time_us() timestamp at which the last queued frame will likely be
    // displayed (this is in the future, unless the frame is instantly output).
    // -1 if unset or unsupported.
    // This implies the latency of the output.
    int64_t last_queue_display_time;

    // Time between 2 vsync events, in microseconds. The value should be the
    // difference of 2 timestamps sampled from the same reference point (it
    // should not be the difference between e.g. the end of scanout and the
    // start of the next one; it must be continuous).
    // -1 if unsupported.
    //  0 if supported, but no value available yet. It is assumed that the value
    //    becomes available after enough swap_buffers() calls were done.
    // >0 values are taken for granted. Very bad things will happen if it's
    //    inaccurate.
    int64_t vsync_duration;

    // Number of skipped physical vsyncs at some point in time. Typically, this
    // value is some time in the past by an offset equal to the latency.
    // This value is reset and newly sampled at every swap_buffers() call.
    // This can be used to detect delayed frames iff you try to call
    // swap_buffers() for every physical vsync.
    // -1 if unset or unsupported.
    int64_t skipped_vsyncs;
};
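
/*
 * Illustrative sketch (not part of this header): a backend that knows its
 * refresh period but has no precise presentation timestamps would fill only
 * the fields it supports and leave the rest at their preinitialized values.
 * query_display_vsync_us() is a hypothetical platform query.
 *
 *     static void get_vsync(struct vo *vo, struct vo_vsync_info *info)
 *     {
 *         int64_t dur = query_display_vsync_us(vo);   // microseconds, or 0
 *         info->vsync_duration = dur > 0 ? dur : 0;   // 0 = "no value yet"
 *         // last_queue_display_time and skipped_vsyncs stay untouched
 *     }
 */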

struct vo_driver {
    // Encoding functionality, which can be invoked via --o only.
    bool encode;

    // This requires waiting for a VO_EVENT_INITIAL_UNBLOCK event before the
    // first frame can be sent. Doing vo_reconfig*() calls is allowed though.
    // Encode mode uses this, the core uses vo_is_ready_for_frame() to
    // implicitly check for this.
    bool initially_blocked;

    // VO_CAP_* bits
    int caps;

    // Disable video timing, push frames as quickly as possible, never redraw.
    bool untimed;

    const char *name;
    const char *description;

    /*
     * returns: zero on successful initialization, non-zero on error.
     */
    int (*preinit)(struct vo *vo);

    /*
     * Whether the given image format is supported and config() will succeed.
     * format: one of IMGFMT_*
     * returns: 0 on not supported, otherwise 1
     */
    int (*query_format)(struct vo *vo, int format);

    /*
     * Initialize or reconfigure the display driver.
     * params: video parameters, like pixel format and frame size
     * returns: < 0 on error, >= 0 on success
     */
    int (*reconfig)(struct vo *vo, struct mp_image_params *params);

    /*
     * Like reconfig(), but provides the whole mp_image for which the change is
     * required. (The image doesn't have to have real data.)
     */
    int (*reconfig2)(struct vo *vo, struct mp_image *img);

    /*
     * Control interface
     */
    int (*control)(struct vo *vo, uint32_t request, void *data);

    /*
     * lavc callback for direct rendering
     *
     * Optional. To make implementation easier, the callback is always run on
     * the VO thread. The returned mp_image's destructor callback is also called
     * on the VO thread, even if it's actually unref'ed from another thread.
     *
     * It is guaranteed that the last reference to an image is destroyed before
     * ->uninit is called (except that libmpv screenshots can hold the
     * reference longer).
     *
     * The allocated image, or a part of it, can be passed to draw_frame(). The
     * point of this mechanism is that the decoder directly renders to GPU
     * staging memory, to avoid a memcpy on frame upload. But this is not a
     * guarantee. A filter could change the data pointers or return a newly
     * allocated image. It's even possible that only 1 plane uses the buffer
     * allocated by the get_image function. The VO has to check for this.
     *
     * stride_align is always a value >=1 that is a power of 2. The stride
     * values of the returned image must be divisible by this value.
     *
     * Currently, the returned image must have exactly 1 AVBufferRef set, for
     * internal implementation simplicity.
     *
     * returns: an allocated, refcounted image; if NULL is returned, the caller
     *          will silently fall back to a default allocator
     */
    struct mp_image *(*get_image)(struct vo *vo, int imgfmt, int w, int h,
                                  int stride_align);

    /*
     * Thread-safe variant of get_image. Set at most one of these callbacks.
     * This excludes _all_ synchronization magic. The only guarantee is that
     * vo_driver.uninit is not called before this function returns.
     */
    struct mp_image *(*get_image_ts)(struct vo *vo, int imgfmt, int w, int h,
                                     int stride_align);

    /*
     * Render the given frame to the VO's backbuffer. This operation will be
     * followed by a draw_osd and a flip_page[_timed] call.
     * mpi belongs to the VO; the VO must free it eventually.
     *
     * This also should draw the OSD.
     *
     * Deprecated for draw_frame. A VO should set only one of these callbacks.
     */
    void (*draw_image)(struct vo *vo, struct mp_image *mpi);

    /* Render the given frame. Note that this is also called when repeating
     * or redrawing frames.
     *
     * frame is freed by the caller, but the callee can still modify the
     * contained data and references.
     */
    void (*draw_frame)(struct vo *vo, struct vo_frame *frame);

    /*
     * Blit/Flip buffer to the screen. Must be called after each frame!
     */
    void (*flip_page)(struct vo *vo);

    /*
     * Return presentation feedback. The implementation should not touch fields
     * it doesn't support; the info fields are preinitialized to neutral values.
     * Usually called once after flip_page(), but can be called any time.
     * The values returned by this are always relative to the last flip_page()
     * call.
     */
    void (*get_vsync)(struct vo *vo, struct vo_vsync_info *info);

    /* These optional callbacks can be provided if the GUI framework used by
     * the VO requires entering a message loop for receiving events and does
     * not call vo_wakeup() from a separate thread when there are new events.
     *
     * wait_events() will wait for new events, until the timeout expires, or the
     * function is interrupted. wakeup() is used to possibly interrupt the
     * event loop (wakeup() itself must be thread-safe, and not call any other
     * VO functions; it's the only vo_driver function with this requirement).
     * wakeup() should behave like a binary semaphore; if wait_events() is not
     * being called while wakeup() is, the next wait_events() call should exit
     * immediately.
     */
    void (*wakeup)(struct vo *vo);
    void (*wait_events)(struct vo *vo, int64_t until_time_us);
|
video: move display and timing to a separate thread
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames can be queued to the VO. But
this is not done, due to problems with OSD and other certain features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As consequence, the VO has a queue size of 1. The existing video queue
is just needed to compute frame duration, and will be moved out in the
next commit.
2014-08-12 21:02:08 +00:00
|
|
|
|
2009-09-17 14:52:09 +00:00
|
|
|
/*
|
|
|
|
* Closes driver. Should restore the original state of the system.
|
|
|
|
*/
|
|
|
|
void (*uninit)(struct vo *vo);
|
2012-06-25 20:12:03 +00:00
|
|
|
|
2012-08-06 15:51:53 +00:00
|
|
|
// Size of private struct for automatic allocation (0 doesn't allocate)
|
|
|
|
int priv_size;
|
2012-06-25 20:12:03 +00:00
|
|
|
|
2012-08-06 15:52:17 +00:00
|
|
|
// If not NULL, it's copied into the newly allocated private struct.
|
|
|
|
const void *priv_defaults;
    // List of options to parse into priv struct (requires priv_size to be set)
    // This will register them as global options (with options_prefix), and
    // copy the current value at VO creation time to the priv struct.
    // (An illustrative sketch follows the struct definition below.)
    const struct m_option *options;

    // All options in the above array are prefixed with this string. (It's just
    // for convenience and makes no difference in semantics.)
    const char *options_prefix;

    // Registers global options that go to a separate options struct.
    const struct m_sub_options *global_opts;
};
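
/*
 * Illustrative sketch (not part of this header, and not mpv's actual
 * implementation): one way a backend could satisfy the wakeup()/wait_events()
 * contract documented in struct vo_driver above, using a mutex, a condition
 * variable and a pending-wakeup flag as the "binary semaphore". All names are
 * hypothetical, and the deadline handling is simplified by assuming
 * until_time_us is an absolute CLOCK_REALTIME timestamp in microseconds.
 */
#if 0
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

struct example_waiter {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool need_wakeup; // set by wakeup(), consumed by wait_events()
};

// Thread-safe; may be called from any thread, as required for wakeup().
static void example_wakeup(struct example_waiter *w)
{
    pthread_mutex_lock(&w->lock);
    w->need_wakeup = true;
    pthread_cond_signal(&w->cond);
    pthread_mutex_unlock(&w->lock);
}

// Returns when a wakeup is pending or the deadline has passed. A wakeup that
// arrived while nobody was waiting makes the next call return immediately.
static void example_wait_events(struct example_waiter *w, int64_t until_time_us)
{
    struct timespec deadline = {
        .tv_sec = until_time_us / 1000000,
        .tv_nsec = (until_time_us % 1000000) * 1000,
    };
    pthread_mutex_lock(&w->lock);
    while (!w->need_wakeup) {
        if (pthread_cond_timedwait(&w->cond, &w->lock, &deadline))
            break; // timeout (or error): stop waiting
    }
    w->need_wakeup = false;
    pthread_mutex_unlock(&w->lock);
}
#endif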
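
/*
 * Illustrative sketch (not part of this header): how a hypothetical driver
 * could fill in the priv/option fields described above. The driver name, the
 * priv struct and the option values are invented for the example; a real
 * driver lists m_option entries (from options/m_option.h) in .options, which
 * then show up as global options with the given prefix.
 */
#if 0
struct priv {
    int swap_interval;
};

const struct vo_driver video_out_example = {
    .name = "example",
    // ... the usual callbacks (preinit, reconfig, flip_page, uninit, ...) ...
    .priv_size = sizeof(struct priv),   // vo->priv is allocated automatically
    .priv_defaults = &(const struct priv){
        .swap_interval = 1,             // copied into the fresh allocation
    },
    .options_prefix = "example",        // options appear as --example-...
    // .options = (const struct m_option[]){ /* option entries */ {0} },
};
#endif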

struct vo {
    const struct vo_driver *driver;
    struct mp_log *log; // Using e.g. "[vo/vdpau]" as prefix
    void *priv;
    struct mpv_global *global;
    struct vo_x11_state *x11;
    struct vo_w32_state *w32;
    struct vo_cocoa_state *cocoa;
    struct vo_wayland_state *wl;
    struct vo_android_state *android;
    struct mp_hwdec_devices *hwdec_devs;
    struct input_ctx *input_ctx;
    struct osd_state *osd;
    struct encode_lavc_context *encode_lavc_ctx;
    struct vo_internal *in;
    struct vo_extra extra;

    // --- The following fields are generally only changed during initialization.

    bool probing;

    // --- The following fields are only changed with vo_reconfig(), and can
    // be accessed unsynchronized (read-only).

    int config_ok; // Last config call was successful?
    struct mp_image_params *params; // Configured parameters (as in vo_reconfig)

    // --- The following fields can be accessed only by the VO thread, or from
    // anywhere _if_ the VO thread is suspended (use vo->dispatch).

    struct m_config_cache *opts_cache; // cache for ->opts (see sketch below)
    struct mp_vo_opts *opts;
    struct m_config_cache *gl_opts_cache;
    struct m_config_cache *eq_opts_cache;

    bool want_redraw; // redraw as soon as possible

    // current window state
    int dwidth;
    int dheight;
    float monitor_par;
};
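
/*
 * Illustrative sketch (not part of this header) for the opts_cache/opts pair
 * above: on the VO thread, a driver can poll the cache to pick up option
 * changes made by other threads to the global mp_vo_opts group. The helper
 * name is hypothetical; m_config_cache_update() is assumed to be the usual
 * accessor from mpv's m_config machinery.
 */
#if 0
static void example_poll_options(struct vo *vo)
{
    if (m_config_cache_update(vo->opts_cache)) {
        // vo->opts now reflects the current option values; react to whatever
        // changed, e.g. recompute window geometry or panscan rectangles.
    }
}
#endif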

struct mpv_global;
struct vo *init_best_video_out(struct mpv_global *global, struct vo_extra *ex);
int vo_reconfig(struct vo *vo, struct mp_image_params *p);
int vo_reconfig2(struct vo *vo, struct mp_image *img);

int vo_control(struct vo *vo, int request, void *data);
void vo_control_async(struct vo *vo, int request, void *data);
bool vo_is_ready_for_frame(struct vo *vo, int64_t next_pts);
void vo_queue_frame(struct vo *vo, struct vo_frame *frame);
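
/*
 * Illustrative sketch (not part of this header): the intended caller-side
 * pattern for the two functions above. Since the VO queues only one frame at
 * a time, the caller first checks readiness for the frame's target time and
 * only then hands the frame over. The helper name is hypothetical.
 */
#if 0
static void example_feed_video(struct vo *vo, struct vo_frame *frame,
                               int64_t next_pts)
{
    if (vo_is_ready_for_frame(vo, next_pts))
        vo_queue_frame(vo, frame); // hand the frame to the VO thread
    // else: try again after the VO wakes the caller up
}
#endif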
void vo_wait_frame(struct vo *vo);
bool vo_still_displaying(struct vo *vo);
bool vo_has_frame(struct vo *vo);
void vo_redraw(struct vo *vo);
bool vo_want_redraw(struct vo *vo);
void vo_seek_reset(struct vo *vo);
void vo_destroy(struct vo *vo);
void vo_set_paused(struct vo *vo, bool paused);
int64_t vo_get_drop_count(struct vo *vo);
void vo_increment_drop_count(struct vo *vo, int64_t n);
int64_t vo_get_delayed_count(struct vo *vo);
void vo_query_formats(struct vo *vo, uint8_t *list);
void vo_event(struct vo *vo, int event);
int vo_query_and_reset_events(struct vo *vo, int events);
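
/*
 * Illustrative sketch (not part of this header): how the two sides of the
 * event functions above are typically used. A backend reports that something
 * happened with vo_event(); the player later collects and clears the
 * accumulated flags with vo_query_and_reset_events(). VO_EVENT_RESIZE is
 * assumed to be one of the event flags defined earlier in this header; the
 * helper names are hypothetical.
 */
#if 0
// Backend side (e.g. inside a windowing callback): record the event.
static void example_on_window_resize(struct vo *vo)
{
    vo_event(vo, VO_EVENT_RESIZE);
}

// Consumer side: fetch-and-clear the events it is interested in.
static void example_handle_events(struct vo *vo)
{
    int events = vo_query_and_reset_events(vo, VO_EVENT_RESIZE);
    if (events & VO_EVENT_RESIZE) {
        // react to the new window size
    }
}
#endif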
struct mp_image *vo_get_current_frame(struct vo *vo);
void vo_set_queue_params(struct vo *vo, int64_t offset_us, int num_req_frames);
int vo_get_num_req_frames(struct vo *vo);
int64_t vo_get_vsync_interval(struct vo *vo);
double vo_get_estimated_vsync_interval(struct vo *vo);
double vo_get_estimated_vsync_jitter(struct vo *vo);
double vo_get_display_fps(struct vo *vo);
double vo_get_delay(struct vo *vo);
void vo_discard_timing_info(struct vo *vo);
struct vo_frame *vo_get_current_vo_frame(struct vo *vo);
struct mp_image *vo_get_image(struct vo *vo, int imgfmt, int w, int h,
                              int stride_align);
void vo_wakeup(struct vo *vo);
void vo_wait_default(struct vo *vo, int64_t until_time);

struct mp_keymap {
    int from;
    int to;
};
int lookup_keymap_table(const struct mp_keymap *map, int key);
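
/*
 * Illustrative sketch (not part of this header): how a backend might use
 * mp_keymap/lookup_keymap_table() to translate platform keycodes into mpv
 * keycodes. The table contents are invented; real tables map toolkit keysyms
 * to MP_KEY_* values (assumed to come from input/keycodes.h) and end with a
 * zero entry, which makes unmapped keys return 0.
 */
#if 0
static const struct mp_keymap example_keymap[] = {
    {0xff0d /* toolkit "Return" */, MP_KEY_ENTER},
    {0xff1b /* toolkit "Escape" */, MP_KEY_ESC},
    {0, 0}, // terminator
};

static int example_translate_key(int toolkit_key)
{
    return lookup_keymap_table(example_keymap, toolkit_key);
}
#endif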

struct mp_osd_res;
void vo_get_src_dst_rects(struct vo *vo, struct mp_rect *out_src,
                          struct mp_rect *out_dst, struct mp_osd_res *out_osd);
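
/*
 * Illustrative sketch (not part of this header): typical use of
 * vo_get_src_dst_rects() in a driver's rendering path. It yields the source
 * crop within the video, the destination rectangle within the window, and the
 * OSD coordinate space for the current window size and scaling options. The
 * helper name is hypothetical.
 */
#if 0
static void example_reinit_viewport(struct vo *vo)
{
    struct mp_rect src, dst;
    struct mp_osd_res osd;
    vo_get_src_dst_rects(vo, &src, &dst, &osd);
    // e.g. map src -> dst with the hardware scaler, and lay out the OSD
    // using the dimensions in osd
}
#endif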
struct vo_frame *vo_frame_ref(struct vo_frame *frame);
#endif /* MPLAYER_VIDEO_OUT_H */