/*
* Copyright (C) Aaron Holtzman - Aug 1999
*
* Strongly modified, most parts rewritten: A'rpi/ESP-team - 2000-2001
* (C) MPlayer developers
*
* This file is part of mpv.
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef MPLAYER_VIDEO_OUT_H
#define MPLAYER_VIDEO_OUT_H
#include <inttypes.h>
#include <stdbool.h>
#include "video/img_format.h"
#include "common/common.h"
#include "options/options.h"
enum {
// VO needs to redraw
VO_EVENT_EXPOSE = 1 << 0,
// VO needs to update state to a new window size
VO_EVENT_RESIZE = 1 << 1,
// The ICC profile needs to be reloaded
VO_EVENT_ICC_PROFILE_CHANGED = 1 << 2,
// Some other window state changed (position, window state, fps)
VO_EVENT_WIN_STATE = 1 << 3,
// The ambient light conditions changed and need to be reloaded
VO_EVENT_AMBIENT_LIGHTING_CHANGED = 1 << 4,
// Special mechanism for making resizing with Cocoa react faster
VO_EVENT_LIVE_RESIZING = 1 << 5,
// For VOCTRL_GET_HIDPI_SCALE changes.
VO_EVENT_DPI = 1 << 6,
// Special thing for encode mode (vo_driver.initially_blocked).
// Part of VO_EVENTS_USER to make vo_is_ready_for_frame() work properly.
VO_EVENT_INITIAL_UNBLOCK = 1 << 7,
// Set of events the player core may be interested in.
VO_EVENTS_USER = VO_EVENT_RESIZE | VO_EVENT_WIN_STATE | VO_EVENT_DPI |
VO_EVENT_INITIAL_UNBLOCK,
};
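/*
 * Illustrative sketch (not part of this header): a windowing backend would
 * typically set one of the event bits above when the window system reports a
 * change. vo_event() is declared elsewhere in vo.h; the surrounding function
 * and its caller are hypothetical.
 *
 *   static void on_configure_notify(struct vo *vo, int new_w, int new_h)
 *   {
 *       // Store the new size where the VO thread can pick it up, then tell
 *       // the core that the window was resized and its state changed.
 *       vo_event(vo, VO_EVENT_RESIZE | VO_EVENT_WIN_STATE);
 *   }
 */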
enum mp_voctrl {
/* signal a device reset (e.g. on a seek) */
VOCTRL_RESET = 1,
/* Handle input and redraw events, called by vo_check_events() */
VOCTRL_CHECK_EVENTS,
/* signal a device pause */
VOCTRL_PAUSE,
/* start/resume playback */
VOCTRL_RESUME,
VOCTRL_SET_PANSCAN,
VOCTRL_SET_EQUALIZER,
// Triggered by any change to mp_vo_opts. This is for convenience. In theory,
// you could install your own listener.
VOCTRL_VO_OPTS_CHANGED,
/* private to vo_gpu */
VOCTRL_LOAD_HWDEC_API,
// Redraw the image previously passed to draw_image() (basically, repeat
// the previous draw_image call). If this is handled, the OSD should also
// be updated and redrawn. Optional; emulated if not available.
VOCTRL_REDRAW_FRAME,
// Only used internally in vo_opengl_cb
VOCTRL_PREINIT,
VOCTRL_UNINIT,
VOCTRL_RECONFIG,
VOCTRL_UPDATE_WINDOW_TITLE, // char*
VOCTRL_UPDATE_PLAYBACK_STATE, // struct voctrl_playback_state*
VOCTRL_PERFORMANCE_DATA, // struct voctrl_performance_data*
VOCTRL_SET_CURSOR_VISIBILITY, // bool*
VOCTRL_KILL_SCREENSAVER,
VOCTRL_RESTORE_SCREENSAVER,
// Return or set window size (not-fullscreen mode only - if fullscreened,
// these must access the not-fullscreened window size only).
VOCTRL_GET_UNFS_WINDOW_SIZE, // int[2] (w/h)
VOCTRL_SET_UNFS_WINDOW_SIZE, // int[2] (w/h)
// char *** (NULL terminated array compatible with CONF_TYPE_STRING_LIST)
// names for displays the window is on
VOCTRL_GET_DISPLAY_NAMES,
// Retrieve window contents. (Normal screenshots use vo_get_current_frame().)
// Deprecated for VOCTRL_SCREENSHOT with corresponding flags.
VOCTRL_SCREENSHOT_WIN, // struct mp_image**
// A normal screenshot - VOs can react to this if vo_get_current_frame() is
// not sufficient.
VOCTRL_SCREENSHOT, // struct voctrl_screenshot*
VOCTRL_UPDATE_RENDER_OPTS,
VOCTRL_GET_ICC_PROFILE, // bstr*
VOCTRL_GET_AMBIENT_LUX, // int*
VOCTRL_GET_DISPLAY_FPS, // double*
VOCTRL_GET_HIDPI_SCALE, // double*
VOCTRL_GET_PREF_DEINT, // int*
/* private to vo_gpu */
VOCTRL_EXTERNAL_RESIZE,
};
#define VO_TRUE true
#define VO_FALSE false
#define VO_ERROR -1
#define VO_NOTAVAIL -2
#define VO_NOTIMPL -3
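/*
 * Sketch of how a VO's control() callback (see struct vo_driver below)
 * typically dispatches these requests and return codes. The driver and its
 * helper functions are hypothetical:
 *
 *   static int control(struct vo *vo, uint32_t request, void *data)
 *   {
 *       switch (request) {
 *       case VOCTRL_SET_CURSOR_VISIBILITY:
 *           set_cursor_visible(vo, *(bool *)data);
 *           return VO_TRUE;
 *       case VOCTRL_GET_DISPLAY_FPS:
 *           *(double *)data = query_display_fps(vo);
 *           return VO_TRUE;
 *       }
 *       return VO_NOTIMPL;  // unhandled requests are not errors
 *   }
 */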
// VOCTRL_UPDATE_PLAYBACK_STATE
struct voctrl_playback_state {
bool taskbar_progress;
bool playing;
bool paused;
int percent_pos;
};
// VOCTRL_PERFORMANCE_DATA
#define VO_PERF_SAMPLE_COUNT 256
struct mp_pass_perf {
// times are all in nanoseconds
uint64_t last, avg, peak;
uint64_t samples[VO_PERF_SAMPLE_COUNT];
uint64_t count;
};
#define VO_PASS_PERF_MAX 64
struct mp_frame_perf {
int count;
struct mp_pass_perf perf[VO_PASS_PERF_MAX];
// The owner of this struct does not own the name strings, and they may
// change at any time - so this struct should not be stored anywhere, nor
// its contents reused.
char *desc[VO_PASS_PERF_MAX];
};
struct voctrl_performance_data {
struct mp_frame_perf fresh, redraw;
};
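/*
 * Sketch of a caller consuming this data after a successful
 * VOCTRL_PERFORMANCE_DATA request. vo_control() is declared near the end of
 * this header; everything else here is illustrative only:
 *
 *   struct voctrl_performance_data perf = {0};
 *   if (vo_control(vo, VOCTRL_PERFORMANCE_DATA, &perf) == VO_TRUE) {
 *       for (int i = 0; i < perf.fresh.count; i++) {
 *           // Times are nanoseconds; desc[i] must be copied if it is kept.
 *           printf("%s: avg %.3f ms\n", perf.fresh.desc[i],
 *                  perf.fresh.perf[i].avg / 1e6);
 *       }
 *   }
 */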
struct voctrl_screenshot {
bool scaled, subs, osd, high_bit_depth;
struct mp_image *res;
};
enum {
// VO does handle mp_image_params.rotate in 90 degree steps
VO_CAP_ROTATE90 = 1 << 0,
// VO does framedrop itself (vo_vdpau). Untimed/encoding VOs never drop.
VO_CAP_FRAMEDROP = 1 << 1,
// VO does not allow frames to be retained (vo_mediacodec_embed).
VO_CAP_NORETAIN = 1 << 2,
};
#define VO_MAX_REQ_FRAMES 10
struct vo;
struct osd_state;
struct mp_image;
struct mp_image_params;
struct vo_extra {
struct input_ctx *input_ctx;
struct osd_state *osd;
struct encode_lavc_context *encode_lavc_ctx;
void (*wakeup_cb)(void *ctx);
void *wakeup_ctx;
};
struct vo_frame {
// If > 0, realtime at which the frame should be shown, in mp_time_us() units.
// If 0, present immediately.
int64_t pts;
// Approximate frame duration, in us.
int duration;
// Estimated distance between 2 vsync events, in realtime units.
double vsync_interval;
// "ideal" display time within the vsync
double vsync_offset;
// "ideal" frame duration (can be different from num_vsyncs*vsync_interval
// up to a vsync) - valid for the entire frame, i.e. not changed for repeats
double ideal_frame_duration;
// how often the frame will be repeated (does not include OSD redraws)
int num_vsyncs;
// Set if the current frame is repeated from the previous. It's guaranteed
// that the current is the same as the previous one, even if the image
// pointer is different.
// The repeat flag is set if exactly the same frame should be rendered
// again (and the OSD does not need to be redrawn).
// A repeat frame can be redrawn, in which case repeat==redraw==true, and
// OSD should be updated.
bool redraw, repeat;
// The frame is not in movement - e.g. redrawing while paused.
bool still;
// Frames are output as fast as possible, with implied vsync blocking.
bool display_synced;
// Dropping the frame is allowed if the VO is behind.
bool can_drop;
// The current frame to be drawn.
// Warning: When OSD should be redrawn in --force-window --idle mode, this
// can be NULL. The VO should draw a black background, OSD on top.
struct mp_image *current;
// List of future images, starting with the current one. This does not
// care about repeated frames - it simply contains the next real frames.
// vo_set_queue_params() sets how many future frames this should include.
// The actual number of frames delivered to the VO can be lower.
// frames[0] is current, frames[1] is the next frame.
// Note that some future frames may never be sent as current frame to the
// VO if frames are dropped.
int num_frames;
struct mp_image *frames[VO_MAX_REQ_FRAMES];
// ID for frames[0] (== current). If current==NULL, the number is
// meaningless. Otherwise, it's a unique ID for the frame. The ID for
// a frame is guaranteed not to change (instant redraws will use the same
// ID). frames[n] has the ID frame_id+n, and this relation is preserved
// even across frame drops or reconfigs.
// The ID is never 0 (unless num_frames==0). IDs are strictly monotonic.
uint64_t frame_id;
};
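/*
 * Illustrative draw_frame() skeleton (see struct vo_driver below) showing how
 * these fields are commonly used. struct priv and the render helpers are
 * hypothetical:
 *
 *   static void draw_frame(struct vo *vo, struct vo_frame *frame)
 *   {
 *       struct priv *p = vo->priv;
 *       if (frame->current) {
 *           // frame_id lets the VO skip re-uploading the same image when the
 *           // frame is merely repeated or redrawn.
 *           if (p->uploaded_frame_id != frame->frame_id) {
 *               upload_image(p, frame->current);
 *               p->uploaded_frame_id = frame->frame_id;
 *           }
 *           render_video(p);
 *       } else {
 *           render_black_background(p);  // --force-window --idle case
 *       }
 *       render_osd(p);
 *   }
 */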
// Presentation feedback. See get_vsync() for how backends should fill this
// struct.
struct vo_vsync_info {
// mp_time_us() timestamp at which the last queued frame will likely be
// displayed (this is in the future, unless the frame is instantly output).
// -1 if unset or unsupported.
// This implies the latency of the output.
int64_t last_queue_display_time;
// Time between 2 vsync events in microseconds. The difference should be
// computed from 2 times sampled from the same reference point (it should not
// be the difference between e.g. the end of scanout and the start of the next
// one; it must be continuous).
// -1 if unsupported.
// 0 if supported, but no value available yet. It is assumed that the value
// becomes available after enough swap_buffers() calls were done.
// >0 values are taken for granted. Very bad things will happen if it's
// inaccurate.
int64_t vsync_duration;
// Number of skipped physical vsyncs at some point in time. Typically, this
// value refers to a point in the past by an offset equal to the latency.
// This value is reset and newly sampled at every swap_buffers() call.
// This can be used to detect delayed frames iff you try to call
// swap_buffers() for every physical vsync.
// -1 if unset or unsupported.
int64_t skipped_vsyncs;
};
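/*
 * Sketch of a get_vsync() implementation (see struct vo_driver below) for a
 * backend that receives presentation feedback from its windowing system. The
 * priv fields are hypothetical:
 *
 *   static void get_vsync(struct vo *vo, struct vo_vsync_info *info)
 *   {
 *       struct priv *p = vo->priv;
 *       if (!p->have_feedback)
 *           return;  // leave the preinitialized neutral values untouched
 *       info->vsync_duration = p->vsync_duration_us;
 *       info->skipped_vsyncs = p->skipped_vsyncs;
 *       info->last_queue_display_time = p->last_display_time_us;
 *   }
 */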
struct vo_driver {
// Encoding functionality, which can be invoked via --o only.
bool encode;
// This requires waiting for a VO_EVENT_INITIAL_UNBLOCK event before the
// first frame can be sent. Doing vo_reconfig*() calls is allowed though.
// Encode mode uses this; the core uses vo_is_ready_for_frame() to
// check for it implicitly.
bool initially_blocked;
// VO_CAP_* bits
int caps;
// Disable video timing, push frames as quickly as possible, never redraw.
bool untimed;
const char *name;
const char *description;
/*
* returns: zero on successful initialization, non-zero on error.
*/
int (*preinit)(struct vo *vo);
/*
* Whether the given image format is supported and reconfig() will succeed.
* format: one of IMGFMT_*
* returns: 0 on not supported, otherwise 1
*/
int (*query_format)(struct vo *vo, int format);
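/*
 * A typical implementation just whitelists the formats the VO can display,
 * e.g. (illustrative only):
 *
 *   static int query_format(struct vo *vo, int format)
 *   {
 *       return format == IMGFMT_420P || format == IMGFMT_NV12;
 *   }
 */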
/*
* Initialize or reconfigure the display driver.
* params: video parameters, like pixel format and frame size
* returns: < 0 on error, >= 0 on success
*/
int (*reconfig)(struct vo *vo, struct mp_image_params *params);
/*
* Like reconfig(), but provides the whole mp_image for which the change is
* required. (The image doesn't have to have real data.)
*/
int (*reconfig2)(struct vo *vo, struct mp_image *img);
/*
* Control interface
*/
int (*control)(struct vo *vo, uint32_t request, void *data);
/*
* lavc callback for direct rendering
*
* Optional. To make implementation easier, the callback is always run on
* the VO thread. The returned mp_image's destructor callback is also called
* on the VO thread, even if it's actually unref'ed from another thread.
*
* It is guaranteed that the last reference to an image is destroyed before
* ->uninit is called (except it's not - libmpv screenshots can hold the
* reference longer, fuck).
*
* The allocated image - or a part of it, can be passed to draw_frame(). The
* point of this mechanism is that the decoder directly renders to GPU
* staging memory, to avoid a memcpy on frame upload. But this is not a
* guarantee. A filter could change the data pointers or return a newly
* allocated image. It's even possible that only 1 plane uses the buffer
* allocated by the get_image function. The VO has to check for this.
*
* stride_align is always a value >=1 that is a power of 2. The stride
* values of the returned image must be divisible by this value.
*
* Currently, the returned image must have exactly 1 AVBufferRef set, for
* internal implementation simplicity.
*
* returns: an allocated, refcounted image; if NULL is returned, the caller
* will silently fallback to a default allocator
*/
struct mp_image *(*get_image)(struct vo *vo, int imgfmt, int w, int h,
int stride_align);
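/*
 * Minimal get_image() sketch. alloc_staging_image() stands in for a
 * backend-specific allocator that returns GPU-visible memory wrapped in a
 * refcounted mp_image whose strides are multiples of stride_align; it is not
 * a real mpv function.
 *
 *   static struct mp_image *get_image(struct vo *vo, int imgfmt, int w, int h,
 *                                     int stride_align)
 *   {
 *       struct priv *p = vo->priv;
 *       // Returning NULL is always safe: the caller silently falls back to
 *       // its default allocator.
 *       return alloc_staging_image(p, imgfmt, w, h, stride_align);
 *   }
 */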
/*
* Thread-safe variant of get_image. Set at most one of these callbacks.
* This excludes _all_ synchronization magic. The only guarantee is that
* vo_driver.uninit is not called before this function returns.
*/
struct mp_image *(*get_image_ts)(struct vo *vo, int imgfmt, int w, int h,
int stride_align);
/*
* Render the given frame to the VO's backbuffer. This operation will be
* followed by a draw_osd and a flip_page[_timed] call.
* mpi belongs to the VO; the VO must free it eventually.
*
* This also should draw the OSD.
*
* Deprecated for draw_frame. A VO should have only either callback set.
*/
void (*draw_image)(struct vo *vo, struct mp_image *mpi);
/* Render the given frame. Note that this is also called when repeating
* or redrawing frames.
*
* frame is freed by the caller, but the callee can still modify the
* contained data and references.
*/
void (*draw_frame)(struct vo *vo, struct vo_frame *frame);
/*
* Blit/Flip buffer to the screen. Must be called after each frame!
*/
void (*flip_page)(struct vo *vo);
/*
* Return presentation feedback. The implementation should not touch fields
* it doesn't support; the info fields are preinitialized to neutral values.
* Usually called once after flip_page(), but can be called any time.
* The values returned by this are always relative to the last flip_page()
* call.
*/
void (*get_vsync)(struct vo *vo, struct vo_vsync_info *info);
/* These optional callbacks can be provided if the GUI framework used by
* the VO requires entering a message loop for receiving events and does
* not call vo_wakeup() from a separate thread when there are new events.
*
* wait_events() will wait for new events, until the timeout expires, or the
* function is interrupted. wakeup() is used to possibly interrupt the
* event loop (wakeup() itself must be thread-safe, and not call any other
* VO functions; it's the only vo_driver function with this requirement).
* wakeup() should behave like a binary semaphore; if wait_events() is not
* being called while wakeup() is, the next wait_events() call should exit
* immediately.
*/
void (*wakeup)(struct vo *vo);
void (*wait_events)(struct vo *vo, int64_t until_time_us);
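/*
 * Sketch of a wakeup()/wait_events() pair for a backend that polls a display
 * connection fd. The self-pipe and priv fields are hypothetical; mp_time_us()
 * is mpv's monotonic clock (osdep/timer.h) and MPCLAMP comes from
 * common/common.h:
 *
 *   static void wakeup(struct vo *vo)
 *   {
 *       struct priv *p = vo->priv;
 *       // Must be thread-safe; a write to a non-blocking pipe is.
 *       (void)write(p->wakeup_pipe[1], &(char){0}, 1);
 *   }
 *
 *   static void wait_events(struct vo *vo, int64_t until_time_us)
 *   {
 *       struct priv *p = vo->priv;
 *       int64_t wait_us = until_time_us - mp_time_us();
 *       struct pollfd fds[2] = {
 *           { .fd = p->display_fd,     .events = POLLIN },
 *           { .fd = p->wakeup_pipe[0], .events = POLLIN },
 *       };
 *       poll(fds, 2, MPCLAMP(wait_us / 1000, 0, 10000));
 *       if (fds[1].revents & POLLIN)
 *           drain_pipe(p->wakeup_pipe[0]);  // reset the "binary semaphore"
 *       handle_display_events(p);
 *   }
 */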
/*
* Closes driver. Should restore the original state of the system.
*/
void (*uninit)(struct vo *vo);
// Size of private struct for automatic allocation (0 doesn't allocate)
int priv_size;
// If not NULL, it's copied into the newly allocated private struct.
const void *priv_defaults;
// List of options to parse into priv struct (requires priv_size to be set)
// This will register them as global options (with options_prefix), and
// copy the current value at VO creation time to the priv struct.
const struct m_option *options;
// All options in the above array are prefixed with this string. (It's just
// for convenience and makes no difference in semantics.)
const char *options_prefix;
// Registers global options that go to a separate options struct.
const struct m_sub_options *global_opts;
};
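/*
 * Hypothetical driver definition showing how the fields above fit together;
 * "example" is not a real mpv VO, and the callbacks are assumed to be static
 * functions defined in the driver's source file:
 *
 *   struct priv {
 *       int swap_interval;
 *   };
 *
 *   const struct vo_driver video_out_example = {
 *       .description = "Example video output",
 *       .name = "example",
 *       .preinit = preinit,
 *       .query_format = query_format,
 *       .reconfig = reconfig,
 *       .control = control,
 *       .draw_frame = draw_frame,
 *       .flip_page = flip_page,
 *       .uninit = uninit,
 *       .priv_size = sizeof(struct priv),
 *       .priv_defaults = &(const struct priv){ .swap_interval = 1 },
 *       // .options / .options_prefix could expose "--example-..." options
 *       // declared with the m_option macros matching this mpv version.
 *   };
 */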
struct vo {
const struct vo_driver *driver;
struct mp_log *log; // Using e.g. "[vo/vdpau]" as prefix
void *priv;
struct mpv_global *global;
struct vo_x11_state *x11;
struct vo_w32_state *w32;
struct vo_cocoa_state *cocoa;
struct vo_wayland_state *wl;
struct vo_android_state *android;
struct mp_hwdec_devices *hwdec_devs;
struct input_ctx *input_ctx;
struct osd_state *osd;
struct encode_lavc_context *encode_lavc_ctx;
struct vo_internal *in;
struct vo_extra extra;
// --- The following fields are generally only changed during initialization.
bool probing;
// --- The following fields are only changed with vo_reconfig(), and can
// be accessed unsynchronized (read-only).
int config_ok; // Last config call was successful?
struct mp_image_params *params; // Configured parameters (as in vo_reconfig)
// --- The following fields can be accessed only by the VO thread, or from
// anywhere _if_ the VO thread is suspended (use vo->dispatch).
struct m_config_cache *opts_cache; // cache for ->opts
struct mp_vo_opts *opts;
struct m_config_cache *gl_opts_cache;
struct m_config_cache *eq_opts_cache;
bool want_redraw; // redraw as soon as possible
// current window state
int dwidth;
int dheight;
float monitor_par;
};
struct mpv_global;
struct vo *init_best_video_out(struct mpv_global *global, struct vo_extra *ex);
int vo_reconfig(struct vo *vo, struct mp_image_params *p);
int vo_reconfig2(struct vo *vo, struct mp_image *img);
int vo_control(struct vo *vo, int request, void *data);
void vo_control_async(struct vo *vo, int request, void *data);
bool vo_is_ready_for_frame(struct vo *vo, int64_t next_pts);
void vo_queue_frame(struct vo *vo, struct vo_frame *frame);
void vo_wait_frame(struct vo *vo);
bool vo_still_displaying(struct vo *vo);
bool vo_has_frame(struct vo *vo);
void vo_redraw(struct vo *vo);
bool vo_want_redraw(struct vo *vo);
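/*
 * Illustrative sketch (not part of the original header): how a caller might
 * poll for a forced redraw, e.g. after OSD or pause-screen changes. This
 * pairing is an assumption for the example; the player core may drive the
 * redraw differently.
 *
 *     if (vo_want_redraw(vo))
 *         vo_redraw(vo);
 */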
void vo_seek_reset(struct vo *vo);
void vo_destroy(struct vo *vo);
void vo_set_paused(struct vo *vo, bool paused);
int64_t vo_get_drop_count(struct vo *vo);
void vo_increment_drop_count(struct vo *vo, int64_t n);
int64_t vo_get_delayed_count(struct vo *vo);
void vo_query_formats(struct vo *vo, uint8_t *list);
void vo_event(struct vo *vo, int event);
int vo_query_and_reset_events(struct vo *vo, int events);
struct mp_image *vo_get_current_frame(struct vo *vo);
void vo_set_queue_params(struct vo *vo, int64_t offset_us, int num_req_frames);
int vo_get_num_req_frames(struct vo *vo);
int64_t vo_get_vsync_interval(struct vo *vo);
double vo_get_estimated_vsync_interval(struct vo *vo);
double vo_get_estimated_vsync_jitter(struct vo *vo);
double vo_get_display_fps(struct vo *vo);
double vo_get_delay(struct vo *vo);
void vo_discard_timing_info(struct vo *vo);
struct vo_frame *vo_get_current_vo_frame(struct vo *vo);
struct mp_image *vo_get_image(struct vo *vo, int imgfmt, int w, int h,
                              int stride_align);
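/*
 * Illustrative sketch (not part of the original header): requesting a
 * writable image from the VO (e.g. for direct rendering), with a fallback
 * to a normal allocation if the VO provides none. The NULL-return fallback
 * and the mp_image_alloc() call are assumptions for this example.
 *
 *     struct mp_image *img = vo_get_image(vo, imgfmt, w, h, stride_align);
 *     if (!img)
 *         img = mp_image_alloc(imgfmt, w, h); // VO offered no buffer
 */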
void vo_wakeup(struct vo *vo);
void vo_wait_default(struct vo *vo, int64_t until_time);
struct mp_keymap {
    int from;
    int to;
};
int lookup_keymap_table(const struct mp_keymap *map, int key);
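/*
 * Illustrative sketch (not part of the original header): a hypothetical
 * keymap table. The convention assumed here is that the table is terminated
 * by an entry with .from == 0, whose .to value is returned when no mapping
 * matches. The XK_* symbols stand in for backend-specific keycodes and are
 * only an example.
 *
 *     static const struct mp_keymap example_keys[] = {
 *         {XK_Left,  MP_KEY_LEFT},
 *         {XK_Right, MP_KEY_RIGHT},
 *         {0, 0},
 *     };
 *     int mpkey = lookup_keymap_table(example_keys, sym);
 */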
struct mp_osd_res;
void vo_get_src_dst_rects(struct vo *vo, struct mp_rect *out_src,
                          struct mp_rect *out_dst, struct mp_osd_res *out_osd);
struct vo_frame *vo_frame_ref(struct vo_frame *frame);
#endif /* MPLAYER_VIDEO_OUT_H */