// mpv/video/out/gpu/video.h


/*
* This file is part of mpv.
*
* mpv is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* mpv is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with mpv. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef MP_GL_VIDEO_H
#define MP_GL_VIDEO_H
#include <stdbool.h>
#include "options/m_option.h"
#include "sub/osd.h"
#include "utils.h"
#include "lcms.h"
#include "shader_cache.h"
#include "video/csputils.h"
#include "video/out/filter_kernels.h"
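// User-facing scaler configuration. scaler_fun names a filter function
// (kernel or window) together with its tunable parameters; scaler_config
// pairs a kernel with an optional window and the remaining per-scaler
// options. The same shape is used for every scaler unit (scale, dscale,
// cscale, tscale), so downscaling is a scaler in its own right rather
// than a special case of scale.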
struct scaler_fun {
    char *name;
    float params[2];
    float blur;
    float taper;
};
struct scaler_config {
    struct scaler_fun kernel;
    struct scaler_fun window;
    float radius;
    float antiring;
    float cutoff;
    float clamp;
};
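// A minimal sketch of filling in a scaler_config by hand; the "lanczos"
// name and the values are illustrative only (real configurations come
// from the option parser behind gl_video_conf):
//
//     struct scaler_config conf = {
//         .kernel   = { .name = "lanczos" },
//         .radius   = 3.0f, // 0.5 is the smallest radius that makes sense
//         .antiring = 0.0f,
//     };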
struct scaler {
    int index;
    struct scaler_config conf;
    double scale_factor;
    bool initialized;
    struct filter_kernel *kernel;
    struct ra_tex *lut;
    struct ra_tex *sep_fbo;
    bool insufficient;
    int lut_size;

    // kernel points here
    struct filter_kernel kernel_storage;
};
enum scaler_unit {
    SCALER_SCALE, // luma/video
    SCALER_DSCALE, // luma-video downscaling
    SCALER_CSCALE, // chroma upscaling
    SCALER_TSCALE, // temporal scaling (interpolation)
    SCALER_COUNT
};
enum dither_algo {
    DITHER_NONE = 0,
    DITHER_FRUIT,
    DITHER_ORDERED,
    DITHER_ERROR_DIFFUSION,
};
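// These appear to back the values of the --dither option
// (no, fruit, ordered, error-diffusion).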
enum alpha_mode {
    ALPHA_NO = 0,
    ALPHA_YES,
    ALPHA_BLEND,
    ALPHA_BLEND_TILES,
};
enum blend_subs_mode {
    BLEND_SUBS_NO = 0,
    BLEND_SUBS_YES,
    BLEND_SUBS_VIDEO,
};
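// These appear to back the --blend-subtitles option (no, yes, video);
// BLEND_SUBS_VIDEO blends subtitles onto the video at its native
// resolution, before scaling, rather than onto the final output.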
enum tone_mapping {
    TONE_MAPPING_AUTO,
    TONE_MAPPING_CLIP,
    TONE_MAPPING_MOBIUS,
    TONE_MAPPING_REINHARD,
    TONE_MAPPING_HABLE,
    TONE_MAPPING_GAMMA,
    TONE_MAPPING_LINEAR,
    TONE_MAPPING_SPLINE,
    TONE_MAPPING_BT_2390,
    TONE_MAPPING_BT_2446A,
    TONE_MAPPING_ST2094_40,
    TONE_MAPPING_ST2094_10,
};
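// These appear to match the --tone-mapping option values (auto, clip,
// mobius, reinhard, hable, gamma, linear, spline, bt.2390, bt.2446a,
// st2094-40, st2094-10).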
enum tone_mapping_mode {
    TONE_MAP_MODE_AUTO,
    TONE_MAP_MODE_RGB,
    TONE_MAP_MODE_MAX,
    TONE_MAP_MODE_HYBRID,
    TONE_MAP_MODE_LUMA,
};
enum gamut_mode {
    GAMUT_AUTO,
    GAMUT_CLIP,
    GAMUT_WARN,
    GAMUT_DESATURATE,
    GAMUT_DARKEN,
};
struct gl_tone_map_opts {
    int curve;
    float curve_param;
    float max_boost;
    bool inverse;
    float crosstalk;
    int mode;
    int compute_peak;
    float decay_rate;
    float scene_threshold_low;
    float scene_threshold_high;
    int gamut_mode;
    bool visualize;
};
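// A minimal sketch of configuring tone mapping via designated
// initializers; the values below are illustrative, not mpv's defaults:
//
//     struct gl_tone_map_opts tm = {
//         .curve        = TONE_MAPPING_BT_2390,
//         .compute_peak = 1,      // illustrative: enable peak computation
//         .decay_rate   = 100.0f, // illustrative smoothing value
//         .gamut_mode   = GAMUT_CLIP,
//     };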
struct gl_video_opts {
    int dumb_mode;
    struct scaler_config scaler[4]; // indexed by enum scaler_unit (SCALER_COUNT == 4)
    int scaler_lut_size;
    float gamma;
    bool gamma_auto;
    int target_prim;
    int target_trc;
    int target_peak;
    struct gl_tone_map_opts tone_map;
    bool correct_downscaling;
    bool linear_downscaling;
    bool linear_upscaling;
    bool sigmoid_upscaling;
    float sigmoid_center;
    float sigmoid_slope;
    bool scaler_resizes_only;
    bool pbo;
    int dither_depth;
    int dither_algo;
    int dither_size;
    bool temporal_dither;
    int temporal_dither_period;
    char *error_diffusion;
    char *fbo_format;
    int alpha_mode;
    bool use_rectangle;
    struct m_color background;
    bool interpolation;
    float interpolation_threshold;
    int blend_subs;
    char **user_shaders;
    char **user_shader_opts;
    bool deband;
    struct deband_opts *deband_opts;
    float unsharp;
    int tex_pad_x, tex_pad_y;
    struct mp_icc_opts *icc_opts;
    bool shader_cache;
    int early_flush;
    char *shader_cache_dir;
    char *hwdec_interop;
};
extern const struct m_sub_options gl_video_conf;
struct gl_video;
struct vo_frame;
struct voctrl_screenshot;
enum {
    RENDER_FRAME_SUBS = 1 << 0,
    RENDER_FRAME_OSD = 1 << 1,
    RENDER_FRAME_VF_SUBS = 1 << 2,
    RENDER_FRAME_DEF = RENDER_FRAME_SUBS | RENDER_FRAME_OSD,
};
struct gl_video *gl_video_init(struct ra *ra, struct mp_log *log,
                               struct mpv_global *g);
void gl_video_uninit(struct gl_video *p);
void gl_video_set_osd_source(struct gl_video *p, struct osd_state *osd);
bool gl_video_check_format(struct gl_video *p, int mp_format);
void gl_video_config(struct gl_video *p, struct mp_image_params *params);
void gl_video_render_frame(struct gl_video *p, struct vo_frame *frame,
                           struct ra_fbo fbo, int flags);
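// A minimal usage sketch for the render entry point, assuming p, frame
// and fbo were obtained elsewhere; RENDER_FRAME_DEF draws subtitles and
// OSD along with the video:
//
//     gl_video_render_frame(p, frame, fbo, RENDER_FRAME_DEF);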
void gl_video_resize(struct gl_video *p,
                     struct mp_rect *src, struct mp_rect *dst,
                     struct mp_osd_res *osd);
void gl_video_set_fb_depth(struct gl_video *p, int fb_depth);
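// Fills *out with per-pass performance data (as exposed through the
// vo-passes property): timings are collected for each render pass,
// including the blit and OSD passes, on a nanosecond scale, with a full
// list of samples per pass.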
void gl_video_perfdata(struct gl_video *p, struct voctrl_performance_data *out);
void gl_video_set_clear_color(struct gl_video *p, struct m_color color);
void gl_video_set_osd_pts(struct gl_video *p, double pts);
bool gl_video_check_osd_change(struct gl_video *p, struct mp_osd_res *osd,
                               double pts);
void gl_video_screenshot(struct gl_video *p, struct vo_frame *frame,
                         struct voctrl_screenshot *args);
float gl_video_scale_ambient_lux(float lmin, float lmax,
                                 float rmin, float rmax, float lux);
void gl_video_set_ambient_lux(struct gl_video *p, int lux);
void gl_video_set_icc_profile(struct gl_video *p, bstr icc_data);
bool gl_video_icc_auto_enabled(struct gl_video *p);
bool gl_video_gamma_auto_enabled(struct gl_video *p);
struct mp_colorspace gl_video_get_output_colorspace(struct gl_video *p);
void gl_video_reset(struct gl_video *p);
bool gl_video_showing_interpolated_frame(struct gl_video *p);
struct mp_hwdec_devices;
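// Loads hwdec interop drivers. With load_all_by_default set, all
// available interop APIs are loaded (lazily), leaving autoprobing
// entirely to the decoder. The order in ra_hwdec_drivers[] still
// decides ties when several drivers accept the same image formats;
// --gpu-hwdec-interop can force a specific one, mostly for debugging.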
void gl_video_init_hwdecs(struct gl_video *p, struct mp_hwdec_devices *devs,
                          bool load_all_by_default);
struct hwdec_imgfmt_request;
void gl_video_load_hwdecs_for_img_fmt(struct gl_video *p, struct mp_hwdec_devices *devs,
                                      struct hwdec_imgfmt_request *params);
struct vo;
void gl_video_configure_queue(struct gl_video *p, struct vo *vo);
struct mp_image *gl_video_get_image(struct gl_video *p, int imgfmt, int w, int h,
                                    int stride_align, int flags);
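// A rough sketch of the expected call flow from a VO (assumptions drawn
// from how vo_gpu drives this API; error handling omitted, and ra, log,
// global, img_params, src, dst, osd_res, frame and fbo come from the
// caller):
//
//     struct gl_video *p = gl_video_init(ra, log, global);
//     gl_video_config(p, &img_params);          // (re)configure for a source
//     gl_video_resize(p, &src, &dst, &osd_res); // on window/area changes
//     gl_video_render_frame(p, frame, fbo, RENDER_FRAME_DEF);
//     gl_video_uninit(p);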
#endif