vo_gpu: allow boosting dark scenes when tone mapping

In theory, our "eye adaptation" algorithm works both ways: it can
darken bright scenes as well as brighten dark ones. But I've always
prevented the latter with a hard clamp, since I wanted to avoid blowing
up dark scenes so they end up looking funny (and full of noise).

But allowing a tiny bit of over-exposure might be a good thing. I won't
change the default just yet (better to let users test), but a moderate
value of 1.2 might be better than the current 1.0 limit. Needs testing,
especially on dark scenes.
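
To try this out, the option added below can be set on the command line
or in mpv.conf (here, "video.mkv" stands in for any HDR clip, and 1.2
is just the moderate value suggested above, not a new default):

    mpv --vo=gpu --tone-mapping-max-boost=1.2 video.mkv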
Author:    Niklas Haas
Date:      2019-01-02 03:03:38 +01:00
Committer: Jan Ekström
Commit:    12e58ff8a6
Parent:    6179dcbb79

5 changed files with 13 additions and 1 deletion

DOCS/interface-changes.rst

@@ -52,6 +52,7 @@ Interface changes
       The strength now linearly blends between the linear and nonlinear tone
       mapped versions of a color.
     - add --hdr-peak-decay-rate and --hdr-scene-threshold-low/high
+    - add --tone-mapping-max-boost
  --- mpv 0.29.0 ---
     - drop --opensles-sample-rate, as --audio-samplerate should be used if desired
     - drop deprecated --videotoolbox-format, --ff-aid, --ff-vid, --ff-sid,

DOCS/man/options.rst

@@ -5235,6 +5235,14 @@ The following video options are currently all specific to ``--vo=gpu`` and
     linear
         Specifies the scale factor to use while stretching. Defaults to 1.0.
 
+``--tone-mapping-max-boost=<1.0..10.0>``
+    Upper limit for how much the tone mapping algorithm is allowed to boost
+    the average brightness by over-exposing the image. The default value of 1.0
+    allows no additional brightness boost. A value of 2.0 would allow
+    over-exposing by a factor of 2, and so on. Raising this setting can help
+    reveal details that would otherwise be hidden in dark scenes, but raising
+    it too high will make dark scenes appear unnaturally bright.
+
 ``--hdr-compute-peak=<auto|yes|no>``
     Compute the HDR peak and frame average brightness per-frame instead of
     relying on tagged metadata. These values are averaged over local regions as

video/out/gpu/video.c

@@ -316,6 +316,7 @@ static const struct gl_video_opts gl_video_opts_def = {
     .tone_map = {
         .curve = TONE_MAPPING_HABLE,
         .curve_param = NAN,
+        .max_boost = 1.0,
         .decay_rate = 100.0,
         .scene_threshold_low = 50,
         .scene_threshold_high = 200,
@@ -376,6 +377,7 @@ const struct m_sub_options gl_video_conf = {
         OPT_INTRANGE("hdr-scene-threshold-high",
                      tone_map.scene_threshold_high, 0, 0, 10000),
         OPT_FLOAT("tone-mapping-param", tone_map.curve_param, 0),
+        OPT_FLOATRANGE("tone-mapping-max-boost", tone_map.max_boost, 0, 1.0, 10.0),
         OPT_FLOAT("tone-mapping-desaturate", tone_map.desat, 0),
         OPT_FLOATRANGE("tone-mapping-desaturate-exponent",
                        tone_map.desat_exp, 0, 0.0, 20.0),

video/out/gpu/video.h

@@ -98,6 +98,7 @@ enum tone_mapping {
 struct gl_tone_map_opts {
     int curve;
     float curve_param;
+    float max_boost;
     int compute_peak;
     float decay_rate;
     int scene_threshold_low;

video/out/gpu/video_shaders.c

@@ -652,7 +652,7 @@ static void pass_tone_map(struct gl_shader_cache *sc,
     }
 
     GLSL(float sig_orig = sig[sig_idx];)
-    GLSLF("float slope = min(1.0, %f / sig_avg);\n", sdr_avg);
+    GLSLF("float slope = min(%f, %f / sig_avg);\n", opts->max_boost, sdr_avg);
     GLSL(sig *= slope;)
     GLSL(sig_peak *= slope;)
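
For intuition, the new clamp reduces to min(max_boost, sdr_avg /
sig_avg). A minimal C sketch of the same arithmetic (boost_slope and
the numeric values are illustrative, not taken from mpv's sources):

    #include <stdio.h>

    /* The boost needed to raise the measured scene average (sig_avg) to
     * the target average (sdr_avg), capped at the configured max_boost. */
    static float boost_slope(float sdr_avg, float sig_avg, float max_boost)
    {
        float slope = sdr_avg / sig_avg;
        return slope < max_boost ? slope : max_boost;
    }

    int main(void)
    {
        /* A dark scene whose average sits at half the target: */
        printf("%.2f\n", boost_slope(0.25f, 0.125f, 1.0f));  /* 1.00: old hard clamp */
        printf("%.2f\n", boost_slope(0.25f, 0.125f, 1.2f));  /* 1.20: mild boost */
        printf("%.2f\n", boost_slope(0.25f, 0.125f, 10.0f)); /* 2.00: full boost */
        return 0;
    }

For bright scenes (sig_avg > sdr_avg) the ratio stays below 1.0, so the
cap never engages and darkening behaves exactly as before.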