When adding things like brightness or gamma, the video obviously needs a
redraw if paused. This happened to work in the normal case because the
OSD notification triggered a redraw, but if you use no-osd the picture
won't change. Fix this by adding another option flag, UPDATE_VIDEO, and
simply signalling that we want a redraw. This gets handled along with the
normal OSD redraw check in the playloop, so something like "no-osd add
gamma 1" actually works.
This commit replaces all uses of sig_peak and maps all HDR metadata.
Among the notable changes, the mixed usage of maxCLL and max_luma is
resolved: max_luma is now always used, which makes vo_gpu and vo_gpu_next
behave the same way.
In debug mode the macro causes an assertion failure.
In release mode it works differently and tells the compiler that it can
assume the codepath will never execute. For this reason I was conservative
in replacing it, e.g. in mpv-internal code that exhausts all valid values
of an enum or when a condition is clear from directly preceding code.
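A minimal sketch of such a macro, assuming a GCC/Clang-style compiler; the
name and exact definition are illustrative, not mpv's:

```c
#include <assert.h>

// Debug builds: fail loudly with an assertion. Release builds (NDEBUG):
// tell the compiler the path can never execute, so it may optimize on that.
#ifndef NDEBUG
#define MP_UNREACHABLE() assert(!"unreachable")
#else
#define MP_UNREACHABLE() __builtin_unreachable()
#endif

// Example of the kind of use that was considered safe to convert: a switch
// that exhausts all valid values of an enum.
enum mode { MODE_A, MODE_B };

static int mode_to_int(enum mode m)
{
    switch (m) {
    case MODE_A: return 1;
    case MODE_B: return 2;
    }
    MP_UNREACHABLE();   // every valid value was handled above
}
```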
This is commonly used by UHD/HDR sources, and mpv has hilariously ignored
it up until now, blindly mapping it to MP_CHROMA_AUTO without even so much
as a warning message.
It would be justified to add all the other chroma locations as well, but
I'm lazy and just wanted to quickly fix this bug.
This standard says we should use a value of 203 nits instead of 100 for
mapping between SDR and HDR.
Code copied from https://code.videolan.org/videolan/libplacebo/-/commit/9d9164773
In particular, that commit also includes a test case to make sure the
implementation doesn't break roundtrips.
Relevant to #4248 and #7357.
This provides a way to convert YUV in fixed point (or pseudo-fixed
point; probably best to say "uint") to float, or rather coefficients
to perform such a conversion. Things like colorspace conversion are out
of scope, so this is simple and strictly per-component.
This is somewhat similar to mp_get_csp_mul(), but includes proper color
range expansion and correct chroma centering. The old function even
seems to have a bug, and assumes something about shifted range for full
range YCbCr, which is wrong.
vo_gpu should probably use the new function eventually.
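Roughly, the coefficients could look like the following sketch (hypothetical
names; the handling of full-range chroma in particular is simplified compared
to whatever the new function does):

```c
#include <stdbool.h>

// Per-component uint->float coefficients: float_value = uint_value * mul + off.
// Limited range follows the usual 16..235 (luma) / 16..240 (chroma) convention,
// scaled by 2^(bits-8) for higher bit depths.
struct uint_to_float { double mul, off; };

static struct uint_to_float get_uint_to_float(int bits, bool limited, bool chroma)
{
    struct uint_to_float c;
    double center = 1 << (bits - 1);                 // neutral chroma code value
    if (limited) {
        double s = 1 << (bits - 8);
        double range = (chroma ? 224 : 219) * s;
        c.mul = 1.0 / range;
        c.off = -(chroma ? center : 16 * s) / range; // chroma lands in [-0.5, 0.5]
    } else {
        double range = (1 << bits) - 1;
        c.mul = 1.0 / range;
        c.off = chroma ? -center / range : 0;        // approximately centered on 0
    }
    return c;
}
```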
This was incorrect at least because the colorspace matrix attempted to
center chroma at (conceptually) 0.5, instead of 0. Also, it tried to
apply the fixed point shift logic for component sizes > 8 bit.
There is no float yuv format in mpv/ffmpeg yet, but see next commit,
which enables zimg to output it. I'm assuming zimg defines this format
such that luma is in range [0,1] and chroma in range [-0.5,0.5], with
the levels flag being ignored. This is consistent with H264/5 Annex E (I
think...), and it sort of seems to look right, so that's it.
This is mostly for testing. It adds passing the metadata through the video
chain. The metadata can be manipulated with vf_format. Support for zimg
alpha conversion (if built with zimg after it gained alpha support) is
implemented, and premultiplied input is supported in vo_gpu.
Some things still seem to be buggy.
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
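A self-contained sketch of the new declaration shape, using stand-in types
and a hypothetical OPT_INT(); the real m_option fields and macros differ in
detail:

```c
#include <stddef.h>

// Stand-ins for the real option machinery, only to show the declaration shape.
struct m_option_type { const char *name; };
static const struct m_option_type m_option_type_int = {"int"};

struct m_option {
    const char *name;
    const struct m_option_type *type;
    size_t offset;
    int flags;
};

struct opts { int volume; };

// The macro expands only to designated initializers for .type and .offset...
#define OPT_INT(field) \
    .type = &m_option_type_int, .offset = offsetof(struct opts, field)

// ...so a declaration starts with {"name", ...} and further designated
// initializers (flags, defaults, etc.) can be appended outside the macro.
static const struct m_option example_opts[] = {
    {"volume", OPT_INT(volume), .flags = 0},
    {0}
};
```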
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this with this change.
This no longer hard-codes BT.709, it converts to whatever primaries are
tagged in the same metadata struct. The actual BT.709 defaulting comes
from `mp_image_params_guess_csp`.
I really wouldn't care much about this, but some parts of the core code
are under HAVE_GPL, so there's some need to get rid of it. Simply turn
the video equalizer from its current fine-grained handling with vf/vo
fallbacks into global options. This makes updating them much simpler.
This removes any possibility of applying video equalizers in filters,
which affects vf_scale, and the previously removed vf_eq. Not a big
loss, since the preferred VOs have this builtin.
Remove video equalizer handling from vo_direct3d, vo_sdl, vo_vaapi, and
vo_xv. I'm not going to waste my time on these legacy VOs.
vo.eq_opts_cache exists _only_ to send a VOCTRL_SET_EQUALIZER, which
exists _only_ to trigger a redraw. This seems silly, but for now I feel
like this is less of a pain. The rest of the equalizer-using code is
self-updating.
See commit 96b906a51d for how some video equalizer code was GPL only.
Some command line option names and ranges can probably be traced back to
a GPL only committer, but we don't consider these copyrightable.
This preserves channel balance better and helps reduce discoloration due
to nonlinear tone mapping.
I wasn't sure whether to stuff this inside pass_color_manage or
pass_tone_map, but decided on the former because adding the extra
mp_csp_prim would have made the signature of the latter longer than
80 columns, and also because the `mp_get_cms_matrix` below it basically does
the same thing anyway, so it doesn't look that out of place. Also why is
this justification longer than the actual description of the algorithm
and what it's good for?
This introduces (yet another...) mp_colorspace member, an enum `light`
(for lack of a better name) which basically tells us whether we're
dealing with scene-referred or display-referred light, but also a bit
more metadata (in which way is the scene-referred light expected to be
mapped to the display?).
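Roughly, the new member might look like this (a sketch; the exact enumerator
names may differ from csputils.h):

```c
// Which kind of light the signal represents and, for scene-referred light,
// which OOTF is expected to map it to the display.
enum mp_csp_light {
    MP_CSP_LIGHT_AUTO,
    MP_CSP_LIGHT_DISPLAY,        // display-referred (the common case)
    MP_CSP_LIGHT_SCENE_HLG,      // scene-referred, HLG OOTF
    MP_CSP_LIGHT_SCENE_709_1886, // scene-referred, 709+1886 OOTF
    MP_CSP_LIGHT_SCENE_1_2,      // scene-referred, gamma 1.2 OOTF
};
```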
The addition of this parameter accomplishes two goals:
1. Allows us to actually support HLG more-or-less correctly[1]
2. Allows people playing back direct “camera” content (e.g. v-log or
s-log2) to treat it as scene-referred instead of display-referred
[1] Even better would be to use the display-referred OOTF instead of the
idealized OOTF, but this would require either native HLG support in
LittleCMS (unlikely) or more communication between lcms.c and
video_shaders.c than I'm remotely comfortable with
That being said, in principle we could switch our usage of the BT.1886
EOTF to the BT.709 OETF instead and treat BT.709 content as being
scene-referred under application of the 709+1886 OOTF. That would move this
particular conversion from the 3dlut to the shader code, but would also allow
a) users like UliZappe to turn it off and b) supporting the full HLG
OOTF in the same framework. But I think I prefer things as they are
right now.
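For reference, a sketch of the 709+1886 OOTF mentioned above (idealized
BT.1886 with zero black level; purely illustrative):

```c
#include <math.h>

static double oetf_bt709(double l)      // scene-referred light in [0,1]
{
    return l < 0.018 ? 4.5 * l : 1.099 * pow(l, 0.45) - 0.099;
}

static double eotf_bt1886(double v)     // non-linear signal in [0,1]
{
    return pow(v, 2.4);                 // idealized, black level = 0
}

// The 709+1886 OOTF: scene light through the BT.709 OETF, then the result
// through the BT.1886 EOTF, yielding display-referred light.
static double ootf_709_1886(double scene_light)
{
    return eotf_bt1886(oetf_bt709(scene_light));
}
```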
st2084 and std-b67 are really weird names for PQ and HLG, which is what
everybody else (including e.g. the ITU-R) calls them. Follow their
example.
I decided against naming them bt2020-pq and bt2020-hlg because it's not
necessary in this case. The standard name is only used for the other
colorspaces etc. because those literally have no other names.
List of changes:
1. Kill nom_peak, since it's a pointless non-field that stores nothing
of value and is _always_ derived from ref_white anyway.
2. Kill ref_white/--target-brightness, because the only case it really
existed for (PQ) actually doesn't need to be this general: According
to ITU-R BT.2100, PQ *always* assumes a reference monitor with a
white point of 100 cd/m².
3. Improve documentation and comments surrounding this stuff.
4. Clean up some of the code in general. Move stuff where it belongs.
Implementation-wise, the values from the demuxer/codec header are merged
with the values from the decoder such that the former are used only
where the latter are unknown (0/auto).
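A minimal sketch of that merge rule, with hypothetical struct and field names:

```c
// Container (demuxer/codec header) values only fill in fields the decoder
// left unknown (0/auto); decoder values always win where present.
struct color_params {
    int primaries;   // 0 == auto/unknown
    int transfer;    // 0 == auto/unknown
    float sig_peak;  // 0 == unknown
};

static void merge_params(struct color_params *out,
                         const struct color_params *container,
                         const struct color_params *decoder)
{
    *out = *decoder;
    if (!out->primaries)
        out->primaries = container->primaries;
    if (!out->transfer)
        out->transfer = container->transfer;
    if (!out->sig_peak)
        out->sig_peak = container->sig_peak;
}
```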
Commit aa1047a3 originally added this as:
+ // this is from the DarkPlaces engine, reduces to 3x3. Original code
+ // released under GPL2 or any later version.
According to Rudolf Polzer, the original author (a certain LH) was
actually asked whether it would be ok to put this code under LGPL, and
the author gave his agreement. This code is not from id Software either
(on which large parts of DarkPlaces are based), which is the main
reason why DarkPlaces is under GPL.
So this note is just confusing; the code has always been LGPL. Fix it.
These mostly happen in situations where the correct behavior is
relatively new and not found in the wild (therefore not worth
implementing) and/or extremely complicated (and thus not worth worrying
about the potential edge cases and UI changes).
Still, it's best to document these where they happen to guide the poor
souls maintaining these files in the future.
This involves multiple changes:
1. Brightness metadata is split into nominal peak and signal peak.
For a quick and dirty explanation: nominal peak is the brightest value
that your color space can represent (i.e. the brightness of an encoded
1.0), and signal peak is the brightest value that actually occurs in
the video (i.e. the brightest thing that's displayed). A short worked
example follows after this list.
2. vo_opengl uses a new decision logic to figure out the right nom_peak
and sig_peak for all situations. It also does a better job of picking
the right target gamut/colorspace to use for the OSD. (Which still is,
and still should be, treated as sRGB.) This change in logic also
fixes #3293 en passant.
3. Since it was growing rapidly, the logic for auto-guessing / inferring
the right colorimetry configuration (in pass_colormanage) was split from
the logic for actually performing the adaptation (now pass_color_map).
Right now, the new logic doesn't do a whole lot since HDR metadata is
still ignored (but not for long).
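A short worked example for point 1 (the numbers are illustrative, assuming
reference white sits at 100 cd/m² and is normalized to 1.0):

```c
// PQ can encode up to 10000 cd/m^2, so the brightness of an encoded 1.0 is
// 10000 / 100 = 100.0 times reference white -> nominal peak 100.0. If the
// brightest pixel a given video actually contains is 1000 cd/m^2, its signal
// peak is only 1000 / 100 = 10.0, which is what tone mapping should target.
static const double nom_peak = 10000.0 / 100.0;
static const double sig_peak = 1000.0 / 100.0;
```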
This has two reasons:
1. I tend to add new fields to this metadata, and every time I've done
so I've consistently forgotten to update all of the dozens of places in
which this colorimetry metadata might end up getting used. While most
usages don't really care about most of the metadata, sometimes the
intent was simply to “copy” the colorimetry metadata from one struct to
another. With this being inside a substruct, those lines of code can now
simply read a.color = b.color without having to care about added or
removed fields.
2. It makes the type definitions nicer for upcoming refactors.
In going through all of the usages, I also expanded a few where I felt
that omitting the “young” fields was a bug.
Instead of hard-coding a big list, move some of the functionality
to csputils. Affects both the auto-guess blacklist and the peak
estimation.
Also update the comments.
User request and not that hard. Closes #3157.
Note that FFmpeg doesn't support this and there's no signalling in HEVC
etc., so the only way users can access it is by using vf_format
manually.
Mind: This encoding uses full range values, not TV range.
This is actually not entirely trivial since it involves negative Yxy
coordinates, so the CMM has to be capable of full floating point
operation. Fortunately, LittleCMS is, so we can just blindly implement
it.
This HDR function is unique in that it's still display-referred; it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range)
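For illustration, the (ARIB-style) HLG inverse OETF with that normalization
applied, so the output stays in [0,1] with 1.0 corresponding to the nominal
peak of 12.0 (a sketch, not the shader code):

```c
#include <math.h>

static double hlg_inv_oetf_norm(double ep)  // ep: non-linear signal in [0,1]
{
    const double a = 0.17883277, b = 0.28466892, c = 0.55991073;
    // Lower segment: E = 4*ep^2; upper segment: E = exp((ep - c)/a) + b.
    // Both are divided by the nominal peak 12.0, so that ep == 1 maps to 1.0.
    return ep <= 0.5 ? (ep * ep) / 3.0
                     : (exp((ep - c) / a) + b) / 12.0;
}
```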
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since they were added in the same release window.
This now lets us auto-detect appropriately tagged HDR content using
FFmpeg's new TRC entries (when available).
Hidden behind an #if because Libav stable doesn't have it yet.
Currently, this relies on the user manually entering their display
brightness (since we have no way to detect this at runtime or from ICC
metadata). The default value of 250 was picked by looking at ~10 reviews
on tftcentral.co.uk and realizing they all come with around 250 cd/m^2
out of the box. (In addition, ITU-R Rec. BT.2022 supports this)
Since there is no metadata in FFmpeg to indicate usage of this TRC, the
only way to actually play HDR content currently is to set
``--vf=format=gamma=st2084``. (It could be guessed based on SEI, but
this is not implemented yet)
Incidentally, since SEI is ignored, it's currently assumed that all
content is scaled to 10,000 cd/m^2 (and hard-clipped where out of
range). I don't see this assumption changing much, though.
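For reference, a sketch of the SMPTE ST2084 (PQ) EOTF that this assumption
corresponds to: the encoded signal in [0,1] maps to absolute luminance of up
to 10,000 cd/m²:

```c
#include <math.h>

static double pq_eotf(double ep)   // ep: non-linear signal in [0,1]
{
    const double m1 = 2610.0 / 16384.0;
    const double m2 = 2523.0 / 4096.0 * 128.0;
    const double c1 = 3424.0 / 4096.0;
    const double c2 = 2413.0 / 4096.0 * 32.0;
    const double c3 = 2392.0 / 4096.0 * 32.0;
    double p = pow(ep, 1.0 / m2);
    return 10000.0 * pow(fmax(p - c1, 0.0) / (c2 - c3 * p), 1.0 / m1);
}
```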
As an unfortunate consequence of the fact that we don't know the display
brightness, mixed with the fact that LittleCMS' parametric tone curves
are not flexible enough to support PQ, we have to build the 3DLUT
against gamma 2.2 if it's used. This might be a good thing, though,
considering the PQ source space is probably not fantastic for
interpolation either way.
Partially addresses #2572.
This colorspace has been historically used as a calibration target for
most digital projectors and sees some involvement in the UltraHD
standards, so it's a useful addition to mpv.