With the conversion from sub-options to global options, this becomes
useless. This change also comes slightly too soon, because not all VOs
have been changed yet.
vo_opengl sub-options were always rather annoying to handle. It seems
better to make them global options instead. This is simpler and easier
to use. The only disadvantage we are aware of is that it's not obvious
that many/all of these new global options apply to vo_opengl only.
--vo=opengl-hq is also deprecated.
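For example (illustrative, not an exhaustive mapping): scaler settings
that used to be spelled ``--vo=opengl:scale=ewa_lanczos`` are now
spelled ``--vo=opengl --scale=ewa_lanczos``.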
There is extensive compatibility with the old behavior. One exception is
that --vo-defaults will not apply to opengl-hq (though with opengl it
still works). vo-cmdline is also dysfunctional and will be removed in a
following commit.
These changes also affect opengl-cb.
The update mechanism is still rather inefficient: it requires syncing
with the VO after each option change, rather than batching updates.
There's also no granularity (video.c just updates "everything", and if
auto-ICC profiles are enabled, vo_opengl.c will fetch them on each
update).
Most of the manpage changes were done by Niklas Haas <git@haasn.xyz>.
Deprecated in favor of user-shaders, which are functionally equivalent
but superior. (Except in the case of scaler-shader, which has no direct
replacement - but it turned out to be a very unpopular feature anyway,
since most custom scalers don't fit into the mpv kernel infrastructure
and are therefore implemented as user shaders instead.)
Signed-off-by: wm4 <wm4@nowhere>
This requires changing the pixel upload alignment because the odd sizes
might not be aligned to multiples of 4.
Anyway, the restriction has no real benefit, and sizes between 32 and
64 might be worth using, so just drop it.
Following testing after ebe798a, this is a more than sufficient size to
cover our use case.
The old default was a drop of about 58 dB PSNR using the old code, and
this new default is about 65 dB PSNR, so it's actually an improvement
despite resulting in a smaller size.
There was no outlier whatsoever when comparing sizes around the 64
neighbourhood (with every step corresponding to a PSNR drop of about
0.07 dB), so I picked this since it's a power of two and requires no
change to the current 3dlut-size parsing logic.
I also tested smaller sizes such as 32x32x32 which performed almost as
well on colorful samples, but this results in noticeable black boost in
the dark regions, which is pretty undesirable. Therefore, we should
avoid going much further below 64x64x64.
Either way, this new size is so fast to compute that the 3dlut cache is
almost useless on my end. In fact, it might even be slower to load the
profile from the cache than to recompute it from scratch. (For caches
on disk, that is; for a cache on tmpfs, it makes no difference.)
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
User request and not that hard. Closes #3157.
Note that FFmpeg doesn't support this and there's no signalling in HEVC
etc., so the only way users can access it is by using vf_format
manually.
Mind: This encoding uses full range values, not TV range.
This is actually not entirely trivial since it involves negative Yxy
coordinates, so the CMM has to be capable of full floating point
operation. Fortunately, LittleCMS is, so we can just blindly implement
it.
Most devices seem to require special signalling (e.g. via HDMI
metadata) to actually decode HDR signals and treat them as such, so it's
probably worth warning the potential user about the fact that mpv pretty
definitely does *not* set any of this metadata signalling.
This HDR function is unique in that it's still display-referred, it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range)
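As a rough GLSL sketch of the idea (not mpv's literal shader code; the
constants are the published ARIB STD-B67 values):

    float hlg_to_linear(float x) {
        const float a = 0.17883277, b = 0.28466892, c = 0.55991073;
        // Scene light relative to reference white; peaks at 12.0:
        float e = (x <= 0.5) ? 4.0 * x * x
                             : exp((x - c) / a) + b;
        return e / 12.0; // normalize the nominal peak of 12.0 into [0,1]
    }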
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since both were added to libavutil in the same release
window.
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`.
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition on all of the bound
textures existing)
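A hypothetical header for such a chroma scaling shader could look like
this (using the WHEN syntax quoted above):

    //!HOOK CHROMA
    //!BIND HOOKED
    //!BIND LUMA
    //!WHEN CHROMA.w LUMA.w <

i.e. the pass only runs when the chroma plane is narrower than the luma
plane, and it is silently skipped if one of the bound textures does not
exist.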
This algorithm works really well. Making it the default gives a much
better "out-of-the-box" experience than just clipping, which always
looks ugly.
In other words, with this default, users of mpv will just be able to
play HDR content without even realizing it's HDR (pretty much).
Instead of doing HDR tone mapping on an ad-hoc basis inside
pass_colormanage, the reference peak of an image is now part of the
image params (alongside colorspace, gamma, etc.) and tone mapping is
done whenever peak_src != peak_dst.
To get sensible behavior when mixing HDR and SDR content and displays,
target-brightness is a generic filler for "the assumed brightness of SDR
content".
This gets rid of the weird display_scaled hack, sets the framework
for multiple HDR functions with different reference peaks, and allows
us to (in a future commit) autodetect the right source peak from
the HDR metadata.
(Apart from metadata, the source peak can also be controlled via
vf_format. For HDR content this adjusts the overall image brightness,
for SDR content it's like simulating a different exposure)
Remove the opengl-hq option default that caused it not to autoselect
ANGLE (unlike --vo=opengl). For details, see commit d5df90a2.
Back then the intention was to use ANGLE by default, since it integrates
much nicer with the Windows compositor (instead of native OpenGL, which
tends to cause crazy glitches). On the other hand, many opengl-hq
capabilities are not available with older ANGLE builds, so it didn't
make any sense to autoselect ANGLE for it.
With the GL_EXT_texture_norm16 extension recently added to ANGLE, it has
essentially reached feature parity with desktop GL for the subset we are
using. (Even the integer texture hack for high bit depth input could be
dropped now.)
It (probably) still does not support nnedi3, due to the weird way the NN
coefficients are imported. Also, it uses half-floats instead of 16 bit
fixed-point textures for technical reasons, which implies about 5 bits
of precision loss. If anyone actually manages to distinguish the two
dithering texture formats in a double-blind test, I will fix it.
Following commit 84ccebd9, the internal helpers don't allow GL_RGB and
GL_RGBA as internal formats for FBO attachments anymore.
While OpenGL itself is perfectly fine with it, I don't see much of a
reason to bother, and mixing sized and unsized internal formats is
confusing anyway.
Just remove these formats.
This is now a configurable option, with tunable parameters.
I got inspiration for these algorithms from Wikipedia. "simple" seems to
work pretty well, but not well enough to make it a reasonable default.
Some other notable candidates:
- Local functions (e.g. based on local contrast or gradient)
- Clamp with soft knee (linear up to a point)
- Mapping in CIE L*Ch. Map L smoothly, clamp C and h.
- Color appearance models
These will have to be implemented some other time.
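For illustration, a well-known curve of the general kind this option
selects between is the extended Reinhard operator - shown here as a
generic GLSL sketch, not as mpv's implementation of any particular mode:

    // Compresses [0, peak] into [0, 1], leaving dark values nearly
    // untouched and rolling off highlights smoothly.
    float tone_map_reinhard(float x, float peak) {
        return x * (1.0 + x / (peak * peak)) / (1.0 + x);
    }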
Note that the parameter "peak_src" to pass_tone_map should, in
principle, be auto-detected from the SEI information of the source file
where available. This will also have to be implemented in a later
commit.
Currently, this relies on the user manually entering their display
brightness (since we have no way to detect this at runtime or from ICC
metadata). The default value of 250 was picked by looking at ~10 reviews
on tftcentral.co.uk and realizing they all come with around 250 cd/m^2
out of the box. (In addition, ITU-R Rec. BT.2022 supports this)
Since there is no metadata in FFmpeg to indicate usage of this TRC, the
only way to actually play HDR content currently is to set
``--vf=format=gamma=st2084``. (It could be guessed based on SEI, but
this is not implemented yet)
Incidentally, since SEI is ignored, it's currently assumed that all
content is scaled to 10,000 cd/m^2 (and hard-clipped where out of
range). I don't see this assumption changing much, though.
As an unfortunate consequence of the fact that we don't know the display
brightness, mixed with the fact that LittleCMS' parametric tone curves
are not flexible enough to support PQ, we have to build the 3DLUT
against gamma 2.2 if it's used. This might be a good thing, though,
considering the PQ source space is probably not fantastic for
interpolation either way.
Partially addresses #2572.
This macro takes care of rotation, swizzling, integer conversion and
normalization automatically. I found the performance impact to be
nonexistent for superxbr and debanding, although rotation *did* have an
impact due to the extra matrix multiplication. (So it gets skipped where
possible)
All of the internal hooks have been rewritten to use this new mechanism,
and the prescaler hooks have finally been separated from each other.
This also means the prescale FBO kludge is no longer required.
This fixes image corruption for image formats like 0bgr, and also fixes
prescaling under rotation. (As well as other user hooks that have
orientation-dependent access)
The "raw" attributes (tex, tex_pos, pixel_size) are still un-rotated, in
case something needs them, but ideally the hooks should be rewritten to
use the new API as much as possible. The hooked texture has been renamed
from just NAME to NAME_raw to make script authors notice the change (and
also deemphasize direct texture access).
This is also a step towards getting rid of the use_integer pass.
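A sketch of what a hook body looks like with the new macros (names as
per the user-shaders documentation; treating HOOKED_mul as the manual
normalization factor is an assumption here):

    vec4 hook() {
        // Rotation-, swizzle- and normalization-aware access:
        vec4 color = HOOKED_tex(HOOKED_pos);
        // The raw texture is still reachable as HOOKED_raw, but then
        // normalization has to be applied manually, e.g.:
        //     vec4 raw = HOOKED_mul * texture(HOOKED_raw, HOOKED_pos);
        return color;
    }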
This replaces the previous TRANSFORM by WIDTH, HEIGHT and OFFSET where
WIDTH and HEIGHT are RPN expressions. This allows for more fine-grained
control over the output size, and also makes sure that overwriting
existing textures works more cleanly.
(Also add some more useful bstr functions)
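A hypothetical header for a 2x doubler under this scheme (illustrative
only; the -0.5 -0.5 offset is just an assumed example value):

    //!HOOK LUMA
    //!BIND HOOKED
    //!WIDTH HOOKED.w 2 *
    //!HEIGHT HOOKED.h 2 *
    //!OFFSET -0.5 -0.5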
This allows users to add their own near-arbitrary hooks to the vo_opengl
processing pipeline, greatly enhancing the flexibility of user shaders.
This enables, among other things, user shaders such as CrossBilateral,
SuperRes, LumaSharpen and many more.
To make parsing the user shaders easier, shaders are now loaded as
bstrs, and the hooks are set up during video reconfig instead of on
every single frame.
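A minimal complete user hook, using the syntax as it ends up after the
related commits described above (treat this as a sketch, not canonical
code):

    //!HOOK LUMA
    //!BIND HOOKED
    //!DESC pass-through example

    vec4 hook() {
        // Simply return the hooked texture unchanged.
        return HOOKED_tex(HOOKED_pos);
    }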
Some of this documentation was left woefully inaccurate as color
management in mpv evolved. This commit updates all of the wording and
adds notes and comments where appropriate.
First of all, black point compensation is now on by default. This is
really rather harmless and only improves the result (where "improvement"
means "less black clipping").
Second, this adds an option to limit the ICC profile's contrast, which
helps for untagged matrix profiles that are implicitly black scaled even
in colorimetric intent. (Note that this relies on BPC being enabled to
work properly, which is why the two changes are tied together)
Third, this uses the LittleCMS built-in black point estimator instead of
relying on the presence of accurate A2B tables. This also checks tags
and does some amounts of noise elimination.
If the option is unspecified and the profile is missing black point
information, print a warning instructing the user to set the option, and
fall back to 1000 otherwise.
This gives us 16 bit fixed-point integer texture formats, including the
ability to sample from them with linear filtering and to use them as FBO
attachments.
The integer texture format path is still there for the sake of ANGLE,
which does not support GL_EXT_texture_norm16 yet.
The change to pass_dither() is needed, because the code path using
GL_R16 for the dither texture relies on glTexImage2D being able to
convert from GL_FLOAT to GL_R16. GLES does not allow this. This could be
trivially fixed by doing the conversion ourselves, but I'm too lazy to
do this now.
This colorspace has been historically used as a calibration target for
most digital projectors and sees some involvement in the UltraHD
standards, so it's a useful addition to mpv.
Since prescale now literally only affects the luma plane (and the
filters are all designed for luma-only operation either way), the option
has been renamed and the documentation updated to clarify this.
GLES does not support high bit depth fixed point textures for unknown
reasons, so direct 10 bit input is not possible. But we can still use
integer textures, which are supported by GLES 3.0. These store integer
data just like the standard fixed point textures, except they are not
normalized on sampling. They also don't support bilinear filtering, and
require a special sampler ("usampler2D").
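As a rough illustration of the difference (not mpv's actual code),
sampling a 10 bit plane from an integer texture looks something like
this:

    uniform usampler2D plane;   // integer texture: no normalization,
                                // no bilinear filtering
    vec4 fetch_normalized(ivec2 pos) {
        uvec4 raw = texelFetch(plane, pos, 0);
        return vec4(raw) / 1023.0; // normalize 10 bit data to [0,1] manually
    }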
While these texture formats enable us to shuffle the data to the GPU,
they're rather impractical with the requirements mentioned above and our
current architecture. One problem is that most code assumes it can
always use bilinear scaling (even if bilinear is never used when using
appropriate scale/cscale options). Another is that we don't have any
concept of running a function on a texture in a uniform way.
So for now, run a simple conversion step through an FBO. The FBO will use
the rgba16f format normally, which gives enough bits for 10 bit, and
will at least gracefully degrade with higher depth input.
This is bound to be much slower than a more "direct" method, but at
least it works and is simple to implement.
The odd change of function call order in init_video() is to properly
disable "dumb mode" (no FBO use) if these texture formats are in use.
Often requested. The main argument, that prominent scalers like sharpen
change the image even if no scaling happens, disappeared anyway.
("sharpen", unsharp masking, is neither prominent nor a scaler anymore.
This is an artifact from MPlayer, which fuses unsharp masking with
bilinear scaling in order to make it single-pass, or so.)
Add a "blend-tiles" choice to the "alpha" sub-option. This is pretty
simplistic and uses the GL raster position to derive the tiles. A weird
consequence is that using --vo=opengl and --vo=opengl-hq gives different
scaling behavior (screenspace pixel size vs. source video pixel size
16x16 tiles), but it seems we don't have easy access to the original
texture coordinates. Using the rasterpos is probably simpler.
Make this option the default.
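Roughly, and with made-up gray values (this is a sketch, not the exact
implementation), the tiles can be derived from the raster position in
the fragment shader like this, assuming "color" holds the sampled RGBA
value:

    // 16x16 screen-space checkerboard derived from the raster position:
    float tile = mod(floor(gl_FragCoord.x / 16.0) +
                     floor(gl_FragCoord.y / 16.0), 2.0);
    vec3 background = vec3(mix(0.75, 0.55, tile));
    color.rgb = mix(background, color.rgb, color.a);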
WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
allows a texture to be shared between Direct3D and WGL, so that
rendering can be done with WGL and presentation can be done with
Direct3D. This should allow us to work around some persistent WGL
issues, such as dropped frames with some driver/OS combos, drivers that
buffer frames to increase performance at the cost of latency, and the
inability to disable exclusive fullscreen mode when using WGL to render
to a fullscreen window.
The addition of a DX_interop backend might also enable some cool
Direct3D-specific enhancements in the future, such as using the
GetPresentStatistics API to get accurate frame presentation timestamps.
Note that due to a driver bug, this backend is currently broken on
Intel. It will appear to work as long as the window is not resized too
often, but after a few changes of size it will be unable to share the
newly created renderbuffer with GL. See:
https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
It turns out that with accurate lookup we can decrease the default
texture size now. Do it to compensate for the performance loss
introduced by the LUT_POS macro.
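For reference, the macro in question is roughly:

    // Remap a [0,1] coordinate so it addresses texel centers of a LUT
    // with lut_size entries, instead of stretching across texel edges.
    #define LUT_POS(x, lut_size) \
        mix(0.5 / (lut_size), 1.0 - 0.5 / (lut_size), (x))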