The purpose of the new API is to make it usable with APIs other than
OpenGL, especially D3D11 and Vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the ones the internal ra API has,
because that sounds more complicated and annoying for an API that's
supposed to never change.
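For the OpenGL case, the new lifecycle looks roughly like this (a
minimal sketch, error handling omitted; mpv is the client handle from
mpv_create(), and my_get_proc_address stands in for whatever GL loader
the embedding application uses):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    mpv_opengl_init_params gl_init = {
        .get_proc_address = my_get_proc_address,
    };
    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
        {0}
    };
    mpv_render_context *rctx;
    mpv_render_context_create(&rctx, mpv, params); // explicit create
    // ... render ...
    mpv_render_context_free(rctx);                 // explicit destroy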
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
This introduces the option --drm-format (currently used only by
context_drm_egl; the vo_drm implementation is pending), which allows
you to pick between an xrgb8888 or an xrgb2101010 visual for
--gpu-context=drm.
Requires a recent mesa (18.0.0_rc4 or later) to work.
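For example:

    mpv --gpu-context=drm --drm-format=xrgb2101010 video.mkv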
This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
this is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. the problems are various shortcomings of
Apple's opengl implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. there are also
certain regressions on newer macOS versions from 10.11 onwards.
- awful opengl performance with a non-layer-backed context
- huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not properly working with a non-layer-backed context
- bad performance in fullscreen because of system optimisations
all the problems were caused by using a normal opengl context, that
seems somewhat abandoned by apple, and are fixed by using a layer backed
opengl context instead. problems that couldn't be fixed could be
properly worked around.
this has all the features our old backend has, sans the wid embedding,
the possibility to disable the automatic GPU switching, and taking
screenshots of the window content. the first was deemed unnecessary by
me for now, since i just use the libmpv API that others can use anyway.
the second is technically not possible atm because we have to
pre-allocate our opengl context at a time when the config isn't read
yet, so we can't get the needed property. the third one is a bit tricky
because of deadlocking and the need to keep it in sync; hopefully i can
work around that in the future.
this also has at least one additional feature, or bit of eye-candy: a
properly working fullscreen animation with the native fs. also, since
this is a direct port of the parts of the old backend that could be
reused, though with adaptations and improvements, it looks a lot cleaner
and is easier to understand.
some credit goes to @pigoz for the initial swift build support which
i could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in FFmpeg, which lets decoders provide an AVDRMFrameDescriptor
struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on a RockChip ROCK64 device.
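On the consumer side, reading the descriptor looks roughly like this (a
sketch based on FFmpeg's hwcontext_drm.h; the actual import/display
step is omitted):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    static void inspect_prime_frame(const AVFrame *frame)
    {
        // for AV_PIX_FMT_DRM_PRIME, data[0] points to the descriptor
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        for (int i = 0; i < desc->nb_objects; i++) {
            int fd = desc->objects[i].fd; // dmabuf fd, importable via
            (void)fd;                     // EGL/KMS for zero-copy use
        }
    }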
At the moment, rendering on Android requires ``--vo=opengl-cb`` and
a lot of java<->c++ bridging code to receive and react to the render
callback in Java. Performance also suffers with opengl-cb, due to the
overhead of context switching in JNI.
With this patch, Android can render using ``--vo=gpu --gpu-context=android``
(after setting ``--wid`` to point to an android.view.Surface on-screen).
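Passing the Surface is up to the embedder; a hypothetical JNI sketch
(class and function names made up, g_mpv assumed to be created during
app init):

    #include <stdint.h>
    #include <jni.h>
    #include <mpv/client.h>

    extern mpv_handle *g_mpv; // assumption: created elsewhere

    JNIEXPORT void JNICALL
    Java_com_example_MpvView_nativeAttachSurface(JNIEnv *env,
                                                 jobject thiz,
                                                 jobject surface)
    {
        // keep the android.view.Surface alive and pass it as --wid
        jobject ref = (*env)->NewGlobalRef(env, surface);
        int64_t wid = (int64_t)(intptr_t)ref;
        mpv_set_option(g_mpv, "wid", MPV_FORMAT_INT64, &wid);
    }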
This is done in several steps:
1. refactor MPGLContext -> struct ra_ctx
2. move GL-specific stuff in vo_opengl into opengl/context.c
3. generalize context creation to support other APIs, and add --gpu-api
(usage example after this list)
4. rename all of the --opengl- options that are no longer opengl-specific
5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
(note: opengl/gl_utils.h became opengl/utils.h)
6. rename vo_opengl to vo_gpu
7. to handle window screenshots, the short-term approach was to just add
it to ra_swapchain_fns. Long term (and for vulkan) this has to be moved to
ra itself (and vo_gpu altered to compensate), but this was a stop-gap
measure to prevent this commit from getting too big
8. move ra->fns->flush to ra_gl_ctx instead
9. some other minor changes that I've probably already forgotten
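Usage example for the result of steps 3/4 (available values depend on
what the build supports):

    mpv --vo=gpu --gpu-api=opengl video.mkv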
Note: This is one half of a major refactor, the other half of which is
provided by rossy's following commit. This commit enables support for
all linux platforms, while his version enables support for all non-linux
platforms.
Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the
--opengl- options like --opengl-early-flush, --opengl-finish etc. Should
be a strict superset of the old functionality.
Disclaimer: Since I have no way of compiling mpv on all platforms, some
of these ports were done blindly. Specifically, the blind ports included
context_mali_fbdev.c and context_rpi.c. Since they're both based on
egl_helpers, the port should have gone smoothly without any major
changes required. But if somebody complains about a compile error on
those platforms (assuming anybody actually uses them), you know where to
complain.
image_writer.c has code originating from vf_screenshot.c, vo_jpeg.c, and
potentially others. vo_image.c is based on a bunch of those VOs as well,
and the intention was to replace them with a single codebase.
vo_tga.c was written by someone who was not or could not be contacted,
but it doesn't matter anyway, as no code from that initial patch was
used.
One rather old patch (57f77bb41a9) reordered my libjpeg patch's API
calls, and the author of the patch was not contacted. But at least with
the
smoothing_factor override removed, this pretty much exactly corresponds
to the official libjpeg API example (and might even reflect a change to
those - didn't dig deeper). This removes the -jpeg-smooth option. While
we're at it, remove all the other dropped jpeg options from the manpage
(which was forgotten in past changes).
Long planned. Leads to some sanity.
There still are some rather gross things. Especially g_groups is ugly,
and a hack that can hopefully be removed. (There is a plan for it, but
whether it's implemented depends on how much energy is left.)
Since the recent release was named 0.22.0 instead of 0.21.1, bump all
mentions of 0.22.0 to 0.23.0. These were planned removals of deprecated
versions, which obviously didn't happen in 0.22.0.
- Change connector selection to accept human readable names (such as
eDP-1, HDMI-A-2) rather than arbitrary numbers.
- Change GPU selection to accept GPU number rather than device paths.
- Merge connector and GPU selection into one --drm-connector.
- Add support for --drm-connector=help (examples after this list).
- Add support for --drm-* in EGL backend.
- Refactor KMS; reduce state sharing across drm_common.
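Examples (connector names depend on the system):

    mpv --vo=drm --drm-connector=help
    mpv --vo=drm --drm-connector=HDMI-A-2 video.mkv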
With the conversion from sub-options to global options, this becomes
useless. This change also comes slightly too soon, because not all VOs
have been changed yet.
vo_opengl sub-options were always rather annoying to handle. It seems
better to make them global options instead. This is simpler and easier
to use. The only disadvantage we are aware of is that it's not clear
that many/all of these new global options work with vo_opengl only.
--vo=opengl-hq is also deprecated.
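For example, a sub-option like scale moves from the VO string to a
global option:

    # old
    mpv --vo=opengl:scale=ewa_lanczos video.mkv
    # new
    mpv --vo=opengl --scale=ewa_lanczos video.mkv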
There is extensive compatibility with the old behavior. One exception is
that --vo-defaults will not apply to opengl-hq (though with opengl it
still works). vo-cmdline is also dysfunctional and will be removed in a
following commit.
These changes also affect opengl-cb.
The update mechanism is still rather inefficient: it requires syncing
with the VO after each option change, rather than batching updates.
There's also no granularity (video.c just updates "everything", and if
auto-ICC profiles are enabled, vo_opengl.c will fetch them on each
update).
Most of the manpage changes were done by Niklas Haas <git@haasn.xyz>.
Deprecated in favor of user-shaders, which are functionally equivalent
but superior. (Except in the case of scaler-shader, which has no direct
replacement, but it turned out to be a very unpopular feature anyway -
most custom scalers don't fit into the mpv kernel infrastructure and
are therefore implemented as user shaders either way.)
Signed-off-by: wm4 <wm4@nowhere>
This requires changing the pixel upload alignment, because the odd
sizes might not be aligned to multiples of 4.
Anyway, the restriction has no real benefit, and the sizes between 32
and 64 might be worth using, so just drop it.
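On the GL side this boils down to something like the following (a
sketch; mpv's actual upload path is more involved, and the texture
format here is an assumption):

    // rows of an odd-sized RGB16 LUT are not 4-byte aligned, so relax
    // the default unpack alignment before uploading
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB16, w, h, d, 0,
                 GL_RGB, GL_UNSIGNED_SHORT, data);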
Following testing after ebe798a, this is a more than sufficient size to
cover our use case.
The old default gave about 58 dB PSNR using the old code, and this new
default gives about 65 dB PSNR, so it's actually an improvement despite
resulting in a smaller size.
There was no outlier whatsoever when comparing sizes around the 64
neighbourhood (with every step corresponding to a PSNR drop of about
0.07 dB), so I picked this since it's a power of two and requires no
change to the current 3dlut-size parsing logic.
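For a sense of scale: 64x64x64 = 262,144 LUT entries. Anyone wanting to
override the default can still do so; depending on the mpv version the
option spelling is something like:

    --icc-3dlut-size=64x64x64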
I also tested smaller sizes such as 32x32x32 which performed almost as
well on colorful samples, but this results in noticeable black boost in
the dark regions, which is pretty undesirable. Therefore, we should
avoid going much further below 64x64x64.
Either way, this new size is so fast to compute that the 3dlut cache is
almost useless on my end. In fact, it might even be slower to load the
profile from the cache than to recompute it from scratch. (For caches on
disk; for a cache on a tmpfs, it makes no difference.)
Old-style commands using _ as separator (e.g. show_progress) were still
used in some places, including documentation and configuration files.
This commit updates all such instances to the new style (show-progress)
so that commands are easier to find in the manual.
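For instance, in input.conf:

    # old style
    p show_progress
    # new style
    p show-progress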
User request and not that hard. Closes #3157.
Note that FFmpeg doesn't support this and there's no signalling in HEVC
etc., so the only way users can access it is by using vf_format
manually.
Mind: This encoding uses full range values, not TV range.
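i.e. something along these lines, with the actual transfer name filled
in (the name isn't given here, so treat it as a placeholder):

    mpv --vf=format=gamma=<transfer> video.mkv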
This is actually not entirely trivial since it involves negative Yxy
coordinates, so the CMM has to be capable of full floating point
operation. Fortunately, LittleCMS is, so we can just blindly implement
it.
Most devices seem to require special signalling (e.g. via HDMI
metadata) to actually decode HDR signals and treat them as such, so it's
probably worth warning the potential user about the fact that mpv pretty
definitely does *not* set any of this metadata signalling.
This HDR function is unique in that it's still display-referred, it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range)
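A sketch of that normalization, following the ARIB STD-B67 inverse
OETF (constants from the standard; the function name is illustrative):

    #include <math.h>

    // maps the encoded signal x in [0,1] to linear light in [0,1],
    // where 1.0 corresponds to the nominal peak of 12.0
    static float hlg_to_linear(float x)
    {
        const float a = 0.17883277f, b = 0.28466892f, c = 0.55991073f;
        return (x <= 0.5f) ? (x * x) / 3.0f
                           : (expf((x - c) / a) + b) / 12.0f;
    }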
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since the two were added in the same release window.
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`.
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition on all of the bound
textures existing)
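A minimal sketch of such a hook (trivial passthrough body, just to
show the directives):

    //!HOOK CHROMA
    //!BIND HOOKED
    //!WHEN CHROMA.w LUMA.w <
    //!DESC example chroma hook, runs only when chroma is subsampled
    vec4 hook() {
        return HOOKED_texOff(vec2(0.0));
    }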
This algorithm works really well. Making it the default is a much
better "out-of-the-box" experience than just clipping, which will always
look ugly.
In other words, with this default, users of mpv will just be able to
play HDR content without even realizing it's HDR (pretty much).
Instead of doing HDR tone mapping on an ad-hoc basis inside
pass_colormanage, the reference peak of an image is now part of the
image params (alongside colorspace, gamma, etc.) and tone mapping is
done whenever peak_src != peak_dst.
To get sensible behavior when mixing HDR and SDR content and displays,
target-brightness is a generic filler for "the assumed brightness of SDR
content".
This gets rid of the weird display_scaled hack, sets the framework
for multiple HDR functions with different reference peaks, and allows
us to (in a future commit) autodetect the right source peak from
the HDR metadata.
(Apart from metadata, the source peak can also be controlled via
vf_format. For HDR content this adjusts the overall image brightness,
for SDR content it's like simulating a different exposure)
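e.g., assuming the vf_format sub-option is called "peak" (an
assumption; this message doesn't name it):

    mpv --vf=format=peak=2.0 hdr-video.mkv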