- change asserts to silent exits
- check all pointers before use
- move the p->pass initialization code to the right place
This should hopefully cut down on the amount of crashing by making the
code fundamentally more robust, while also fixing a concrete issue where
opengl-cb failed to initialize p->pass.
This allows filter functions to be prematurely cut off once their
contributions start becoming insignificant. This effectively avoids
wasting GPU time on sampling parts of the function that are essentially
reduced to zero by the window function, providing anywhere from a 10% to
20% speedup. (5700μs -> 4700μs for me)
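Roughly, the cutoff works like this (illustrative sketch, not the exact
implementation):

    #include <math.h>

    // Scan the windowed filter weight and remember the last radius at
    // which it is still significant; sampling then stops there instead
    // of at the nominal filter radius. `weight` is the window * kernel
    // product, `cutoff` is some small threshold.
    static double radius_cutoff(double (*weight)(double), double radius,
                                double cutoff)
    {
        double r_cut = 0.0;
        for (int i = 0; i <= 1024; i++) {
            double r = radius * i / 1024.0;
            if (fabs(weight(r)) > cutoff)
                r_cut = r;
        }
        return r_cut;
    }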
2f41c4e8 exposed some other edge cases as well. Globally resetting the
pass info was not the right way to go about it, because we don't know in
advance what the frame type is going to be - at least not with the
current code structure. (In principle, we could separately indicate the
frame type and the pass type and then only reset it on the first
actual pass_describe call, but that's annoying as well)
Also fixes a latent issue where p->pass was never initialized, which
broke the MP_DBG debugging code in some cases.
Since all existing code does gl_video_upload immediately followed by
pass_render_frame, we can just move the upload into pass_render_frame
itself, which arguably makes more sense anyway.
This replaces `vo-performance` by `vo-passes`, bringing with it a number
of changes and improvements:
1. mpv users can now introspect the vo_opengl passes, which is something
that has been requested multiple times.
2. performance data is now measured per-pass, which helps both
development and debugging.
3. since adding more passes is cheap, we can now report information for
more passes (e.g. the blit pass, and the osd pass). Note: we also
switch to nanosecond scale, to be able to measure these passes
better.
4. `--user-shaders` authors can now describe their own passes, helping
users both identify which user shaders are active at any given time
as well as helping shader authors identify performance issues.
5. the timing data per pass is now exported as a full list of samples,
so projects like Argon-/mpv-stats can immediately read out all of the
samples and render a graph without having to manually poll this
option constantly.
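Roughly, the per-pass data behind points 2 and 5 looks like this
(hypothetical names, not the actual structs):

    #include <stdint.h>

    #define PASS_MAX_SAMPLES 256                // illustrative size

    struct pass_perf {
        const char *desc;                       // e.g. "osd", "blit"
        uint64_t samples_ns[PASS_MAX_SAMPLES];  // raw GPU times (ns)
        int num_samples;                        // valid entries
    };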
Due to gl_timer's design being complicated (directly reading performance
data would block, so we delay the actual read-back until the next _start
command), it's vital not to conflate different passes that might be
doing different things from one frame to another. To accomplish this,
the actual timers are stored as part of the gl_shader_cache's sc_entry,
which makes them unique for that exact shader.
Starting and stopping the time measurement is easy to unify with the
gl_sc architecture, because the existing API already relies on a
"generate, render, reset" flow, so we can just put timer_start and
timer_stop in sc_generate and sc_reset, respectively.
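Roughly (names and signatures illustrative, not the exact API):

    struct gl_timer;                  // delayed-read-back GPU timer

    struct sc_entry {
        unsigned int gl_program;      // the cached/compiled shader
        struct gl_timer *timer;       // unique to this exact shader
    };

    // gl_sc_generate(): look up (or compile) the sc_entry for the
    //                   current shader, read back the previous
    //                   measurement, then timer_start().
    // <render>:         bind the target and issue the draw call.
    // gl_sc_reset():    timer_stop() and clear the shader state for
    //                   the next pass.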
The ugliest thing about this code is that due to the need to keep pass
information relatively stable in between frames, we need to distinguish
between "new" and "redrawn" frames, which bloats the code somewhat and
also feels hacky and vo_opengl-specific. (But then again, this entire
thing is vo_opengl-specific)
For some braindead reason, Microsoft decided to prevent you from
dynamically loading system libraries. This makes portability harder.
And we're talking about portability between Microsoft OSes!
This partially reverts the change from a while ago to always build
DXVA2 and D3D11VA together.
To make it simpler, we change the following:
- building with ANGLE headers is now required to build D3D hwaccels
- if DXVA2 is enabled, D3D11VA is still forcibly built
- the CLI vo_opengl ANGLE backend is now under --egl-angle-win32
This is done to reduce the dependency mess slightly.
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
Instead of setting up a weird swizzle (which is linked to how the
internal renderer code works, rather than the generic format code), add
per-component mapping to gl_imgfmt_desc.
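Roughly (sketch; field names approximate):

    struct gl_imgfmt_desc {
        int num_planes;
        struct {
            // which image component each texture channel carries
            // (1=R/Y, 2=G/U, 3=B/V, 4=A, 0=unused); e.g. NV12:
            // plane 0 = {1}, plane 1 = {2, 3}
            int components[4];
        } planes[4];
    };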
The renderer still computes the weird swizzle, but at least it's
confined to itself. Also, it appears the hwdec backends don't need this
anymore.
It's really nice that the messy init_format() goes away too.
The change to path list options basically gets rid of the need to pass
multiple paths to a single option. Instead, you can use the option
multiple times. The old behavior is still available via the -set suffix
on the option.
Change some options to path lists. For example --script is now append by
default, and if you use --script-set, you need to use ":"/";" as
separator instead of ",".
--sub-paths/--audio-file-paths are deprecated aliases now, and will
break if the user tries to pass multiple paths to them. I'm assuming
that if
these are used, most users will pass only 1 path anyway.
--opengl-shaders has more compatibility handling, since it's probably
rather common that users pass multiple options to it.
Also document all that in the manpage.
I'll probably regret this later, as it somewhat increases the complexity
of the option parser, rather than decreasing it.
Add something that allows us to extract the component order from various
RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.
It introduces a new struct mp_regular_imgfmt, that hopefully will
eventually replace struct mp_imgfmt_desc. The latter is still needed by
a lot of code though, especially generic code. Also vo_opengl still uses
the old one, so this commit is sort of incomplete.
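Roughly, the new description looks like this (field names approximate):

    #include <stdint.h>

    struct mp_regular_imgfmt_plane {
        uint8_t num_components;
        uint8_t components[4];       // component indices in plane
                                     // order; 0 means padding
    };

    struct mp_regular_imgfmt {
        uint8_t component_size;      // bytes per component
        uint8_t num_planes;
        struct mp_regular_imgfmt_plane planes[4];
        uint8_t chroma_w, chroma_h;  // chroma subsampling factors
    };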
Due to its genericness, it's also possible that this commit introduces
rendering bugs, or accepts formats it shouldn't accept.
Commit 3fb6380 was supposed to increase MAX_TEXTURE_HOOKS but instead
increased SHADER_MAX_HOOKS, since I forgot that they were separate (for
whatever reason).
To prevent this mistake from happening again, and to unify the location
in which user_shader-specific #defines are placed, get rid of the two
constants in opengl/video.c and move/reuse them from user_shaders.h
instead.
Also bump up MAX_SAVED_TEXTURES (now SHADER_MAX_SAVED) slightly as a
precaution against adding more passes to vo_opengl. I think we're
already flirting with the limit.
This is even better at preventing discoloration than tone mapping on the
XYZ image. Partly inspired by the HLG OOTF. Also simplifies the way we
tone map, and moves this logic to the pass_tone_map function where it
belongs.
This also fixes what could arguably be considered a bug in the HLG
implementation when using HLG for non-BT.2020 colorspaces, which is not
permitted by spec but conceivable in theory. Although in this case, I
guess it will be arbitrary whether people use the BT.2020-normalized
luma coefficients or change it to fit the colorspace, so I guess either
way could be considered "right", depending on what people end up doing.
Either way, in the absence of standard practice, we do what makes the
most sense (to me), and hopefully others will follow.
The downside is that we upload an extra vec3 uniform even if we don't
use it, but eliminating that would be ugly.
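The rough idea (illustrative code, not the generated shader): derive the
luma using the colorspace's luma coefficients, tone map that, and scale
all channels by the same factor, which is what preserves the channel
balance:

    // rgb is linear light; tone_map is e.g. reinhard/mobius/hable.
    static void tone_map_on_luma(float rgb[3],
                                 const float luma_coeffs[3],
                                 float (*tone_map)(float))
    {
        float luma = luma_coeffs[0] * rgb[0] + luma_coeffs[1] * rgb[1]
                   + luma_coeffs[2] * rgb[2];
        float scale = luma > 0 ? tone_map(luma) / luma : 1.0f;
        for (int i = 0; i < 3; i++)
            rgb[i] *= scale;
    }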
These can never be uninitialized because the enum cases are exhaustive and
the fallback is in the correct order, but GCC is too dumb to understand
this.
Also explicitly initialize tex_type, because while GCC doesn't warn
about it (for some reason), maybe it will in the future.
Moves the DXLockObjectsNV call to after PresentEx. This fixes an issue
where the presented image is a single frame late. This may be due to
DXLockObjectsNV locking the render target before StretchRect is done.
The spec indicates that the lock call should provide synchronization
for the resource, so this may be due to a driver bug.
This preserves channel balance better and helps reduce discoloration due
to nonlinear tone mapping.
I wasn't sure whether to stuff this inside pass_color_manage or
pass_tone_map, but decided on the former because adding the extra
mp_csp_prim would have made the signature of the latter longer than
80col, and also because the `mp_get_cms_matrix` below it basically does
the same thing anyway, so it doesn't look that out of place. Also why is
this justification longer than the actual description of the algorithm
and what it's good for?
This introduces (yet another...) mp_colorspace member, an enum `light`
(for lack of a better name) which basically tells us whether we're
dealing with scene-referred or display-referred light, but also a bit
more metadata (in which way is the scene-referred light expected to be
mapped to the display?).
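Roughly (names approximate):

    enum mp_csp_light {
        MP_CSP_LIGHT_AUTO,
        MP_CSP_LIGHT_DISPLAY,         // display-referred (usual case)
        MP_CSP_LIGHT_SCENE_HLG,       // scene-referred, HLG OOTF
        MP_CSP_LIGHT_SCENE_709_1886,  // scene-referred, 709+1886 "OOTF"
        MP_CSP_LIGHT_SCENE_1_2,       // scene-referred, fixed gamma 1.2
    };

    struct mp_colorspace {
        // ... existing members (primaries, gamma, etc.) ...
        enum mp_csp_light light;
    };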
The addition of this parameter accomplishes two goals:
1. Allows us to actually support HLG more-or-less correctly[1]
2. Allows people playing back direct “camera” content (e.g. v-log or
s-log2) to treat it as scene-referred instead of display-referred
[1] Even better would be to use the display-referred OOTF instead of the
idealized OOTF, but this would require either native HLG support in
LittleCMS (unlikely) or more communication between lcms.c and
video_shaders.c than I'm remotely comfortable with
That being said, in principle we could switch our usage of the BT.1886
EOTF to the BT.709 OETF instead and treat BT.709 content as being
scene-referred under application of the 709+1886 OOTF; which moves that
particular conversion from the 3dlut to the shader code; but also allows
a) users like UliZappe to turn it off and b) supporting the full HLG
OOTF in the same framework. But I think I prefer things as they are
right now.
st2084 and std-b67 are really weird names for PQ and HLG, which is what
everybody else (including e.g. the ITU-R) calls them. Follow their
example.
I decided against naming them bt2020-pq and bt2020-hlg because it's not
necessary in this case. The standard name is only used for the other
colorspaces etc. because those literally have no other names.
List of changes:
1. Kill nom_peak, since it's a pointless non-field that stores nothing
of value and is _always_ derived from ref_white anyway.
2. Kill ref_white/--target-brightness, because the only case it really
existed for (PQ) actually doesn't need to be this general: According
to ITU-R BT.2100, PQ *always* assumes a reference monitor with a
white point of 100 cd/m².
3. Improve documentation and comments surrounding this stuff.
4. Clean up some of the code in general. Move stuff where it belongs.
This file is a leftover from when img_format.h was changed from using
the ancient FourCCs (based on Microsoft multimedia conventions) for
pixel formats to a simple enum. The remaining cases still inherently
used FourCCs for whatever reason.
Instead of worrying about residual copyrights in this file, just move it
into code we don't want to relicense (the ancient Linux TV code). We
have to fix some other code depending on it. For the most part, we just
replace the MP_FOURCC macro with libavutil's MKTAG (although the macro
definition is exactly the same). In demux_raw, we drop some pre-defined
FourCCs, but it's not like it matters. (Instead of
--demuxer-rawvideo-format use --demuxer-rawvideo-mp-format.)
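For reference, the macro in question:

    #include <stdint.h>
    #include <libavutil/common.h>   // provides MKTAG

    // MKTAG packs four characters into a 32-bit tag, same as MP_FOURCC:
    static const uint32_t yv12_tag = MKTAG('Y', 'V', '1', '2');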
In GLES 2 mode, we can do dither, but "fruit" dithering is still out of
the question, because it does not support any high depth textures.
(Actually we probably could use an 8 bit texture too for this, at least
with small matrix sizes, but it's still too much of a pain to convert
the data, so why bother.)
This is actually a regression; before this, forcibly enabling dumb mode
due to low GL caps actually happened to avoid this case.
Fixes #4519.
I call it `mobius` because apparently the form f(x) = (cx+a)/(dx+b) is
called a Möbius transform, which is the algorithm this is based on. In
the extremes it becomes `reinhard` (param=0.0) and `clip` (param=1.0),
smoothly transitioning between the two depending on the parameter.
This is a useful tone mapping algorithm since the tunable mobius
transform allows the user to decide the trade-off between color accuracy
and detail preservation on a continuous scale. The default of 0.3 is
already far more accurate than `reinhard` while also being reasonably
good at preserving highlights, without suffering from the overall
brightness drop and color distortion of `hable`.
For these reasons, make this the new default. Also expand and improve
the documentation for these tone mapping functions.
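For reference, a self-contained sketch of such a curve (the coefficient
derivation in the actual code may differ in details):

    #include <math.h>

    // Identity below the transition point j = param; above it, a Mobius
    // segment (x+a)/(x+b), scaled so that value and slope are
    // continuous at j and the signal peak maps to 1.0. Assumes peak > 1.
    static float mobius(float x, float j, float peak)
    {
        if (j >= 1.0f)
            return fminf(x, 1.0f);   // param=1.0 degenerates to `clip`
        if (x <= j)
            return x;
        float a = -j * j * (peak - 1.0f) / (j * j - 2.0f * j + peak);
        float b = (j * j - 2.0f * j * peak + peak)
                  / fmaxf(peak - 1.0f, 1e-6f);
        float scale = (b * b + 2.0f * b * j + j * j) / (b - a);
        return scale * (x + a) / (x + b);
    }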
Unfortunately quite a mess, in particular due to the need to have some
compatibility with the old API. (The old API will be supported only in
the short term.)
In a multi GPU scenario, it may be desirable to use different GPUs
for decode and display responsibilities. For example, if a secondary
GPU has better video decoding capabilities.
In such a scenario, we need to initialise a separate context for each
GPU, and use the display context in hwdec_cuda, while passing the
decode context to avcodec.
Once that's done, the actual hand-off between the two GPUs is
transparent to us (It happens during the cuMemcpy2D operation which
copies the decoded frame from a cuda buffer to the OpenGL texture).
In the end, the bulk of the work is around introducing a new
configuration option to specify the decode device.
The new API has literally no advantages (other than that we can drop
mp_vt_download_image and other things later), but it's sort-of uniform
with the other hwaccels.
"--videotoolbox-format=no" is not supported with the new API, because it
doesn't "fit in". Probably could be added later again.
The iOS code change is untested (no way to test).
This was broken in e0250b9604. In some cases, device creation will
succeed, but creating an EGL context on the device will fail. With
--angle-renderer=auto, it should try to create the context again on a
D3D9 device.
This fixes mpv in Windows Vista on VirtualBox for me.
TLS is a headache. We should avoid it if we can.
The involved mechanism is entangled with the unfortunate libmpv API for
returning pointers to host API objects. This has to be
kept until we change the API somehow.
Practically untested out of pure laziness. I'm sure I'll get a bunch of
reports if it's broken.
Until now, the texture pointer was stored in plane 1, and the texture
array index was in plane 2. Move this down to plane 0 and plane 1. This
is to align it to the new WIP D3D11 decoding API in Libav, where we
decided that there is no reason to avoid setting plane 0, and that it
would be less weird to start at plane 0.
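I.e. consumers now read it like this (sketch; the helper name is made
up):

    #include <d3d11.h>
    #include <stdint.h>

    // plane 0: ID3D11Texture2D pointer, plane 1: texture array index
    static ID3D11Texture2D *get_d3d11_texture(uint8_t **planes,
                                               intptr_t *out_index)
    {
        *out_index = (intptr_t)planes[1];
        return (ID3D11Texture2D *)planes[0];
    }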
This reverts commit 142b2f23d4, and replaces it with another try. The
previous attempt removed the overlay on every rendering, because the
normal rendering path actually unrefs the mp_image. Consequently,
unmap_current_image() was completely inappropriate for removing the
overlay.
This drops support for the old libavcodec APIs. Now FFmpeg 3.3 or FFmpeg
git is required. Libav has no release with the new APIs yet, so Libav
git as of a few weeks or months ago is required if you want to use
Libav.
Not much actually changes in hwdec_vaegl.c - some code is removed, but
the reindentation inflates the diff.
Wayland is still too amateurish, and multiple features don't work,
including critical ones. There is no solution in sight, so prefer X11.
(Which seems to mostly work ok via xwayland.)
Once all problems are solved, the defaults can be switched back.
Mostly because of ANGLE (sadly).
The implementation became unpleasantly big, but at least it's relatively
self-contained.
I'm not sure to what degree shaders from different drivers are
compatible, as in whether a driver would randomly misbehave if it's fed
a binary created by another driver. The useless binaryFormat parameter
won't help, as the values can probably easily clash. As usual, OpenGL is
pretty shit here.
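For reference, the GL side of this boils down to roughly the following
(error handling and the disk I/O omitted; write_cache_file() is a
made-up helper, and the real code goes through the GL function wrappers
rather than global symbols):

    // assumes GL declarations are in scope (e.g. via gl_headers.h)
    #include <stdlib.h>

    static void write_cache_file(GLenum format, const void *buf,
                                 GLsizei len);

    static void store_program_binary(GLuint program)
    {
        GLint len = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
        void *buf = malloc(len);
        GLenum format = 0;
        GLsizei actual = 0;
        glGetProgramBinary(program, len, &actual, &format, buf);
        write_cache_file(format, buf, actual);
        free(buf);
    }

    // Loading later skips the compile+link step entirely:
    //     glProgramBinary(program, format, buf, len);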
gl_headers.h is basically header_fixes.h taken to its logical
conclusion. It contains all OpenGL defines (and some typedefs) we need.
We don't include GL
headers provided by the system anymore.
Some care has to be taken, because certain windowing APIs include all
of gl.h anyway, in which case the definitions could clash. Fortunately,
redefining
preprocessor symbols to the same content is allowed and ignored. Also,
redefining typedefs to the same thing is allowed in C11. Apparently the
latter is not allowed in C99, so there is an imperfect attempt to avoid
the typedefs if the required API symbols appear to be present already.
The riskiest part of this is the standard typedefs and GLAPIENTRY.
The latter is different only on win32 (and at least consistently so).
The typedefs are mostly based on stdint.h typedefs, which khrplatform.h
clumsily emulates on platforms which don't have it. The biggest
difference is that we define GLsizeiptr directly to ptrdiff_t, instead
of checking for the _WIN64 symbol and defining it to long or long long.
This also typedefs GLsync to __GLsync, just like the khronos headers.
Although symbols prefixed with __ are implementation reserved, khronos
also violates this rule, and having the same definition as khronos will
avoid problems on duplicate definitions.
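Concretely, the riskier definitions amount to roughly this:

    #include <stddef.h>               /* ptrdiff_t */

    typedef ptrdiff_t GLsizeiptr;     // khrplatform.h checks _WIN64 here
    typedef struct __GLsync *GLsync;  // same __-prefixed tag as khronos

    #ifdef _WIN32
    #define GLAPIENTRY __stdcall      // only differs on win32
    #else
    #define GLAPIENTRY
    #endif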
We can simplify the build scripts too. The ios-gl check seems a bit
wrong now (what we really want to test for is EAGLContext), but I can't
test and thus can't improve it.
cuda_dynamic.h redefined two GL symbols; just include the new headers
directly instead.
This is pretty trivial, but also quite annoying due to details like the
mismatching eglGetProcAddress() function signature (most callers just
cast the function pointer), and ARM/Linux hacks. So move them all to one
place.
With the recent GLES3 header detection, and if ANGLE is in the search
path, the ANGLE headers will be used over the desktop GL ones. It
appears the ANGLE headers do not include <windows.h>, which causes the
dxinterop code to fail to build. Oops.
Fix this by including <windows.h> if dxinterop is compiled in.
It appears we expect IOS to provide GLES 3. The IOS block duplicates
symbols from the GLES block (weirdly not all of them), so it's possible
that some symbols will be redefined, which is annoying, but harmless. I
don't have an iOS setup to test with, otherwise a modification of the
IOS include statements would likely take care of this.