Instead of having 9 different properties, requiring 18 different
VOCTRLs to read them all, they are now exposed as a single property.
This is not only cleaner (since they're all together) but also allows
querying all 9 of them with only a single VOCTRL (by using
mp.get_property_native).
(The extra factor of 2 was due to an extra query being needed to get
the type; that extra query is now also unnecessary.)
This makes it much easier to access performance metrics from within a
Lua script, and also makes it easier to just show a readable, formatted
version via show-text.
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`. (The expression is in RPN: it
reads as "run only when the chroma plane is narrower than the luma
plane", i.e. only for subsampled content.)
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition on all of the bound
textures existing)
When using --hwdec=auto, systems that don't provide
D3D11_CREATE_DEVICE_VIDEO_SUPPORT (which probably includes all Windows
Vista and 7 systems) would print an error message. Reduce the log level
to verbose when probing, and skip the error message entirely if
d3d11.dll is not present.
This commit is in a similar spirit to 991af7d.
This comes up often, see e.g. #3220. The issue is that if the stream
input is not seekable, the demuxer is marked as not seekable. But if the
stream cache is enabled, the file still _might_ be seekable to a degree.
We recently disabled seeking in this mode because it can cause very
weird issues, mostly because if stream-layer seeking fails, the demuxers
will arbitrarily misbehave. On the other hand, it can work if the seek
is within the cached range, which is why the user can still enable it
with --force-seeking. There is a weird trade-off between allowing this
and not crapping up too easily, so just informing the user about the
possibility seems best.
Since the libavformat API is crap, we have to apply tons of heuristics
to check whether seeking will work. (No, checking it at seek time isn't
going to work either, because if a seek fails, the demuxer will be in an
undefined state. Because the libavformat API is crap.)
For clang, it's enough to just put (void) around calls whose result we
are intentionally ignoring.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
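A minimal illustration, using write() (which glibc annotates with
warn_unused_result):

    #include <unistd.h>

    static void best_effort_write(int fd)
    {
        /* clang treats the cast as an explicit opt-out; GCC warns
         * about the discarded result anyway, which is why the warning
         * has to be disabled globally for GCC builds. */
        (void)write(fd, "x", 1);
    }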
The default behavior of vo_opengl has pretty much always been 'show the
source colors as-is, without trying to adapt them to the target
device'. This decision is mostly based on the fact that if we do
anything else, lots of people will complain.
With the rise of content like BT.2020, however, it turns out more people
complain about this content being very desaturated than people complain
about this content not matching VLC - so let's just map ultra-wide gamut
content back down to standard gamut by default.
Instead of measuring the actual upload time, this instead measures the
time needed to render + map the texture via vdpau. These numbers are
still useful, since they're part of the critical path.
This is plumbed through a new VOCTRL, VOCTRL_PERFORMANCE_DATA, and
exposed as properties render-time-last, render-time-avg etc.
All of these numbers are in microseconds, which gives a good precision
range when just outputting them via show-text. (Lua scripts can
obviously still do their own formatting etc.)
Signed-off-by: wm4 <wm4@nowhere>
To avoid blocking the CPU, we use 8 timer query objects and rotate
through them, only blocking at the last possible moment (before we need
access to them on the next iteration through the ring buffer). I tested
it out on my machine and 4 query objects were enough to guarantee
block-free querying, but the extra margin shouldn't hurt.
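A rough sketch of the ring buffer, assuming GL_TIME_ELAPSED queries
from ARB_timer_query (names and structure are illustrative, not the
actual mpv code):

    #include <stdbool.h>
    #include <epoxy/gl.h>   /* any GL 3.3+ loader works */

    #define NUM_QUERIES 8

    static GLuint queries[NUM_QUERIES];
    static bool query_used[NUM_QUERIES];
    static int query_idx;

    /* once, at init: glGenQueries(NUM_QUERIES, queries); */

    static void frame_gpu_begin(void)
    {
        glBeginQuery(GL_TIME_ELAPSED, queries[query_idx]);
    }

    static GLuint64 frame_gpu_end(void)
    {
        glEndQuery(GL_TIME_ELAPSED);
        query_used[query_idx] = true;
        query_idx = (query_idx + 1) % NUM_QUERIES;
        GLuint64 ns = 0;
        if (query_used[query_idx]) {
            /* This query is NUM_QUERIES-1 frames old, so fetching its
             * result here almost never blocks. */
            glGetQueryObjectui64v(queries[query_idx], GL_QUERY_RESULT,
                                  &ns);
        }
        return ns;
    }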
Frame render times are just output at the end of each frame, via MP_DBG.
This might be improved in the future. (In particular, I want to expose
these numbers as properties so that users get some more visible feedback
about render times)
Currently, we measure pass_render_frame and pass_draw_to_screen
separately because the former might be called multiple times due to
interpolation. Doing it this way gives more faithful numbers. Same goes
for frame upload times.
When ANGLE is using D3D11 and not running in DirectComposition mode,
DXGI will hook the video window's message loop and override Alt+Enter to
trigger a transition to exclusive fullscreen mode (which doesn't even
work with mpv's renderer for some reason.) This behaviour can be
disabled by getting a pointer to the IDXGIFactory associated with the
D3D11 device and calling MakeWindowAssociation with the appropriate
flags.
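The fix looks roughly like this (C with COBJMACROS; anything beyond
DXGI_MWA_NO_ALT_ENTER in the flag choice is a guess):

    #define COBJMACROS
    #include <d3d11.h>
    #include <dxgi.h>

    static HRESULT disable_alt_enter(ID3D11Device *dev, HWND window)
    {
        IDXGIDevice *dxgi_dev = NULL;
        IDXGIAdapter *adapter = NULL;
        IDXGIFactory *factory = NULL;
        /* Walk from the D3D11 device up to the DXGI factory owning it */
        HRESULT hr = ID3D11Device_QueryInterface(dev, &IID_IDXGIDevice,
                                                 (void **)&dxgi_dev);
        if (SUCCEEDED(hr))
            hr = IDXGIDevice_GetAdapter(dxgi_dev, &adapter);
        if (SUCCEEDED(hr))
            hr = IDXGIAdapter_GetParent(adapter, &IID_IDXGIFactory,
                                        (void **)&factory);
        if (SUCCEEDED(hr))
            hr = IDXGIFactory_MakeWindowAssociation(factory, window,
                                                    DXGI_MWA_NO_ALT_ENTER);
        if (factory)
            IDXGIFactory_Release(factory);
        if (adapter)
            IDXGIAdapter_Release(adapter);
        if (dxgi_dev)
            IDXGIDevice_Release(dxgi_dev);
        return hr;
    }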
Since the main thread is shared by other things in the player, using
STA (single-threaded apartment) may have caused problems. Instead,
initialize in MTA (multithreaded apartment).
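The change amounts to something like:

    #include <objbase.h>

    /* MTA: COM objects are not bound to this thread's message pump,
     * so other users of the main thread are unaffected. */
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    /* ... use COM ... */
    CoUninitialize();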
Instead of implicitly resetting the options to defaults and then
applying the options, they're always applied on top of the current
options (in the same way adding new options to the CLI command line
will).
This does not apply to vo_opengl_cb, because that has an even worse mess
which I refuse to deal with.
Enable m_sub_options_copy() to copy nested sub-options, and also enable
it to create an option struct from defaults. We can get rid of most of
the crap in assign_options() now.
Calling handle_scaler_opt() to get a static allocation for the scaler
name is still needed. It's moved to reinit_scaler(), which seems to be a
better place for it. Without it, dangling pointers could be created when
options are changed. (And in fact, this fixes possible dangling pointers
for window.name.) In theory we could create a dynamic copy, but that
seemed even more messy.
Chance of regressions.
Commit 026b75e7 actually enabled changing icc options at runtime (via
vo_cmdline), but it didn't quite work. In particular, changing the
icc-profile option just kept the old profile, because it was still
cached.
As part of this, change gl_lcms.opts from a struct to a pointer to a
struct. We properly copy it, instead of allowing possibly dangling
strings, as the old code did in a working but unclean way.
Also, reinit the whole rendering chain when the auto icc profile
changes, just like it's done when icc options are changed.
Passing the bstr thing as a pointer makes no sense. Everywhere else,
bstr structs are passed by value because they're so small; they are
passed by pointer only when they're supposed to receive a return value.
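For reference, the convention looks like this (signatures paraphrased
from mpv's bstr.h):

    #include <stdbool.h>
    #include <stddef.h>

    struct bstr {
        unsigned char *start;
        size_t len;
    };

    /* Inputs: by value; the struct is only two words. */
    struct bstr bstr_strip(struct bstr str);

    /* Outputs: the only case where a pointer makes sense. */
    bool bstr_split_tok(struct bstr str, const char *tok,
                        struct bstr *out_left, struct bstr *out_right);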
Originally, video.c did not access any CMS things (other than lut3d
being set on it), but this has changed. In practice, almost all accesses
to it have moved to video.c. vo_opengl only created it, and set the auto
icc profile path.
Complete the move.
Some things wrt. option handling are a bit fishy. (But when is this not
the case.)
icc-profile-auto was not tested, but the distributed human CI will take
care of it.
It gets printed on every alt+tab or desktop switch under mutter and
weston, and offers no useful information since it's handled by
destroying the previous entry.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This commit makes the wayland backend and VO correctly report the
display frame rate. Previously this didn't work, as
VOCTRL_GET_DISPLAY_FPS was received way too early, before the window
was created (and thus before current_output was set).
The VO will now signal VO_EVENT_WIN_STATE after window initialization
and upon a resize.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
This algorithm works really well. Using it by default is a much better
"out-of-the-box" experience than just clipping, which always looks
ugly.
In other words, with this default, users of mpv will just be able to
play HDR content without even realizing it's HDR (pretty much).
Instead of doing HDR tone mapping on an ad-hoc basis inside
pass_colormanage, the reference peak of an image is now part of the
image params (alongside colorspace, gamma, etc.) and tone mapping is
done whenever peak_src != peak_dst.
To get sensible behavior when mixing HDR and SDR content and displays,
target-brightness is a generic filler for "the assumed brightness of SDR
content".
This gets rid of the weird display_scaled hack, sets the framework
for multiple HDR functions with different reference peaks, and allows
us to (in a future commit) autodetect the right source peak from
the HDR metadata.
(Apart from metadata, the source peak can also be controlled via
vf_format. For HDR content this adjusts the overall image brightness;
for SDR content it's like simulating a different exposure.)
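Conceptually, the decision reduces to something like this (a hedged
sketch; the names are illustrative, not the actual code):

    /* Peaks are expressed relative to the SDR reference brightness
     * given by --target-brightness, so plain SDR content peaks at 1.0. */
    float peak_src = img_params.sig_peak; /* from metadata or vf_format */
    float peak_dst = 1.0;                 /* an ordinary SDR target */

    if (peak_src != peak_dst) {
        /* compress (HDR->SDR) or expand the signal range */
        pass_tone_map(p, peak_src, peak_dst, opts->tone_mapping);
    }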
The wayland protocol exposes scaling done by the compositor to
compensate for small window sizes on small high-DPI displays. If the
program ignores this scaling, the compositor will ask the program to
render at 1/N of the window size and will then upscale the program's
surface by N. The scaling algorithm seems to be bilinear, so the
scaling is quite obvious.
This commit sets up callbacks to listen for the scaling factor of each
output and, on rescale events, notifies the compositor that the
surface's scale is what the compositor asked for and changes the
player's surface to the appropriate size, causing no scaling to be done
by the compositor.
Compositors not supporting this interface will ignore the callbacks and
do nothing, keeping program behaviour the same. For compositors
supporting and using this interface (mutter), this fixes the rendering
to be pixel-precise, as it should be.
Both the opengl wayland backend and the wayland vo have been fixed to
support this. Verified not to break on either weston or mutter.
Signed-off-by: Rostislav Pehlivanov <atomnuker@gmail.com>
Commit 0348cd08 was too naive/simple, and always inserted the d3d11vpp
filter if any d3d11 output image formats were supported, even if it
makes no sense. For example --vf=format=rgb8 already breaks it.
It needs to take the set of supported input formats into account, but
the weird format negotiation makes this hard. As a simple and cheap
solution, make some assumptions about the supported formats of a filter.
I hope to simplify this one day by using another format negotiation
algorithm, but this can probably wait.
No reason to do so. See also commit 240ba92b.
Since many mp_images will now never have a pixel aspect ratio set,
redefine a 0/0 aspect ratio to mean "undefined" instead of invalid. This
also brings it more in line with how decoder vs. container aspect
ratios are handled.
Most callers seem to be fine with the new behavior.
mp_image_params_valid() in particular has to be adjusted, or some things
stop working due to mp_images not becoming valid after setting size and
format.
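The adjusted validity rule is roughly the following (a hedged sketch of
the idea, not the literal mp_image_params_valid() code; the struct is
simplified):

    #include <stdbool.h>

    struct image_params { int w, h, imgfmt, p_w, p_h; };

    static bool params_valid(struct image_params *p)
    {
        if (p->w <= 0 || p->h <= 0 || !p->imgfmt)
            return false;
        /* 0/0 now means "undefined aspect" and is accepted; only a
         * half-set aspect ratio is rejected. */
        if ((p->p_w == 0) != (p->p_h == 0))
            return false;
        return true;
    }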
Position the window around the original window center on video size change
(when switching to the next file with a different resolution, for example)
instead of keeping the position of its top-left corner fixed.
This is quite unexpected. It's caused by mp_image_set_size(), which is
used to update certain fields which can be format-dependent, but which
is actually also supposed to reset the pixel aspect ratio.
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to an RGB texture, which ANGLE can take in).
Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.
The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)
The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.
If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
Main use: deinterlacing.
I'm not sure how to select the deinterlacing mode at all. You can
enumerate the available video processors, but at least on Intel, all of
them either signal support for all deinterlacers, or none (the latter is
apparently used for IVTC). I haven't found anything that actually tells
the processor _which_ algorithm to use.
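The enumeration described above looks roughly like this (C with
COBJMACROS; error handling trimmed):

    #define COBJMACROS
    #include <stdio.h>
    #include <d3d11.h>

    /* Dump which deinterlace techniques each rate converter claims; on
     * the Intel hardware described above, each entry reports either
     * all of these flags or none of them. */
    static void dump_deint_caps(ID3D11VideoProcessorEnumerator *e)
    {
        D3D11_VIDEO_PROCESSOR_CAPS caps;
        if (FAILED(ID3D11VideoProcessorEnumerator_GetVideoProcessorCaps(
                e, &caps)))
            return;
        for (UINT i = 0; i < caps.RateConversionCapsCount; i++) {
            D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS rc;
            if (FAILED(ID3D11VideoProcessorEnumerator_GetVideoProcessorRateConversionCaps(
                    e, i, &rc)))
                continue;
            printf("%u: bob=%d adaptive=%d mocomp=%d ivtc=%d\n", i,
                   !!(rc.ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_DEINTERLACE_BOB),
                   !!(rc.ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_DEINTERLACE_ADAPTIVE),
                   !!(rc.ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_DEINTERLACE_MOTION_COMPENSATION),
                   !!(rc.ProcessorCaps & D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_INVERSE_TELECINE));
        }
    }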
Another strange detail is how to select top/bottom fields and field
dominance. At least I'm getting quite similar results to vavpp on Linux,
so I'm content with it for now.
Future plans include removing the D3D11 video processor use from the
ANGLE interop code.
This avoids a copy of the video image and lowers vsync jitter. Since
there are now two options to add to the window_attribs list, it has been
made dynamic.
This makes vf_vdpaupp use the deinterlacer helper code already used by
vf_vavpp. A nice side-effect is that this also removes some traces of
code originating from vo_vdpau.c, so we can switch it to LGPL.
Extend the refqueue helper with a deint setting. If not set,
mp_refqueue_should_deint() always returns false, which slightly
simplifies vf_vdpaupp. It's of no consequence to vf_vavpp (other than it
has to set it to get expected behavior).
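In vf_vavpp this then looks roughly like the following (paraphrased;
mp_refqueue_should_deint() is named above, while the setter and flag
names are assumptions for illustration):

    /* Opt in: without this, mp_refqueue_should_deint() stays false. */
    mp_refqueue_set_mode(p->queue,
                         p->opts->deint ? MP_MODE_DEINT : 0);

    /* per frame: */
    if (mp_refqueue_should_deint(p->queue)) {
        /* run the hardware deinterlacer on the current field */
    }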
A lot of real-world shaders start off with comments explaining the usage
or license, generating lots of "empty" passes. This simple change allows
us to skip them, which silences the warning spam and prevents us from
having to store and copy around these empty passes.
It also adds a more useful failure check: Attempting to use a user
shader that doesn't define any passes at all.