When mpv attempts to play a video that is, on average, 60 FPS on a
display that is not exactly 60.00 Hz, two options end up fighting each
other: `video-sync-max-video-change` and `interpolation-threshold`.
Normally, the container FPS in formats such as .mp4 or .mkv is precise
enough that the video can be retimed exactly to the display Hz and
interpolation is not activated.
In cases such as certain live streams, or other scenarios where the
container FPS is not known, the default value of 0.0001 for
`interpolation-threshold` is extremely low. While
`video-sync-max-video-change` retimes the video to what it approximately
knows as the "real" FPS, the result may or may not fall within
`interpolation-threshold` at any given time, which causes interpolation
to be frequently flipped on and off, giving an appearance of stuttering
or repeated frames that is often quite jarring and makes a video
unwatchable.
This commit changes the default of `interpolation-threshold` to 0.01,
which is the same value as `video-sync-max-video-change`, and guarantees
that if the user accepts a video being retimed to match the display,
they do not additionally have to worry about a much more
precise interpolation threshold randomly flipping on or off. No internal
logic is changed, so setting `interpolation-threshold` to -1 will still
disable this logic entirely and always enable interpolation.
The documentation has been updated to reflect this change and give
context to the user for which scenarios they might want to disable
`interpolation-threshold` logic or change it to a smaller value.
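For reference, the relevant options fit together roughly like this in
mpv.conf (the first two lines are just one example of a setup where the
threshold matters at all; they are not part of this change):

    video-sync=display-resample
    interpolation=yes
    interpolation-threshold=0.01   # new default; -1 disables the check entirely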
Today, validation is only possible for string type options. But there's
no particular reason why it needs to be restricted in this way, and
there are potential uses: allowing other option types to be validated
without forcing the option to reimplement parsing from scratch.
The first part, making the validation function an explicit field
instead of overloading priv, is simple enough. But if we only do that,
the validation function still has to deal with the raw pre-parsed
string. Instead, we want the value to be parsed before it is validated.
That in turn means the validator functions have to be type-aware.
Unfortunately, this means we need to keep explicit macros like
OPT_STRING_VALIDATE() as a way to enforce the correct typing of the
function; otherwise, the validator would have to take a void * and hope
the implementation casts it correctly.
For help, we don't have this problem, as help doesn't look at the
value.
Then, we turn validators that are really help generators into explicit
help functions, and where a validator is help + validation, we split it
into two parts.
I have, however, left functions that need to query information for both
help and validation as single functions to avoid code duplication.
In this change, I have not added any other OPT_FOO_VALIDATE() macros, as
they are not needed, but I will add some in a separate change to
illustrate the pattern.
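A rough sketch of the intended shape (macro name, signature and return
convention invented here for illustration; the real macros come later):

    // Typed validator: sees the parsed int instead of the raw string, the
    // same way OPT_STRING_VALIDATE() functions already see the string value.
    // A hypothetical OPT_INT_VALIDATE(name, field, validate_port) would
    // enforce this exact signature at compile time.
    static int validate_port(struct mp_log *log, const struct m_option *opt,
                             struct bstr name, int *value)
    {
        if (*value < 1 || *value > 65535) {
            mp_err(log, "Option %.*s: port out of range.\n", BSTR_P(name));
            return M_OPT_OUT_OF_RANGE;
        }
        return 0; // success
    }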
in the original commit that removed the conditional clearing, an
incorrect assumption was made that clearing "should be practically free"
and can always be done. though, at least on macOS + intel, this can have
a performance impact of up to 50% increased usage. it might have an
impact on other platforms and setups as well, but this is unconfirmed.
the reason for removing the conditional clearing was to partially work
around a driver bug on very specific setups, X11 with amdgpu and OpenGL,
to clear garbled frames on start. though it still has issues with
garbled frames in other situations like fullscreening. there is also an
open bug report on the mesa bug tracker about this. setting the
radeonsi_zerovram flag works around all of those issues.
since the flag works around all these issues and the original fix
doesn't work completely, we revert it and keep our optimisation.
Fixes #8273
vo_gpu has a small set of options for ra_ctx that can be set. In
practice, runtime toggling doesn't matter for most of these as they have
no effect while a video is playing. However, changing the alpha option
during runtime can actually work depending on the backend used. mpv
already detected when one of these options changed, but it made no
attempt to update the options in the ra_ctx accordingly (likely because
nothing made any use of this information). Another related change is to
add an update_render_opts function to the fns and allow individual
backends to (optionally) use it.
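A sketch of the idea (not necessarily the literal declaration added here):

    // New optional hook in the ra_ctx fns: called when one of the vo_gpu
    // context options (e.g. "alpha") changed at runtime, so a backend can
    // reconfigure itself; backends that don't care leave it NULL.
    void (*update_render_opts)(struct ra_ctx *ctx);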
Rather than after tone-mapping. This prevents overflow when the
pre-tonemapped signal contains inputs exceeding sig_peak. I also
realized that with this clipping in place, post-clipping no longer needs
to be done, so this isn't even particularly slower.
The only two exceptions to the rule are "clip" and "linear", which
relied on the post-clipping to do their tone mapping properly.
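A rough illustration of the new order (plain C pseudocode, not the actual
shader generated by video_shaders.c; apply_curve is a hypothetical helper):

    // Clamp the linear signal to the nominal peak *before* the tone curve,
    // so inputs above sig_peak can no longer overflow it. The old clamp
    // after tone mapping then becomes redundant (except for "clip" and
    // "linear", which relied on the post-clip).
    static float tone_map(float sig, float sig_peak)
    {
        sig = fminf(sig, sig_peak);         // pre-tonemapping clip (new)
        return apply_curve(sig, sig_peak);  // hable/mobius/..., hypothetical
    }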
Fixes #7929
This can be used to make vo_libmpv render video to a memory buffer. It
only adds a new backend API that takes memory surfaces. All the render
API (such as frame rendering control and so on) is reused.
I'm not quite convinced of the usefulness of this, and until now I
always resisted providing something like this. It only seems to
facilitate inefficient implementation. But whatever.
Unfortunately, this duplicates the software rendering glue code yet
again (like it exists in vo_x11, vo_wlshm, vo_drm, and probably more).
But in theory, these could reuse this backend in the future, just like
vo_gpu could reuse the render_gl API.
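A rough usage sketch from the client side (parameter names as I believe
they appear in render.h; treat this as an outline rather than reference
code, and a real client would keep the context around instead of
recreating it per frame):

    #include <mpv/client.h>
    #include <mpv/render.h>

    // Render one frame of the current video into a caller-provided buffer.
    static void render_frame(mpv_handle *mpv, void *pixels,
                             int w, int h, size_t stride)
    {
        mpv_render_context *ctx;
        mpv_render_param create_params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_SW},
            {0}
        };
        if (mpv_render_context_create(&ctx, mpv, create_params) < 0)
            return;

        int size[2] = {w, h};
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_SW_SIZE, size},
            {MPV_RENDER_PARAM_SW_FORMAT, "rgb0"},  // packed 8 bit RGB + padding
            {MPV_RENDER_PARAM_SW_STRIDE, &stride},
            {MPV_RENDER_PARAM_SW_POINTER, pixels},
            {0}
        };
        mpv_render_context_render(ctx, params);    // draws into the buffer
        mpv_render_context_free(ctx);
    }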
Fixes: #7852
Using mediump float on GLES causes problems with kernel resampling,
PQ HDR, and possibly others. The issues are fixed by using highp,
which is available when GL_FRAGMENT_PRECISION_HIGH is defined.
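In GLSL ES terms this amounts to the usual precision prelude (an
illustrative sketch, not the exact generated shader text):

    #ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
    #else
    precision mediump float;
    #endif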
--dscale= and --*scale-window= (i.e. an empty string) are valid settings
for their respective options (and, in fact, the defaults).
This fixes the bug that it was impossible to reset e.g. tscale-window
back to the default "unset" setting after setting it once.
Credit goes to @CounterPillow for locating the cause of this bug.
Implementation copy/pasted from:
https://code.videolan.org/videolan/libplacebo/-/commit/f793fc0480f
This brings mpv's tone mapping more in line with industry standard
practices, for a hopefully more consistent result across the board.
Note that we ignore the black point adjustment of the tone mapping
entirely. In theory we could revisit this, if we ever make black point
compensation part of the mpv rendering pipeline.
This standard says we should use a value of 203 nits instead of 100 for
mapping between SDR and HDR.
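As a rough illustration (numbers picked here, not taken from the commit):
with reference white at 203 cd/m², a 1000 nit highlight sits at about
1000/203 ≈ 4.9x SDR white rather than 10x with the old value of 100, so
HDR material needs correspondingly less compression to fit an SDR target.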
Code copied from https://code.videolan.org/videolan/libplacebo/-/commit/9d9164773
In particular, that commit also includes a test case to make sure the
implementation doesn't break roundtrips.
Relevant to #4248 and #7357.
glslang accepted this, perhaps erroneously, but mesa does not. It seems
to be incorrect. A caveat is that this means *all* SSBOs are now
coherent, but since we only use SSBOs for peak detection, that's a
non-issue. (And besides, marking something as coherent when we don't
perform any synchronization commands on it should be a no-op anyway)
Fixes #7823
The "screenshot window" command (ctrl+s by default) somehow broke video
colors with --gpu-api=vulkan --profile=gpu-hq when playback was paused.
I don't know the cause, but the rest of the code seems to imply
gl_video_reset_surfaces() needs to be called manually to flush some
caches, and it fixes the issue, so I assume there's no great mystery
here.
The original solution for #7017 was sort of a hack, but this hack is no
longer needed because c05e5d9d fixed the underlying issue causing this
error to be spammed in the first place. So just remove the "fix" that
apparently introduced about as many issues as it fixed.
Fixes #7719, hopefully.
This resolves prefixes such as "~/" and "~~/" at the caller, instead of
relying on stream_read_file() to do it. One of the following commits
will remove this from stream_read_file() itself.
Untested.
This was incorrect at least because the colorspace matrix attempted to
center chroma at (conceptually) 0.5, instead of 0. Also, it tried to
apply the fixed point shift logic for component sizes > 8 bit.
There is no float yuv format in mpv/ffmpeg yet, but see next commit,
which enables zimg to output it. I'm assuming zimg defines this format
such that luma is in range [0,1] and chroma in range [-0.5,0.5], with
the levels flag being ignored. This is consistent with H264/5 Annex E (I
think...), and it sort of seems to look right, so that's it.
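As a concrete illustration of that assumption (example conversion made up
here, not code from this commit), an 8 bit full-range sample would map to
float with chroma centered at 0 instead of 128:

    // illustrative only: 8 bit full range -> assumed float representation
    float luma   = y_u8 / 255.0f;           // [0, 1]
    float chroma = (c_u8 - 128) / 255.0f;   // roughly [-0.5, 0.5], centered at 0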
For some reason this was never done? Looking through the code, it was
never the case that the frame cache was hit for still frames. I have no
idea why not. It makes a lot of sense to do so.
Notably, this massively improves the performance of updating the OSC
when viewing e.g. large still images, or while paused. (Tested on a
4000x8000 image, the OSC now responds smoothly to user input)
This is mostly for testing. It adds passing the metadata through the
video chain. The metadata can be manipulated with vf_format. Support
for zimg alpha conversion (if built with zimg after it gained alpha
support) is implemented. Premultiplied input is supported in vo_gpu.
Some things still seem to be buggy.
When I added mp_regular_imgfmt, I made the chroma subsampling use the
actual chroma division factor, instead of a shift (log2 of the actual
value). I had some ideas about how this was (probably?) more intuitive
and general. But nothing ever uses non-power of 2 subsampling (except
jpeg in rare cases apparently, because the world is a bad place).
Change the fields back to use shifts and rename them to avoid mistakes.
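In other words (sketch, field names assumed): 4:2:0 now stores a shift of
1 per axis, 4:4:4 a shift of 0, and a chroma plane dimension follows from
a right shift instead of a division:

    // e.g. 4:2:0 -> chroma_xs = chroma_ys = 1
    static int chroma_size(int luma_size, int shift)
    {
        return (luma_size + (1 << shift) - 1) >> shift;   // rounded up
    }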
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
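Purely for illustration (option and field names invented; this shows the
shape only):

    #define OPT_BASE_STRUCT struct example_opts

    struct example_opts {
        double threshold;
    };

    static const struct m_option example_opt_list[] = {
        {"example-threshold", OPT_DOUBLE(threshold),
            .min = 0, .max = 1},
        {0}
    };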
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this with this change.
We have this cap now thanks to e2976e662, but we don't actually make
sure our FBOs are storable before we blindly attempt to use them with
compute shaders.
There's no more need to unconditionally set `storage_dst = true`, as
long as we include an extra condition on the `fbo_format` selection that
prevents users from accidentally enabling compute-shader-only features
with non-storable FBOs. This comes alongside some other miscellaneous
adjustments to eliminate instances of "assumed storability" from vo_gpu.
This simply makes the "is the destination FBO format bad?" check a tiny
bit less awful, by making sure we prefer storable FBO formats over
non-storable FBO formats. I'd love to make this also conditional on
whether or not we actually *need* a storable FBO format, but that logic
is decided later, in `pass_draw_to_screen`, and I don't want to
replicate the logic.
Fixes #7017.
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea of
posting patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.
Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.
The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used some big preprocessor mess to make
it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
Theoretically possible (and quite unlikely due to the small texture
size). The code was originally written with the assumption that texture
allocations can't fail, and it was never updated out of laziness.
Untested.
In the distant past, the cuviddec backed copy hwaccel could be
configured directly using lavc options. However, since that time,
we gained support for automatic hw ctx creation which ended up
bypassing the lavc options.
Rather than trying to find a way to pass those options again, a
better idea is to make the 'cuda-decode-device' option, used by
the interop hwaccels, work for the copy hwaccels too.
And that's pretty simple: we have to add a create function that
checks the option and passes it on to ffmpeg.
Note that this does require a slight re-jig to the configuration
flags, as we now have a scenario where we want to build with support
for the cuda copy hwaccels but not the interop ones. So we need
a distinct configuration flag for that combination.
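For example (usage sketch; the option name is the one already used by the
interop hwaccels):

    mpv --hwdec=nvdec-copy --cuda-decode-device=1 video.mkv

would now select the second CUDA device for the copy hwaccel as well.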
Fixes #7295.
As we are less and less interested in vdpau, with nvdec and vaapi
being better choices in general on nvidia and AMD respectively, we
might consider removing direct_mode, where we bypass the vdpau
mixer and work directly with yuv textures. Normally, working with
yuv textures would be great, but vdpau built in an assumption that
all frames are delivered as separate fields, causing us to have
to re-interleave them.
nvidia then introduced a new OpenGL extension that can return the
yuv frames as frames, but we can't just unconditionally switch to
that as we'd want to keep supporting older hardware where the drivers
are no longer getting new features. The end result is that we
wouldn't be able to get rid of the old code paths.
Removing direct_mode means we always use the mixer, and work with
rgba frame textures. There are some theoretical limitations to
this, but in practice they probably don't matter much - unsupported
colourspaces don't matter because without 10bit decoding support,
we can't use them anyway, and apparently we're not doing separate
chroma scaling these days, so scaling the rgba doesn't really lose
anything (and the vdpau hq scaling option remains available).
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
Handling the window with this function makes no sense, since windows
and kernels are not the same thing and don't share the same option list.
The only reason it's done is to make sure the char* points at the static
string rather than the dynamically allocated one, which we can do
manually in this function. Rewrite a bit for clarity/quality.
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".
This can be reproduced by playing an HDTV video with --target-peak=99.
It's not clear how to fix this. Should pass_tone_map() be only invoked
if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.
Even if not correct, at least it gets rid of the blue shit.
Fixes: #7149
Probably. It's not like these pixel formats are formally specified -
FFmpeg added them because _some_ file format or decoder supports them,
and while that format/codec may define it precisely, the pixel format is
sort of disconnected and just an FFmpeg thing.
In any case, the yuva sample I had at hand uses the full range the
component data type can provide. The old code used the same "shifted"
range as for Y/U/V components, which must have been wrong.
This will not work correctly for packed YUVA formats, but fortunately
they matter even less.
For some reason, the first frame displayed on X11 with amdgpu and OpenGL
will be garbled. This is especially visible if the player starts,
displays a frame, but then still takes a while to properly start
playback.
With --interpolation, the behavior somehow changes (usually gets worse).
I'm not sure what exactly is going on, and the code in video.c is way
too abstruse. Maybe there is some slight possibility that a frame with
uncleared contents gets displayed, which somehow also corrupts another
frame that is displayed immediately after that.
If clear is unconditionally run, this somehow doesn't happen, and you
see a video frame. By any logic this shouldn't happen: a video frame
should always overwrite the background. So I can't exclude that this is
some sort of driver bug, or at least a very obscure interaction.
Clearing should be practically free anyway, so always do it.
Fixes: #7105
By default, this utilizes the color space of the desktop on which the
swap chain is located. If a specific value is defined, it will be
utilized instead.
Enables configuration of the PQ color space (BT.2020 primaries,
PQ transfer function) for HDR.
Additionally, the swap chain color space is signaled to the renderer,
so that the rendered output looks correct without having to specify
target-trc or target-prim manually.
Since all of the APIs involved require Windows 10 or later, this will
only work starting with Windows 10.
This lets us set primaries, transfer function and the target peak
based on what the presenting layer would want us to have.
Now that this mechanism is available, warn if the user has
overridden values such as primaries or transfer function.
This seems to have been missed when the storable flag was added, since
all the other flags were logged here. It can be useful to know if an RA
format is storable, so log it as well.
Using e.g. --vf=format=0bgr showed obviously wrong colors with --vo=gpu.
The reason is that leading padding wasn't handled correctly.
Try to hack fix it. While the code in copy_image() is somewhat
reasonable, I can't tell what the fuck is going on with that HOOKED
shit. For some reason this HOOKED shit doesn't use copy_image() (???),
or uses it incorrectly. It affects debanding. --deband=no works
correctly. If it's enabled, the crap in hook_prelude() is needed.
I bet there are many more bugs with this. For example, the deband shader
will try to deband the alpha channel if the format abgr is used (because
the correct component order is only established later). This can be
tested by inserting a "color.x = 0;" at the end of the deband shader,
and using --vf=format=rgba vs. abgr.
I cannot comprehend why it doesn't just store explicitly which
components a texture contains, and why it doesn't just always read the
components in a uniform way.
There's a big chance this fix works only by coincidence. This shouldn't
have been so hard either. Time for a complete rewrite?
Internally, vo_gpu uses NaN for some options to indicate a default value
that is different depending on the context (e.g. different scalers).
There are 2 problems with this:
1. you couldn't reset the options to their defaults
2. NaN is a damn mess and shouldn't be part of the API
The option parser already rejected NaN explicitly, which is why 1.
didn't work. Regarding 2., JSON is a good example (it cannot represent
NaN), and this actually caused a bug report.
Fix this by mapping NaN to the special value "default". I think I'd
prefer other mechanisms (maybe just having every scaler expose separate
options?), but for now this will do. See you in a future commit, which
painfully deprecates this and replaces it with something else.
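For example (scaler and option picked just for illustration):

    mpv --scale=bicubic --scale-param1=default video.mkv

resets the parameter to whatever default the selected scaler defines.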
I refrained from using "no" (my favorite magic value for "unset" etc.)
because then I'd have to e.g. make --no-scale-param1 work, which, in
addition to being a lot of effort, looks dumb, and nobody would use it.
Here's also an apology for the shitty added test script.
Fixes: #6691