Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it's the right one
or not. In most situations where a hardware-based filter is used, we
need more control over the device.
In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and this is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).
Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration.
e.g.:
```
mpv --vf=hwupload
```
becomes
```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```
or
```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
When I introduced the concept of lazy loading of hwdecs by img format,
I did not propagate the probing flag correctly, leading to the new
normal loading path not running with probing set, meaning that any
errors would show up, creating unnecessary noise.
This change fixes this regression.
Historically, we have treated hwdec interop loading as a completely
separate step from loading the hwdecs themselves. Some hwdecs need an
interop, and some don't, and users generally configure the exact
hwdec they want, so interops that aren't relevant for that hwdec
shouldn't be loaded to save time and avoid warning/error spam.
The basic approach here is to recognise that interops are tied to
hwdecs by imgfmt. The hwdec outputs some format, and an interop is
needed to get that format to the vo without readback.
So, when we try to load an hwdec, instead of just blindly loading all
interops as we do today, let's pass the imgfmt in and only load
interops that work for that format. If more than one interop is
available for the format, the existing logic (whatever it is) will
continue to be used to pick one.
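Schematically, the new loading call looks something like this (a
hypothetical sketch; the names only approximate the real ones):
```c
/* Hypothetical sketch (names approximate): instead of loading every
 * interop, request only those matching the hwdec's output format. */
struct hwdec_imgfmt_request req = {
    .imgfmt = imgfmt,   // the image format the hwdec outputs
    .probing = probing, // suppress error/warning spam during probing
};
hwdec_devices_request_for_img_fmt(devs, &req);
```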
We also have one callsite in filters where we seem to pre-emptively
load all the interops. It's probably possible to trace down a specific
format but for now I'm just letting it keep loading all of them; it's
no worse than before.
You may notice there is no documentation update - and that's because
the current docs say that when the interop mode is `auto`, the interop
is loaded on demand. So reality now reflects the docs. How nice.
We only wish to touch actual video frames, which should have an
allocated image attached to them. So just check the frame type early,
and pass such non-video frames through to further filters in the chain
without attempting to process them.
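A minimal sketch of the check, in terms of mpv's filter API
(surrounding code omitted):
```c
// Bail out early for anything that is not a video frame (e.g. EOF/NONE),
// forwarding it unchanged instead of touching a nonexistent image.
struct mp_frame frame = mp_pin_out_read(f->ppins[0]);
if (frame.type != MP_FRAME_VIDEO) {
    mp_pin_in_write(f->ppins[1], frame);
    return;
}
```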
Fixes a crash in case of non-video (EOF/NONE) frames being passed to
the filter when the dovi option was set to false, a regression since
05ccc51d53.
Useful to strip dolbyvision from the output, in cases where the user
does not want it applied. Doing this as a video filter gives users the
ability to easily toggle this stripping at runtime in a way that
properly propagates to any (potentially stateful) VO.
It also thematically fits the rest of the options in vf_format, which
are similarly concerned with modifying the video image parameters.
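For example (assuming the sub-option is named `dolbyvision`, matching
the description above):
```
mpv --vf=format:dolbyvision=no video.mkv
```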
When inserting vf_sub, the VO OSD is directed not to draw itself. But
this flag was never unset, even when removing vf_sub.
The fix is pretty shit, but appears to work.
draw_bmp.c is the software blender for subtitles and OSD. It's used by
encoding mode (burning subtitles), and some VOs, like vo_drm, vo_x11,
vo_xv, and possibly more.
This changes the algorithm from upsampling the video to 4:4:4 and then
blending, to downsampling the OSD and then blending directly onto the
video.
This has far-reaching consequences for its internals, and results in an
effective rewrite.
Since I wanted to avoid un-premultiplying, all blending is done with
premultiplied alpha. That's actually the sane thing to do. The old code
just didn't do it, because it's very weird in YUV fixed point.
Essentially, you'd have to compensate for the chroma centering constant
by subtracting src_alpha/255*128. This seemed so hairy (especially with
correct rounding and high bit depths involved) that I went for using
float.
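For reference, the premultiplied "over" operator used here is the
standard one; in float it is trivial:
```c
// Premultiplied-alpha "over": src is already multiplied by src_a,
// so blending needs no un-premultiply step and no chroma compensation.
static inline float blend_over(float src, float src_a, float dst)
{
    return src + dst * (1.0f - src_a);
}
```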
I think it turned out mostly OK, although it's more complex and less
maintainable than before. reinit() is certainly a bit too long. While it
should be possible to optimize the RGB path more (for example by
blending directly instead of doing the stupid float conversion), the
new code is probably slower than the old one. vo_xv users probably lose
out here, because it takes the slowest path (due to subsampling
requirements and using YUV).
Why this rewrite? Nobody knows. I simply forgot the reason. But you'll
have it anyway. Whether or not this would have required a full rewrite,
at least it supports target alpha now (you can for example hard sub
transparent PNGs, if you ever wanted to use mpv for this).
Remove the check in vf_sub. The new draw_bmp.c is not as reliant on
libswscale anymore (mostly uses repack.c now), and osd.c now shows an
error message on missing support instead.
Formats with chroma subsampling of 4 are not supported, because FFmpeg
doesn't provide pixfmt definitions for alpha variants. We could provide
those ourselves (relatively trivial), but why bother.
This is mostly for testing. It adds passing the metadata through the
video chain. The metadata can be manipulated with vf_format. Support
for zimg alpha conversion (if built with zimg after it gained alpha
support) is implemented. Premultiplied input is supported in vo_gpu.
Some things still seem to be buggy.
When I added mp_regular_imgfmt, I made the chroma subsampling use the
actual chroma division factor, instead of a shift (log2 of the actual
value). I had some ideas about how this was (probably?) more intuitive
and general. But nothing ever uses non-power of 2 subsampling (except
jpeg in rare cases apparently, because the world is a bad place).
Change the fields back to use shifts and rename them to avoid mistakes.
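Concretely (field names as I remember them; treat as illustrative):
```c
// 4:2:0: chroma planes are halved in both dimensions.
// Old fields (division factor): chroma_w  = 2, chroma_h  = 2
// New fields (log2 shift):      chroma_xs = 1, chroma_ys = 1   (2 == 1 << 1)
```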
This sucks, but is helpful for testing.
Obviously, it would be much nicer if there were a way to specify _all_
scaler options per filter (if the user wanted), instead of always using
the global options. But this is "too hard" for now. For testing, it is
extremely convenient to select the scaler backend, so add this option,
but make clear that it could go away. We'd delete it once there is a
better mechanism for this.
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
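An illustrative before/after, using a made-up option:
```c
// Before: the macro expanded to the complete m_option initializer.
//   OPT_INT("example", example_field, 0),
// After: the macro fills only some fields (.type, .offset); the name and
// any extra fields are written as designated initializers.
//   {"example", OPT_INT(example_field)},
```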
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c, the author of some option declarations confused
the flags parameter with the default value (though the default value
was also properly set below). This change fixes that too.
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.
I think this should be useful to perform some automated tests, maybe.
Fixes: #7194
Raise the swscale and zimg default parameters. This restores the
screenshot quality settings that were (maybe) unset in the previous
commit. Also expose some more libswscale and zimg options.
Since these options are also used for VOs like x11 and drm, this will
make x11/drm/etc. much slower. For compensation, provide a profile that
sets the old option values: sw-fast. I'm also enabling zimg here, just
as an experiment.
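So if you use one of those VOs and want the old behavior back:
```
mpv --vo=x11 --profile=sw-fast video.mkv
```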
The core problem is that we have a single set of command line options
which control the settings used for most swscale/zimg uses. This was
done in the previous commit. It cannot differentiate between the VOs,
which need to be realtime and may accept/require lower quality options,
and things like screenshots or vo_image, which can be slower, but should
not sacrifice quality by default.
Should this have two sets of options or something similar to do the
right thing depending on the code which calls libswscale? Maybe. Or
should I just ignore the problem, make it someone else's problem (users
who want to use software conversion VOs), provide a sub-optimal
solution, and call it a day? Definitely, sounds good, pushing to master,
goodbye.
For some reason (and through my own fault), vf_format converts image
formats, but nothing else. For example, setting the "colormatrix"
sub-parameter would not convert it to the new value, but instead
overwrite the metadata (basically "reinterpreting" the image data
without changing it).
Make the historical mistake worse, and go all the way and extend it such
that it can perform a conversion. For compatibility reasons, this needs
to be requested explicitly. (Maybe this would deserve a separate filter
to begin with, but things are messed up anyway. Feel free to suggest an
elegant and simple solution.)
This demonstrates how zimg can properly perform some conversions which
swscale cannot (see examples added to vf.rst).
Stupidly this requires 2 code paths, one for conversion, and one for
overriding the parameters.
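Usage sketch (assuming the explicit request is a `convert` sub-option):
```
mpv --vf=format=colormatrix=bt.601:convert=yes video.mkv
```
Without `convert=yes`, the same filter line would just relabel the
metadata as before.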
Due to the filter bullshit (what was I thinking), this requires quite
some acrobatics that would not be necessary without these abstractions.
On the other hand, it'd definitely be more of a mess without it. Oh
whatever.
f_reset, which is called on seeks, was a good place for resetting the
warning flag (so the warning would be printed again). Except some other
code abused f_reset when all metadata was read (in both cases you want
to clear the metadata). Instead of spending more time on getting this
flag reset correctly, just never reset it.
According to the zimg author, YUV->GREY conversion does not even read
the chroma planes, as long as no matrix conversion is involved. Since we
try to avoid the latter anyway by forcing the source parameters on the
target image, passing only the Y plane will not help with anything.
An unscientific test seems to confirm this, so remove it.
This would probably help with libswscale (I didn't test this), but on
the other hand, libswscale will rarely be used in cases where we can
extract the Y plane. (Except nv12, which should probably be added to the
zimg wrapper's unpacking.)
This was speculatively added 2 years ago in preparation for something
that apparently never happened. The D3D code was added as an "example",
but this too was never used/finished.
No reason to keep this.
With the previous commit, this is dead code.
This also turns the corresponding f_autoconvert.c code into dead code
(fortunately). Will probably remove that later.
Leaks the entire zimg state on filter deinit. Not sure what I was
thinking; with some luck, I just didn't give a shit about this case, but
most likely I was thinking the same thing as always: nothing.
As pointed out by @olifre in #7016, this line of code was wrong. p->opts
at this point is a struct allocated and managed by m_config. opts->file
is a string, and m_config explicitly frees it on destruction. The line
of code in question replaced the opts->file value, and made both the old
and new value children of the talloc allocation, so they were _also_
freed on destruction.
This crashed due to a double-free. First, talloc auto-freeing freed the
string, then m_config explicitly called talloc_free() on the stale
pointer again.
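Schematically, the broken pattern was something like this (a simplified
sketch, not the literal code; `new_path` is a placeholder):
```c
// p->opts is allocated/managed by m_config; opts->file is a string that
// m_config explicitly frees on destruction.
talloc_steal(p, p->opts->file);              // old value reparented under p
p->opts->file = talloc_strdup(p, new_path);  // new value also a child of p
// On destroy: talloc auto-frees the string as a child of p, then m_config
// calls talloc_free() on the stale opts->file pointer -> double-free.
```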
As @v-fox pointed out, commit 36dd2348a1 seems to have triggered the
crash. I suspect this code merely worked by coincidence, since it
allowed m_config to free the value first. This removed it from the
auto-free list, and thus did not result in a double-free. The change
in the order of calling talloc destructors changed the order of these calls.
There is no strong reason why the new behavior (as introduced by commit
36dd2348a1) would be wrong (it feels like cleaner behavior). On the
other hand, what the vf_vapoursynth code did is clearly unclean and
going by the m_config API, you're not at all supposed to do this.
m_config manages all memory referenced by option structs, the end.
@olifre's suggested fix would also have been correct (not just hiding
the issue), but I prefer my fix, as it doesn't mess with the option
struct in tricky ways.
This wouldn't have happened if mpv were written in Haskell.
This reverts commit 6385a5fd1b, and in
addition removes the code that automatically inserts the vavpp filter.
The reason is the same as the commit that is being reverted: this
filter seems to trigger driver bugs. It can cause GPU freezes or
just doesn't work.
This variant of disabling the filter is better. There was no way to
add the "force" parameter to the automatically inserted filter, so
the old approach just made manual filter insertion (with the --vf
option or "vf" command) more cumbersome.
I was assuming posix_memalign was the most portable function to use, but
MinGW does not provide it for some reason. Switch to C11 aligned_alloc()
which someone suggested was provided by MinGW (but actually isn't,
someone probably confused it with the incompatible _aligned_malloc),
and add a configure check.
Even though it turned out that MinGW doesn't provide it, the function
is slightly more elegant than posix_memalign(), so stay with it.
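For reference, the main C11 gotcha is that aligned_alloc() requires the
size to be a multiple of the alignment, so a wrapper roughly like this
is needed:
```c
#include <stdlib.h>

// align must be a power of two; round size up so aligned_alloc() accepts it.
static void *alloc_aligned(size_t align, size_t size)
{
    size = (size + align - 1) & ~(align - 1);
    return aligned_alloc(align, size);
}
```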
skip-logo.lua is just what I wanted to have. Explanations are on the top
of that file. As usual, all documentation threatens to remove this stuff
all the time, since this stuff is just for me, and unlike a normal user
I can afford the luxury of hacking the shit directly into the player.
vf_fingerprint is needed to support this script. It needs to scale down
video frames as part of its operation. For that, it uses zimg. zimg is
much faster than libswscale and generates more correct output. (The
filter includes a runtime fallback, but it doesn't even work because
libswscale fucks up and can't do YUV->Gray with range adjustment.)
Note on the algorithm: seems almost too simple, but was suggested to me.
It seems to be pretty effective, although long-term experience with
false positives is missing. At first I wanted to use dHash [1][2], which
is also pretty simple and effective, but might actually be worse than
the implemented mechanism. dHash has the advantage that the fingerprint
is smaller. But exact matching is too unreliable, and you'd still need
to determine the number of different bits for fuzzier comparison. So
there wasn't really a reason to use it.
[1] https://pypi.org/project/dhash/
[2] http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.
It seems much more promising to have something like this:
https://github.com/vapoursynth/vapoursynth/issues/386
This would solve the build distribution problem by relaxing the Python
dependency, and/or allow a Lua backend to be included without pain.
Also rename stereo3d to stereo_in. The only real change is that the
vo_gpu OSD code now uses the actual stereo 3D mode, instead of the
--video-stereo-mode value. (Why does this vo_gpu code even exist?)
This means vf_vapoursynth doesn't need a hack to work around the filter
code, and libavfilter filters now actually get the frame_rate field on
input pads set.
The libavfilter doxygen says the frame_rate field is only to be set if
the frame rate is known to be constant, and uses the word "must" (which
probably means they really mean it?) - but ffmpeg.c sets the field to
mere guesses anyway, and it looks like this normally won't lead to
problems.
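For reference, the field in question is set through
AVBufferSrcParameters; a minimal sketch (`buffersrc_ctx` being an
already-created buffer source filter context):
```c
#include <libavfilter/buffersrc.h>
#include <libavutil/mem.h>

// Declare a (supposedly constant) frame rate on a buffersrc input pad.
AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
par->frame_rate = (AVRational){30000, 1001}; // e.g. 29.97 fps
av_buffersrc_parameters_set(buffersrc_ctx, par);
av_free(par);
```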
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.
Untested.
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always enabled
deinterlace and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
In theory (and practice), this is not needed, because the VS filter's
get-frame callback will cause the process function to be called again if
there's not enough data. But it's still a bit weird to just add one more
frame on each iteration, so make it cleaner and make it request frames
until the input array is full.
This was obviously nonsense, and a previous "fix" to this code was
nonsense too. What is really needed here is temporarily dropping the
lock while calling destroy_vs()/reinit_vs().
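Schematically (a simplified sketch of the pattern, using the function
names above):
```c
// Release the filter lock around the blocking teardown/reinit, so code
// called back from VapourSynth can acquire it without deadlocking.
pthread_mutex_unlock(&p->lock);
destroy_vs(p);
reinit_vs(p);
pthread_mutex_lock(&p->lock);
```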
Fixes #5470.
Unknown frames were not freed properly. Although this doesn't really
happen anyway, because we're never going to feed audio frames to a video
filter chain. Since it's theoretically possible, and all other filters
handle this consistently, fix it anyway.
Properly initialize the output frame parameters other than image format
and size. This includes colorspace hints. (We're still not reading them
back from VapourSynth if it sets them, though. Usually it doesn't
anyway.)
VapourSynth can't pass through timestamps, only frame durations. So we
need to remember the timestamp of the very first frame passed to it.
This was accidentally set to 0 instead of NOPTS on init, so inserting
the filter during playback could show strange behavior.
Might be part of #5470.
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that cannot be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
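A sketch of how a filter uses it (based on the f_autoconvert interface
as described; details may differ):
```c
// Create the sub-filter and declare the formats the filter accepts;
// frames coming out of it are then guaranteed to be in one of them.
struct mp_autoconvert *conv = mp_autoconvert_create(f);
mp_autoconvert_add_imgfmt(conv, IMGFMT_420P, 0);
mp_autoconvert_add_imgfmt(conv, IMGFMT_NV12, 0);
```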
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.