Currently the vapoursynth video filter does not accept any argument for
passing arbitrary user data. This limits what the VS script can do.
Ideally, the vapoursynth filter would have a user-data parameter that
contains a string value. mpv passes that value to the VS script just
like container_fps and the other variables. Once the VS script gets the
data, it can do all sorts of data extraction and transformation.
Another benefit is that instead of mpv always having to catch up to
user needs for this filter, users can simply pass whatever they need
themselves, which makes the filter more future-proof.
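As a sketch of how this could look (the user-data option name and the
user_data variable injected into the script are assumptions here, not
a finalized interface):

  # mpv: --vf=vapoursynth=script.vpy:user-data=deint=1
  # script.vpy:
  import vapoursynth as vs
  core = vs.core

  # Parse a "key=value,key=value" string, injected by mpv the same way
  # as container_fps and the other variables.
  opts = dict(kv.split("=", 1) for kv in user_data.split(","))

  clip = video_in
  if opts.get("deint") == "1":
      clip = core.std.SetFieldBased(clip, 2)  # mark as top-field-first
  clip.set_output()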
Fixes #14214
5f74ed5828 deprecated this many years ago.
The utility is questionable at best given that -remove exists and is
more natural to use. Free up some code and drop it.
This better reflects what it actually does. As a bonus, script writers
won't be misled into thinking that fps displays the actual video or
display fps.
These were deprecated a long time ago and apparently didn't even work
with lavfi filters. Go ahead and remove them and additionally clean up
some code related to them. m_config_from_obj_desc_and_args becomes much
simpler now and a couple of arguments can be completely removed.
Ideally, users should be using lavfi-complex instead of lavfi-bridge
for such a use case, however currently lavfi-complex doesn't support
hwdec. So we can allow filters with dynamic inputs to work with
lavfi-bridge, at the cost of them only being able to take a single
input. This should probably be reverted when/if lavfi-complex
has hwdec, but for now this allows us to use libplacebo
as a video filter with hwdec in mpv again.
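For example, something along these lines should work now (untested
sketch; the libplacebo filter options are up to the user):

  mpv --gpu-api=vulkan --hwdec=vulkan --vf=lavfi=[libplacebo] video.mkv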
mpv has a generic method for getting the display resolution, so we can
save it in vf_vapoursynth without too much trouble. Unfortunately, the
resolution won't actually be available in many cases (like my own)
because the windowing backend doesn't actually know it yet. It looks
like at least Windows always returns the default monitor (maybe we
should do something similar for x11 and wayland), so there's at least
some value. Of course, this still has a bunch of pitfalls, like not
being able to cope with multi-monitor setups at all, but the same
applies to display_fps.
As an aside, the vapoursynth API this uses apparently requires R26 (an
ancient version anyway), so bump the build to compensate for this.
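A sketch of how a script might consume this (assuming the value shows
up as a display_res (width, height) pair next to display_fps; as noted
above, it may simply be unavailable):

  import vapoursynth as vs
  core = vs.core

  clip = video_in
  try:
      dw, dh = display_res  # injected by mpv; may be missing entirely
  except NameError:
      dw = dh = 0
  if dh and clip.height > dh:
      # e.g. downscale to the display height, keeping the aspect ratio
      w = clip.width * dh // clip.height // 2 * 2  # keep width mod-2
      clip = core.resize.Spline36(clip, w, dh)
  clip.set_output()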
Fixes #11510
Vulkan Video Decoding has finally become a reality, as it's now
showing up in shipping drivers, and the ffmpeg support has been
merged.
With that in mind, this change introduces HW interop support for
ffmpeg Vulkan frames. The implementation is functionally complete - it
can display frames produced by hardware decoding, and it can work with
ffmpeg vulkan filters. There are still various caveats due to gaps and
bugs in drivers, so YMMV, as always.
Primary testing has been done on Intel, AMD, and nvidia hardware on
Linux with basic Windows testing on nvidia.
Notable caveats:
* Due to driver bugs, video decoding on nvidia does not work right now,
unless you use the Vulkan Beta driver. It can be worked around, but
requires ffmpeg changes that are not considered acceptable to merge.
* Even if those work-arounds are applied, Vulkan filters will not work
on video that was decoded by Vulkan, due to additional bugs in the
nvidia drivers. The filters do work correctly on content decoded some
other way and then uploaded to Vulkan (e.g. decode with nvdec, upload
with --vf=format=vulkan).
* Vulkan filters can only be used with drivers that support
VK_EXT_descriptor_buffer which doesn't include Intel ANV as yet.
There is an MR outstanding for this.
* When dealing with 1080p content, there may be some visual distortion
in the bottom lines of frames due to chroma scaling incorporating the
extra hidden lines at the bottom of the frame (1080p content is
actually stored as 1088 lines), depending on the hardware/driver
combination and the scaling algorithm. This cannot be easily
addressed as the mechanical fix for it violates the Vulkan spec, and
probably requires a spec change to resolve properly.
All of these caveats will be fixed in either drivers or ffmpeg, and so
will not require mpv changes (unless something unexpected happens).
If you want to run on nvidia with the non-beta drivers, you can use
this ffmpeg tree with the work-around patches:
* https://github.com/philipl/FFmpeg/tree/vulkan-nvidia-workarounds
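For reference, the most basic invocation that exercises the new
interop looks something like this:

  mpv --gpu-api=vulkan --hwdec=vulkan video.mkv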
Useful to strip dolbyvision from the output, in cases where the user
does not want it applied. Doing this as a video filter gives users the
ability to easily toggle this stripping at runtime in a way that
properly propagates to any (potentially stateful) VO.
It also thematically fits the rest of the options in vf_format, which
are similarly concerned with modifying the video image parameters.
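Usage would look something like this (assuming the sub-option ends up
being called dolbyvision):

  mpv --vf=format=dolbyvision=no video.mkv

and a key can be bound to "vf toggle format=dolbyvision=no" to flip
the stripping at runtime.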
Discovered with:
find . -type f \( -name '*.md' -o -name '*.rst' \) -exec grep -n 'http://' {} +
All real, i.e. non-example, links found were moved to https. There are
some dead links and websites with no https available which were not
converted.
This is mostly for testing. It adds passing the metadata through the
video chain. The metadata can be manipulated with vf_format. Support
for zimg alpha conversion (if built with zimg after it gained alpha
support) is implemented. Premultiplied input is supported in vo_gpu.
Some things still seem to be buggy.
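To exercise this, something like the following should work (the alpha
sub-option name here is an assumption):

  mpv --vf=format=alpha=premul image.png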
This sucks, but is helpful for testing.
Obviously, it would be much nicer if there were a way to specify _all_
scaler options per filter (if the user wanted), instead of always using
the global options. But this is "too hard" for now. For testing, it is
extremely convenient to select the scaler backend, so add this option,
but make clear that it could go away. We'd delete it once there is a
better mechanism for this.
Whenever I deal with this, I have to look at the code to make sense of
this. And beyond that, there are some strange inconsistencies. (I think
this code is cursed. It always was, and maybe always will be.)
Although the manpage claimed that using multiple items for -add etc. is
deprecated, string list options didn't warn against it. So add the
warning, and add something in the changelog (even though nobody will
ever read this).
The manpage mentioned --vf-append, but this didn't even exist. So add
it, I guess. We encourage using -append for the other option types, so
for consistency, it should work on filter options. (And I had already
tricked myself into believing it existed when I mentioned it in the
manpage.)
Make the "operations" table separate for all option types, and mention
the option type on every single of the top-level list options.
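For example, with the new option in place, this appends a filter to
whatever --vf already contains:

  mpv --vf=format=yuv420p --vf-append=hflip video.mkv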
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.
I think this should be useful to perform some automated tests, maybe.
Fixes: #7194
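Example invocation (w/h here are assumed sub-options for the render
size):

  mpv --vf=gpu=w=1280:h=720 video.mkv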
For some reason (and through my own fault), vf_format converts image
formats, but nothing else. For example, setting the "colormatrix"
sub-parameter would not convert it to the new value, but instead
overwrite the metadata (basically "reinterpreting" the image data
without changing it).
Make the historical mistake worse, and go all the way and extend it such
that it can perform a conversion. For compatibility reasons, this needs
to be requested explicitly. (Maybe this would deserve a separate filter
to begin with, but things are messed up anyway. Feel free to suggest an
elegant and simple solution.)
This demonstrates how zimg can properly perform some conversions which
swscale cannot (see examples added to vf.rst).
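For instance, something like this now actually converts the video to
the new colormatrix instead of merely relabeling it:

  mpv --vf=format=colormatrix=bt.601:convert=yes video.mkv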
Stupidly this requires 2 code paths, one for conversion, and one for
overriding the parameters.
Due to the filter bullshit (what was I thinking), this requires quite
some acrobatics that would not be necessary without these abstractions.
On the other hand, it'd definitely be more of a mess without it. Oh
whatever.
skip-logo.lua is just what I wanted to have. Explanations are at the top
of that file. As usual, all documentation threatens to remove this stuff
all the time, since this stuff is just for me, and unlike a normal user
I can afford the luxury of hacking the shit directly into the player.
vf_fingerprint is needed to support this script. It needs to scale down
video frames as part of its operation. For that, it uses zimg. zimg is
much faster than libswscale and generates more correct output. (The
filter includes a runtime fallback, but it doesn't even work because
libswscale fucks up and can't do YUV->Gray with range adjustment.)
Note on the algorithm: it seems almost too simple, but it was suggested
to me. It appears to be pretty effective, although long-term experience
with false positives is missing. At first I wanted to use dHash [1][2], which
is also pretty simple and effective, but might actually be worse than
the implemented mechanism. dHash has the advantage that the fingerprint
is smaller. But exact matching is too unreliable, and you'd still need
to determine the number of different bits for fuzzier comparison. So
there wasn't really a reason to use it.
[1] https://pypi.org/project/dhash/
[2] http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html
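As a rough model of the comparison (an illustration, not the actual
filter/script code): each fingerprint is a tiny grayscale image, and
two frames count as matching when the summed absolute difference of
the samples stays below some threshold.

  def fingerprint_diff(a, b):
      # Sum of absolute differences between two equally sized
      # grayscale fingerprints (flat lists of 8-bit samples).
      return sum(abs(x - y) for x, y in zip(a, b))

  ref = [16] * (16 * 16)  # fingerprint of the reference frame
  cur = [18] * (16 * 16)  # fingerprint of the frame being tested
  print(fingerprint_diff(ref, cur) <= 1024)  # True -> treat as a match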
I once created this because someone wanted to use vapoursynth without
the Python dependency. No idea if anyone ever actually used it. It's
sort of icky (it calls itself "lazy" to preempt complaints about how
much it sucks), and complicates the build process. Kill it.
It seems much more promising to have something like this:
https://github.com/vapoursynth/vapoursynth/issues/386
This could solve the build distribution problem by relaxing the
Python dependency, and/or allow a Lua backend to be included without
pain.
I assume (but cannot confirm) that VA-AP-API is in fact a typo, because
most if not all search engine results related to it are from mpv's manual
page.
By changing this to VA-API and clarifying that this requires VA-API support
on a system to use it, we can hopefully make it clear to unsuspecting
Windows users that this is not the filter they're looking for.
Concerns #6690.
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.
Untested.
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always enabled
deinterlace and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that cannot be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
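Schematically (Python pseudocode, not the actual C API), the contract
looks like this: a filter with internal latency flushes its buffered
frames when the EOF "frame" arrives, then forwards EOF exactly once.

  EOF = object()  # EOF is a frame value, not a separate state

  class DelayOneFilter:
      # Toy filter with one frame of latency.
      def __init__(self):
          self.buf = None
      def feed(self, frame):
          prev, self.buf = self.buf, frame
          return [prev] if prev is not None else []
      def drain(self):
          out, self.buf = ([self.buf] if self.buf is not None else []), None
          return out

  def run(filt, frames):
      for f in frames:
          if f is EOF:
              yield from filt.drain()  # flush buffered frames first
              yield EOF                # then pass EOF through, once
          else:
              yield from filt.feed(f)

  print(list(run(DelayOneFilter(), ["frame0", "frame1", EOF])))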
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.