By default, it utilises the color space of the screen on which the
window is located. If a specific value is defined, that is used
instead. Depending on the chosen color space, macOS EDR (HDR) support
is activated and the OS's transformation (tone mapping) is used.
Fixes #7341
fuzzer_set_property.c:
Fuzz mpv_set_property in both initialized and non-initialized state.
Useful for testing the sanitization of user-provided values. I've
already seen some memory leaks in the parsing code, so it's good to
drill it.
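A rough sketch of what such a target might look like (the libFuzzer
entry point and the choice of the "vf" property here are illustrative
assumptions, not necessarily what the actual target does):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mpv/client.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        // NUL-terminate the fuzz input so it can be used as a property value.
        char *value = malloc(size + 1);
        if (!value)
            return 0;
        memcpy(value, data, size);
        value[size] = '\0';

        mpv_handle *ctx = mpv_create();
        if (ctx) {
            // Non-initialized state: only the option/property parsing code
            // runs, which is where the leaks mentioned above tend to hide.
            mpv_set_property_string(ctx, "vf", value);
            mpv_terminate_destroy(ctx);
        }
        free(value);
        return 0;
    }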
fuzzer_loadfile.c:
mpv_command "loadfile" test. Good for testing demuxers, decoding and
the playback loop. Sadly, in headless mode we can't really test AO and
VO, but at least all the code around them can be fuzzed, especially
our custom demuxers like demux_mkv.
fuzzer_loadfile_direct.c:
Similar to loadfile above, but instead of saving the data to a file,
it passes the fuzz input directly in the command. Protocol-specific
versions (mf:// and memory:// for now) and a generic one are
generated; see the sketch below.
Nothing really complex, but it's a good start, and even those few
targets should give good coverage of the most common code paths in
libmpv.
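For illustration, a loadfile_direct-style target using the memory://
protocol could look roughly like this (libFuzzer entry point assumed;
vo=null/ao=null stand in for the headless setup, and the event
handling is reduced to a bare minimum):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mpv/client.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        // Embed the fuzz input directly in a memory:// URL instead of
        // writing it to a temporary file first.
        char *url = malloc(size + sizeof("memory://"));
        if (!url)
            return 0;
        memcpy(url, "memory://", 9);
        memcpy(url + 9, data, size);
        url[size + 9] = '\0';

        mpv_handle *ctx = mpv_create();
        if (ctx) {
            // Headless: AO and VO can't really be tested, so disable them.
            mpv_set_option_string(ctx, "vo", "null");
            mpv_set_option_string(ctx, "ao", "null");
            if (mpv_initialize(ctx) >= 0) {
                const char *cmd[] = {"loadfile", url, NULL};
                mpv_command(ctx, cmd);
                // Drain events until the (most likely bogus) input stops
                // playing, with a timeout as a safeguard.
                while (1) {
                    mpv_event *ev = mpv_wait_event(ctx, 10);
                    if (ev->event_id == MPV_EVENT_END_FILE ||
                        ev->event_id == MPV_EVENT_SHUTDOWN ||
                        ev->event_id == MPV_EVENT_NONE)
                        break;
                }
            }
            mpv_terminate_destroy(ctx);
        }
        free(url);
        return 0;
    }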
vo_rpi and its related code have pretty much historically been a
disaster in mpv. The build regularly gets broken, and since nobody
uses it, it takes months for anyone to notice. There was also that
time when fullscreen was broken for about a year and a half. Building
with waf was also entirely broken for a couple of years or so due to
mysterious reasons no one ever figured out (meson magically fixed it).
Anyways, once again the build is broken because rpi was forgotten
about, but instead of pretending to support this crap, just drop it
all.
Nowadays, the mmal hwdec is a relic, since these devices are better
off using the v4l2m2m ffmpeg fork instead, which actually uses KMS
properly. RPI 1 and 2 probably can't do this and will remain broken,
but oh well, blame Broadcom for being special snowflakes and not using
standard APIs (my rockpro worked out of the box; just saying). RPI 2
is nearly 10 years old anyways, so I think you can afford a new SBC by
now. If we were nicer, there would be a deprecation period, but this
is already broken in the last major release anyway, so it's too late.
Closes #13402.
This was disabled by default in 99cef59fc9 (from 2017)
due to build issues with old kernel headers.
Whatever was considered old at that time is ancient now, and with the
last commit there should be no issues auto-detecting it.
Only the vaapi-copy variant, as nothing can map D3D12 resources
currently. And even if we added resource sharing to D3D11, it would
invoke a copy at some point, so there is no point really. Maybe in the
future, when libplacebo gets smarter about resource sharing on
Windows, but the practical advantages are really small. I've tested it
with Vulkan <-> D3D11 sharing, and a GPU <-> GPU copy is still
invoked. That's better than a CPU memcpy, so it's something for the
future.
These have been build options since the waf build, but that doesn't
really make sense. The build can detect whatever macOS SDK version is
available and then use that information to determine whether to enable
the features or not; having options to explicitly disable support for
particular SDK versions serves no purpose. Because f5ca11e12b
effectively made macOS 10.15 the minimum supported version, we can drop
all of these checks and bump the required sdk version to 10.15. The rest
of the build simplifies from there.
Make it impossible to build mpv without the latest libplacebo. This
will allow for less code duplication between mpv and libplacebo, and
in the future it will also let us delete legacy ifdefs and track
libplacebo better.
It makes more sense as an option at this point. Also, libdrm is not
optional at all: you have to get a DRM format and modifier for this to
even work (the VO will just fail without DRM). Just hard-require it
and remove the guards. vaapi can remain optional.
Vulkan Video Decoding has finally become a reality, as it's now
showing up in shipping drivers, and the ffmpeg support has been
merged.
With that in mind, this change introduces HW interop support for
ffmpeg Vulkan frames. The implementation is functionally complete - it
can display frames produced by hardware decoding, and it can work with
ffmpeg vulkan filters. There are still various caveats due to gaps and
bugs in drivers, so YMMV, as always.
Primary testing has been done on Intel, AMD, and nvidia hardware on
Linux with basic Windows testing on nvidia.
Notable caveats:
* Due to driver bugs, video decoding on nvidia does not work right now,
unless you use the Vulkan Beta driver. It can be worked around, but
requires ffmpeg changes that are not considered acceptable to merge.
* Even if those work-arounds are applied, Vulkan filters will not work
on video that was decoded by Vulkan, due to additional bugs in the
nvidia drivers. The filters do work correctly on content decoded some
other way and then uploaded to Vulkan (e.g. decode with nvdec and
upload with --vf=format=vulkan; see the sketch at the end of this
message).
* Vulkan filters can only be used with drivers that support
VK_EXT_descriptor_buffer, which doesn't include Intel ANV as yet.
There is an MR outstanding for this.
* When dealing with 1080p content, there may be some visual distortion
in the bottom lines of frames due to chroma scaling incorporating the
extra hidden lines at the bottom of the frame (1080p content is
actually stored as 1088 lines), depending on the hardware/driver
combination and the scaling algorithm. This cannot be easily
addressed as the mechanical fix for it violates the Vulkan spec, and
probably requires a spec change to resolve properly.
All of these caveats will be fixed in either drivers or ffmpeg, and so
will not require mpv changes (unless something unexpected happens).
If you want to run on nvidia with the non-beta drivers, you can use this
ffmpeg tree with the work-around patches:
* https://github.com/philipl/FFmpeg/tree/vulkan-nvidia-workarounds
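For illustration, a minimal libmpv client using the nvdec +
format=vulkan fallback described in the caveats above might look like
the following (the option choices and file name are assumptions; with
fixed drivers, plain hwdec=vulkan is the native path):

    #include <mpv/client.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        if (!ctx)
            return 1;

        // Render through the Vulkan backend of the GPU VO.
        mpv_set_option_string(ctx, "gpu-api", "vulkan");
        // Workaround for current nvidia release drivers: decode with nvdec
        // and upload the frames to Vulkan so Vulkan filters still work.
        mpv_set_option_string(ctx, "hwdec", "nvdec");
        mpv_set_option_string(ctx, "vf", "format=vulkan");
        // On drivers without the bugs above, "hwdec=vulkan" alone suffices.

        if (mpv_initialize(ctx) < 0) {
            mpv_terminate_destroy(ctx);
            return 1;
        }

        const char *cmd[] = {"loadfile", "test.mkv", NULL};
        mpv_command(ctx, cmd);

        // Wait for playback to end.
        while (1) {
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_END_FILE ||
                ev->event_id == MPV_EVENT_SHUTDOWN)
                break;
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }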
This reworks all of mpv's unit tests so they are compiled as separate
executables (optional) and run via meson test. Because most of the tests
are dependent on mpv's internals, existing compiled objects are
leveraged to create static libs and used when necessary. As an aside, a
function was moved into video/out/gpu/utils for sanity's sake (otherwise
most of vo would have been needed). As a plus, meson runs the tests
in parallel automatically, and the output no longer pollutes the
source directory. There are tests that can break due to ffmpeg changes,
so they require a specific minimum libavutil version to be built.
Changes:
* fixed hangups in the loop function and in some other cases
* refactoring according to @michaelforney's recommendations in #8314
* a few minor and/or cosmetic changes
* ability to build ao_sndio using meson
The AO provides a way for mpv to directly submit audio to the PipeWire
audio server.
Doing this directly instead of going through the various compatibility
layers provided by PipeWire has the following advantages:
* It reduces the complexity of going through the compatibility layers
* It allows a richer integration between mpv and PipeWire
(for example for metadata)
* Some users report issues with the compatibility layers that do not
occur with the native AO
For now the AO is ordered after all the other relevant AOs, so it will
most probably not be picked up by default.
This is for the following reasons:
* Currently it is not possible to detect if the PipeWire daemon that mpv
connects to is actually driving the system audio.
(https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1835)
* It gives the AO time to stabilize before it is used by everyone.
Based-on-patch-by: Oschowa <oschowa@web.de>
Based-on-patch-by: Andreas Kempf <aakempf@gmail.com>
Helped-by: Ivan <etircopyhdot@gmail.com>