Alphabetizing the extensions cleans up the code and makes it less
ambiguous where newer extensions should be added. The video line was
also wrapped to 72 characters for cleanliness.
When af_scaletempo2.c:process() detects a format change, it goes back
through mp_scaletempo2_init() to reinitialize everything. However,
mp_scaletempo2.input_buffer is not necessarily reallocated due to a
check in af_scaletempo2_internals.c:resize_input_buffer(). This is a
problem if the number of audio channels increases, since without
reallocating, the buffer for the new channel(s) will at best point to
NULL, and at worst to uninitialized memory.
Since resize_input_buffer() is only called from two places, pull the
size check out into mp_scaletempo2_fill_input_buffer(). This allows each
caller to decide whether they want to resize or not. We could be
smarter about when to reallocate, but that would add a lot of machinery
for a case I don't expect to be hit often in practice.
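Roughly, the new split looks like the following minimal sketch, with
invented names and buffer layout rather than the actual mp_scaletempo2
structures:
```
#include <stdlib.h>

/* Sketch only: the size check lives in the fill function, while resize
 * always reallocates, so the reinit path can force a fresh allocation
 * after a format change. */
struct input_buffer {
    float **planes;   /* one plane per channel */
    int channels;
    int frames;
};

static void resize_input_buffer(struct input_buffer *b, int channels, int frames)
{
    /* Unconditional reallocation; no "already big enough" early-out here. */
    for (int c = 0; c < b->channels; c++)
        free(b->planes[c]);
    free(b->planes);
    b->planes = calloc(channels, sizeof(*b->planes));
    for (int c = 0; c < channels; c++)
        b->planes[c] = calloc(frames, sizeof(**b->planes));
    b->channels = channels;
    b->frames = frames;
}

static void fill_input_buffer(struct input_buffer *b, int channels, int frames)
{
    /* The caller decides when to resize, e.g. only when the requested
     * shape no longer fits the existing allocation. */
    if (channels != b->channels || frames > b->frames)
        resize_input_buffer(b, channels, frames);
    /* ... copy incoming samples into b->planes ... */
}
```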
Certain combinations of hardware formats require the use of hwmap to
transfer frames between the formats, rather than hwupload, which will
fail if attempted.
To keep the usage of vf_format for HW -> HW transfers as intuitive as
possible, we should detect these cases and do the map operation instead
of uploading.
For now, the relevant cases are moving between VAAPI and Vulkan, and
VAAPI and DRM Prime, in both directions. I have introduced the IMGFMT
entry for Vulkan here so that I can put in the complete mapping table.
It's actually not useless, as you can map to Vulkan, use a Vulkan
filter and then map back to VAAPI for display output.
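Conceptually, the table amounts to something like this sketch; the enum
names are made up and not the actual IMGFMT constants:
```
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: the pairs of hardware formats that have to be
 * transferred with hwmap; everything else keeps going through hwupload. */
enum hw_fmt { HW_VAAPI, HW_VULKAN, HW_DRMPRIME };

static const struct { enum hw_fmt src, dst; } map_pairs[] = {
    {HW_VAAPI,    HW_VULKAN},
    {HW_VULKAN,   HW_VAAPI},
    {HW_VAAPI,    HW_DRMPRIME},
    {HW_DRMPRIME, HW_VAAPI},
};

static bool needs_hwmap(enum hw_fmt src, enum hw_fmt dst)
{
    for (size_t i = 0; i < sizeof(map_pairs) / sizeof(map_pairs[0]); i++) {
        if (map_pairs[i].src == src && map_pairs[i].dst == dst)
            return true;
    }
    return false;
}
```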
Historically, HW -> HW uploads did not exist, so the current code
assumes they will never happen. But as part of introducing Vulkan
support into ffmpeg, we added HW -> HW support to enable transfers
between Vulkan and CUDA.
Today, that means you can use the lavfi hwupload filter with the
correct configuration (and previous changes in this series) but it
would be more convenient to enable HW -> HW in the format filter so
that the transfers can be done more intuitively:
```
--vf=format=fmt=cuda
```
and
```
--vf=format=fmt=vulkan
```
Most of the work here is skipping the logic that is specific to
SW -> HW uploads, which do format conversion. There is no ability to do inline
conversion when moving between HW formats, so the format must be
mutually understood to begin with.
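A rough sketch of that distinction, with invented names rather than the
actual filter code:
```
#include <stdbool.h>

/* Hypothetical helper: a SW -> HW upload may first convert the frame to
 * one of the formats the target accepts, but a HW -> HW transfer cannot
 * convert inline, so the frame's format must already be accepted as-is. */
static bool transfer_possible(bool src_is_hw, int src_fmt,
                              const int *accepted_fmts, int n_accepted)
{
    for (int i = 0; i < n_accepted; i++) {
        if (accepted_fmts[i] == src_fmt)
            return true;            /* accepted directly in both cases */
    }
    /* Only SW sources can be converted to an accepted format first. */
    return !src_is_hw && n_accepted > 0;
}
```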
Additional work needs to be done to enable transfers between VAAPI
and Vulkan, which use mapping rather than uploads. I'll tackle that
in the next change.
Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it's the right one
or not. In most situations where a hardware based filter is used, we
need more control over the device.
In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and this is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).
Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration.
e.g.:
```
mpv --vf=hwupload
```
becomes
```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```
or
```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
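For the graph case, the device injection amounts to roughly the sketch
below, using public libavfilter fields; set_graph_device is an
illustrative name, and device_ref stands in for whichever device the
named hwdec_interop provides:
```
#include <libavfilter/avfilter.h>
#include <libavutil/buffer.h>

/* Sketch only: hand the same hardware device to every filter in the
 * graph, as described above for the graph case. */
static void set_graph_device(AVFilterGraph *graph, AVBufferRef *device_ref)
{
    for (unsigned i = 0; i < graph->nb_filters; i++) {
        AVFilterContext *f = graph->filters[i];
        av_buffer_unref(&f->hw_device_ctx);
        if (device_ref)
            f->hw_device_ctx = av_buffer_ref(device_ref);
    }
}
```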
If we want to be able to handle conversion between hw formats in filter
chains, then we need to be able to load hwdec_interops from filters, as
the VO is only ever going to initialise one interop, based on its
configuration. That means that in almost all situations, only one of
the required interops will be loaded at the time the filter is
initialised.
The existing code has some assumptions that new hwdec_interops will not
be loaded after the vo has picked one to use. This change fixes two
instances:
* Refusing to load a new hwdec_interop if there is at least one
loaded already.
* Not recalculating the set of formats known to the autoconvert
filter when a new output format shows up. This leads to autoconvert
not knowing that a new format is supported when the hwdec interop is
lazily loaded.
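A minimal sketch of the first point, with invented names and data
structures rather than mpv's real ones:
```
#include <stdbool.h>
#include <string.h>

#define MAX_INTEROPS 8

struct interop { const char *name; };

struct interop_state {
    struct interop loaded[MAX_INTEROPS];
    int num_loaded;
};

/* Loading another interop is allowed even when one is already present;
 * only loading the same interop twice is skipped. */
static bool load_interop(struct interop_state *st, struct interop iop)
{
    for (int i = 0; i < st->num_loaded; i++) {
        if (strcmp(st->loaded[i].name, iop.name) == 0)
            return true;                  /* already loaded */
    }
    if (st->num_loaded == MAX_INTEROPS)
        return false;
    st->loaded[st->num_loaded++] = iop;
    return true;  /* callers then also refresh autoconvert's format set */
}
```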
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
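Conceptually the lookup becomes something like the following sketch,
with invented structures rather than mpv's actual ones:
```
#include <stddef.h>

/* Find an interop by the hardware image format it handles, rather than
 * by device type. */
struct hwdec_interop_info { int hw_imgfmt; const char *name; };

static const struct hwdec_interop_info *find_interop_by_imgfmt(
    const struct hwdec_interop_info *interops, int n, int imgfmt)
{
    for (int i = 0; i < n; i++) {
        if (interops[i].hw_imgfmt == imgfmt)
            return &interops[i];
    }
    return NULL;
}
```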
ASS must only automatically break at ASCII spaces (\x20), but other
subtitle formats might expect more breaking opportunities.
In particular, non-ASS subs in scripts that typically do not use (ASCII)
spaces to separate words, e.g. CJK, might overflow the screen if the
conversion doesn't insert additional line breaks (ffmpeg does not).
Thus, try to enable Unicode line breaking for converted subs and the OSD
if libass is new enough. The feature may still be unavailable at runtime
if libass wasn't built with Unicode line breaking support.
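A minimal sketch of the runtime side, assuming a libass recent enough
to declare ASS_FEATURE_WRAP_UNICODE:
```
#include <stdbool.h>
#include <ass/ass.h>

/* Request Unicode line breaking for a converted-subtitle/OSD track.
 * Returns false if the running libass lacks the feature; plain
 * ASCII-space breaking then remains in effect. */
static bool try_enable_unicode_linebreaks(ASS_Track *track)
{
    return ass_track_set_feature(track, ASS_FEATURE_WRAP_UNICODE, 1) == 0;
}
```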
Previously, running a debug build of mpv would crash with this output
when preinit() at vo_libmpv.c:732 calls control(vo, VOCTRL_PREINIT, NULL):
```
Swift/Optional.swift:247: Fatal error: unsafelyUnwrapped of nil optional
```
This comes from this line of code:
```
var data = UnsafeMutableRawPointer.init(bitPattern: 0).unsafelyUnwrapped
```
Unsafely unwrapping the nil result of UnsafeMutableRawPointer.init
(bitPattern: 0) has always been UB, but the Swift runtime began
asserting on it in debug builds a couple of macOS versions ago.
Previously, we would only call list_devs() on an available AO if it
*did not* have a hotplug_init() callback, or for the first AO that
*did* have one.
This is problematic when multiple fully functional hotplug-capable AOs
are available.
The second one would not be able to contribute discovered devices.
This problem prevents ao_pipewire from introducing full hotplug support
with hotplug_init().
When a platform has multiple valid AOs that can provide hotplug events,
we should try to use the one that also provides playback.
Concretely this will help when introducing hotplug support for
ao_pipewire.
Currently, ao_pulse is probed by ao_hotplug_get_device_list() before
ao_pipewire, and on the common setups where both AOs could work, pulse
will be selected for hotplug handling.
This means that hotplug_init() of ao_pipewire will never be called and
list_devs() has to do its own initialization.
But if ao_pulse is non-functional or not compiled in, ao_pipewire
suddenly *must* implement hotplug_init() for hotplug events to work at
all.
Also, if the hotplug ao_pulse connects to a PulseAudio instance that is
not emulated by the same PipeWire instance as the playback ao_pipewire,
the hotplug events are useless.
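A hedged sketch of the intended selection order, with invented names
rather than the actual ao.c code:
```
#include <stddef.h>
#include <string.h>

/* Among AOs that can deliver hotplug events, prefer the one that is also
 * used for playback, so e.g. ao_pipewire handles both when it is the
 * active AO; otherwise fall back to the first hotplug-capable driver. */
struct ao_driver_info { const char *name; int has_hotplug_init; };

static const struct ao_driver_info *pick_hotplug_ao(
    const struct ao_driver_info *drivers, int n, const char *playback_ao)
{
    for (int i = 0; i < n; i++) {
        if (drivers[i].has_hotplug_init &&
            strcmp(drivers[i].name, playback_ao) == 0)
            return &drivers[i];
    }
    for (int i = 0; i < n; i++) {
        if (drivers[i].has_hotplug_init)
            return &drivers[i];
    }
    return NULL;
}
```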
30 characters are not a lot.
Also, when the name is calculated from the allocation site, the line
number will be at the end of the name, so truncating it is especially
inconvenient.
I'm adding support in ffmpeg for the XV36 format, which will be used
by VAAPI for 12-bit 4:4:4 content. It's an undefined-alpha-channel
variant of Y412, which is itself a 12-bit + 4 bits padding variant of
Y416.
We currently have a repacker for full four-channel cccc16, and for
three-channel ccc16, but nothing for ccc16x16 with the undefined alpha
channel.
It's simple enough to add one using the existing macros.
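For illustration only, a generic ccc16x16 group packer; the component
order and the actual repack macros used in the real code are not shown
here:
```
#include <stdint.h>

/* Three 16-bit component words carry data (12 significant bits stored in
 * the high bits, as in XV36), plus a fourth 16-bit word whose contents
 * are undefined. */
static void pack_ccc16x16(uint16_t dst[4], const uint16_t c12[3])
{
    for (int i = 0; i < 3; i++)
        dst[i] = (uint16_t)(c12[i] << 4);  /* 12-bit value in the high bits */
    dst[3] = 0;                            /* undefined/padding word */
}
```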
uau did some investigation and noticed that we do not send a wakeup
event when we encounter end-of-stream in ao_read_data(), in contrast to
the equivalent logic for push AOs in ao_play_data().
Inserting that wakeup fixes the original problem of lack of
reinitialization on a format change without the problems we saw with
the previous attempted fix.
Fixes #10566