This has been a long-standing problem that I think was the root cause
for wm4's vaapi shitlist. DRM explicitly supports all possible rgba
orderings, but OpenGL only guarantees support for 'rgba8', so something
has to happen to support importing dmabufs with a non-native ordering.
Although undocumented, it appears that somewhere in the import on the
GL side, the ordering is swizzled to rgba - which makes sense as that's
the only way to be able to support importing all the different formats.
But we didn't know this, and we do our own swizzling based on the
original imgfmt - which matches the drm format. So we get double
swizzling and messed up colours.
As there is no way for us to express to the rest of mpv that the GL
swizzle happened, the easiest thing to do is fudge the format to the
natural ordering so that the GL side doesn't change anything; our own
swizzling will then do the right thing in the end.
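Conceptually, the fudge looks something like the sketch below
(illustrative only, with a hypothetical helper name rather than the
actual mpv code): hand the GL-side import a natural-ordered fourcc so
its implicit swizzle becomes an identity, leaving our imgfmt-derived
swizzle as the only one applied.
```
/* Illustrative sketch only (hypothetical helper, not the actual mpv code):
 * map RGBA DRM fourccs with non-native channel order to the natural
 * R,G,B,A byte order before the GL/EGL import, so the driver's implicit
 * swizzle is a no-op and our own imgfmt-derived swizzle does the work. */
#include <stdint.h>
#include <drm_fourcc.h>

static uint32_t fudge_drm_format(uint32_t fourcc)
{
    switch (fourcc) {
    case DRM_FORMAT_ARGB8888:
    case DRM_FORMAT_BGRA8888:
    case DRM_FORMAT_RGBA8888:
    case DRM_FORMAT_ABGR8888:
        return DRM_FORMAT_ABGR8888;  /* natural R,G,B,A memory order */
    case DRM_FORMAT_XRGB8888:
    case DRM_FORMAT_XBGR8888:
        return DRM_FORMAT_XBGR8888;  /* natural R,G,B,X memory order */
    default:
        return fourcc;  /* 0rgb/0bgr and non-rgba formats left alone (see note below) */
    }
}
```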
Note that this change doesn't handle 0rgb and 0bgr because they seem to
be affected by some additional feature/bug that is mangling the
ordering somewhere that equally affects Vulkan interop. That could be a
vaapi driver bug or something going on in ffmpeg.
A few years ago, wm4 got sufficiently annoyed with how vaapi image
format support was being discovered that he flipped the table and
introduced the shit list (which just included vaapi) to hard-code the
set of supported formats.
While that might have been necessary at the time, I haven't been able
to find a situation where the true list of supported formats was unsafe
to use. We filter the list down based on what the vo reports - and the
vo already does thorough testing of formats, so if a format makes it
through that gauntlet, it does actually work.
Interestingly, as far as I can tell, the hwdec_vaapi probing code was
already good enough at the time (also written by wm4), so perhaps the
key difference here is that the driver side of things has improved.
I dug into this because of the support for the 422/444 high bit depth
vaapi formats I added to ffmpeg. These are obviously not in the
hard-coded list today, but they work fine.
Finally, although it's positioned as a vaapi thing, it's really
Intel-specific, as the AMD vaapi driver has never exposed support for
anything except the formats used by the decoder/encoder profiles.
The logic around trying to establish what formats are supported by
vaapi is confusing, and it turns out that we've been doing it wrong.
Up until now, we've been going through the list of decoding profile
endpoints and checking the formats declared in those profiles to build
our list.
However, the set of formats that the vaapi driver can actually support
is potentially a superset of those supported by any given profile
endpoint. This master list is exposed by libavutil via the
av_hwframe_transfer_get_formats() function, and so we should use that
list as our starting point.
Perhaps surprisingly, this change actually removes no code, because we
still want the logic that enumerates endpoints and finds supported
formats for endpoints. We need this because we must have at least one
known sw format to initialise the hwframe_ctx with.
Additionally, while in the general case
av_hwframe_transfer_get_formats() can return different formats
depending on which format you initialise the hwframe_ctx with, I happen
to have read the libavutil code, and it doesn't care; we only need to
call it once, and we know we'll get back all known formats.
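In code terms, the probing boils down to something like this (a
simplified sketch with error handling and the real mpv plumbing
omitted; probe_formats and the 128x128 frame size are ours for
illustration):
```
#include <libavutil/hwcontext.h>

static int probe_formats(AVBufferRef *device_ref, enum AVPixelFormat known_sw_fmt)
{
    enum AVPixelFormat *formats = NULL;
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
    if (!frames_ref)
        return AVERROR(ENOMEM);

    AVHWFramesContext *fctx = (AVHWFramesContext *)frames_ref->data;
    fctx->format    = AV_PIX_FMT_VAAPI;
    fctx->sw_format = known_sw_fmt;  /* any one format known to work */
    fctx->width = fctx->height = 128;

    int ret = av_hwframe_ctx_init(frames_ref);
    if (ret >= 0)
        ret = av_hwframe_transfer_get_formats(frames_ref,
                                              AV_HWFRAME_TRANSFER_DIRECTION_TO,
                                              &formats, 0);
    if (ret >= 0) {
        for (int i = 0; formats[i] != AV_PIX_FMT_NONE; i++)
            ;  /* record formats[i] as a supported upload format */
        av_free(formats);
    }

    av_buffer_unref(&frames_ref);
    return ret;
}
```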
In practice, with an Intel vaapi driver, this will result in us
supporting a handful of extra formats for hwuploads - particularly
yuv420p (so no need to convert to nv12 first) and various orderings
of rgb(a).
Commit 740b701 introduced handling for subtitles with unknown duration,
but it came with a minor flaw: a track event that shares an identical
start time with the following track event will have its Duration value
set to 0. We don't want this to happen, since it prevents simultaneous
rendering of multiple track events.
This commit addresses that issue.
When the width is exactly a multiple of SLICE_W (currently 256), a
heap buffer overflow is reported by AddressSanitizer. So adjust the
maximum index for the line array accordingly.
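For illustration only (SLICE_W matches, but the function and the exact
expressions are hypothetical, not the real code), the off-by-one
pattern looks like this:
```
/* Hypothetical illustration (not the real code): when width is an exact
 * multiple of SLICE_W, the old bound yields one slice index too many and
 * the line array is overrun. */
#define SLICE_W 256

static int num_slices(int width)
{
    /* old: width / SLICE_W + 1   -> 3 slices for width == 512 (overflow)
     * new: round up only when a partial slice is left over */
    return (width + SLICE_W - 1) / SLICE_W;   /* 2 slices for width == 512 */
}
```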
We've had some annoying names for interops, which we can't simply
rename because that would break config files and command lines. So we
need to put a little more effort in and add a concept of legacy names
that allow us to continue loading them, but with a warning.
The three I'm renaming here are:
* vaapi-egl -> vaapi (vaapi works with Vulkan too)
* drmprime-drm -> drmprime-overlay (actually describes what it does)
* cuda-nvdec -> cuda (cuda interop is not nvdec specific)
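The mechanism amounts to something like the following (a rough sketch
with hypothetical struct and field names, not the actual mpv code):
```
/* Rough sketch (hypothetical names): each interop carries a list of legacy
 * aliases; loading by an alias still works, but logs a warning pointing at
 * the new name. */
#include <stdbool.h>
#include <string.h>
#include "common/msg.h"

struct interop_entry {
    const char *name;
    const char *legacy_names[2];  /* NULL-terminated */
};

static bool interop_matches(const struct interop_entry *e, const char *want,
                            struct mp_log *log)
{
    if (strcmp(e->name, want) == 0)
        return true;
    for (int n = 0; e->legacy_names[n]; n++) {
        if (strcmp(e->legacy_names[n], want) == 0) {
            mp_warn(log, "Interop name '%s' is deprecated, use '%s'.\n",
                    want, e->name);
            return true;
        }
    }
    return false;
}
```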
mpv sets _NET_WM_DESKTOP to 0xFFFFFFFF, which behaves differently with
each window manager. Instead, set the window state to STICKY, and let
the window managers deal with it.
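In EWMH terms, the change amounts to something like this (a simplified
sketch, not the exact vo_x11 code):
```
/* Simplified sketch (not the exact vo_x11 code): toggle _NET_WM_STATE_STICKY
 * via a _NET_WM_STATE client message instead of writing _NET_WM_DESKTOP. */
#include <string.h>
#include <X11/Xlib.h>

#define _NET_WM_STATE_REMOVE 0
#define _NET_WM_STATE_ADD    1

static void set_sticky(Display *dpy, Window root, Window win, int enable)
{
    XEvent ev;
    memset(&ev, 0, sizeof(ev));
    ev.xclient.type = ClientMessage;
    ev.xclient.window = win;
    ev.xclient.message_type = XInternAtom(dpy, "_NET_WM_STATE", False);
    ev.xclient.format = 32;
    ev.xclient.data.l[0] = enable ? _NET_WM_STATE_ADD : _NET_WM_STATE_REMOVE;
    ev.xclient.data.l[1] = XInternAtom(dpy, "_NET_WM_STATE_STICKY", False);
    XSendEvent(dpy, root, False,
               SubstructureRedirectMask | SubstructureNotifyMask, &ev);
}
```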
The new status quo is simple: all messages coming from libplacebo are
marked "vo/gpu{-next}/libplacebo", regardless of the backend API (vulkan
vs opengl/d3d11).
Messages coming from mpv's internal vulkan code will continue to come
from "vo/gpu{-next}/vulkan", and messages coming from the vo module
itself will be marked "vo/gpu{-next}".
This is significantly better than the old status quo of vulkan messages
coming from "vo/gpu{-next}/vulkan/libplacebo" whereas opengl/d3d11
messages simply came from "vo/gpu{-next}", even when those messages
originated from libplacebo.
(It's worth noting that the destructor for the log is redundant,
because it's attached to the ctx, which is freed on uninit anyway.)
Managed to go completely unnoticed for months (who was the bad person
that introduced these*). Fairly self-explanatory. After getting
ProviderInfo from randr, we need to free it. The other one is pretty bad
because it leaked every frame (ouch). It turns out that you're supposed
to free the event data after you retrieve it from a generic event via
its XGenericEventCookie. Free that as well.
*: It was me.
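Both leaks follow the same Xlib get/free pattern, shown here in
isolation (simplified, not the actual vo_x11 code):
```
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

static void handle_providers(Display *dpy, XRRScreenResources *res,
                             XRRProviderResources *pres)
{
    for (int i = 0; i < pres->nproviders; i++) {
        XRRProviderInfo *info = XRRGetProviderInfo(dpy, res, pres->providers[i]);
        /* ... inspect info->name etc. ... */
        XRRFreeProviderInfo(info);   /* was missing */
    }
}

static void handle_event(Display *dpy, XEvent *ev)
{
    if (XGetEventData(dpy, &ev->xcookie)) {
        /* ... use ev->xcookie.data ... */
        XFreeEventData(dpy, &ev->xcookie);   /* was leaking every frame */
    }
}
```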
The wayland presentation time code currently always assumes that only
CLOCK_MONOTONIC can be used. There is a naive attempt to ignore clocks
other than CLOCK_MONOTONIC, but the logic is actually totally wrong and
the timestamps would be used anyway. Fix this by checking a use_present
bool (similar to use_present in xorg) which is set to true if we receive
a valid clock in the clockid event. Additionally, allow
CLOCK_MONOTONIC_RAW as a valid clockid. In practice, it should be the
same as CLOCK_MONOTONIC for us (the ntp/adjtime difference wouldn't
matter). Since this is a Linux-specific clock, add a define for it if
it is not found.
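A sketch of the resulting clockid handler (the field and function names
approximate the wayland code, but are not exact):
```
/* Sketch (names approximate): only trust presentation feedback timestamps
 * if the compositor reports a clock we can actually use. */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#ifndef CLOCK_MONOTONIC_RAW
#define CLOCK_MONOTONIC_RAW 4   /* Linux-specific; define it if missing */
#endif

struct wp_presentation;  /* from the presentation-time protocol */

struct wl_state {
    bool use_present;
    clockid_t presentation_clock;
};

static void presentation_clock_id(void *data, struct wp_presentation *p,
                                  uint32_t clockid)
{
    struct wl_state *wl = data;
    (void)p;
    if (clockid == CLOCK_MONOTONIC || clockid == CLOCK_MONOTONIC_RAW) {
        wl->presentation_clock = clockid;
        wl->use_present = true;  /* timestamps are only used when this is set */
    }
}
```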
This has had no effect since libplacebo v4.192.0, and was deprecated
upstream a year ago. Skipping a deprecation period in mpv is justified
by this being a debug/workaround option.
Remove the outdated information about environmental brightness with
respect to --gamma-factor, and mention that the option is deprecated
and subject to future removal. Also deprecate the --gamma-auto option,
as it relies on the same outdated way of doing things.
This brings vo_gpu_next's behavior more in line with vo_gpu. Until now,
the status quo was that `vo_gpu_next` generated subtitles at the full
resolution of the dst crop, even though practical limitations of
libplacebo meant that such subtitles inevitably got cut off at the video
boundaries. This was actually a worse status quo than `vo_gpu`, which
simply constrained subtitles to the video dimensions to prevent these
issues.
With this change, the behavior is the same as `vo_gpu`, which has the
exact same problem w.r.t. interpolation. Users are back to choosing
between --blend-subtitles=yes and not having subtitles in the margins,
or --blend-subtitles=no and not having subtitle interpolation.
There are different pros and cons to each approach, and I still
ultimately plan on allowing libplacebo to interpolate subtitles even in
the black bars, to solve this issue once and for all. But for now, let's
not let perfect be the enemy of good.
No attempt is made to implement `--blend-subs=video`, which I consider
deprecated.
Fixes: https://github.com/mpv-player/mpv/issues/10272
Specifying the id of the target node during stream connect is
deprecated. Instead, the target.object property should be used to link
by target serial or name. Using the name allows us to drop a bunch of
custom code.
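Roughly, the new path looks like this (a simplified sketch; the
surrounding stream setup and connect call are omitted):
```
/* Simplified sketch: select the output device by name via target.object
 * when building the stream properties, then connect with PW_ID_ANY rather
 * than a resolved node id. */
#include <pipewire/pipewire.h>

static struct pw_properties *make_stream_props(const char *sink_name)
{
    struct pw_properties *props = pw_properties_new(
        PW_KEY_MEDIA_TYPE, "Audio",
        PW_KEY_MEDIA_CATEGORY, "Playback",
        PW_KEY_MEDIA_ROLE, "Movie",
        NULL);
    if (props && sink_name) {
        /* PW_KEY_TARGET_OBJECT ("target.object") accepts a serial or a name */
        pw_properties_set(props, PW_KEY_TARGET_OBJECT, sink_name);
    }
    return props;
    /* later: pw_stream_connect(stream, PW_DIRECTION_OUTPUT, PW_ID_ANY, ...) */
}
```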
I actually don't quite understand why this is needed, since in theory we
should already be doing the rotation as part of applying the crop. But I
guess that code doesn't quite work. This is the only way it works, so
it's probably correct. (And note that `vo_gpu` does the same internally)
This fixes an issue where blend stages with multiple passes got
overwritten by later passes, with only the final pass (typically the
overlayp pass) actually being shown.
Alphabetizing the extensions cleans up the code and makes it less
ambiguous where newer extensions should be added. The video line was
also wrapped to 72 characters for cleanliness.
When af_scaletempo2.c:process() detects a format change, it goes back
through mp_scaletempo2_init() to reinitialize everything. However,
mp_scaletempo2.input_buffer is not necessarily reallocated due to a
check in af_scaletempo2_internals.c:resize_input_buffer(). This is a
problem if the number of audio channels increases, since without
reallocating, the buffers for the new channel(s) will at best point to
NULL, and at worst to uninitialized memory.
Since resize_input_buffer() is only called from two places, pull the
size check out into mp_scaletempo2_fill_input_buffer(). This allows
each caller to decide whether it wants to resize or not. We could be
smarter about when to reallocate, but that would add a lot of machinery
for a case I don't expect to be hit often in practice.
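The shape of the refactor is roughly this (an illustrative sketch only;
the struct, field, and function names are simplified stand-ins, not the
real af_scaletempo2 ones):
```
/* Illustrative sketch (simplified names): the caller performs the size
 * check, so a reinit after a channel-count increase always allocates the
 * new per-channel planes instead of leaving them NULL/uninitialized. */
#include <stdlib.h>

struct st2 {
    int channels;
    int buffer_frames;
    float **planes;     /* one buffer per channel */
};

static void resize_input_buffer(struct st2 *p, int channels, int frames)
{
    p->planes = realloc(p->planes, channels * sizeof(*p->planes));
    for (int ch = 0; ch < channels; ch++) {
        float *old = ch < p->channels ? p->planes[ch] : NULL;
        p->planes[ch] = realloc(old, frames * sizeof(float));
    }
    p->channels = channels;
    p->buffer_frames = frames;
}

static void fill_input_buffer(struct st2 *p, int channels, int frames)
{
    /* The "already big enough?" check now lives with the caller, so init
     * can resize unconditionally after a format change. */
    if (channels != p->channels || frames > p->buffer_frames)
        resize_input_buffer(p, channels, frames);
    /* ... append incoming samples to p->planes[ch] ... */
}
```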
Certain combinations of hardware formats require the use of hwmap to
transfer frames between the formats, rather than hwupload, which will
fail if attempted.
To keep the usage of vf_format for HW -> HW transfers as intuitive as
possible, we should detect these cases and do the map operation instead
of uploading.
For now, the relevant cases are moving between VAAPI and Vulkan, and
VAAPI and DRM Prime, in both directions. I have introduced the IMGFMT
entry for Vulkan here so that I can put in the complete mapping table.
That entry isn't useless, either: you can map to Vulkan, run a Vulkan
filter, and then map back to VAAPI for display output.
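Conceptually, the detection is just a lookup over the relevant format
pairs (a sketch, not the actual table or function names used in mpv):
```
/* Conceptual sketch: hw format pairs that must use av_hwframe_map()
 * rather than av_hwframe_transfer_data() (i.e. hwupload). */
#include <stdbool.h>
#include <stddef.h>
#include <libavutil/pixfmt.h>

static const struct { enum AVPixelFormat from, to; } hw_map_pairs[] = {
    {AV_PIX_FMT_VAAPI,     AV_PIX_FMT_VULKAN},
    {AV_PIX_FMT_VULKAN,    AV_PIX_FMT_VAAPI},
    {AV_PIX_FMT_VAAPI,     AV_PIX_FMT_DRM_PRIME},
    {AV_PIX_FMT_DRM_PRIME, AV_PIX_FMT_VAAPI},
};

static bool hw_needs_map(enum AVPixelFormat from, enum AVPixelFormat to)
{
    for (size_t i = 0; i < sizeof(hw_map_pairs) / sizeof(hw_map_pairs[0]); i++) {
        if (hw_map_pairs[i].from == from && hw_map_pairs[i].to == to)
            return true;   /* map; a plain upload would fail */
    }
    return false;          /* fall back to the upload path */
}
```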
Historically, HW -> HW uploads did not exist, so the current code
assumes they will never happen. But as part of introducing Vulkan
support into ffmpeg, we added HW -> HW support to enable transfers
between Vulkan and CUDA.
Today, that means you can use the lavfi hwupload filter with the
correct configuration (and previous changes in this series) but it
would be more convenient to enable HW -> HW in the format filter so
that the transfers can be done more intuitively:
```
--vf=format=fmt=cuda
```
and
```
--vf=format=fmt=vulkan
```
Most of the work here is skipping logic that is specific to SW -> HW
uploads doing format conversion. There is no ability to do inline
conversion when moving between HW formats, so the format must be
mutually understood to begin with.
Additional work needs to be done to enable transfers between VAAPI
and Vulkan, which use mapping rather than uploads. I'll tackle that
in the next change.
Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it's the right one
or not. In most situations where a hardware based filter is used, we
need more control over the device.
In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and this is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).
Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration.
e.g.:
```
mpv --vf=hwupload
```
becomes
```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```
or
```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
If we want to be able to handle conversion between hw formats in filter
chains, then we need to be able to load hwdec_interops from filters, as
the VO is only ever going to initialise one interop, based on its
configuration. That means that in almost all situations, only one of
the required interops will be loaded at the time the filter is
initialised.
The existing code has some assumptions that new hwdec_interops will not
be loaded after the vo has picked one to use. This change fixes two
instances:
* Refusing to load a new hwdec_interop if there is at least one
loaded already.
* Not recalculating the set of formats known to the autoconvert
filter when a new output format shows up. This leads to autoconvert
not knowing that a new format is supported when the hwdec interop is
lazily loaded.
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
ASS must only automatically break at ASCII spaces (\x20), but other
subtitle formats might expect more breaking opportunities.
In particular, non-ASS subs in scripts that typically do not use
(ASCII) spaces to separate words, e.g. CJK, might overflow the screen
if the conversion doesn't insert additional linebreaks (ffmpeg does
not).
Thus, try to enable Unicode linebreaking for converted subs and the OSD
if libass is new enough. The feature may still be unavailable at
runtime if libass wasn't built with Unicode linebreaking support.
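The runtime part boils down to a call like this (a minimal sketch,
assuming libass >= 0.16.0 so that ASS_FEATURE_WRAP_UNICODE is
declared; the wrapper function name is ours):
```
#include <ass/ass.h>

/* Minimal sketch: request Unicode linewrapping and fall back silently if
 * the running libass was built without support for it. */
static void enable_unicode_wrap(ASS_Track *track)
{
    if (ass_track_set_feature(track, ASS_FEATURE_WRAP_UNICODE, 1) < 0) {
        /* Feature unavailable at runtime; ASCII-space wrapping remains. */
    }
}
```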
Previously, running a debug build of mpv would crash with this output
when preinit() at vo_libmpv.c:732 calls control(vo, VOCTRL_PREINIT, NULL):
Swift/Optional.swift:247: Fatal error: unsafelyUnwrapped of nil optional
This comes from this line of code:
var data = UnsafeMutableRawPointer.init(bitPattern: 0).unsafelyUnwrapped
Unsafely unwrapping a nil UnsafeMutableRawPointer.init has always been
UB, but the Swift runtime began asserting on it in debug builds a
couple of macOS versions ago.