It's a bit of an edge case, but since we now allow the disabling of the
software fallback it's possible to have a situation where hwdec
completely fails and the mpv window is still lingering from the previous
item in the playlist. What needs to happen is simply that the vo_chain
should uninit itself and handle force_window if needed. In order to do
that, a new VDCTRL is added that asks vd_lavc whether force_eof was
set. player/video will then start the uninit process, if needed, after
receiving this.
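As a rough sketch of that query, with all names invented for
illustration (they are not mpv's actual identifiers):
```c
#include <stdbool.h>

// Hypothetical sketch: vd_lavc answers the new VDCTRL by reporting
// whether it forced EOF after hwdec failed with the software fallback
// disabled.
struct vd_lavc_state { bool force_eof; };

static bool vd_check_forced_eof(struct vd_lavc_state *st)
{
    return st->force_eof;
}

// player/video queries this after decoding stalls; if it returns true,
// it uninits the vo_chain and applies force_window handling as needed.
```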
Although we can support vulkan multiplane images, cuda lacks any such
support, and so cannot natively import such images for interop. It's
possible that we can do separate exports for each plane in the image
and have it work, but for now, we can selectively disable multiplane
when we know that we'll be consuming cuda frames.
As a reminder, even though cuda is the frame source, interop is one-way,
so the vulkan images have to be imported to cuda before we copy the
frame contents over.
The logic here is slightly more complex than I'd like, but you can't
just set the flag blindly, as that will cause hwframes ctx creation to
fail if the format is packed or is planar rgb. Oh well.
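A minimal sketch of that selective disabling. The libavutil descriptor
API and flag names are real; the exact predicate is my reading of the
above, so treat it as an assumption:
```c
#include <stdbool.h>
#include <libavutil/hwcontext_vulkan.h>
#include <libavutil/pixdesc.h>

// Disable multiplane Vulkan images when the frames will be consumed by
// CUDA, which cannot import them. Only planar YUV qualifies: setting
// the flag for packed formats or planar RGB breaks hwframes creation.
static void disable_multiplane_for_cuda(AVVulkanFramesContext *vkfc,
                                        enum AVPixelFormat sw_format)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(sw_format);
    if (!desc)
        return;
    bool is_rgb    = desc->flags & AV_PIX_FMT_FLAG_RGB;
    bool is_planar = desc->nb_components > 1 &&
                     desc->comp[desc->nb_components - 1].plane > 0;
    if (is_planar && !is_rgb)
        vkfc->flags |= AV_VK_FRAME_FLAG_DISABLE_MULTIPLANE;
}
```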
Vulkan Video Decoding has finally become a reality, as it's now
showing up in shipping drivers, and the ffmpeg support has been
merged.
With that in mind, this change introduces HW interop support for
ffmpeg Vulkan frames. The implementation is functionally complete - it
can display frames produced by hardware decoding, and it can work with
ffmpeg vulkan filters. There are still various caveats due to gaps and
bugs in drivers, so YMMV, as always.
Primary testing has been done on Intel, AMD, and nvidia hardware on
Linux with basic Windows testing on nvidia.
Notable caveats:
* Due to driver bugs, video decoding on nvidia does not work right now,
unless you use the Vulkan Beta driver. It can be worked around, but
requires ffmpeg changes that are not considered acceptable to merge.
* Even if those work-arounds are applied, Vulkan filters will not work
on video that was decoded by Vulkan, due to additional bugs in the
nvidia drivers. The filters do work correctly on content decoded some
other way and then uploaded to Vulkan (e.g. decode with nvdec, upload
with --vf=format=vulkan).
* Vulkan filters can only be used with drivers that support
VK_EXT_descriptor_buffer, which doesn't include Intel ANV as yet.
There is an MR outstanding for this.
* When dealing with 1080p content, there may be some visual distortion
in the bottom lines of frames due to chroma scaling incorporating the
extra hidden lines at the bottom of the frame (1080p content is
actually stored as 1088 lines), depending on the hardware/driver
combination and the scaling algorithm. This cannot be easily
addressed as the mechanical fix for it violates the Vulkan spec, and
probably requires a spec change to resolve properly.
All of these caveats will be fixed in either drivers or ffmpeg, and so
will not require mpv changes (unless something unexpected happens).
If you want to run on nvidia with the non-beta drivers, you can use
this ffmpeg tree with the work-around patches:
* https://github.com/philipl/FFmpeg/tree/vulkan-nvidia-workarounds
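For reference, a minimal way to exercise the new interop (assuming
Vulkan rendering and the hwdec being exposed as `vulkan`):
```
mpv --gpu-api=vulkan --hwdec=vulkan video.mkv
```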
There is no good reason to do so; both zimg and swscale support
full-range output. If something downstream always expects limited-range
yuv, that needs to be communicated in a different way.
c784820454 introduced a bool option type
as a replacement for the flag type, but didn't actually transition and
remove the flag type because it would have been too much mundane work.
This mapping isn't actually relevant until we have the Vulkan interop
merged, and it requires a newer version of libavutil than our minimum
requirement. So I'm going to remove it from master and put it in the
interop PR.
Fixes #10813
Wayland VO that can display images from either vaapi or drm hwdec
The PR adds the following changes:
1. a context_wldmabuf context with no gl dependencies
2. no-op ra_wldmabuf and dmabuf_interop_wldmabuf objects (no-op
because there is no need to map/unmap the drmprime buffer, nor to
manage any textures)
Tested on both x86_64 and rk3399 AArch64
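A minimal usage sketch, assuming the VO is exposed as `dmabuf-wayland`:
```
mpv --vo=dmabuf-wayland --hwdec=vaapi video.mkv
```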
vaapi allows for implicit conversion on upload, which has some
relevance as the set of supported source formats is larger than the
set of displayable formats. In theory, this allows for offloading the
conversion to the GPU - if you have any confidence in the hardware
and/or driver's ability to do the conversion.
Today, we actually track the 'input' and 'output' upload formats
separately, all the way up to the point where we check whether the
original source format is an accepted 'output' format, and reject it
if it is not.
This means that we're essentially ignoring all the work we did to track
those 'input' formats in the first place. But it also means that it's a
simple change to compare against the 'input' format instead. The logic
is already in place to do best format selection on both sides.
I imagine that if I read through the history here, wm4 tried to
implement all of this properly and then gave up in disgust after seeing
vaapi mangle various conversions.
This is particularly interesting for vo-dmabuf-wayland where it is only
possible to display the subset of valid vaapi formats that are
supported by the compositor, yet all playback has to go through vaapi.
Users will then be able to take advantage of all possible vaapi formats
to avoid having to do software format conversion.
A few years ago, wm4 got sufficiently annoyed with how vaapi image
format support was being discovered that he flipped the table and
introduced the shit list (which just included vaapi) to hard-code the
set of supported formats.
While that might have been necessary at the time, I haven't been able
to find a situation where the true list of supported formats was unsafe
to use. We filter down the list based on what the vo reports - and the
vo is already doing a thorough testing of formats, and if a format
makes it through that gauntlet, it does actually work.
Interestingly, as far as I can tell, the hwdec_vaapi probing code was
already good enough at the time (also written by wm4), so perhaps the
key difference here is that the driver side of things has improved.
I dug into this because of the support for the 422/444 high bit depth
vaapi formats I added to ffmpeg. These are obviously not in the hard
coded list today, but they work fine.
Finally, although it's positioned as a vaapi thing, it's really just
Intel specific, as the AMD vaapi driver has never exposed support for
anything except the formats used by the decoder/encoder profiles.
Certain combinations of hardware formats require the use of hwmap to
transfer frames between the formats, rather than hwupload, which will
fail if attempted.
To keep the usage of vf_format for HW -> HW transfers as intuitive as
possible, we should detect these cases and do the map operation instead
of uploading.
For now, the relevant cases are moving between VAAPI and Vulkan, and
VAAPI and DRM Prime, in both directions. I have introduced the IMGFMT
entry for Vulkan here so that I can put in the complete mapping table.
It's actually not useless, as you can map to Vulkan, use a Vulkan
filter and then map back to VAAPI for display output.
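As an illustrative sketch of that round trip (in practice a Vulkan
filter would sit between the two mappings):
```
mpv --hwdec=vaapi --vf=format=fmt=vulkan,format=fmt=vaapi video.mkv
```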
Historically, HW -> HW uploads did not exist, so the current code
assumes they will never happen. But as part of introducing Vulkan
support into ffmpeg, we added HW -> HW support to enable transfers
between Vulkan and CUDA.
Today, that means you can use the lavfi hwupload filter with the
correct configuration (and previous changes in this series) but it
would be more convenient to enable HW -> HW in the format filter so
that the transfers can be done more intuitively:
```
--vf=format=fmt=cuda
```
and
```
--vf=format=fmt=vulkan
```
Most of the work here is skipping logic that is specific to SW -> HW
uploads doing format conversion. There is no ability to do inline
conversion when moving between HW formats, so the format must be
mutually understood to begin with.
Additional work needs to be done to enable transfers between VAAPI
and Vulkan which uses mapping, rather than uploads. I'll tackle that
in the next change.
Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it's the right one
or not. In most situations where a hardware based filter is used, we
need more control over the device.
In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and this is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).
Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration.
e.g.:
```
mpv --vf=hwupload
```
becomes
```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```
or
```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
If we want to be able to handle conversion between hw formats in filter
chains, then we need to be able to load hwdec_interops from filters, as
the VO is only ever going to initialise one interop, based on its
configuration. That means that in almost all situations, only one of
the required interops will be loaded at the time the filter is
initialised.
The existing code has some assumptions that new hwdec_interops will not
be loaded after the vo has picked one to use. This change fixes two
instances:
* Refusing to load a new hwdec_interop if there is at least one
loaded already.
* Not recalculating the set of formats known to the autoconvert
filter when a new output format shows up. This leads to autoconvert
not knowing that a new format is supported when the hwdec interop is
lazily loaded.
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
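A sketch of what the lookup can then look like. hw_imgfmt is the real
field referred to above; the helper name, signature, and iteration API
are assumptions for illustration:
```c
#include <stddef.h>
#include "video/hwdec.h"  // assumed mpv header for mp_hwdec_ctx

// Find a device by the image format it produces, instead of by device
// type. Assumes hwdec_devices_get_n() returns NULL past the end.
struct mp_hwdec_ctx *find_hwdec_by_imgfmt(struct mp_hwdec_devices *devs,
                                          int hw_imgfmt)
{
    for (int n = 0; ; n++) {
        struct mp_hwdec_ctx *ctx = hwdec_devices_get_n(devs, n);
        if (!ctx)
            return NULL;
        if (ctx->hw_imgfmt == hw_imgfmt)
            return ctx;
    }
}
```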
This was introduced in 04257417 without a clear explanation of the bug
it was solving, so I have no idea if it's still needed (or why it ever
was). And it definitely creates unexpected behavior, e.g. forced
clipping when converting between float and floatp.
I therefore think we should simply remove this logic and see if it
regresses anything else, then fix those other bugs *properly* (if
they're still around).
Fixes #9979
When I introduced the concept of lazy loading of hwdecs by img format,
I did not propagate the probing flag correctly, leading to the new
normal loading path not running with probing set, meaning that any
errors would show up and create unnecessary noise.
This change fixes this regression.
A few years ago, in 25e70f4743c44db59fe8fc902c93a966d95ba8c3, we
disabled the vavpp deinterlacing auto-filter on the basis that it
caused crashes on _some_ hardware with _some_ driver version(s). But
since then, the situation has improved. There is still a limitation
where you can't turn deinterlacing on on-the-fly with AMD, but it
doesn't crash anymore (that is #7388).
So, given that AMD users have to set up the deinterlacing filter
manually either way, let's re-add the auto-filter for Intel users.
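For AMD users, the manual setup remains something like the following
(illustrative only; the available deint modes depend on the driver):
```
mpv --hwdec=vaapi --vf=vavpp=deint=motion-adaptive video.ts
```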
Historically, we have treated hwdec interop loading as a completely
separate step from loading the hwdecs themselves. Some hwdecs need an
interop, and some don't, and users generally configure the exact
hwdec they want, so interops that aren't relevant for that hwdec
shouldn't be loaded to save time and avoid warning/error spam.
The basic approach here is to recognise that interops are tied to
hwdecs by imgfmt. The hwdec outputs some format, and an interop is
needed to get that format to the vo without read back.
So, when we try to load an hwdec, instead of just blindly loading all
interops as we do today, let's pass the imgfmt in and only load
interops that work for that format. If more than one interop is
available for the format, the existing logic (whatever it is) will
continue to be used to pick one.
We also have one callsite in filters where we seem to pre-emptively
load all the interops. It's probably possible to trace down a specific
format but for now I'm just letting it keep loading all of them; it's
no worse than before.
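A hypothetical sketch of the loading rule, with all names invented for
illustration:
```c
#include <stdbool.h>

// Given the imgfmt an hwdec outputs, load only the interops that can
// consume that format, instead of blindly loading all of them.
struct interop_driver {
    const char *name;
    bool (*supports_imgfmt)(int imgfmt);
    void (*load)(void *ra_ctx);
};

static void load_interops_for_imgfmt(const struct interop_driver **drivers,
                                     void *ra_ctx, int imgfmt)
{
    for (int n = 0; drivers[n]; n++) {
        // If several interops match the format, the existing selection
        // logic still picks among them.
        if (drivers[n]->supports_imgfmt(imgfmt))
            drivers[n]->load(ra_ctx);
    }
}
```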
You may notice there is no documentation update - and that's because
the current docs say that when the interop mode is `auto`, the interop
is loaded on demand. So reality now reflects the docs. How nice.
Every instance of m_obj_list is a constant, and for all of them the
field is true. Just remove the field altogether.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
This seems to work on gcc, clang and mingw as-is, but I made it
conditional on __GNUC__ just in case, even though I can't figure out
which compilers we care about that don't export this define.
Also replace all instances of assert(0) in the code with
MP_UNREACHABLE(), which is a strict improvement.
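A minimal sketch of such a macro under the stated __GNUC__ condition
(mpv's actual definition may differ in detail):
```c
#include <assert.h>
#include <stdlib.h>

// Marks code paths that must never execute. On GNU-compatible
// compilers, also tells the optimizer the path is unreachable, which
// still holds in NDEBUG builds where assert(0) compiles away.
#if defined(__GNUC__)
#define MP_UNREACHABLE() do { assert(0); __builtin_unreachable(); } while (0)
#else
#define MP_UNREACHABLE() do { assert(0); abort(); } while (0)
#endif
```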
Today, validation is only possible for string type options. But
there's no particular reason why it needs to be restricted in this
way, and there are potential uses: allowing other option types to be
validated without forcing them to reimplement parsing from scratch.
The first part, simply making the validation function an explicit
field instead of overloading priv is simple enough. But if we only do
that, then the validation function still needs to deal with the raw
pre-parsed string. Instead, we want to allow the value to be parsed
before it is validated. That in turn leads to us having validator
functions that should be type aware. Unfortunately, that means we need
to keep the explicit macro like OPT_STRING_VALIDATE() as a way to
enforce the correct typing of the function. Otherwise, we'd have to
have the validator take a void * and hope the implementation can cast
it correctly.
For help, we don't have this problem, as help doesn't look at the
value.
Then, we turn validators that are really help generators into explicit
help functions and where a validator is help + validation, we split
them into two parts.
I have, however, left functions that need to query information for both
help and validation as single functions to avoid code duplication.
In this change, I have not added any other OPT_FOO_VALIDATE() macros
as they are not needed, but I will add some in a separate change to
illustrate the pattern.
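To illustrate the pattern with hypothetical names (the field name and
macro shape here are assumptions, not mpv's exact definitions):
```c
#include <stddef.h>
#include "options/m_option.h"  // assumed header for m_option/bstr/mp_log

// The option value is parsed first, then handed to a validator that
// knows its real type, instead of the raw pre-parsed string.
typedef int (*m_opt_string_validate_fn)(struct mp_log *log,
                                        const struct m_option *opt,
                                        struct bstr name,
                                        const char **value);

// The macro's only job is to make the compiler enforce the function
// type: the ternary is a no-op that fails to compile for a validator
// with the wrong signature. The `validate` field is an assumption.
#define OPT_STRING_VALIDATE(field, validate_fn)                        \
    OPT_STRING(field),                                                 \
    .validate = (void *)(1 ? (validate_fn)                             \
                           : (m_opt_string_validate_fn)NULL)
```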
Essentially, this lets video.c decide whether to consider a video track
cover art, instead of having the decoder wrapper use the lower level
sh_stream flag.
Some pain because of the dumb threading shit. Moving the code further
down to make some of it part of the lock should not change behavior,
although it could (framedrop nonsense).
This commit should not change actual behavior, and is only preparation
for the following commit.
Do not make resetting the "access" filters reset the queue itself. This
is more flexible, and will be used in a later commit.
Also, if the queue is not in the reset state while the input access
filter is reset, make it immediately request data again. This is more
consistent, because it'll enter the state it "should" be in right
away, rather than when the filter's process function is called at an
(essentially) random point in the future. This means the filter graph
will resume work on its own
if the queue was not reset before filter reset.
This could affect the only current user of f_async_queue, the code for
the --vd-queue-enable/--ad-queue-enable feature in f_decoder_wrapper.
But it looks like this already uses it in a compatible way.
This is a kind of bad hack (with bad implementation) to paint over other
problems of the filter system. The main problem is that some filters
might be left with pending frames if the filter runner is "paused",
which we don't want. To be used in a later commit.
It's relevant in some obscure corner cases (EDL file that has a segment
without audio). Didn't test what's actually going on (is ad_lavc.c
behaving wrong? is libavcodec behaving wrong or in an unexpected way? is
lavc_process wrong?) and just patched it over with some bullshit, so the
fix might be too complicated, and could be reworked at some later point.
This sure is a real data flow fuckmess.
scaletempo2 is a new audio filter for playing back audio at modified
speed. It is based on chromium commit
51ed77e3f37a9a9b80d6d0a8259e84a8ca635259, and sounds subjectively
better than the existing implementations scaletempo and rubberband.
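To select it explicitly, together with a speed change:
```
mpv --af=scaletempo2 --speed=1.5 video.mkv
```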
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it or such. It was added to deal with SPDIF. The
implementation was part of fill_audio_out_buffers() - the entire
function is something whose complexity exploded in my face, and which I
want to clean up, and this is hopefully a first step.
Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.
For PCM audio, this does not work on samples anymore. This makes it much
worse. But for PCM you can use saner mechanisms that sound better. Also,
something about PTS tracking is wrong. But not wasting more time on
this.
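Assuming this is the mode selected with --video-sync=display-adrop
(which matches the SPDIF rationale above), typical usage would be:
```
mpv --video-sync=display-adrop video.mkv
```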
This sucks, but is helpful for testing.
Obviously, it would be much nicer if there were a way to specify _all_
scaler options per filter (if the user wanted), instead of always using
the global options. But this is "too hard" for now. For testing, it is
extremely convenient to select the scaler backend, so add this option,
but make clear that it could go away. We'd delete it once there is a
better mechanism for this.
When the player core requests new frames from the filter, this is called
external/recursive filtering, which acts slightly differently from when
filters request new data internally. Mainly this is so the external user
doesn't have to call mp_filter_graph_run() just to get a frame. This
causes a number of complications, and the short version is that until
now, mp_filter_graph_run() has unnecessarily returned true in the
current common case, which made the playloop run too often for no
reason.
The problem is that when a mp_pin is read externally, updating the same
mp_pin during recursive filtering flagged external_pending when the
result was written, which made mp_filter_graph_run() return true, which
made the playloop call mp_filter_graph_run() again. This is redundant
because the caller is obviously checking the new state of the mp_pin
immediately.
The only situation in which external_pending really must be set is if
_another_ pin is changed. In theory, we could also unset it if the set
of "external" pins that are not in a signaled state becomes empty, but
we don't track that in a convenient way.
This commit removes the redundant signaling, and avoids running the
playloop an additional time for each video and audio frame (as it
actually was planned from the beginning, but duh).
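A hypothetical sketch of the signaling rule described above, with all
names invented for illustration:
```c
#include <stdbool.h>

// During an external/recursive request, updating the pin the external
// caller is already reading must not set external_pending (the caller
// checks that pin immediately anyway); only updates to *other*
// external pins should wake the playloop again.
struct graph_state {
    bool in_external_run;      // inside an externally driven filter run
    const void *external_pin;  // the pin the external caller is reading
    bool external_pending;     // makes mp_filter_graph_run() return true
};

static void note_pin_update(struct graph_state *g, const void *pin)
{
    if (!g->in_external_run || pin != g->external_pin)
        g->external_pending = true;
}
```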