Every format that was not detected as a video format was added to the
audio tracks. This resulted in e.g. YouTube storyboards ending up
in the list of audio tracks.
Now formats that are known to be neither video nor audio formats will
no longer end up in any track list.
Formats where it is unknown whether they contain video or audio are only
added to tracks if `force_all_formats` is used; otherwise only formats
that are known to contain video or audio become video or audio tracks,
respectively.
https://github.com/yt-dlp/yt-dlp/issues/4373#issuecomment-1186637357
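To make the decision concrete, here is a minimal C sketch of the selection
rules described above; the real change lives in ytdl_hook.lua, and the enum,
the `want` parameter and the force_all_formats flag are illustrative
stand-ins for the script's own fields.

```c
#include <stdbool.h>

// Rough classification of a yt-dlp format entry. Hypothetical; the Lua
// script derives this from fields like vcodec/acodec.
enum format_kind {
    KIND_VIDEO,   // known to contain video
    KIND_AUDIO,   // known to contain audio
    KIND_NONE,    // known to be neither (e.g. storyboards)
    KIND_UNKNOWN, // not enough metadata to tell
};

// Should this format become a track of the requested kind?
static bool add_as_track(enum format_kind kind, enum format_kind want,
                         bool force_all_formats)
{
    if (kind == KIND_NONE)
        return false;             // never ends up in any track list
    if (kind == KIND_UNKNOWN)
        return force_all_formats; // only added when forcing all formats
    return kind == want;          // otherwise require a known match
}
```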
Add xoshiro as a PRNG implementation instead of relying
on srand() and rand() from the C standard library. This,
in particular, lets us avoid platform-defined behavior with
respect to threading.
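For reference, a self-contained sketch of a xoshiro256** step function (the
public-domain algorithm by Blackman and Vigna); mpv's actual wrapper API,
seeding, and any locking around the state may differ.

```c
#include <stdint.h>

static uint64_t s[4]; // generator state; must be seeded to non-zero

static inline uint64_t rotl(uint64_t x, int k)
{
    return (x << k) | (x >> (64 - k));
}

// One xoshiro256** step: returns 64 random bits and advances the state.
static uint64_t xoshiro256ss_next(void)
{
    uint64_t result = rotl(s[1] * 5, 7) * 9;
    uint64_t t = s[1] << 17;

    s[2] ^= s[0];
    s[3] ^= s[1];
    s[1] ^= s[2];
    s[0] ^= s[3];
    s[2] ^= t;
    s[3] = rotl(s[3], 45);

    return result;
}
```

Because the state is plain data owned by mpv, it can be guarded by a lock or
kept per-thread, instead of relying on whatever threading guarantees the libc
rand() happens to have.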
Since the previous commit introduced the notion of a features dictionary
that conveniently tells us whether or not to use a feature in a simple
yes/no, we can make use of this everywhere in the build. Instead of
doing something like 'if foo.found()', change it to 'if feature['foo']'
instead. This enforces a consistent standard instead of having a lot of
different possible combinations of booleans that may or may not do
something.
mpv has a ton of defines that are generated during building. Previously,
the meson build just had this as a giant wall of text that manually
set each one, but we can do this smarter. Instead, change the "features"
object to a dictionary and have it hold the name of the feature and its
value (true/false on whether it is enabled). Then at the end, just loop
through it and reformat the name of the feature so it becomes
HAVE_FEATURE. A side effect of this is that a lot of extra defines are
generated that aren't actually used in the code, but the waf build
worked like this for years anyway. A nice result of this is that the
internal use of foo['use'] can be completely eliminated and replaced with
feature['foo'] where needed.
The build was a bit overzealous with using dictionaries. These are fine
for when the feature checking is more complicated, but there's no point
in having them for the simpler things. This also eliminates the usage
of the 'name' key completely.
One would expect e.g.
`--script-opts=ytdl_hook-all_formats=no --ytdl-format=bestaudio` and
`--script-opts=ytdl_hook-all_formats=yes --ytdl-format=bestaudio`
to play the exact same tracks without manual intervention.
This already worked when two formats were requested.
For a single format with `all_formats=yes`, it would also play a track
that was not requested, when one was available. This was inconsistent
with the behavior of `all_formats=no` (default), which would not play a
second track when only a single one was requested.
This, combined with #10395, now plays the exact same tracks with
`all_formats=yes` as without, even when only one format is requested.
In wayland-protocols 1.25, xdg-shell got a version bump which added the
configure_bounds event. The compositor can send this to clients to
indicate that they should not resize past a certain size. For mpv, we'll
choose to only listen to this on reconfig events (i.e. when the window
first appears and if the video resolution changes later in the
playlist). However, this behavior is still exposed as a user option
(default on) because it will necessarily conflict with a user setting a
specific geometry size and/or window scale. Presumably, if someone is
setting a really large size that goes beyond the bounds of their
monitor, they actually want it like that. The wayland-protocols version
is newer-ish, but we can get around having to poke the build system by
just using a define that exists in the generated xdg-shell header.
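As a rough illustration of the approach (not mpv's actual code): the event
handler only records the bounds, and the reconfig path clamps the requested
size when the option is enabled. The XDG_TOPLEVEL_CONFIGURE_BOUNDS_SINCE_VERSION
guard stands in for the generated xdg-shell define mentioned above (name
assumed), and the struct and function names are illustrative.

```c
#include <stdint.h>

struct xdg_toplevel; // from the generated xdg-shell client header

struct wl_sketch {
    int32_t bound_w, bound_h; // last bounds sent by the compositor, 0 = none
    int opt_configure_bounds; // the user option described above
};

#ifdef XDG_TOPLEVEL_CONFIGURE_BOUNDS_SINCE_VERSION
// xdg_toplevel listener callback: just remember what the compositor sent.
static void handle_configure_bounds(void *data, struct xdg_toplevel *toplevel,
                                    int32_t width, int32_t height)
{
    struct wl_sketch *wl = data;
    wl->bound_w = width;
    wl->bound_h = height;
}
#endif

// Called on reconfig: shrink the requested window size to the bounds, if any.
static void apply_bounds(struct wl_sketch *wl, int32_t *w, int32_t *h)
{
    if (!wl->opt_configure_bounds)
        return;
    if (wl->bound_w > 0 && *w > wl->bound_w)
        *w = wl->bound_w;
    if (wl->bound_h > 0 && *h > wl->bound_h)
        *h = wl->bound_h;
}
```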
Unexpectedly, x11->screenrc actually doesn't update with randr events.
In a multimonitor configuration it could easily be wrong depending on
the user's layout. While it's tempting to modify the logic so screenrc
changes with randr events, this rectangle is currently used everywhere
and as far as we know, this pretty much works fine. Instead, just loop
over the randr displays that we have and select the one that overlaps
with the winrc. This follows the same logic as the fps selection in the case
of the mpv window overlapping multiple monitors (the last one is
selected).
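A simplified sketch of that selection (mpv's rectangle type and the randr
bookkeeping look different, but the idea is the same):

```c
#include <stdbool.h>

struct rc { int x0, y0, x1, y1; }; // stand-in for mpv's rectangle type

static bool rcs_overlap(struct rc a, struct rc b)
{
    return a.x0 < b.x1 && b.x0 < a.x1 &&
           a.y0 < b.y1 && b.y0 < a.y1;
}

// Pick the randr display whose rectangle overlaps the window rectangle.
// Like the fps selection, the last overlapping display wins.
static int select_current_display(const struct rc *displays, int num_displays,
                                  struct rc winrc)
{
    int current = -1;
    for (int i = 0; i < num_displays; i++) {
        if (rcs_overlap(displays[i], winrc))
            current = i;
    }
    return current;
}
```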
Tracks are marked as default tracks based on what yt-dlp/youtube-dl
returns in the field `requested_formats`. The problem is that this field
only exists when there is more than one requested format.
So `ytdl-format=bestvideo+bestaudio` would have that field,
but `ytdl-format=bestaudio` would not,
leading to no tracks being marked as default tracks.
The requested formats can also be found under `requested_downloads`,
which exists regardless of the number of requested formats.
However, when there is more than one requested format,
`requested_downloads` doesn't contain those formats directly and instead
has a field `requested_formats` that is identical to the other
`requested_formats`. Therefore use `requested_downloads` as a fallback
for when `requested_formats` doesn't exist.
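Sketched in C purely for illustration (the real logic is Lua in
ytdl_hook.lua, operating on yt-dlp's parsed JSON); the struct and field names
mirror the JSON fields but are otherwise hypothetical:

```c
#include <stddef.h>

struct ytdl_format { const char *format_id; /* ... */ };

struct ytdl_json {
    // Only present when more than one format was requested.
    const struct ytdl_format *requested_formats;
    size_t num_requested_formats;
    // Always present. For multiple formats each entry carries its own
    // requested_formats list, identical to the top-level one, so the
    // top-level field is still preferred when it exists.
    const struct ytdl_format *requested_downloads;
    size_t num_requested_downloads;
};

// Return the list of requested formats, falling back to requested_downloads.
static const struct ytdl_format *get_requested(const struct ytdl_json *json,
                                               size_t *count)
{
    if (json->requested_formats) {
        *count = json->num_requested_formats;
        return json->requested_formats;
    }
    *count = json->num_requested_downloads;
    return json->requested_downloads;
}
```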
ae768a1e14 forgot to bump the required
libdrm version. However, Debian 11 just barely misses the requirement,
which is a good reason not to require it unconditionally anyway.
The older overlay based drmprime hwdec should be preferred to the new
texture mapping one. This is for a few reasons:
1. In any situation where both hwdecs work, it's probably right to use
the more mature one by default, for now.
2. It seems like the overlay path primarily works on older SoCs
where the texture path is less performant, and in at least one
tested case is visually buggy, so you definitely want it to be
tried first.
3. In situations where the old hwdec doesn't work, it will fall through
to the new one.
sfan5 found a few things after I pushed the change, so this fixes them.
* Use-after-free on drm_device_path
* Not comparing render_fd against -1
* Not handling dup() errors
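Not the actual diff, but a sketch of what those three fixes amount to (the
names are illustrative):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

struct drm_sketch {
    char *device_path; // owned copy, so it can't be freed under us
    int render_fd;     // -1 when no render node is available
};

static int init_drm_sketch(struct drm_sketch *st, const char *path,
                           int render_fd)
{
    st->device_path = path ? strdup(path) : NULL; // avoid the use-after-free
    st->render_fd = -1;

    if (render_fd != -1) {            // explicitly compare against -1
        st->render_fd = dup(render_fd);
        if (st->render_fd == -1)      // and handle dup() failure
            return -1;
    }
    return 0;
}
```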
This gives pull-based AOs the chance to play all queued audio.
Also, it will make sure that the audio has finished playing, so we can
reinitialize the AO if format changes are necessary.
Fixes #10018
Fixes #9835
Fixes #8904
In the confusing landscape of hardware video decoding APIs, we have had
a long standing support gap for the v4l2 based APIs implemented for the
various SoCs from Rockchip, Amlogic, Allwinner, etc. While VAAPI is the
de facto default for desktop GPUs, the developers who work on these SoCs
(who are not the vendors!) have preferred to implement kernel APIs
rather than maintain a userspace driver as VAAPI would require.
While there are two v4l2 APIs (m2m and requests), and multiple forks of
ffmpeg where support for those APIs languishes without reaching
upstream, we can at least say that these APIs export frames as DRMPrime
dmabufs, and that they use the ffmpeg drm hwcontext.
With those two constants, it is possible for us to write a
hwdec-interop without worrying about the mess underneath - for the most
part.
Accordingly, this change implements a hwdec-interop for any decoder
that produces frames as DRMPrime dmabufs. The bulk of the heavy
lifting is done by the dmabuf interop code we already had from
supporting vaapi, and which I refactored for reusability in a previous
set of changes.
When we combine that with the fact that we can't probe for supported
formats, the new code in this change is pretty simple.
This change also includes the hwcontext_fns that are required for us to
be able to configure the hwcontext used by `hwdec=drm-copy`. This is
technically unrelated, but it seemed a good time to fill this gap.
From a testing perspective, I have directly tested on a RockPRO64,
while others have tested with different flavours of Rockchip and on
Amlogic, providing m2m coverage.
I have some other SoCs that I need to spin up to test with, but I don't
expect big surprises, and when we inevitably need to account for new
special cases down the line, we can do so - we won't be able to support
every possible configuration blindly.
Whether or not the GNOME project has a tendency to make life
difficult for anyone outside their ecosystem, the user manual is
no place for childish rants such as this.
Keep it to what is relevant for users.
I already added the equivalent logic for dmabuf_interop_pl previously
but I skipped the GL support because importing dmabufs into GL requires
explicitly providing the DRM format, and if you are taking a
multi-plane format and trying to treat each plane as a separate layer,
you need to come up with a DRM format for each synthetic layer.
But my initial testing has shown that the RockPRO64 board I've got for
working on the drmprime hwdec will only produce NV12 in a single-layer,
multi-plane format, and it doesn't have Vulkan support, so I have had to
tackle the GL multi-plane problem.
To that end, this change introduces the infrastructure to provide new
formats for synthetic layers. We only have lookup code for NV12 and
P010 as these were the only ones I could test.
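A condensed sketch of what that lookup amounts to (the real table sits in
the GL dmabuf interop; the function name here is illustrative):

```c
#include <stdint.h>
#include <drm_fourcc.h>

// Map a multi-plane DRM format to the per-plane format used for each
// synthetic GL layer. Only NV12 and P010 are known, as described above.
static uint32_t synthetic_layer_format(uint32_t image_format, int plane)
{
    switch (image_format) {
    case DRM_FORMAT_NV12:
        return plane == 0 ? DRM_FORMAT_R8 : DRM_FORMAT_GR88;
    case DRM_FORMAT_P010:
        return plane == 0 ? DRM_FORMAT_R16 : DRM_FORMAT_GR1616;
    default:
        return 0; // unknown: can't synthesize per-plane layers
    }
}
```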
Annoyingly, libva and libdrm use different structs to describe dmabufs
and if we are going to support drmprime, we must pick one format and do
some shuffling in the other case.
I've decided to use AVDRMFrameDescriptor as our internal format as this
removes the libva dependency from dmabuf_interop. That means that the
future drmprime hwdec will be able to populate it directly and the
existing hwdec_vaapi needs to copy the struct members around, but
that's cheap and not a concern.
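The shuffling is essentially a field-by-field copy from libva's descriptor
into libavutil's. A hedged sketch, with error handling and any
format/modifier fixups omitted (not the exact mpv code):

```c
#include <stdint.h>
#include <libavutil/hwcontext_drm.h>
#include <va/va_drmcommon.h>

// Copy a VADRMPRIMESurfaceDescriptor into an AVDRMFrameDescriptor so the
// rest of the interop only ever sees the libavutil struct.
static void va_to_drm_desc(const VADRMPRIMESurfaceDescriptor *va,
                           AVDRMFrameDescriptor *drm)
{
    drm->nb_objects = va->num_objects;
    for (uint32_t i = 0; i < va->num_objects; i++) {
        drm->objects[i].fd = va->objects[i].fd;
        drm->objects[i].size = va->objects[i].size;
        drm->objects[i].format_modifier = va->objects[i].drm_format_modifier;
    }

    drm->nb_layers = va->num_layers;
    for (uint32_t i = 0; i < va->num_layers; i++) {
        drm->layers[i].format = va->layers[i].drm_format;
        drm->layers[i].nb_planes = va->layers[i].num_planes;
        for (uint32_t j = 0; j < va->layers[i].num_planes; j++) {
            drm->layers[i].planes[j].object_index = va->layers[i].object_index[j];
            drm->layers[i].planes[j].offset = va->layers[i].offset[j];
            drm->layers[i].planes[j].pitch = va->layers[i].pitch[j];
        }
    }
}
```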
With the files renamed, we can now disentangle the shared private
struct between the interops and hwdec_vaapi. We need this separation
to allow the future drmprime hwdec to use the interops.
This is the first in a series of changes that will introduce a drmprime
hwdec. As our vaapi hwdec is based around exporting surfaces as
drmprime dmabufs, we've actually got a lot of useful code already in
place in the GL/PL interops. I'm going to reorganise and adjust this
code to make the interops usable with the new hwdec as well.
The first step is to rename the files and functions. There are no
functional or other changes here. They will come next.
Avoids another pitfall on systems where the first card has a primary
node but is not capable of KMS. With this change --drm-context=drm
should work correctly out-of-the-box in all cases.
On S905X (meson) boards, drmModeAtomicCommit called from
disable_video_plane in hwdec_drmprime_drm.c might still be running when
another call is made from queue_flip in context_drm_egl.c.
This causes an EBUSY error in queue_flip, which causes mpv to hang.
The example given in #3024 would not play the correct video when
combined with `--ytdl-raw-options=yes-playlist=`.
Allowing `youtube:tab` as an extractor and correcting the id check fixes
that.
This is somewhat academic for now, as we explicitly ask for separate
layers and the scenarios where multi-plane images are required also use
complex formats that cannot be decomposed after the fact, but
nevertheless it is possible for us to consume simple multi-plane
images where there is one layer with n planes instead of n layers with
one plane each.
In these cases, we just treat the planes the same as we would if they
were each in a separate layer and everything works out.
It ought to be possible to make this work for OpenGL but I couldn't
wrap my head around how to provide the right DRM fourcc when
pretending a plane is a layer by itself. So I've left that
unimplemented.
A VRAM memory leak was present in d3d11 when `idle=yes` is set and
playback stops for an item. This patch fixes the issue by re-enabling
some of the code that is otherwise only used for diagnostics.
Generally, the hard-coded sizes used for the OSC elements are
comfortable regardless of the font used, but the timecode fields have
relatively many characters, and so are affected to a greater degree by
fonts with a wider or narrower average character width than expected.
This allows users to adjust the space reserved for the timecode fields to
compensate.
libplacebo 4.157 [1] renamed context.h to log.h and left a compatibility
header. In 5.x, this header has been removed.
Since we require libplacebo 4.157 to build mpv, we can just use log.h to
fix compatibility with 5.x.
[1]: 2459200a13
Signed-off-by: Coelacanthus <coelacanthus@outlook.com>