As suggested by the zimg author. This is mostly related to XYZ support.
It's unclear whether this works. Judging by the only XYZ test sample we
know of, and the next commits that consume the pixfmt, it looks wrong.
According to the zimg author, YUV->GREY conversion does not even read
the chroma planes, as long as no matrix conversion is involved. Since we
try to avoid the latter anyway by forcing the source parameters on the
target image, passing only the Y plane will not help with anything.
An unscientific test seems to confirm this, so remove it.
This would probably help with libswscale (I didn't test this), but on
the other hand, libswscale will rarely be used in cases where we can
extract the Y plane. (Except nv12, which should probably be added to the
zimg wrapper's unpacking.)
Also update the comment, explaining both why this defaulting matters and
why we use BT.2020 instead.
tl;dr: BT.709 clips even the one test file we *do* have, so if we don't
handle XYZ "natively" in vo_gpu, we might as well at least handle it in
a way that runs less risk of clipping.
This no longer hard-codes BT.709, it converts to whatever primaries are
tagged in the same metadata struct. The actual BT.709 defaulting comes
from `mp_image_params_guess_csp`.
This will perform conversion and scaling of video with zimg, if
--sws-allow-zimg is used.
The performance probably depends on how well the compiler optimizes the
RGB pack code in zimg.c, which is written in C.
Awful shit. I probably wouldn't accept this code from someone else, just
so you know.
The idea is that a sws_utils user can automatically use zimg without
large code changes. Basically, laziness. Since zimg support is still
very new, and I don't want anything to break just because zimg was
enabled at build time, an option needs to be set to enable it. (I have
especially obscure stuff in mind, which is all that libswscale is used
for in mpv.)
This _still_ doesn't cause zimg to be used anywhere, because the
sws_utils user has to opt-in by setting allow_zimg. This is because some
users depend on certain libswscale features.
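For illustration, the opt-in at the sws_utils level would look roughly
like this (a minimal sketch; mp_sws_alloc()/mp_sws_scale() are the
existing entry points, allow_zimg is the new flag, and the exact
signatures/return conventions are illustrative):

    #include "video/sws_utils.h"
    #include "video/mp_image.h"

    // Convert/scale src into dst, letting the wrapper pick zimg if it can.
    static bool convert(struct mp_image *dst, struct mp_image *src)
    {
        struct mp_sws_context *sws = mp_sws_alloc(NULL);  // talloc parent NULL
        if (!sws)
            return false;
        sws->allow_zimg = true;            // opt in; else libswscale is used
        bool ok = mp_sws_scale(sws, dst, src) >= 0;
        talloc_free(sws);
        return ok;
    }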
This provides a very similar API to sws_utils.h, which can be used to
convert and scale from one mp_image to another.
This commit adds only the code, but does not use it anywhere.
The code is quite preliminary and barely tested. It supports only a few
pixel formats, and will return failure for many others. (Unlike
libswscale, which tries to support anything that FFmpeg knows.)
zimg itself accepts only planar formats. Supporting other formats
requires manual packing/unpacking. (Compared to libswscale, the zimg API
is generally lower level, but allows for more flexibility.) Only BGR0
output was actually tested. It appears to work.
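For orientation, usage is meant to look roughly like this (a sketch; an
allocate/convert pair along the lines of mp_zimg_alloc()/mp_zimg_convert()
is assumed, and the exact names and signatures are illustrative):

    #include "video/zimg.h"
    #include "video/mp_image.h"

    // Convert/scale src into a pre-allocated dst; fails (returns false)
    // for the many pixel formats the wrapper doesn't support yet.
    static bool zimg_convert(struct mp_image *dst, struct mp_image *src)
    {
        struct mp_zimg_context *z = mp_zimg_alloc();
        if (!z)
            return false;
        bool ok = mp_zimg_convert(z, dst, src);
        talloc_free(z);                    // context is assumed talloc-allocated
        return ok;
    }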
This used to be needed for the "GPU memcpy" (shitty Intel methods to
deal with certain uncached memory types). This is now done in FFmpeg,
and the code in mp_image.c was just unnecessarily convoluted.
The plane pointer checking assert() triggered at least on gray8, because
that has a "pseudo palettes" in ffmpeg, which mpv refuses to allocate.
Remove a strange duplicated printf().
Log the component type where available.
(Why is this even here? I hate it when there are commented-out test
programs in source files.)
Since a track may not be selected twice, it makes sense e.g. for
secondary subtitles to select the next best match and avoid the
duplicate selection.
This allows for example `--slang=en,ja --secondary-sid=auto` to select
'en' as primary and 'ja' as secondary without needing to know the actual
sid for 'ja'.
This was obviously missing from the recent commit, which probably broke
10 bit decoding. The original commit didn't test this for lack of
working hardware; this commit isn't tested either.
Fixes: a1c7d61393
This was speculatively added 2 years ago in preparation for something
that apparently never happened. The D3D code was added as an "example",
but this too was never used/finished.
No reason to keep this.
This syscall avoids the need to guess an unused filename in /dev/shm and
allows seals to be placed on it. We immediately return if no fd got
returned, as there isn’t anything we can do otherwise.
Seals especially allow the compositor to drop the SIGBUS protections,
since the kernel promises the fd won’t ever shrink.
This removes support for any platform but Linux from this vo.
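The allocation path now looks roughly like this (a minimal sketch of the
memfd_create()/seal dance, not the exact vo code):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static int alloc_shm_fd(size_t size)
    {
        // Anonymous, sealable fd; no guessing of /dev/shm filenames.
        int fd = memfd_create("mpv-shm", MFD_CLOEXEC | MFD_ALLOW_SEALING);
        if (fd < 0)
            return -1;                  // nothing we can do without the fd
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return -1;
        }
        // The kernel now guarantees the fd can never shrink, so the
        // compositor can drop its SIGBUS protections.
        fcntl(fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_SEAL);
        return fd;
    }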
On an audio/video desync of more than 0.5 seconds, display-sync mode was
disabled, and not enabled again (until playback restart, e.g. a seek).
The idea was that this only happens when this playback mode is broken
and can't perform well anyway (A/V desync is a clear indication that
something is very wrong). Instead of behaving like a god damn POS, it
should revert to the more robust audio-sync mode.
Unfortunately, this could happen sporadically due to temporary system
performance problems, such as toggling fullscreen. Users didn't like
this, and asked for a function to disable it, or to recover in some
other way.
This mechanism is questionable anyway. If an ignorant user enables
display-sync, and encounters problems with it (without being able to
determine that display-sync is messing up), the player will still behave
like a POS on every playback, and even after every seek. It might
actually be helpful to fail more consistently. Also, I've found that
it's still relatively reliable anyway even without this mechanism.
So just remove the fallback.
Fixes: #7048
vo_wayland was removed during the wayland rewrite done in 0.28. However,
it is still useful for systems that do not have OpenGL.
The new wayland_common code makes vo_wayland much simpler, and
eliminates many of the issues the previous vo_wayland had.
With the previous commit, this is dead code.
This also turns the code for this in f_autoconvert.c into dead code
(fortunately). Will probably remove it later.
2 years ago, ANGLE removed the old NV12-specific extension, and added
a new one that supports a number of formats, including P010. Actually
they just renamed it and removed their initial annoying and obvious
design error (bravo, Google).
Since it broke 2 years ago, nobody should give a shit about this code,
and it should just be removed. But for some reason I still dived into
the shit-tank (Windows development).
I guess Intel code monkeys can't write drivers (or maybe the issue is
that we're doing zero-copy, which probably is not actually allowed by
D3D11 due to array textures, see --d3d11va-zero-copy), so the P010 path
is completely untested. If it doesn't work, I'll delete all this ANGLE
hwdec code.
Fixes: #7054
The --cache option does not take a number anymore. (Oh boy, this is
going to break a lot of user configs?)
The cache size is now configured with those obscure-sounding --demuxer
options.
--cache-secs is not useful anymore. The default is very high, so the
obscure-sounding --demuxer options determine how much is cached.
Advertise the --cache-on-disk option a bit. I found it useful once, and
it will trick users into wearing out their SSD for no gain, or so.
OK, so --cache-secs is useless, because the default is set to 10 hours.
And that part about the "maximum" was obviously a lie (I wonder if it
simply changed at some point).
Some failures by youtube-dl prompt the user to submit a bug report.
If such a failure occurs, we can compare youtube-dl's version to the
current calendar date to see how old it is. We don't make this check
on every youtube-dl failure, as failing to extract a URL is quite
common, and waiting for a second blocking python interpreter startup
for every such case would be a bit unpleasant.
Here the assumption is made that any youtube-dl version older than
3 months is probably severely out of date. Users will be warned about
this.
We also output the trimmed stderr of youtube-dl with msg.error, as this
appeared to have been the behaviour of utils.subprocess without stderr
capturing. Since this uses mp.command_native now, we'll have to do this
ourselves where appropriate.
Query information on the system output most linked to the swap chain,
and use either a user-configured format, or 8-bit RGBA or 10-bit RGB
with 2-bit alpha, depending on the system output's bit depth.
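A hedged sketch of the format decision (C with COBJMACROS; the
user-override path and most error handling are left out, and the real
code is more involved):

    #define COBJMACROS
    #include <dxgi1_6.h>

    static DXGI_FORMAT pick_swapchain_format(IDXGISwapChain *swapchain)
    {
        DXGI_FORMAT fmt = DXGI_FORMAT_R8G8B8A8_UNORM;   // 8-bit default
        IDXGIOutput *output = NULL;
        IDXGIOutput6 *output6 = NULL;

        if (FAILED(IDXGISwapChain_GetContainingOutput(swapchain, &output)))
            return fmt;
        if (SUCCEEDED(IDXGIOutput_QueryInterface(output, &IID_IDXGIOutput6,
                                                 (void **)&output6))) {
            DXGI_OUTPUT_DESC1 desc;
            if (SUCCEEDED(IDXGIOutput6_GetDesc1(output6, &desc)) &&
                desc.BitsPerColor > 8)
                fmt = DXGI_FORMAT_R10G10B10A2_UNORM;    // 10-bit RGB, 2-bit alpha
            IDXGIOutput6_Release(output6);
        }
        IDXGIOutput_Release(output);
        return fmt;
    }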
mpv warned if the FFmpeg runtime library version was not exactly the
same as the build version. This seemed to cause frequent conflicts. At
this point, most mpv code probably adheres to the FFmpeg ABI rules, and
FFmpeg stopped breaking ABI "accidentally". Another source of problems
were mixed FFmpeg/Libav installations, something which nobody does
anymore. It's not "our" job to check and enforce ABI compatibility
either. So I guess this behavior can be removed.
OK, still check for incompatible libraries (according to FFmpeg
versioning rules), i.e. different major versions, or if the build
version is newer than the runtime version. For now.
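In code terms, the remaining check amounts to something like this per
library (a sketch for libavcodec only; the real check is repeated for
every linked FFmpeg library):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    static bool lavc_is_usable(void)
    {
        unsigned build = LIBAVCODEC_VERSION_INT;  // what mpv was built against
        unsigned runtime = avcodec_version();     // what is loaded at runtime
        if ((build >> 16) != (runtime >> 16))     // major version must match
            return false;
        return runtime >= build;                  // runtime may be newer, not older
    }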
The comment about ABI problems is still true. In particular, the
bytes_read field mentioned in the removed comment is still accessed, and
is still an ABI violation. Have fun.
This was all dead code. Commit 995c47da9a (over 3 years ago) removed all
uses of the controls.
It would be nice if AOs could apply a linear gain volume that affects
only the AO's audio stream, for low-latency volume adjustment and muting.
AOCONTROL_HAS_SOFT_VOLUME was supposed to signal this, but to use it,
we'd have to thoroughly check whether it really uses the expected
semantics, so there's really nothing useful left in this old code.
See previous commits. ao_sdl is worthless, but it might be a good test
for pull-based AOs.
This stops using the old underrun reporting if the new one is enabled.
Also, since an AO's behavior can in theory deviate from expectations,
this needs to be enabled for every single pull AO separately.
For some reason, in certain cases I get multiple underrun warnings while
cache-pausing is active. It fills the cache, restarts the AO,
immediately underruns again, and then fills the cache again. I'm not
sure why this happens; maybe ao_sdl tries to catch up when it shouldn't.
Who knows.
I think this was _always_ wrong. Due to the line above the first changed
line, buffered_bytes==bytes always. I can only hope I broke this in some
later, under-tested edit, rather than when I originally wrote it.
Fixes: c5a82f729b
The --cache-pause feature (enabled by default) will pause playback for a
while if the network runs out of data. If this is not done, then playback
will go on frame by frame (as packets are slowly read from the network and
then instantly decoded and displayed). This feature is actually useless,
as you won't get nice playback no matter what if the network is too slow,
but I guess I still prefer this behavior for some reason.
This commit changes this behavior from using the demuxer cache state
only, to trying to use underrun information from the AO/VO. This means
if you have a very large audio buffer, then cache-pausing will trigger
once that buffer is depleted, which will be some time _after_ the
demuxer cache has run out.
This requires explicit support from the AO. Otherwise, the behavior
should be mostly the same as before this commit.
This does not care about the AO buffer. In theory, the AO may underrun,
then the player will write some data to the AO buffer, then the AO will
recover and play this bit of data, then the player will probably trigger
the cache-pause behavior. The probability of this happening should be
pretty low, so I will hold off fixing this until the next refactor of
the AO chain (if ever).
The VO underrun detection was devised and tested in 5 minutes, and may
not be correct. At least I'm fairly sure that the combination of all the
factors should make incorrect behavior relatively unlikely, but problems
are possible.
Also, the demux_reader_state.underrun field may be inaccurate. It's only
the present state at the time demux_get_reader_state() was called, and
may exclude past underruns. In theory, this could cause "close" cases to
be missed. Then you might get an audio underrun without cache-pausing
acting on it. If the stars align, this could happen multiple times in
a row, effectively making this feature not work.
The most user-visible consequence of this change is that the user
will now see an AO underrun warning every time the cache runs out.
Maybe this cache-pause feature should just be removed...
AOs can now call ao_underrun_event() (in any context) if an underrun has
happened. It will print a message.
This will be used in the following commits. But for now, audio.c only
clears the underrun bit, so that subsequent underruns still print the
warning message.
Since the underrun flag will be used in fragile ways by the playback
state machine, there is the "reports_underruns" field that signals
strong support for underrun reporting. (Otherwise, underrun events will
not be used by it.)
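Roughly, a pull AO that opts in would look like this (a minimal sketch;
the helper functions and the driver-struct layout are hypothetical, only
ao_underrun_event() and the reports_underruns field come from this
commit, with assumed signatures):

    // Audio callback of a hypothetical pull AO.
    static void audio_callback(struct ao *ao, void *buf, int samples)
    {
        int got = pull_audio_from_player(ao, buf, samples);  // hypothetical helper
        if (got < samples) {
            ao_underrun_event(ao);        // can be called from any context
            pad_with_silence(buf, got, samples);             // hypothetical helper
        }
    }

    const struct ao_driver audio_out_example = {
        .name = "example",
        // Promise that underrun events from this AO are reliable enough
        // for the playback state machine to act on them.
        .reports_underruns = true,
    };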
This commit tries to prepare for better underrun reporting. The goal is
to report underruns relatively immediately. Until now, this happened
only when play() was called. Change this, and abuse the fact that
get_delay() is called "relatively often"; in practice, this reports the
underrun immediately.
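Concretely, the idea is along these lines (a hedged sketch; struct priv
and the AO glue are illustrative, while snd_pcm_state() and
SND_PCM_STATE_XRUN are the real ALSA API):

    #include <alsa/asoundlib.h>

    static double get_delay(struct ao *ao)
    {
        struct priv *p = ao->priv;           // holds snd_pcm_t *pcm
        // get_delay() is polled often by the player, so checking for an
        // xrun here reports the underrun almost immediately.
        if (snd_pcm_state(p->pcm) == SND_PCM_STATE_XRUN)
            ao_underrun_event(ao);           // don't wait for the next play()
        snd_pcm_sframes_t frames = 0;
        snd_pcm_delay(p->pcm, &frames);      // audio still queued in ALSA
        return frames / (double)ao->samplerate;
    }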
Background:
In commit 81e51a15f7 (and also e38b0b245e), we were quite confused
about ALSA underrun handling. The commit message showed uncertainty
about how case 3 happened, but it's blindingly obvious and simple.
Actually reading the code shows that ALSA does not have a concept of a
"final chunk" (or we don't use it). It's obvious we never pass the
AOPLAY_FINAL_CHUNK flag along to the ALSA API in any way. The only thing
we do is simply writing a partial fragment. Of course this will cause an
underrun. Doing a partial write saves us the trouble of padding the last
frame with silence, or so.
The main reason why the underrun message was avoided was that play() was
never called with a non-0 sample count again (except if reset() was
called before that). That was OK, at least the goal of avoiding the
unwanted message was reached. (And the original "bogus" message at end
of playback was perfectly correct, as far as ALSA goes.)
If the network stalls, play() will be called again only once new data is
available. Obviously, this could take a long time, and thus it's too late.
The old way of using wayland in mpv relied on an external renderloop for
semi-accurate timings. This had multiple issues though. Display sync
would break whenever the window was hidden (since the frame callback
stopped being executed), which was really annoying. Also, the entire
external renderloop logic was kind of fragile and didn't play well with
mpv's internal structure (i.e. using presentation time in that old
paradigm breaks stats.lua).
Basically the problem is that swap buffers blocks on wayland, which is
crap whenever you hide the mpv window, since it locks up the entire
player. So you have to make swap buffers not block, but this has a
different problem. Timings will be terrible if you use the unblocked
swap buffers call.
Based on some discussion in #wayland, the trick here is relatively
simple and works well enough for our purposes. Instead we basically
build a way to block with a timeout in the wayland buffer swap
functions.
A bool is set in the frame callback function that indicates whether or
not mpv is waiting for a frame to be displayed. In the actual buffer
swap function, we enter a while loop waiting for this flag to be set.
At the same time, the wl_display is polled to block the thread and wake
up if it receives any events from the compositor. This loop only
breaks if enough time has passed or if the frame callback bool is
received.
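In outline, the swap path looks something like this (a hedged sketch;
the state struct, its fields, and now_ms() are hypothetical, while the
libwayland-client calls are real):

    #include <poll.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <wayland-client.h>

    struct state {
        struct wl_display *display;
        struct wl_surface *surface;
        bool frame_wait;                  // set until the frame callback fires
    };

    static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
    {
        struct state *wl = data;
        wl->frame_wait = false;           // compositor signaled the frame
        wl_callback_destroy(cb);
    }

    static const struct wl_callback_listener frame_listener = { frame_done };

    static void wait_frame(struct state *wl, int timeout_ms)
    {
        struct wl_callback *cb = wl_surface_frame(wl->surface);
        wl_callback_add_listener(cb, &frame_listener, wl);
        wl->frame_wait = true;
        // ...attach and commit the new buffer here...
        int64_t deadline = now_ms() + timeout_ms;    // hypothetical clock helper
        while (wl->frame_wait && now_ms() < deadline) {
            struct pollfd fd = { wl_display_get_fd(wl->display), POLLIN, 0 };
            wl_display_flush(wl->display);
            if (poll(&fd, 1, 10) > 0)
                wl_display_dispatch(wl->display);    // compositor events arrived
            else
                wl_display_dispatch_pending(wl->display);
        }
    }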
In the near future, it is better to set whether or not a frame has been
displayed in the presentation feedback. However, as a first pass,
doing it in the frame callback is more than good enough.
The "downside" is that we render frames that aren't actually shown on
screen when the player is hidden (it seems like wayland people don't
like that). But who cares. Accurate timings are way more important. It's
probably not too hard to add that behavior back in the player though.
The externally driven renderloop was originally added for the wayland
context (to make display sync somewhat work), but it has a lot of issues
with mpv's internal structure. A different approach should be used.
This reverts commit a743fef837.
This affects hwdec_dxva2dxgi, which uses ra_d3d11_wrap_tex to wrap RGB
video frames that are shared with a D3D9 device. Without it, mpv uses
nearest instead of bilinear scaling with --scale=bilinear (the default)
and --hwdec=dxva2. It's kind of hard to believe this bug has gone
unnoticed for almost two years, but that seems to have been the case.
Fixes: #7042
In pseudo-DASH mode, we may have no real streams opened until the
demuxer layer is fully loaded and playback actually starts. The only
hint that the stream is from the network is, at that point, the init
segment, which is only opened as a stream, and then separately as a demuxer
(which is dumb but happened to fit the internal architecture better).
So just propagate the flags from the init segment stream. Seems like an
annoyance, but doesn't hurt that much I guess. (Until someone gets the
idea to pass the init segment data inline or so, but nothing does that.)
The sample link in the linked issue will probably soon switch to another
format, because that service always does this after recent uploads or
so.
Fixes: #7038
mpv typically decodes and filters at least 2 frames before starting
playback. This happens during seeks, as well as when playback starts
from the beginning of the file.
skip-logo.lua receives notifications for all filtered frames, even
during seeking. It should not interfere with seeking, so as a crude
heuristic, it ignored all frames while the player was seeking. This does
not mean all these frames are skipped due to seeking (thus it's a "crude
hueristic"). In particular, it means that the first 2 frames of a video
cannot be skipped, since they're filtered within the playback restart
phase (equivalent to "seeking").
Fix this by making the heuristic slightly less crude. Since we observe
the property as "none", the property is not actually read until we do it
explicitly. By not reading it during seeking, we can let the frames
internally queue up (vf_fingerprint discards them in a ringbuffer-like
fashion if they're too many). Then, if seeking ends, we get the current
playback timestamp, and check queued up frames that are at or after that
timestamp. (In some ways, this duplicates what the player's seeking
logic does.)
A disadvantage is that this is racy. While playback-time is guaranteed
to be set when seeking changes from false to true, playback could
already have progressed to the next frame (or more) before the script
gets time to react. In theory, we could add a seek restart hook or so,
but I don't want to. A property that returns the last playback restart
time would also do it, but feels too special. Not an important problem
in practice anyway.