Currently, using the drmprime interop with external mpv integration can
lead to rendering issues, because the current frame is released too
early. Typically, using this with Qt results in a one-frame shift,
because Qt does wait-for-vsync and swap, rather than swap and
wait-for-vsync. This leads to tearing, as the framebuffer is released
while it is still being displayed on screen.
In order to avoid releasing a framebuffer that is being displayed, we
keep the framebuffer alive for one more frame with triple buffering, to
make sure that whatever rendering process is used, the framebuffer will
not be released while it's still on screen.
This was tested on a RockChip Rock64.
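A minimal sketch of the idea, with hypothetical types and helper names
(not the actual interop code):

```c
// Sketch only: hold references to the two most recently displayed
// framebuffers, so whichever one is still being scanned out stays
// alive; only the oldest one is released.
struct frame_ring {
    struct framebuffer *fbs[3]; // next, last displayed, one extra
};

static void present(struct frame_ring *ring, struct framebuffer *next)
{
    // The oldest framebuffer can no longer be on screen, so it is now
    // safe to release it (release_framebuffer() is hypothetical).
    if (ring->fbs[2])
        release_framebuffer(ring->fbs[2]);
    // Shift: keep the two most recent framebuffers referenced, since
    // one of them may still be displayed depending on whether the
    // renderer swaps before or after waiting for vsync.
    ring->fbs[2] = ring->fbs[1];
    ring->fbs[1] = ring->fbs[0];
    ring->fbs[0] = next;
}
```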
This also fixes a regression caused by the filter change. The new code
is more picky about EOF states, and it turns out the weird delay queue
(used with some hwdec copy-back modes only) accidentally dropped an EOF
event: it reset the avctx before the delay queue was drained, which
meant it never returned the expected AVERROR_EOF status code.
Also, don't signal EOF when copy-back fails. It should just try to
continue until the fallback is performed.
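A sketch of the corrected ordering, with hypothetical queue helpers
around the real libavcodec calls:

```c
#include <libavcodec/avcodec.h>
#include <libavutil/error.h>

// Sketch only: the EOF event has to propagate through the delay queue
// before the avctx is reset, otherwise AVERROR_EOF is silently lost.
static int drain_and_reset(struct priv *p)
{
    // Flush out all queued frames first (the helpers are hypothetical).
    while (delay_queue_count(p) > 0)
        output_queued_frame(p);
    avcodec_flush_buffers(p->avctx); // reset only after draining
    return AVERROR_EOF;              // now the EOF state can be reported
}
```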
The previous peak detection algorithm was very buggy (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take the frame's average brightness level into account.
The new algorithm takes the frame's average brightness into account (in
addition to the peak brightness), and also computes the values in a
more stable/correct way. (The old path was basically undefined
behavior.)
In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
wherever possible (i.e. where compute shaders and SSBOs are supported).
We also make the desaturation milder, after extensive testing during
libplacebo development.
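For reference, a sketch of the hable (Uncharted 2 filmic) curve as
commonly defined; mpv's shader code uses an equivalent form, and the
tone_map() wrapper here is just illustrative:

```c
// Hable's filmic curve; constants as published by John Hable.
static float hable(float x)
{
    const float A = 0.15, B = 0.50, C = 0.10, D = 0.20, E = 0.02,
                F = 0.30;
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)
           - E / F;
}

// Normalize so that the signal peak maps to 1.0.
static float tone_map(float x, float sig_peak)
{
    return hable(x) / hable(sig_peak);
}
```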
I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.
This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
The vulkan validation layers warn you if you try requesting a query
result from a timer that hasn't even been started yet, so we have to do
a bit of extra work to keep track of which indices we've used so far,
and skip the queries for those we haven't.
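A sketch of the bookkeeping, with a hypothetical wrapper struct around
the real Vulkan calls:

```c
#include <stdbool.h>
#include <vulkan/vulkan.h>

// Sketch only: seen[i] is set when query index i is first written;
// results are requested only for indices that were actually started.
struct timer_pool {
    VkQueryPool pool;
    bool seen[16]; // one flag per query index; size is arbitrary here
};

static bool get_result(VkDevice dev, struct timer_pool *t,
                       uint32_t idx, uint64_t *out)
{
    if (!t->seen[idx])
        return false; // never started; querying it would trigger warnings
    return vkGetQueryPoolResults(dev, t->pool, idx, 1, sizeof(*out),
                                 out, sizeof(*out),
                                 VK_QUERY_RESULT_64_BIT) == VK_SUCCESS;
}
```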
Instead of enabling every feature under the sun, make an effort to just
whitelist the ones we actually might use. Turns out the extended storage
format support is needed for some of the storage formats we use, in
particular rgba16.
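A sketch of the whitelisting (shaderStorageImageExtendedFormats is the
real VkPhysicalDeviceFeatures member needed for formats like rgba16;
the selection itself is illustrative):

```c
#include <vulkan/vulkan.h>

// Sketch only: start from an all-zero feature set and copy over just
// the features we might actually use, instead of enabling everything
// the device reports.
static VkPhysicalDeviceFeatures pick_features(
    const VkPhysicalDeviceFeatures *have)
{
    VkPhysicalDeviceFeatures want = {0};
    want.shaderStorageImageExtendedFormats =
        have->shaderStorageImageExtendedFormats;
    return want;
}
```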
Another "what was I thinking" thing - destroying filters explicitly
skipped async wakeups for no reason. These were notifications for
filters that are not going to be destroyed too, and so their wakeup will
be lost, leading to stalled playback. This is completely unnecessary and
the special code can be removed.
Fixes#5488. (This case destroyed all audio filters due to AO init
failure, which could make clear out the f_demux_in.c wakeup for video,
and "freeze" playback.)
This is a dataflow issue caused by the filters change. When the fallback
happens, vd_lavc does not return a frame, but also does not accept a new
packet, which confuses lavc_process(). Fix this by immediately retrying
to feed the buffered packet and decode a frame on fallback.
Fixes #5489.
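A sketch of the retry, with hypothetical decoder helpers:

```c
// Sketch only: on fallback, the buffered packet was neither consumed
// nor did it produce a frame, so feed it again immediately instead of
// leaving lavc_process() waiting for an event that never comes.
static void process_packet(struct priv *p)
{
    if (decode_frame(p) == DECODE_FALLBACK) {
        reinit_decoder(p);                  // switch to the fallback
        feed_packet(p, p->buffered_packet); // retry the same packet
        decode_frame(p);
    }
}
```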
In theory (and practice), this is not needed, because the VS filter's
get-frame callback will cause the process function to be called again
if there's not enough data. But it's still a bit weird to just add one
more frame on each iteration, so make it cleaner and have it request
frames until the input array is full.
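Roughly (hypothetical names):

```c
// Sketch only: request frames until the input array is filled, rather
// than adding a single frame per process() iteration.
static void fill_inputs(struct priv *p)
{
    while (p->num_frames < p->max_frames && !p->eof) {
        if (!request_input_frame(p))
            break; // no data yet; the get-frame callback re-triggers us
    }
}
```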
This was obviously nonsense, and a previous "fix" to this code was
nonsense too. What is really needed here is temporarily dropping the
lock while calling destroy_vs()/reinit_vs().
Fixes #5470.
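The pattern looks roughly like this (a sketch; the assumption that the
lock must not be held across these calls follows the commit text):

```c
#include <pthread.h>

static void hard_reset(struct priv *p)
{
    // destroy_vs()/reinit_vs() must not run with the lock held (e.g.
    // they may wait on a thread that takes p->lock), so drop it.
    pthread_mutex_unlock(&p->lock);
    destroy_vs(p);
    reinit_vs(p);
    pthread_mutex_lock(&p->lock);
}
```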
We called mp_pin_out_request_data() if there was input _and_ output.
This is not how it should be: we should request new input only if output
was requested, but we could not produce any output.
On the other hand, the upper half of the process() function will request
new input if output is required, but all input was consumed. But this
requires calling mp_filter_internal_mark_progress(), as otherwise the
general filter logic would not know that we can continue.
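A sketch of the intended lower-half logic, using the real mpv pin and
filter calls named above, with hypothetical helpers for the rest:

```c
static void process(struct mp_filter *f)
{
    struct priv *p = f->priv;
    if (!mp_pin_in_needs_data(f->ppins[1]))
        return; // downstream did not request output
    if (try_produce_output(p)) {            // hypothetical
        mp_pin_in_write(f->ppins[1], get_output(p));
    } else if (all_input_consumed(p)) {     // hypothetical
        // Output was requested, but we can't produce any: only now is
        // requesting new input justified.
        mp_pin_out_request_data(f->ppins[0]);
        // Let the generic filter logic know we can continue.
        mp_filter_internal_mark_progress(f);
    }
}
```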
If the speed is changed by a large amount, we need to effectively change
the output rate by a large amount, and swr_set_compensation() is
apparently not designed to handle such large changes well. So it's
better to reinitialize the resampler on all large changes.
Also, strictly reinitialize the resampler if the rate changes, otherwise
it could happen that libavresample (which does not automatically
initialize resampling if avresample_set_compensation() is used) would
never apply speed changes properly.
Also, better document some conditions that handle corner cases (and
remove the inline condition from the if that gates the compensation
code).
It also appears that we crashed with very large compensation ratios
(when raising audio speed quickly by keeping the "[" key down), and this
commit accidentally mitigates it by not allowing large compensation.
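A sketch of the decision, using the real swr_set_compensation() call;
the wrapper, the 10% threshold, and the reinit helper are assumptions:

```c
#include <math.h>
#include <libswresample/swresample.h>

static int set_playback_speed(struct priv *p, double speed)
{
    double ratio = speed / p->cur_speed;
    p->cur_speed = speed;
    if (fabs(ratio - 1.0) > 0.1) {
        // Large change: swr_set_compensation() handles this badly, so
        // rebuild the resampler with the new rate instead.
        return reinit_resampler(p, speed); // hypothetical
    }
    // Small change: distribute the difference over out_rate samples.
    int delta = lrint((ratio - 1.0) * p->out_rate);
    return swr_set_compensation(p->swr, delta, p->out_rate);
}
```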
Setting lavfi-complex at runtime will now forcefully reselect the tracks
as needed, even if it was a "proper" track selection via --aid or --vid.
Before this commit, it just failed and complained that the VO/AO was
already "used".
Requested.
This makes it actually somewhat simpler, and doesn't have any
disadvantages. It should also make some new features easier.
Mostly just moves code around.
The somewhat confusing thing is that many filters (including track->dec)
have a public struct, but to free them, you need to free the mp_filter
pointer itself (track->dec->f). The assignment wrote to a dangling
pointer, instead of removing the dangling pointer.
(Other than that, this idiom is actually nice.)
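The correct use of the idiom, in terms of the names above
(talloc_free() is mpv's real deallocator; the surrounding code is
schematic):

```c
// Free the filter through its mp_filter pointer, then clear the public
// pointer itself, instead of writing through the now-dangling pointer.
talloc_free(track->dec->f);
track->dec = NULL;
```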
Unknown frames were not freed properly, although this doesn't really
happen anyway, because we're never going to feed audio frames to a
video filter chain. Since it's theoretically possible, and all other
filters handle this consistently, fix it anyway.
Properly initialize the output frame parameters other than image format
and size. This includes colorspace hints. (We're still not reading them
back from VapourSynth if it sets them, though. Usually it doesn't
anyway.)
VapourSynth can't pass through timestamps, only frame durations. So we
need to remember the timestamp of the very first frame passed to it.
This was accidentally set to 0 instead of NOPTS on init, so inserting
the filter during playback could show strange behavior.
Might be part of #5470.
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.
Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
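A sketch of the chunked conversion, with a hypothetical wrapper around
the real libswresample calls:

```c
#include <libavutil/common.h>
#include <libavutil/mathematics.h>
#include <libswresample/swresample.h>

// Sketch only: convert at most max_out samples per call, and feed only
// the matching amount of input, so libswresample keeps (almost) no
// internal buffer that a later reinit would silently drop.
static int convert_chunk(struct priv *p, uint8_t **out, int max_out,
                         const uint8_t **in, int in_samples)
{
    int take = FFMIN(in_samples,
                     (int)av_rescale_rnd(max_out, p->in_rate,
                                         p->out_rate, AV_ROUND_DOWN));
    return swr_convert(p->swr, out, max_out, in, take);
}
```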
This helps the filter to adapt much faster to speed changes. Before this
commit, the filter just converted and output the full input frame, which
could cause problems with large input frames. This was made worse by
certain filters like dynaudnorm or loudnorm outputting pretty large
frames.
This commit changes the filter from trying to convert all input at
once to outputting only a single internally filtered frame per
iteration. Internally, this filter already produces data in units of
60ms by default (controlled by the "stride" sub-option), and used to
concatenate as many of these output frames as necessary to consume all
input.
Behavior is still kind of bad when inserting the filter. This is
because large frames can be buffered up after the insertion point, so
the speed change is performed with a larger latency. The scaletempo
filter can't do anything about this, although it can be avoided by
inserting scaletempo as a user filter via --af.
--vf=help will now list libavfilter filters, and e.g. --vf=yadif=help
will list libavfilter filter options.
The latter is rather bare, because the AVOption API is really awful
(holy shit how is it so bad), and would require us to handle _every_
option type manually.
Alternatively we could call av_opt_show2(), which ffmpeg uses for help
output in its CLI tools and which is much more detailed. But it's rather
foreign and forces output through av_log(), so I don't really want to
use it.
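Handling it manually boils down to iterating the filter's private
option list; a sketch using the real AVOption API:

```c
#include <stdio.h>
#include <libavfilter/avfilter.h>
#include <libavutil/opt.h>

// Sketch: print the private options of a libavfilter filter, which is
// roughly what "--vf=yadif=help" has to enumerate by hand.
static void list_filter_opts(const char *name)
{
    const AVFilter *filter = avfilter_get_by_name(name);
    if (!filter || !filter->priv_class)
        return;
    const AVOption *opt = NULL;
    while ((opt = av_opt_next(&filter->priv_class, opt)))
        printf(" %-16s %s\n", opt->name, opt->help ? opt->help : "");
}
```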
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. binary codec loader vs. builtin decoders). It lost
meaning in mpv as non-libavcodec things were dropped. Now it doesn't
serve any purpose anymore.
Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
This reverts commit b7f90be567.
The author agreed to the relicensing now (if that code is affected by
the original copyright at all - that was the only line possibly left of
it).
If tags like TITLE have the whole parameter in " quotes, strip them.
Also remove leading whitespace, since even a single leading space was
always included.
Fixes #5462.
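In plain C, the cleanup amounts to something like this (a hypothetical
helper, not the actual parser code):

```c
#include <ctype.h>
#include <string.h>

// Sketch: skip leading whitespace, then strip surrounding '"' quotes.
static char *clean_tag_value(char *s)
{
    while (*s && isspace((unsigned char)*s))
        s++;
    size_t len = strlen(s);
    if (len >= 2 && s[0] == '"' && s[len - 1] == '"') {
        s[len - 1] = '\0';
        s++;
    }
    return s;
}
```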
FFmpeg only supports HTTP proxies and ignores the setting if the
resulting URL is https. Also, no SOCKS support.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so
it doesn't confuse mpv because of the colons.
You need to pass it as an option because youtube-dl doesn't give
us the proxy.
Or just set `http_proxy` environment variable as recommended before.
Added an example using -append, which doesn't need escaping.
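Presumably of the form
`--ytdl-raw-options-append=proxy=http://127.0.0.1:3128`, which avoids
the bracket escaping entirely.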
This is supposed to undo the ] binding. This uses a value closer to
the exact inverse. (Although it's not fully exact, since the values are
still stored as floating point instead of as fractions.)
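(Assuming the default bindings, where ] multiplies the speed by 1.1:
the exact inverse is 1/1.1 = 0.909090..., which a fixed decimal like
0.9091 can only approximate.)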
Use the decoder wrapper that was introduced for video. This removes all
code duplication the old audio decoder wrapper had with the video code.
(The audio wrapper was copy-pasted from the video one over a decade
ago, and has been kept in sync ever since by the power of copy&paste.
Since the original copy&paste was possibly done by someone who did not
respond to the LGPL relicensing request, this should also remove all
doubts about whether any of that code is left: we now completely remove
any code that could possibly have been based on it.)
There is some complication with spdif handling, and a minor behavior
change (it will restrict the list of codecs to spdif if spdif is to be
used), but there should not be any difference in practice.
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.
One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().
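The connection itself is a one-liner (mp_pin_connect() is the real
call; the pin variables here are placeholders):

```c
// Wire the decoder wrapper's output pin into the filter chain's input
// pin; the filter framework then drives the dataflow.
mp_pin_connect(filter_chain_input_pin, decoder_output_pin);
```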
Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.
Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.
I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
This is supposed to help make dataflow easier and wakeup handling more
efficient. Once that change is done, reading a packet on any stream
won't have to wake up and poll all decoders (which helps reduce the
mess even if all decoders are on the same thread).
This also improves the accuracy of wakeups by tracking better whether
a wakeup is needed.
This is preparation for a change in vd_lavc.c: it should not have to
access the demuxer (to pass along closed captions), so the idea is to
make them part of mp_image, and to let the layer above vd_lavc propagate
the buffer.
Don't bother with preserving them for mp_image->AVFrame, because we
don't need this.
Reduce the trivial but still annoying code duplication in
mp_image_new_ref(), which has to create new buffer references and deal
with possible failure of creating them. The tricky part is that if
creating a reference fails, we must set the target to NULL, so that
unreferencing the failed new mp_image reference does not release the
buffer references of the original mp_image. For the same reason, the
code can't jump to error handling when it can't create a new reference,
and has to set a flag instead.
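A sketch of the resulting helper pattern (the names are hypothetical,
but av_buffer_ref() is the real libavutil call):

```c
#include <stdbool.h>
#include <libavutil/buffer.h>

// Duplicate one buffer reference in place. On failure, the target is
// set to NULL, so unreferencing the half-built mp_image copy cannot
// release the source's buffers; the flag defers error handling.
static void ref_buffer(bool *ok, AVBufferRef **dst)
{
    if (*dst) {
        *dst = av_buffer_ref(*dst);
        if (!*dst)
            *ok = false; // can't jump to error handling mid-loop
    }
}

// Usage inside the (sketched) mp_image_new_ref():
//   bool ok = true;
//   for (int i = 0; i < num_bufs; i++)
//       ref_buffer(&ok, &new_img->bufs[i]);
//   if (!ok) { /* unref new_img; failed refs are NULL, so this is safe */ }
```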