Commit Graph

17 Commits

Author SHA1 Message Date
wm4 ab4e0c42fb audio: redo video-sync=display-adrop
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it. It was added to deal with SPDIF. The implementation
was part of fill_audio_out_buffers() - a function whose complexity
exploded in my face, and which I want to clean up; this is hopefully a
first step.

Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.

For PCM audio, this no longer works on individual samples, which makes
it much worse - but for PCM you can use saner mechanisms that sound
better anyway. Also, something about PTS tracking is wrong, but I'm not
wasting more time on this.
2020-05-23 04:04:46 +02:00
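
A minimal sketch of the drop/repeat idea, with made-up names and types
(not the actual mpv code): given how far the audio clock lags the video
clock, either drop the next block of audio data or repeat the previous
one, instead of resampling.

    /* Hypothetical sketch, not mpv's implementation.
     * lag_s: how far the audio clock is behind the video clock (video PTS
     * minus audio PTS), in seconds.
     * block_s: duration of one audio block that can be dropped or repeated
     * whole (important for SPDIF, where samples cannot be touched). */
    enum adrop_action { ADROP_NONE, ADROP_DROP, ADROP_REPEAT };

    static enum adrop_action adrop_decide(double lag_s, double block_s)
    {
        if (lag_s >= block_s)
            return ADROP_DROP;    /* audio is behind: drop a block to catch up */
        if (lag_s <= -block_s)
            return ADROP_REPEAT;  /* audio is ahead: repeat a block to wait */
        return ADROP_NONE;        /* within tolerance: leave the data alone */
    }
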
wm4 3e84b48a6f filters: fix a typo in a comment 2020-04-23 13:24:35 +02:00
wm4 ad1ae64251 f_autoconvert: remove subfmt conversion BS
This was used to convert e.g. P010 to NV12 within the filter chain,
which is hopefully no longer needed. (It has been dead code since the
ANGLE "RGB" interop code was removed.)
2020-01-17 15:19:05 +01:00
wm4 9b9307ea9f options: fix filter list comparison (again)
This was completely broken: it compared only the first item of the
filter list. Apparently I forgot that this is a list. This probably
broke aspects of runtime filter changing since commit b16cea750f.

Fix this, and remove some redundant code from obj_settings_equals().
That function is not the same as m_obj_settings_equal(), so rename it
to make confusing the two harder. (obj_setting_match() has very weird
label semantics that should probably just be killed. Or not.)
2019-12-18 06:49:48 +01:00
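
For reference, the shape of the fix: the comparison has to walk both
lists instead of stopping at the first entry. A minimal sketch with
placeholder types (not the real m_obj_settings API):

    #include <stdbool.h>
    #include <string.h>

    /* Placeholder entry type; the real m_obj_settings struct looks different. */
    struct setting { const char *name; const char *value; };

    static bool setting_equal(const struct setting *a, const struct setting *b)
    {
        return strcmp(a->name, b->name) == 0 && strcmp(a->value, b->value) == 0;
    }

    /* Compare two lists terminated by an entry with name == NULL. The broken
     * version effectively looked at index 0 only. */
    static bool setting_list_equal(const struct setting *a, const struct setting *b)
    {
        int n = 0;
        for (; a[n].name && b[n].name; n++) {
            if (!setting_equal(&a[n], &b[n]))
                return false;
        }
        return !a[n].name && !b[n].name;  /* both lists must end at the same index */
    }
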
wm4 f95cdb2e97 f_output_chain: use m_option_equal()
This is used to detect whether any filters were changed. This code was
essentially ported to m_option.c.

One possible difference is how the old code did the name comparison:
it did not actually compare the name (!?!?), so this might change
behavior, hopefully for the better.
2019-11-29 12:14:43 +01:00
wm4 5af9fefff6 f_output_chain: fix possible crash when changing filters
When changing filters at runtime (vf/af commands/properties), this could
crash due to a NULL pointer access. The code for comparing the old and
new option values (to detect changes) was simply buggy.
2019-11-25 01:04:38 +01:00
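
The gist of the bug, as a hedged sketch (placeholder code, not mpv's
m_option implementation): either the old or the new value can
legitimately be unset, so the comparison must handle NULL before
dereferencing anything.

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch only: compare an old and a new string-valued option where either
     * side may be NULL (unset). Skipping this check is the kind of bug that
     * caused the crash. */
    static bool opt_str_equal(const char *old_val, const char *new_val)
    {
        if (!old_val || !new_val)
            return old_val == new_val;   /* equal only if both are unset */
        return strcmp(old_val, new_val) == 0;
    }
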
wm4 d2349cb833 f_output_chain: remove a redundant variable 2018-04-29 02:21:32 +03:00
wm4 4b32d51b96 f_output_chain: log status of auto filters
Just so users don't think the auto filters do anything when they don't
actually insert any filters.
2018-04-29 02:21:32 +03:00
wm4 d8807ca833 f_output_chain: log input instead of output format
I think this is more intuitive. This requires a dedicated "out" dummy
filter. But keep the "in" dummy filter for symmetry, like in the old
filter code. (We could remove the "in" dummy filter, because the first
actual filter would still show the real input format.)
2018-04-29 02:21:32 +03:00
wm4 ff24285eb1 video: pass through container fps to filters
This means vf_vapoursynth doesn't need a hack to work around the filter
code, and libavfilter filters now actually get the frame_rate field on
input pads set.

The libavfilter doxygen says the frame_rate field is only to be set if
the frame rate is known to be constant, and uses the word "must" (which
probably means they really mean it?) - but ffmpeg.c sets the field to
mere guesses anyway, and it looks like this normally won't lead to
problems.
2018-04-19 23:22:48 +02:00
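
For comparison, this is roughly how a frame rate reaches a libavfilter
graph through the public API. The helper below is made up, but
av_buffersrc_parameters_alloc()/av_buffersrc_parameters_set() and the
frame_rate field are the real FFmpeg interface:

    #include <libavfilter/buffersrc.h>
    #include <libavutil/mem.h>

    /* Hypothetical helper: forward a (believed constant) container fps to the
     * buffer source, so downstream filters see frame_rate on their input pads.
     * Returns 0 on success, a negative AVERROR on failure. */
    static int set_source_frame_rate(AVFilterContext *buffersrc, AVRational fps)
    {
        AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
        if (!par)
            return AVERROR(ENOMEM);
        par->frame_rate = fps;  /* per the doxygen, set only if really constant */
        int ret = av_buffersrc_parameters_set(buffersrc, par);
        av_free(par);
        return ret;
    }
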
wm4 3c123281a7 audio: change format negotiation, remove channel remix fudging
The audio format negotiation code was pretty complicated, although the
idea was simple: when the format changes (or on the first audio frame),
filter only the new frame through the entire filter chain, discard the
resulting frame, but use the format to initialize the AO.

This was useful for "fudging" the channel remix behavior (upmix or
downmix) and moving it before other filters. Apparently this was useful
for things like DRC filters, which might work better in stereo, and
which can only achieve the desired volume levels when applied before a
downmix (which would otherwise modify the volume). This mechanism was
introduced in commit 60048b7eb9 (whose commit message also describes it
as an "idiotic heuristic"). Knowing the output format is inherently
necessary for this, because otherwise we can't know what the hell the
user-defined filters will do.

There were problems with robustness. Some filters needed more than one
frame. Resampling in particular would discard initial audio at high
resampling ratios. Some filters might drop audio intentionally (like
clipping data on timestamp ranges). There were also allegations that
some decoders output 0 length frames (although that is invalid in
libavcodec). The state machine was excessively complex and hard to
understand too.

There are 3 things that could have been done:

1. Fix robustness problems by doing more heuristics, like repeating
   audio frames or simply decoding several frames. Since filters can
   behave differently, this would have added lots of complexity.
2. Make use of libavfilter's format negotiation, and add the same to
   mpv builtin filters. This is sort of annoying, because the format
   negotiation in libavfilter changes the state of the filters. It also
   reports only some parameters (mostly all for audio, but a lot of
   holes for video). It would remove some of the state machine, but not
   all.
3. Drop the channel remix fudging, and do the same as the video chain.
   This would not require format negotiation, but instead you can just
   filter the audio frames, and look what comes out of it. If nothing
   comes out, simply never create an AO.

This commit selects option 3. It removes the remix fudging, which means
the loss of a feature. Users can instead add "--af=format=channels=2"
before their DRC filter, or something. I'm also considering changing the
default for --audio-channels back to stereo, and downmix in the decoder
or at the start of the filter chain, which would give the same results,
except requiring more configuration.

Implementation-wise, this is still a bit different from the video path.
The VO always remains the same instance, while the AO might have to be
recreated on configuration changes. This still requires explicit format
change handling + draining old data, but by putting it into
f_autoconvert, not much new code is needed.
2018-04-15 23:11:33 +03:00
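
Option 3 in (rough) code form, with placeholder types rather than
mpv's real filter or AO API: feed decoded frames into the chain, use
the first frame that comes out to configure the AO, and if EOF arrives
without any output, never create an AO at all.

    #include <stdbool.h>

    /* Placeholder types for the sketch; the real logic is spread across the
     * filter chain and the AO glue code. */
    struct aformat { int samplerate, channels, sampleformat; };
    struct chain_state {
        bool eof;                /* chain drained without producing more output */
        bool have_output;        /* an output frame is available */
        struct aformat out_fmt;  /* format of that output frame */
    };

    enum probe_status { PROBE_WAIT, PROBE_GOT_FORMAT, PROBE_NO_AUDIO };

    static enum probe_status probe_ao_format(const struct chain_state *c,
                                             struct aformat *fmt)
    {
        if (c->have_output) {
            *fmt = c->out_fmt;     /* first output frame decides the AO format */
            return PROBE_GOT_FORMAT;
        }
        if (c->eof)
            return PROBE_NO_AUDIO; /* nothing ever came out: don't create an AO */
        return PROBE_WAIT;         /* feed more input frames and try again */
    }
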
wm4 4527409c8d audio: improve behavior if filters output nothing during probing
Just bail out immediately (and disable audio) if format probing has no
result, instead of doing nothing and then apparently freezing.

This can happen with bogus filters, cases where the first audio frame is
essentially dropped by filters (can happen with large resampling
factors), and if the audio track contains no packets at all, or all
packets fail to decode.
2018-02-21 22:35:24 -08:00
wm4 880ea467ca f_output_chain: remove unused got_input_eof field
Was used by the player code before decoders were moved to filters.
2018-02-03 05:01:32 -08:00
wm4 76e7e78ce9 audio: move to decoder wrapper
Use the decoder wrapper that was introduced for video. This removes all
code duplication the old audio decoder wrapper had with the video code.

(The audio wrapper was copy-pasted from the video one over a decade
ago, and has been kept in sync ever since by the power of copy&paste.
Since the original copy&paste was possibly done by someone who did not
respond to the LGPL relicensing, this should also remove all doubts
about whether any of that code is left, since we now completely remove
any code that could possibly have been based on it.)

There is some complication with spdif handling, and a minor behavior
change (it will restrict the list of codecs to spdif if spdif is to be
used), but there should not be any difference in practice.
2018-01-30 03:10:27 -08:00
wm4 6d36fad83c video: make decoder wrapper a filter
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes
a source filter. vd.h mostly disappears, because mp_filter takes care of
the dataflow, but its remains are in struct mp_decoder_fns.

One goal is to simplify dataflow by letting the filter framework handle
it (or more accurately, using its conventions). One result is that the
decode calls disappear from video.c, because we simply connect the
decoder wrapper and the filter chain with mp_pin_connect().

Another goal is to eventually remove the code duplication between the
audio and video paths for this. This commit prepares for this by trying
to make f_decoder_wrapper.c extensible, so it can be used for audio as
well later.

Decoder framedropping changes a bit. It doesn't seem to be worse than
before, and it's an obscure feature, so I'm content with its new state.
Some special code that was apparently meant to avoid dropping too many
frames in a row is removed, though.

I'm not sure how the source code tree should be organized. For one,
video/decode/vd_lavc.c is the only file in its directory, which is a bit
annoying.
2018-01-30 03:10:27 -08:00
wm4 b9f804b566 audio: rewrite filtering glue code
Use the new filtering code for audio too.
2018-01-30 03:10:27 -08:00
wm4 76276c9210 video: rewrite filtering glue code
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.

This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.

vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that cannot be inserted manually by the user.

f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c (which is
conceptually easy, but a big mess due to the data flow changes).

The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
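
A sketch of the "EOF is a frame" convention, with hypothetical types
(not the actual mp_filter API): end of stream travels through the chain
as a special frame that each filter forwards exactly once, after
flushing its own state.

    #include <stdbool.h>
    #include <stddef.h>

    enum frame_type { FRAME_NONE, FRAME_DATA, FRAME_EOF };

    struct frame { enum frame_type type; void *data; };

    struct passthrough { bool saw_eof; };

    /* Process one input frame; return at most one output frame. A real filter
     * would drain any queued output before forwarding the EOF frame. */
    static struct frame passthrough_process(struct passthrough *f, struct frame in)
    {
        if (in.type == FRAME_EOF) {
            f->saw_eof = true;    /* flush internal state here */
            return in;            /* pass EOF on exactly once */
        }
        if (in.type == FRAME_DATA)
            return in;            /* trivial 1:1 pass-through */
        return (struct frame){ .type = FRAME_NONE, .data = NULL };
    }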

Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)

In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.

vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.

The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.

Exporting VO capabilities is still a big mess (mp_stream_info stuff).

The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.

f_async_queue is unused.
2018-01-30 03:10:27 -08:00