Deprecated upstream 1cc24d7495
We need to reallocate the context here because `avcodec_free_context`
also frees the context itself, and we want to reuse the context after
some reconfiguration.
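A minimal sketch of the pattern, assuming the standard FFmpeg API (not
the exact code in the tree):

    #include <libavcodec/avcodec.h>

    // avcodec_free_context() frees the AVCodecContext itself, so a
    // fresh one has to be allocated before reconfiguring and reopening.
    static int reinit_context(AVCodecContext **avctx, const AVCodec *codec)
    {
        avcodec_free_context(avctx); // also sets *avctx to NULL
        *avctx = avcodec_alloc_context3(codec);
        if (!*avctx)
            return AVERROR(ENOMEM);
        // ... apply the new configuration, then avcodec_open2(...)
        return 0;
    }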
Why a bigger search-interval is required:
scaletempo2 doesn't do a good job when the signal contains frequencies
less than 1/search_interval. With a search interval of 30ms that means
anything below 33.333Hz sounds bad.
Depending on the genre, it's very common for music to contain frequencies
down to 30Hz, and sometimes even a little bit below that. Therefore a
higher
default value is needed to handle such cases.
Based on that an argument can be made for a value of 50, as that should
work down to 20Hz, or something even higher because movies sometimes
have some infrasonic content.
However, the downside of big search intervals is increased CPU usage and
reduced intelligibility at higher speeds, as it effectively leads to
parts of the audio being skipped.
A value of 40 can handle frequencies down to 25Hz, enough for all music
except very rare edge cases, while still providing decent
intelligibility.
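For reference, the lowest handled frequency follows directly from the
search interval:

    f_min = 1 / search_interval
    30 ms  ->  1 / 0.030 s  ~ 33.3 Hz
    40 ms  ->  1 / 0.040 s  =  25 Hz
    50 ms  ->  1 / 0.050 s  =  20 Hz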
Why a smaller window-size is required:
Large values reduce intelligibility at high speeds and therefore small
values are preferred.
However when values get too small it starts to sound weird
(similar to librubberband).
In my testing a value of 10 already works well, but adding a small
safety margin seems like a good idea, especially since it made no
noticeable difference to intelligibility, which is why 12 was chosen.
Avoid generating too much audio after EOF.
Note: This often has no effect, because less audio is produced than
required.
Usually this comes into effect with the userspeed filter at high speed
(4x) and going back to 1x speed to remove the filter.
After the final input packet, the filter padded the input with silence
to allow
one more iteration. That was not enough to process the final frames.
Continue padding the end of `input_buffer` with silence until the final
frames have been processed.
Implementation: Instead of padding when adding final samples, pad before
running WSOLA iteration. Count number of added silent frames and
remaining input frames for time keeping.
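A rough sketch of the new padding step; the helper names here are
hypothetical, the real code in af_scaletempo2_internals.c differs:

    // Before each WSOLA iteration: if the input has ended but the
    // buffer doesn't hold enough frames, pad the tail with silence so
    // the iteration can still run.
    if (p->input_eof && frames_available(p) < frames_needed(p)) {
        int pad = frames_needed(p) - frames_available(p);
        append_silence(p, pad);      // zero-fill the input_buffer tail
        p->num_silent_frames += pad; // counted for time keeping
    }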
This changes the emitted pts values from the start of the search block
to the center of the search block. Change initial `output_time`
accordingly. Initial `search_block_index` is irrelevant, because it's
overwritten before the first iteration.
Using the `output_time` removes the rounding of `search_block_index`,
which also fixes the <20 microsecond gaps in timestamps between output
packets.
Rationale:
The variance in audio position was in the range `0..search-interval`.
With this change, the range is
`(-search-interval/2)..(search-interval/2)`, which ensures a lower
maximum offset.
The target block can be anywhere in the previous search block, varying
by `search-interval` while the filter is active. This resulted in a
constant audio offset when returning to 1x playback speed.
- Move the search block to the target block to sync up exactly.
- Drop old frames to minimize input_buffer usage.
The internal time update function involved multiple problems:
- Time was updated after the WSOLA iteration. This meant the speed was
updated one iteration later than it could be.
- The update functions caused spikes where too many or too few samples
were advanced, leading to audio glitches on speed changes.
- The inconsistent updates made it very difficult to produce gapless
audio packets.
- The `output_time` update function involved complicated feedback:
`search_block_index` influenced how many frames from `input_buffer`
are retained, which influenced how much `output_time` is changed,
which influenced `search_block_index`.
With these changes:
- Time is updated before WSOLA iterations. Speed changes are effective
instantly.
- There are no spikes in playback speed during speed changes.
- No significant gaps are introduced in output packets.
- The time update function becomes (function calls omitted for brevity):

    output_time += ola_hop_size * playback_rate
Functions received a `playback_rate` parameter to check how many samples
are needed before iteration. Internal state is only updated when the
iteration is actually run, so the speed is allowed to change until
enough data is received.
The first WSOLA iteration overlapped audio with whatever was in the
`wsola_output` buffer. This was either silence (if not run before), or
old frames (if switching to 1x and back to a different speed).
Track the state of the output buffer and memcpy the whole window for the
first iteration instead.
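A sketch of the idea; the field and helper names are hypothetical:

    if (!p->wsola_output_started) {
        // Nothing valid to overlap with yet: copy the whole window
        // instead of overlap-adding against silence or stale frames.
        for (int ch = 0; ch < p->channels; ch++)
            memcpy(p->wsola_output[ch], optimal_block[ch],
                   p->ola_window_size * sizeof(float));
        p->wsola_output_started = true;
    } else {
        overlap_and_add(p, optimal_block); // normal WSOLA path
    }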
`output_time` is used to set the center of the search block.
Initializing both `search_block_index` and `output_time` with 0 caused
inconsistent search block movement during the first iterations.
Initialize with `search_block_center_offset` for consistency with initial
`search_block_index`.
Fixes #12028
There was an additional issue that audio was always delayed by half the
configured search-interval. This was caused by the `out` buffer length
not being included in the delay calculation.
Notes:
- Every WSOLA iteration advances the input buffer by _some amount_, and
always produces `ola_hop_size` frames of data in the output buffer.
- `mp_scaletempo2_fill_buffer` is always called with `ola_hop_size`.
- Thus, the rendered frames are always cleared immediately after
processing, and `num_complete_frames` is 0 in the delay calculation.
- The factors contributing to delay are:
- the pending samples in the input buffer according to the search
block position, and
- the pending rendered samples in the output buffer (always empty in
practice); see the sketch below.
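A sketch of the resulting delay derivation (hypothetical names; whether
the input part is divided by the playback rate depends on the time base
the caller expects):

    // Pending input, measured from the search block position:
    double pending_input = p->input_buffer_frames - p->search_block_index;
    // Rendered but unconsumed output (always 0 in practice, see above):
    double pending_output = p->num_complete_frames;
    double delay_frames = pending_input / playback_rate + pending_output;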
The frame_delay code looked like that of the rubberband filter, which
might not work for scaletempo2. Sometimes a different amount of input
audio was consumed by scaletempo2 than expected. It may have been caused
by speed changes being a more dynamic process in scaletempo2. This can
be seen from where `playback_rate` is used in `run_one_wsola_iteration`:
`playback_rate` is only referenced after the iteration, when updating
the time and removing old data from buffers.
In scaletempo2, the playback speed is applied by changing the amount the
search block is moved. That apparently averages out correctly at
constant playback speed, but when the speed changes, the error in this
assumption probably spikes. This error accumulated across all speed
changes because of the persistent `frame_delay` value.
With the removal of the persistent `frame_delay`, there should be no way
for the audio to drift off. Since the delay is derived from the filter
buffer positions, and the buffers are filled only as much as needed, the
delay always stays within buffer bounds.
c784820454 introduced a bool option type
as a replacement for the flag type, but didn't actually transition and
remove the flag type because it would have been too much mundane work.
When af_scaletempo2.c:process() detects a format change, it goes back
through mp_scaletempo2_init() to reinitialize everything. However,
mp_scaletempo2.input_buffer is not necessarily reallocated due to a
check in af_scaletempo2_internals.c:resize_input_buffer(). This is a
problem if the number of audio channels increases, since without
reallocating, the buffer for the new channel(s) will at best point to
NULL, and at worst to uninitialized memory.
Since resize_input_buffer() is only called from two places, pull the
size check out into mp_scaletempo2_fill_input_buffer(). This allows each
caller to decide whether they want to resize or not. We could be
smarter about when to reallocate, but that would add a lot of machinery
for a case I don't expect to be hit often in practice.
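A sketch of the refactor (simplified, error handling omitted; the real
code uses mpv's allocation helpers):

    static void resize_input_buffer(struct mp_scaletempo2 *p, int size)
    {
        // Unconditionally reallocate; callers decide when it's needed.
        p->input_buffer_size = size;
        for (int ch = 0; ch < p->channels; ch++)
            p->input_buffer[ch] =
                realloc(p->input_buffer[ch], size * sizeof(float));
    }

    // In mp_scaletempo2_fill_input_buffer():
    if (needed_size > p->input_buffer_size)
        resize_input_buffer(p, needed_size);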
Simply returning out of this function leaks avpkt, need to always "goto
done".
Rewrite the logic a bit to make it more clear what's going on (IMO).
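The usual pattern, illustrated (not the exact function):

    AVPacket *avpkt = av_packet_alloc();
    int ret = -1;
    if (!avpkt)
        goto done;
    // ... all error paths jump to done instead of returning ...
    ret = 0;
done:
    av_packet_free(&avpkt); // runs on every exit path, so no leak
    return ret;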
Fixes #9593
This brings my scaletempo2 benchmark down from ~22s to ~7s on my machine
(-march=native), and down to ~11s with a generic compile.
Guarded behind an appropriate #ifdef to avoid being ableist against
people who have the clinical need to run obscure platforms.
Closes #8848
scaletempo2 is a new audio filter for playing back
audio at modified speed and is based on chromium
commit 51ed77e3f37a9a9b80d6d0a8259e84a8ca635259.
It sounds subjectively better than the existing
implementations scaletempo and rubberband.
This mode drops or repeats audio data to adapt to video speed, instead
of resampling it or such. It was added to deal with SPDIF. The
implementation was part of fill_audio_out_buffers() - the entire
function is something whose complexity exploded in my face, and which I
want to clean up, and this is hopefully a first step.
Put it in a filter, and mess with the shitty glue code. It's all sort of
roundabout and illogical, but that can be rectified later. The important
part is that it works much like the resample or scaletempo filters.
For PCM audio, this does not work on samples anymore. This makes it much
worse. But for PCM you can use saner mechanisms that sound better. Also,
something about PTS tracking is wrong. But not wasting more time on
this.
Replace use of .min==1 with a proper flag. This is a good idea, because
it has nothing to do with numeric limits (also see commit 9d32d62b61
for how this can go wrong).
With this, m_option.min/max are strictly used for numeric limits.
Change all OPT_* macros such that they don't define the entire m_option
initializer, and instead expand only to a part of it, which sets certain
fields. This requires changing almost every option declaration, because
they all use these macros. A declaration now always starts with
{"name", ...
followed by designated initializers only (possibly wrapped in macros).
The OPT_* macros now initialize the .offset and .type fields only,
sometimes also .priv and others.
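An illustrative before/after (simplified; not an exact declaration from
the tree):

    // before: the entire m_option initializer came from the macro
    OPT_FLOAT("volume", softvol_volume, 0),

    // after: designated initializers; OPT_FLOAT() sets .type/.offset only
    {"volume", OPT_FLOAT(softvol_volume)},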
I think this change makes the option macros less tricky. The old code
had to stuff everything into macro arguments (and attempted to allow
setting arbitrary fields by letting the user pass designated
initializers in the vararg parts). Some of this was made messy due to
C99 and C11 not allowing 0-sized varargs with ',' removal. It's also
possible that this change is pointless, other than cosmetic preferences.
Not too happy about some things. For example, the OPT_CHOICE()
indentation I applied looks a bit ugly.
Much of this change was done with regex search&replace, but some places
required manual editing. In particular, code in "obscure" areas (which I
didn't include in compilation) might be broken now.
In wayland_common.c the author of some option declarations confused the
flags parameter with the default value (though the default value was
also properly set below). I fixed this with this change.
Before this commit, option declarations used M_OPT_MIN/M_OPT_MAX (and
some other identifiers based on these) to signal whether an option had
min/max values. Remove these flags, and make an option use a range
implicitly under the condition that min<max is true.
This requires care in all cases when only M_OPT_MIN or M_OPT_MAX were
set (instead of both). Generally, the commit replaces all these
instances with using DBL_MAX/DBL_MIN for the "unset" part of the range.
This also happens to fix some cases where you could pass over-large
values to integer options, which were silently truncated, but now cause
an error.
This commit has some higher potential for regressions.
This filter wasn't referenced anywhere and thus was dead code. It should
have been in the audio filter list in user_filters.c. This was intended
as compatibility wrapper (to avoid breaking old command lines and config
files), and has no real use. Apparently I forgot to add it to the filter
list (did I even test this shit?), and so it was rotting around for 1.5
years doing nothing (just like myself).
Note that users can just use the libavfilter provided filter to force
resampling, just that it has a different name and different options.
There's also af_format to force inserting auto conversion through the
internal f_swresample filter.
This helps the filter to adapt much faster to speed changes. Before this
commit, the filter just converted and output the full input frame, which
could cause problems with large input frames. This was made worse by
certain filters like dynaudnorm or loudnorm outputting pretty large
frames.
This commit changes the filter from trying to convert all input at once
to only outputting a single internally filtered frame. Internally, this
filter already output data in units of 60ms by default (controlled by
the "stride" sub-option), and concatenated as many output frames as
necessary to consume all input.
Behavior is still kind of bad when inserting the filter. This is because
the large frames can be buffered up after the insertion point, so the
speed change will be performed with a larger latency. The scaletempo
filter can't do anything against this, although it can be fixed by
inserting scaletempo as user filter as part of --af.
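For example, inserting it as a user filter:

    mpv --af=scaletempo --speed=2.0 video.mkv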
This commit introduces the multiply-pitch af-command. Users may bind
keys to this command in order to incrementally adjust the pitch of a
track. This will probably mostly be useful for musicians trying to
transpose up and down by semitones without having to calculate
the correct ratio beforehand.
As an example, here is an input.conf to test this feature:
{ af-command all multiply-pitch 0.9438743126816935
} af-command all multiply-pitch 1.0594630943592953
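(The two ratios are 2^(-1/12) and 2^(1/12), i.e. exactly one
equal-tempered semitone down and up.)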
The future direction might be not having such a user-visible filter at
all, similar to how vf_scale went away (or actually, redirects to
libavfilter's vf_scale).
This is part of trying to get rid of --af-defaults, and the af
resample filter.
It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
This does what af_volume used to do. Since we couldn't relicense it,
just rewrite it. Since we don't have a new filter mechanism yet, and
libavfilter is too inconvenient, apply the volume gain in ao.c directly.
This is done before handing the audio data to the driver.
Since push.c runs a separate thread, and pull.c is called asynchronously
from the audio driver's thread, the volume value needs to be
synchronized. There's no existing central mutex, so do some shit with
atomics. Since there's no atomic_float type predefined (which is at
least needed when using the legacy wrapper), do some nonsense about
reinterpret casting the float value to an int for the purpose of atomic
access. Not sure if using memcpy() is undefined behavior, but for now I
don't care.
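A sketch of the trick (C11, simplified; the real code wraps this
differently):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    static _Atomic uint32_t gain_bits; // float bits in an atomic int

    static void set_gain(float v)
    {
        uint32_t bits;
        memcpy(&bits, &v, sizeof(bits)); // type-pun via memcpy
        atomic_store(&gain_bits, bits);
    }

    static float get_gain(void)
    {
        uint32_t bits = atomic_load(&gain_bits);
        float v;
        memcpy(&v, &bits, sizeof(v));
        return v;
    }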
The advantage of not using a filter is lower complexity (no filter auto
insertion), and lower latency (gain processing is done after our
internal audio buffer of at least 200ms).
Disadvantages include the inability to use native volume control
_before_ other filters with custom filter chains, and the need to add
new processing for each new sample type.
Since this doesn't reuse any of the old GPL code, nor does it indirectly
rely on it, volume and replaygain handling now works in LGPL mode.
How to process the gain is inspired by libavfilter's af_volume (LGPL).
In particular, we use exactly the same rounding, and we quantize
processing for integer sample types by 256 steps. Some of libavfilter's
copyright may or may not apply, but I think not, and it's the same
license anyway.
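A sketch of the quantized integer path (modeled on the description
above; not the exact code):

    #include <math.h>
    #include <stdint.h>

    // Gain quantized to 1/256 steps, as in libavfilter's af_volume.
    static int16_t apply_gain_s16(int16_t sample, float gain)
    {
        int ivol = lrintf(gain * 256);
        int v = (sample * ivol + 128) >> 8; // scale, with rounding
        if (v < INT16_MIN) v = INT16_MIN;   // clip to the sample type
        if (v > INT16_MAX) v = INT16_MAX;
        return (int16_t)v;
    }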
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).
This removes the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there's one), but for now let it be
gone.
This temporarily breaks volume control (and things related to it, like
replaygain).