Print them as a warning.
Note that there may be cases where an underrun occurs without being an
actual problem, e.g. if the last chunk has been written and playback
resumes some time after that. Eventually I want to add more code to
avoid such spurious warnings.
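For context, this is roughly where such an underrun surfaces in an ALSA
playback loop (a sketch, not mpv's actual code; MP_WARN stands in for the
AO's logging):

    snd_pcm_sframes_t res = snd_pcm_writei(pcm, data, frames);
    if (res == -EPIPE) {
        // Underrun (XRUN): print it as a warning instead of recovering silently.
        MP_WARN(ao, "ALSA underrun!\n");
        snd_pcm_prepare(pcm); // reset the device so playback can continue
    }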
Since there is a dedicated thread feeding audio to the ALSA API from an
internal buffer with a larger size, there is little reason to also use a
large device buffer.
One can now set the number of buffers and the buffer size.
This can reduce CPU usage, and the total latency stays mostly the same.
Since there are sync mechanisms, A/V sync remains intact and working.
It also modifies the 6.1 channel order, as per the OpenAL spec,
and adds AOPLAY_FINAL_CHUNK support.
OpenAL Soft's AL_SOFT_source_latency extension allows one to correctly
get the device output latency, facilitating synchronization with
video.
Also added a simpler generic fallback that does not take the latency of
the device into account.
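A sketch of how the extension is used (entry points must be loaded at
runtime; error handling omitted):

    if (alIsExtensionPresent("AL_SOFT_source_latency")) {
        LPALGETSOURCEDVSOFT alGetSourcedvSOFT =
            alGetProcAddress("alGetSourcedvSOFT");
        ALdouble v[2];
        alGetSourcedvSOFT(source, AL_SEC_OFFSET_LATENCY_SOFT, v);
        // v[0]: current playback offset (seconds), v[1]: device output
        // latency (seconds); subtracting v[1] gives the position that is
        // actually audible right now.
    }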
Uses OpenAL Soft's AL_DIRECT_CHANNELS_SOFT extension, which can be
controlled through a new CLI option, --openal-direct-channels.
This allows one to send the audio data directly to the desired channel,
without effects applied.
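Enabling it is a single per-source attribute; a minimal sketch, assuming
the extension was detected first:

    if (alIsExtensionPresent("AL_SOFT_direct_channels"))
        alSourcei(source, AL_DIRECT_CHANNELS_SOFT, AL_TRUE);
    // Each input channel is now mapped 1:1 to the matching output channel,
    // bypassing spatialization and effects.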
Although half (non-fast track at the sink rate) or one third (non-fast track not at the sink rate) of the buffer size of the created AudioTrack instance is basically enough for dropout-free playback when used as the SL Enqueue buffer size, only using the full size avoids stutter upon (re)start of playback.
Here are the various buffer sizes for different track/sink rates with Bluetooth audio on Android O:
aptX @ 48kHz:
  Sink rate: 48000 Hz
  44100 Hz: 10632 frames (241.09 ms)
  48000 Hz: 11544 frames (240.50 ms)
  88200 Hz: 21216 frames (240.54 ms)
  96000 Hz: 23088 frames (240.50 ms)
  176400 Hz: 42384 frames (240.27 ms)
  192000 Hz: 46128 frames (240.25 ms)
SBC/AAC/aptX @ 44.1kHz:
  Sink rate: 44100 Hz
  44100 Hz: 10776 frames (244.35 ms)
  48000 Hz: 11748 frames (244.75 ms)
  88200 Hz: 21552 frames (244.35 ms)
  96000 Hz: 23448 frames (244.25 ms)
  176400 Hz: 43056 frames (244.08 ms)
  192000 Hz: 46848 frames (244.00 ms)
The above results were produced with the following code:
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioTrack;

class AudioInfo {
    public static void main(String[] args) {
        // 3 == AudioManager.STREAM_MUSIC
        int nosr = AudioTrack.getNativeOutputSampleRate(3);
        System.out.printf("Sink rate: %d Hz\n", nosr);
        int[] rates = {44100, 48000, 88200, 96000, 176400, 192000};
        for (int rate : rates) {
            // 256 == AudioAttributes.FLAG_LOW_LATENCY
            AudioAttributes aa = new AudioAttributes.Builder().setFlags(256).build();
            AudioFormat af = new AudioFormat.Builder().setSampleRate(rate).build();
            // 4-byte buffer request (rounded up by the framework),
            // 1 == AudioTrack.MODE_STREAM, audio session id 0
            AudioTrack at = new AudioTrack(aa, af, 4, 1, 0);
            int sr = at.getSampleRate();
            int bs = at.getBufferSizeInFrames();
            float ms = bs * (float) 1000 / sr;
            at.release();
            System.out.printf("%d Hz: %d frames (%.2f ms)\n", sr, bs, ms);
        }
    }
}
Therefore, bump the device buffer size to 250 ms.
If you set desired.samples to 0, SDL will return a default buffer size
in obtained.samples. This was broken, because ceil_power_of_two(0)
returns 1. Since 0 is usually not considered a power of two, that is
arguably correct behavior, but we still want to pass desired.samples as
0 in this case.
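A minimal sketch of the fixed logic (variable names illustrative;
ceil_power_of_two is the helper named above):

    // Only round up when a buffer size was actually requested; keep 0 as
    // the "let SDL pick a default" value instead of turning it into 1.
    desired.samples = opts_samples ? ceil_power_of_two(opts_samples) : 0;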
You can use --audio-buffer=0 to minimize the audio buffer size. But if
the AO reports no device buffer size (as e.g. ao_jack does), the buffer
size is actually 0, and playback can never work properly.
Make it fall back to a size of 1, which is still unlikely to work
properly, but at least you get what you asked for instead of a freeze.
While the soft buffer size already defaults to 200 ms, that is not enough to guarantee dropout-free playback on Bluetooth audio. Bumping the device buffer size to the same value seems to suffice.
This helps the filter to adapt much faster to speed changes. Before this
commit, the filter just converted and output the full input frame, which
could cause problems with large input frames. This was made worse by
certain filters like dynaudnorm or loudnorm outputting pretty large
frames.
This commit changes the filter from trying to convert all input at once
to outputting only a single internally filtered frame per iteration.
Internally, the filter already produces data in units of 60ms by default
(controlled by the "stride" sub-option), and used to concatenate as many
output frames as necessary to consume all input.
Behavior is still somewhat bad when the filter is inserted mid-stream.
This is because large frames can already be buffered beyond the
insertion point, so the speed change is performed with a larger latency.
The scaletempo filter can't do anything about this, although it can be
avoided by inserting scaletempo as a user filter as part of --af.
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. the binary codec loader vs. builtin decoders). It lost
its meaning in mpv as the non-libavcodec parts were dropped. Now it
doesn't serve any purpose anymore.
serve any purpose anymore.
Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
Use the decoder wrapper that was introduced for video. This removes all
code duplication the old audio decoder wrapper had with the video code.
(The audio wrapper was copy-pasted from the video one over a decade ago,
and has been kept in sync ever since by the power of copy&paste. Since
the original copy&paste was possibly done by someone who did not respond
to the LGPL relicensing request, this should also remove all doubts
about whether any of this code is left, since we now completely remove
any code that could possibly have been based on it.)
There is some complication with spdif handling, and a minor behavior
change (it will restrict the list of codecs to spdif if spdif is to be
used), but there should not be any difference in practice.
Always make the hw params dump function use MSGL_DEBUG, and remove the
MSGL_V use. That means you need -v -v to see them. The detailed
information is usually not very interesting, so this reduces the log
noise.
The af_get_best_sample_formats() function had an argument of type
int[AF_FORMAT_COUNT], which is slightly incorrect, because the array is
0-terminated and should in theory have AF_FORMAT_COUNT+1 entries. It
won't actually write that many formats (since some formats are
fundamentally incompatible), but it still feels annoying and incorrect.
So fix it, and require that callers pass an AF_FORMAT_COUNT+1 array.
Note that the array size has no meaning in C function arguments (just
another issue with C static arrays being weird and stupid), so get rid
of it completely.
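To illustrate (parameter names are made up): the size in a C array
parameter is purely decorative, so both declarations below are the same
function, and only the caller controls how much room actually exists:

    void af_get_best_sample_formats(int src_format, int out_formats[AF_FORMAT_COUNT]);
    void af_get_best_sample_formats(int src_format, int *out_formats); // equivalent

    // Callers now have to provide the extra slot for the 0 terminator:
    int formats[AF_FORMAT_COUNT + 1];
    af_get_best_sample_formats(src_fmt, formats);
    for (int n = 0; formats[n]; n++)
        ; // try formats[n]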
Not changing the af_lavcac3enc use, since that is rewritten in another
branch anyway.
This commit eliminates the following clang warning:
warning: macro expansion producing 'defined' has undefined behavior [-Wexpansion-to-defined]
Going by the clang commit message, this seems to be explicitly specified
as UB by the standard, and they added this warning because MSVC
apparently results in different behavior. Whatever, we can just avoid
the warning with some small changes.
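The pattern looks like this (HAVE_A/HAVE_B are placeholder names):

    /* UB: 'defined' is produced by macro expansion */
    #define HAVE_BOTH (defined(HAVE_A) && defined(HAVE_B))
    #if HAVE_BOTH   /* clang: -Wexpansion-to-defined */
    #endif

    /* fine: the defined() operators are written directly in the #if */
    #if defined(HAVE_A) && defined(HAVE_B)
    #endif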
This commit introduces the multiply-pitch af-command. Users may bind
keys to this command in order to incrementally adjust the pitch of a
track. This will probably be most useful for musicians trying to
transpose up and down by semitones, without having to calculate the
correct ratio (a semitone corresponds to a factor of 2^(1/12) ≈
1.059463) beforehand.
As an example, here is an input.conf snippet that binds { and } to shift the pitch down or up by one semitone:
{ af-command all multiply-pitch 0.9438743126816935
} af-command all multiply-pitch 1.059463094352953
The future direction might be to not have such a user-visible filter at
all, similar to how vf_scale went away (or rather, became a redirect to
libavfilter's vf_scale).
This is part of trying to get rid of --af-defaults, and the af
resample filter.
It requires a complicated mechanism to set the defaults on the resample
filter for backwards compatibility.
If feed_packet() ended with DATA_WAIT, the player should have gone to
sleep, until the demuxer wakes it up again when there is new data. But
the call to read_frame() unconditionally overwrote this status code, so
it never waited. The consequence was that the core burned CPU by
effectively polling the demuxer status, which was noticeable especially
when seeking in network streams (since seeking is async, decoders will
start out with having to wait for network).
Regression since commit 33e5755c.
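A sketch of the bug and the fix (status names as in the text, the
surrounding playloop code elided):

    int st = feed_packet(ctx);
    // Broken: st = read_frame(ctx); unconditionally overwrote a DATA_WAIT
    // from feed_packet(), so the core polled instead of sleeping until the
    // demuxer wakeup. Fixed by preserving the wait status:
    if (st != DATA_WAIT)
        st = read_frame(ctx);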
The old code tried to make sure at all times to try to read a new
packet. Only once that was read, it tried to retrieve new video or audio
frames the decoder might already have decoded.
Change this to strictly read frames from the decoder until it signals
that it wants a new packet, and only then read and feed a new packet.
This is in theory nicer, follows the libavcodec recommended data flow,
and reduces the minimum latency by 1 frame.
This merely requires switching the order in which those calls are done.
Normally, the decoder will return only 1 frame until a new packet is
required. If we would just feed it 1 packet, return DATA_AGAIN, and wait
until the next frame is decoded, we would run the playloop 1 time too
often for no reason (which is fine but might have some overhead). To
avoid this, try to read a frame again after possibly feeding a packet.
For this reason, split the feed and read code into their own functions,
instead of merely moving the code around.
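The resulting order matches libavcodec's recommended send/receive flow;
roughly, using the raw libavcodec calls instead of mpv's wrappers:

    int ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR(EAGAIN)) {
        // Decoder wants input: feed one packet, then immediately try to
        // read again, so the playloop doesn't run one extra iteration.
        avcodec_send_packet(avctx, pkt);
        ret = avcodec_receive_frame(avctx, frame);
    }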
The audio and video code for this particular thing is basically
duplicated. The idea is to unify them one day, so make the change to
both. (Doing this for video is the real motivation for this change, see
below.)
The video code change is slightly more complicated, because we have to
care about the framedrop counting (which is just a heuristic, but for
now considered better than nothing, and possibly considered required to
warn the user of framedrops happening - maybe).
Apparently this change helps with stalling streams on Android with the
mediacodec wrapper and mpeg2 decoder implementations which deinterlace on
decoding (and return 2 frames per packet).
Based on an idea and observations by tmm1.
A release has been made, so drop options deprecated for that release.
Also drop some options which have been deprecated a much longer time
before.
Also fix a typo in client-api-changes.rst.
stdatomic.h defines no atomic_float typedef. We can't just use _Atomic
unconditionally, because we support compilers without C11 atomics. So
just create a custom atomic_float typedef in the wrapper, which uses
_Atomic in the C11 code path.
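A sketch of such a wrapper (the feature-test macro name is illustrative):

    #if HAVE_C11_ATOMICS
    #include <stdatomic.h>
    typedef _Atomic float atomic_float;
    #else
    /* legacy wrapper: whatever emulation the header otherwise provides */
    typedef struct { volatile float v; } atomic_float;
    #endif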
This does what af_volume used to do. Since we couldn't relicense it,
just rewrite it. Since we don't have a new filter mechanism yet, and
libavfilter is too inconvenient, apply the volume gain in ao.c
directly. This is done before handing the audio data to the driver.
Since push.c runs a separate thread, and pull.c is called asynchronously
from the audio driver's thread, the volume value needs to be
synchronized. There's no existing central mutex, so do some shit with
atomics. Since there's no atomic_float type predefined (which would at
least be needed when using the legacy wrapper), do some nonsense about
reinterpret-casting the float value to an int for the purpose of atomic
access. Not sure if using memcpy() is undefined behavior, but for now I
don't care.
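(For what it's worth, memcpy-based punning like this is the well-defined
way to reinterpret object representations in C.) A sketch of the scheme:

    static void store_gain(atomic_int *a, float gain)
    {
        int i;
        memcpy(&i, &gain, sizeof(i)); // reinterpret the float's bits
        atomic_store(a, i);
    }

    static float load_gain(atomic_int *a)
    {
        int i = atomic_load(a);
        float gain;
        memcpy(&gain, &i, sizeof(gain));
        return gain;
    }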
The advantage of not using a filter is lower complexity (no filter auto
insertion), and lower latency (gain processing is done after our
internal audio buffer of at least 200ms).
Disadvantages include the inability to apply native volume control
_before_ other filters in custom filter chains, and the need to add new
processing for each new sample type.
Since this doesn't reuse any of the old GPL code, nor does it indirectly
rely on it, volume and replaygain handling now works in LGPL mode.
How to process the gain is inspired by libavfilter's af_volume (LGPL).
In particular, we use exactly the same rounding, and we quantize
processing for integer sample types by 256 steps. Some of libavfilter's
copyright may or may not apply, but I think not, and it's the same
license anyway.
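A sketch of the integer path with that quantization (clipping helper and
sample layout illustrative, not mpv's exact code):

    static int16_t clip16(int v)
    {
        return v < INT16_MIN ? INT16_MIN : v > INT16_MAX ? INT16_MAX : v;
    }

    static void apply_gain_s16(int16_t *samples, size_t count, float gain)
    {
        int vol = lrintf(gain * 256); // gain quantized to 1/256 steps
        for (size_t i = 0; i < count; i++)
            samples[i] = clip16((samples[i] * vol) >> 8);
    }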
These couldn't be relicensed, and won't survive the LGPL transition. The
other existing filters are mostly LGPL (except libaf glue code).
This removes the deprecated pan option. I guess it could be restored by
inserting a libavfilter filter (if there is one), but for now let it be
gone.
This temporarily breaks volume control (and things related to it, like
replaygain).
Looks like this is covered by LGPL relicensing agreements now.
Notes about contributors who could not be reached or who didn't agree:
Commit 7fccb6486e has tons of mp_msg changes that look like they are not
copyrightable (even if they were, all mp_msg calls were rewritten in
mpv times again). The additional play() change looks suspicious, but
the function was rewritten several times anyway (the first time after
that commit being in 4f40ec312).
Commit 89ed1748ae was rewritten in commit 325311af3 and then again
several times after that. Basically all this code is unnecessary in
modern mpv and has been removed.
No code survived from the following commits: 4d31c3c53, 61ecf838f2,
d38968bd, 4deb67c3f. At least two cosmetic typo fixes are likewise
disregarded.
Commit 22bb046ad is reverted (this wasn't a valid warning anyway, just
a C++-ism icc applied to C). Using the constants is nicer, but at least
I don't have to decide whether that change was copyrightable.
Apparently some people want this. Actually making it compile is still
their problem, though, and I expect that builds against FFmpeg upstream
will occasionally be broken (as they are right now). This is because mpv also
relies on API provided by Libav, and if FFmpeg hasn't merged that yet,
it's not our problem - we provide a version of FFmpeg upstream with
those changes merged, and it's called ffmpeg-mpv.
Also adjust the README which still talked about FFmpeg releases.
I _think_ this confuses Coverity into thinking there is uninitialized
data to be read. Initialize the array to change/remove the warning, or,
if there's a real problem, to make it easier to detect. (Basically,
apply defensive coding.)