A previous commit moved the underrun reporting to report_underruns(),
and called it from get_space(). One reason was that I worried about
printing a log message from a "realtime" callback, so I tried to move it
out of the way. (Though there's little justification other than a bad
feeling. While an older version of the pull code tried to avoid any
mutexes at all in the callback to accommodate "requirements" from APIs
like jackaudio, we gave up on that. Nobody has complained yet.)
Simplify this and move underrun reporting back to the callback. But
instead of printing the message from there, move the message into the
playloop. Change the message slightly, because ao->log is inaccessible,
and without the log prefix (e.g. "[ao/alsa]"), some context is missing.
AOs can report audio underruns, but only ao_alsa and ao_sdl (???)
currently do so. If the AO was marked as not reporting it, the cache
state was used to determine whether playback was interrupted due to slow
input.
This caused problems in some cases, such as video with a very low
frame rate: when a new frame is displayed, a new frame has to be
decoded, and since that frame is so much further into the file (long
frame durations), the cache gets into an underrun state for a short
moment,
even though both audio and video are playing fine. Enlarging the audio
buffer didn't help.
Fix this by making all AOs report underruns. If the AO driver does not
report underruns, fall back to using the buffer state.
pull.c behavior is slightly changed. Pull AOs are normally intended to
be used by pseudo-realtime audio APIs that fetch an audio buffer from
the API user via callback. I think a buffer underflow should be
considered an underrun in every situation, since we return silence to
the reader. (OK, maybe the reader could check the return value? But
let's not go there as long as there's no implementation.)
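A sketch of that policy (schematic; read_ringbuffer() and
pad_silence() are hypothetical helpers, not pull.c's actual
internals):

    static int read_callback(struct ao *ao, void *buf, int samples)
    {
        // A short read is always an underrun: the reader gets a full
        // buffer either way, with the missing part zeroed to silence.
        int got = read_ringbuffer(ao, buf, samples);
        if (got < samples) {
            pad_silence(buf, got, samples);
            ao_underrun_event(ao);
        }
        return samples;
    }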
Remove the flag from ao_sdl.c, since it just worked via the generic
mechanism. Make the redundant underrun message verbose only.
push.c seems to log a redundant underflow message when resuming (because
somehow ao_play_data() is called when there's still no new data in the
buffer). But since ao_alsa does its own underrun reporting, and I only
use ao_alsa, I don't really care.
Also in all my tests, there seemed to be a rather high delay until the
underflow was logged (with audio only). I have no idea why this happened
and didn't try to debug this, but there's probably something wrong
somewhere.
This commit may cause random regressions.
See: #7440
If ao_add_events() is used, but all event flags are already set, then
we don't need to wake up the core again.
Also, make the underrun message "exact" by avoiding the race condition
mentioned in the comment.
Avoiding redundant wakeups is not really worth the trouble on its own;
it's just a bonus of the actual change: making ao_underrun_event()
return whether a new underrun was set, which the following commit
needs.
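A minimal sketch of the resulting logic (field names approximate mpv's
internals, assuming events_ is an atomic_int; not the exact code):

    #include <stdatomic.h>

    // Atomically OR in the new flags; wake up the core only if at
    // least one of them was not already set. Doing set-and-test in a
    // single atomic step also closes the race mentioned above.
    void ao_add_events(struct ao *ao, int events)
    {
        int old = atomic_fetch_or(&ao->events_, events);
        if ((old & events) != events)   // some flag was newly set
            ao->wakeup_cb(ao->wakeup_ctx);
    }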
Before this commit, runtime changes were only applied if something else
caused audio to be reinitialized. Now setting them reinitializes audio
explicitly.
The code is very basic:
- only handles gamepads, could be extended for generic joysticks in the
future.
- only has button mappings for controllers natively supported by SDL2.
I heard more can be added through env vars, and there are also ways to
load mappings from text files, but I'd rather not go there yet. Common
ones
like Dualshock are supported natively.
- analog buttons (TRIGGER and AXIS) are mapped to discrete buttons
using an activation threshold (see the sketch after this list).
- only supports one gamepad at a time. The feature is intended to use
gamepads as evolved remote controls, not to play multiplayer games in
mpv :)
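A rough illustration of the threshold mapping (schematic; the constant
and state table are made up for this example, not the actual code):

    #include <SDL.h>
    #include <stdbool.h>

    #define TRIGGER_THRESHOLD 16384  // about half of SDL's Sint16 axis range

    static bool axis_pressed[SDL_CONTROLLER_AXIS_MAX];

    // Turn a continuous axis value into discrete press/release events.
    static void handle_axis(const SDL_ControllerAxisEvent *ev)
    {
        bool pressed = ev->value >= TRIGGER_THRESHOLD;
        if (pressed != axis_pressed[ev->axis]) {
            axis_pressed[ev->axis] = pressed;
            // forward a key-down/key-up event to the player core here
        }
    }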
This was all dead code. Commit 995c47da9a (over 3 years ago) removed all
uses of the controls.
It would be nice if AOs could apply a linear gain volume, that only
affects the AO's audio stream for low-latency volume adjust and muting.
AOCONTROL_HAS_SOFT_VOLUME was supposed to signal this, but to use it,
we'd have to thoroughly check whether it really uses the expected
semantics, so there's really nothing useful left in this old code.
See previous commits. ao_sdl is worthless, but it might be a good test
for pull-based AOs.
This stops using the old underrun reporting if the new one is enabled.
Also, since an AO's behavior can in theory deviate from expectations,
this needs to be enabled for every single pull AO separately.
For some reason, in certain cases I get multiple underrun warnings while
cache-pausing is active. It fills the cache, restarts the AO,
immediately underruns again, and then fills the cache again. I'm not
sure why this happens; maybe ao_sdl tries to catch up when it shouldn't.
Who knows.
I think this was _always_ wrong. Due to the line above the first changed
line, buffered_bytes==bytes always. I can only hope I broke this in a
later, under-tested edit, and not when I originally wrote it.
Fixes: c5a82f729b
AOs can now call ao_underrun_event() (in any context) if an underrun has
happened. It will print a message.
This will be used in the following commits. But for now, audio.c only
clears the underrun bit, so that subsequent underruns still print the
warning message.
Since the underrun flag will be used in fragile ways by the playback
state machine, there is the "reports_underruns" field that signals
strong support for underrun reporting. (Otherwise, underrun events will
not be used by it.)
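Schematically, driver-side usage looks like this (illustrative, not a
complete AO):

    // Declare strong support, so the playback state machine may rely
    // on our underrun events instead of guessing from the buffer state.
    const struct ao_driver audio_out_example = {
        .name = "example",
        .reports_underruns = true,
        // ...the usual init/play/etc. callbacks...
    };

    // Called from whatever thread notices the underrun:
    static void on_xrun(struct ao *ao)
    {
        ao_underrun_event(ao);  // may be called from any context
    }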
This commit tries to prepare for better underrun reporting. The goal is
to report underruns relatively immediately. Until now, this happened
only when play() was called. Change this, and abuse the fact that
get_delay() is called "relatively often" - this reports the underrun
almost immediately in practice.
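For example, in an ALSA-like driver the check could sit directly in
get_delay() (schematic; p->alsa stands for the driver's snd_pcm_t
handle):

    static double get_delay(struct ao *ao)
    {
        struct priv *p = ao->priv;
        // Report the xrun as soon as the core polls us, instead of
        // waiting until it has new data and calls play() again.
        if (snd_pcm_state(p->alsa) == SND_PCM_STATE_XRUN)
            ao_underrun_event(ao);
        return 0; // schematic: the real function computes the device delay
    }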
Background:
In commit 81e51a15f7 (and also e38b0b245e), we were quite confused
about ALSA underrun handling. The commit message showed uncertainty how
case 3 happened, but it's blindingly obvious and simple.
Actually reading the code shows that ALSA does not have a concept of a
"final chunk" (or we don't use it). It's obvious we never pass the
AOPLAY_FINAL_CHUNK flag along to the ALSA API in any way. The only thing
we do is simply writing a partial fragment. Of course this will cause an
underrun. Doing a partial write saves us the trouble of padding the
last frame with silence, or so.
The main reason why the underrun message was avoided was that play() was
never called with a non-0 sample count again (except if reset() was
called before that). That was OK, at least the goal of avoiding the
unwanted message was reached. (And the original "bogus" message at end
of playback was perfectly correct, as far as ALSA goes.)
If the network stalls, play() will be called again only once new data
is available. Obviously, this could take a long time, thus it's too
late.
It turns out that case 2) mentioned in the previous commit happened
quite often when playback ended normally.
There is probably a legitimate underrun with normal buffer sizes (100
ms, 4 fragments, gapless audio in "weak" mode). This is a result of the
player waiting for video to end, and/or the time needed to kill the
video window. The former case means that it depends on your test case
whether it happens (a file where video ends slightly before audio is
less likely to trigger it).
This in turn is due to how gapless playback works. Avoiding a "gap"
requires queuing the audio of the next file without playing a partial
chunk first (as AOPLAY_FINAL_CHUNK would do). The partial chunk is
then played as part of the first chunk played from the next file. But if
it detects "later" that there is no next file, it still needs to get rid
of the last fragment with AOPLAY_FINAL_CHUNK. At this point it's too
late, and an underrun may have actually happened. The way the player
uninits and reinits the entire playback engine for the next file in a
"serial" manner means it cannot know in advance whether this works.
This is the reason why the idiot who added the underrun exception for
the last chunk in play() was wrong (I wrote that btw., before you accuse
me of being rude). Yes, it's a real underrun, and you could probably
hear it.
This XRUN (aka underrun) message was printed in the following
situations:
1) legitimate underrun during playback
2) legitimate underrun when playing final chunk
3) bogus underrun when playing final chunk
The old underrun check (in play()) triggers in cases 1) and 2) as
well, but not in 3). It appears 3) is indeed something that happens,
although it's not known for sure. It's still pretty annoying, so
remove the new XRUN message.
When testing, care should be taken to play with buffer sizes, video
versus no video, and gapless enabled/disabled. Also, suspending the
player with Ctrl+Z in the terminal (SIGSTOP) and then resuming is a good
way to trigger a "normal" underrun.
ioctl(..., SNDCTL_DSP_CHANNELS, &nchannels) does not return an error
for an unsupported nchannels value; instead, it sets nchannels to the
default value.
Instead of failing with no audio, fall back to stereo.
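A sketch of the fallback, assuming an open OSS fd:

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    // The ioctl "succeeds" even for unsupported counts, so the result
    // must be compared against the request; on mismatch, retry with
    // plain stereo.
    static int set_channels(int fd, int requested)
    {
        int nchannels = requested;
        if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1)
            return -1;                    // a real ioctl error
        if (nchannels != requested) {
            nchannels = 2;                // fall back to stereo
            if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1)
                return -1;
        }
        return nchannels;                 // channel count actually in use
    }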
This flag makes mpv continue using the PulseAudio driver even if the
sink is suspended.
This can be useful if JACK is running with PulseAudio in bridge mode and
the sink-input assigned to mpv is the one JACK controls, thus being
suspended.
By forcing mpv to still use PulseAudio in this case, the user can now
adjust the sink to an unsuspended one.
According to the ALSA doxy, EPIPE is a synonym for SND_PCM_STATE_XRUN,
and that is a state that we should attempt to automatically recover
from. In case recovery fails, log an error and return zero.
A warning message will still be output for each XRUN since those
are not something we should generally be receiving.
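The recovery path, roughly (assuming err is the negative errno from a
failed snd_pcm_writei() and p->alsa is the device handle; logging
macros are mpv's):

    if (err == -EPIPE) {
        MP_WARN(ao, "XRUN!\n");                  // still worth flagging
        err = snd_pcm_recover(p->alsa, err, 1);  // silent automatic recovery
        if (err < 0) {
            MP_ERR(ao, "XRUN recovery failed: %s\n", snd_strerror(err));
            return 0;
        }
    }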
This has been a long time coming, and it took me this long to notice
that a whole lot of ao_alsa functions do an early return if the AO is
paused.
For the STATE_SETUP case, I had this reproduced once, and never since.
Still, it seems we can start calling this function before the ALSA
device has been fully initialized, so we might as well exit early in
that case.
ao->device_buffer will only affect the enqueue size if the latter is
not specified. In other words, its intended purpose will solely be
setting/guarding the soft buffer size.
This guarantees that the soft buffer size will be consistent whether
or not a specific enqueue size is set. (In the past it would drop to
the default of the generic audio-buffer option.)
opensles-frames-per-buffer has been renamed to
opensles-frames-per-enqueue, as it was never meant to set the soft
buffer size. It will only make sure the size is never smaller than
itself, just as before.
opensles-buffer-size-in-ms is introduced to allow easy tuning of
the relative (i.e. in time) soft buffer size (and enqueue size,
unless the aforementioned option is set). As "device buffer" never
really made sense in this AO, this option OVERRIDES audio-buffer
whenever its value (including the default) is larger than 0.
Setting opensles-buffer-size-in-ms to 1 conveniently allows you to
equate the soft buffer size to the absolute enqueue size set with
opensles-frames-per-enqueue (unless it is less than 1 ms).
When both are set to 0, audio-buffer will be the ultimate fallback.
If audio-buffer is also 0, the AO errors out.
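Roughly, the resolution order described above (schematic; the variable
names are not the AO's actual code):

    // Mirrors the priority rules: opensles-buffer-size-in-ms first,
    // then audio-buffer, never below the enqueue size, else error.
    static int resolve_buffer_frames(struct ao *ao, struct priv *opts,
                                     int generic_buffer_frames)
    {
        int frames;
        if (opts->buffer_size_in_ms > 0) {
            // opensles-buffer-size-in-ms overrides audio-buffer
            frames = ao->samplerate * opts->buffer_size_in_ms / 1000;
        } else if (generic_buffer_frames > 0) {
            frames = generic_buffer_frames;  // audio-buffer as fallback
        } else {
            return -1;                       // both are 0: error out
        }
        if (frames < opts->frames_per_enqueue)
            frames = opts->frames_per_enqueue;  // never below enqueue size
        return frames;
    }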
Fixes a bug with alsa dmix on Fedora 29. After several minutes,
audio suddenly becomes bad and muted.
Actually, I don't know what causes this. Probably this is a bug in alsa.
In any case, since snd_pcm_status() returns not only 'avail' but also
other fields such as tstamp, htstamp, etc., this can be considered a
good simplification, as only avail is required for this function.
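In code, the simplification amounts to (schematic; p->alsa is the
device handle):

    // snd_pcm_avail() returns just the free frame count, with no
    // status struct to allocate or parse.
    snd_pcm_sframes_t avail = snd_pcm_avail(p->alsa);
    if (avail < 0)
        avail = 0;   // on error, report no free space instead of failing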
Until recently, ao_lavc and vo_lavc started encoding whenever the core
happened to send them data. Since audio and video are not initialized at
the same time, and the muxer was not necessarily opened when the first
encoder started to produce data, the resulting packets were put into a
queue. As soon as the muxer was opened, the queue was flushed.
Change this to make the core wait with sending data until all encoders
are initialized. This has the advantage that we don't need to queue up
the packets.
The main change is that we wait with opening the muxer ("writing
headers") until we have data from all streams. This fixes race
conditions at init due to broken assumptions in the old code.
This also changes a lot of other stuff. I found and fixed a few API
violations (often things for which better mechanisms were invented, and
the old ones are not valid anymore). I try to get away from the public
mutex and shared fields in encode_lavc_context. For now it's still
needed for some timestamp-related fields, but most are gone. It also
removes some bad code duplication between audio and video paths.
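Schematically, the new ordering looks like this (the type and counters
are hypothetical, not the actual encode_lavc_context):

    #include <libavformat/avformat.h>
    #include <stdbool.h>

    // Hypothetical muxer state; mirrors the "wait for all streams"
    // rule described above.
    struct mux_state {
        AVFormatContext *avf;
        int num_ready_streams, num_expected_streams;
        bool header_written;
    };

    static bool try_start_muxing(struct mux_state *mux)
    {
        if (mux->num_ready_streams < mux->num_expected_streams)
            return false;            // keep waiting; core sends no data yet
        if (avformat_write_header(mux->avf, NULL) < 0)
            return false;            // init failure
        mux->header_written = true;
        return true;                 // encoders may now produce packets
    }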
Print them as a warning.
Note that there may be some cases where an underrun happens without
being a bad condition. This could possibly happen e.g. if the last
chunk is written, and playback then resumes some time after that.
Eventually I want to add more code to avoid such spurious warnings.
There is a dedicated thread for feeding audio to the ALSA API from a
buffer with a larger size. There is little reason to have such a large
device buffer.
One can now set the number of buffers and the buffer size.
This can reduce CPU usage while the total latency stays mostly the
same. As there are sync mechanisms, A/V sync remains intact and
working.
It also modifies the 6.1 channel order, as per the OpenAL spec, and
adds AOPLAY_FINAL_CHUNK support.