6863eefc3d handled this situation by using
an atomic variable to express the state in which the wakeup is caused
by AO control, and the dispatch queue is only processed in that state.
However, this can cause permanent lockup of the player core when the
following happens:
- AO control sets the thread state to WASAPI_THREAD_DISPATCH, and
sets the wakeup handle.
- WASAPI thread reads the WASAPI_THREAD_DISPATCH state and processes
the dispatch queue.
- Another AO control happens. A dispatch item is enqueued, and the
state stays at WASAPI_THREAD_DISPATCH.
- WASAPI thread resets the thread state to WASAPI_THREAD_FEED since
the state has not changed.
- WaitForSingleObject() returns in the WASAPI thread, sees this state,
and does not process the dispatch queue.
- The player core locks permanently because it is waiting for the dispatch
to be processed.
This has been experimentally verified on a system under high contention:
The easiest way to trigger this lockup is to continuously hold down "i",
which rapidly issues AO get volume/mute controls.
To properly handle this, use separate handles for system and user wakeup
requests. Only feed audio when woken up by the system, and only
process the dispatch queue when woken up by the user.
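A rough sketch of the resulting wait loop (handle and helper names are
illustrative, not necessarily mpv's exact ones; only
WaitForMultipleObjects and mp_dispatch_queue_process are taken as-is):
```c
HANDLE events[2] = {hFeedEvent, hUserWake}; // hFeedEvent: set by the audio
                                            // engine; hUserWake: set by AO control
while (thread_running) {
    switch (WaitForMultipleObjects(2, events, FALSE, INFINITE)) {
    case WAIT_OBJECT_0:       // system wakeup: the device wants more audio
        thread_feed(ao);
        break;
    case WAIT_OBJECT_0 + 1:   // user wakeup: only run queued AO controls
        mp_dispatch_queue_process(state->dispatch, 0);
        break;
    }
}
```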
Fixes: 6863eefc3d
This allows users to set buffer duration in exclusive mode. We have
been using the default device period as the buffer size and it is
robust enough in most cases. However, on some devices there are
horrible glitches after a stream reset. Unfortunately, the issue is not
consistently reproducible, but using a smaller buffer size (e.g., the
minimum device period) seems to resolve the problem.
Fixes #13715.
"playthread" is a confusing name which doesn't describe what it really
is. Rename it to ao_thread, and ao_wakeup_playthread to ao_wakeup,
in the same style as VO threads. This makes call stack function names
less confusing.
A figure from pipewire documentation:
```
            stream time domain           graph time domain
         /-----------------------\/-----------------------------\

 queue     +-+ +-+  +-----------+                 +--------+
 ---->     | | | |->| converter | ->   graph  ->  | kernel | -> speaker
 <----     +-+ +-+  +-----------+                 +--------+
 dequeue   buffers           \-------------------/\--------/
                                     graph         internal
                                    latency        latency
         \--------/\-------------/\-----------------------------/
           queued     buffered                  delay
```
We calculate `end_time` in the following steps:
1. get current timestamp in mpv
```
int64_t end_time = mp_time_ns();
```
2. add duration of samples to enqueue
```
end_time += MP_TIME_S_TO_NS(nframes) / ao->samplerate;
```
3. add delay of the pipewire graph
```
end_time += MP_TIME_S_TO_NS(time.delay) * time.rate.num / time.rate.denom;
```
4. add duration of queued and buffered samples.
```
end_time += MP_TIME_S_TO_NS(time.queued) / ao->samplerate;
end_time += MP_TIME_S_TO_NS(time.buffered) / ao->samplerate;
```
New in this commit. `time.queued` is usually zero because `SPA_PARAM_BUFFERS_buffers`
defaults to 1; however, it is not always zero.
`time.buffered` is non-zero if there is a resampler involved.
5. subtract the time elapsed since `time` was captured
```
end_time -= pw_stream_get_nsec(p->stream) - time.now;
```
New in this commit. `time` was captured at `time.now`.
Time has passed since then, so we exclude the elapsed time,
computed as the difference between `pw_stream_get_nsec()` and `time.now`.
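Putting the steps together (a sketch that just combines the snippets
above; it assumes the surrounding ao_pipewire context, i.e. `time`
filled by `pw_stream_get_time_n()`, `p->stream`, and `nframes` being
the frames about to be queued):
```c
struct pw_time time = {0};
pw_stream_get_time_n(p->stream, &time, sizeof(time));

int64_t end_time = mp_time_ns();                                            // 1. now
end_time += MP_TIME_S_TO_NS(nframes) / ao->samplerate;                      // 2. samples to enqueue
end_time += MP_TIME_S_TO_NS(time.delay) * time.rate.num / time.rate.denom;  // 3. graph delay
end_time += MP_TIME_S_TO_NS(time.queued) / ao->samplerate;                  // 4. queued
end_time += MP_TIME_S_TO_NS(time.buffered) / ao->samplerate;                //    and buffered
end_time -= pw_stream_get_nsec(p->stream) - time.now;                       // 5. minus elapsed time
```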
`hotplug_cb` was registered only in `hotplug_init()`.
This commit makes it registered in `init()` as well,
so that the AO can listen for latency changes
during playback.
`buf` contains a `struct spa_data` for each channel.
Therefore the number of channels does not matter when calculating the frame capacity of one `struct spa_data`.
In practice this shouldn't make a difference as `b->requested` would reduce nframes even more.
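As a sketch of the resulting calculation (assuming the planar layout
where each `struct spa_data` carries exactly one channel; the pipewire
and mpv identifiers are real, but the surrounding code is illustrative):
```c
struct spa_data *d = &b->buffer->datas[0];
int sample_size = af_fmt_to_bytes(ao->format);   // bytes of one sample, one channel
uint32_t max_frames = d->maxsize / sample_size;  // frames fitting into one plane
nframes = MPMIN(nframes, max_frames);
if (b->requested)
    nframes = MPMIN(nframes, b->requested);      // server may ask for even fewer
```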
During AO init, snd_pcm_open() is called, which calls snd_config_update()
to allocate a global config node and stores it in the snd_config global
variable. This is never freed on uninit.
Fix this by freeing the global config node on uninit.
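A minimal sketch of the fix, assuming it is safe to drop the global
config once the AO no longer needs it:
```c
static void uninit(struct ao *ao)
{
    struct priv *p = ao->priv;
    if (p->alsa)
        snd_pcm_close(p->alsa);
    // Free the global config node that snd_config_update() allocated
    // during snd_pcm_open(); otherwise it leaks until process exit.
    snd_config_update_free_global();
}
```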
The device latency may change during hotplugging.
This commit updates p->hw_latency_ns each time
hotplug_cb is called so that it can reflect
updated device latency.
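A minimal sketch of the idea (the latency helper is hypothetical; the
real code would query whatever the AO already uses at init time):
```c
static void hotplug_cb(struct ao *ao)
{
    struct priv *p = ao->priv;
    // ... existing hotplug handling (device add/remove events) ...
    // Refresh the cached latency so later timing math matches the
    // currently active device rather than the one present at init.
    p->hw_latency_ns = query_device_latency_ns(ao);  // hypothetical helper
}
```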
As far as I can tell, PulseAudio introduced a bug in 16.0
where, if a stream is (un)paused too often, the reported latency
will momentarily spike by 3000% or more. Apparently, in certain cases
just pausing once and waiting can also cause this.
Save the remaining users of PA the trouble of debugging the various
obscure issues that can arise from this (desync is a harmless example)
by enabling the latency hack code again.
ref: <https://github.com/mpv-player/mpv/issues/12057>
<https://github.com/mpv-player/mpv/issues/10333>
Commit 39f7f83 changed ao_driver.reset to use AudioUnitReset instead of
AudioOutputUnitStop. The problem with calling AudioOutputUnitStop was
that AudioOutputUnitStart takes a significant amount of time after a
stop when a wireless audio device is being used. This resulted in
lagging that was noticeable to users during seeking and short
pause/resume cycles. Switching to AudioUnitReset eliminated this
lagging.
However with the switch to AudioUnitReset the macOS daemon coreaudiod
continued to consume CPU time and did not release a powerd assertion
that it created on behalf of mpv, preventing macOS from sleeping.
This commit changes ao_coreaudio's reset to call AudioOutputUnitStop
after a delay if playback has not resumed. This preserves the faster
restart of playback for seeking and short pause/resume cycles, and
avoids preventing sleep and needless CPU consumption.
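A simplified sketch of the approach (the two-second delay, the queue,
and the `playing`/`reset_gen` fields are assumptions for illustration,
not necessarily mpv's actual implementation):
```c
static void reset(struct ao *ao)
{
    struct priv *p = ao->priv;
    // Keep the fast path: reset immediately instead of stopping.
    AudioUnitReset(p->audio_unit, kAudioUnitScope_Global, 0);
    // Invalidate any previously scheduled stop, then schedule a new one.
    int64_t gen = ++p->reset_gen;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC),
                   dispatch_get_main_queue(), ^{
        // Only stop if playback has not resumed in the meantime.
        if (!p->playing && gen == p->reset_gen)
            AudioOutputUnitStop(p->audio_unit); // lets coreaudiod idle and
                                                // release its powerd assertion
    });
}
```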
Fixes #11617
The code changes were authored by @orion1vi and @lhc70000.
Co-authored-by: Collider LI <lhc199652@gmail.com>
Currently, the softvol gain control attempts to clip floating point ao
formats within -1 and +1. However, this is "optimized out" at unity gain,
where no clipping is applied. This results in inconsistent behavior when
the source audio is already out of -1 and +1 range, where a gain of 0.99
results in clipping, but not at exactly 1.
A big advantage of floating-point audio data is that out-of-range
samples lose no information: the AO sink can apply a suitable negative
gain to prevent clipping before converting them to integer formats.
Clipping should therefore not be performed on such data.
Fix this by removing the existing clipping behavior. The gain control
now reduces to a simple multiplication, which facilitates compiler
auto-vectorization of this operation over audio data.
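The resulting operation is essentially the following (illustrative,
not the literal filter code):
```c
// Apply gain without clamping; out-of-range float samples are preserved,
// and the loop is trivial for the compiler to auto-vectorize.
static void apply_gain(float *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++)
        samples[i] *= gain;  // previously: also clamped to [-1, 1] unless gain == 1
}
```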
Currently, running AO control wakes up the WASAPI renderer thread in the
`WASAPI_THREAD_FEED` state, where `thread_feed` will be called. However,
it seems that in recent Windows versions (tested on Windows 10 build
19044.3930 and Windows 11 build 22631.3007) we can't know if it is safe
to feed more audio data in event-driven exclusive mode:
- `IAudioClient_GetCurrentPadding` always returns `bufferFrameCount`,
even if *NO* data has ever been written. This means we don't know how
much free space we have that is available for writing. This is not the
case in shared mode, where the return value correctly reflects the
size of data waiting to be processed. As a sidenote, MS did not
document the precise definition of the return value for an
event-driven, exclusive stream [1].
- `IAudioRenderClient_GetBuffer` never fails. We can call it 10 times
in a row, each time requesting an entire buffer (the unit at which
data is exchanged in exclusive mode using event-driven buffering;
there are 2 such buffers), and get a successful return code every
time. In shared mode, we get `AUDCLNT_E_BUFFER_TOO_LARGE` if we
request a buffer larger than what is currently available.
As a result, `thread_feed` will always write `bufferFrameCount` frames
of audio in exclusive mode. There will therefore be glitches each time
`thread_control` is called due to the subsequent `thread_feed`
overwriting frames yet to be processed. Also, an irreversible error is
accumulated to `sample_count` as long as there is no AO reset, leading
to eventual, unbounded A/V desync.
As a fix to the issue, add a dedicated state for dispatch queue
processing so that `thread_feed` is only called when signaled by the OS.
The buffer checks in `thread_feed` that use `GetCurrentPadding` in
exclusive mode are kept in case there are older versions where the two
APIs behave differently.
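A simplified sketch of the dedicated-state idea (enum and field names
are illustrative): the wakeup reason is recorded atomically, so the
thread feeds audio only for OS wakeups and runs the dispatch queue
only for control wakeups.
```c
enum wasapi_thread_state {
    WASAPI_THREAD_FEED,      // woken by the buffer event: write more audio
    WASAPI_THREAD_DISPATCH,  // woken by an AO control: run the dispatch queue
};

// Control path (player core):
atomic_store(&state->thread_state, WASAPI_THREAD_DISPATCH);
SetEvent(state->hWake);

// WASAPI thread, after WaitForSingleObject(state->hWake, INFINITE):
if (atomic_exchange(&state->thread_state, WASAPI_THREAD_FEED)
        == WASAPI_THREAD_DISPATCH)
    mp_dispatch_queue_process(state->dispatch, 0);
else
    thread_feed(ao);
```
As the first entry above explains, funneling both wakeup reasons
through a single event handle still leaves a race, which the later
change fixes by using separate handles.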
Closes #12615.
[1] https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-getcurrentpadding
As mentioned in [0] the suffix "_locked" would have been the appropriate
naming in line with similar uses inside mpv.
See `mp_abort_recheck_locked()`, `mp_abort_trigger_locked()`,
`retrigger_locked()`, `wakeup_locked()`...
[0] https://github.com/mpv-player/mpv/pull/12811#discussion_r1477518525
Stopping output implies that it can't be paused anymore.
This is consistent with the documented API in internal.h as well
as the behavior of other AOs.
resolves #13267
In commit c09245cdf2
long-path support was enabled for mpv without actually
making sure that there was no code left that used the
old limit (260 Unicode chars) for buffer sizes.
This commit fixes all but one case.
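The typical pattern of such a fix, sketched here with
GetFullPathNameW as an arbitrary example (not necessarily one of the
call sites touched by this commit): query the required length instead
of assuming 260 wide chars.
```c
// Ask for the needed size (in WCHARs, including the terminator), then allocate.
DWORD len = GetFullPathNameW(path, 0, NULL, NULL);
wchar_t *buf = talloc_array(NULL, wchar_t, len);
GetFullPathNameW(path, len, buf, NULL);
```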
- Don't define _GNU_SOURCE on Windows, no need
- Define WIN32_LEAN_AND_MEAN to strip some unneeded headers from
windows.h
- Define NOMINMAX and _USE_MATH_DEFINES as they are common for Windows
headers
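In effect, every translation unit now sees the equivalent of the
following before Windows headers are pulled in (illustrative; the
actual defines come from the build system):
```c
#define WIN32_LEAN_AND_MEAN   // trim rarely used APIs from windows.h
#define NOMINMAX              // keep windows.h from defining min()/max() macros
#define _USE_MATH_DEFINES     // expose M_PI and friends from <math.h>
#include <windows.h>
#include <math.h>
```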