6863eefc3d handled this situation by using an atomic variable to express
the state in which the wakeup is caused by AO control, and the dispatch
queue is only processed in this state.
However, this can cause permanent lockup of the player core when the
following happens:
- AO control sets the thread state to WASAPI_THREAD_DISPATCH, and
sets the wakeup handle.
- WASAPI thread reads the WASAPI_THREAD_DISPATCH state and processes
the dispatch queue.
- Another AO control happens. A dispatch item is enqueued, and the
state stays at WASAPI_THREAD_DISPATCH.
- WASAPI thread resets the thread state to WASAPI_THREAD_FEED since
the state has not changed.
- WaitForSingleObject() returns in the WASAPI thread, sees this state,
and does not process the dispatch queue.
- The player core locks permanently because it is waiting for the dispatch
to be processed.
This has been experimentally verified on a system under high contention:
The easiest way to trigger this lockup is to continuously hold down "i",
which rapidly issues AO get volume/mute controls.
To properly handle this, use separate handles for system and user wakeup
requests. Only feed audio when woken up by the system, and only process
the dispatch queue when woken up by the user.
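A minimal sketch of the two-handle wait loop (all names here are
illustrative, not the actual mpv symbols):
```
#include <stdbool.h>
#include <windows.h>

// Hypothetical thread state; the real WASAPI state has more fields.
struct state {
    HANDLE hFeedEvent;   // signaled by the OS when a buffer needs data
    HANDLE hUserWakeup;  // signaled by AO control
    bool terminate;
};

static void thread_feed(struct state *s)     { /* write audio */ (void)s; }
static void thread_dispatch(struct state *s) { /* run the queue */ (void)s; }

static DWORD WINAPI render_thread(void *arg)
{
    struct state *s = arg;
    HANDLE handles[2] = {s->hFeedEvent, s->hUserWakeup};
    while (!s->terminate) {
        // With separate handles, a user wakeup can never be mistaken
        // for a system wakeup (or vice versa), so no shared state flag
        // is needed and the race described above disappears.
        switch (WaitForMultipleObjects(2, handles, FALSE, INFINITE)) {
        case WAIT_OBJECT_0:     // system wakeup: feed audio
            thread_feed(s);
            break;
        case WAIT_OBJECT_0 + 1: // user wakeup: process the dispatch queue
            thread_dispatch(s);
            break;
        }
    }
    return 0;
}
```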
Fixes: 6863eefc3d
This allows users to set the buffer duration in exclusive mode. We have
been using the default device period as the buffer size, which is robust
enough in most cases. However, on some devices there are
horrible glitches after a stream reset. Unfortunately, the issue is not
consistently reproducible, but using a smaller buffer size (e.g., the
minimum device period) seems to resolve the problem.
Fixes #13715.
Adds support for extracting the codec profile. The old properties are
redirected to the new ones and removed from the docs. They will likely
stay like that forever, as there is no reason to remove them.
As an effect of the unification of properties between audio and video,
video-codec will now print the codec (format) descriptive name, not the
decoder long name as it did before. In practice this change fixes what the
docs say. If you really need the decoder name, use
`track-list/N/decoder-desc`.
Playback with many audio channels could be distorted when using
scaletempo2. This was most noticeable when there were a lot of quiet
channels and few louder channels.
Fix this by increasing the weight of louder channels in relation to
quieter channels. Each channel's target block energy is factored into
the usual similarity measure.
This should have little effect on very correlated channels (such as most
stereo media), where the factors are very similar for all channels.
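A minimal sketch of the weighting (all names are illustrative, not the
actual scaletempo2 symbols):
```
// Weight each channel's contribution to the similarity measure by the
// energy of its target block, so louder channels dominate the match
// and quiet channels can no longer skew it.
static float block_energy(const float *ch, int num_frames)
{
    float energy = 0;
    for (int i = 0; i < num_frames; i++)
        energy += ch[i] * ch[i];
    return energy;
}

static float weighted_similarity(float **target, float **candidate,
                                 int channels, int num_frames)
{
    float sum = 0;
    for (int c = 0; c < channels; c++) {
        float corr = 0;
        for (int i = 0; i < num_frames; i++)
            corr += target[c][i] * candidate[c][i];
        sum += corr * block_energy(target[c], num_frames);
    }
    return sum;
}
```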
See-Also: #8705
See-Also: #13737
Lots of filters have generic internal function names like "process".
In a stack trace, all of the different filters use this name, which makes
it hard to tell which filter is actually being processed.
This renames these internal function names to carry the filter names.
This matches what had already been done for some filters.
"playthread" is a confusing name which doesn't describe what it really
is. Rename it to ao_thread, and ao_wakeup_playthread to ao_wakeup,
in the same style as VO threads. This makes call stack function names
less confusing.
A figure from the pipewire documentation:
```
stream time domain graph time domain
/-----------------------\/-----------------------------\
queue +-+ +-+ +-----------+ +--------+
----> | | | |->| converter | -> graph -> | kernel | -> speaker
<---- +-+ +-+ +-----------+ +--------+
dequeue buffers \-------------------/\--------/
graph internal
latency latency
\--------/\-------------/\-----------------------------/
queued buffered delay
```
We calculate `end_time` in the following steps:
1. get current timestamp in mpv
```
int64_t end_time = mp_time_ns();
```
2. add duration of samples to enqueue
```
end_time += MP_TIME_S_TO_NS(nframes) / ao->samplerate;
```
3. add delay of the pipewire graph
```
end_time += MP_TIME_S_TO_NS(time.delay) * time.rate.num / time.rate.denom;
```
4. add duration of queued and buffered samples.
```
end_time += MP_TIME_S_TO_NS(time.queued) / ao->samplerate;
end_time += MP_TIME_S_TO_NS(time.buffered) / ao->samplerate;
```
New in this commit. `time.queued` is usually zero because
`SPA_PARAM_BUFFERS_buffers` defaults to 1, but not always.
`time.buffered` is non-zero if there is a resampler involved.
5. subtract the duration elapsed since `time` was captured
```
end_time -= pw_stream_get_nsec(p->stream) - time.now;
```
New in this commit. `time` is captured at `time.now`. Time has passed
since then, so we subtract the elapsed time, computed as the difference
between `pw_stream_get_nsec()` and `time.now`.
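Putting the five steps together, the full calculation reads (all names as
in the snippets above):
```
int64_t end_time = mp_time_ns();                             // 1. now
end_time += MP_TIME_S_TO_NS(nframes) / ao->samplerate;       // 2. to enqueue
end_time += MP_TIME_S_TO_NS(time.delay)                      // 3. graph delay
            * time.rate.num / time.rate.denom;
end_time += MP_TIME_S_TO_NS(time.queued) / ao->samplerate;   // 4. queued
end_time += MP_TIME_S_TO_NS(time.buffered) / ao->samplerate; //    buffered
end_time -= pw_stream_get_nsec(p->stream) - time.now;        // 5. elapsed
```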
No idea how things previously worked without having these set, but
apparently they did...
If this were a normal encoder-to-muxer case, we would use
`avcodec_parameters_to_context`, but alas it is not.
Fixes: #13794
`hotplug_cb` was registered only in `hotplug_init()`.
This commit makes it registered in `init()` as well,
so that the ao can listen for latency changes
during playback.
`buf` contains a `struct spa_data` for each channel.
The number of channels is therefore irrelevant when calculating the frame
capacity of one `struct spa_data`.
In practice this shouldn't make a difference, as `b->requested` would
reduce nframes even more.
During AO init, snd_pcm_open() is called, which calls snd_config_update()
to allocate a global config node and stores it in the snd_config global
variable. This is never freed on uninit.
Fix this by freeing the global config node on uninit.
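A minimal standalone sketch of the fix, assuming only the standard ALSA
API:
```
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    // snd_pcm_open() calls snd_config_update(), which allocates the
    // global config node stored in the snd_config global variable.
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) == 0)
        snd_pcm_close(pcm);
    // What the fix adds on uninit: free the global config node.
    snd_config_update_free_global();
    return 0;
}
```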
The device latency may change during hotplugging.
This commit updates p->hw_latency_ns each time
hotplug_cb is called, so that it reflects the
updated device latency.
With certain speed settings, the following can happen at the start of
the playback:
- can_perform_wsola returns false, so no frames are written
- mp_scaletempo2_frames_available returns true when
p->input_buffer_final_frames is 0 and target_block_index < 0
This results in an infinite loop that completely stalls audio filter
processing and playback. Fix this by only checking this condition
after the final frame is set.
Fixes: 8080d00d7f
As far as I can tell PulseAudio introduced a bug in 16.0
where if a stream is (un)paused too often the reported latency
will momentarily spike by 3000% or more. Apparently in certain cases
just pausing once and waiting can also cause this.
Save the remaining users of PA the trouble of debugging the various
obscure issues that can arise from this (desync is a harmless example)
by enabling the latency hack code again.
ref: <https://github.com/mpv-player/mpv/issues/12057>
<https://github.com/mpv-player/mpv/issues/10333>
Commit 39f7f83 changed ao_driver.reset to use AudioUnitReset instead of
AudioOutputUnitStop. The problem with calling AudioOutputUnitStop was
that AudioOutputUnitStart takes a significant amount of time after a
stop when a wireless audio device is being used. This resulted in
lagging that was noticeable to users during seeking and short
pause/resume cycles. Switching to AudioUnitReset eliminated this
lagging.
However, with the switch to AudioUnitReset, the macOS daemon coreaudiod
continued to consume CPU time and did not release a powerd assertion that
it created on behalf of mpv, preventing macOS from sleeping.
This commit will change ao_coreaudio.reset to call AudioOutputUnitStop
after a delay if playback has not resumed. This preserves the faster
restart of playback for seeking and short pause/resume cycles and avoids
preventing sleep and needless CPU consumption.
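A hedged sketch of the scheme; the grace period and the helper name are
illustrative, only AudioUnitReset/AudioOutputUnitStop are the real
CoreAudio calls:
```
#include <AudioToolbox/AudioToolbox.h>
#include <dispatch/dispatch.h>
#include <stdbool.h>

// Reset immediately (fast resume), but if playback has not resumed
// after a grace period, fully stop the unit so coreaudiod can idle
// and release its powerd assertion.
static void reset_with_delayed_stop(AudioUnit unit, volatile bool *resumed)
{
    AudioUnitReset(unit, kAudioUnitScope_Global, 0);
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC),
                   dispatch_get_main_queue(), ^{
        if (!*resumed)
            AudioOutputUnitStop(unit);
    });
}
```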
Fixes #11617
The code changes were authored by @orion1vi and @lhc70000.
Co-authored-by: Collider LI <lhc199652@gmail.com>
Currently, the softvol gain control attempts to clip floating point ao
formats within -1 and +1. However, this is "optimized out" at unity gain,
where no clipping is applied. This results in inconsistent behavior when
the source audio is already out of -1 and +1 range, where a gain of 0.99
results in clipping, but not at exactly 1.
Since a big advantage of floating point audio data is that no information
is lost on out-of-range data (the ao sink can apply a suitable negative
gain to prevent clipping before converting to integer formats), clipping
should not be performed on these data.
Fix this by removing the existing clipping behavior. Gain application is
now a simple multiplication, which facilitates compiler auto-vectorization
of this operation over audio data.
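A minimal sketch of the resulting operation (the function name is
illustrative):
```
#include <stddef.h>

// Gain application without clamping: a plain multiply, which the
// compiler can vectorize over the audio buffer.
static void apply_gain(float *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++)
        samples[i] *= gain;
}
```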
Currently, running AO control wakes up the WASAPI renderer thread in the
`WASAPI_THREAD_FEED` state, where `thread_feed` will be called. However,
it seems that in recent Windows versions (tested on Windows 10 build
19044.3930 and Windows 11 build 22631.3007) we can't know if it is safe
to feed more audio data in event-driven exclusive mode:
- `IAudioClient_GetCurrentPadding` always returns `bufferFrameCount`,
even if *NO* data has ever been written. This means we don't know how
much free space we have that is available for writing. This is not the
case in shared mode, where the return value correctly reflects the
size of data waiting to be processed. As a side note, MS did not document
the precise definition of the return value for an event-driven,
exclusive stream [1].
- `IAudioRenderClient_GetBuffer` never fails. We can call it 10 times in
a row, each time requesting an entire buffer (the unit at which data is
exchanged in exclusive mode using event-driven buffering; there are 2
such buffers) and get a successful return code every time. In shared
mode, we get `AUDCLNT_E_BUFFER_TOO_LARGE` if we request a buffer larger
than that currently available.
As a result, `thread_feed` will always write `bufferFrameCount` frames
of audio in exclusive mode. There will therefore be glitches each time
`thread_control` is called due to the subsequent `thread_feed`
overwriting frames yet to be processed. Also, an irreversible error
accumulates in `sample_count` as long as there is no AO reset, leading
to eventual, unbounded A/V desync.
As a fix to the issue, add a dedicated state for dispatch queue
processing so that `thread_feed` is only called when signaled by the OS.
The buffer checks in `thread_feed` that use `GetCurrentPadding` in
exclusive mode are kept in case there are older versions where the two
APIs behave differently.
Closes #12615.
[1] https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-getcurrentpadding
Deprecated upstream in 1cc24d7495
We need to reallocate the context here because `avcodec_free_context`
also frees the context itself, and we want to reuse the context after a
reconfig.
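A minimal sketch of the pattern (the helper name is illustrative; the
two libavcodec calls are real):
```
#include <libavcodec/avcodec.h>

// avcodec_free_context() frees the AVCodecContext struct itself and
// sets the pointer to NULL, so reuse requires a fresh allocation.
static void recreate_context(AVCodecContext **avctx, const AVCodec *codec)
{
    avcodec_free_context(avctx);            // *avctx is now NULL
    *avctx = avcodec_alloc_context3(codec); // reallocate, then reconfig
}
```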
As mentioned in [0] the suffix "_locked" would have been the appropriate
naming in line with similar uses inside mpv.
See `mp_abort_recheck_locked()`, `mp_abort_trigger_locked()`,
`retrigger_locked()`, `wakeup_locked()`...
[0] https://github.com/mpv-player/mpv/pull/12811#discussion_r1477518525
Fix DTS passthrough playback of 44.1 kHz content. Also, take into account
that there are some DTS variants with a samplerate of 96 kHz (e.g. DTS
24/96); somehow they were wrongly recognized as 48 kHz by the code. Don't
rely on this "bug", do it correctly. Now every samplerate above 44.1 kHz
is correctly treated as 48 kHz, and 44.1 kHz files are treated as
44.1 kHz for bitstreaming.
Stopping output implies that it can't be paused anymore.
This is consistent with the documented API in internal.h as well
as the behavior of other AOs.
Resolves #13267
In commit c09245cdf2
long-path support was enabled for mpv without actually
making sure that there was no code left that used the
old limit (260 Unicode chars) for buffer sizes.
This commit fixes all but one case.
- Don't define _GNU_SOURCE on Windows, no need
- Define WIN32_LEAN_AND_MEAN to strip some unneeded headers from
windows.h
- Define NOMINMAX and _USE_MATH_DEFINES as they are common for Windows
headers (see the sketch below)
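Purely illustrative, the defines described above gathered in one place
(where mpv actually sets them is not shown here):
```
// Must come before the first windows.h / math.h include.
#ifdef _WIN32
#define WIN32_LEAN_AND_MEAN  // strip rarely-used APIs from windows.h
#define NOMINMAX             // keep windows.h from defining min()/max()
#define _USE_MATH_DEFINES    // expose M_PI and friends in <math.h>
#else
#define _GNU_SOURCE          // not needed on Windows
#endif
```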
We prefer to fail fast rather than degrade in unpredictable ways.
The example in sub/ is particularly egregious because the code just
skips the work it's meant to do when an allocation fails.
I'd like some names to be more descriptive, but to work with the 15-char
limit we have to make some sacrifices.
Also because of the limit, remove the `mpv/` prefix and prioritize the
actual thread name.
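For reference, a minimal sketch of why the limit bites (the thread name
shown is illustrative; on Linux, names are capped at 15 characters plus
the terminating NUL):
```
#define _GNU_SOURCE
#include <pthread.h>

int main(void)
{
    // 15 usable characters: an "mpv/" prefix would waste 4 of them,
    // so the actual thread name gets priority instead.
    pthread_setname_np(pthread_self(), "ao/pulse/reader"); // 15 chars
    return 0;
}
```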
It was found that this causes issues with at least ao_coreaudio,
essentially revealing a way bigger issue:
Some AOs don't check for 0 and/or have no way to deal with short writes.
Someone will have to figure out a fix later but get rid of the direct
cause for now.
This reverts commit ae908a70ce.
ao_read_data() is used by pull AOs potentially from threads managed by
external libraries. These threads can be sensitive to blocking.
For example, the pipewire ao uses a realtime thread for the
callbacks.
Since I was going to fix the include order of stdatomic, might as well
sort the surrounding includes in accordance with the project's coding
style.
Some headers can sometimes require a specific include order. Standard
library headers usually don't, but mpv might "hack into" the standard
headers (e.g. pthreads), so that complicates things a bit more.
Hopefully nothing breaks. If it does, the style guide is to blame.
Replace it with <stdatomic.h> and replace the mp_atomic_* typedefs with
explicit _Atomic qualified types.
Also add missing config.h includes in some files.
Pull AOs work off of a callback that relies on mpv's internal timer. So
like with the related video changes, convert all of these to nanoseconds
instead. In many cases, the underlying audio API does actually provide
nanosecond resolution as well.
There's a lot of wild 1e6, 1000, etc. lying around in the code. A macro
is much easier to read and understand at a glance. Add some helpers for
this. We don't need to convert everything now but there's some simple
things that can be done so they are included in this commit.
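A sketch of the kind of helpers meant here, modeled on the
MP_TIME_S_TO_NS name used earlier in this log (the exact macro set is an
assumption):
```
#include <stdint.h>

// Readable unit conversions instead of bare 1e6 / 1000 literals.
#define MP_TIME_S_TO_NS(s)   ((s)  * INT64_C(1000000000))
#define MP_TIME_MS_TO_NS(ms) ((ms) * INT64_C(1000000))
#define MP_TIME_US_TO_NS(us) ((us) * INT64_C(1000))
```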
Why a bigger search-interval is required:
scaletempo2 doesn't do a good job when the signal contains frequencies
less than 1/search_interval. With a search interval of 30ms that means
anything below 33.333Hz sounds bad.
Depending on the genre, it's very common for music to contain frequencies
down to 30Hz, and sometimes even a little below that. Therefore a higher
default value is needed to handle such cases.
Based on that an argument can be made for a value of 50, as that should
work down to 20Hz, or something even higher because movies sometimes
have some infrasonic content.
However, the downside of big search intervals is increased CPU usage and
reduced intelligibility at higher speeds, as it effectively leads to
parts of the audio being skipped.
A value of 40 can handle frequencies down to 25Hz, enough for all music
except very rare edge cases, while still providing decent
intelligibility.
Why a smaller window-size is required:
Large values reduce intelligibility at high speeds and therefore small
values are preferred.
However when values get too small it starts to sound weird
(similar to librubberband).
In my testing a value of 10 already works well, but adding a small
safety margin seems like a good idea, especially since it made no
noticeable difference to intelligibility, which is why 12 was chosen.
Linux and macOS already use nanosecond resolution for their sleep
functions; it was just being converted from microseconds before. Since
we have mp_time_ns now, go ahead and bump the precision here. The timer
for Windows uses some timeBeginPeriod thing which I'm not sure what it
really does, but whatever, just convert the units to ms like they were
doing before. There's really no reason to keep the mp_sleep_us helper
around. A multiplication by 1000 is trivial, and the underlying OS
clocks have nanosecond precision.
This is the most channels supported in a standard layout; if we request
more, it tends to fall back to stereo instead. Also, the channel mask is
32-bit and can get truncated.
A bit different from the OPT_REPLACED/OPT_REMOVED ones in that the
options still possibly do something but they have a deprecation
message. Most of these are old and have no real usage. The only
potentially controversial ones are the removal of --oaffset and
--ovoffset which were deprecated years ago and seemingly have no real
replacement. There's a cryptic message about --audio-delay, but who
knows. The less encoding mode code we have, the better, so just chuck
it.
Avoid generating too much audio after EOF.
Note: This often has no effect, because less audio is produced than
required.
Usually this comes into effect with the userspeed filter at high speed
(4x) and going back to 1x speed to remove the filter.
After the final input packet, the filter padded with silence to allow
one more iteration. That was not enough to process the final frames.
Continue padding the end of `input_buffer` with silence until the final
frames have been processed.
Implementation: Instead of padding when adding final samples, pad before
running WSOLA iteration. Count number of added silent frames and
remaining input frames for time keeping.
This changes the emitted pts values from the start of the search block
to the center of the search block. Change initial `output_time`
accordingly. Initial `search_block_index` is irrelevant, because it's
overwritten before the first iteration.
Using the `output_time` removes the rounding of `search_block_index`,
which also fixes the <20 microsecond gaps in timestamps between output
packets.
Rationale:
The variance in audio position was in the range `0..search-interval`.
With this change, the range is
`(-search-interval / 2)..(search-interval / 2)`,
which ensures a lower maximum offset.
Target block can be anywhere in the previous search-block, varying by
`search-interval` while the filter is active. This resulted in a constant
audio offset when returning to 1x playback speed.
- Move the search block to the target block to sync up exactly.
- Drop old frames to minimize input_buffer usage.
The internal time update function involved multiple problems:
- Time was updated after the WSOLA iteration. This means speed was
updated one iteration later than it could be.
- The update functions caused spikes of too many or too few samples
advanced, leading to audio glitches on speed changes.
- The inconsistent updates made it very difficult to produce gapless
audio packets.
- The `output_time` update function involved complicated feedback:
`search_block_index` influenced how many frames from `input_buffer`
are retained, which influenced how much `output_time` is changed,
which influenced `search_block_index`.
With these changes:
- Time is updated before WSOLA iterations. Speed changes are effective
instantly.
- There are no spikes in playback speed during speed changes.
- No significant gaps are introduced in output packets.
- The time update function becomes (function calls omitted for brevity):
  `output_time += ola_hop_size * playback_rate`
Functions received a `playback_rate` parameter to check how many samples
are needed before iteration. Internal state is only updated when the
iteration is actually run, so the speed is allowed to change until
enough data is received.
The first WSOLA iteration overlapped audio with whatever was in the
`wsola_output` buffer. This was either silence (if not run before), or
old frames (if switching to 1x and back to a different speed).
Track the state of the output buffer and memcpy the whole window for the
first iteration instead.