The AO provides a way for mpv to directly submit audio to the PipeWire
audio server.
Doing this directly instead of going through the various compatibility
layers provided by PipeWire has the following advantages:
* It reduces the complexity of going through the compatibility layers
* It allows a richer integration between mpv and PipeWire
(for example for metadata)
* Some users report issues with the compatibility layers that do not
occur with the native AO
For now the AO is ordered after all the other relevant AOs, so it will
most probably not be picked up by default.
This is for the following reasons:
* Currently it is not possible to detect if the PipeWire daemon that mpv
connects to is actually driving the system audio.
(https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1835)
* It gives the AO time to stabilize before it is used by everyone.
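As a rough illustration of what "directly" means here, the following is a
condensed playback sketch against the generic pw_stream API (illustrative
only, not mpv's actual ao_pipewire code; the ctx struct and the silence fill
are placeholders):

    #include <stdint.h>
    #include <string.h>
    #include <pipewire/pipewire.h>
    #include <spa/param/audio/format-utils.h>

    struct ctx {
        struct pw_stream *stream;
    };

    /* Called from the PipeWire data thread whenever the stream wants audio. */
    static void on_process(void *userdata)
    {
        struct ctx *ctx = userdata;
        struct pw_buffer *b = pw_stream_dequeue_buffer(ctx->stream);
        if (!b)
            return;
        struct spa_data *d = &b->buffer->datas[0];
        memset(d->data, 0, d->maxsize);          /* real code writes PCM here */
        d->chunk->offset = 0;
        d->chunk->stride = 2 * sizeof(int16_t);  /* s16, stereo */
        d->chunk->size = d->maxsize;
        pw_stream_queue_buffer(ctx->stream, b);
    }

    static const struct pw_stream_events stream_events = {
        PW_VERSION_STREAM_EVENTS,
        .process = on_process,
    };

    int main(int argc, char *argv[])
    {
        static struct ctx ctx;
        pw_init(&argc, &argv);
        struct pw_main_loop *loop = pw_main_loop_new(NULL);
        ctx.stream = pw_stream_new_simple(pw_main_loop_get_loop(loop),
                "playback-sketch",
                pw_properties_new(PW_KEY_MEDIA_TYPE, "Audio",
                                  PW_KEY_MEDIA_CATEGORY, "Playback",
                                  PW_KEY_MEDIA_ROLE, "Movie", NULL),
                &stream_events, &ctx);

        uint8_t pod_buf[1024];
        struct spa_pod_builder pb = SPA_POD_BUILDER_INIT(pod_buf, sizeof(pod_buf));
        const struct spa_pod *params[1] = {
            spa_format_audio_raw_build(&pb, SPA_PARAM_EnumFormat,
                &SPA_AUDIO_INFO_RAW_INIT(.format = SPA_AUDIO_FORMAT_S16,
                                         .channels = 2, .rate = 48000)),
        };
        pw_stream_connect(ctx.stream, PW_DIRECTION_OUTPUT, PW_ID_ANY,
                PW_STREAM_FLAG_AUTOCONNECT | PW_STREAM_FLAG_MAP_BUFFERS |
                PW_STREAM_FLAG_RT_PROCESS, params, 1);

        pw_main_loop_run(loop);
        return 0;
    }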
Based-on-patch-by: Oschowa <oschowa@web.de>
Based-on-patch-by: Andreas Kempf <aakempf@gmail.com>
Helped-by: Ivan <etircopyhdot@gmail.com>
Simply returning out of this function leaks avpkt; we need to always
"goto done".
Rewrite the logic a bit to make it more clear what's going on (IMO).
Fixes #9593
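For illustration, the pattern looks roughly like this (hypothetical reader
function, not the actual mpv code):

    #include <libavformat/avformat.h>

    /* Every early exit funnels through "done", so avpkt can never leak. */
    static int read_one_packet(AVFormatContext *fmt, AVPacket **out)
    {
        int ret = -1;
        AVPacket *avpkt = av_packet_alloc();
        if (!avpkt)
            goto done;

        ret = av_read_frame(fmt, avpkt);
        if (ret < 0)
            goto done;

        *out = avpkt;       /* ownership moves to the caller */
        avpkt = NULL;
        ret = 0;

    done:
        av_packet_free(&avpkt);  /* no-op if ownership was transferred */
        return ret;
    }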
Every instance of m_obj_list is a constant, and for all of them the field
is true. Just remove the field altogether.
Signed-off-by: Emil Velikov <emil.l.velikov@gmail.com>
This fixes a mismatch between configure working and build time
failing with Linux + OSSv4, enabling compilation on Debian based
Linux systems with the oss4-dev package.
Fixes #9378
This brings my scaletempo2 benchmark down from ~22s to ~7s on my machine
(-march=native), and down to ~11s with a generic compile.
Guarded behind an appropriate #ifdef to avoid being ableist against
people who have the clinical need to run obscure platforms.
Closes #8848
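As an illustration of the kind of change involved (a hypothetical inner loop,
not the actual scaletempo2 code), guarding a vectorized path behind a
feature-test macro with a scalar fallback looks roughly like this:

    #ifdef __SSE2__
    #include <emmintrin.h>
    #endif

    /* Dot product of two float buffers; the SSE2 path handles 4 samples per
     * iteration, the scalar tail covers the rest and non-SSE2 builds. */
    static float dot(const float *a, const float *b, int n)
    {
        float sum = 0;
        int i = 0;
    #ifdef __SSE2__
        __m128 acc = _mm_setzero_ps();
        for (; i + 4 <= n; i += 4)
            acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                             _mm_loadu_ps(b + i)));
        float tmp[4];
        _mm_storeu_ps(tmp, acc);
        sum = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    #endif
        for (; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }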
This fixes audio encoding crashing under ASan.
When extended_data != data, FFmpeg copies more pointers from
extended_data (= the number of channels) than there really
are for non-planar formats (= exactly 1), but that's not our fault.
Regardless, this commit makes it work in all common cases.
Changes:
- code refactored;
- mixer options removed;
- new mpv sound API used;
- sound device detection added (mpv --audio-device=help will show all available devices);
- only OSSv4 supported now;
Tested on FreeBSD 12.2 amd64.
This makes the behavior of all control messages consistent,
fixing an inconsistency that has been with us since
4d8266c739 - which is the initial
rework of the polypaudio AO into the pulseaudio AO.
Muting the stream also directly triggers an update to the OSD.
When not waiting for command completion, this read of the mute
property may return the old, stale state.
Note that this somehow was not triggered on native Pulseaudio, but it is
an issue on Pipewire.
See https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/868
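A minimal sketch of the "wait for completion" part (assumed helper name; it
assumes the threaded mainloop lock is already held and is not mpv's actual
code):

    #include <pulse/pulseaudio.h>

    static void wait_for_operation(pa_threaded_mainloop *ml, pa_operation *op)
    {
        if (!op)
            return;
        /* Block until the server has acknowledged the command, so a
         * following property read sees the new state instead of a stale one. */
        while (pa_operation_get_state(op) == PA_OPERATION_RUNNING)
            pa_threaded_mainloop_wait(ml);
        pa_operation_unref(op);
    }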
Set pcm state to SND_PCM_STATE_XRUN in case -EPIPE is received,
and handle this state as per the usual logic.
This way snd_pcm_prepare gets called and the loop continues.
Inspired by a patch posted by malc_ on #mpv.
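In rough outline (a generic recovery sketch, not the actual ao_alsa code),
the handling looks like this:

    #include <errno.h>
    #include <alsa/asoundlib.h>

    /* Write frames; on -EPIPE (underrun/XRUN), recover with snd_pcm_prepare()
     * and retry, so the playback loop keeps going. */
    static snd_pcm_sframes_t write_frames(snd_pcm_t *pcm, const void *buf,
                                          snd_pcm_uframes_t frames)
    {
        for (;;) {
            snd_pcm_sframes_t r = snd_pcm_writei(pcm, buf, frames);
            if (r >= 0)
                return r;
            if (r == -EPIPE || snd_pcm_state(pcm) == SND_PCM_STATE_XRUN) {
                if (snd_pcm_prepare(pcm) < 0)
                    return r;   /* unrecoverable */
                continue;
            }
            return r;
        }
    }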
--audio-stream-silence is a shitty feature compensating for awful
consumer garbage, that mutes PCM at first to check whether it's
compressed audio, using formats advocated and owned by malicious patent
troll companies (who spend more money on their lawyers than paying any
technicians), wrapped in a wasteful way to make it constant bitrate
using a standard whose text is not freely available, and only rude users
want it. This feature has been carelessly broken, because it's
complicated and stupid. What would Jesus do? If not getting an aneurysm,
or pushing over tables with expensive A/V receivers on top of them, he'd
probably fix the feature. So let's take inspiration from Jesus Christ
himself, and do something as dumb as wasting some of our limited
lifetime on this incredibly stupid fucking shit.
This is tricky, because state changes like end-of-audio are supposed to
be driven by the AO driver, while playing silence precludes this. But it
seems code paths for "untimed" AOs can be reused.
But there are still problems. For example, underruns will just happen
normally (and stop audio streaming), because we don't have a separate
heuristic to check whether the buffer is "low enough" (as a consequence
of a network stall, but before the audio output itself underruns).
Create a central function which pumps data through the filter. This also
might fix bogus use of the filter API on flushing. (The filter is just
used for convenience, but I guess the overall result is still simpler.)
AVFrame doesn't have public code for pool allocation, so mpv does it
manually. AVFrame allocation is very tricky, so we added a bug.
This crashed with libopus encoding, but not some other audio codecs,
because the libopus libavcodec wrapper accesses AVFrame.data. Most code
tries to avoid accessing AVFrame.data and uses AVFrame.extended_data,
because using the former would subtly corrupt memory on more than 8
channels. The fact that this problem manifested only now shows that most
AVFrame consuming FFmpeg code indeed uses extended_data for audio.
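To illustrate the data/extended_data relationship (a conceptual sketch, not
mpv's actual pool code): extended_data always has one pointer per plane,
while data only covers the first AV_NUM_DATA_POINTERS (8) planes, and for
packed audio there is a single plane, so extended_data can simply alias data.

    #include <stdint.h>
    #include <stddef.h>
    #include <libavutil/frame.h>
    #include <libavutil/mem.h>
    #include <libavutil/error.h>

    static int setup_audio_planes(AVFrame *f, uint8_t *buf, int nb_planes,
                                  size_t plane_size)
    {
        if (nb_planes <= AV_NUM_DATA_POINTERS) {
            f->extended_data = f->data;
        } else {
            f->extended_data = av_calloc(nb_planes, sizeof(*f->extended_data));
            if (!f->extended_data)
                return AVERROR(ENOMEM);
        }
        for (int i = 0; i < nb_planes; i++)
            f->extended_data[i] = buf + i * plane_size;
        /* keep the first AV_NUM_DATA_POINTERS entries of data in sync */
        for (int i = 0; i < nb_planes && i < AV_NUM_DATA_POINTERS; i++)
            f->data[i] = f->extended_data[i];
        return 0;
    }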
It is now the AO's responsibility to handle period size alignment. The
ao->period_size alignment field is unused as of the recent audio
refactor commit. Remove it.
It turns out that ao_alsa shows extremely inefficient behavior as a
consequence of the removal of period size aligned writes in the
mentioned refactor commit. This is because it could get into a state
where it repeatedly wrote single samples (as small as 1 sample), and
starved the rest of the player as a consequence. Too bad. Explicitly
align the size in ao_alsa. Other AOs, which need this, should do the
same.
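The alignment itself is trivial; conceptually (assumed helper name, not the
actual ao_alsa code):

    /* Round the writable amount down to a whole number of periods, so the AO
     * doesn't dribble out tiny (e.g. 1-sample) writes. */
    static int align_to_period(int free_samples, int period_samples)
    {
        if (period_samples <= 0)
            return free_samples;
        return free_samples - free_samples % period_samples;
    }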
One reason why it broke so badly with ao_alsa was that it retried the
write() even if all reported space could be written. So stop doing that
too. Retry the write only if we somehow wrote less.
I'm not sure about ao_pulse.
Unused, was terrible garbage. It was (or at least its implementation
was) always a make-shift solution, and just gross bullshit. It is unused
now, so delete it.
This replaces the two buffers (ao_chain.ao_buffer in the core, and
buffer_state.buffers in the AO) with a single queue. Instead of having a
byte based buffer, the queue is simply a list of audio frames, as output
by the decoder. This should make dataflow simpler and reduce copying.
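Conceptually, the queue is nothing more than a FIFO of decoded frames (a
simplified illustration, not mpv's actual queue code; struct audio_frame
stands in for the real frame type):

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>

    struct audio_frame;                 /* placeholder for the decoded frame type */

    struct frame_queue {
        struct audio_frame **frames;
        int num, cap;
    };

    static bool queue_push(struct frame_queue *q, struct audio_frame *f)
    {
        if (q->num == q->cap) {
            int cap = q->cap ? q->cap * 2 : 16;
            void *p = realloc(q->frames, cap * sizeof(*q->frames));
            if (!p)
                return false;
            q->frames = p;
            q->cap = cap;
        }
        q->frames[q->num++] = f;
        return true;
    }

    static struct audio_frame *queue_pop(struct frame_queue *q)
    {
        if (!q->num)
            return NULL;
        struct audio_frame *f = q->frames[0];
        q->num--;
        memmove(q->frames, q->frames + 1, q->num * sizeof(*q->frames));
        return f;
    }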
It also attempts to simplify fill_audio_out_buffers(), the function I
always hated most, because it's full of subtle and buggy logic.
Unfortunately, I got assaulted by corner cases, dumb features (attempt
at seamless looping, really?), and other crap, so it got pretty
complicated again. fill_audio_out_buffers() is still full of subtle and
buggy logic. Maybe it got worse. On the other hand, maybe there really
is some progress. Who knows.
Originally, the data flow part was meant to be in f_output_chain, but
due to tricky interactions with the playloop code, it's now in the dummy
filter in audio.c.
At least this improves the way the audio PTS is passed to the encoder in
encoding mode. Now it attempts to pass frames directly, along with the
pts, which should minimize timestamp problems. But to be honest, encoder
mode is one big kludge that shouldn't exist in this way.
This commit should be considered pre-alpha code. There are lots of bugs
still hiding.
Allow mp_aframe_clip_timestamps() to discard a spdif frame if it's
entirely out of the timestamp range. Just a minor thing that might make
handling these dumb formats easier.
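In other words (a conceptual sketch, not the real mp_aframe_clip_timestamps()
signature): a frame that cannot be cut, such as a spdif frame, is either kept
whole or dropped entirely.

    #include <stdbool.h>

    /* Keep the frame only if it overlaps [start, end) at all; otherwise the
     * caller discards it instead of trying to cut it. */
    static bool keep_uncuttable_frame(double pts, double duration,
                                      double start, double end)
    {
        return pts < end && pts + duration > start;
    }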
Previously, get_state() would keep setting the cork status
while paused, but now it only does that after underflows.
Correct this oversight by creating the stream corked, so that start()
can uncork it at a later time.
fixes #8026
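Roughly, the idea is (assumed surrounding code, not mpv's actual ao_pulse
implementation):

    #include <pulse/pulseaudio.h>

    /* Connect the stream corked... */
    static int connect_corked(pa_stream *s, const pa_buffer_attr *battr)
    {
        return pa_stream_connect_playback(s, NULL /* default sink */, battr,
                PA_STREAM_START_CORKED, NULL, NULL);
    }

    /* ...and only uncork it when playback actually starts. */
    static void start_playback(pa_stream *s)
    {
        pa_operation *op = pa_stream_cork(s, 0 /* uncork */, NULL, NULL);
        if (op)
            pa_operation_unref(op);
    }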
FFmpeg expects those fields to be set on the AVFrame when
encoding audio; not doing so causes the avcodec_send_frame
call to return EINVAL (at least in recent builds).
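The commit doesn't spell out which fields, so the following is only an
assumption about the usual suspects (format, sample rate, channel layout,
nb_samples, using the current ch_layout API); a minimal sketch, not mpv's
ao_lavc code:

    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/frame.h>

    static int send_audio(AVCodecContext *avctx, AVFrame *frame,
                          int nb_samples, int64_t pts)
    {
        frame->format      = avctx->sample_fmt;
        frame->sample_rate = avctx->sample_rate;
        frame->nb_samples  = nb_samples;
        frame->pts         = pts;
        int ret = av_channel_layout_copy(&frame->ch_layout, &avctx->ch_layout);
        if (ret < 0)
            return ret;
        ret = av_frame_get_buffer(frame, 0);
        if (ret < 0)
            return ret;
        /* ...fill frame->extended_data[] with nb_samples of audio here... */
        return avcodec_send_frame(avctx, frame);
    }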
scaletempo2 is a new audio filter for playing back
audio at modified speed and is based on chromium
commit 51ed77e3f37a9a9b80d6d0a8259e84a8ca635259.
It sounds subjectively better than the existing
implementations scaletempo and rubberband.
When get_state() corks the stream after an underrun happens
priv->playing is incorrectly reset to true, which can cause the
player to miss the underrun entirely. Stop resetting priv->playing
during corking (but not uncorking) to fix this.
The underflow callback introduced in d27ad96 can be called
when the buffer is still full, causing playback to never
resume afterwards since get_state() reports free_samples == 0.
Fix this by fully resetting on underrun, which flushes
the stream and ensures free buffer space.
fixes #7874
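A sketch of the underflow handling (assumed callback shape, not mpv's actual
code, which does a fuller reset):

    #include <pulse/pulseaudio.h>

    /* Registered with pa_stream_set_underflow_callback(); on underrun, flush
     * the stream so the server-side buffer reports free space again and the
     * write path can resume. */
    static void underflow_cb(pa_stream *s, void *userdata)
    {
        pa_threaded_mainloop *ml = userdata;
        pa_operation *op = pa_stream_flush(s, NULL, NULL);
        if (op)
            pa_operation_unref(op);
        pa_threaded_mainloop_signal(ml, 0);
    }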