While conceptually this sink stuff in PulseAudio does just the right
thing, actually listing the sinks is unbelievably complicated. Not only
is the idea that listing them should happen asynchronously complete
bullshit (who the fuck runs the PulseAudio server on a separate
computer?), but the way this is done is full of bullshit too. Why
separate callbacks for each device? Why this obtuse mainloop shit? The
mainloop shit especially makes it actively worse than doing things
manually with pthread primitives, and the reason for it (different
mainloop implementations for GUIs?) is laughable too. It's like they
chose the most complicated API possible just because they attempted
to "abstract" basic mechanisms in order to handle "everything". While
I don't claim to design the best APIs, this API is fucking terrible,
without any excuse. (End of rant.)
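For illustration, here is roughly what the minimal "list all sinks"
boilerplate looks like with the threaded mainloop (a standalone sketch,
not mpv's code; error handling and the PA_CONTEXT_FAILED case are
glossed over):

    #include <pulse/pulseaudio.h>
    #include <stdio.h>

    // Context state changes just wake up the thread waiting below.
    static void state_cb(pa_context *c, void *userdata)
    {
        pa_threaded_mainloop_signal(userdata, 0);
    }

    // Called once per sink, then a final time with eol set.
    static void sink_cb(pa_context *c, const pa_sink_info *i, int eol,
                        void *userdata)
    {
        if (!eol)
            printf("%s: %s\n", i->name, i->description);
        else
            pa_threaded_mainloop_signal(userdata, 0);
    }

    int main(void)
    {
        pa_threaded_mainloop *ml = pa_threaded_mainloop_new();
        pa_context *ctx =
            pa_context_new(pa_threaded_mainloop_get_api(ml), "list-sinks");
        pa_context_set_state_callback(ctx, state_cb, ml);

        pa_threaded_mainloop_start(ml);
        pa_threaded_mainloop_lock(ml);
        pa_context_connect(ctx, NULL, 0, NULL);

        // A real program must also bail out on PA_CONTEXT_FAILED here,
        // or this waits forever when no server is running.
        while (pa_context_get_state(ctx) != PA_CONTEXT_READY)
            pa_threaded_mainloop_wait(ml);

        pa_operation *op = pa_context_get_sink_info_list(ctx, sink_cb, ml);
        while (pa_operation_get_state(op) == PA_OPERATION_RUNNING)
            pa_threaded_mainloop_wait(ml);
        pa_operation_unref(op);

        pa_context_disconnect(ctx);
        pa_context_unref(ctx);
        pa_threaded_mainloop_unlock(ml);
        pa_threaded_mainloop_stop(ml);
        pa_threaded_mainloop_free(ml);
        return 0;
    }

Two callbacks and two wait/wake round-trips just to get a list of
strings; a synchronous call returning an array would have done the same
job.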
All the dumb crap in pa_init_boilerplate() is needed to talk to the
audio server at all. Might also fix some subtle bugs in the init code
(which is strange, because the original file was contributed by the
devil himself).
The one in msg.c was mistakenly removed with commit e99a37f6.
I didn't actually test the change in ao_sndio.c (but obviously "ap"
shouldn't be static).
Don't wait after the audio thread has pushed the remaining audio to the
AO. Avoids hard hangs if the heuristic fails completely (could still
happen if get_delay returns absurd values).
CC: @mpv-player/stable
Since the internal AO driver API has no proper way to determine EOF, we
need to guess by querying get_delay. But some AOs (e.g. ao_pulse with
no-latency-hacks set) may never reach 0, maybe because they naively add
the latency to the buffer level. In this case our heuristic can break.
Fix by always using the delay to estimate the EOF time. It's not even
that important - it's mostly used to keep draining from blocking. So
this should be ok.
CC: @mpv-player/stable (maybe)
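A minimal sketch of the estimate described above (illustrative names,
not mpv's actual code): take the reported delay once after the last
samples have been written, turn it into a wall-clock deadline, and stop
waiting once that deadline has passed, even if get_delay() never drops
to 0.

    #include <stdbool.h>

    struct ao_like {
        // seconds of audio the AO claims are still buffered
        double (*get_delay)(struct ao_like *ao);
    };

    // Called once after the last samples have been written to the AO.
    static double estimate_eof_time(struct ao_like *ao, double now)
    {
        return now + ao->get_delay(ao) + 0.25;  // small safety margin
    }

    // Polled while draining, instead of waiting for get_delay() to
    // reach 0.
    static bool audio_eof_reached(double now, double eof_time)
    {
        return now >= eof_time;
    }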
Unfortunately, ALSA is particularly bad with this, because mpv has to
add all sorts of magic crap to the device name to make things work. The
device selection overrides this, so explicitly selecting devices will
most likely break your audio. This has yet to be solved.
This function is available starting with PulseAudio 2.0, while we only
require 1.0. This broke compilation on Ubuntu 12.04.5 LTS.
Use our own function to calculate the buffer size, which is actually
simpler and needs slightly less code.
Hopefully fixes #1154.
CC: @mpv-player/stable
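One way to do this by hand (a sketch with made-up names; neither the
dropped PulseAudio 2.0 helper nor mpv's actual replacement is shown
here): the byte size of a buffer of a given duration follows directly
from the sample spec.

    #include <pulse/sample.h>
    #include <stddef.h>
    #include <stdint.h>

    // Bytes needed to hold `usec` microseconds of audio described by `ss`.
    static size_t buffer_bytes_for_usec(const pa_sample_spec *ss,
                                        uint64_t usec)
    {
        // bytes per frame = bytes per sample * number of channels
        size_t frame_size = pa_sample_size(ss) * ss->channels;
        return frame_size * (size_t)(ss->rate * usec / 1000000);
    }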
It was more complicated than it had to be: the audio thread already
determines whether audio has ended, so we can use that. Remove the
separate logic for draining.
Commit 957097 attempted to use PA_STREAM_FAIL_ON_SUSPEND to make
ao_pulse exit if the stream was started suspended.
Unfortunately, PA_STREAM_FAIL_ON_SUSPEND is active even during playback.
If you pause mpv, pulseaudio will close the actual audio device after a
while (or something like this), and unpausing won't work. Instead, it
will spam "Entity killed" error messages.
Undo this change and check for suspended audio manually during init.
CC: @mpv-player/stable
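The manual check amounts to something like this (a sketch, not the
actual init code; it assumes it is called with the mainloop lock held
once the stream has been created):

    #include <pulse/pulseaudio.h>
    #include <stdbool.h>

    // True if the freshly connected stream is actually usable: it must
    // be ready, and the sink it is connected to must not be suspended.
    static bool stream_usable(pa_stream *s)
    {
        if (pa_stream_get_state(s) != PA_STREAM_READY)
            return false;
        // pa_stream_is_suspended() returns 1 if the connected sink is
        // currently suspended.
        return pa_stream_is_suspended(s) != 1;
    }

If this returns false during init, ao_pulse can fail cleanly and let
mpv fall back to another AO, instead of appearing to hang.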
Sometimes, ao_pulse starts in suspended mode, which means playback is
essentially paused in pulseaudio. This gives the impression that mpv is
hanging, since it times video against the audio playback progress, and
audio never makes progress in this state.
I'm not sure if this will help - possibly it does with mixed
pulseaudio/alsa setups. However, if the alsa setup has the pulseaudio
plugin, alsa will hang too. But there's still a chance we get less
blame for pulseaudio messes.
This gets rid of this warning:
Could not update timestamps for skipped samples.
This required an API addition to FFmpeg (otherwise it would instead do
the arithmetic on the timestamps itself), so whether it works depends
on the FFmpeg version.
Although the "af" command already could do this, it seems it's better
to introduce a lower level mechanism for now. This avoids some messy
issues, since that code would recursive call reinit_audio_chain().
To be used by the next commit.
There's no real reason why audio_init_filter() should exist. Just use
af_init or af_reinit directly. (We lose a useless message; the same
information is printed nearby with more details.)
Requires less code. Also, since the filter chain is now marked as
having failed to initialize, we can just switch off audio instead of
crashing if inserting a volume filter in mixer.c fails and recreating
the old filter chain fails too.
libsndio has absolutely no mechanism to discard already written audio
(other than SIGKILLing the sound server). sio_stop() will always block
until all audio is played. This is a legitimate design bug.
In theory, we could just not stop it at all, so if the player is e.g.
paused, the remaining audio would be played. When resuming, we would
have to do something to ensure get_delay() returns the right value. But
I couldn't get it to work in all cases.
get_delay needs to report the current audio buffer status. It's
important for A/V sync that this information is up to date, but the
functions which update it were called only from play() and get_space().
This was in bytes, but it's more convenient to use samples (or frames;
in any case the smallest unit of audio that includes all channels).
Remove the ao->bps line too; it will be set after init() returns.
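The conversion itself is trivial (illustrative sketch, not the actual
code): a frame is one sample per channel, so going from bytes to frames
just divides by the frame size.

    #include <stdint.h>

    // Convert a byte count to a frame count, where one frame holds one
    // sample for every channel.
    static int64_t bytes_to_frames(int64_t bytes, int channels,
                                   int bytes_per_sample)
    {
        return bytes / ((int64_t)channels * bytes_per_sample);
    }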
Let codec_tags.c do the messy mapping.
In theory we could simplify further by making demux_mkv.c directly use
codec names instead of the MPlayer-inherited "internal FourCC" business,
but I'd rather not touch this - it would just break things.
For a while, we used this to transfer PCM from demuxer to the filter
chain. We had a special "codec" that mapped what MPlayer used to do
(MPlayer passes the AF sample format over an extra field to ad_pcm,
which specially interprets it).
Do this by providing a mp_set_pcm_codec() function, which describes a
sample format in a generic way, and sets the appropriate demuxer header
fields so that libavcodec interprets it correctly. We use the fact that
libavcodec has separate PCM decoders for each format. These are
systematically named, so we can easily map them.
This has the advantage that we can change the audio filter chain as we
like, without losing features from the "rawaudio" demuxer. In fact, this
commit also gets rid of the audio filter chain formats completely.
Instead have an explicit list of PCM formats. (We could even just have
the user pass libavcodec PCM decoder names directly, but that would be
annoying in other ways.)
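To illustrate the "systematically named" part (a sketch only, not
mp_set_pcm_codec() itself): libavcodec's PCM decoders follow a
pcm_<s|u|f><bits><le|be> pattern, e.g. pcm_u8, pcm_s16le, pcm_f32be, so
a decoder name can be assembled from a generic description of the
sample format.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    // Build a libavcodec PCM decoder name from a generic description.
    static void pcm_codec_name(char *buf, size_t size, int bits,
                               bool is_float, bool is_signed, bool is_be)
    {
        if (bits == 8) {
            // 8-bit PCM has no endianness suffix (pcm_u8 / pcm_s8)
            snprintf(buf, size, "pcm_%c8", is_signed ? 's' : 'u');
        } else {
            snprintf(buf, size, "pcm_%c%d%s",
                     is_float ? 'f' : (is_signed ? 's' : 'u'),
                     bits, is_be ? "be" : "le");
        }
    }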
Digital pass-through was probably broken. Possibly fix it (no way to
test). This also should make the logic slightly saner.
Fortunately, it's unlikely that anyone who uses OSS has a spdif setup.
Commit 5b5a3d0c broke this. The really funny thing is that this code
was actually always under "#if BYTE_ORDER == BIG_ENDIAN". The breaking
commit just edited this code slightly, but it must have failed to
compile on big endian long before that (since commit d3fb58, over a
year ago).
Should be able to pass-through AC3, DTS, and others.
It seems PulseAudio wants players to fallback to PCM on certain events
signaled by the server, but we don't implement that. There's not much
documentation available anyway.
Before this commit, there was AF_FORMAT_AC3 (the original spdif format,
used for AC3 and DTS core), and AF_FORMAT_IEC61937 (used for AC3, DTS
and DTS-HD), which was handled as some sort of superset of
AF_FORMAT_AC3. There also was AF_FORMAT_MPEG2, which used IEC61937
framing, but still was handled as something "separate".
Technically, all of them are pretty similar, but may use different
bitrates. Since digital passthrough pretends to be PCM (just with
special headers that wrap digital packets), this is easily detectable by
the higher samplerate or higher number of channels, so I don't know why
you'd need a separate "class" of sample formats (AF_FORMAT_AC3 vs.
AF_FORMAT_IEC61937) to distinguish them. Actually, this whole thing is
just a mess.
Simplify this by handling all these formats the same way.
AF_FORMAT_IS_IEC61937() now returns 1 for all spdif formats (even MP3).
All AOs just accept all spdif formats now - whether that works or not is
not really clear (seems inconsistent due to earlier attempts to make
DTS-HD work). But on the other hand, enabling spdif requires manual user
interaction, so it doesn't matter much if initialization fails in
slightly less graceful ways if it can't work at all.
At a later point, we will support passthrough with ao_pulse. It seems
the PulseAudio API wants to know the codec type (or maybe not - feeding
it DTS while telling it it's AC3 works), so add separate formats for
each codec. While this is reminiscent of the earlier chaos, it's
stricter, and most code just uses AF_FORMAT_IS_IEC61937().
Also, modify AF_FORMAT_TYPE_MASK (renamed from AF_FORMAT_POINT_MASK) to
include special formats, so that it always describes the fundamental
sample format type. This also ensures valid AF formats are never 0 (this
was probably broken in one of the earlier commits from today).
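Conceptually, the classification works like this (the constants below
are illustrative, not mpv's actual bit layout): the type bits of a
format value say whether it is integer PCM, float PCM, or a "special"
IEC 61937 bitstream, every per-codec spdif format carries the special
type, and so AF_FORMAT_IS_IEC61937() is a single mask-and-compare.

    // Hypothetical layout for illustration only.
    #define AF_TYPE_INT          1
    #define AF_TYPE_FLOAT        2
    #define AF_TYPE_SPECIAL      3   // IEC 61937 wrapped bitstream
    #define AF_FORMAT_TYPE_MASK  0x3

    // All type values are nonzero, so a format with its type bits set
    // can never be 0.
    #define AF_FORMAT_IS_IEC61937(f) \
        (((f) & AF_FORMAT_TYPE_MASK) == AF_TYPE_SPECIAL)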