The current order means that pulse defaults to S16 for arbitrary
unsupported formats. Falling back to float would make more sense: it is
the easiest format to convert everything to without requiring dithering,
and PA will probably just convert things to float internally anyway.
Also move S32 above S16, which essentially means format_maps is sorted
by preference. (Although ao_pulse currently ignores this ordering and
always picks the first entry as the fallback.)
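A rough sketch of what such a preference-ordered table could look like
(the mpv-side format IDs below are made-up stand-ins, not the real
entries in ao_pulse.c):

    #include <pulse/pulseaudio.h>

    // Illustrative stand-ins for mpv's internal sample format IDs
    // (hypothetical values; the real ones live in audio/format.h).
    enum { MP_FMT_U8 = 1, MP_FMT_S16, MP_FMT_S32, MP_FMT_FLOAT };

    // Preference-ordered mapping: float first (cheapest universal fallback,
    // no dithering needed), then S32 above S16.
    static const struct format_map {
        int mp_format;                 // mpv-side sample format
        pa_sample_format_t pa_format;  // PulseAudio equivalent
    } format_maps[] = {
        {MP_FMT_FLOAT, PA_SAMPLE_FLOAT32NE},
        {MP_FMT_S32,   PA_SAMPLE_S32NE},
        {MP_FMT_S16,   PA_SAMPLE_S16NE},
        {MP_FMT_U8,    PA_SAMPLE_U8},
        {0, PA_SAMPLE_INVALID}         // terminator; fallback = first entry
    };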
All authors have agreed.
One exception is 71247a97b3, whose author was not asked, but we deem
the change trivial. (And technically it was replaced when the audio
chain dropped non-native endian sample formats.)
Long planned. Leads to some sanity.
There are still some rather gross things. In particular, g_groups is
ugly, and a hack that can hopefully be removed. (There is a plan for it, but
whether it's implemented depends on how much energy is left.)
Normally, PulseAudio accepts any combination of sample format, sample
rate, and channel count/map. Sometimes it does not. For example, the
sample rate and channel count have fixed maximum values. We should not
fail fatally in such cases, but attempt to fall back to a working format.
We could just pass an "unset" format to Pulse, but this is not too
attractive: Pulse could pick a format which we do not support, and
handling that is too much effort for an obscure corner case. So just
pick a format that is very likely supported.
This still could fail at runtime (the stream could fail instead of going
to the ready state), but handling that sounds too complicated. In
particular, it doesn't look like pulse will tell us the cause of the
stream failure. (Or maybe it does - but I didn't find anything.)
Last but not least, our fallback could be less dumb, and e.g. try to fix
only one of samplerate or channel count first to reduce the loss, but
this is also not particularly worth the effort.
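For illustration, a minimal sketch of the kind of clamping meant here,
assuming 48000 Hz / stereo as the "very likely supported" fallback
(PA_RATE_MAX and PA_CHANNELS_MAX come from the PulseAudio headers):

    #include <pulse/pulseaudio.h>

    // Clamp a requested spec to something PulseAudio is very likely to
    // accept. The concrete fallback values (48000 Hz, stereo) are
    // assumptions, not necessarily what ao_pulse uses.
    static void sanitize_sample_spec(pa_sample_spec *ss, pa_channel_map *map)
    {
        // PulseAudio rejects rates above PA_RATE_MAX and channel counts
        // above PA_CHANNELS_MAX; fall back instead of failing fatally.
        if (ss->rate < 1 || ss->rate > PA_RATE_MAX)
            ss->rate = 48000;
        if (ss->channels < 1 || ss->channels > PA_CHANNELS_MAX) {
            ss->channels = 2;
            pa_channel_map_init_stereo(map);
        }
    }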
Fixes #2654.
Replace all the check macros with function calls. Give them all the
same case and naming scheme.
Drop af_fmt2bits(). Only af_fmt2bps() survives as af_fmt_to_bytes().
Introduce af_fmt_is_pcm(), and use it in situations that used
!AF_FORMAT_IS_SPECIAL. Nobody really knew what a "special" format
was. It simply meant "not PCM".
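A hypothetical sketch of the new naming, with made-up call sites (the
real definitions are in audio/format.h and may differ):

    #include <stdbool.h>

    // Assumed shape of the renamed helpers, not the actual declarations.
    bool af_fmt_is_pcm(int format);    // replaces !AF_FORMAT_IS_SPECIAL(fmt)
    int  af_fmt_to_bytes(int format);  // was af_fmt2bps(); af_fmt2bits() gone

    static int frame_bytes(int format, int channels)
    {
        // Callers that only ever needed a byte size now call
        // af_fmt_to_bytes() directly instead of dividing af_fmt2bits() by 8.
        return af_fmt_to_bytes(format) * channels;
    }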
This requires jumping through multiple flaming hoops. Since the
PulseAudio API is virtually undocumented, I'm not sure if this is
correct either. We only react to sink events, and only to the NEW/REMOVE
events. CHANGE events are ignored, because PulseAudio fires them far too
often - even if the system is completely idle! If pa_sink_info.name can
change, we're in trouble. pa_sink_info.description is not so important,
but it'd also be a bit unfortunate if it can change and we don't update it.
The weird way the actual AO and the hotplug context share the same
struct (ao) comes in handy here, although context_success_cb() still had
to be duplicated from success_cb() - the unused argument has a different
type.
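Roughly, the subscription part looks like this sketch;
mp_ao_hotplug_event() is a made-up stand-in for the actual notification
path:

    #include <pulse/pulseaudio.h>

    void mp_ao_hotplug_event(void *ctx);   // hypothetical notification hook

    // React only to sink NEW/REMOVE events; CHANGE is ignored because
    // PulseAudio fires it constantly even on an idle system.
    static void subscribe_cb(pa_context *c, pa_subscription_event_type_t t,
                             uint32_t idx, void *userdata)
    {
        if ((t & PA_SUBSCRIPTION_EVENT_FACILITY_MASK) !=
            PA_SUBSCRIPTION_EVENT_SINK)
            return;
        unsigned ev = t & PA_SUBSCRIPTION_EVENT_TYPE_MASK;
        if (ev == PA_SUBSCRIPTION_EVENT_NEW ||
            ev == PA_SUBSCRIPTION_EVENT_REMOVE)
            mp_ao_hotplug_event(userdata);
    }

    static void enable_sink_events(pa_context *ctx, void *userdata)
    {
        pa_context_set_subscribe_callback(ctx, subscribe_cb, userdata);
        pa_operation *op =
            pa_context_subscribe(ctx, PA_SUBSCRIPTION_MASK_SINK, NULL, NULL);
        if (op)
            pa_operation_unref(op);
    }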
This used to be required to work around PulseAudio bugs. Even later, when
the bugs were (partially?) fixed in PulseAudio, I had the feeling the
hacks gave better behavior. On the other hand, I couldn't actually
reproduce any bad behavior without the hacks lately. On top of this, it
seems our hacks sometimes perform much worse than PulseAudio's native
implementation (see #1430).
So disable the hacks by default, but still leave the code and the option
in case it still helps somewhere. Also, being able to blame PulseAudio's
code by using its native API is much easier than trying to debug our own
(mplayer2-derived) hacks.
The main need I see for this is with libmpv - it would be confusing if
some application showed up as "mpv" on whateverthehell PulseAudio uses
it for (generally it does show up on various PA GUI tools).
While conceptually this sink stuff in PulseAudio does just the right
thing, actually listing the sinks is unbelievable complicated. Not only
is the idea that listing them should happen asynchronously completely
bullshit (who the fuck runs the PulseAudio server on a separate
computer), but the way this is done is full of bullshit too. Why
separate callbacks for each device? Why this obtuse mainloop shit?
Especially the mainloop shit makes it actively worse than doing things
manually with pthread primitives, and the reason for that (different
mainloop implementations for GUIs?) is laughable too. It's like they
chose the most complicated API possible just because they attempted
to "abstract" basic mechanisms in order to handle "everything". While
I don't claim to design the best APIs, this API is fucking terrible
without any excuse. (End of rant.)
All the dumb crap in pa_init_boilerplate() is needed to talk to the
audio server at all. Might also fix some subtle bugs in the init code
(which is strange, because the original file was contributed by the
devil himself).
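For reference, a stripped-down sketch of the async listing dance with the
plain pa_mainloop (assumes an already connected context in the ready
state; error handling omitted):

    #include <stdio.h>
    #include <pulse/pulseaudio.h>

    // Each sink arrives via a separate callback invocation; eol != 0 marks
    // the end of the list (or an error).
    static void sink_cb(pa_context *c, const pa_sink_info *i, int eol,
                        void *userdata)
    {
        int *done = userdata;
        if (eol) {
            *done = 1;
            return;
        }
        printf("sink: %s (%s)\n", i->name, i->description);
    }

    static void list_sinks(pa_context *ctx, pa_mainloop *ml)
    {
        int done = 0;
        pa_operation *op = pa_context_get_sink_info_list(ctx, sink_cb, &done);
        // Keep iterating the mainloop until the eol callback tells us the
        // listing is finished.
        while (!done)
            pa_mainloop_iterate(ml, 1, NULL);
        pa_operation_unref(op);
    }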
This function is available starting with PulseAudio 2.0, while we only
require 1.0. This broke compilation on Ubuntu 12.04.5 LTS.
Use our own function to calculate the buffer size, which is actually
simpler and needs slightly less code.
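Something along these lines - the exact replacement isn't spelled out
above, so this is only an assumption about its rough shape:

    #include <pulse/pulseaudio.h>

    // Compute the device buffer size ourselves from the negotiated buffer
    // attributes instead of relying on a PulseAudio 2.0-only helper.
    static int buffer_size_in_frames(pa_stream *s)
    {
        const pa_buffer_attr *attr = pa_stream_get_buffer_attr(s);
        const pa_sample_spec *ss = pa_stream_get_sample_spec(s);
        if (!attr || !ss)
            return 0;
        return attr->tlength / pa_frame_size(ss);
    }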
Hopefully fixes#1154.
CC: @mpv-player/stable
Commit 957097 attempted to use PA_STREAM_FAIL_ON_SUSPEND to make
ao_pulse exit if the stream was started suspended.
Unfortunately, PA_STREAM_FAIL_ON_SUSPEND is active even during playback.
If you pause mpv, pulseaudio will close the actual audio device after a
while (or something like this), and unpausing won't work. Instead, it
will spam "Entity killed" error messages.
Undo this change and check for suspended audio manually during init.
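A minimal sketch of such a manual check, done once the stream has reached
the ready state (not necessarily what the actual init code does):

    #include <pulse/pulseaudio.h>

    // Refuse to continue if the stream starts out suspended, instead of
    // relying on PA_STREAM_FAIL_ON_SUSPEND (which also kills the stream
    // later, when a pause lets PulseAudio suspend the sink).
    static int stream_usable(pa_stream *s)
    {
        int suspended = pa_stream_is_suspended(s);  // 1 = suspended, <0 = error
        return suspended == 0;
    }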
CC: @mpv-player/stable
Sometimes, ao_pulse starts in suspended mode, which means playback is
essentially paused in pulseaudio. This gives the impression that mpv is
hanging, since it times video against the audio playback progress, and
audio never makes progress in this state.
I'm not sure if this will help - possibly it does with mixed
pulseaudio/alsa setups. However, if the alsa setup has the pulseaudio
plugin, alsa will hang too. But there's still a chance we get less
blame for pulseaudio messes.
Should be able to pass through AC3, DTS, and others.
It seems PulseAudio wants players to fall back to PCM on certain events
signaled by the server, but we don't implement that. There's not much
documentation available anyway.
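As a rough illustration (not the actual ao_pulse code), setting up an
IEC61937 stream with the extended API looks about like this:

    #include <pulse/pulseaudio.h>

    // Create a passthrough stream for AC3 or DTS. Real code also has to
    // negotiate rate/channels and fall back to PCM if the server or sink
    // can't take it.
    static pa_stream *new_spdif_stream(pa_context *ctx, int ac3, int rate)
    {
        pa_format_info *fi = pa_format_info_new();
        fi->encoding = ac3 ? PA_ENCODING_AC3_IEC61937
                           : PA_ENCODING_DTS_IEC61937;
        pa_format_info_set_rate(fi, rate);
        pa_format_info_set_channels(fi, 2);
        pa_stream *s = pa_stream_new_extended(ctx, "audio stream", &fi, 1, NULL);
        pa_format_info_free(fi);
        return s;
    }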
Until now, the audio chain could handle both little endian and big
endian formats. This actually doesn't make much sense, since the audio
API and the HW will most likely prefer native formats. Or at the very
least, it should be trivial for audio drivers to do the byte swapping
themselves.
From now on, the audio chain contains native-endian formats only. All
AOs and some filters are adjusted. af_convertsignendian.c is now wrongly
named, but the filter name is adjusted. In some cases, the audio
infrastructure was reused on the demuxer side, but that is relatively
easy to rectify.
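As a trivial example of what such driver-side swapping amounts to (purely
illustrative, not code from any AO):

    #include <stddef.h>
    #include <stdint.h>

    // The kind of byte swapping an audio driver now does itself if its
    // hardware insists on non-native endianness (16-bit case).
    static void swap16(uint16_t *samples, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            samples[i] = (uint16_t)((samples[i] >> 8) | (samples[i] << 8));
    }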
This is quite an intrusive and radical change. It's possible that it will
break some things (especially on obscure or non-Linux platforms), so watch
out for regressions. It's probably still better to do it the bulldozer
way, since a slow transition and researching foreign platforms would take
a lot of time and effort.
Remove the unnecessary indirection through ao fields.
Also fix the inverted result of AOCONTROL_HAS_TEMP_VOLUME. Hopefully the
change is equivalent. But actually, it looks like the old code did it
wrong.
Add an option that enables using native PulseAudio auto-updated timing
information, instead of the manual calculations added in mplayer2 times.
You can use --ao=pulse:no-latency-hacks to enable the new code. The code
is almost the same as the code that was removed with commit de435ed5,
but I didn't readd some bits I didn't understand. Likewise, the option
will disable the code added with that commit.
In my tests this seemed to work well, though the A/V sync display looks
funny when seeking.
The default is still the old behavior.
See issue #959.
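Conceptually, the new path boils down to something like this sketch
(assuming the stream was created with PA_STREAM_INTERPOLATE_TIMING and
PA_STREAM_AUTO_TIMING_UPDATE set; not the actual ao_pulse code):

    #include <pulse/pulseaudio.h>

    // Let PulseAudio interpolate and auto-update timing, then just ask it
    // for the current latency instead of doing manual calculations.
    static double get_delay_native(pa_stream *s)
    {
        pa_usec_t latency = 0;
        int negative = 0;
        if (pa_stream_get_latency(s, &latency, &negative) < 0 || negative)
            return 0;
        return latency / 1e6;  // seconds
    }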
This was needed by very old (0.9) versions only. Get rid of it.
Unfortunately, I can't cross-check with the original bug report, since
the bug URL leads to this:
    Internal Server Error
    TracError: IOError: [Errno 2] No such file or directory: '/home/lennart/svn/trac/pulseaudio/VERSION'
Until now, we've always calculated a timeout based on a heuristic when
to refill the audio buffers. Allow AOs to do it completely event-based
by providing wait and wakeup callbacks.
This also shuffles around the heuristic used for other AOs, and there is
a minor possibility that behavior slightly changes in real-world cases.
But in general it should be much more robust now.
ao_pulse.c now makes use of event-based waiting. It already did before,
but the code for time-based waiting was also involved. This commit also
removes one awkward artifact of the PulseAudio API out of the generic
code: the callback asking for more data can be reentrant, and thus
requires a separate lock for waiting (or a recursive mutex).
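A simplified sketch of such a wait/wakeup pair; the names and struct are
hypothetical, not the actual internal API:

    #include <pthread.h>
    #include <stdbool.h>

    // The PulseAudio "more data" callback can fire reentrantly from within
    // PA calls made under the AO lock, so the wakeup state gets its own
    // small lock instead of reusing the AO mutex.
    struct ao_wait {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        bool need_data;
    };

    static void ao_wakeup(struct ao_wait *w)   // called from the PA callback
    {
        pthread_mutex_lock(&w->lock);
        w->need_data = true;
        pthread_cond_signal(&w->wakeup);
        pthread_mutex_unlock(&w->lock);
    }

    static void ao_wait_for_wakeup(struct ao_wait *w)  // audio thread side
    {
        pthread_mutex_lock(&w->lock);
        while (!w->need_data)
            pthread_cond_wait(&w->wakeup, &w->lock);
        w->need_data = false;
        pthread_mutex_unlock(&w->lock);
    }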
I'm not quite sure why ao_pulse needs this. It was broken when a thread
to fill audio buffers was added to AO - the pulseaudio callback was
waking up the playback thread, not the audio thread. But nobody noticed,
so it can't be very important. In any case, this change makes it wake up
the audio thread instead (which in turn wakes up the playback thread if
needed).
Until now, this was always conflated with uninit. This was ugly, and
also many AOs emulated this manually (or just ignored it). Make draining
an explicit operation, so AOs which support it can provide it, and for
all others generic code will emulate it.
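Roughly, the generic emulation amounts to something like this (the struct
layout and helper names are assumptions, not the real internal AO API):

    #include <stdint.h>

    // Hypothetical, heavily simplified slice of the internal AO API.
    struct ao;
    struct ao_driver {
        void (*drain)(struct ao *ao);        // optional: native draining
        double (*get_delay)(struct ao *ao);  // seconds of buffered audio
    };
    struct ao {
        const struct ao_driver *driver;
    };

    void mp_sleep_us(int64_t us);  // hypothetical sleep helper

    // If the driver provides no drain, just sleep for however much audio
    // is still buffered.
    static void ao_emulate_drain(struct ao *ao)
    {
        if (ao->driver->drain) {
            ao->driver->drain(ao);
            return;
        }
        double delay = ao->driver->get_delay(ao);
        if (delay > 0)
            mp_sleep_us((int64_t)(delay * 1e6));
    }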
For ao_wasapi, we keep it simple and basically disable the internal
draining implementation (maybe it should be restored later).
Tested on Linux only.
We want to move the AO to its own thread. There's no technical reason
for making the ao struct opaque to do this. But it helps us sleep at
night, because we can control access to shared state better.
1000ms is a bit insane. It makes behavior on playback speed changes
worse (because the player has to catch up with the audio dropped by the
audio-chain reset), and perhaps makes seeking slower.
Note that the problem of playback speed changes misbehaving will be
fixed in the future, but even then we don't want to have a buffer that
large.
This comes with two internal AO API changes:
1. ao_driver.play can now take non-interleaved audio. For this purpose,
the data pointer is changed to void **data, where data[0] corresponds to
the pointer in the old API. Also, the len argument as well as the return
value are now in samples, not bytes. A "sample" in this context means
one audio frame, i.e. one value for each channel (sample_size * channels
bytes).
2. ao_driver.get_space now returns samples instead of bytes. (Similar to
the play function.)
Change all AOs to use the new API.
The AO API as exposed to the rest of the player still uses the old API.
It's emulated in ao.c. This is purely to split the commits changing all
AOs and the commits adding actual support for outputting N-I audio.
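A hypothetical, heavily simplified sketch of the changed entry points:

    struct ao;

    // Heavily simplified; the real struct ao_driver has many more members.
    struct ao_driver {
        // data[0..nplanes-1] point to the planes; for interleaved formats
        // only data[0] is used, which matches the single pointer of the
        // old API. samples counts audio frames, not bytes.
        int (*play)(struct ao *ao, void **data, int samples, int flags);
        // Free buffer space, now also in samples instead of bytes.
        int (*get_space)(struct ao *ao);
    };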
No AO can handle these, so it would be a problem if they got added
later and non-interleaved formats were accepted erroneously. Let them
gracefully fall back to other formats.
Most AOs actually would fall back, but to an unrelated format. This is
covered by this commit too: if possible, they should pick the
interleaved variant when a non-interleaved format is requested.
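Illustrative sketch only - the helper names are assumed, not necessarily
what the codebase provides:

    #include <stdbool.h>

    // Assumed helpers: a planar predicate and a planar -> interleaved
    // mapping for the same sample type.
    bool af_fmt_is_planar(int format);
    int  af_fmt_from_planar(int format);

    // If a non-interleaved (planar) format is requested, an AO that can't
    // do planar output picks the interleaved twin instead of some
    // unrelated format.
    static int pick_ao_format(int requested)
    {
        if (af_fmt_is_planar(requested))
            return af_fmt_from_planar(requested);
        return requested;
    }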
Set the PulseAudio stream title, just like the VO window title is set.
Refactor update_vo_window_title() so that we can use it for AOs too.
The ao_pulse.c bit is stolen from MPlayer.
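For illustration, updating the title at runtime is roughly this (error
and operation handling omitted):

    #include <pulse/pulseaudio.h>

    // Rename the stream, e.g. whenever the generic title update fires.
    static void set_stream_title(pa_stream *s, const char *title)
    {
        pa_operation *op = pa_stream_set_name(s, title, NULL, NULL);
        if (op)
            pa_operation_unref(op);
    }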
The code was selecting PA_CHANNEL_POSITION_MONO for MP_SPEAKER_ID_FC,
which is correct only with the "mono" channel layout, but not with anything
else. Remove the mono entry, and handle mono separately.
See github issue #326.
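To illustrate the distinction (made-up example, not the actual mapping
code):

    #include <pulse/pulseaudio.h>

    // True mono: the only case where PA_CHANNEL_POSITION_MONO is correct.
    static void map_mono(pa_channel_map *map)
    {
        pa_channel_map_init_mono(map);
    }

    // Any other layout with a front-center speaker (here: 3.0) must use
    // PA_CHANNEL_POSITION_FRONT_CENTER instead.
    static void map_3_0(pa_channel_map *map)
    {
        pa_channel_map_init(map);
        map->channels = 3;
        map->map[0] = PA_CHANNEL_POSITION_FRONT_LEFT;
        map->map[1] = PA_CHANNEL_POSITION_FRONT_RIGHT;
        map->map[2] = PA_CHANNEL_POSITION_FRONT_CENTER;
    }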