Useful with some of the following commits.
ca_fill_asbd() should behave exactly as before.
Instead of actually implementing the inverse function of ca_fill_asbd(),
just loop over the (small) list of mpv formats and check whether an mpv
equivalent of a given ASBD exists.
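A minimal sketch of that approach, assuming a hypothetical forward helper and
format list (mpv's real ca_fill_asbd() signature and format table differ):

```c
#include <CoreAudio/CoreAudioTypes.h>
#include <stdbool.h>

// Hypothetical forward conversion in the spirit of ca_fill_asbd():
// fills an ASBD for a given mpv sample format.
void fill_asbd_for_format(AudioStreamBasicDescription *asbd, int mp_format);

// Illustrative list of formats the AO knows about (not mpv's real one).
static const int known_formats[] = { 1, 2, 3, 4 };

// "Inverse" lookup: probe every known mpv format and return the one whose
// generated ASBD matches the given one, or 0 if no equivalent exists.
static int asbd_to_mp_format(const AudioStreamBasicDescription *asbd)
{
    for (size_t i = 0; i < sizeof(known_formats) / sizeof(known_formats[0]); i++) {
        AudioStreamBasicDescription test = {0};
        fill_asbd_for_format(&test, known_formats[i]);
        bool match = test.mFormatID == asbd->mFormatID &&
                     test.mFormatFlags == asbd->mFormatFlags &&
                     test.mBitsPerChannel == asbd->mBitsPerChannel;
        if (match)
            return known_formats[i];
    }
    return 0; // no mpv equivalent for this ASBD
}
```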
kAudioFormatFlagIsSignedInteger implies that it is used only with
integer formats. The mpv-internal flag, on the other hand, signals the
presence of a sign, and it is also set on float formats.
Until now, this probably worked fine, because at least AudioUnit
ignores the incorrect flag.
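A hedged sketch of the distinction; the mpv-side predicates below are
illustrative stand-ins, not mpv's real AF_FORMAT_* checks:

```c
#include <CoreAudio/CoreAudioTypes.h>
#include <stdbool.h>

// Illustrative predicates for the mpv-side format; mpv's real checks use
// its AF_FORMAT_* masks.
bool mp_format_is_float(int mp_format);
bool mp_format_is_signed(int mp_format);

// Only integer PCM should carry kAudioFormatFlagIsSignedInteger; float
// formats get kAudioFormatFlagIsFloat even though they are "signed" in
// mpv's sense.
static UInt32 flags_for_format(int mp_format)
{
    UInt32 flags = kAudioFormatFlagIsPacked;
    if (mp_format_is_float(mp_format))
        flags |= kAudioFormatFlagIsFloat;
    else if (mp_format_is_signed(mp_format))
        flags |= kAudioFormatFlagIsSignedInteger;
    return flags;
}
```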
The message log level shouldn't get to decide whether something fails
or not. So replace the fatal error check on the verbose output code
path with a warning.
Previously we let the user select the audio device by its device ID, but this ID
is not persistent and can change when plugging in new devices. That of course
made it quite worthless for storing as a user setting in GUIs, or for user scripts.
In theory, getting kAudioDevicePropertyDeviceUID can fail, but it doesn't on any
of my devices, so I'm leaving the error reporting quite loud and will see if
someone complains.
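For reference, a minimal sketch of reading the UID with the plain CoreAudio
property API (error handling reduced to returning NULL; mpv's actual helper
differs):

```c
#include <CoreAudio/CoreAudio.h>

// Returns the persistent device UID as a CFString (caller releases), or NULL
// on failure.
static CFStringRef get_device_uid(AudioDeviceID device)
{
    AudioObjectPropertyAddress addr = {
        .mSelector = kAudioDevicePropertyDeviceUID,
        .mScope    = kAudioObjectPropertyScopeGlobal,
        .mElement  = kAudioObjectPropertyElementMaster,
    };
    CFStringRef uid = NULL;
    UInt32 size = sizeof(uid);
    OSStatus err = AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &uid);
    return err == noErr ? uid : NULL;
}
```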
In general, you need to check errno when using strtol(), but as far as I
know, strtol() won't reset errno on success. This has to be done
manually. The code could have failed sporadically if strtol() succeeded,
and errno was already set to one of the checked values.
(This strtol() still isn't fully error checked, but I don't know if it's
intentional, e.g. for parsing a numeric prefix only.)
Before this commit, there was AF_FORMAT_AC3 (the original spdif format,
used for AC3 and DTS core), and AF_FORMAT_IEC61937 (used for AC3, DTS
and DTS-HD), which was handled as some sort of superset for
AF_FORMAT_AC3. There also was AF_FORMAT_MPEG2, which used
IEC61937-framing, but still was handled as something "separate".
Technically, all of them are pretty similar, but may use different
bitrates. Since digital passthrough pretends to be PCM (just with
special headers that wrap digital packets), this is easily detectable by
the higher samplerate or higher number of channels, so I don't know why
you'd need a separate "class" of sample formats (AF_FORMAT_AC3 vs.
AF_FORMAT_IEC61937) to distinguish them. Actually, this whole thing is
just a mess.
Simplify this by handling all these formats the same way.
AF_FORMAT_IS_IEC61937() now returns 1 for all spdif formats (even MP3).
All AOs just accept all spdif formats now - whether that works or not is
not really clear (seems inconsistent due to earlier attempts to make
DTS-HD work). But on the other hand, enabling spdif requires manual user
interaction, so it doesn't matter much if initialization fails in
slightly less graceful ways if it can't work at all.
At a later point, we will support passthrough with ao_pulse. It seems
the PulseAudio API wants to know the codec type (or maybe not - feeding
it DTS while telling it it's AC3 works), so add separate formats for each
codec. While this is reminiscent of the earlier chaos, it's stricter, and most
code just uses AF_FORMAT_IS_IEC61937().
Also, modify AF_FORMAT_TYPE_MASK (renamed from AF_FORMAT_POINT_MASK) to
include special formats, so that it always describes the fundamental
sample format type. This also ensures valid AF formats are never 0 (this
was probably broken in one of the earlier commits from today).
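A rough sketch of the resulting scheme; the constants and bit values below are
invented for illustration and do not match mpv's actual af_format.h:

```c
// Invented example values - mpv's real definitions differ. Every format
// carries a nonzero fundamental type, so a valid format is never 0.
#define AF_FORMAT_TYPE_MASK  (3 << 0)
#define AF_FORMAT_I          (1 << 0)   // integer PCM
#define AF_FORMAT_F          (2 << 0)   // float PCM
#define AF_FORMAT_S          (3 << 0)   // special: IEC 61937 / spdif framing

// True for every spdif format (AC3, DTS, DTS-HD, MP3, ...), since they all
// share the "special" fundamental type.
#define AF_FORMAT_IS_IEC61937(fmt) \
    (((fmt) & AF_FORMAT_TYPE_MASK) == AF_FORMAT_S)
```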
Until now, the audio chain could handle both little endian and big
endian formats. This actually doesn't make much sense, since the audio
API and the HW will most likely prefer native formats. Or at the very
least, it should be trivial for audio drivers to do the byte swapping
themselves.
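For illustration, swapping 16-bit samples in a driver that really needs the
other byte order is only a few lines (a generic sketch, not mpv code):

```c
#include <stddef.h>
#include <stdint.h>

// Swap the byte order of 16-bit samples in place before handing the buffer
// to the hardware (or after reading it from a file).
static void swap_samples_16(uint16_t *samples, size_t count)
{
    for (size_t i = 0; i < count; i++)
        samples[i] = (uint16_t)((samples[i] >> 8) | (samples[i] << 8));
}
```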
From now on, the audio chain contains native-endian formats only. All
AOs and some filters are adjusted. af_convertsignendian.c is now wrongly
named, but the filter name is adjusted. In some cases, the audio
infrastructure was reused on the demuxer side, but that is relatively
easy to rectify.
This is a quite intrusive and radical change. It's possible that it will
break some things (especially on obscure or non-Linux platforms), so watch
out for regressions. It's probably still better to do it the bulldozer
way, since a slow transition and researching foreign platforms would take
a lot of time and effort.
The mplayer1/2/mpv CoreAudio audio output historically contained both usage
of the AUHAL APIs (these go through the CoreAudio audio server) and the device-based
APIs (used only for output of compressed formats in exclusive mode).
The latter is a very unwieldy and low-level API and pretty much forces us to
write a lot of code for little gain. Also, with the widespread adoption of HDMI, the
actual need for outputting compressed audio directly to the device is getting
lower (it was very useful with S/PDIF, whose bandwidth constraints don't allow
a higher number of channels to be transmitted as LPCM).
Considering how invasive it is (uses hog/exclusive mode), the new AO
(`ao_coreaudio_device`) is not going to be autoprobed but the user will have
to select it.
Something like "char *s = ...; isdigit(s[0]);" triggers undefined
behavior, because char can be signed, and thus s[0] can be a negative
value. The is*() functions require unsigned char _or_ EOF. EOF is a
special value outside of unsigned char range, thus the argument to the
is*() functions can't be a char.
This undefined behavior can actually trigger crashes if the
implementation of these functions e.g. uses lookup tables, which are
then indexed with out-of-range values.
Replace all <ctype.h> uses with our own custom mp_is*() functions added
with misc/ctype.h. As a bonus, these functions are locale-independent.
(Although currently, we _require_ C locale for other reasons.)
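A sketch in the spirit of the new helpers (the actual misc/ctype.h may differ
in details):

```c
#include <stdbool.h>

// Locale-independent and safe to call with a plain (possibly signed) char.
static inline bool mp_isdigit(char c)
{
    return c >= '0' && c <= '9';
}

// If <ctype.h> must be used, the argument has to be converted first:
//   isdigit((unsigned char)s[0]);   // OK
//   isdigit(s[0]);                  // undefined behavior if s[0] < 0
```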
I don't think this is really a very good idea, because it is conceptually
incorrect, but other prominent multimedia programs (VLC and XBMC) use this
approach, and it seems to make the conversion more robust in certain cases.
For example, it has been reported that configuring a receiver that can output
7.1 to output 5.1 will make CoreAudio report 8 channel descriptions, with the
last 2 descriptions tagged kAudioChannelLabel_Unknown.
Fixes #737
CoreAudio supports 3 kinds of layouts: bitmap-based, tag-based, and speaker-description-based
(using either channel labels or positional data).
Previously we tried to convert everything to bitmap-based channel layouts,
but it turns out description-based ones are the most generic, and there are
built-in CoreAudio APIs to perform the conversion in this direction.
Moreover, description-based layouts support waveext extensions (like SDL and
SDR) and are easier to map to mp_chmaps.
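As an illustration of the built-in conversion APIs mentioned above, a tag-based
layout can be expanded into channel descriptions roughly like this (error
handling omitted; not mpv's actual code):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Expand a tag-based layout into a description-based AudioChannelLayout.
// The caller frees the returned buffer.
static AudioChannelLayout *layout_from_tag(AudioChannelLayoutTag tag)
{
    UInt32 size = 0;
    AudioFormatGetPropertyInfo(kAudioFormatProperty_ChannelLayoutForTag,
                               sizeof(tag), &tag, &size);
    AudioChannelLayout *layout = calloc(1, size);
    AudioFormatGetProperty(kAudioFormatProperty_ChannelLayoutForTag,
                           sizeof(tag), &tag, &size, layout);
    return layout; // mNumberChannelDescriptions/mChannelDescriptions filled in
}
```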
The previous code used the values of the AudioChannelLabel enum directly to
create the channel bitmap. While this was quite smart, it was pretty unreadable
and fragile (what if Apple changes the values of those enums?).
Change it to use a 'dumb' conversion table.
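A shortened, illustrative version of such a table; the label set is abbreviated
and the integer speaker values are placeholders, not mpv's real MP_SPEAKER_ID_*
constants:

```c
#include <CoreAudio/CoreAudio.h>

// Maps CoreAudio channel labels to mpv speaker IDs. Shortened for
// illustration; the real table covers many more labels.
static const struct {
    AudioChannelLabel label;
    int speaker;                        // an mpv MP_SPEAKER_ID_* value
} chmap_table[] = {
    { kAudioChannelLabel_Left,          /* MP_SPEAKER_ID_FL  */ 0 },
    { kAudioChannelLabel_Right,         /* MP_SPEAKER_ID_FR  */ 1 },
    { kAudioChannelLabel_Center,        /* MP_SPEAKER_ID_FC  */ 2 },
    { kAudioChannelLabel_LFEScreen,     /* MP_SPEAKER_ID_LFE */ 3 },
    { kAudioChannelLabel_LeftSurround,  /* MP_SPEAKER_ID_BL  */ 4 },
    { kAudioChannelLabel_RightSurround, /* MP_SPEAKER_ID_BR  */ 5 },
};
```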
The code stopped at kAudioChannelLabel_TopBackRight and missed mappings for
5 more channel labels. These are in a completely different order than the mpv
ones, so they must be mapped manually.
b2f9e0610 introduced this functionality with code that was quite 'monolithic'.
Split the functionality over several functions and use the new macros to get
array properties.
Introduce some macros to deal with properties. These allow working around the
limitation of CoreAudio's API being `void **` based. The macros keep their
clients' code DRY by not asking for the size and other details that the macro
itself can derive. I have no idea why Apple didn't design their API
like this in the first place.
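A hedged sketch of the idea; the helper and macro names here are illustrative,
not necessarily mpv's actual ones:

```c
#include <CoreAudio/CoreAudio.h>

// The macro derives the data size from the destination variable, so callers
// don't have to repeat it.
static OSStatus ca_get(AudioObjectID obj, AudioObjectPropertySelector sel,
                       UInt32 size, void *data)
{
    AudioObjectPropertyAddress addr = {
        .mSelector = sel,
        .mScope    = kAudioObjectPropertyScopeGlobal,
        .mElement  = kAudioObjectPropertyElementMaster,
    };
    return AudioObjectGetPropertyData(obj, &addr, 0, NULL, &size, data);
}

#define CA_GET(obj, sel, pdata) \
    ca_get((obj), (sel), sizeof(*(pdata)), (pdata))

// Usage: Float64 rate; CA_GET(device, kAudioDevicePropertyNominalSampleRate, &rate);
```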
* ao_coreaudio_utils: contains several utility functions
* ao_coreaudio_properties: contains functions to set and get audio object
  properties.