Since mixer->ao is always NULL now (it was really just forgotten to be
removed), the uninit call never actually cleared the af field, leaving
a dangling pointer that could be accessed by volume control.
This code was supposed to adjust existing conversion filters (to make
them output a different format). But the code was just broken,
apparently a refactoring accident. It accessed af instead of af->prev.
The bug tended to add new conversion filters, even if an existing one
could have been used. (Can be tested by inserting a dummy lavrresample
filter followed by a format filter which forces conversion.)
In addition, it's probably better to return the actual error code if
reinitializing the filter fails. It would then respect an AF_FALSE
return value, which means format negotiation failed, instead of a
generic error.
af_volume has a volumedb sub-option, which allows the user to set an
explicit volume. Until recently, the player read back this value and
used it as initial softvol volume. But now it just overwrites it.
Instead of overwriting it, multiply the different gain values. Above
all, this will do the right thing if only softvol is used, or if the
user only adds the af_volume filter manually.
Normally, you want downmixing to happen first thing in the filter chain.
This is reflected in codec downmixing, which feeds the filter chain
downmixed audio in the first place. Doing this has the advantage of
needing less data to process. But the main motivation is that if there
is a drc filter in the chain, you want it to process the downmixed
audio.
Add an idiotic heuristic to achieve this. It tries to detect whether the
audio was indeed automatically downmixed (or upmixed). To detect what
the output format is going to be, it builds the filter chain normally,
and then retries with the heuristic applied (and for extra paranoia,
retries without the heuristic again if it fails to successfully rebuild
the filter chain for unknown reasons). This is simple and will work in
almost all cases.
Doing it in a more complete way is rather hard, because filters are so
generic. For example, we know absolutely nothing about the behavior of
af_lavfi, which creates an opaque filter graph with libavfilter. We
don't know why a filter would e.g. change the channel layout on its
output. (Our heuristic bails out in this case.) We're also slave to the
lowest common denominator of how our format negotiation works, and how
libavfilter's works.
In theory, we could make this mechanism explicit by introducing a
special dummy filter. The filter chain would then try to convert between
input and output formats at the dummy filter, which would give the user
more control over how downmix happens. On the other hand, the user could
just insert explicit conversion filters instead, so this would probably
have questionable value.
The algorithm and functionality are the same, but the code becomes much
simpler and easier to follow.
The assumption that there is only 1 conversion filter (lavrresample)
helps with the simplification, but the main change is to use the same
code for format/channels/rate. Get rid of the different AF_CONTROL_SET_*
controls, and change the af->data parameters directly. (af->data is
badly named, but essentially is a placeholder for the output format.)
Also, instead of trying to use the af_reinit() loop to init inserted
conversion filters or filters with changed output formats, do it inline,
and move the common code to a filter_reinit() function. This gets rid of
the awful retry variable.
In general, this should not change any runtime behavior.
This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
This is not the correct fix, because we try to second-guess AudioUnit's
behavior by accessing a lower-level API. Suggestions welcome.
The libavcodec wmapro decoder will skip some bytes at the start of the
first packet and return each time. It will not return any audio data in
this state.
Our own code as well as libavcodec's new API handling
(avcodec_send_packet() etc.) discard the PTS on the first return, which
means the PTS is never known for the first packet. This results in a
"Failed audio resync." message.
Fix it by remembering the PTS in next_pts. This field is used only if the
decoder outputs no PTS, and is updated after each frame - and thus
should be safe to set.
(Possibly this should be fixed in libavcodec new API handling by not
setting the PTS to NOPTS as long as no real data has been output. It
could even interpolate the PTS if the timebase is known.)
Fixes the failure message seen in #3297.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a major
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong, and used the start value.
Instead of hardcoding what the libavcodec ac3 encoder expects, configure
it based on the AVCodec fields.
Unfortunately, it doesn't export the list of sample rates, so that is
done manually. This commit actually fixes the rate to always be 48 kHz. I
don't even know whether the other rates worked. (Possibly did, but
they'd still change the spdif parameters, and would work differently
from ad_spdif.c.)
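As a sketch of what "configure from the AVCodec fields" means here (a
hypothetical helper for illustration, not the literal ad_spdif.c code):

    #include <libavcodec/avcodec.h>

    static int query_ac3_encoder(const enum AVSampleFormat **fmts,
                                 const uint64_t **layouts, int *rate)
    {
        const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AC3);
        if (!codec)
            return -1;
        *fmts = codec->sample_fmts;         // accepted sample formats
        *layouts = codec->channel_layouts;  // accepted channel layouts
        // The sample rate list is not exported by the encoder, so it is
        // fixed to 48 kHz manually.
        *rate = 48000;
        return 0;
    }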
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
Note that vd_lavc.c already does something similar, but originally for
other reasons.
Fixes #3106.
This helps with shitty APIs and even shittier drivers (I'm looking at
you, ALSA). Sometimes they won't send proper wakeups. This can be fine
during playback, when for example playing video, because mpv still will
wakeup the AO outside of its own wakeup mechanisms when sending new data
to it. But when draining, it entirely relies on the driver's wakeup
mechanism. So when the driver wakeup mechanism didn't work, it could
hard freeze while waiting for the audio thread to play the rest of the
data.
Avoid this by waiting for an upper bound. We set this upper bound at the
total mpv audio buffer size plus 1 second. We don't use the get_delay
value, because the audio API could return crap for it, and we're being
paranoid here. I couldn't confirm whether this works correctly, because
my driver issue fixed itself.
(In the case that happened to me, the driver somehow stopped getting
interrupts. aplay froze instead of playing audio, and playing audio-only
files resulted in a chop party. Video worked, for reasons mentioned
above, but draining froze hard. The driver problem was solved when
closing all audio output streams in the system. Might have been a dmix
related problem too.)
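The upper-bound wait boils down to something like this (an illustrative sketch
using plain pthreads; the real code uses mpv's own wait helpers and buffer
bookkeeping):

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    static void wait_for_drain(pthread_mutex_t *lock, pthread_cond_t *wakeup,
                               bool *still_playing, double buffer_seconds)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        // Upper bound: the whole audio buffer plus 1 second of slack.
        deadline.tv_sec += (time_t)(buffer_seconds + 1);

        pthread_mutex_lock(lock);
        while (*still_playing) {
            // Returns ETIMEDOUT once the bound passes, even if the driver
            // never delivers a wakeup - so we can't freeze forever.
            if (pthread_cond_timedwait(wakeup, lock, &deadline))
                break;
        }
        pthread_mutex_unlock(lock);
    }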
When we're draining, don't wakeup the core on every buffer fill, since
unlike during normal playback, we won't actually get more data. The
wakeup here conceptually works like wakeups with condition variables, so
redundant wakeups do not hurt, so this is just a minor change and
nothing of consequence.
(Final EOF also requires waking up the core, but there is separate code
to send this notification.)
Also dump the p->still_playing field in trace logging.
For clang, it's enough to just put (void) around usages we are
intentionally ignoring the result of.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
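In other words (a sketch; write() is just an example of a warn_unused_result
function):

    #include <stddef.h>
    #include <unistd.h>

    static void log_best_effort(int fd, const char *msg, size_t len)
    {
        // clang: the cast documents that the result is deliberately ignored.
        // GCC keeps warning for warn_unused_result functions despite the
        // cast, hence disabling the warning globally (-Wno-unused-result).
        (void)write(fd, msg, len);
    }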
Since the main thread is shared by other things in the player, using STA (single
threaded apartment) may have caused problems. Instead initialize in MTA
(multithreaded apartment).
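The initialization itself is a one-liner (sketch):

    #include <objbase.h>

    static HRESULT com_init_mta(void)
    {
        // MTA instead of STA; S_FALSE just means COM was already initialized
        // on this thread, and still has to be balanced by CoUninitialize().
        return CoInitializeEx(NULL, COINIT_MULTITHREADED);
    }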
This reportedly makes it work on ODROID-C2. The idea for this hack is
taken from kodi; they unconditionally set some or all of those flags.
I don't trust ALSA enough to hope that setting these flags couldn't
break something else, so we try without them first.
It's not clear whether this is a driver bug or a bug in the ALSA libs.
There is no ALSA bug tracker (the ALSA website has had a dead link to
a deleted bug tracker for years). There's not much we can do other than
piling up ridiculous hacks. At least I think that at this point invalid
API usage by mpv can be excluded as a cause.
ALSA might be the worst audio API ever.
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes#3097
Setting this here is a race condition. It's called from a CoreAudio
callbacks, and there are no locks. It's a string, so this can be
potentially severe.
It's hard to fix and only CoreAudio supported it, so remove it.
This causes the "audio-out-detected-device" property to return nothing
on all platforms.
AVFormatContext.codec is deprecated now, and you're supposed to use
AVFormatContext.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
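The general shape of the change (a sketch, not the exact mpv code):

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    static AVCodecContext *open_decoder_ctx(AVStream *st)
    {
        const AVCodec *codec = avcodec_find_decoder(st->codecpar->codec_id);
        if (!codec)
            return NULL;
        AVCodecContext *avctx = avcodec_alloc_context3(codec);
        if (!avctx)
            return NULL;
        // Replaces direct use of the deprecated st->codec context.
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }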
This is particularly useful for opus which allows only a fairly restrictive set
of samplerates. If the codec doesn't provide a list of samplerates, just
continue to try the requested one and hope for the best.
Fixes #2957.
This function chooses the best match to a given samplerate from a provided
list. This can be used, for example, by the ao to decide what samplerate to use
for output.
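One reasonable policy looks like this (a hypothetical helper for illustration;
the actual function's name and tie-breaking rules may differ):

    // From a 0-terminated list of supported rates, prefer the smallest rate
    // that is >= the requested one; otherwise fall back to the highest rate.
    static int select_best_samplerate(int requested, const int *supported)
    {
        int best = 0;
        for (int i = 0; supported[i]; i++) {
            int r = supported[i];
            if (r >= requested && (best < requested || r < best))
                best = r;           // closest rate at or above the request
            else if (best < requested && r > best)
                best = r;           // best effort: highest rate below it
        }
        return best;                // 0 if the list was empty
    }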
This eliminates some intermittent pops heard in a HRT MicroStreamer DAC
uncorrelated with user interaction. As a bonus, this resolves #1773, which I can
no longer reproduce as of this commit. Leave the 50ms buffer for shared mode
since that seems to be working quite well.
This is also the way exclusive mode is done in the MSDN example code:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370844%28v=vs.85%29.aspx
This was originally increased in c545c40 to mitigate glitches that subsequent
refactorings have eliminated.
A COM message loop is apparently totally inappropriate for a low latency
thread. It leads to audio glitches because the thread doesn't wake up fast
enough when it should. It also causes mysterious correlations between the vo
and ao thread (i.e., toggling fullscreen delays audio feed events). Instead use
an mp_dispatch_queue to set/get volume/mute/session display name from the audio
thread. This has the added benefit of obviating the need to marshal the
associated interfaces from the audio thread.
Don't wait for WASAPI to send another feed event if we detect an underfull
buffer. It seems that WASAPI doesn't always send extra feed events if
something causes rendering to fall behind. This causes every subsequent playback
buffer to under-run until playback is reset. The fix is simply to do a one-shot
double feed when this happens, which allows rendering to catch up with playback.
This was observed to happen when using MsgWaitForMultipleObjects to wait for the
feed event and toggling fullscreen with vo=opengl:backend=win. This commit
improves the behaviour in that specific case and more generally makes exclusive
mode significantly more robust.
This commit also moves the logic to avoid *over*filling the exclusive mode
buffer into thread_feed right next to the above-described underfill logic.
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
Fixes correctness_trimming_nobeeps.opus. One nasty thing is that this
mechanism interferes with the container-signalled mechanism with
AV_FRAME_DATA_SKIP_SAMPLES. So apply it only if that is apparently not
present. It's a mess, and it's still broken in FFmpeg CLI, so I'm sure
this will get fucked up later again.
I'm not quite sure what the FFmpeg AV_FRAME_DATA_SKIP_SAMPLES API
demands here. The code so far assumed that skipping can be more than a
frame, but not trimming. Extend it to trimming too.
This is actually already done by dec_audio.c. But if
AV_FRAME_DATA_SKIP_SAMPLES is applied, this happens too late here. The
problem is that this will slice off samples, and make it impossible for
later code to reconstruct the timestamp properly.
Missing timestamps can still happen with some demuxers, e.g. demux_mkv.c
with Opus tracks. (Although libavformat interpolates these itself.)
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
concatenating files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanism to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
The code is shared with the --vd-lavc-threads option, so using 0 for
auto-detection just works.
But no, this is not useful. Just change it for orthogonality.
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
I can't explain this, but it seems to be a similar case to the ALSA HDMI
one. I find it hard to tell because of the slightly different names and
conventions in use in libavcodec, WAVEEXT channel masks, decoders, codec
specifications, HDMI, and platform audio APIs.
The fix is the same as the one for ao_alsa (see commit be49da72). This
should fix at least playing 7.1 sources on OSX with 7.1(rear) selected
in Audio MIDI Setup. The ao_alsa commit mentions XBMC, but I couldn't
find out where it does that or if it also does that for CoreAudio. It's
worth noting that PHT (essentially an old XBMC fork) also exhibited the
incorrect behavior (i.e. side and back speakers were swapped).
Remove flc-frc <-> sl-sr. This was just plain wrong, and a mistaken
change to make 7.1 work properly on CoreAudio with 7.1(rear) layout.
Also see the following commit.
Add bl-br <-> sl-sr, because we decided that it makes sense.
Note that this "fudging" is applied only if the channel pairs are
replaced, i.e. they would get dropped and be replaced with silence. This
is done to compensate for libswresample's default rematrixing (which
takes care of some more common cases).
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
This also makes it refcounted, i.e. the new AVFrame will reference the
mp_audio buffers, instead of potentially forcing the consumer of the
AVFrame to copy the data.
All the extra code is for handling the >8 channels case, which requires
very messy dealing with the extended_ fields (not our fault).
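Conceptually, the conversion does something like this (a loose sketch assuming
planar audio with refcounted per-plane buffers; the >8 channel path with
separately allocated extended_data/extended_buf is omitted):

    #include <libavutil/channel_layout.h>
    #include <libavutil/frame.h>
    #include <libavutil/samplefmt.h>

    static AVFrame *wrap_planes(AVBufferRef **bufs, int planes, int nb_samples,
                                enum AVSampleFormat fmt, uint64_t chlayout)
    {
        if (planes > AV_NUM_DATA_POINTERS)
            return NULL; // the messy extended_ case, not shown here
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return NULL;
        frame->format = fmt;
        frame->channel_layout = chlayout;
        frame->channels = av_get_channel_layout_nb_channels(chlayout);
        frame->nb_samples = nb_samples;
        frame->linesize[0] = nb_samples * av_get_bytes_per_sample(fmt);
        for (int i = 0; i < planes; i++) {
            // Reference the existing buffers instead of copying the data.
            frame->buf[i] = av_buffer_ref(bufs[i]);
            if (!frame->buf[i]) {
                av_frame_free(&frame);
                return NULL;
            }
            frame->extended_data[i] = frame->data[i] = frame->buf[i]->data;
        }
        return frame;
    }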
Correctly avoid a reload if the current device was specified by the user through
--audio-device. Previously, we only recognized if the user had specified
--ao=wasapi:device=.
This seems generally easier when using libmpv (and was already requested
and implemented before: see commit 327a779a; it was reverted some time
later).
With the weird internal logic we have to deal with, in particular the
--softvol=no case (using system volume), and using the audio API's mixer
(--softvol=auto on some systems), we still can't avoid all glitches and
corner cases that complicate this issue so much. The API user is either
recommended to use --softvol=yes or auto, or to watch the new
mixer-active property, and assume the volume/mute properties have
significant values if the mixer is active.
Remaining glitches:
- changing the volume/mute properties has no effect if no internal mixer
is used (--softvol=no) and the mixer is not active; the actual mixer
controls do not change, only the property values
- --volume/--mute do not have an effect on the volume/mute properties
before mixer initialization (the options strictly are only applied
during mixer init)
- volume-max is 100 while the mixer is not active
Notably, the address of the enumerator->count member is passed to
IMMDeviceCollection::GetCount(), which expects a UINT variable, not an int. How
did this ever work?
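The fix is as simple as it sounds (sketch):

    #define COBJMACROS
    #include <windows.h>
    #include <mmdeviceapi.h>

    static HRESULT count_devices(IMMDeviceCollection *collection, UINT *count)
    {
        *count = 0;
        // GetCount() writes through a UINT *, so hand it exactly that, not
        // the address of an int field.
        return IMMDeviceCollection_GetCount(collection, count);
    }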
Previously, if the enumerator found no devices, attempting to get the default
device with IMMDeviceEnumerator::GetDefaultAudioEndpoint would result in the
cryptic (and undocumented) E_PROP_ID_UNSUPPORTED. This way, the user is given a
better indication of what exactly is wrong and isolates any other possible
triggers for this error.
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
This is probably the 3rd time the user-visible behavior changes. This
time, switch back because not normalizing seems to be the more expected
behavior from users.
Seems useless.
This only helped in one case: one audio stream in the sample
av_find_best_stream_fails.ts had AC3 packets which couldn't be
decoded, and for which avcodec_decode_audio4() returned 0 forever. In
this specific case, playback will now not start, and you have to
deselect audio manually.
(If someone complains, the old behavior might be restored, but
differently.)
Also remove the stale "bitrate" field.
While the situation is not really clear for the other rewritten
coreaudio code, it's very clear for the channel mapping code. It was all
written by us. (MPlayer doesn't even have any channel map handling.)
This covers source files which were added in mplayer2 and mpv times
only, and where all code is covered by LGPL relicensing agreements.
There are probably more files to which this applies, but I'm being
conservative here.
A file named ao_sdl.c exists in MPlayer too, but the mpv one is a
complete rewrite, and was added some time after the original ao_sdl.c
was removed. The same applies to vo_sdl.c, for which the SDL2 API is
radically different in addition (MPlayer supports SDL 1.2 only).
common.c contains only code written by me. But common.h is a strange
case: although it originally was named mp_common.h and exists in MPlayer
too, by now it contains only definitions written by uau and me. The
exceptions are the CONTROL_ defines - thus not changing the license of
common.h yet.
codec_tags.c contained once large tables generated from MPlayer's
codecs.conf, but all of these tables were removed.
From demux_playlist.c I'm removing a code fragment from someone who was
not asked; this probably could be done later (see commit 15dccc37).
misc.c is a bit complicated to reason about (it was split off mplayer.c
and thus contains random functions out of this file), but actually all
functions have been added post-MPlayer. Except get_relative_time(),
which was written by uau, but looks similar to 3 different versions of
something similar in each of the Unix/win32/OSX timer source files. I'm
not sure what that means in regards to copyright, so I've just moved it
into another still-GPL source file for now.
screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but
they're all gone.
Previously, the opt_exclusive option was used to decide which volume control
code to run. This might not always reflect the actual state, for example if passthrough
is used. Admittedly, none of the volume controls will work anyway with
passthrough, but this is the right thing to do.
Prevents channels from being dropped, e.g. when going 7.1 -> 7.1(wide)
and similar cases. The reasoning here is that channel layouts over HDMI
don't work anyway, and not dropping a channel and playing it on a
slightly "wrong" (but expected) speaker is preferable to playing silence
on these speakers.
Do this to remove issues with ao_coreaudio. Frankly I'm not sure whether
our mapping (between CA and mpv/FFmpeg speakers) is correct, but on the
other hand due to the reasons stated above it's not all that meaningful.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
Note that hresult_to_str() (coming from wasapi_explain_err()) is mostly
wasapi-specific, but since HRESULT error codes are unique, it can be
extended for any other use.
This is another attempt at making files with sparse video frames work
better.
The problem is that you generally can't know whether a jump in video
timestamps is just a (very) long video frame, or a timestamp reset. Due
to the existence of files with sparse video frames (new frame only every
few seconds or longer), every heuristic will be arbitrary (in general,
at least).
But we can use the fact that if video is continuous, audio should also
be continuous. Audio discontinuities can be easily detected, and if that
happens, reset some of the playback state.
The way the playback state is reset is rather radical (resets decoders
as well), but it's just better not to cause too much obscure stuff to
happen here. If the A/V sync code were to be rewritten, it should
probably strictly use PTS values (not this strange time_frame/delay
stuff), which would make it much easier to detect such situations and
to react to them.
It existed for XP-compatibility only. There was also a time where
ao_wasapi caused issues, but we're relatively confident that ao_wasapi
works better or at least as good as ao_dsound on Windows Vista and
later.
Normally, PulseAudio accepts any combination of sample format, sample
rate, channel count/map. Sometimes it does not. For example, the sample
rate or channel count have fixed maximum values. We should not fail
fatally in such cases, but attempt to fall back to a working format.
We could just pass an "unset" format to Pulse, but this is not too
attractive. Pulse could use a format which we do not support, and also
doing so much for an obscure corner case is not reasonable. So just pick
a format that is very likely supported.
This still could fail at runtime (the stream could fail instead of going
to the ready state), but handling that also sounds too complicated. In
particular, it doesn't look like pulse will tell us the cause of the
stream failure. (Or maybe it does - but I didn't find anything.)
Last but not least, our fallback could be less dumb, and e.g. try to fix
only one of samplerate or channel count first to reduce the loss, but
this is also not particularly worth the effort.
Fixes#2654.
Given 5.1(side), this lets it pick 5.1 from [5.1, 7.1]. Which was
probably the original intention of this replacement stuff. Until now,
the opposite was done in some cases.
Keep the old heuristic if the replacement is not perfect. This would
mean that a subset of the channel layout is an inexact equivalent, but
not all of it.
(My conclusion is that audio output APIs should be designed to simply
take any channel layout, like the PulseAudio API does.)
Unify and clean up listing and selection. Use common enumerator code for both
operations to avoid duplication or inconsistencies.
Maintain, but significantly simplify, manual device selection by id, name or
number. This actually fixes loading by name which didn't really work before
since the "name" displayed by --audio-device=help differed from that used to
match the selection, which used the device "description" instead.
Save the selected deviceID in the private structure for later loading. This will
permit moving the device selection into the main thread in a future commit.
Apparently it's only on Wine that the qpc_position returned by
IAudioClock_GetPosition can overflow. So actually do the rescaling
correctly, but throw away the result if it looks unreasonable.
This fixes a regression in 5afa68835a.
Make sure that subtraction of performance counters is done correctly.
Follow the *exact* instructions for converting the performance counter to
something comparable to the QPC position returned by IAudioClock::GetPosition:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370889%28v=vs.85%29.aspx
Also make sure that subtraction of unsigned integers is stored into a signed
integer to avoid nastiness. Also be more careful about overflow in the
conversion of the device position into number of samples.
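The conversion in question, roughly (a sketch following the MSDN page; the
helper name is made up):

    #include <windows.h>

    static INT64 qpc_diff_100ns(UINT64 device_qpc_position)
    {
        LARGE_INTEGER freq, counter;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&counter);
        // Scale the raw counter into the 100-ns units that
        // IAudioClock::GetPosition reports for its QPC position.
        INT64 qpc_100ns = (INT64)(10000000.0 * counter.QuadPart / freq.QuadPart);
        // Keep the difference signed so either ordering is handled.
        return qpc_100ns - (INT64)device_qpc_position;
    }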
Avoid casting mp_time_us() to a double, and use llrint to convert the
double precision delay_us back to integer for ao_read_data.
Finally, actually check the return value of ao_read_data and add a verbose
message if it is not the expected value. Unfortunately,
there is no way to tell WASAPI when this happens since the frame_count in
ReleaseBuffer must match GetBuffer.
Do not try to set/get the master volume in exclusive mode if there is no
hardware support. This would just uselessly change the master slider,
but have no effect on the actual volume.
Furthermore if getting hardware volume support information fails, then assume
it has none.
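The check amounts to something like this (sketch; the helper name is made up):

    #define COBJMACROS
    #include <windows.h>
    #include <endpointvolume.h>
    #include <stdbool.h>

    static bool have_hw_volume(IAudioEndpointVolume *vol)
    {
        DWORD support = 0;
        // If the query itself fails, assume there is no hardware support.
        if (FAILED(IAudioEndpointVolume_QueryHardwareSupport(vol, &support)))
            return false;
        return support & ENDPOINT_HARDWARE_SUPPORT_VOLUME;
    }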
It was complicated and not even very intuitive to the user.
If you are controlling the master volume, you just have to be
prepared to deal with the consequences.
A manually added af_volume could lead to muted audio when switching to a
new file. af_volume keeps the last volume set by AF_CONTROL_SET_VOLUME
to return it with AF_CONTROL_GET_VOLUME, but the initial value is 0. So
the mixer volume was forced to 0 when uninitializing the filter chain and
reading back the previously set volume.
If there were many AO drivers without device selection, this added a
"Default" entry for each AO. These entries were not distinguishable, as
the device list feature is meant not to require to display the "raw"
device name in GUIs.
Disambiguate them by adding the driver name. If the AO is the first, the
name will remain just "Default". (The condition checks "num > 1",
because the very first entry is the dummy for AO autoselection.)
Of course, only FFmpeg has av_clipd(), while Libav does not. (Nevermind
that it doesn't do much more than the mpv MPCLAMP() macro. Supposedly,
libavutil can provide optimized platform-specific versions for av_clip*,
but of course nothing actually does for av_clipf() or av_clipd().)
libswresample doesn't do it - although it should, but the patch is stuck
in limbo.
Probably reduces problems with artifacts on downmixing in some cases.
Remove known useless device entries from the --audio-device list (and
corresponding property). Do this because the list is supposed to be a
high level list of devices the user can select. ALSA does not provide
such a list (in a usable manner), and ao_alsa.c is still in the best
position to improve the situation somewhat.
The ALSA doxygen says:
    IOID - input / output identification ("Input" or "Output"), NULL
    means both
This bug was blatantly introduced with commit cf94fce4.
Apparently, some audio drivers do not support the DTS subtype, but
passthrough works anyway if the AC3 subtype is set. Just retry with
AC3 if the proper format doesn't work. The audio device which
exposed this behavior reported itself as
"M601d-A3/A3R (Intel(R) Display Audio)".
xbmc/kodi even always passes DTS as AC3.
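The retry is conceptually just swapping the subformat GUID before trying the
format again (a hypothetical helper, assuming the IEC 61937 subtype GUIDs from
ksmedia.h are available; the rest of the format setup is unchanged):

    #include <initguid.h>
    #include <windows.h>
    #include <mmreg.h>
    #include <ks.h>
    #include <ksmedia.h>

    static void dts_to_ac3_fallback(WAVEFORMATEXTENSIBLE *wfe)
    {
        // If the driver rejected the DTS passthrough subtype, pretend the
        // stream is AC3 and let IAudioClient::IsFormatSupported decide again.
        if (IsEqualGUID(&wfe->SubFormat, &KSDATAFORMAT_SUBTYPE_IEC61937_DTS))
            wfe->SubFormat = KSDATAFORMAT_SUBTYPE_IEC61937_DOLBY_DIGITAL;
    }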