This brings the volume control closer to what is perceived as a linear
volume change.
Adjust the --softvol-max default to roughly the old maximum (roughly
doubles the gain).
Now --volume takes an absolute volume, meaning it doesn't depend on
--softvol-max. 0 is still silence, and 100 now always means unchanged
volume. The OSD and the "volume" property are changed accordingly.
Also raise the minimum value of --softvol-max. A value below 100 makes
no sense and breaks the OSD.
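Purely as an illustration of the idea (the exact curve is not spelled
out above), a cubic mapping is one common way to make a percentage
control track perceived loudness roughly linearly; the helper name
below is hypothetical:

    // Hypothetical sketch: map a 0..100 volume percentage to a linear
    // amplitude gain with a cubic curve, so equal steps on the control
    // correspond roughly to equal perceived loudness steps. 100 -> 1.0.
    static double vol_percent_to_gain(double percent)
    {
        double x = percent / 100.0;
        return x * x * x;
    }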
Apparently some A/V receivers do not behave well if "normal" DTS is
passed through using the high bitrate spdif format normally used for
DTS-HD (other receivers are fine with it).
Parse the first packet passed to ad_spdif by decoding it with libavcodec
in order to get the profile. Ignore the --ad-spdif-dtshd option if the
stream is not DTS-HD. (If the codec profile changes midstream, the user
is out of luck. But this is probably an insignificant corner case.)
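A rough sketch of such a probe using the current libavcodec
send/receive API (error handling and the actual ad_spdif integration
omitted; this is not the literal mpv code):

    #include <libavcodec/avcodec.h>

    // Sketch: decode one DTS packet to find out whether it is DTS-HD.
    // Returns 1 for DTS-HD (MA or HRA), 0 for plain DTS, -1 on error.
    static int probe_dts_hd(const AVPacket *pkt)
    {
        const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_DTS);
        AVCodecContext *ctx = codec ? avcodec_alloc_context3(codec) : NULL;
        if (!ctx)
            return -1;
        int hd = -1;
        if (avcodec_open2(ctx, codec, NULL) >= 0 &&
            avcodec_send_packet(ctx, pkt) >= 0)
        {
            AVFrame *frame = av_frame_alloc();
            if (frame && avcodec_receive_frame(ctx, frame) >= 0) {
                hd = ctx->profile == FF_PROFILE_DTS_HD_HRA ||
                     ctx->profile == FF_PROFILE_DTS_HD_MA;
            }
            av_frame_free(&frame);
        }
        avcodec_free_context(&ctx);
        return hd;
    }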
I thought about parsing the bitstream, but let's not. While it probably
wouldn't be that much effort, we are trying to keep codec details out
of this code - otherwise we could just do our own spdif framing instead
of using libavformat's spdif pseudo-muxer.
Another possibility, using the codec parameters signalled by
libavformat, is disregarded. Our builtin Matroska decoder doesn't do
this, and we also do not want the demuxer to have to decode some packets
just to retrieve codec parameters (as libavformat does).
Fixes #1949.
There is not much of a reason to have these wrappers around. Use POSIX
standard functions directly, and use a separate utility function to take
care of the timespec calculations. (Curse POSIX for using this weird
format for time values.)
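The utility in question is essentially "absolute time N milliseconds
from now" for pthread_cond_timedwait(); a minimal sketch (the function
name and clock choice are assumptions):

    #include <time.h>

    // Sketch: compute an absolute timespec "timeout_ms" milliseconds in
    // the future, as needed by pthread_cond_timedwait(). CLOCK_REALTIME
    // is assumed, since that is what the default condattr uses.
    static struct timespec timespec_from_now(long timeout_ms)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += timeout_ms / 1000;
        ts.tv_nsec += (timeout_ms % 1000) * 1000000L;
        if (ts.tv_nsec >= 1000000000L) {
            ts.tv_sec += 1;
            ts.tv_nsec -= 1000000000L;
        }
        return ts;
    }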
Drop mp_chmap_diff() (which is unused too now), and implement
mp_chmap_diffn() in a slightly simpler way. (Too bad there is no
standard function for counting set bits.)
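Conceptually, mp_chmap_diffn() counts the speakers present in one
layout but missing from the other; a sketch in terms of speaker
bitmasks (not the actual mpv implementation):

    #include <stdint.h>

    // Sketch: count the channels present in mask "a" but not in "b",
    // where each bit stands for one speaker ID. The set bits are
    // counted by hand, since standard C has no popcount function.
    static int count_channels_lost(uint64_t a, uint64_t b)
    {
        uint64_t diff = a & ~b;
        int n = 0;
        for (; diff; diff &= diff - 1)
            n++;
        return n;
    }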
Instead of somehow having 4 different cases, each with its own weight,
do it with a single function that decides which channel layout is the
better fallback.
This is simpler, and also introduces new (fixed) semantics. The new test
added to test/chmap_sel.c actually works now. This is a mixed case with
no perfect upmix or downmix, but the better choice is the one which
loses the least channels from the original layout.
One test also changes. If the input is 7.1(wide-side), and the available
layouts are 7.1 and 5.1(side), the latter is now chosen instead of the
former. This makes sense: both layouts contain 6 out of 8 channels from
the original layout, but the 5.1(side) one is smaller. This follows the
general logic. The 7.1 layout has FLC/RLC speakers instead of BL/BR,
and judging by the names, "front left center" is completely different
from "back left". If these should be exchangeable, a separate exception
would have to be added.
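The decision function described above boils down to something like the
following sketch (names and the bitmask representation are
illustrative, not the real chmap_sel code): prefer the candidate that
drops fewer of the requested channels, and on a tie prefer the smaller
layout.

    #include <stdint.h>

    static int popcount64(uint64_t v)
    {
        int n = 0;
        for (; v; v &= v - 1)
            n++;
        return n;
    }

    // Sketch: return 1 if candidate layout "c1" is a better fallback
    // for "requested" than "c2". Layouts are speaker bitmasks.
    static int is_better_fallback(uint64_t requested, uint64_t c1,
                                  uint64_t c2)
    {
        int lost1 = popcount64(requested & ~c1); // channels dropped with c1
        int lost2 = popcount64(requested & ~c2); // channels dropped with c2
        if (lost1 != lost2)
            return lost1 < lost2;   // losing fewer original channels wins
        return popcount64(c1) < popcount64(c2); // otherwise prefer smaller
    }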
Reuse MP_SPEAKER_ID_NA for this. If all mp_chmap entries are set to NA,
the channel layout has special "unknown channel layout" semantics, which
are used to deal with some corner cases.
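For illustration, the "unknown layout" special case is simply "every
entry is NA"; a simplified sketch with stand-in types (mpv's real
mp_chmap and MP_SPEAKER_ID_NA differ in detail):

    // Sketch only: simplified stand-ins for mp_chmap and the NA marker.
    #define SPEAKER_NA 0xFF

    struct chmap {
        int num;
        unsigned char speaker[16];
    };

    // A layout has "unknown channel layout" semantics if it is
    // non-empty and every entry is the NA speaker.
    static int chmap_is_unknown(const struct chmap *map)
    {
        if (map->num < 1)
            return 0;
        for (int i = 0; i < map->num; i++) {
            if (map->speaker[i] != SPEAKER_NA)
                return 0;
        }
        return 1;
    }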
Sometimes, ALSA will return channel layouts with padded channels (NA
speakers). Use them instead of failing.
This still includes the old "braindeath" code to retry with a layout
without NA channels. This might be helpful for performance, and also the
padded channel layout string looks confusing.
To be fair, I have not encountered a case yet which would really need
this, and for which the old "braindeath" code did not fix it.
volatile barely means anything.
The polling is kind of bad too, but relatively harmless as device
opening/closing is a rare event, and the format change is not expected
to take long.
Remove the pointless talloc call too (must have been a leftover
from previous refactoring).
No reason to keep them separate. It's an artifact from the old
ao_coreaudio.c, which kept usage of two different APIs in the same file.
Removes a forward reference too.
Instead of trying to use af_format_conversion_score() (which tries to be
all kinds of clever), just compare the raw bits as a quality measure. Do
this because otherwise, weird formats like padded 24 bit formats will be
excluded, even though they might be the highest precision formats for
some hardware.
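Reduced to its core, the comparison just looks at the raw bit count of
the two CoreAudio stream descriptions; a sketch (not the actual
ca_asbd_is_better(), and it assumes both formats were already found
usable):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudioTypes.h>

    // Sketch: prefer whichever description carries more bits per
    // channel, instead of computing an abstract conversion score.
    static bool asbd_has_more_bits(const AudioStreamBasicDescription *cand,
                                   const AudioStreamBasicDescription *cur)
    {
        return cand->mBitsPerChannel > cur->mBitsPerChannel;
    }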
This means that for now, the user would have to check whether the format
is usable at all before calling ca_asbd_is_better(). But since this is
currently only used for ao_coreaudio.c and for the physical format, it
doesn't matter.
If coreaudio-exclusive should get PCM support, the best would be to
revert this change, and to add support for 24 bit formats directly.
Some time ago, a mechanism was added for automatically removing PCM-only
filters if the input format is spdif.
This could cause an infinite loop if the AO did not support spdif, but
was falling back to some PCM format. Then this code tried to remove the
last filter, which is a dummy filter for receiving and queuing filter
output. af_remove() simply fails gracefully in this case, so this
happens over and over again.
Fix by explicitly checking whether the filter to remove is a dummy
filter. (af_remove() also fails only when an attempt is made to remove
one of the dummy filters - checking this directly is simpler.)
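The shape of the fix, with made-up names (the real libaf structures
differ): check for the dummy filter up front instead of relying on
af_remove() to fail.

    // Sketch with illustrative fields: refuse to drop the filter if it
    // is one of the dummy in/out filters that anchor the chain, instead
    // of calling the removal function and silently failing forever.
    struct af_inst {
        struct af_inst *next;
        int is_dummy;   // set on the fixed first/last chain entries
    };

    static int try_remove_filter(struct af_inst *af)
    {
        if (af->is_dummy)
            return 0;   // nothing to remove; the caller must stop here
        // ... unlink "af" from the chain and free it ...
        return 1;
    }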
Move all of the channel map retrieval/negotiation code to a separate
file. This will (probably) be helpful when extending
ao_coreaudio_exclusive.c.
Nothing else changes, other than some minor cosmetics and renaming,
and changing some details for decoupling it from the ao_coreaudio.c
internals.
It appears this is the reason coreaudio-exclusive does not work without
explicitly specifying a device, even if the default device maps to
something passthrough-capable.
Instead of always picking a format that is somehow better than the
previous one, select a format that is equal to or better than the
requested format, but is also reasonably close to it.
Drop the mFormatID comparison - checking the sample format handles this
already.
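As a sketch of the selection rule just described (simplified to a
single bit-depth dimension; the real code compares full stream
descriptions): a candidate that covers the requested quality wins over
one that does not, and among candidates that cover it, the closest one
wins.

    // Sketch: decide whether "cand" (in bits) is a better pick than the
    // currently selected "cur" for a requested bit depth.
    static int is_better_pick(int requested, int cand, int cur)
    {
        int cand_ok = cand >= requested;
        int cur_ok = cur >= requested;
        if (cand_ok != cur_ok)
            return cand_ok;    // covering the request beats not covering it
        if (cand_ok)
            return cand < cur; // both cover it: prefer the closer match
        return cand > cur;     // neither does: prefer the bigger one
    }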
Make sure to exclude channel counts that can't be used.
If for example the physical format is set to stereo, the reported
multichannel layout will actually be stereo. It fixes itself only after
the physical format is changed.
ao_coreaudio uses AudioUnit - the OSX software mixer. In theory, it
supports multichannel audio just fine. But in practice, this might be
disabled by default, and the user is supposed to select a multichannel
base format in the "Audio MIDI Setup" utility.
This option attempts to change this setting automatically. Some possible
disadvantages and caveats are listed in the manpage additions. It is off
by default, since changing this might be rather bad behavior for a
normal application.
If for example the audio settings are set to 5.1 output, but the
hardware does 8 channels natively (HDMI), the reported channel
layout will have 2 dummy channels. To avoid falling back to stereo,
we have to write audio in this format to the device.
Some audio APIs explicitly require you to add dummy channels. These are
not rendered, and only exist for the sake of the audio API or hardware
strangeness. At least ALSA, Sndio, and CoreAudio seem to have them.
This commit is preparation for using them with ao_coreaudio.
The result is a bit messy. libavresample/libswresample don't have a
good API for this; avresample_set_channel_mapping() is pretty useless.
Although in theory you can use it to add and remove channels, you
can't set the channel counts. So we do the ordering ourselves by making
sure the audio data is planar, and by swapping the plane pointers. This
requires lots of messiness to get the conversions in place. Also, the
input reordering is still done with the "old" method, and doesn't
support padded channels - hopefully this will never be needed. (I tried
to come up with cleaner solutions, but compared to my other attempts,
the final commit is not that bad.)
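The core of the trick: once the data is planar, reordering channels is
just permuting the plane pointers; a sketch (assuming the reorder map
and the planar layout are already set up):

    #include <string.h>

    // Sketch: reorder planar audio channels by permuting plane
    // pointers. reorder[i] names the source plane that should end up at
    // position i.
    static void reorder_planes(void **planes, int num_channels,
                               const int *reorder)
    {
        void *tmp[64];  // assumes a small, bounded channel count
        for (int i = 0; i < num_channels; i++)
            tmp[i] = planes[reorder[i]];
        memcpy(planes, tmp, num_channels * sizeof(planes[0]));
    }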
ca_label_to_mp_speaker_id() checked whether the last entry was >= 0, but
this condition could never be false, because MP_SPEAKER_ID_UNKNOWN0 is
not negative.
This should for now be equivalent; it's merely more explicit and will
be required if we add PCM support.
Note that the property listeners actually tell you what property
exactly changed, but resolving the current listener mess would be too
hard. So check for changes manually.
Useful with some of the following commits.
ca_fill_asbd() should behave exactly as before.
Instead of actually implementing the inverse function of ca_fill_asbd(),
just loop over the (small) list of mpv sample formats and check whether
any mpv equivalent to a given ASBD exists.
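In sketch form (the helper name, its signature, and the exact fields
compared are assumptions, not the real ca_fill_asbd() interface):

    #include <CoreAudio/CoreAudioTypes.h>

    // Hypothetical stand-in for ca_fill_asbd(): fills an ASBD from an
    // mpv sample format identifier.
    void fill_asbd_from_format(AudioStreamBasicDescription *asbd,
                               int mp_format);

    // Sketch: find an mpv format whose generated ASBD matches the given
    // one, instead of implementing a real inverse conversion.
    static int asbd_to_mp_format(const AudioStreamBasicDescription *asbd,
                                 const int *formats, int num_formats)
    {
        for (int i = 0; i < num_formats; i++) {
            AudioStreamBasicDescription test = {0};
            fill_asbd_from_format(&test, formats[i]);
            if (test.mFormatFlags == asbd->mFormatFlags &&
                test.mBitsPerChannel == asbd->mBitsPerChannel)
                return formats[i];
        }
        return 0; // no mpv equivalent found
    }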
kAudioFormatFlagIsSignedInteger implies that it is only used with
integer formats. The mpv internal flag, on the other hand, signals the
presence of a sign, and this is set on float formats.
Until now, this probably worked fine, because at least AudioUnit
ignores the incorrect flag.
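Roughly, the corrected flag mapping looks like this (the is_float and
is_signed parameters stand in for mpv's sample format queries):

    #include <CoreAudio/CoreAudioTypes.h>

    // Sketch: kAudioFormatFlagIsFloat for float formats, and
    // kAudioFormatFlagIsSignedInteger only for signed integer formats -
    // never both on a float format.
    static AudioFormatFlags flags_for_format(int is_float, int is_signed)
    {
        AudioFormatFlags flags = kAudioFormatFlagIsPacked;
        if (is_float)
            flags |= kAudioFormatFlagIsFloat;
        else if (is_signed)
            flags |= kAudioFormatFlagIsSignedInteger;
        return flags;
    }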