All authors have agreed.
One exception is 71247a97b3, whose author was not asked, but we deem
the change trivial. (And technically it was replaced when the audio
chain dropped non-native endian sample formats.)
All authors have agreed to the relicensing.
The code was pretty much rewritten by Stefano Pigozzi. Since the rewrite
happened incrementally, and seems to include refactored portions of
older code, this relicensing was done on the pre-refactor code too.
The original commit adding this AO (as ao_macosx.c) credits Timothy J.
Wood as the original author. He was asked and agreed to LGPL. It's not
entirely clear which project this code came from, but it's probably
libao. In that project, Stanley Seibert made some changes to it (as a
major developer of libao, he was asked just to be sure), and also Ralph
Giles and Ben Hines made two small changes. The latter were not asked,
but none of their code survived anyway.
All authors have agreed.
Commit 94d3170bd0 is a bit murky: Nick could not be reached, and arpi's
changes were obviously inspired by or copied from Nick's. However, the
changed symbols were removed and do not exist anymore.
Although pretty similar to the probably unrelicensable
video/fmt-conversion.c/h (basically using the same idea, but for audio),
it was written by someone else. The format mapping was first added in
commit ad95e046c2.
The new replaygain code accidentally applied the linear gain as cubic
volume level. Fix this by moving the computation of the volume scale out
of the af_volume filter.
(Still haven't verified whether the replaygain code works correctly.)
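For illustration, the difference between the two mappings (a sketch; the
exact cubic curve mpv uses for softvol is an assumption here):
    #include <math.h>
    // Hypothetical helpers: a cubic user-volume scale vs. a linear
    // replaygain factor. Feeding a linear gain through the cubic mapping
    // (the bug) scales it incorrectly.
    static double cubic_vol_to_gain(double vol)    // vol in 0..100
    {
        return pow(vol / 100.0, 3.0);
    }
    static double replaygain_db_to_gain(double db) // dB to linear factor
    {
        return pow(10.0, db / 20.0);
    }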
IMMDeviceEnumerator_RegisterEndpointNotificationCallback() will start
listening for notifications, and is the point at which callbacks can
start firing. These callbacks will read the fields we set after the
register calls, which is a potential race condition. Move it upwards.
Literally copy-pasted from the same commit for video filters. (Once new
code for filters is implemented, this will all go away or at least get
unified anyway.)
This was supposed to restrict output to formats supported by us. But we
usually support all FFmpeg sample formats anyway (if not, it will error
out gracefully, and we would add the missing format). Basically, it's
just useless bloat.
This tried to use AF_FORMAT_DOUBLE as KSDATAFORMAT_SUBTYPE_IEEE_FLOAT,
with wBitsPerSample==64. This is probably not allowed, and drivers
appear to react inconsistently to it. (With one user, the format was
accepted during format negotiation, but then rejected on actual init.)
Remove it, which essentially forces it to fall back to some other
format. (Looks like it'll use af_select_best_samplerate(), which would
probably make it try S32 next.)
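For reference, the rejected configuration corresponds roughly to this
fragment (a sketch with other required fields elided; not mpv's actual
code):
    WAVEFORMATEXTENSIBLE wf = {0};
    wf.Format.wFormatTag = WAVE_FORMAT_EXTENSIBLE;
    wf.Format.wBitsPerSample = 64;                  // double precision samples
    wf.Samples.wValidBitsPerSample = 64;
    wf.SubFormat = KSDATAFORMAT_SUBTYPE_IEEE_FLOAT; // probably not allowed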
The af_fmt_from_planar() call is so that we don't have to care about
AF_FORMAT_FLOATP. Wasapi always requires packed data anyway.
This should actually handle other potentially unknown sample formats
better.
Previously, set_waveformat() always set the exact format; now it might
set a "close" format instead. All callers seem to deal with this well,
although in theory, callers should probably handle the fallback. The
next cleanup (if ever) can take care of this.
Basically, see the example in input.rst.
This is better than the "old" vf-toggle method, because it doesn't
require the user to duplicate the filter string in mpv.conf and
input.conf.
Some aspects of this change are untested, so enjoy your alpha testing.
Remove the low quality drc filter. Anyone wishing to have dynamic range
compression should use the much more powerful acompressor ffmpeg filter:
mpv --af=lavfi=[acompressor] INPUT
Or with parameters:
mpv --af=lavfi=[acompressor=threshold=-25dB:ratio=3:makeup=8dB] INPUT
Refer to https://ffmpeg.org/ffmpeg-filters.html#acompressor for a full
list of supported parameters.
Signed-off-by: wm4 <wm4@nowhere>
The buffer_size may be updated before the process callback is called for
the first time. Or, the connection graph could change, which changes the
latency of the pipeline after mpv's output. Ensure we keep on top of
these changes by registering callbacks to update our latency estimation.
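A sketch of the registration (these are the standard JACK calls; the
handler functions are elided and their names made up):
    // Track buffer size changes and connection graph changes, both of
    // which can invalidate the current latency estimate.
    jack_set_buffer_size_callback(client, on_buffer_size, ao);
    jack_set_graph_order_callback(client, on_graph_order, ao);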
It appears some devices can be missing if we filter too many. In
particular, I've seen devices starting with "front" and "sysdefault"
being mapped to different hardware. I conclude that it's not sane trying
to present a nice device list to users in ALSA. It's fucked. (Although
kodi appears to attempt some intense "beautification" of the device
list, which includes parsing parameters from the device name and such.
Well, let's not.)
No other audio API requires such ridiculous acrobatics.
Apparently POLLERR can be set if poll is called while the device is in
the SND_PCM_STATE_PREPARED state. So assume that we can simply call
snd_pcm_status() to check whether the error is because the device went
away (i.e. we expect it to return ENODEV if this happened).
This avoids sporadic device lost warnings and AO reloads. The actual
device lost case is untested.
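A minimal sketch of the check (simplified; the actual reload mechanism
is elided):
    // After poll() flags POLLERR, distinguish "device went away" from a
    // harmless state like SND_PCM_STATE_PREPARED.
    snd_pcm_status_t *status;
    snd_pcm_status_alloca(&status);
    int err = snd_pcm_status(pcm, status);
    if (err == -ENODEV) {
        // device lost: trigger AO reload
    } else {
        // spurious POLLERR: ignore and keep polling
    }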
For example, previously, --audio-device='alsa/' would provide ao->device="" to
the alsa driver in spite of the fact that this is an already parsed option. To
avoid requiring a check of ao->device[0] in every driver, make sure this never
happens.
Fall back on PATH_DEV_DSP if nothing is set.
This mirrors the behaviour of --audio-device / --alsa-device.
There doesn't appear to be a general way to list devices with oss, so
--audio-device=help doesn't list oss devices except for the default one if the
file exists.
Previously --audio-device was ignored entirely by ao_oss.
Fixes #4122.
See: https://msdn.microsoft.com/en-us/library/windows/desktop/dd743946.aspx
Microsoft example code often uses a SAFE_RELEASE macro like the one in
the above link. This makes it easier to avoid errors when releasing COM
interfaces. It also reduces noise in COM-heavy code.
ao_wasapi.h also had a macro called SAFE_RELEASE, though unlike the
version above, its SAFE_RELEASE macro accepted a second parameter which
allowed it to destroy arbitrary objects other than just COM interfaces.
This renames ao_wasapi's SAFE_RELEASE to SAFE_DESTROY, which should more
accurately reflect what it does and prevent confusion with the Microsoft
version.
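Roughly, the two macros look like this (a sketch of the pattern; mpv's
exact definitions may differ):
    // COM-only: release the interface and clear the pointer.
    #define SAFE_RELEASE(punk) \
        do { if ((punk) != NULL) { IUnknown_Release((IUnknown *)(punk)); (punk) = NULL; } } while (0)
    // Generalized: run an arbitrary destructor expression, then clear
    // the pointer.
    #define SAFE_DESTROY(unk, release) \
        do { if ((unk) != NULL) { (release); (unk) = NULL; } } while (0)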
In a first pass, we check whether libavcodec is present.
Then we try to compile a snippet and check for FFmpeg vs. Libav. (This
could probably also be done by somehow checking the pkgconfig version.
But pkg-config can't deal with that idiotic FFmpeg idea that a micro
version number >= 100 identifies FFmpeg vs. Libav.)
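The snippet boils down to the well-known micro-version idiom:
    #include <libavcodec/version.h>
    #if LIBAVCODEC_VERSION_MICRO >= 100
    // FFmpeg (micro version numbers start at 100)
    #else
    // Libav
    #endif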
After that we check the project-specific version numbers. This means it
can no longer happen that we accidentally allow older, unsupported
versions of FFmpeg, just because the Libav version numbers happen to
overlap this way.
Also drop the resampler checks. We hardcode which resampler to use with
each project. A user can no longer force use of libavresample with
FFmpeg.
This can be useful in other contexts.
Note that we end up setting AVCodecContext.width/height instead of
coded_width/coded_height now. AVCodecParameters can't set coded_width,
but this is probably more correct anyway.
The FFmpeg versions we support all have the APIs we were checking for.
Only Libav missed them. Simplify this by explicitly checking for FFmpeg
in the code, instead of trying to detect the presence of the API.
Since we set "skip_manual", we can actually get frames with this set.
Currently, only AV_PKT_FLAG_DISCARD will trigger this flag, and only
mov.c sets the latter flag, so this is related to FFmpeg's half-broken
mp4 edit list support.
Apparently you set the native sample rate when passing through AC3.
This fixes passthrough with 44100 Hz AC3.
Avoid opening a decoder for this and only open the parser. (Hopefully
DTS will also support this some time in the future or so - having to
open a decoder just to get the profile is dumb.)
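A rough sketch of the idea (abbreviated; the real flow in ad_spdif.c
differs):
    // Allocate a codec context but never call avcodec_open2(); feed
    // packets to the parser until it fills in the profile.
    AVCodecContext *avctx = avcodec_alloc_context3(NULL);
    AVCodecParserContext *parser = av_parser_init(codec_id);
    uint8_t *out; int out_size;
    av_parser_parse2(parser, avctx, &out, &out_size,
                     pkt->data, pkt->size, pkt->pts, pkt->dts, 0);
    if (avctx->profile != FF_PROFILE_UNKNOWN)
        ; // profile known without opening a decoder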
Same deal as with video. Including the EOF handling.
(It would be nice if this code were not duplicated, but right now we're
not even close to unifying the audio and video code paths.)
Looks quite like a bug. If you have a filter chain with only the
dynaudnorm filter, and send call av_buffersrc_add_frame(s, NULL), then
subsequent av_buffersink_get_frame() calls will return EAGAIN instead of
EOF.
This was apparently caused by a recent change in FFmpeg.
Some other circumstances (which I didn't fully analyze and which are due
to the playloop's absurd temporary-EOF behavior on seeks) then led the
decoder loop to send data again, but since libavfilter was stuck in the
EOF state now, it could never recover. It kept sending new input (due to
missing output), until the demuxer refused to return more audio packets.
Each time a filter error was printed.
Fortunately, it's pretty easy to work around. We just set the p->eof
flag as we send an EOF frame to libavfilter. The p->eof flag is used
only to recover from temporary EOF: it resets the filter if new data is
available again. We don't care much about av_buffersink_get_frame()
returning a broken EAGAIN state in this situation and essentially ignore
it, meaning if we get EAGAIN after sending EOF, we assume effectively
that EOF was fully reached.
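In pseudocode, the workaround amounts to this (simplified; the
buffersrc/buffersink context names are made up):
    // Sending EOF: remember that we did.
    if (!frame) {
        p->eof = true;
        av_buffersrc_add_frame(p->in_ctx, NULL);
    }
    // Reading output: treat a post-EOF EAGAIN as EOF.
    int ret = av_buffersink_get_frame(p->out_ctx, out_frame);
    if (ret == AVERROR(EAGAIN) && p->eof)
        ret = AVERROR_EOF;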
Remove ad_spdif from the normal codec list, and select it explicitly.
One goal was to decouple this from the normal codec selection, so
they're less entangled and the decoder selection code can be simplified
in the far future. This means spdif codec selection is now done
explicitly via select_spdif_codec(). We can also remove the weird
requirements on "dts" and "dts-hd" for the --audio-spdif option, and it
can just do the right thing.
Now both video and audio codecs consist of a single codec family each,
vd_lavc and ad_lavc.
Currently, if init_filter fails after lavf_ctx is allocated, uninit is called
which frees lavf_ctx, but doesn't clear the pointer in spdif_ctx. So, on the
next call of decode_packet, it thinks it is already initialized and uses it,
resulting in a crash on my system.
We log a large number of formats, but we rarely log the result of the
probing. Change this.
The logic in try_format_exclusive() changes slightly, but should be
equivalent. EXIT_ON_ERROR() checks for FAILED(), which should be
exclusive to SUCCEEDED().
Long planned. Leads to some sanity.
There still are some rather gross things. Especially g_groups is ugly,
and a hack that can hopefully be removed. (There is a plan for it, but
whether it's implemented depends on how much energy is left.)
Until now, this was only implemented for ao_alsa and AOs not using
push.c. ao_alsa.c relied on enabling funny underrun semantics for
avoiding resets on lower levels, while other AOs using push.c didn't do
anything.
Change this and at least make push.c copy silent data to the AO. This
still isn't perfect as keeping track of how much silence was played when
seems complex, so we don't do it. The consequence is that frame-stepping
will essentially randomize the A/V offset (it'll recover immediately
when unpausing, but still ugly). Also, in order to empty the currently
buffered audio on seeks etc., we still call ao_driver->reset and so on,
so the AO driver will still need to handle this specially.
The intent is to make behavior with ALSA less weird (for one we can
remove the code in ao_alsa.c that tries to trigger an initial
underflow). Also might help with #3754.
The "default" entry (which is and always was mpv/mplayer's default) does
not have a description set in the ALSA API. (While "sysdefault"
strangely has.)
Instead of an empty description, this should show something nice, so
reuse the ao.c code for naming default devices (see previous commit).
It's still a bit ugly that audio-device-list will have a default entry
for "Autoselect device" and "Default (alsa)", but then again we probably
want to allow the user to force ALSA (i.e. prevent fallbacks to other
AOs) just because ALSA is so flaky and makes this a legitimate feature.
This will make it easier for AOs to add explicit default device entries.
(See next commit.)
Hopefully this change doesn't accidentally lead to bogus "Default"
entries appearing, but then it can only happen if the device ID is
empty, which would mean the underlying audio API returned bogus entries.
Use the device name as fallback. This is ugly, but still better than
skipping the description entirely. This can be an issue on ALSA, where
the API can return entries without proper description.
This happens when ALSA gives us more channels than we asked for, for
whatever reasons. It looks like this wasn't handled correctly. The mpv
and ALSA channel counts could mismatch, which would lead to UB.
I couldn't actually trigger this case, though. I'm fairly sure that
drivers or plugins exist that do it anyway. (Unofficial ALSA motto: if
it can be broken, then why not break it?)
If the input is already mono or stereo, or if channel map selection
results in mono or stereo, then disable further use of the chmap ALSA
API (or rather, stop trusting its results). Then we behave like a simple
application that only wants to output mono or stereo.
See #3045 and #2905. I couldn't actually test these cases, but this
commit is supposed to fix them.
set_chmap() skipped _setting_ the ALSA chmap if chmap use was requested
to be disabled by setting dev_chmap.num=0 by the caller, but it still
queried the current ALSA channel map. We don't trust it that much, so
disable that as well.
But we still query and log it, because that could be helpful for
debugging. Otherwise we could skip the entire set_chmap() call in these
cases.
Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time. Until
now, decoders always returned the pts via the pkt_pts field, while the
pts field was used for encoding and libavfilter only. Recently, pkt_pts
was deprecated, and pts was switched to always carry the pts.
This means we have to be careful not to accidentally use the wrong
field, depending on the libavcodec version. We have to explicitly check
the version numbers. Of course the version numbers are completely
idiotic, because idiotically the pkg-config and library names are the
same for FFmpeg and Libav, so we have to deal with this explicitly as
well.
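The resulting field selection looks roughly like this (the version
cutoff shown is illustrative, not the exact number mpv checks):
    #if LIBAVCODEC_VERSION_MICRO >= 100 && \
        LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 61, 100)
        int64_t pts = frame->pts;     // new FFmpeg: decoders return pts here
    #else
        int64_t pts = frame->pkt_pts; // older FFmpeg and Libav
    #endif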
If the "default" device refuses to be opened as spdif device (i.e. it
errors due to the AES0 etc. parameters), we were falling back to the
iec958 device. This is needed on some systems for smooth operation with
PCM vs. spdif.
Now change it to try "hdmi" before "iec958", which supposedly helps in
other situations.
Better suggestions welcome. Apparently kodi does this too, although I
didn't check directly.
The player tries to avoid splitting frames with spdif (sample alignment
stuff). This can in certain corner cases with certain drivers lead to
the situation that ao_get_space() returns a number higher than 0 and
lower than the audio frame size. The playloop will round this down to 0
bytes and do nothing, leading to a missed wakeup. This can lead to
underruns or playback completely getting stuck.
It can be reproduced by playing AC3 passthrough with no video and:
--ao=null --ao-null-buffer=0.256 --ao-null-outburst=6100
This commit attempts to fix it by allowing the playloop to write some
additional data (to get a complete frame), that will be buffered within
the AO ringbuffer even if the audio device doesn't want it.
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not declared
in the source file, even when INITGUID is set.
This was in the parser code all along. As far as I can tell, *cp was
intended. There is no need to check cp for NULL (nor does it make any
sense to do so every time around the loop) for AF_CONTROL_COMMAND.
However, s->matrixstr can be NULL, so checking for that separately is in
order.
For stereo and typical L/R-first channel arrangements, this avoids
undesirable phasing artifacts, especially obvious when speed is changed
and then reset. Without this, there is a very audible change in the
stereo field even when librubberband is no longer actually making any
speed changes.
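The relevant option in the rubberband C API (a sketch; presumably this
is what the change enables):
    // Process all channels together so per-channel phase differences
    // don't smear the stereo image.
    RubberBandState s = rubberband_new(rate, channels,
        RubberBandOptionProcessRealTime | RubberBandOptionChannelsTogether,
        1.0, 1.0);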
There were multiple values under M_OPT_EXIT (M_OPT_EXIT-n for n>=0).
Somehow M_OPT_EXIT-n either meant error code n (with n==0 no error?), or
the number of option values consumed (0 or 1). The latter is MPlayer
legacy, which left it to the option type parsers to determine whether an
option took a value or not. All of this was changed in mpv, by requiring
the user to use explicit syntax ("--opt=val" instead of "-opt val").
In any case, the n value wasn't even used (anymore), so rip this all
out. Now M_OPT_EXIT-1 doesn't mean anything, and could be used by a new
error code.
Currently, calling mp_input_wakeup() will wake up the core thread (also
called the playloop). This seems odd, but currently the core indeed
calls mp_input_wait() when it has nothing more to do. It's done this way
because MPlayer used input_ctx as central "mainloop".
This is probably going to change. Remove direct calls to this function,
and replace it with mp_wakeup_core() calls. ao and vo are changed to use
opaque callbacks and not use input_ctx for this purpose. Other code
already uses opaque callbacks, or has legitimate reasons to use
input_ctx directly (such as sending actual user input).
The first one is printed even if the user disabled video (or there's no
video), so just remove it. The second one uses deprecated sub-option
syntax, so remove that as well.
And introduce a global option which does this. Or more precisely, this
deprecates the global wasapi and coreaudio options, and adds a new one
that merges their functionality. (Due to the way the sub-option
deprecation mechanism works, this is simpler.)
I decided that it's too much work to convert all the VO/AOs to the new
option system manually at once. So here's a shitty hack instead, which
achieves almost the same thing. (The only user-visible difference is
that e.g. --vo=name:help will list the sub-options normally, instead of
showing them as deprecation placeholders. Also, the sub-option parser
will verify each option normally, instead of deferring to the global
option parser.)
Another advantage is that once we drop the deprecated options,
converting the remaining things will be easier, because we obviously
don't need to add the compatibility hacks.
Using this mechanism is separate in the next commit to keep the diff
noise down.
Instead of requiring each VO or AO to manually add members to MPOpts and
the global option table, make it possible to register them automatically
via vo_driver/ao_driver.global_opts members. This avoids modifying
options.c/options.h every time, including having to duplicate the exact
ifdeffery used to enable a driver.
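Sketch of what this looks like on the driver side (driver and option
table names made up):
    const struct ao_driver audio_out_foo = {
        .name = "foo",
        // ...
        .global_opts = &ao_foo_conf, // picked up and registered automatically
    };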
Normally I'd prefer a bunch of smaller functions with fewer parameters
over a single function with a lot of parameters. But future changes will
require messing with the parameters in a slightly more complex way, so a
combined function will be needed anyway. The now-unused "global"
parameter is required for later as well.
Positional parameters cause problems because they can be ambiguous with
flag options. If a flag option is removed or turned into a non-flag
option, it'll usually be interpreted as value for the first sub-option
(as positional parameter), resulting in very confusing error messages.
This changes it into a simple "option not found" error.
I don't expect that anyone really used positional parameters with --vo
or --ao. Although the docs for --ao=pulse seem to encourage positional
parameters for the host/sink options, which means it could possibly
annoy some PulseAudio users.
--vf and --af are still mostly used with positional parameters, so this
must be a configurable option in the option parser.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
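Concretely, something like this (the guard is the usual FFmpeg-only
check; mpv's exact condition may differ):
    #if LIBAVCODEC_VERSION_MICRO >= 100   // pkt_timebase exists only in FFmpeg
        avctx->pkt_timebase = timebase;   // unit of packet/frame timestamps
    #endif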
There are situations where the resampler is destroyed and recreated
during playback. If recreating the resampler unexpectedly fails, the
filter function is supposed to return an error. This wasn't done
correctly, because get_out_samples() accessed the resampler before the
check. Move the check up to fix this.
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
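For illustration, the conversion that makes coarseness matter
(hypothetical helper):
    #include <math.h>
    // A coarse tb (e.g. 1/25) rounds away small adjustments such as
    // start time rebasing; a finer tb preserves them.
    static int64_t seconds_to_pts(double d, AVRational tb)
    {
        return llrint(d / av_q2d(tb));
    }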
The code actually kept going out of EOF mode into resync mode back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.
The touched code is for seek resets and such - we simply want to reset
the entire resample state. But I noticed after a seek a tiny bit of
audio is missing (mpv's audio sync code inserted silence to compensate).
It turns out swr_drop_output() either does not reset some internal state
as we expect, or it's designed to drop not only buffered samples, but
also future samples.
On the other hand, libavresample's avresample_read() does not have this
problem. (It is also pretty explicit in what it does - return/skip
buffered data, nothing else.)
Is the libswresample behavior a bug? Or a feature? Does nobody even
know? Who cares - use the hammer to unfuck the situation. Destroy and
deallocate the libswresample context and recreate it. On every seek.
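The hammer, in sketch form (context field name made up; option
re-application omitted):
    // Instead of swr_drop_output(), tear the context down and rebuild it.
    swr_free(&p->avrctx);
    p->avrctx = swr_alloc();
    // ... re-apply all options (rates, layouts, formats), then:
    swr_init(p->avrctx);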
The demuxer layer usually doesn't log per-stream information, and even
the replaygain information was logged only if it came from tags.
So log it in af_volume instead.
With the previous commit, ao_alsa.c now has 3 possible ways to pause
playback. Actually all 3 of them need get_delay() to fake its return
value, so don't duplicate that code.
Also much of the code looks a bit questionable when considering
inconsistent pause/resume calls from outside, so ignore redundant calls.
push.c does not handle this automatically, and AOs using push.c have to
handle it themselves. Also, ALSA is low-level enough that it needs
explicit support in user code. At least I haven't found any option that
does this.
We still can get away relatively cheaply by abusing underflow-handling
for this. ao_alsa.c already configures ALSA to handle underflows by
playing silence. So we purposely induce an underflow when opening the
device, as well as when pausing or resetting the device.
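The ALSA configuration for playing silence on underrun looks roughly
like this (simplified):
    // Tell ALSA to fill the ring buffer with silence whenever it underruns.
    snd_pcm_uframes_t boundary;
    snd_pcm_sw_params_get_boundary(swparams, &boundary);
    snd_pcm_sw_params_set_silence_threshold(pcm, swparams, 0);
    snd_pcm_sw_params_set_silence_size(pcm, swparams, boundary);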
This introduces minor misbehavior: it doesn't account for the additional
delay the initial silence adds, unless the device has fully played the
fragment of silence when the player starts sending data to it. But
nobody cares.
Exactly the same situation as with ao_alsa in commit 0b144eac (except
that we can detect the situation better under wasapi).
Essentially, wasapi will allow us to output any sample format, and not
just the one configured by the user in the audio system settings.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane, in that it's so low level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
Pointless anyway. With superficial checking I couldn't find any decoder
which actually outputs this, and AO chmap negotiation would properly
ignore them anyway in most cases.
We send a refcounted frame to the encoder, but then disrespect
refcounting rules and write to the frame data without making sure the
buffer is really writeable.
In theory this can lead to reallocation on every frame if the encoder
really keeps a reference. If we really cared, we could fix this by
providing a buffer pool. But then again, we don't care.
This is logical: the function makes sense only in situations where you
are going to write to the audio data. To make it worse,
av_buffer_realloc() also handles this situation, but only if the buffer
size changes (simply because it can't realloc memory in use), so we have
to explicitly force reallocation by unreffing the buffers first.
The FFmpeg API is incredibly weird and inconsistent about this. This is
also a FFmpeg-only issue and nothing like this is in Libav - which
doesn't really show FFmpeg in a very positive light.
(To make it even worse: this is a full-blown Libav API incompatibility,
even though this crap was added for Libav ABI-compatibility. It's
absurd.)
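Presumably the function in question is av_frame_make_writable(); the
correct pattern is roughly:
    // Respect refcounting: ensure the buffer is exclusively ours before
    // writing. This copies the data if someone else holds a reference.
    if (av_frame_make_writable(frame) < 0)
        return -1; // error
    // ... now frame->data[] may be modified safely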
Quoting the FFmpeg header for the AVFrame.channels field:
    /**
     * number of audio channels, only used for audio.
     * Code outside libavutil should access this field using:
     * av_frame_get_channels(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int channels;
It says "should" not must, and it doesn't even mention
av_frame_set_channels(). It's also in the section for public fields (not
below a marker that indicates private fields in a public struct, like
it's done e.g. in AVCodecContext).
But not using the accessor will cause silent failures on ABI changes.
The failure that happened due to this code didn't even make it apparent
what was wrong. So just use the idiotic accessor.
Also harmonize the FFmpeg-cursing in the code. (It's fully justified.)
Fixes #3295.
Note that mpv will still check the exact library version numbers, and
reject mismatches - to protect itself from such issues in the future.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to cleanup this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other volumes, formally deprecate both the old property and the
new option.
Since mixer->ao is always NULL now (it was really just forgotten to be
removed), the uninit call never actually cleared the af field, leaving
a dangling pointer that could be accessed by volume control.
This code was supposed to adjust existing conversion filters (to make
them output a different format). But the code was just broken,
apparently a refactoring accident. It accessed af instead of af->prev.
The bug tended to add new conversion filters, even if an existing one
could have been used. (Can be tested by inserting a dummy lavrresample
filter followed by a format filter which forces conversion.)
In addition, it's probably better to return the actual error code if
reinitializing the filter fails. It would then respect an AF_FALSE
return value, which means format negotiation failed, instead of a
generic error.
af_volume has a volumedb sub-option, which allows the user to set an
explicit volume. Until recently, the player read back this value and
used it as initial softvol volume. But now it just overwrites it.
Instead of overwriting it, multiply the different gain values. Above
all, this will do the right thing if only softvol is used, or if the
user only adds the af_volume filter manually.
Normally, you want downmixing to happen first thing in the filter chain.
This is reflected in codec downmixing, which feeds the filter chain
downmixed audio in the first place. Doing this has the advantage of
needing less data to process. But the main motivation is that if there
is a drc filter in the chain, you want it to process the downmixed
audio.
Add an idiotic heuristic to achieve this. It tries to detect whether the
audio was indeed automatically downmixed (or upmixed). To detect what
the output format is going to be, it builds the filter chain normally,
and then retries with the heuristic applied (and for extra paranoia,
retries without the heuristic again if it fails to successfully rebuild
the filter chain for unknown reasons). This is simple and will work in
almost all cases.
Doing it in a more complete way is rather hard, because filters are so
generic. For example, we know absolutely nothing about the behavior of
af_lavfi, which creates an opaque filter graph with libavfilter. We
don't know why a filter would e.g. change the channel layout on its
output. (Our heuristic bails out in this case.) We're also slave to the
lowest common denominator of how our format negotiation works, and how
libavfilter's works.
In theory, we could make this mechanism explicit by introducing a
special dummy filter. The filter chain would then try to convert between
input and output formats at the dummy filter, which would give the user
more control over how downmix happens. On the other hand, the user could
just insert explicit conversion filters instead, so this would probably
have questionable value.
The algorithm and functionality is the same, but the code becomes much
simpler and easier to follow.
The assumption that there is only 1 conversion filter (lavrresample)
helps with the simplification, but the main change is to use the same
code for format/channels/rate. Get rid of the different AF_CONTROL_SET_*
controls, and change the af->data parameters directly. (af->data is
badly named, but essentially is a placeholder for the output format.)
Also, instead of trying to use the af_reinit() loop to init inserted
conversion filters or filters with changed output formats, do it inline,
and move the common code to a filter_reinit() function. This gets rid of
the awful retry variable.
In general, this should not change any runtime behavior.
This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes#3322.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
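The check presumably looks something like this (a sketch; not the exact
code):
    // Invalid devices error out when queried for the "is alive" property.
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyDeviceIsAlive,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    UInt32 alive = 1, size = sizeof(alive);
    OSStatus err = AudioObjectGetPropertyData(device, &addr, 0, NULL,
                                              &size, &alive);
    if (err != noErr)
        ; // treat the device as invalid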
This is not the correct fix, because we try to double-guess AudioUnit's
behavior by accessing a lower level API. Suggestions welcome.
The libavcodec wmapro decoder will skip some bytes at the start of the
first packet and return each time. It will not return any audio data in
this state.
Our own code as well as libavcodec's new API handling
(avcodec_send_packet() etc.) discard the PTS on the first return, which
means the PTS is never known for the first packet. This results in a
"Failed audio resync." message.
Fix it by remembering the PTS in next_pts. This field is used only if the
decoder outputs no PTS, and is updated after each frame - and thus
should be safe to set.
(Possibly this should be fixed in libavcodec new API handling by not
setting the PTS to NOPTS as long as no real data has been output. It
could even interpolate the PTS if the timebase is known.)
Fixes the failure message seen in #3297.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a major
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong, and used the start value.
Instead of hardcoding what the libavcodec ac3 encoder expects, configure
it based on the AVCodec fields.
Unfortunately, it doesn't export the list of sample rates, so that is
done manually. This commit actually always fixes the rate to 48 kHz. I
don't even know whether the other rates worked. (Possibly did, but
they'd still change the spdif parameters, and would work differently
from ad_spdif.c.)
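Sketch of deriving the configuration from the AVCodec fields (as noted,
the sample rate list isn't exported, so the rate stays hardcoded):
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AC3);
    // Walk the encoder's declared sample formats (AV_SAMPLE_FMT_NONE
    // terminated); codec->channel_layouts works the same way.
    for (const enum AVSampleFormat *f = codec->sample_fmts;
         f && *f != AV_SAMPLE_FMT_NONE; f++)
        ; // pick a format mpv can produce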
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
Note that vd_lavc.c already does something similar, but originally for
other reasons.
Fixes#3106.
This helps with shitty APIs and even shittier drivers (I'm looking at
you, ALSA). Sometimes they won't send proper wakeups. This can be fine
during playback, when for example playing video, because mpv still will
wakeup the AO outside of its own wakeup mechanisms when sending new data
to it. But when draining, it entirely relies on the driver's wakeup
mechanism. So when the driver wakeup mechanism didn't work, it could
hard freeze while waiting for the audio thread to play the rest of the
data.
Avoid this by waiting for an upper bound. We set this upper bound at the
total mpv audio buffer size plus 1 second. We don't use the get_delay
value, because the audio API could return crap for it, and we're being
paranoid here. I couldn't confirm whether this works correctly, because
my driver issue fixed itself.
(In the case that happened to me, the driver somehow stopped getting
interrupts. aplay froze instead of playing audio, and playing audio-only
files resulted in a chop party. Video worked, for reasons mentioned
above, but draining froze hard. The driver problem was solved when
closing all audio output streams in the system. Might have been a dmix
related problem too.)
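A hypothetical sketch of the bounded wait (names made up; mpv's actual
threading wrappers differ):
    // Never block longer than the total buffer duration plus one second,
    // even if the driver never delivers a wakeup.
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += (time_t)(total_buffer_seconds(ao) + 1.0);
    pthread_cond_timedwait(&p->wakeup, &p->lock, &ts);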
When we're draining, don't wakeup the core on every buffer fill, since
unlike during normal playback, we won't actually get more data. The
wakeup here conceptually works like wakeups with condition variables, so
redundant wakeups do not hurt, so this is just a minor change and
nothing of consequence.
(Final EOF also requires waking up the core, but there is separate code
to send this notification.)
Also dump the p->still_playing field in trace logging.