Normally I'd prefer a bunch of smaller functions with fewer parameters
over a single function with a lot of parameters. But future changes will
require messing with the parameters in a slightly more complex way, so a
combined function will be needed anyway. The now-unused "global"
parameter is required for later as well.
Positional parameters cause problems because they can be ambiguous with
flag options. If a flag option is removed or turned into a non-flag
option, it'll usually be interpreted as the value for the first sub-option
(as positional parameter), resulting in very confusing error messages.
This changes it into a simple "option not found" error.
I don't expect that anyone really used positional parameters with --vo
or --ao. Although the docs for --ao=pulse seem to encourage positional
parameters for the host/sink options, which means it could possibly
annoy some PulseAudio users.
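To illustrate with the pulse host sub-option (exact syntax per the docs may
differ slightly):

    --ao=pulse:somehost           positional value - now a plain error
    --ao=pulse:host=somehost      named form - keeps working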
--vf and --af are still mostly used with positional parameters, so this
must be a configurable option in the option parser.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
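A minimal sketch of what this looks like (using the usual version-micro check
to tell FFmpeg from Libav; the helper name is made up):

    #include <libavcodec/avcodec.h>

    static void set_decode_timebase(AVCodecContext *avctx, AVRational tb)
    {
    #if LIBAVCODEC_VERSION_MICRO >= 100  // FFmpeg sets micro >= 100
        avctx->pkt_timebase = tb;        // unit of packet/frame timestamps
    #endif
        // time_base is left alone; it's meant for encoding.
    }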
There are situations where the resampler is destroyed and recreated
during playback. If recreating the resampler unexpectedly fails, the
filter function is supposed to return an error. This wasn't done
correctly, because get_out_samples() accessed the resampler before the
check. Move the check up to fix this.
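The shape of the fix, as a sketch (p->avrctx and the argument list are
stand-ins for the internals):

    if (!p->avrctx)          // recreating the resampler failed earlier
        return -1;           // propagate the error, as intended
    int out = get_out_samples(p, in->samples);  // now safe to call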
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
The code actually kept going out of EOF mode into resync mode back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.
The touched code is for seek resets and such - we simply want to reset
the entire resample state. But I noticed after a seek a tiny bit of
audio is missing (mpv's audio sync code inserted silence to compensate).
It turns out swr_drop_output() either does not reset some internal state
as we expect, or it's designed to drop not only buffered samples, but
also future samples.
On the other hand, libavresample's avresample_read() does not have this
problem. (It is also pretty explicit in what it does - return/skip
buffered data, nothing else.)
Is the libswresample behavior a bug? Or a feature? Does nobody even
know? Who cares - use the hammer to unfuck the situation. Destroy and
deallocate the libswresample context and recreate it. On every seek.
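In terms of the public libswresample API, the hammer amounts to this sketch
(configure_swr() is a made-up stand-in for reapplying all configured options):

    swr_free(&p->swr);       // destroy and deallocate the old context
    p->swr = swr_alloc();    // recreate from scratch
    if (!p->swr || configure_swr(p->swr) < 0 || swr_init(p->swr) < 0)
        return -1;           // recreation failed; caller handles it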
The demuxer layer usually doesn't log per-stream information, and even
the replaygain information was logged only if it came from tags.
So log it in af_volume instead.
With the previous commit, ao_alsa.c now has 3 possible ways to pause
playback. Actually all 3 of them need get_delay() to fake its return
value, so don't duplicate that code.
Also much of the code looks a bit questionable when considering
inconsistent pause/resume calls from outside, so ignore redundant calls.
push.c does not handle this automatically, and AOs using push.c have to
handle it themselves. Also, ALSA is low-level enough that it needs
explicit support in user code. At least I haven't found any option that
does this.
We still can get away relatively cheaply by abusing underflow-handling
for this. ao_alsa.c already configures ALSA to handle underflows by
playing silence. So we purposely induce an underflow when opening the
device, as well as when pausing or resetting the device.
This introduces minor misbehavior: it doesn't account for the additional
delay the initial silence adds, unless the device has fully played the
fragment of silence when the player starts sending data to it. But
nobody cares.
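The silence-on-underflow configuration is the standard ALSA software-params
pattern, roughly (error handling omitted):

    snd_pcm_uframes_t boundary;
    snd_pcm_sw_params_get_boundary(swparams, &boundary);
    // Start playing silence as soon as the buffer runs dry, and keep
    // doing so across the entire buffer:
    snd_pcm_sw_params_set_silence_threshold(pcm, swparams, 0);
    snd_pcm_sw_params_set_silence_size(pcm, swparams, boundary);
    snd_pcm_sw_params(pcm, swparams);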
Exactly the same situation as with ao_alsa in commit 0b144eac (except
that we can detect the situation better under wasapi).
Essentially, wasapi will allow us to output any sample format, and not
just the one configured by the user in the audio system settings.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane, in that it's so low-level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
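Hypothetical example (layout names as understood by mpv's channel map
parser):

    --audio-channels=7.1,5.1,stereo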
Pointless anyway. With superficial checking I couldn't find any decoder
which actually outputs this, and AO chmap negotiation would properly
ignore them anyway in most cases.
We send a refcounted frame to the encoder, but then disrespect
refcounting rules and write to the frame data without making sure the
buffer is really writeable.
In theory this can lead to reallocation on every frame if the encoder
really keeps a reference. If we really cared, we could fix this by
providing a buffer pool. But then again, we don't care.
This is logical: the function makes sense only in situations where you
are going to write to the audio data. To make it worse,
av_buffer_realloc() also handles this situation, but only if the buffer
size changes (simply because it can't realloc memory in use), so we have
to explicitly force reallocation by unreffing the buffers first.
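For reference, the refcount-correct pattern with the plain FFmpeg public API
is simply:

    if (av_frame_make_writable(frame) < 0)  // copies buffers only if shared
        return -1;
    // Only now is writing to frame->data[] legal under refcounting rules.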
The FFmpeg API is incredibly weird and inconsistent about this. This is
also a FFmpeg-only issue and nothing like this is in Libav - which
doesn't really show FFmpeg in a very positive light.
(To make it even worse: this is a full-blown Libav API incompatibility,
even though this crap was added for Libav ABI-compatibility. It's
absurd.)
Quoting the FFmpeg header for the AVFrame.channels field:
    /**
     * number of audio channels, only used for audio.
     * Code outside libavutil should access this field using:
     * av_frame_get_channels(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int channels;
It says "should", not "must", and it doesn't even mention
av_frame_set_channels(). It's also in the section for public fields (not
below a marker that indicates private fields in a public struct, like
it's done e.g. in AVCodecContext).
But not using the accessor will cause silent failures on ABI changes.
The failure that happened due to this code didn't even make it apparent
what was wrong. So just use the idiotic accessor.
Also harmonize the FFmpeg-cursing in the code. (It's fully justified.)
Fixes #3295.
Note that mpv will still check the exact library version numbers, and
reject mismatches - to protect itself from such issues in the future.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to cleanup this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other volume controls, formally deprecate both the old property and the
new option.
Since mixer->ao is always NULL now (it was really just forgotten to be
removed), the uninit call never actually cleared the af field, leaving
a dangling pointer that could be accessed by volume control.
This code was supposed to adjust existing conversion filters (to make
them output a different format). But the code was just broken,
apparently a refactoring accident. It accessed af instead of af->prev.
The bug tended to add new conversion filters, even if an existing one
could have been used. (Can be tested by inserting a dummy lavrresample
filter followed by a format filter which forces conversion.)
In addition, it's probably better to return the actual error code if
reinitializing the filter fails. It would then respect an AF_FALSE
return value, which means format negotiation failed, instead of a
generic error.
af_volume has a volumedb sub-option, which allows the user to set an
explicit volume. Until recently, the player read back this value and
used it as initial softvol volume. But now it just overwrites it.
Instead of overwriting it, multiply the different gain values. Above
all, this will do the right thing if only softvol is used, or if the
user only adds the af_volume filter manually.
Normally, you want downmixing to happen first thing in the filter chain.
This is reflected in codec downmixing, which feeds the filter chain
downmixed audio in the first place. Doing this has the advantage of
needing less data to process. But the main motivation is that if there
is a drc filter in the chain, you want it to process the downmixed
audio.
Add an idiotic heuristic to achieve this. It tries to detect whether the
audio was indeed automatically downmixed (or upmixed). To detect what
the output format is going to be, it builds the filter chain normally,
and then retries with the heuristic applied (and for extra paranoia,
retries without the heuristic again if it fails to successfully rebuild
the filter chain for unknown reasons). This is simple and will work in
almost all cases.
Doing it in a more complete way is rather hard, because filters are so
generic. For example, we know absolutely nothing about the behavior of
af_lavfi, which creates an opaque filter graph with libavfilter. We
don't know why a filter would e.g. change the channel layout on its
output. (Our heuristic bails out in this case.) We're also slave to the
lowest common denominator of how our format negotiation works, and how
libavfilter's works.
In theory, we could make this mechanism explicit by introducing a
special dummy filter. The filter chain would then try to convert between
input and output formats at the dummy filter, which would give the user
more control over how downmix happens. On the other hand, the user could
just insert explicit conversion filters instead, so this would probably
have questionable value.
The algorithm and functionality is the same, but the code becomes much
simpler and easier to follow.
The assumption that there is only 1 conversion filter (lavrresample)
helps with the simplification, but the main change is to use the same
code for format/channels/rate. Get rid of the different AF_CONTROL_SET_*
controls, and change the af->data parameters directly. (af->data is
badly named, but essentially is a placeholder for the output format.)
Also, instead of trying to use the af_reinit() loop to init inserted
conversion filters or filters with changed output formats, do it inline,
and move the common code to a filter_reinit() function. This gets rid of
the awful retry variable.
In general, this should not change any runtime behavior.
This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
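Roughly, the probe looks like this (standard CoreAudio property query; a
sketch, not the literal code):

    static bool device_alive(AudioDeviceID dev)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioDevicePropertyDeviceIsAlive,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        UInt32 alive = 1;
        UInt32 size = sizeof(alive);
        // Invalid devices fail the query itself with a bad-object status.
        OSStatus err = AudioObjectGetPropertyData(dev, &addr, 0, NULL,
                                                  &size, &alive);
        return err == noErr && alive;
    }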
This is not the correct fix, because we try to double-guess AudioUnit's
behavior by accessing a lower level API. Suggestions welcome.
The libavcodec wmapro decoder will skip some bytes at the start of the
first packet and return each time. It will not return any audio data in
this state.
Our own code as well as libavcodec's new API handling
(avcodec_send_packet() etc.) discard the PTS on the first return, which
means the PTS is never known for the first packet. This results in a
"Failed audio resync." message.
Fix it by remembering the PTS in next_pts. This field is used only if the
decoder outputs no PTS, and is updated after each frame - and thus
should be safe to set.
(Possibly this should be fixed in libavcodec new API handling by not
setting the PTS to NOPTS as long as no real data has been output. It
could even interpolate the PTS if the timebase is known.)
Fixes the failure message seen in #3297.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a major
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong, and used the start value.
Instead of hardcoding what the libavcodec ac3 encoder expects, configure
it based on the AVCodec fields.
Unfortunately, it doesn't export the list of sample rates, so that is
done manually. This commit actually fixes the rate to always be 48 kHz. I
don't even know whether the other rates worked. (Possibly did, but
they'd still change the spdif parameters, and would work differently
from ad_spdif.c.)
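A sketch of the AVCodec-based configuration (with the manual sample rate
fallback mentioned above):

    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AC3);
    if (codec && codec->sample_fmts)
        avctx->sample_fmt = codec->sample_fmts[0];  // list ends with -1
    // The sample rate list isn't exported in a usable way, so hardcode:
    avctx->sample_rate = 48000;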
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
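So the workaround stays on our side, and amounts to dropping such packets
before they reach the decoder (sketch):

    // A 0-sized packet would be taken as a drain packet and lock the
    // decoder into EOF, so never forward one.
    if (pkt && pkt->size == 0)
        return 0;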
Note that vd_lavc.c already does something similar, but originally for
other reasons.
Fixes #3106.
This helps with shitty APIs and even shittier drivers (I'm looking at
you, ALSA). Sometimes they won't send proper wakeups. This can be fine
during playback, when for example playing video, because mpv still will
wakeup the AO outside of its own wakeup mechanisms when sending new data
to it. But when draining, it entirely relies on the driver's wakeup
mechanism. So when the driver wakeup mechanism didn't work, it could
hard freeze while waiting for the audio thread to play the rest of the
data.
Avoid this by waiting for an upper bound. We set this upper bound at the
total mpv audio buffer size plus 1 second. We don't use the get_delay
value, because the audio API could return crap for it, and we're being
paranoid here. I couldn't confirm whether this works correctly, because
my driver issue fixed itself.
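Sketched with pthreads (abs_deadline_from_now() is a made-up helper that
turns a relative timeout into an absolute timespec):

    // Upper bound: total mpv audio buffer size plus 1 second.
    double timeout = total_buffer_seconds + 1.0;
    struct timespec deadline = abs_deadline_from_now(timeout);
    // Stop trusting the driver wakeup once the deadline passes.
    pthread_cond_timedwait(&p->wakeup, &p->lock, &deadline);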
(In the case that happened to me, the driver somehow stopped getting
interrupts. aplay froze instead of playing audio, and playing audio-only
files resulted in a chop party. Video worked, for reasons mentioned
above, but draining froze hard. The driver problem was solved when
closing all audio output streams in the system. Might have been a dmix
related problem too.)
When we're draining, don't wakeup the core on every buffer fill, since
unlike during normal playback, we won't actually get more data. The
wakeup here conceptually works like wakeups with condition variables, so
redundant wakeups do not hurt, so this is just a minor change and
nothing of consequence.
(Final EOF also requires waking up the core, but there is separate code
to send this notification.)
Also dump the p->still_playing field in trace logging.
For clang, it's enough to just put (void) around usages we are
intentionally ignoring the result of.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
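For illustration:

    (void)fwrite(buf, 1, n, f);  // enough to satisfy clang
    // GCC ignores the cast for warn_unused_result, so the build instead
    // passes something like -Wno-unused-result globally.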
Since the main thread is shared by other things in the player, using STA (single
threaded apartment) may have caused problems. Instead initialize in MTA
(multithreaded apartment).
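The initialization in question (standard COM call):

    #include <objbase.h>

    // Join the multithreaded apartment instead of a single-threaded one:
    HRESULT hr = CoInitializeEx(NULL, COINIT_MULTITHREADED);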
This reportedly makes it work on ODROID-C2. The idea for this hack is
taken from kodi; they unconditionally set some or all of those flags.
I don't trust ALSA enough to hope that setting these flags couldn't
break something else, so we try without them first.
It's not clear whether this is a driver bug or a bug in the ALSA libs.
There is no ALSA bug tracker (the ALSA website has had a dead link to
a deleted bug tracker for years). There's not much we can do other than
piling up ridiculous hacks. At least I think that at this point invalid
API usage by mpv can be excluded as a cause.
ALSA might be the worst audio API ever.
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
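Concretely, the include-order pattern is:

    // initguid.h must come first, so the GUIDs referenced by the headers
    // below get defined here with __declspec(selectany):
    #include <initguid.h>
    #include <mmdeviceapi.h>   // example of a GUID-using header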
Fixes #3097
Setting this here is a race condition. It's called from a CoreAudio
callbacks, and there are no locks. It's a string, so this can be
potentially severe.
It's hard to fix and only CoreAudio supported it, so remove it.
This causes the "audio-out-detected-device" property to return nothing
on all platforms.
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
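The replacement pattern, roughly (plain FFmpeg public API):

    // Old code read the deprecated stream->codec context directly.
    // New code copies the parameters into its own AVCodecContext:
    AVCodecContext *avctx = avcodec_alloc_context3(NULL);
    if (!avctx || avcodec_parameters_to_context(avctx, stream->codecpar) < 0)
        return -1;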
This is particularly useful for opus which allows only a fairly restrictive set
of samplerates. If the codec doesn't provide a list of samplerates, just
continue to try the requested one and hope for the best.
fixes #2957
This function chooses the best match to a given samplerate from a provided
list. This can be used, for example, by the ao to decide what samplerate to use
for output.
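A plausible shape for it (a sketch, not necessarily the exact
implementation): prefer the smallest rate at or above the requested one, and
fall back to the largest rate otherwise.

    static int select_samplerate(int requested, const int *rates, int n)
    {
        int best = 0;
        for (int i = 0; i < n; i++) {
            int r = rates[i];
            if (r >= requested && (best < requested || r < best))
                best = r;            // smallest rate >= requested
            else if (best < requested && r > best)
                best = r;            // otherwise track the largest rate
        }
        return best;                 // 0 if the list was empty
    }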
This eliminates some intermittent pops heard in a HRT MicroStreamer DAC
uncorrelated with user interaction. As a bonus, this resolves #1773, which I can
no longer reproduce as of this commit. Leave the 50ms buffer for shared mode
since that seems to be working quite well.
This is also the way exclusive mode is done in the MSDN example code:
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370844%28v=vs.85%29.aspx
This was originally increased in c545c40 to mitigate glitches that subsequent
refactorings have eliminated.
A COM message loop is apparently totally inappropriate for a low latency
thread. It leads to audio glitches because the thread doesn't wake up fast
enough when it should. It also causes mysterious correlations between the vo
and ao thread (i.e., toggling fullscreen delays audio feed events). Instead use
an mp_dispatch_queue to set/get volume/mute/session display name from the audio
thread. This has the added benefit of obviating the need to marshal the
associated interfaces from the audio thread.
Don't wait for WASAPI to send another feed event if we detect an underfull
buffer. It seems that WASAPI doesn't always send extra feed events if
something causes rendering to fall behind. This causes every subsequent playback
buffer to under-run until playback is reset. The fix is simply to do a one-shot
double feed when this happens, which allows rendering to catch up with playback.
This was observed to happen when using MsgWaitForMultipleObjects to wait for the
feed event and toggling fullscreen with vo=opengl:backend=win. This commit
improves the behaviour in that specific case and more generally makes exclusive
mode significantly more robust.
This commit also moves the logic to avoid *over*filling the exclusive mode
buffer into thread_feed right next to the above described underfill logic.
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
Fixes correctness_trimming_nobeeps.opus. One nasty thing is that this
mechanism interferes with the container-signalled mechanism with
AV_FRAME_DATA_SKIP_SAMPLES. So apply it only if that is apparently not
present. It's a mess, and it's still broken in FFmpeg CLI, so I'm sure
this will get fucked up later again.
I'm not quite sure what the FFmpeg AV_FRAME_DATA_SKIP_SAMPLES API
demands here. The code so far assumed that skipping can be more than a
frame, but not trimming. Extend it to trimming too.
This is actually already done by dec_audio.c. But if
AV_FRAME_DATA_SKIP_SAMPLES is applied, this happens too late here. The
problem is that this will slice off samples, and make it impossible for
later code to reconstruct the timestamp properly.
Missing timestamps can still happen with some demuxers, e.g. demux_mkv.c
with Opus tracks. (Although libavformat interpolates these itself.)
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing
concatenating files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanisms to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
The code is shared with the --vd-lavc-threads option, so using 0 for
auto-detection just works.
But no, this is not useful. Just change it for orthogonality.
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
I can't explain this, but it seems to be a similar case to the ALSA HDMI
one. I find it hard to tell because of the slightly different names and
conventions in use in libavcodec, WAVEEXT channel masks, decoders, codec
specifications, HDMI, and platform audio APIs.
The fix is the same as the one for ao_alsa (see commit be49da72). This
should fix at least playing 7.1 sources on OSX with 7.1(rear) selected
in Audio MIDI Setup. The ao_alsa commit mentions XBMC, but I couldn't
find out where it does that or if it also does that for CoreAudio. It's
worth noting that PHT (essentially an old XBMC fork) also exhibited the
incorrect behavior (i.e. side and back speakers were swapped).
Remove flc-frc <-> sl<->sr. This was just plain wrong, and a mistaken
change to make 7.1 work properly on CoreAudio with 7.1(rear) layout.
Also see the following commit.
Add br-br <-> sl<->sr, because we decided that it makes sense.
Note that this "fudging" is applied only if the channel pairs are
replaced, i.e. they would get dropped and be replaced with silence. This
is done to compensate for libswresample's default rematrixing (which
takes care of some more common cases).
Will be helpful for the coming filter support. I planned on merging
audio/video decoding, but this will have to wait a bit longer, so only
remove the duplicate status codes.
This also makes it refcounted, i.e. the new AVFrame will reference the
mp_audio buffers, instead of potentially forcing the consumer of the
AVFrame to copy the data.
All the extra code is for handling the >8 channels case, which requires
very messy dealing with the extended_ fields (not our fault).
Correctly avoid a reload if the current device was specified by the user through
--audio-device. Previously, we only recognized if the user had specified
--ao=wasapi:device=.
This seems generally easier when using libmpv (and was already requested
and implemented before: see commit 327a779a; it was reverted some time
later).
With the weird internal logic we have to deal with, in particular the
--softvol=no case (using system volume), and using the audio API's mixer
(--softvol=auto on some systems), we still can't avoid all glitches and
corner cases that complicate this issue so much. The API user is either
recommended to use --softvol=yes or auto, or to watch the new
mixer-active property, and assume the volume/mute properties have
significant values if the mixer is active.
Remaining glitches:
- changing the volume/mute properties has no effect if no internal mixer
is used (--softvol=no) and the mixer is not active; the actual mixer
controls do not change, only the property values
- --volume/--mute do not have an effect on the volume/mute properties
before mixer initialization (the options strictly are only applied
during mixer init)
- volume-max is 100 while the mixer is not active
Notably, the address of the enumerator->count member is passed to
IMMDeviceCollection::GetCount(), which expects a UINT variable, not an int. How
did this ever work?
Previously, if the enumerator found no devices, attempting to get the default
device with IMMDeviceEnumerator::GetDefaultAudioEndpoint would result in the
cryptic (and undocumented) E_PROP_ID_UNSUPPORTED. This way, the user is given a
better indication of what exactly is wrong and isolates any other possible
triggers for this error.
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
This is probably the 3rd time the user-visible behavior changes. This
time, switch back because not normalizing seems to be the more expected
behavior from users.
Seems useless.
This only helped in one case: one audio stream in the sample
av_find_best_stream_fails.ts had AC3 packets which couldn't be
decoded, and for which avcodec_decode_audio4() returned 0 forever. In
this specific case, playback will now not start, and you have to
deselect audio manually.
(If someone complains, the old behavior might be restored, but
differently.)
Also remove the stale "bitrate" field.
While the situation is not really clear for the other rewritten
coreaudio code, it's very clear for the channel mapping code. It was all
written by us. (MPlayer doesn't even have any channel map handling.)
This covers source files which were added in mplayer2 and mpv times
only, and where all code is covered by LGPL relicensing agreements.
There are probably more files to which this applies, but I'm being
conservative here.
A file named ao_sdl.c exists in MPlayer too, but the mpv one is a
complete rewrite, and was added some time after the original ao_sdl.c
was removed. The same applies to vo_sdl.c, for which the SDL2 API is
radically different in addition (MPlayer supports SDL 1.2 only).
common.c contains only code written by me. But common.h is a strange
case: although it originally was named mp_common.h and exists in MPlayer
too, by now it contains only definitions written by uau and me. The
exceptions are the CONTROL_ defines - thus not changing the license of
common.h yet.
codec_tags.c contained once large tables generated from MPlayer's
codecs.conf, but all of these tables were removed.
From demux_playlist.c I'm removing a code fragment from someone who was
not asked; this probably could be done later (see commit 15dccc37).
misc.c is a bit complicated to reason about (it was split off mplayer.c
and thus contains random functions out of this file), but actually all
functions have been added post-MPlayer. Except get_relative_time(),
which was written by uau, but looks similar to 3 different versions of
something similar in each of the Unix/win32/OSX timer source files. I'm
not sure what that means in regards to copyright, so I've just moved it
into another still-GPL source file for now.
screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but
they're all gone.
Previously, the opt_exclusive option was used to decide which volume control
code to run. This might not always reflect the actual state, for example if passthrough
is used. Admittedly, none of the volume controls will work anyway with
passthrough, but this is the right thing to do.
Prevents channels from being dropped, e.g. when going 7.1 -> 7.1(wide)
and similar cases. The reasoning here is that channel layouts over HDMI
don't work anyway, and not dropping a channel and playing it on a
slightly "wrong" (but expected) speaker is preferable to playing silence
on these speakers.
Do this to remove issues with ao_coreaudio. Frankly I'm not sure whether
our mapping (between CA and mpv/FFmpeg speakers) is correct, but on the
other hand due to the reasons stated above it's not all that meaningful.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can not be NULL anymore. demux.c
will set it to "" if it's NULL when added. This gets rid of a corner
case everything had to handle, but which rarely happened.
Note that hresult_to_str() (coming from wasapi_explain_err()) is mostly
wasapi-specific, but since HRESULT error codes are unique, it can be
extended for any other use.
This is another attempt at making files with sparse video frames work
better.
The problem is that you generally can't know whether a jump in video
timestamps is just a (very) long video frame, or a timestamp reset. Due
to the existence of files with sparse video frames (new frame only every
few seconds or longer), every heuristic will be arbitrary (in general,
at least).
But we can use the fact that if video is continuous, audio should also
be continuous. Audio discontinuities can be easily detected, and if that
happens, reset some of the playback state.
The way the playback state is reset is rather radical (resets decoders
as well), but it's just better not to cause too much obscure stuff to
happen here. If the A/V sync code were to be rewritten, it should
probably strictly use PTS values (not this strange time_frame/delay
stuff), which would make it much easier to detect such situations and
to react to them.
It existed for XP-compatibility only. There was also a time where
ao_wasapi caused issues, but we're relatively confident that ao_wasapi
works better than or at least as well as ao_dsound on Windows Vista and
later.
Normally, PulseAudio accepts any combination of sample format, sample
rate, channel count/map. Sometimes it does not. For example, the sample
rate and channel count have fixed maximum values. We should not fail
fatally in such cases, but attempt to fall back to a working format.
We could just pass an "unset" format to Pulse, but this is not too
attractive. Pulse could use a format which we do not support, and also
doing so much for an obscure corner case is not reasonable. So just pick
a format that is very likely supported.
This still could fail at runtime (the stream could fail instead of going
to the ready state), but this also sounds too complicated. In
particular, it doesn't look like pulse will tell us the cause of the
stream failure. (Or maybe it does - but I didn't find anything.)
Last but not least, our fallback could be less dumb, and e.g. try to fix
only one of samplerate or channel count first to reduce the loss, but
this is also not particularly worth the effort.
Fixes #2654.
Given 5.1(side), this lets it pick 5.1 from [5.1, 7.1]. Which was
probably the original intention of this replacement stuff. Until now,
the opposite was done in some cases.
Keep the old heuristic if the replacement is not perfect. This would
mean that a subset of the channel layout is an inexact equivalent, but
not all of it.
(My conclusion is that audio output APIs should be designed to simply
take any channel layout, like the PulseAudio API does.)
Unify and clean up listing and selection. Use common enumerator code for both
operations to avoid duplication or inconsistencies.
Maintain, but significantly simplify, manual device selection by id, name or
number. This actually fixes loading by name which didn't really work before
since the "name" displayed by --audio-device=help differed from that used to
match the selection, which used the device "description" instead.
Save the selected deviceID in the private structure for later loading. This will
permit moving the device selection into the main thread in a future commit.
Apparently it's only wine where the qpc_position returned by
IAudioClock_GetPosition can be overflowed. So actually do the rescaling
correctly, but throw away the result if it looks unreasonable.
This fixes a regression in 5afa68835a.
Make sure that subtraction of performance counters is done correctly.
Follow the *exact* instructions for converting performance counter to something
comparable to the QPCPosition returned by IAudioClock::GetPosition
https://msdn.microsoft.com/en-us/library/windows/desktop/dd370889%28v=vs.85%29.aspx
Also make sure that subtraction of unsigned integers is stored into a signed
integer to avoid nastiness. Also be more careful about overflow in the
conversion of the device position into number of samples.
Avoid casting mp_time_us() to a double, and use llrint to convert the
double precision delay_us back to integer for ao_read_data.
Finally, actually check the return value of ao_read_data and add a verbose
message if it is not the expected value. Unfortunately,
there is no way to tell WASAPI when this happens since the frame_count in
ReleaseBuffer must match GetBuffer.
Do not try to set/get master volume in exclusive mode if there is no
hardware support. This would just uselessly change the master slider,
but have no effect on the actual volume.
Furthermore if getting hardware volume support information fails, then assume
it has none.
It was complicated and not even very intuitive to the user.
If you are controlling the master volume, you just have to be
prepared to deal with the consequences.
A manually added af_volume could lead to muted audio when switching to a
new file. af_volume keeps the last volume set by AF_CONTROL_SET_VOLUME
to return it with AF_CONTROL_GET_VOLUME, but the initial value is 0. So
the mixer volume was forced to 0 when uninitializing the filter chain and
reading back the previously set volume.
If there were many AO drivers without device selection, this added a
"Default" entry for each AO. These entries were not distinguishable, as
the device list feature is meant not to require to display the "raw"
device name in GUIs.
Disambiguate them by adding the driver name. If the AO is the first, the
name will remain just "Default". (The condition checks "num > 1",
because the very first entry is the dummy for AO autoselection.)
Of course, only FFmpeg has av_clipd(), while Libav does not. (Nevermind
that it doesn't do much more than the mpv MPCLAMP() macro. Supposedly,
libavutil can provide optimized platform-specific versions for av_clip*,
but of course nothing actually does for av_clipf() or av_clipd().)
libswresample doesn't do it - although it should, but the patch is stuck
in limbo.
Probably reduces problems with artifacts on downmixing in some cases.
Remove known useless device entries from the --audio-device list (and
corresponding property). Do this because the list is supposed to be a
high level list of devices the user can select. ALSA does not provide
such a list (in a usable manner), and ao_alsa.c is still in the best
position to improve the situation somewhat.
The ALSA doxygen says:
    IOID - input / output identification ("Input" or "Output"), NULL
    means both
This bug was blatantly introduced with commit cf94fce4.
Apparently, some audio drivers do not support the DTS subtype, but
passthrough works anyway if the AC3 subtype is set. Just retry with
AC3 if the proper format doesn't work. The audio device which
exposed this behavior reported itself as
"M601d-A3/A3R (Intel(R) Display Audio)".
xbmc/kodi even always passes DTS as AC3.
Just set the ratio directly by working around the intended semantics of
the API function. The silly rounding stuff we had isn't needed anymore
(and not entirely correct anyway).
Note that since the compensation is virtually active forever, we need to
reset it if it's not needed. So always run this code to be sure to reset
it.
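In API terms, the trick is to pass an enormous compensation distance, so the
implied ratio stays in effect "forever" (sketch; needs limits.h):

    const int distance = INT_MAX / 2;            // virtually forever
    int delta = (int)((ratio - 1.0) * distance); // 0 resets compensation
    swr_set_compensation(swr, delta, distance);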
Also note that libswresample itself had a precision issue, until it
was fixed in FFmpeg commit 351e625d.
Deal with jittering Matroska crap timestamps. This reuses the mechanism
that is needed for frames without PTS, and adds a heuristic to it. If
the interpolated timestamp is less than 1ms away from the real one, it
might be due to Matroska timestamp rounding (or other file formats with
such rounding, or files remuxed from Matroska).
While there actually isn't much of a need to do this (audio PTS
jittering by such a low amount doesn't negatively influence much), it
helps with identifying jitter from other sources.
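The heuristic itself is tiny (sketch; PTS values in seconds):

    // Within 1ms of the interpolated value: assume container rounding
    // and keep the smooth, interpolated timeline.
    if (fabs(pts - expected_pts) < 0.001)
        pts = expected_pts;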
Instead of requiring the decoder to set the PTS directly on the
dec_audio context (including handling absence of PTS etc.), transfer the
packet PTS to the decoded audio frame. Marginally simpler, and gives
more control to the generic code.
Essentially we'd use something random, just because it's part of the set
of traditionally used ALSA channel mappings. But each driver can do its
own things.
This doesn't let me sleep at night, so remove it.
This could accidentally change some spdif formats to AAC (because AAC is
the first on the list and will match first). spdif formats are
inherently uninterchangeable, so treat them as their own class of
formats (like int vs. float).
Might fix some issues with ao_wasapi.c.
Actually, it didn't really require that before (most work was avoided),
but some bits had to be run anyway. Separate the speed change into a
light-weight function, which merely updates already created filters, and
a heavy-weight one which messes with filter insertion.
This also happens to fix the case where the filters would "forget" the
current speed (force resampling, change speed, hit a volume control to
force af_volume insertion - it will reset speed and desync).
Since we now always run the light-weight function, remove the
af_scaletempo verbose message that is printed on speed setting. Other
than that, all setters are cheap.
For some reason, the encoder didn't like that the AVPacket already had
fields set. I'm not quite sure, but this might just be invalid API
usage. Do it as it's recommended.
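The recommended way is to hand the encoder a blank packet (old FFmpeg encode
API):

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;   // let the encoder allocate the buffer
    pkt.size = 0;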
We need to effectively swap the last channel pair. See commit 4e358a96
and 5a18c5ea for details.
Doing this seems rather strange, as 7.1 just extends 5.1 with 2 new
speakers, and 5.1 doesn't need this change. Going by the HDMI standard
and the Intel HDA sources (cited in the referenced commits), it also
looks like 7.1 should simply append two channels to 5.1 as well. But
swapping them is apparently correct. This is also what XBMC does. (I
didn't find any other applications doing 7.1 PCM using the ALSA channel
map API. VLC seems to ignore the 7.1 case.) Testing reveals that at
least the end result is correct.
"Normal" ALSA 7.1 is unaffected by this, as it reports a different
(and saner) channel layout.
Instead of constructing an ALSA channel map from mpv ones from scratch,
try to find the original ALSA channel map again. The result is that we
need to convert channel maps only in one direction. If we need to map
a mp_chmap to ALSA, we fetch the device's channel map list, convert
each entry to mp_chmap, and find the first one which fits.
This seems helpful for the following commit. For now, this only gets rid
of mapping back the trivial MONO mapping, which alone would still be
acceptable, but with other channel layout mogrifications it gets messy
fast. While we need to do something awkward to keep our channel map
reordering for VAR chmaps (which basically gives nicer output and
possibly slightly better performance), this is still the better
solution.
This reverts commit 4e358a9636.
Testing shows the channel pairs must indeed be swapped (details see
commit message of the reverted commit). Making the downmix code move
sl/sr to sdl/sdr is not an appropriate solution anymore, and it's
better to fix the unusual channel layout in ao_alsa.c directly.
(Not reverting the change in chmap.c; this is still correct.)
ao_alsa: attempt to fix 7.1 over HDMI
The last 2 channels of 7.1 (RLC/RRC in ALSA) were exported as sdl/sdr
instead of sl/sr (I don't even know why I chose sdl/sdr, but SL/SR
and RLC/RRC are different in the ALSA API). libsw/avresample do not
move the sl/sr channels to sdl/sdr when rematrixing, so silence was
sent for 2 channels. If my selection of sdl/sdr is essentially API
abuse, there's no reason why they should do this differently.
The mess here is really that ALSA doesn't map the HDMI layouts cleanly.
Most ALSA drivers export 7.1 in a way compatible to our expectations,
but Intel HDA/HDMI does not:
    mpv/ffmpeg:   fl-fr-fc-lfe-bl-br-sl-sr
    ALSA/generic: FL FR FC LFE RL RR SL SR    [1]
    ALSA/HDMI:    FL FR LFE FC RL RR RLC RRC  [2]
The HDMI layout is layout 0x13 (going by CEA-861-B). The comment in
the kernel code has to be correct too. The early standard defines only
1 other layout, which replaces RLC/RRC with FRC/FLC - this probably
corresponds to what we call "7.1(wide)".
So it appears when ALSA requests RLC/RRC, we should feed it sl/sr.
To make it more complicated, Kodi/xbmc apparently also have to deal with
ALSA being special, but instead of sending sl/sr to RLC/RRC, they swap
the last two pairs of the layout, and send sl/sr to RL/RR and bl/br to
RLC/RRC. Or I might have misunderstood their code. I don't have a
7.1-capable A/V receiver, so I can't test this.
For now, go with the simpler solution, and wait until someone tests it.
If the speakers end up swapped, a completely different solution will be
needed.
[1] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/sound/core/pcm_lib.c?id=refs/tags/v4.3#n2434
[2] https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/sound/pci/hda/patch_hdmi.c?id=refs/tags/v4.3#n307