Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time. Until
now, decoders always returned the pts via the pkt_pts field, while the
pts field was used for encoding and libavfilter only. Recently, pkt_pts
was deprecated, and pts was switched to always carry the pts.
This means we have to be careful not to accidentally use the wrong
field, depending on the libavcodec version. We have to explicitly check
the version numbers. Of course the version numbers are completely
idiotic, because the pkg-config and library names are the same for
FFmpeg and Libav, so we have to deal with this explicitly as well.
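Roughly, the resulting check looks like this (a sketch only; the version
cutoffs below are made up, not the real ones):

    #include <libavcodec/version.h>
    #include <libavutil/frame.h>

    /* FFmpeg sets LIBAVCODEC_VERSION_MICRO to >= 100, Libav keeps it below
     * 100 - the only way to tell the forks apart from the headers. */
    #if LIBAVCODEC_VERSION_MICRO >= 100
    #define NEW_PTS_BEHAVIOR \
        (LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 40, 100)) /* FFmpeg */
    #else
    #define NEW_PTS_BEHAVIOR \
        (LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 20, 0))   /* Libav */
    #endif

    static inline int64_t decoded_frame_pts(const AVFrame *f)
    {
    #if NEW_PTS_BEHAVIOR
        return f->pts;      /* pts now carries the decoder pts */
    #else
        return f->pkt_pts;  /* old behavior: decoders return it here */
    #endif
    }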
If the "default" device refuses to be opened as spdif device (i.e. it
errors due to the AES0 etc. parameters), we were falling back to the
iec958 device. This is needed on some systems for smooth operation with
PCM vs. spdif.
Now change it to try "hdmi" before "iec958", which supposedly helps in
other situations.
Better suggestions welcome. Apparently kodi does this too, although I
didn't check directly.
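As a sketch (real code also appends the AES0 etc. parameters to the
device names):

    #include <alsa/asoundlib.h>

    static snd_pcm_t *open_spdif(void)
    {
        static const char *const names[] = {"default", "hdmi", "iec958", NULL};
        for (int i = 0; names[i]; i++) {
            snd_pcm_t *pcm;
            if (snd_pcm_open(&pcm, names[i], SND_PCM_STREAM_PLAYBACK, 0) >= 0)
                return pcm;  /* first device that accepts being opened wins */
        }
        return NULL;
    }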
The player tries to avoid splitting frames with spdif (sample alignment
stuff). This can in certain corner cases with certain drivers lead to
the situation that ao_get_space() returns a number higher than 0 and
lower than the audio frame size. The playloop will round this down to 0
bytes and do nothing, leading to a missed wakeup. This can lead to
underruns or playback completely getting stuck.
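In code terms, the failure mode is roughly this (illustration only,
numbers arbitrary):

    struct ao;
    int ao_get_space(struct ao *ao);  /* the real function mentioned above */

    static int bytes_to_write(struct ao *ao)
    {
        int space = ao_get_space(ao);  /* e.g. 4096: more than 0... */
        int frame_size = 6144;         /* ...but less than one AC3 burst */
        /* rounds down to 0 -> playloop does nothing -> missed wakeup */
        return space / frame_size * frame_size;
    }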
It can be reproduced by playing AC3 passthrough with no video and:
--ao=null --ao-null-buffer=0.256 --ao-null-outburst=6100
This commit attempts to fix it by allowing the playloop to write some
additional data (to get a complete frame), that will be buffered within
the AO ringbuffer even if the audio device doesn't want it.
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not declared
in the source file, even when INITGUID is set.
This was in the parser code all along. As far as I can tell, *cp was
intended. There is no need to check cp for NULL (nor does it make any
sense to do so every time around the loop) for AF_CONTROL_COMMAND.
However, s->matrixstr can be NULL, so checking for that separately is in
order.
For stereo and typical L/R-first channel arrangements, this avoids
undesirable phasing artifacts, especially obvious when speed is changed
and then reset. Without this, there is a very audible change in the
stereo field even when librubberband is no longer actually making any
speed changes.
There were multiple values under M_OPT_EXIT (M_OPT_EXIT-n for n>=0).
Somehow M_OPT_EXIT-n either meant error code n (with n==0 no error?), or
the number of option values consumed (0 or 1). The latter is MPlayer
legacy, which left it to the option type parsers to determine whether an
option took a value or not. All of this was changed in mpv, by requiring
the user to use explicit syntax ("--opt=val" instead of "-opt val").
In any case, the n value wasn't even used (anymore), so rip this all
out. Now M_OPT_EXIT-1 doesn't mean anything, and could be used by a new
error code.
Currently, calling mp_input_wakeup() will wake up the core thread (also
called the playloop). This seems odd, but currently the core indeed
calls mp_input_wait() when it has nothing more to do. It's done this way
because MPlayer used input_ctx as central "mainloop".
This is probably going to change. Remove direct calls to this function,
and replace it with mp_wakeup_core() calls. ao and vo are changed to use
opaque callbacks and not use input_ctx for this purpose. Other code
already uses opaque callbacks, or has legitimate reasons to use
input_ctx directly (such as sending actual user input).
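The pattern is the usual opaque callback one; a hypothetical sketch (not
mpv's actual struct layout):

    struct wakeup_cb {
        void (*wakeup)(void *ctx);  /* called from the ao/vo thread */
        void *ctx;                  /* opaque; the core registers itself */
    };

    static void notify(struct wakeup_cb *cb)
    {
        if (cb->wakeup)
            cb->wakeup(cb->ctx);    /* e.g. ends up in mp_wakeup_core() */
    }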
The first one is printed even if the user disabled video (or there's no
video), so just remove it. The second one uses deprecated sub-option
syntax, so remove that as well.
And introduce a global option which does this. Or more precisely, this
deprecates the global wasapi and coreaudio options, and adds a new one
that merges their functionality. (Due to the way the sub-option
deprecation mechanism works, this is simpler.)
I decided that it's too much work to convert all the VO/AOs to the new
option system manually at once. So here's a shitty hack instead, which
achieves almost the same thing. (The only user-visible difference is
that e.g. --vo=name:help will list the sub-options normally, instead of
showing them as deprecation placeholders. Also, the sub-option parser
will verify each option normally, instead of deferring to the global
option parser.)
Another advantage is that once we drop the deprecated options,
converting the remaining things will be easier, because we obviously
don't need to add the compatibility hacks.
Using this mechanism is done separately in the next commit, to keep the
diff noise down.
Instead of requiring each VO or AO to manually add members to MPOpts and
the global option table, make it possible to register them automatically
via vo_driver/ao_driver.global_opts members. This avoids modifying
options.c/options.h every time, including having to duplicate the exact
ifdeffery used to enable a driver.
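Hypothetical sketch of what a driver registration then looks like (the
exact types in mpv are simplified away here):

    struct m_sub_options;  /* mpv's sub-option table type */
    extern const struct m_sub_options ao_example_conf;

    struct ao_driver {
        const char *name;
        /* ... */
        const struct m_sub_options *global_opts;  /* picked up automatically */
    };

    const struct ao_driver audio_out_example = {
        .name        = "example",
        .global_opts = &ao_example_conf,  /* no options.c/options.h edits */
    };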
Normally I'd prefer a bunch of smaller functions with fewer parameters
over a single function with a lot of parameters. But future changes will
require messing with the parameters in a slightly more complex way, so a
combined function will be needed anyway. The now-unused "global"
parameter is required for later as well.
Positional parameters cause problems because they can be ambiguous with
flag options. If a flag option is removed or turned into a non-flag
option, it'll usually be interpreted as value for the first sub-option
(as positional parameter), resulting in very confusing error messages.
This changes it into a simple "option not found" error.
I don't expect that anyone really used positional parameters with --vo
or --ao. Although the docs for --ao=pulse seem to encourage positional
parameters for the host/sink options, which means it could possibly
annoy some PulseAudio users.
--vf and --af are still mostly used with positional parameters, so this
must be a configurable option in the option parser.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
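Sketch of the assignment with the required #if (using the micro version
to detect FFmpeg; av_codec_set_pkt_timebase() would work too):

    #include <libavcodec/avcodec.h>

    static void setup_timebase(AVCodecContext *avctx, AVRational tb)
    {
    #if LIBAVCODEC_VERSION_MICRO >= 100 /* pkt_timebase is FFmpeg-only */
        avctx->pkt_timebase = tb;  /* unit of packet/frame timestamps */
    #endif
        /* avctx->time_base is left alone: it's for encoding, and some
         * decoders overwrite it with unrelated codec metadata anyway */
    }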
There are situations where the resampler is destroyed and recreated
during playback. If recreating the resampler unexpectedly fails, the
filter function is supposed to return an error. This wasn't done
correctly, because get_out_samples() accessed the resampler before the
check. Move the check up to fix this.
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
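The refinement idea as a sketch (the threshold is made up):

    #include <libavutil/rational.h>

    /* Double the denominator until the tick is fine enough; every timestamp
     * representable in the old timebase stays exactly representable. */
    static AVRational refine_timebase(AVRational tb)
    {
        while (tb.num > 0 && tb.den > 0 && av_q2d(tb) > 1e-6)
            tb.den *= 2;
        return tb;
    }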
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
The code actually kept going out of EOF mode into resync mode back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.
The touched code is for seek resets and such - we simply want to reset
the entire resample state. But I noticed after a seek a tiny bit of
audio is missing (mpv's audio sync code inserted silence to compensate).
It turns out swr_drop_output() either does not reset some internal state
as we expect, or it's designed to drop not only buffered samples, but
also future samples.
On the other hand, libavresample's avresample_read() does not have this
problem. (It is also pretty explicit in what it does - return/skip
buffered data, nothing else.)
Is the libswresample behavior a bug? Or a feature? Does nobody even
know? Who cares - use the hammer to unfuck the situation. Destroy and
deallocate the libswresample context and recreate it. On every seek.
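Sketch of the hammer (parameters reduced to the bare minimum):

    #include <libswresample/swresample.h>

    static int recreate_resampler(SwrContext **ctx,
                                  int64_t ch_layout, enum AVSampleFormat fmt,
                                  int rate)
    {
        swr_free(ctx);  /* frees *ctx and NULLs it, dropping all state */
        *ctx = swr_alloc_set_opts(NULL, ch_layout, fmt, rate,  /* output */
                                        ch_layout, fmt, rate,  /* input */
                                  0, NULL);
        if (!*ctx || swr_init(*ctx) < 0)
            return -1;
        return 0;
    }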
The demuxer layer usually doesn't log per-stream information, and even
the replaygain information was logged only if it came from tags.
So log it in af_volume instead.
With the previous commit, ao_alsa.c now has 3 possible ways to pause
playback. Actually all 3 of them need get_delay() to fake its return
value, so don't duplicate that code.
Also much of the code looks a bit questionable when considering
inconsistent pause/resume calls from outside, so ignore redundant calls.
push.c does not handle this automatically, and AOs using push.c have to
handle it themselves. Also, ALSA is low-level enough that it needs
explicit support in user code. At least I haven't found any option that
does this.
We still can get away relatively cheaply by abusing underflow-handling
for this. ao_alsa.c already configures ALSA to handle underflows by
playing silence. So we purposely induce an underflow when opening the
device, as well as when pausing or resetting the device.
This introduces minor misbehavior: it doesn't account for the additional
delay the initial silence adds, unless the device has fully played the
fragment of silence when the player starts sending data to it. But
nobody cares.
Exactly the same situation as with ao_alsa in commit 0b144eac (except
that we can detect the situation better under wasapi).
Essentially, wasapi will allow us to output any sample format, and not
just the one configured by the user in the audio system settings.
This commit adds an --audio-channel=auto-safe mode, and makes it the
default. This mode behaves like "auto" with most AOs, except with
ao_alsa. The intention is to allow multichannel output by default on
sane APIs. ALSA is not sane, in that it's so low-level that it will e.g.
configure any layout over HDMI, even if the connected A/V receiver does
not support it. The HDMI fuckup is of course not ALSA's fault, but other
audio APIs normally isolate applications from dealing with this and
require the user to globally configure the correct output layout.
This will help with other AOs too. ao_lavc (encoding) is changed to the
new semantics as well, because it used to force stereo (perhaps because
encoding mode is supposed to produce safe files for crap devices?).
Exclusive mode output on Windows might need to be adjusted accordingly,
as it grants the same kind of low level access as ALSA (requires more
research).
In addition to the things mentioned above, the --audio-channels option
is extended to accept a set of channel layouts. This is supposed to be
the correct way to configure mpv ALSA multichannel output. You need to
put a list of channel layouts that your A/V receiver supports.
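For example (layouts chosen for illustration):

    --audio-channels=7.1,5.1,stereo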
Pointless anyway. With superficial checking I couldn't find any decoder
which actually outputs this, and AO chmap negotiation would properly
ignore them anyway in most cases.
We send a refcounted frame to the encoder, but then disrespect
refcounting rules and write to the frame data without making sure the
buffer is really writeable.
In theory this can lead to reallocation on every frame if the encoder
really keeps a reference. If we really cared, we could fix this by
providing a buffer pool. But then again, we don't care.
This is logical: the function makes sense only in situations where you
are going to write to the audio data. To make it worse,
av_buffer_realloc() also handles this situation, but only if the buffer
size changes (simply because it can't realloc memory in use), so we have
to explicitly force reallocation by unreffing the buffers first.
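Sketch of the writability check:

    #include <libavutil/frame.h>

    static int prepare_for_write(AVFrame *frame)
    {
        /* copies into fresh, unshared buffers if the encoder still holds
         * a reference; a no-op if we are the sole owner */
        return av_frame_make_writable(frame);  /* 0 on success, <0 on error */
    }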
The FFmpeg API is incredibly weird and inconsistent about this. This is
also a FFmpeg-only issue and nothing like this is in Libav - which
doesn't really show FFmpeg in a very positive light.
(To make it even worse: this is a full-blown Libav API incompatibility,
even though this crap was added for Libav ABI-compatibility. It's
absurd.)
Quoting the FFmpeg header for the AVFrame.channels field:
    /**
     * number of audio channels, only used for audio.
     * Code outside libavutil should access this field using:
     * av_frame_get_channels(frame)
     * - encoding: unused
     * - decoding: Read by user.
     */
    int channels;
It says "should" not must, and it doesn't even mention
av_frame_set_channels(). It's also in the section for public fields (not
below a marker that indicates private fields in a public struct, like
it's done e.g. in AVCodecContext).
But not using the accessor will cause silent failures on ABI changes.
The failure that happened due to this code didn't even make it apparent
what was wrong. So just use the idiotic accessor.
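I.e., for an AVFrame *frame:

    int ch = av_frame_get_channels(frame);  /* instead of frame->channels */
    av_frame_set_channels(frame, ch);       /* instead of frame->channels = n */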
Also harmonize the FFmpeg-cursing in the code. (It's fully justified.)
Fixes #3295.
Note that mpv will still check the exact library version numbers, and
reject mismatches - to protect itself from such issues in the future.
mixer.c didn't really deserve to be separate anymore, as half of its
contents were unnecessary glue code after recent changes. It also
created a weird split between audio.c and af.c due to the fact that
mixer.c could insert audio filters. With the code being in audio.c
directly, together with other code that inserts filters during runtime,
it will be possible to clean up this code a bit and make it work like the
video filter code.
As part of this change, make the balance code work like the volume code,
and add an option to back the current balance value. Also, since the
balance semantics are unexpected for most users (panning between the
audio channels, instead of just changing the relative volume), and there
are some other problems, formally deprecate both the old property and the
new option.
Since mixer->ao is always NULL now (it was really just forgotten to be
removed), the uninit call never actually cleared the af field, leaving
a dangling pointer that could be accessed by volume control.
This code was supposed to adjust existing conversion filters (to make
them output a different format). But the code was just broken,
apparently a refactoring accident. It accessed af instead of af->prev.
The bug tended to add new conversion filters, even if an existing one
could have been used. (Can be tested by inserting a dummy lavrresample
filter followed by a format filter which forces conversion.)
In addition, it's probably better to return the actual error code if
reinitializing the filter fails. It would then respect an AF_FALSE
return value, which means format negotiation failed, instead of a
generic error.
af_volume has a volumedb sub-option, which allows the user to set an
explicit volume. Until recently, the player read back this value and
used it as initial softvol volume. But now it just overwrites it.
Instead of overwriting it, multiply the different gain values. Above
all, this will do the right thing if only softvol is used, or if the
user only adds the af_volume filter manually.
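I.e. (illustration; names made up):

    static float combined_gain(float softvol_gain, float filter_gain)
    {
        /* linear gains multiply (equivalently, the dB values add) */
        return softvol_gain * filter_gain;
    }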
Normally, you want downmixing to happen first thing in the filter chain.
This is reflected in codec downmixing, which feeds the filter chain
downmixed audio in the first place. Doing this has the advantage of
needing less data to process. But the main motivation is that if there
is a drc filter in the chain, you want it to process the downmixed
audio.
Add an idiotic heuristic to achieve this. It tries to detect whether the
audio was indeed automatically downmixed (or upmixed). To detect what
the output format is going to be, it builds the filter chain normally,
and then retries with the heuristic applied (and for extra paranoia,
retries without the heuristic again if it fails to successfully rebuild
the filter chain for unknown reasons). This is simple and will work in
almost all cases.
Doing it in a more complete way is rather hard, because filters are so
generic. For example, we know absolutely nothing about the behavior of
af_lavfi, which creates an opaque filter graph with libavfilter. We
don't know why a filter would e.g. change the channel layout on its
output. (Our heuristic bails out in this case.) We're also slave to the
lowest common denominator of how our format negotiation works, and how
libavfilter's works.
In theory, we could make this mechanism explicit by introducing a
special dummy filter. The filter chain would then try to convert between
input and output formats at the dummy filter, which would give the user
more control over how downmix happens. On the other hand, the user could
just insert explicit conversion filters instead, so this would probably
have questionable value.
The algorithm and functionality is the same, but the code becomes much
simpler and easier to follow.
The assumption that there is only 1 conversion filter (lavrresample)
helps with the simplification, but the main change is to use the same
code for format/channels/rate. Get rid of the different AF_CONTROL_SET_*
controls, and change the af->data parameters directly. (af->data is
badly named, but essentially is a placeholder for the output format.)
Also, instead of trying to use the af_reinit() loop to init inserted
conversion filters or filters with changed output formats, do it inline,
and move the common code to a filter_reinit() function. This gets rid of
the awful retry variable.
In general, this should not change any runtime behavior.
This happens to be better for the af_volume filter (for softvol), and
saves some code too. It's "better" because you want to affect the
final filtered audio, such as after a manually inserted drc filter.
Drop the code for switching the volume options and properties between
af_volume and AO volume controls. interface-changes.rst mentions the
changes in detail.
Do this because this was exceedingly complex and had other problems as
well. It was also very hard to test. It's just not worth the trouble.
Some leftovers like AOCONTROL_HAS_PER_APP_VOLUME will be removed at a
later point.
Fixes #3322.
When selecting a device that simply doesn't exist with --audio-device,
AudioUnit will still initialize and start playback without complaining.
But it will never call the audio render callback, which leads to audio
playback simply not progressing.
I couldn't find a way to get AudioUnit to report an error at all, so
here's a crappy hack that takes care of this in most cases. We assume
that all devices have a kAudioDevicePropertyDeviceIsAlive property.
Invalid devices will error when querying the property (with 'obj!' as
status code).
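Sketch of the check (error handling simplified):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    static bool device_is_alive(AudioDeviceID device)
    {
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyDeviceIsAlive,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster,
        };
        UInt32 alive = 0, size = sizeof(alive);
        OSStatus err = AudioObjectGetPropertyData(device, &addr, 0, NULL,
                                                  &size, &alive);
        /* invalid devices error out here instead of failing at playback */
        return err == noErr && alive;
    }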
This is not the correct fix, because we try to double-guess AudioUnit's
behavior by accessing a lower level API. Suggestions welcome.
The libavcodec wmapro decoder will skip some bytes at the start of the
first packet and return each time. It will not return any audio data in
this state.
Our own code as well as libavcodec's new API handling
(avcodec_send_packet() etc.) discard the PTS on the first return, which
means the PTS is never known for the first packet. This results in a
"Failed audio resync." message.
Fix it by remembering the PTS in next_pts. This field is used only if the
decoder outputs no PTS, and is updated after each frame - and thus
should be safe to set.
(Possibly this should be fixed in libavcodec new API handling by not
setting the PTS to NOPTS as long as no real data has been output. It
could even interpolate the PTS if the timebase is known.)
Fixes the failure message seen in #3297.
Some bugs in this code are exposed by e.g. playing lossless audio files
with --ad-lavc-threads=16. (libavcodec doesn't really support threaded
audio decoding, except for lossless files.) In these cases, a major
amount of audio can be buffered, which makes incorrect handling of this
buffering obvious.
For one, draining the decoder can take a while, so if there's a new
segment, we shouldn't read audio.
The segment end check was completely wrong, and used the start value.
Instead of hardcoding what the libavcodec ac3 encoder expects, configure
it based on the AVCodec fields.
Unfortunately, it doesn't export the list of sample rates, so that is
done manually. This commit actually always fixes the rate to 48 kHz. I
don't even know whether the other rates worked. (Possibly they did, but
they'd still change the spdif parameters, and would work differently
from ad_spdif.c.)
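Sketch of reading the AVCodec fields (the fallback value is made up):

    #include <libavcodec/avcodec.h>

    static void probe_ac3_encoder(void)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_AC3);
        if (!codec)
            return;
        /* sample_fmts is terminated by AV_SAMPLE_FMT_NONE; take the first */
        enum AVSampleFormat fmt = codec->sample_fmts ? codec->sample_fmts[0]
                                                     : AV_SAMPLE_FMT_FLTP;
        /* supported_samplerates is not filled in for this encoder, so the
         * rate stays hardcoded */
        int rate = 48000;
        (void)fmt; (void)rate;
    }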
Workaround for an awful corner-case. The new decode API "locks" the
decoder into the EOF state once a drain packet has been sent. The
problem starts with a file containing a 0-sized packet, which is
interpreted as drain packet.
This should probably be changed in libavcodec (not treating 0-sized
packets as drain packets with the new API) or in libavformat (discard
0-sized packets as invalid), but efforts to do so have been fruitless.
Note that vd_lavc.c already does something similar, but originally for
other reasons.
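Sketch of the workaround (helper name made up):

    #include <libavcodec/avcodec.h>

    static int send_packet_filtered(AVCodecContext *avctx, AVPacket *pkt)
    {
        /* never forward 0-sized packets: the new decode API would treat
         * them as drain packets and lock the decoder into EOF */
        if (pkt && pkt->size == 0)
            return 0;
        return avcodec_send_packet(avctx, pkt);
    }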
Fixes #3106.
This helps with shitty APIs and even shittier drivers (I'm looking at
you, ALSA). Sometimes they won't send proper wakeups. This can be fine
during playback, when for example playing video, because mpv still will
wakeup the AO outside of its own wakeup mechanisms when sending new data
to it. But when draining, it entirely relies on the driver's wakeup
mechanism. So when the driver wakeup mechanism didn't work, it could
hard freeze while waiting for the audio thread to play the rest of the
data.
Avoid this by waiting for an upper bound. We set this upper bound at the
total mpv audio buffer size plus 1 second. We don't use the get_delay
value, because the audio API could return crap for it, and we're being
paranoid here. I couldn't confirm whether this works correctly, because
my driver issue fixed itself.
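Sketch of the bounded wait (names hypothetical; buffer_seconds would be
mpv's total audio buffer size):

    #include <pthread.h>
    #include <time.h>

    static void drain_wait_bounded(pthread_cond_t *wakeup,
                                   pthread_mutex_t *lock,
                                   double buffer_seconds)
    {
        struct timespec until;
        clock_gettime(CLOCK_REALTIME, &until);
        until.tv_sec += (time_t)buffer_seconds + 2;  /* +1s of paranoia */
        /* returns early on a driver wakeup, or times out - either is fine */
        pthread_cond_timedwait(wakeup, lock, &until);
    }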
(In the case that happened to me, the driver somehow stopped getting
interrupts. aplay froze instead of playing audio, and playing audio-only
files resulted in a chop party. Video worked, for reasons mentioned
above, but draining froze hard. The driver problem was solved when
closing all audio output streams in the system. Might have been a dmix
related problem too.)
When we're draining, don't wakeup the core on every buffer fill, since
unlike during normal playback, we won't actually get more data. The
wakeup here conceptually works like wakeups with condition variables,
where redundant wakeups do not hurt, so this is just a minor change and
nothing of consequence.
(Final EOF also requires waking up the core, but there is separate code
to send this notification.)
Also dump the p->still_playing field in trace logging.
For clang, it's enough to just put (void) around usages we are
intentionally ignoring the result of.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
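I.e.:

    #include <unistd.h>

    static void ignore_result(int fd, const void *buf, size_t n)
    {
        (void)write(fd, buf, n);  /* enough for clang */
        /* GCC warns about warn_unused_result regardless of the cast,
         * hence -Wno-unused-result globally */
    }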
Since the main thread is shared by other things in the player, using STA
(single threaded apartment) may have caused problems. Instead initialize
in MTA (multithreaded apartment).
This reportedly makes it work on ODROID-C2. The idea for this hack is
taken from kodi; they unconditionally set some or all of those flags.
I don't trust ALSA enough to hope that setting these flags couldn't
break something else, so we try without them first.
It's not clear whether this is a driver bug or a bug in the ALSA libs.
There is no ALSA bug tracker (the ALSA website has had a dead link to
a deleted bug tracker for years). There's not much we can do other than
piling up ridiculous hacks. At least I think that at this point invalid
API usage by mpv can be excluded as a cause.
ALSA might be the worst audio API ever.
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
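The include-order rule this relies on (the second header is just an
example of a GUID-using one):

    /* INITGUID must be in effect before any header that uses DEFINE_GUID,
     * so the GUIDs get instantiated (with __declspec(selectany)) instead
     * of merely declared: */
    #include <initguid.h>
    #include <mmdeviceapi.h>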
Fixes #3097
Setting this here is a race condition. It's called from a CoreAudio
callbacks, and there are no locks. It's a string, so this can be
potentially severe.
It's hard to fix and only CoreAudio supported it, so remove it.
This causes the "audio-out-detected-device" property to return nothing
on all platforms.