The "default" entry (which is and always was mpv/mplayer's default) does
not have a description set in the ALSA API. (While "sysdefault"
strangely does.)
Instead of an empty description, this should show something nice, so
reuse the ao.c code for naming default devices (see previous commit).
It's still a bit ugly that audio-device-list will have a default entry
for "Autoselect device" and "Default (alsa)", but then again we probably
want to allow the user to force ALSA (i.e. prevent fallbacks to other
AOs) just because ALSA is so flaky and makes this a legitimate feature.
This will make it easier for AOs to add explicit default device entries.
(See next commit.)
Hopefully this change doesn't accidentally lead to bogus "Default"
entries appearing, but that can only happen if the device ID is empty,
which would mean the underlying audio API returned bogus entries.
Use the device name as fallback. This is ugly, but still better than
skipping the description entirely. This can be an issue on ALSA, where
the API can return entries without proper description.
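The idea, as a minimal sketch against ALSA's name-hint API (the
enumeration loop and mpv's actual listing code are omitted, and the
helper name is made up):

    #include <stdlib.h>
    #include <string.h>
    #include <alsa/asoundlib.h>

    /* Hypothetical helper: return a usable description for one name hint,
     * falling back to the device name if ALSA provides no description. */
    static char *device_description(void *hint)
    {
        char *name = snd_device_name_get_hint(hint, "NAME");
        char *desc = snd_device_name_get_hint(hint, "DESC");
        if (!desc || !desc[0]) {
            free(desc);
            desc = strdup(name ? name : "");  /* ugly, but better than nothing */
        }
        free(name);
        return desc;  /* caller frees */
    }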
This happens when ALSA gives us more channels than we asked for, for
whatever reason. It looks like this wasn't handled correctly. The mpv
and ALSA channel counts could mismatch, which would lead to UB.
I couldn't actually trigger this case, though. I'm fairly sure that
drivers or plugins exist that do it anyway. (Unofficial ALSA motto: if
it can be broken, then why not break it?)
If the input is already mono or stereo, or if channel map selection
results in mono or stereo, then disable further use of the chmap ALSA
API (or rather, stop trusting its results). Then we behave like a simple
application that only wants to output mono or stereo.
See #3045 and #2905. I couldn't actually test these cases, but this
commit is supposed to fix them.
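A rough sketch of the condition, with simplified names (not ao_alsa.c's
actual fields); the point is just that with two channels or fewer we stop
consulting the chmap API and accept whatever ALSA actually configured:

    #include <stdbool.h>
    #include <alsa/asoundlib.h>

    /* Sketch: decide whether to keep using the chmap API. If we requested
     * (or ended up with) mono/stereo, behave like a simple application
     * and stop trusting channel map negotiation. */
    static bool should_use_chmap(snd_pcm_t *pcm, int requested_channels)
    {
        unsigned int actual = 0;
        snd_pcm_hw_params_t *hw;
        snd_pcm_hw_params_alloca(&hw);
        if (snd_pcm_hw_params_current(pcm, hw) < 0 ||
            snd_pcm_hw_params_get_channels(hw, &actual) < 0)
            return false;
        /* ALSA may hand back more channels than we asked for. */
        return requested_channels > 2 && actual > 2;
    }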
set_chmap() skipped _setting_ the ALSA chmap if the caller requested
disabling chmap use by setting dev_chmap.num=0, but it still queried the
current ALSA channel map. We don't trust it that much, so disable that
as well.
But we still query and log it, because that could be helpful for
debugging. Otherwise we could skip the entire set_chmap() call in these
cases.
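A self-contained sketch of the query-and-log-only path (mpv's logging
macros replaced with printf):

    #include <stdio.h>
    #include <stdlib.h>
    #include <alsa/asoundlib.h>

    /* Sketch: query and print the current ALSA channel map purely for
     * debugging; with chmap use disabled we don't act on the result. */
    static void log_current_chmap(snd_pcm_t *pcm)
    {
        snd_pcm_chmap_t *map = snd_pcm_get_chmap(pcm);
        if (!map)
            return;
        char buf[128];
        if (snd_pcm_chmap_print(map, sizeof(buf), buf) > 0)
            printf("Current channel map: %s\n", buf);
        free(map);
    }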
Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time. Until
now, decoders always returned the pts via the pkt_pts field, while the
pts field was used for encoding and libavfilter only. Recently, pkt_pts
was deprecated, and pts was switched to always carry the pts.
This means we have to be careful not to accidentally use the wrong
field, depending on the libavcodec version. We have to explicitly check
the version numbers. Of course the version numbers are completely
idiotic, because the pkg-config and library names are the same for
FFmpeg and Libav, so we have to deal with that explicitly as well.
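A hedged sketch of the version dance (the exact cutoff below is
illustrative; FFmpeg is told apart from Libav by
LIBAVCODEC_VERSION_MICRO >= 100, which is the only distinguishing mark):

    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: pick the field that actually carries the decoded frame's pts,
     * depending on which library and version we are built against. */
    static int64_t decoded_frame_pts(const AVFrame *frame)
    {
    #if LIBAVCODEC_VERSION_MICRO >= 100 && \
        LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 61, 100)
        return frame->pts;      /* newer FFmpeg: decoders set pts directly */
    #else
        return frame->pkt_pts;  /* Libav and older FFmpeg: pts of the packet */
    #endif
    }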
If the "default" device refuses to be opened as spdif device (i.e. it
errors due to the AES0 etc. parameters), we were falling back to the
iec958 device. This is needed on some systems for smooth operation with
PCM vs. spdif.
Now change it to try "hdmi" before "iec958", which supposedly helps in
other situations.
Better suggestions welcome. Apparently kodi does this too, although I
didn't check directly.
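A minimal sketch of the fallback order; the real code also appends the
AES parameters to the device string, and error handling and hw-param
setup are omitted here:

    #include <alsa/asoundlib.h>

    /* Sketch: try "default" first, then more specific spdif-capable devices. */
    static snd_pcm_t *open_spdif_device(void)
    {
        static const char *const candidates[] = {"default", "hdmi", "iec958", NULL};
        for (int i = 0; candidates[i]; i++) {
            snd_pcm_t *pcm;
            if (snd_pcm_open(&pcm, candidates[i],
                             SND_PCM_STREAM_PLAYBACK, 0) >= 0)
                return pcm;  /* caller still has to set the hw params */
        }
        return NULL;
    }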
The player tries to avoid splitting frames with spdif (sample alignment
stuff). This can in certain corner cases with certain drivers lead to
the situation that ao_get_space() returns a number higher than 0 and
lower than the audio frame size. The playloop will round this down to 0
bytes and do nothing, leading to a missed wakeup. This can lead to
underruns or playback completely getting stuck.
It can be reproduced by playing AC3 passthrough with no video and:
--ao=null --ao-null-buffer=0.256 --ao-null-outburst=6100
This commit attempts to fix it by allowing the playloop to write some
additional data (to get a complete frame), that will be buffered within
the AO ringbuffer even if the audio device doesn't want it.
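The arithmetic of the corner case, with illustrative numbers (matching
the repro line above) and paraphrased playloop logic:

    #include <stdio.h>

    int main(void)
    {
        int space = 6100;       /* bytes the AO reports as free */
        int frame_size = 6144;  /* one spdif frame, e.g. an AC3 burst */
        /* The playloop rounds the free space down to whole frames: */
        int to_write = space / frame_size * frame_size;
        printf("writing %d bytes\n", to_write);  /* 0 -> no write, no wakeup */
        return 0;
    }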
We always want to use __declspec(selectany) to declare GUIDs, but
manually including <initguid.h> in every file that used GUIDs was
error-prone. Since all <initguid.h> does is define INITGUID and include
<guiddef.h>, we can remove all references to <initguid.h> and just
compile with -DINITGUID to get the same effect.
Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
build, since apparently some GUIDs (such as GUID_NULL) are not declared
in the source file, even when INITGUID is set.
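For illustration, a file compiled with -DINITGUID that gets a selectany
GUID definition out of <guiddef.h> alone (the GUID value is made up):

    /* Build with e.g.: gcc -DINITGUID -c guids.c  (MinGW-w64 headers) */
    #include <guiddef.h>

    /* With INITGUID defined, DEFINE_GUID emits an actual (selectany)
     * definition instead of an extern declaration. */
    DEFINE_GUID(GUID_example_only,  /* hypothetical GUID, illustration only */
                0x12345678, 0x1234, 0x5678,
                0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0);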
This was in the parser code all along. As far as I can tell, *cp was
intended. There is no need to check cp for NULL (nor does it make any
sense to do so every time around the loop) for AF_CONTROL_COMMAND.
However, s->matrixstr can be NULL, so checking for that separately is in
order.
For stereo and typical L/R-first channel arrangements, this avoids
undesirable phasing artifacts, especially obvious when speed is changed
and then reset. Without this, there is a very audible change in the
stereo field even when librubberband is no longer actually making any
speed changes.
There were multiple values under M_OPT_EXIT (M_OPT_EXIT-n for n>=0).
Somehow M_OPT_EXIT-n either meant error code n (with n==0 no error?), or
the number of option values consumed (0 or 1). The latter is MPlayer
legacy, which left it to the option type parsers to determine whether an
option took a value or not. All of this was changed in mpv, by requiring
the user to use explicit syntax ("--opt=val" instead of "-opt val").
In any case, the n value wasn't even used (anymore), so rip this all
out. Now M_OPT_EXIT-1 doesn't mean anything, and could be reused for a
new error code.
Currently, calling mp_input_wakeup() will wake up the core thread (also
called the playloop). This seems odd, but currently the core indeed
calls mp_input_wait() when it has nothing more to do. It's done this way
because MPlayer used input_ctx as the central "mainloop".
This is probably going to change. Remove direct calls to this function,
and replace it with mp_wakeup_core() calls. ao and vo are changed to use
opaque callbacks and not use input_ctx for this purpose. Other code
already uses opaque callbacks, or has legitimate reasons to use
input_ctx directly (such as sending actual user input).
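The callback shape is roughly this (field and function names are
illustrative, not mpv's exact API):

    /* Sketch: an AO/VO gets an opaque wakeup callback instead of input_ctx. */
    struct wakeup_hook {
        void (*wakeup_cb)(void *ctx);  /* e.g. ends up calling mp_wakeup_core() */
        void *wakeup_ctx;
    };

    static void notify_core(struct wakeup_hook *hook)
    {
        if (hook->wakeup_cb)
            hook->wakeup_cb(hook->wakeup_ctx);  /* wake the playloop thread */
    }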
The first one is printed even if the user disabled video (or there's no
video), so just remove it. The second one uses deprecated sub-option
syntax, so remove that as well.
And introduce a global option which does this. Or more precisely, this
deprecates the global wasapi and coreaudio options, and adds a new one
that merges their functionality. (Due to the way the sub-option
deprecation mechanism works, this is simpler.)
I decided that it's too much work to convert all the VO/AOs to the new
option system manually at once. So here's a shitty hack instead, which
achieves almost the same thing. (The only user-visible difference is
that e.g. --vo=name:help will list the sub-options normally, instead of
showing them as deprecation placeholders. Also, the sub-option parser
will verify each option normally, instead of deferring to the global
option parser.)
Another advantage is that once we drop the deprecated options,
converting the remaining things will be easier, because we obviously
don't need to add the compatibility hacks.
Actually using this mechanism is left to the next commit, to keep the
diff noise down.
Instead of requiring each VO or AO to manually add members to MPOpts and
the global option table, make it possible to register them automatically
via vo_driver/ao_driver.global_opts members. This avoids modifying
options.c/options.h every time, including having to duplicate the exact
ifdeffery used to enable a driver.
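Roughly, registration becomes a loop over the driver lists at startup; a
sketch with simplified stand-in structs and a hypothetical
register_sub_options() helper:

    #include <stddef.h>

    /* Simplified stand-ins for mpv's real structs (illustrative only). */
    struct m_sub_options;                          /* a driver's option table */
    struct ao_driver {
        const char *name;
        const struct m_sub_options *global_opts;   /* may be NULL */
    };

    /* Hypothetical helper that merges a sub-option table into the global
     * option list. */
    void register_sub_options(const struct m_sub_options *opts);

    /* Sketch: walk the (NULL-terminated) driver list once at startup and
     * register each driver's options, instead of hardcoding them. */
    static void register_ao_options(const struct ao_driver *const *drivers)
    {
        for (int i = 0; drivers[i]; i++) {
            if (drivers[i]->global_opts)
                register_sub_options(drivers[i]->global_opts);
        }
    }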
Normally I'd prefer a bunch of smaller functions with fewer parameters
over a single function with a lot of parameters. But future changes will
require messing with the parameters in a slightly more complex way, so a
combined function will be needed anyway. The now-unused "global"
parameter is required for later as well.
Positional parameters cause problems because they can be ambiguous with
flag options. If a flag option is removed or turned into a non-flag
option, it'll usually be interpreted as a value for the first sub-option
(as positional parameter), resulting in very confusing error messages.
This changes it into a simple "option not found" error.
I don't expect that anyone really used positional parameters with --vo
or --ao. Although the docs for --ao=pulse seem to encourage positional
parameters for the host/sink options, which means it could possibly
annoy some PulseAudio users.
--vf and --af are still mostly used with positional parameters, so this
must be a configurable option in the option parser.
These are different AVCodecContext fields. pkt_timebase is the correct
one for identifying the unit of packet/frame timestamps when decoding,
while time_base is for encoding. Some decoders also overwrite the
time_base field with some unrelated codec metadata.
pkt_timebase does not exist in Libav, so an #if is required.
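A sketch of the #if, using the usual FFmpeg-vs-Libav micro-version test
(the guard is illustrative, not mpv's exact code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/rational.h>

    /* Sketch: tell the decoder the unit of packet timestamps. */
    static void set_decoder_timebase(AVCodecContext *avctx, AVRational tb)
    {
    #if LIBAVCODEC_VERSION_MICRO >= 100      /* FFmpeg only */
        avctx->pkt_timebase = tb;            /* decoding-side timebase */
    #else
        (void)avctx; (void)tb;               /* Libav: field does not exist */
    #endif
    }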
There are situations where the resampler is destroyed and recreated
during playback. If recreating the resampler unexpectedly fails, the
filter function is supposed to return an error. This wasn't done
correctly, because get_out_samples() accessed the resampler before the
check. Move the check up to fix this.
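The gist of the fix, in a simplified form (struct and function names here
are illustrative, not the filter's exact code):

    #include <libswresample/swresample.h>

    struct priv { SwrContext *swrctx; };  /* simplified stand-in */

    /* Sketch: bail out before touching the resampler if recreating it
     * failed, instead of calling into it first and checking afterwards. */
    static int filter_sketch(struct priv *s, int in_samples)
    {
        if (!s->swrctx)
            return -1;  /* propagate the error to the filter chain */
        int out_samples = swr_get_out_samples(s->swrctx, in_samples);
        return out_samples < 0 ? -1 : out_samples;
    }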
Instead of passing through double float timestamps opaquely, pass real
timestamps. Do so by always setting a valid timebase on the
AVCodecContext for audio and video decoding.
Specifically try not to round timestamps to a too coarse timebase, which
could round off small adjustments to timestamps (such as for start time
rebasing or demux_timeline). If the timebase is considered too coarse,
make it finer.
This gets rid of the need to do this specifically for some hardware
decoding wrapper. The old method of passing through double timestamps
was also a bit questionable. While libavcodec is not supposed to
interpret timestamps at all if no timebase is provided, it was
needlessly tricky. Also, it actually does compare them with
AV_NOPTS_VALUE. This change will probably also reduce confusion in the
future.
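A sketch of the "refine if too coarse" idea (the threshold and the
doubling strategy are illustrative, not necessarily what mpv does):

    #include <limits.h>
    #include <libavutil/rational.h>

    /* Sketch: pick a timebase fine enough that small timestamp adjustments
     * (start-time rebasing, demux_timeline edits) don't get rounded away. */
    static AVRational pick_decoder_timebase(AVRational tb)
    {
        if (tb.num <= 0 || tb.den <= 0)
            return (AVRational){1, 1000000};  /* fall back to microseconds */
        while (av_q2d(tb) > 1e-6 && tb.den <= INT_MAX / 2)
            tb.den *= 2;                      /* too coarse: make it finer */
        return tb;
    }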
The code actually kept going out of EOF mode into resync mode and back into
EOF mode when the playloop had to wait after an audio EOF caused by the
endpts. This would break seamless looping (as added by the next commit).
Apply endpts earlier, to ensure the filter_audio() function always
returns AD_EOF in this case.
The idiotic ao_buffer makes this an amazing pain in the ass.
The touched code is for seek resets and such - we simply want to reset
the entire resample state. But I noticed that after a seek a tiny bit
of audio was missing (mpv's audio sync code inserted silence to
compensate).
It turns out swr_drop_output() either does not reset some internal state
as we expect, or it's designed to drop not only buffered samples, but
also future samples.
On the other hand, libavresample's avresample_read() does not have this
problem. (It is also pretty explicit in what it does - return/skip
buffered data, nothing else.)
Is the libswresample behavior a bug? Or a feature? Does nobody even
know? Who cares - use the hammer to unfuck the situation. Destroy and
deallocate the libswresample context and recreate it. On every seek.
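The reset then boils down to something like this (option re-application
trimmed; the struct is a stand-in, not the filter's real private state):

    #include <libswresample/swresample.h>

    struct priv { SwrContext *swrctx; };  /* simplified stand-in */

    /* Sketch: instead of swr_drop_output(), throw the whole context away
     * on every seek reset and build a fresh one. */
    static int reset_resampler(struct priv *s)
    {
        swr_free(&s->swrctx);        /* also sets the pointer to NULL */
        s->swrctx = swr_alloc();
        if (!s->swrctx)
            return -1;
        /* ...re-apply all channel/rate/format options and call swr_init()... */
        return 0;
    }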
The demuxer layer usually doesn't log per-stream information, and even
the replaygain information was logged only if it came from tags.
So log it in af_volume instead.
With the previous commit, ao_alsa.c now has 3 possible ways to pause
playback. Actually all 3 of them need get_delay() to fake its return
value, so don't duplicate that code.
Also much of the code looks a bit questionable when considering
inconsistent pause/resume calls from outside, so ignore redundant calls.
push.c does not handle this automatically, and AOs using push.c have to
handle it themselves. Also, ALSA is low-level enough that it needs
explicit support in user code. At least I haven't found any option that
does this.
We can still get away relatively cheaply by abusing underflow handling
for this. ao_alsa.c already configures ALSA to handle underflows by
playing silence. So we purposely induce an underflow when opening the
device, as well as when pausing or resetting the device.
This introduces minor misbehavior: it doesn't account for the additional
delay the initial silence adds, unless the device has fully played the
fragment of silence when the player starts sending data to it. But
nobody cares.
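The software-parameter side of that trick looks roughly like this (error
handling shortened; a sketch of the configuration, not ao_alsa.c
verbatim):

    #include <alsa/asoundlib.h>

    /* Sketch: configure ALSA to fill underruns with silence, so a
     * deliberately induced underflow acts as cheap pause/pre-roll padding. */
    static int enable_silence_on_underrun(snd_pcm_t *pcm, snd_pcm_sw_params_t *sw)
    {
        snd_pcm_uframes_t boundary;
        int r = snd_pcm_sw_params_get_boundary(sw, &boundary);
        if (r < 0)
            return r;
        /* Threshold 0 + size "boundary": keep the ring buffer padded with
         * silence beyond the valid data, instead of reporting an XRUN. */
        if ((r = snd_pcm_sw_params_set_silence_threshold(pcm, sw, 0)) < 0)
            return r;
        if ((r = snd_pcm_sw_params_set_silence_size(pcm, sw, boundary)) < 0)
            return r;
        return snd_pcm_sw_params(pcm, sw);
    }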
Exactly the same situation as with ao_alsa in commit 0b144eac (except
that we can detect the situation better under wasapi).
Essentially, wasapi will allow us to output any sample format, and not
just the one configured by the user in the audio system settings.