Remove the nonsensical print_lock too.
Things that are called from the option validator are not converted yet,
because the option parser doesn't provide a log context yet.
This could output additional, potentially useful error messages. But the
callback is global and not library-safe, and would require us to add
additional state. Remove it, because it's obviously too much of a pain.
Also, it seems ALSA prints stuff to stderr anyway.
request_channels has been deprecated for years (request_channel_layout
is the replacement), but it appears it's still needed despite the
deprecation at least on older libavcodec versions.
So still set request_channels, but do it with the avoption API, which
hides the deprecation warning. This should also prevent mpv from getting
trashed when libavcodec happens to bump its major version.
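For reference, the avoption way of doing this looks roughly like the
following (just a sketch; the channel count 2 is only an example):

    // Setting the option by name avoids referencing the deprecated
    // AVCodecContext.request_channels field directly, so there's no
    // deprecation warning, and at worst a runtime "option not found"
    // error if the option disappears in a future libavcodec.
    av_opt_set_int(avctx, "request_channels", 2, 0);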
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have a log level
for enabling debug messages, it seems utterly useless.
The previous RING_BUFFER_COUNT value, 64, would have ao_wasapi buffer 64
frames of audio in the ring buffer; a delay of 1280ms, which is clearly
overkill for everything. A value of 8 buffers 8 frames for a total of
160ms.
When get_space was converted to returning samples instead of bytes, a
unit type mismatch in get_delay's calculation returned bogus values. Fix
by converting get_space's value back to bytes.
Fixes playback with ao_wasapi when reaching EOF, or seeking past it.
This can be reproduced with:
mpv short.wav -af 'lavfi="aecho=0.8:0.9:5000|6800:0.3|0.25"'
An audio file that is just 1-2 seconds long should play for 8-9 seconds,
with audible echo towards the end.
The code assumes that when playing with AF_FILTER_FLAG_EOF, the filter
will either produce output or have all remaining data flushed. I'm not
really sure whether this really works if there are multiple filters with
EOF handling in the chain. To handle it correctly, af_lavfi should retry
filtering if 1. EOF flag is set, 2. there were input samples, and 3. no
output samples were produced. But currently it seems to work well enough
anyway.
The new signature is actually closer to how it really works, and
someone who is not familiar with the API and how it works might make fewer
fatal mistakes with the new signature than with the old one. Pretty weird.
Do this to sneak in a flags parameter, which will later be used to flush
remaining data of at least vf_lavfi.
This will make af_channels output a channel layout that is compatible
with any destination layout. Not sure if that's a good idea though,
since the way the AO chooses a layout is perhaps less predictable. On the
other hand, using the old MPlayer standard layouts doesn't make much
sense either. We'll see whether this improves or breaks someone's use
case.
Apparently this stopped working after some planar changes (broken format
negotiation). Radically change option parsing in an incompatible way.
Suggest alternatives to this filter, since it barely has any importance
anymore.
Normally, audio decoders don't have a decoder delay, so the code was
fine. But FFmpeg supports multithreaded decoding for some audio codecs,
which introduces such a delay.
The delay means that we won't get decoded audio for the first few
packets, and that we need to do something to get the trailing audio
still buffered in the decoder when reaching EOF.
Two changes are needed to deal with the delay:
- If EOF is reached, pass a "flush" packet to the decoder to return the
buffered audio. Such a flush packet is automatically set up when
calling mp_set_av_packet() with a NULL packet.
- Use the PTS returned by the decoder, instead of the packet's. This is
important to get correct timestamps for decoded audio. Ignoring this
would result in offsetting the audio playback time by the decoder
delay. Note that we can still use the timestamp of the first packet
to get the timestamp for the start of the audio.
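In libavcodec terms, these two points amount to roughly the following
(a sketch only; the real code goes through mp_set_av_packet() and mpv's
timestamp helpers):

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;   // NULL data/0 size == flush packet, drains the decoder
    pkt.size = 0;
    int got_frame = 0;
    if (avcodec_decode_audio4(avctx, frame, &got_frame, &pkt) >= 0
        && got_frame) {
        // prefer the timestamp reported with the decoded frame over the
        // packet's, so the decoder delay doesn't shift all timestamps
        int64_t pts = frame->pkt_pts;
        // ... output the frame with this pts ...
    }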
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps in a way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
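Roughly, mp_pts_from_av() amounts to this (a sketch; the actual signature
and NOPTS handling may differ):

    double mp_pts_from_av(int64_t av_pts, AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        if (tb)
            return av_pts * av_q2d(*tb);   // timebase set: scale to seconds
        // legacy path: the int64_t is really a reinterpret-cast double
        union { int64_t i; double d; } u = { .i = av_pts };
        return u.d;
    }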
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
There are some use cases for this. For example, you can use it to set
defaults of automatically inserted filters (like af_lavrresample). It's
also useful if you have a non-trivial VO configuration, and want to use
--vo to quickly change between the drivers without repeating the whole
configuration in the --vo argument.
This is needed so that new processes (created with fork+exec) don't
inherit open files, which can be important for a number of reasons.
Since O_CLOEXEC is relatively new (POSIX.1-2008, before that Linux
specific), we #define it to 0 in io.h to prevent compilation errors on
older/crappy systems. At least this is the plan.
input.c creates a pipe. For that, add a mp_set_cloexec() function (which
is based on Weston's code in vo_wayland.c, but more correct). We could
use pipe2() instead, but that is Linux specific. Technically, we have a
race condition, but it won't matter.
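The helper is essentially the standard fcntl() dance (a sketch, not
necessarily the exact code):

    #include <fcntl.h>

    static void mp_set_cloexec(int fd)
    {
        if (fd >= 0) {
            int flags = fcntl(fd, F_GETFD);
            if (flags >= 0)
                fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
        }
    }

and io.h gets the fallback define, so open(..., O_CLOEXEC) at least still
compiles on systems that lack it:

    #ifndef O_CLOEXEC
    #define O_CLOEXEC 0
    #endif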
This partially reverts commit 7d152965. It turns out that at least some
ALSA drivers (at least snd-hda-intel) report incorrect audio delay with
non-native sample rates, even if the sample rate is only very slightly
different from the native one.
For example, 48000Hz is fine on my hda-intel system, while both 8000Hz
and 47999Hz lead to a delay off by 40ms (according to mpv's A/V
difference display), which suggests that something in ALSA is
calculating the delay using the wrong sample rate.
As an additional problem, with ALSA resampling enabled, using
48001Hz/float/2ch fails, while 49000Hz/float/2ch or 48001Hz/s16/2ch
work. With resampling disabled, all these cases obviously work, because
our own resampler simply doesn't refuse any of these formats.
Since some people want to use the ALSA resampler (because it's highly
configurable, supports multiple backends, etc.), we still allow enabling
ALSA resampling with an ao_alsa suboption.
Previous code was using the values of the AudioChannelLabel enum directly to
create the channel bitmap. While this was quite smart, it was pretty unreadable
and fragile (what if Apple changes the values of those enums?).
Change it to use a 'dumb' conversion table.
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
The code stopped at kAudioChannelLabel_TopBackRight and missed mapping for
5 more channel labels. These are in a completely different order than the
mpv ones, so they must be mapped manually.
This includes the case when lavc decodes audio with more than 8
channels, which our audio chain currently does not support.
The changes in ad_lavc.c are just simplifications. The code tried to
avoid overriding global parameters if it found something invalid, but
that is not needed anymore.
Resampling with non-ancient ALSA setups works fine, so there is no
need to keep this around. Furthermore, as of writing, the default
builtin resampler used by many ALSA setups (taken from libspeex)
actually has higher quality than the default resampling modes of
avresample and swresample.
Apparently just 5 packets is not enough for the initial audio decode
(which is needed to find the format). The old code (before the recent
refactor) appeared to use 5 packets, but there were apparently other
code paths which in the end amounted to more than 5 packets being read.
The sample that failed (see github issue #368) needed 9 packets.
Fixes #368.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to separate fields in the decoder
context, the dec_audio.decoded field. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.
The AF control commands used an elaborate and unnecessary organization
for the command constants. Get rid of all that and convert the
definitions to a simple enum. Also remove the control commands that
were not really needed, because they were not used outside of the
filters that implemented them.
And by "cleanup", I mean "remove". Actually, only remove the parts that
are redundant and doxygen noise. Move useful parts to the comment above
the function's implementation in the C source file.
When the decoder detects a format change, it overwrites the values
stored in sh_audio (this affects the members sample_format, samplerate,
channels). In the case when the old audio data still needs to be
played/filtered, the audio format as identified by sh_audio and the
format used for the decoder buffer can mismatch. In particular, they
will mismatch in the very unlikely but possible case the audio chain is
reinitialized while old data is draining during a format change.
Or in other words, sh_audio might contain the new format, while the
audio chain is still configured to use the old format.
Currently, the audio code (player/audio.c and init_audio_filters) accesses
sh_audio to get the current format. This is in theory incorrect for the
reasons mentioned above. Use the decoder buffer's format instead, which
should be correct at any point.
Commit 22b3f522 not only redid major aspects of audio decoding, but also
attempted to fix audio format change handling. Before that commit, data
that was already decoded but not yet filtered was thrown away on a
format change. After that commit, data was supposed to finish playing
before rebuilding filters and so on.
It was still buggy, though: the decoder buffer was initialized to the
new format too early, triggering an assertion failure. Move the reinit
call below filtering to fix this.
ad_mpg123.c needs to be adjusted so that it doesn't decode new data
before the format change is actually executed.
Add some more assertions to af_play() (audio filtering) to make sure
input data and configured format don't mismatch. This will also catch
filters which don't set the format on their output data correctly.
Regression due to planar_audio branch.
Simulate proper handling of AOPLAY_FINAL_CHUNK. Print when underruns
occur (i.e. running out of data). Add some options that control
simulated buffer and outburst sizes.
All this is useful for debugging and self-documentation. (Note that
ao_null always was supposed to simulate an ideal AO, which is the reason
why it fools people who try to use it for benchmarking video.)
This should allow it to select better fallback formats, instead of
picking the first encoder sample format if ao->format is not equal to
any of the encoder sample formats.
Not sure what is supposed to happen if the encoder provides no
compatible sample format (or no sample format list at all), but in this
case ao_lavc.c still fails gracefully.
The added function af_format_conversion_score() can be used to select
the best sample format to convert to in order to reduce loss and extra
conversion work.
It calculates a "loss" score when going from one format to another, and
for each conversion that needs to be done a certain score is subtracted.
Thus, if you have to convert from one format to a set of other formats,
you can calculate the score for each conversion, and pick the one with
the highest score.
Conversion between int and float is considered the worst case. One odd
consequence is that when converting from s32 to u8 or float, u8 will be
picked.
Test program used to develop this follows:
#define MAX_FMT 200

struct entry {
    const char *name;
    int score;
};

static int compentry(const void *px1, const void *px2)
{
    const struct entry *x1 = px1;
    const struct entry *x2 = px2;
    if (x1->score > x2->score)
        return 1;
    if (x1->score < x2->score)
        return -1;
    return 0;
}

int main(int argc, char *argv[])
{
    for (int n = 0; af_fmtstr_table[n].name; n++) {
        struct entry entry[MAX_FMT];
        int entries = 0;
        for (int i = 0; af_fmtstr_table[i].name; i++) {
            assert(i < MAX_FMT);
            entry[entries].name = af_fmtstr_table[i].name;
            entry[entries].score =
                af_format_conversion_score(af_fmtstr_table[i].format,
                                           af_fmtstr_table[n].format);
            entries++;
        }
        qsort(&entry[0], entries, sizeof(entry[0]), compentry);
        for (int i = 0; i < entries; i++) {
            printf("%s -> %s: %d \n", af_fmtstr_table[n].name,
                   entry[i].name, entry[i].score);
        }
    }
}
These must be written even if there was no "final frame", e.g. due to
the player being exited with "q".
Although the issue is mostly of a theoretical nature, as most audio codecs
don't need the final encoding calls with NULL data. Maybe will be more
relevant in the future.
This used to be in bytes, now it's in samples. Divide the value by 8
(assuming a typical audio format, float samples with 2 channels).
Fix some editing mistake or nonsense about the extra buffering added
(1<<x instead of x<<5).
Also sneak in a s/MPlayer/mpv/.
This changes option parsing as well as filter defaults slightly. The
default is now to encode to spdif (this is way more useful than writing
raw AC3 - what was this even useful for, other than writing broken
ac3-in-wav files?). The bitrate parameter is now always in kbps.
Apparently this was completely broken after commit 22b3f522. Basically,
this immediately locked up completely while decoding the first packet.
The reason was that the buffer calculations confused bytes and number of
samples. Also, EOF reporting was broken (wrong return code).
The special-casing of ad_mpg123 and ad_spdif (with DECODE_MAX_UNIT) is a
bit annoying, but will eventually be solved in a better way.
ao_null should simulate a "perfect" AO, but framestepping behaved quite
badly with it. Framestepping usually exposes problems with AOs dropping
their buffers on pause, and that's what happened here.
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for old data, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
libav* is generally freaking horrible, and might do bad things if the
data pointers passed to it are not aligned. One way to be sure that the
alignment is correct is allocating all pointers using av_malloc().
It's possible that this is not needed at all, though. For now it might
be better to keep this, since the mp_audio code is intended to replace
another buffer in dec_audio.c, which is currently av_malloc() allocated.
The original reason why this uses av_malloc() is apparently because
libavcodec used to directly encode into mplayer buffers, which is not
the case anymore, and thus (probably) doesn't make sense anymore.
(The commit subject uses the word "cargo cult", after all.)
Remove the awkward planarization. It had to be done because the AC3
encoder requires planar formats, but now we support them natively.
Try to simplify buffer management with mp_audio_buffer.
Improve checking for buffer overflows and out of bound writes. In
theory, these shouldn't happen due to AC3 fixed frame sizes, but being
paranoid is better.
The format negotiation is the same, except don't confusingly copy the
input format into af->data, just to overwrite it later. af->data should
always contain the output format, and the existing code was just a very
misguided use of the af_test_output() helper function.
Just set af->data to the output format immediately, and modify the input
format properly.
Also, if format negotiation fails (and needs another iteration), don't
initialize the libavcodec encoder.
Before this commit, the af_instance->mul/delay values were in bytes.
Using bytes is confusing for non-interleaved audio, so switch mul to
samples, and delay to seconds. For delay, seconds are more intuitive
than bytes or samples, because it's used for the latency calculation.
We also might want to replace the delay mechanism with real PTS
tracking inside the filter chain some time in the future, and PTS
will also require time-adjustments to be done in seconds.
For most filters, we just remove the redundant mul=1 initialization.
(Setting this used to be required, but not anymore.)
Since ao_openal simulates multi-channel audio by placing a bunch of
mono-sources in 3D space, non-interleaved audio is a perfect match for
it. We just have to remove the interleaving code.
ALSA supports non-interleaved audio natively using a separate API
function for writing audio. (Though you have to tell it about this on
initialization.) ALSA doesn't have separate sample formats for this,
so just pretend to negotiate the interleaved format, and assume that
all non-interleaved formats have an interleaved companion format.
Replace the code that used a single buffer with mp_audio_buffer. This
also enables non-interleaved output operation, although it's still
disabled, and no AO supports it yet.
Implementation wise, this could be much improved, such as using a
ringbuffer that doesn't require copying data all the time. This is
why we don't use mp_audio directly instead of mp_audio_buffer.
This comes with two internal AO API changes:
1. ao_driver.play now can take non-interleaved audio. For this purpose,
the data pointer is changed to void **data, where data[0] corresponds to
the pointer in the old API. Also, the len argument as well as the return
value are now in samples, not bytes. "Sample" in this context means the
unit of the smallest possible audio frame, i.e. sample_size * channels.
2. ao_driver.get_space now returns samples instead of bytes. (Similar to
the play function.)
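As a sketch, the affected callbacks now look about like this (the exact
declarations live in the internal AO header):

    // data[0] is the only used plane for interleaved formats; samples and
    // the return value count audio frames (sample_size * channels each)
    int (*play)(struct ao *ao, void **data, int samples, int flags);
    // free space in the output buffer, in samples
    int (*get_space)(struct ao *ao);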
Change all AOs to use the new API.
The AO API as exposed to the rest of the player still uses the old API.
It's emulated in ao.c. This is purely to split the commits changing all
AOs and the commits adding actual support for outputting N-I audio.
Allocate af_instance->data in generic code before filter initialization.
Every filter needs af->data (since it contains the output
configuration), so there's no reason why every filter should allocate
and free it.
Remove RESIZE_LOCAL_BUFFER(), and replace it with mp_audio_realloc_min().
Interestingly, most code becomes simpler, because the new function takes
the size in samples, and not in bytes. There are larger changes in
af_scaletempo.c and af_lavcac3enc.c, because these had copied and
modified versions of the RESIZE_LOCAL_BUFFER macro/function.
No AO can handle these, so it would be a problem if they get added
later, and non-interleaved formats get accepted erroneously. Let them
gracefully fall back to other formats.
Most AOs actually would fall back, but to an unrelated format. This is
covered by this commit too, and if possible they should pick the
interleaved variant if a non-interleaved format is requested.
Based on earlier work by Stefano Pigozzi.
There are 2 changes:
1. Instead of mp_audio.audio, mp_audio.planes[0] must be used.
2. mp_audio.len used to contain the size of the audio in bytes. Now
mp_audio.samples must be used. (Where 1 sample is the smallest unit
of audio that covers all channels.)
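As a rough before/after for a simple interleaved copy (sample_size here
is a stand-in for however the per-sample frame size is obtained, not a
real field name):

    /* before: */
    memcpy(dst->audio, src->audio, src->len);
    /* after (bytes = samples * per-sample frame size): */
    memcpy(dst->planes[0], src->planes[0], src->samples * sample_size);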
Also, some filters need changes to reject non-interleaved formats
properly.
Nothing uses the non-interleaved features yet, but this is needed so
that things don't just break when doing so.
This affects 64 bit floats and big endian integer PCM variants
(basically crap nobody uses). Possibly not all MS-muxed files work, but
I couldn't get or produce any samples.
Remove a bunch of format tags that are not needed anymore. Most of these
were used by demux_mov, which is long gone. Repurpose/abuse 'twos' as
mpv-internal tag for dealing with the PCM variants mentioned above.
Now to shift audio pts when outputting to e.g. avi, you need an explicit
facility to insert/remove initial samples, to avoid initial regions of
the video being sped up/slowed down.
One such facility is the delay filter in libavfilter.
My main problem with this is that the output format will be incorrect.
(This doesn't matter right now, because there are no samples output.)
This assumes all audio filters can deal with len==0 passed in for
filtering (though I wouldn't see why not).
A filter can still signal an error by returning NULL.
af_lavrresample has to be fixed, since resampling 0 samples makes
libavresample fail and return a negative error code. (Even though it's
not documented to return an error code!)
Apparently we were using FFmpeg-specific APIs. I have no idea whether
this code is correct on both FFmpeg and Libav (no examples, bad
doxygen... why do they even complain that people are using their APIs
incorrectly?), but it appears to work on FFmpeg. That was also the case
before commit ebc4ccb though, where it used internal libavformat
symbols.
Untested on Libav, Travis will tell us.
Set the PulseAudio stream title, just like the VO window title is set.
Refactor update_vo_window_title() so that we can use it for AOs too.
The ao_pulse.c bit is stolen from MPlayer.
In theory, af_volume could use separate volume levels for each channel.
But this was never used anywhere.
MPlayer implemented something similar before (svn r36498), but kept the
old path for some reason.
This member was redundant. sh_audio->sample_format indicates the sample
size already.
The TV code is a bit strange: the redundant sample size was part of the
internal TV interface. Assume it's really redundant and not something
else. The PCM decoder ignores the sample size anyway.
Also do some cosmetic changes, like merging definition and
initialization of local variables.
Remove an annoying debug mp_msg() from af_open(). It just printed the
command line parameters; if this is really needed, it could be added
to af.c instead (similar to what vf.c does).
Helps with readability. Also remove the ctx_opt_set_* helper macros and
use av_opt_set_* directly (I think these macros were used because the
lines ended up too long, but this commit removes two indentation levels,
giving more space).
This should allow making format negotiation much simpler, since it
takes the responsibility of comparing actual input and accepted input
formats away from the filters. It's also backwards compatible. Filters which
have expensive initialization still can use the old method.
I have no idea what these do, but apparently they are needed to inform
ALSA about spdif configuration. First, replace the literal constant "6"
for the AES0 parameter with the symbolic constants from the ALSA
headers (the final value is the same). Second, copy-paste some
funky-looking parameter setup from VLC's alsa output for setting the AES1,
AES2, AES3 parameters. (The code is actually not literally copy-pasted,
but does exactly the same.)
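For reference, the parameter setup amounts to roughly this (the constant
names are from the ALSA headers; the exact AES1/AES2/AES3 combination is
whatever VLC uses, and the device/spdif_base names here are made up):

    /* == 6, i.e. the same value as the old literal */
    unsigned aes0 = IEC958_AES0_NONAUDIO | IEC958_AES0_CON_NOT_COPYRIGHT;
    unsigned aes1 = IEC958_AES1_CON_ORIGINAL | IEC958_AES1_CON_PCM_CODER;
    unsigned aes2 = 0;
    unsigned aes3 = IEC958_AES3_CON_FS_48000; /* or matched to the rate */
    snprintf(device, sizeof(device), "%s:AES0=%u,AES1=%u,AES2=%u,AES3=%u",
             spdif_base, aes0, aes1, aes2, aes3);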
My small but non-zero hope is that this could make DTS-HD work, or at
least work in that direction. I can't test spdif stuff though, and
for DTS-HD not even opening the ALSA device succeeds on my system.
Using spdif with alsa requires adding magic parameters to the device
name, and the existing code tried to deal with the situation when the
user wanted to add parameters too.
Rewrite this code, in particular remove the duplicated parameter string
as preparation for the next commit. The new code is a bit stricter, e.g.
it doesn't skip spaces before and after '{' and '}'. (Just don't add
spaces.)
This accessed tons of private libavformat symbols all over the place.
Don't do this and convert all code to proper public APIs. As a
consequence, the code becomes shorter and cleaner (many things the code
tried are done by libavformat APIs).
It's probably better if all auto-inserted filters are removed when doing
an af_add operation. If they're really needed, they will be
automatically re-added.
Fix the error message. It used to be for an actual internal error, but
now it happens when format negotiation fails, e.g. when trying to use
spdif and real audio filters.
ao_lavc.c accesses ao->buffer, which I consider internal. The access was
done in ao_lavc.c/uninit(), which tried to get the left-over audio in
order to write the last (possibly partial) audio frame. The play()
function didn't accept partial frames, because the AOPLAY_FINAL_CHUNK
flag was not correctly set, and handling it otherwise would require an
internal FIFO.
Fix this by making sure that with gapless audio (used with encoding),
the AOPLAY_FINAL_CHUNK flag is set only once, instead of when each file ends.
Basically, move the hack in ao_lavc's uninit to uninit_player.
One thing can not be entirely correctly handled: if gapless audio is
active, we don't really know whether the AO is closed because the file
ended playing (i.e. we want to send the buffered remainder of the audio
to the AO), or whether the user is quitting the player. (The stop_play
flag is overwritten, fixing that is perhaps not worth it.) Handle this
by adding additional code to drain the AO and the buffers when playback
is quit (see play_current_file() change).
Test case: mpv avdevice://lavfi:sine=441 avdevice://lavfi:sine=441 -length 0.2267 -gapless-audio
It spams these in verbose mode. It's caused by format negotiation code
in af.c. It's for the mpv format to ffmpeg-format case, and that one is
very uninteresting. (The ffmpeg supported audio formats are practically
never extended.)
Turn the sample format definitions into an enum. (The format bits are
still macros.) The native endian versions of the new definitions don't
have a NE suffix anymore, although there are still compatibility defines
since too much code uses the NE variants.
Rename the format bits for special formats to help to distinguish them
from the actual definitions, e.g. AF_FORMAT_AC3 to AF_FORMAT_S_AC3.
The configure followed 5 different conventions for defines because the next
guy always wanted to introduce a new, better way to unify them [1]. For a
hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
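A nice side effect of the 0/1 style is that checks use #if instead of
#ifdef, so a misspelled or missing define can be turned into a hard error
with -Wundef (with #ifdef it just silently evaluates to false):

    #if HAVE_HURR
        do_hurr();   /* hypothetical call, just to show the pattern */
    #endif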
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk of collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
The code was selecting PA_CHANNEL_POSITION_MONO for MP_SPEAKER_ID_FC,
which is correct only with the "mono" channel layout, but not anything
else. Remove the mono entry, and handle mono separately.
See github issue #326.
Defining names like min, max etc. in an often used header is not really
a good idea.
Somewhat similar to MPlayer svn commit 36491, but don't use libavutil,
because that typically causes us sorrow.
Roughly follows MPlayer svn commits 36492 and 36493. We also remove
the volume peak reporting. (There are much better libavfilter filters
for this, I think.)
It's true that ALSA uses alloca() in some of its API functions, but
since this is hidden behind macros in the ALSA headers, we have no
reason to include alloca.h ourselves.
Might help with portability (FreeBSD).
Drop the author and comment fields. They were completely unused - not
even printed in verbose mode, just dead weight.
Also use designated initializers and drop redundant flags.
Set the input/output format in filter init. This doesn't change anything
functionally, but it makes the forced format show up in the filter chain
init verbose output (which sometimes prints the filter chain before all
filters have been configured).
af_format is the old audio conversion filter. It could do all possible
conversions supported by the audio chain. However, ever since the
addition of af_lavrresample, most conversions are done by
libav/swresample, and af_format is used as fallback.
Separate out the fallback cases and remove af_format. af_convert24 does
24 bit <-> 32 bit conversions, while af_convertsignendian does sign and
endian conversions. Maybe the way the conversions are split sounds a bit
odd. But the former changes the size of the audio data, while the latter
is fully in-place, so there's at least different buffer management.
This requires a quite complicated algorithm to make sure all these
"partial" conversion filters can actually get from one format to
another. E.g. s24le->s32be always requires convertsignendian and
convert24, but af.c has no idea what the intermediate format should
be. So I added a graph search (trying every possible format and
filter) to determine required format and filter. When I wrote this,
it seemed this was still better than messing everything into
af_lavrresample, but maybe this is overkill and I'll change my
opinion. For now, it seems nice to get rid of af_format though.
The AC3->IEC61937 conversion isn't supported anymore, but I don't think
this is needed anywhere. Most AOs test all formats explicitly, or use
the AF_FORMAT_IS_IEC61937() macro (which includes AC3).
One positive consequence of this change is that conversions always
include dithering (done by libav/swresample), instead of possibly going
through af_format, which doesn't do anything fancy.
Rename af_force to af_format. It's essentially compatible with command
line uses of af_format. We retain a compatibility alias for af_force.
At least not with ffmpeg.
Honestly, I have no idea how little endian AC3 works at all, since
ao_pcm doesn't do anything special about it, and treats it like s16le.
Maybe it's broken and ffmpeg has special logic to detect it.
Was disabled by default, was never used, internal support was
inconsistent and poor, and there has been virtually no interest in
creating translations.
And I don't even think that a terminal program should be translated.
This is something for (hypothetical) GUIs.
Changing volume when audio is disabled was a feature request (github
issue #215), and was introduced with commit 327a779.
But trying to fix github issue #280 (volume is not correct in no-audio
mode, and if audio is re-enabled, the volume set in no-audio mode isn't
set), I concluded that it's not worth the trouble and the current
implementation is questionable all around. (For example, you can't
change the real volume in no-audio mode, even if the AO is open - this
could happen with gapless audio.) It's hard to get right, and the
current mixer code is already hilariously overcomplicated. (Virtually
all of mixer.c is an amalgamation of various obscure corner cases.)
So just remove this feature again.
Note that "options/volume" and "options/mute" still can be used in
idle mode to adjust the volume used next time, though these properties
can't be used during playback and thus not in audio-only mode.
Querying the volume still "works" in audio-only mode, though it can
return bogus values.
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@36461 b3059339-0415-0410-9bf9-f77b7e298cf2
Fixes playback of http://mpg123.org/test/44and22.mp3
Cherry-picked from MPlayer SVN rev. #36461, a patch by
Thomas Orgis, committed by Reimar Döffinger.
Output silence to the output buffer during underruns. This removes small
occasional glitches that happen before the AUHAL is actually paused from the
`audio_pause` call.
Fixes #269
Trying to connect multiple mpv clients to JACK with the
JackUseExactName option would fail unless the user manually
specifies a unique client name. This changes the behavior
to automatically generate a unique name if the requested
one is already in use.
Calling them separately doesn't really make sense, and all existing
calls to them usually combined them. One subtle difference was that
af_init() didn't wipe the filter chain if initialization of the chain
itself failed, but that didn't really make sense anyway.
Also remove af_init() from the code for setting balance in mixer.c. The
mixer should be in the initialized state only if audio is fully
initialized, so the af_init() call made no sense.
Note that the filter "editing" code in command.c doesn't really do a
nice job of handling errors in case recreating an _old_ (known to work)
filter chain unexpectedly fails, and this obscure/rare case might be
differently handled after this change.
Note that this is intentionally never done if the AO or softvolume is
different, or if the current volume control method is thought to control
system wide volume (such as ALSA) or otherwise user controllable (such
as PulseAudio). The intention is to keep things robust and to avoid
messing with the user's audio settings as far as possible, while still
providing the ability to resume volume if it makes sense.
Refactor how mixer.c does volume/mute restoration and initialization.
Move the handling of --volume and --mute to mixer.c. Simplify the
implementation of these and hopefully fix bugs/strange behavior related
to using them as file-local options (this uses a somewhat dirty trick:
the option values are reverted to "auto" after initialization). Put most
code related to initialization and volume restoring in probe_softvol()
and restore_volume(). Having this code all in one place is less
confusing.
Instead of trying to detect whether to use softvol at runtime, detect it
at initialization time using AOCONTROL_GET_VOLUME (same with mute,
AOCONTROL_GET_MUTE). This implies we expect SET_VOLUME/SET_MUTE to work
if the GET variants work. Hopefully this is always the case.
This is also preparation for being able to change volume/mute settings
if audio is disabled, and for allowing restoring value with playback
resume.
Softvol always used a linear multiplier for volume control. This was
converted to dB, and then back to linear in af_volume. Remove this
nonsense. We still try to keep the command line argument to af_volume in
dB, though.
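(The round trip in question: dB = 20*log10(gain) on the way in, and
gain = 10^(dB/20) on the way out - an identity apart from rounding.)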
It's quite unlikely, but functions like mp_find_user_config_file() can
return NULL, e.g. if $HOME is unset.
Fix all the code that didn't check for this correctly yet.
This is basically a libavcodec API oddity: it can happen that
avcodec_decode_audio4() returns 0 (meaning 0 bytes were consumed). It
requires you to feed the complete packet again to decode the full
packet, and to successfully decode the following packets.
We ignored this case with the argument that there's the danger of an
endless decode loop (because nothing of that packet is apparently
decoded, so it would retry forever), but change it in order to decode
mpc8 files correctly.
Also add some comments to explain the mess.
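Sketch of the relevant call handling; the middle branch is the new part:

    int got_frame = 0;
    int ret = avcodec_decode_audio4(avctx, frame, &got_frame, &pkt);
    if (ret < 0) {
        // decode error
    } else if (ret == 0 && !got_frame) {
        // 0 bytes consumed and no frame: don't drop the packet; feed the
        // complete packet again on the next call (the mpc8 case)
    } else {
        // ret bytes consumed; advance pkt.data/pkt.size as usual
    }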
af_str2fmt_short(), which is used by the command line option parser,
allowed passing a hex number. The user could set arbitrary integers as
internal audio formats, even formats which don't exist or make no sense.
This is not very useful, so get rid of it.
Having to use -1 for that is generally quite annoying.
Audio formats are created from bitmasks, and it can't be excluded that
0 is a valid format. Fix this by adjusting AF_FORMAT_I so that it
is never 0. Along with AF_FORMAT_F and the special formats, all valid
formats are covered and guaranteed to be non-0.
It's possible that this commit will cause some regressions, as the
check for invalid audio formats changes a bit.
Use the new MP_ macros for some AOs instead of mp_msg.
Not all AOs are converted, and some only partially. In some cases, some
additional cosmetic changes are made.
The --speed option and the speed property used float. Change them to
double.
Change the commands that manipulate the property (speed_mult/add) to
double as well. Since the cycle command shares code with the add
command, we change that as well.
The reason for this change is that this allows better control over
speed, such as stepping by semitones. Using floats is also just plain
unnecessary.
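For example, one semitone corresponds to a speed factor of 2^(1/12), so a
key binding that runs

    speed_mult 1.059463

steps the playback speed up by one semitone.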
Using the default output audio unit should provide a much better user
experience, since it automatically changes the output device based on which
one becomes the default.
This was removed in d427b4fd. I now found a sample that causes underruns when
moving to a chapter and apparently this is also a problem when taking
screenshots.
Reverts one of the changes from 18777ecf. `kAudioObjectPropertyScopeOutput`
was introduced in the 10.8 SDK while `kAudioDevicePropertyScopeOutput` was
moved to `AudioHardwareDeprecated.h`. Since the deprecation is silent for now
(no warnings), just use the old constant.
Either way, they both evaluate to 'outp', and in the 10.8 SDK the deprecated
constant is defined in terms of the non-deprecated one.
Fixes #155