MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This commit
deduplicates essentially the last remaining duplicated fields.
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
This rewrites the audio decode loop to some degree. Audio filters don't
do refcounted frames yet, so af.c contains a hacky "emulation".
Remove some of the weird heuristic-heavy code in dec_audio.c. Instead of
estimating how much audio we need to filter, we always filter full
frames. Maybe this should be adjusted later: in case filtering increases
the amount of audio data, we should try not to buffer too much
filter output by reducing the input that is fed at once.
For ad_spdif.c and ad_mpg123.c, we don't avoid extra copying yet - it
doesn't seem worth the trouble.
This gets rid of this warning:
Could not update timestamps for skipped samples.
This required an API addition to FFmpeg (otherwise it would instead
do arithmetic on the timestamps itself), so whether it works depends
on the FFmpeg version.
Let codec_tags.c do the messy mapping.
In theory we could simplify further by making demux_mkv.c directly use
codec names instead of the MPlayer-inherited "internal FourCC" business,
but I'd rather not touch this - it would just break things.
For a while, we used this to transfer PCM from demuxer to the filter
chain. We had a special "codec" that mapped what MPlayer used to do
(MPlayer passes the AF sample format over an extra field to ad_pcm,
which specially interprets it).
Do this by providing a mp_set_pcm_codec() function, which describes a
sample format in a generic way, and sets the appropriate demuxer header
fields so that libavcodec interprets it correctly. We use the fact that
libavcodec has separate PCM decoders for each format. These are
systematically named, so we can easily map them.
This has the advantage that we can change the audio filter chain as we
like, without losing features from the "rawaudio" demuxer. In fact, this
commit also gets rid of the audio filter chain formats completely.
Instead have an explicit list of PCM formats. (We could even just have
the user pass libavcodec PCM decoder names directly, but that would be
annoying in other ways.)
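To illustrate the naming scheme (this helper is purely illustrative, not
the actual mp_set_pcm_codec() implementation), mapping generic sample
parameters to a libavcodec PCM decoder name could look like this:

    #include <stdbool.h>
    #include <stdio.h>

    // Illustrative only: libavcodec names its PCM decoders systematically,
    // e.g. "pcm_u8", "pcm_s16le", "pcm_s24be", "pcm_f32le", so a generic
    // description of the sample format maps directly to a decoder name.
    static void pcm_codec_name(char *buf, size_t size, int bits,
                               bool is_signed, bool is_float, bool big_endian)
    {
        snprintf(buf, size, "pcm_%c%d%s",
                 is_float ? 'f' : (is_signed ? 's' : 'u'),
                 bits,
                 bits > 8 ? (big_endian ? "be" : "le") : "");
    }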
Until now, the audio chain could handle both little endian and big
endian formats. This actually doesn't make much sense, since the audio
API and the HW will most likely prefer native formats. Or at the very
least, it should be trivial for audio drivers to do the byte swapping
themselves.
From now on, the audio chain contains native-endian formats only. All
AOs and some filters are adjusted. af_convertsignendian.c is now wrongly
named, but the filter name is adjusted. In some cases, the audio
infrastructure was reused on the demuxer side, but that is relatively
easy to rectify.
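For reference, the byte swapping an audio driver might have to do itself
is indeed trivial; a standalone sketch for 16-bit samples:

    #include <stddef.h>
    #include <stdint.h>

    // Swap 16-bit samples in place, for an audio API that insists on the
    // opposite byte order of the host. (Illustrative; no AO does exactly this.)
    static void swap_samples_16(uint16_t *samples, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            samples[i] = (uint16_t)((samples[i] >> 8) | (samples[i] << 8));
    }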
This is quite an intrusive and radical change. It's possible that it will
break some things (especially obscure or non-Linux setups), so watch
out for regressions. It's probably still better to do it the bulldozer
way, since slow transition and researching foreign platforms would take
a lot of time and effort.
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.
The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
Use OPT_KEYVALUELIST() for all places where AVOptions are directly set
from mpv command line options. This allows escaping values, better
diagnostics (also no more "pal"), and somehow reduces code size.
Remove the old crappy option parser (av_opts.c).
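Applying such a key/value list boils down to calls like the following
(the pairs array is an assumed representation, not the actual
OPT_KEYVALUELIST() storage; av_opt_set() is real libavutil API):

    #include <libavutil/opt.h>

    // Sketch: apply parsed key/value pairs to an AVOptions-enabled object.
    // 'pairs' is a NULL-terminated array of alternating keys and values.
    static int apply_avopts(void *av_obj, char **pairs)
    {
        for (int i = 0; pairs && pairs[i] && pairs[i + 1]; i += 2) {
            int r = av_opt_set(av_obj, pairs[i], pairs[i + 1],
                               AV_OPT_SEARCH_CHILDREN);
            if (r < 0)
                return r; // unknown option or bad value: report it properly
        }
        return 0;
    }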
This commit makes audio decoding non-blocking. If e.g. the network is
too slow the playloop will just go to sleep, instead of blocking until
enough data is available.
For video, this was already done with commit 7083f88c. For audio, it's
unfortunately much more complicated, because the audio decoder was used
in a blocking manner. Large changes are required to get around this.
The whole playback restart mechanism must be turned into a statemachine,
especially since it has close interactions with video restart. Lots of
video code is thus also changed.
(For the record, I don't think switching this code to threads would
make this conceptually easier: the code would still have to deal with
external input while blocked, so these in-between states do get visible
[and thus need to be handled] anyway. On the other hand, it certainly
should be possible to modularize this code a bit better.)
This will probably cause a bunch of regressions.
This commit mainly moves the initial decoding of data (done to probe the
audio format) to generic code. This will make it easier to make audio
decoding non-blocking in a later commit.
This commit also changes how decoders return data: instead of having
them write the data into a prepared buffer, they return a reference to
an internal buffer (by setting dec_audio.decoded). This makes it
significantly easier to handle audio format changes, since the decoders
don't really need to care anymore.
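Roughly, the new contract between decoder driver and generic code looks
like this (the struct layout is approximated; only the dec_audio.decoded
name is from the actual code):

    // Approximate illustration: the decoder leaves a reference to its own
    // output in da->decoded; generic code consumes it and detects format
    // changes by comparing successive outputs.
    struct audio_out_sketch { int samplerate, channels, format; void *data; };

    struct dec_audio_sketch {
        struct audio_out_sketch *decoded; // set by decoder, read by generic code
    };

    static void consume_output(struct dec_audio_sketch *da)
    {
        if (!da->decoded)
            return;          // this decode call produced nothing
        // ...check for a format change, then append to the output buffer...
        da->decoded = NULL;  // generic code has taken the reference
    }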
If the decoder didn't set a samplerate, it was initialized from the
container samplerate.
This probably didn't make much sense, because it's passed to the
decoder on initialization (so it could definitely use it). It's an
artifact from commit 66a9eb57 (which removed some Matroska-specific
nonsense), and I've never seen it actually happen since it was made into a
warning. Just get rid of it.
The i_bps members of the sh_audio and sh_video structs are mostly used
for displaying the average audio and video bitrates. Keeping them in
bits-per-second avoids truncating them to bytes-per-second and changing
them back later on.
Set refcounted_frames, because in some versions of libavcodec mixing the
new AVFrame API and non-refcounted decoding could cause memory
corruption. Likewise, it's probably still required to unref a frame
before calling the decoder.
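In terms of the libavcodec API of that era, the pattern is roughly as
follows (avctx->refcounted_frames = 1 is set once at init;
avcodec_decode_audio4() has since been removed from FFmpeg):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    // Sketch: with refcounted_frames enabled at init time, the output frame
    // is unreffed before every decode call so old and new data never mix.
    static int decode_one(AVCodecContext *avctx, AVFrame *frame, AVPacket *pkt)
    {
        av_frame_unref(frame);      // drop the previous output first
        int got_frame = 0;
        int ret = avcodec_decode_audio4(avctx, frame, &got_frame, pkt);
        return ret < 0 ? ret : got_frame;
    }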
request_channels has been deprecated for years (request_channel_layout
is the replacement), but it appears it's still needed despite the
deprecation at least on older libavcodec versions.
So still set request_channels, but do it with the AVOption API, which
hides the deprecation warning. This should also prevent mpv getting
trashed when libavcodec happens to bump its major version.
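Concretely, this amounts to something like the following (assuming the
option is still exported by the libavcodec in use):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    // Sketch: set the deprecated option through the AVOption API instead of
    // assigning the struct field, which would trigger a deprecation warning.
    static void set_request_channels(AVCodecContext *avctx, int num_channels)
    {
        av_opt_set_int(avctx, "request_channels", num_channels, 0);
    }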
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have a log level
for enabling debug messages, it seems utterly useless.
Normally, audio decoders don't have a decoder delay, so the code was
fine. But FFmpeg supports multithreaded decoding for some audio codecs,
which introduces such a delay.
The delay means that we won't get decoded audio for the first few
packets, and that we need to do something to get the trailing audio
still buffered in the decoder when reaching EOF.
Two changes are needed to deal with the delay (a sketch follows the list):
- If EOF is reached, pass a "flush" packet to the decoder to return the
  buffered audio. Such a flush packet is automatically set up when
calling mp_set_av_packet() with a NULL packet.
- Use the PTS returned by the decoder, instead of the packet's. This is
important to get correct timestamps for decoded audio. Ignoring this
  would result in offsetting the audio playback time by the decoder
delay. Note that we can still use the timestamp of the first packet
to get the timestamp for the start of the audio.
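A condensed sketch of both points, using the decode API of that era (the
flush packet below is roughly what mp_set_av_packet() with a NULL packet
sets up):

    #include <libavcodec/avcodec.h>

    // Sketch: at EOF, feed a flush packet (NULL data, zero size) to drain
    // the decoder's buffered audio, and use the returned frame's own
    // timestamp rather than the input packet's.
    static int decode_or_flush(AVCodecContext *avctx, AVFrame *frame,
                               AVPacket *pkt /* NULL at EOF */)
    {
        AVPacket flush;
        if (!pkt) {
            av_init_packet(&flush);
            flush.data = NULL;
            flush.size = 0;
            pkt = &flush;
        }
        av_frame_unref(frame);
        int got_frame = 0;
        int ret = avcodec_decode_audio4(avctx, frame, &got_frame, pkt);
        // On success, frame->pkt_pts carries the (delay-corrected) timestamp
        // to use for the decoded audio.
        return ret < 0 ? ret : got_frame;
    }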
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
mp_pts_from_av(), which simply converts timestamps in a way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
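An illustrative version of that conversion (not the exact mpv code; the
"no PTS" value is a placeholder):

    #include <stdint.h>
    #include <string.h>
    #include <libavutil/avutil.h>    // AV_NOPTS_VALUE
    #include <libavutil/rational.h>  // AVRational, av_q2d

    // With a timebase, scale the integer timestamp to seconds; without one,
    // fall back to reinterpreting the int64_t bits as an mpv-style double.
    static double pts_from_av_sketch(int64_t av_pts, const AVRational *tb)
    {
        if (tb) {
            if (av_pts == AV_NOPTS_VALUE)
                return -1e300;          // stand-in for mpv's "no PTS" value
            return av_pts * av_q2d(*tb);
        }
        double d;
        memcpy(&d, &av_pts, sizeof(d)); // the old reinterpret-cast method
        return d;
    }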
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
This includes the case when lavc decodes audio with more than 8
channels, which our audio chain currently does not support.
The changes in ad_lavc.c are just simplifications. The code tried to
avoid overriding global parameters if it found something invalid, but
that is not needed anymore.
Apparently just 5 packets is not enough for the initial audio decode
(which is needed to find the format). The old code (before the recent
refactor) appeared to use 5 packets, but there were apparently other
code paths which in the end amounted to more than 5 packets being read.
The sample that failed (see github issue #368) needed 9 packets.
Fixes #368.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to a separate field in the decoder
context, the dec_audio.decoded field. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
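For reference, non-interleaved (planar) output from libavcodec keeps each
channel in its own plane, roughly consumed like this:

    #include <libavcodec/avcodec.h>
    #include <libavutil/samplefmt.h>

    // Sketch: planar formats put one channel per plane in extended_data[];
    // packed formats interleave all channels in extended_data[0].
    static void walk_planes(const AVFrame *frame, int channels)
    {
        int planar = av_sample_fmt_is_planar((enum AVSampleFormat)frame->format);
        int planes = planar ? channels : 1;
        for (int p = 0; p < planes; p++) {
            const uint8_t *plane = frame->extended_data[p];
            (void)plane; // hand each plane to the audio chain as-is
        }
    }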
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
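The caller side that this enables looks roughly like this (all names here
are illustrative, not the actual dec_audio interface):

    // Illustrative caller loop: instead of each decoder filling a buffer up
    // to 'minlen', the caller just invokes the decode callback again until
    // it has enough samples, or the decoder reports EOF or an error.
    struct dec_loop_sketch {
        int (*decode)(struct dec_loop_sketch *da); // <0 error, 0 EOF, >0 samples added
        int buffered_samples;
    };

    static int fill_samples(struct dec_loop_sketch *da, int wanted_samples)
    {
        while (da->buffered_samples < wanted_samples) {
            int r = da->decode(da);   // typically decodes at most one packet
            if (r <= 0)
                return r;             // EOF or error: let the caller decide
            da->buffered_samples += r;
        }
        return 1;
    }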
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for old data, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
This affects 64 bit floats and big endian integer PCM variants
(basically crap nobody uses). Possibly not all MS-muxed files work, but
I couldn't get or produce any samples.
Remove a bunch of format tags that are not needed anymore. Most of these
were used by demux_mov, which is long gone. Repurpose/abuse 'twos' as
mpv-internal tag for dealing with the PCM variants mentioned above.
This member was redundant. sh_audio->sample_format indicates the sample
size already.
The TV code is a bit strange: the redundant sample size was part of the
internal TV interface. Assume it's really redundant and not something
else. The PCM decoder ignores the sample size anyway.
There are some Microsoft Windows symbols which are traditionally used by
the MPlayer core, because it used to be convenient (AVI was the big
format, using binary Windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the Windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
This is basically a libavcodec API oddity: it can happen that
avcodec_decode_audio4() returns 0 (meaning 0 bytes were consumed). It
requires you to feed the complete packet again to decode the full
packet, and to successfully decode the following packets.
We ignored this case with the argument that there's the danger of an
endless decode loop (because nothing of that packet is apparently
decoded, so it would retry forever), but change it in order to decode
mpc8 files correctly.
Also add some comments to explain the mess.
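A sketch of how the case can be handled (not the actual ad_lavc.c logic;
avcodec_decode_audio4() is the API of that era):

    #include <libavcodec/avcodec.h>

    // avcodec_decode_audio4() may legitimately return 0 (0 bytes consumed).
    // In that case the same, complete packet must be fed again on the next
    // call instead of being dropped, or subsequent packets won't decode.
    static int bytes_consumed(AVCodecContext *avctx, AVFrame *frame,
                              AVPacket *pkt)
    {
        av_frame_unref(frame);
        int got_frame = 0;
        int ret = avcodec_decode_audio4(avctx, frame, &got_frame, pkt);
        if (ret < 0)
            return pkt->size; // decode error: skip the packet, don't retry forever
        return ret;           // can be 0: keep the packet queued and retry later
    }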