The i_bps members of the sh_audio and dec_video structs are mostly used
for displaying the average audio and video bitrates. Keeping them in
bits-per-second avoids truncating them to bytes-per-second and changing
them back later on.
Set refcounted_frames, because in some versions of libavcodec mixing the
new AVFrame API and non-refcounted decoding could cause memory
corruption. Likewise, it's probably still required to unref a frame
before calling the decoder.
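A minimal sketch of that decode step (not the actual mpv code; decode_one
is a made-up name, and refcounted_frames/avcodec_decode_audio4() are the
libavcodec API of that time):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    // decode_one is a made-up name; at init time the context would have
    // avctx->refcounted_frames = 1 set as described above.
    static int decode_one(AVCodecContext *avctx, AVFrame *frame,
                          AVPacket *pkt, int *got_frame)
    {
        av_frame_unref(frame);    // drop the previous reference first
        return avcodec_decode_audio4(avctx, frame, got_frame, pkt);
    }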
request_channels has been deprecated for years (request_channel_layout
is the replacement), but it appears it's still needed, at least on older
libavcodec versions, despite the deprecation.
So still set request_channels, but do it via the AVOption API, which
hides the deprecation warning. This should also prevent mpv from getting
trashed when libavcodec happens to bump its major version.
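Roughly, the AVOption call looks like this (a minimal sketch, not the
actual ad_lavc code; set_request_channels is a made-up wrapper name):

    #include <libavcodec/avcodec.h>
    #include <libavutil/opt.h>

    static void set_request_channels(AVCodecContext *avctx, int channels)
    {
        // going through the AVOption table avoids touching the deprecated
        // struct field directly (and thus the deprecation warning); the
        // call just returns an error if the option no longer exists
        av_opt_set_int(avctx, "request_channels", channels, 0);
    }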
Since m_option.h and options.h are extremely often included, a lot of
files have to be changed.
Moving path.c/h to options/ is a bit questionable, but since this is
mainly about access to config files (which are also handled in
options/), it's probably ok.
The tmsg stuff was for the internal gettext() based translation system,
which nobody ever attempted to use and thus was removed. mp_gtext() and
set_osd_tmsg() were also for this.
mp_dbg was once enabled in debug mode only, but since we have a log
level for enabling debug messages, it seems utterly useless.
Normally, audio decoders don't have a decoder delay, so the code was
fine. But FFmpeg supports multithreaded decoding for some audio codecs,
which introduces such a delay.
The delay means that we won't get decoded audio for the first few
packets, and that we need to do something to get the trailing audio
still buffered in the decoder when reaching EOF.
Two changes are needed to deal with the delay:
- If EOF is reached, pass a "flush" packet to the decoder to return the
buffered audio. Such a flush packet is automatically set up when
calling mp_set_av_packet() with a NULL packet (see the sketch after
this list).
- Use the PTS returned by the decoder, instead of the packet's. This is
important to get correct timestamps for decoded audio. Ignoring this
would result in offsetting the audio playback time by the decoder
delay. Note that we can still use the timestamp of the first packet
to get the timestamp for the start of the audio.
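For reference, the flush that mp_set_av_packet() sets up for a NULL
packet corresponds to the plain libavcodec pattern below (a hedged
sketch, not the mpv code; drain_audio is a made-up name, error handling
omitted):

    #include <libavcodec/avcodec.h>

    static int drain_audio(AVCodecContext *avctx, AVFrame *frame, int *got_frame)
    {
        AVPacket flush_pkt;
        av_init_packet(&flush_pkt);
        flush_pkt.data = NULL;    // a NULL/0 packet asks the decoder to
        flush_pkt.size = 0;       // return the audio it still has buffered
        return avcodec_decode_audio4(avctx, frame, got_frame, &flush_pkt);
    }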
If the timebase is set, it's used for converting the packet timestamps.
Otherwise, the previous method of reinterpret-casting the mpv style
double timestamps to libavcodec style int64_t timestamps is used.
Also replace the somewhat awkward mp_get_av_frame_pkt_ts() function with
mp_pts_from_av(), which simply converts timestamps the way the old
function did. (Plus it takes a timebase parameter, similar to the
addition to mp_set_av_packet().)
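A sketch of what such a conversion helper might look like (the real
mp_pts_from_av() may differ; pts_from_av is a stand-in name, and the
MP_NOPTS_VALUE value here is assumed for illustration):

    #include <stdint.h>
    #include <string.h>
    #include <libavutil/avutil.h>    // AV_NOPTS_VALUE
    #include <libavutil/rational.h>  // AVRational, av_q2d

    #define MP_NOPTS_VALUE (-1e300)  // "no timestamp"; actual value assumed

    static double pts_from_av(int64_t av_pts, AVRational *tb)
    {
        if (av_pts == AV_NOPTS_VALUE)
            return MP_NOPTS_VALUE;
        if (tb && tb->num > 0 && tb->den > 0)
            return av_pts * av_q2d(*tb);     // proper timebase conversion
        double d;
        memcpy(&d, &av_pts, sizeof(d));      // old reinterpret-cast fallback
        return d;
    }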
Note that this should not change anything yet. The code in ad_lavc.c and
vd_lavc.c passes NULL for the timebase parameters. We could set
AVCodecContext.pkt_timebase and use that if we want to give libavcodec
"proper" timestamps.
This could be important for ad_lavc.c: some codecs (opus, probably mp3
and aac too) have weird requirements about doing decoding preroll on the
container level, and thus require adjusting the audio start timestamps
in some cases. libavcodec doesn't tell us how much was skipped, so we
either get shifted timestamps (by the length of the skipped data), or we
give it proper timestamps. (Note: libavcodec interprets or changes
timestamps only if pkt_timebase is set, which by default it is not.)
This would require selecting a timebase though, so I feel uncomfortable
with the idea. At least this change paves the way, and will allow some
testing.
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
This includes the case when lavc decodes audio with more than 8
channels, which our audio chain currently does not support.
The changes in ad_lavc.c are just simplifications. The code tried to
avoid overriding global parameters if it found something invalid, but
that is not needed anymore.
Apparently just 5 packets is not enough for the initial audio decode
(which is needed to find the format). The old code (before the recent
refactor) appeared to use 5 packets, but there were apparently other
code paths which in the end amounted to more than 5 packets being read.
The sample that failed (see github issue #368) needed 9 packets.
Fixes #368.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to a separate field in the decoder
context, the dec_audio.decoded field. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.
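To illustrate the intended split (field names and the mp_audio_format
stand-in are made up here, not the actual stheader.h/dec_audio.h layout):

    struct mp_audio_format { int rate, channels, sample_format; }; // stand-in

    struct sh_audio_sketch {              // stream header: file-level info only
        struct mp_audio_format fmt;       // what the file headers claim
    };

    struct dec_audio_sketch {             // decoder state, managed by generic code
        struct sh_audio_sketch *header;   // read-only view of the stream header
        struct mp_audio_format decoded;   // format the decoder actually produced
    };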
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
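The caller side then amounts to something like this (all names are
illustrative stand-ins, not mpv's real API):

    struct audio_decoder {                        // illustrative stand-in
        int (*decode)(struct audio_decoder *da);  // decodes about one packet
        int buffered;                             // bytes currently buffered
    };

    static int fill_audio_buffer(struct audio_decoder *da, int minlen)
    {
        // the generic code loops; the decoder callback itself no longer
        // has to fill up to minlen on its own
        while (da->buffered < minlen) {
            int ret = da->decode(da);
            if (ret <= 0)
                return ret;    // error or EOF
        }
        return 1;
    }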
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for old data, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
This affects 64 bit floats and big endian integer PCM variants
(basically crap nobody uses). Possibly not all MS-muxed files work, but
I couldn't get or produce any samples.
Remove a bunch of format tags that are not needed anymore. Most of these
were used by demux_mov, which is long gone. Repurpose/abuse 'twos' as
an mpv-internal tag for dealing with the PCM variants mentioned above.
This member was redundant. sh_audio->sample_format indicates the sample
size already.
The TV code is a bit strange: the redundant sample size was part of the
internal TV interface. Assume it's really redundant and not something
else. The PCM decoder ignores the sample size anyway.
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definitions as their Windows counterparts, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
This is basically a libavcodec API oddity: it can happen that
avcodec_decode_audio4() returns 0 (meaning 0 bytes were consumed). It
requires you to feed the complete packet to the decoder again in order
to decode the full packet and the following packets successfully.
We ignored this case with the argument that there's the danger of an
endless decode loop (because nothing of that packet is apparently
decoded, so it would retry forever), but change it in order to decode
mpc8 files correctly.
Also add some comments to explain the mess.
Matroska has an output sample rate (OutputSamplingFrequency), which in
theory should be forced instead of whatever the decoder outputs. But it
appears no software (other than mplayer2 and mpv until now) actually
respects this. Even worse, there were broken files around, which played
correctly with (in theory) broken software, but not mplayer2/mpv. Hacks
were added to our code to play these files correctly, but they didn't
catch all cases.
Simplify this by doing what everyone else does, and always use the
decoder's sample rate instead. In particular, we try to handle all
sample rate issues like libavformat's Matroska demuxer does.
It turns out that some code that was removed earlier was still needed.
avcodec_decode_audio4() can decode packets "partially". In that case,
you have to "slice" the packet and call the decode function again.
Codecs which need this are obscure and few in number. One sample that
needs it is here:
rsync://fate-suite.ffmpeg.org/fate-suite/lossless-audio/luckynight-partial.shn
(This one decodes in rather small increments.)
The new code is much simpler than what has been removed earlier,
though. The fact that we own the packet returned by the demuxer helps
a lot.
Not sure what should happen if avcodec_decode_audio4() returns 0.
Currently, we throw away the packet in this case. We don't want to be
stuck in an endless loop (could happen if the decoder produces no
output either).
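The slicing loop is roughly (a hedged sketch, not the actual ad_lavc
code; decode_whole_packet is a made-up name, error handling trimmed):

    #include <libavcodec/avcodec.h>

    // This only works because we own the packet and may modify our copy.
    static void decode_whole_packet(AVCodecContext *avctx, AVFrame *frame,
                                    AVPacket pkt /* by value on purpose */)
    {
        while (pkt.size > 0) {
            int got_frame = 0;
            int ret = avcodec_decode_audio4(avctx, frame, &got_frame, &pkt);
            if (got_frame) {
                /* ...hand the decoded audio to the output buffer... */
            }
            if (ret <= 0)
                break;            // error, or nothing consumed: drop the rest
            pkt.data += ret;      // "slice" off the consumed part
            pkt.size -= ret;      // and feed the remainder again
        }
    }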
Partial packet reads were needed because the video/audio parsers were
working on top of them. So it could happen that a parser read a part of
a packet, and returned that to the decoder. With libavformat/libavcodec,
packets are already parsed, and everything is much simpler.
Most of the simplifications in ad_spdif could have been done earlier.
Remove some other stuff as well, like the questionable slave mode start
time reporting (could be replaced by proper code, but we don't bother).
Remove the unused skip_audio_frame() functionality as well (it was used
by old demuxers). Some functions become private to demux.c, like
demux_fill_buffer(). Introduce new packet read functions, which have
simpler semantics. Packets returned from them are owned by the caller,
and all packets in the demux.c packet queue are considered unread.
Remove special code that dropped subtitle packets with size 0. This
used to be needed because it caused special cases in the old code.
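Usage of the new read functions amounts to something like this
(signatures and the freeing helper are assumed, not copied from
demux.h):

    struct sh_stream;
    struct demux_packet;
    struct demux_packet *demux_read_packet(struct sh_stream *sh);  // assumed proto
    void free_demux_packet(struct demux_packet *pkt);              // assumed proto

    static void read_and_decode(struct sh_stream *sh,
                                void (*decode)(struct demux_packet *pkt))
    {
        struct demux_packet *pkt = demux_read_packet(sh);
        if (!pkt)
            return;               // nothing available (EOF or buffer empty)
        decode(pkt);              // the caller fully owns the packet...
        free_demux_packet(pkt);   // ...so the caller also frees it
    }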
We don't need to deal with partial packet reads, manually using an audio
parser, or having to call the libavcodec decoder multiple times per
packet.
Actually, I'm not sure about the last point. ffplay still does this, but
the ffmpeg demuxing.c example doesn't.
The audio parser was needed only by the "old" demuxers, and
demux_rawaudio. All other demuxers output already parsed packets.
demux_rawaudio is meant for raw audio, so using a parser with it
usually makes no sense. But you can also force it to read
compressed formats with fixed packet sizes, in which case the parser
would have been used. This use case is probably broken now, but you
will be able to do the same thing with libavformat demuxers.
Audio and video had their own (very similar) functions to initialize an
AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
struct). Add a common function for these.
Also use this function for sd_lavc_conv. This is actually a functional
change, as some libavformat subtitle demuxers add weird out-of-band
stuff as side-data.
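A sketch of what such a common helper does (the real mp_set_av_packet()
may differ; the demux_packet fields used here are assumed, and
set_av_packet is a stand-in name):

    #include <libavcodec/avcodec.h>

    struct demux_packet {        // simplified stand-in for mpv's packet struct
        unsigned char *buffer;
        int len;
        int keyframe;
    };

    static void set_av_packet(AVPacket *dst, struct demux_packet *mpkt)
    {
        av_init_packet(dst);     // note: does not touch data/size
        if (mpkt) {
            dst->data = mpkt->buffer;
            dst->size = mpkt->len;
            if (mpkt->keyframe)
                dst->flags |= AV_PKT_FLAG_KEY;
            // pts/timebase handling omitted in this sketch
        } else {
            dst->data = NULL;    // NULL input -> flush packet
            dst->size = 0;
        }
    }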
Using demux_rawaudio and the --rawaudio-channels option is useful for
testing channel map stuff. The libavcodec PCM decoder normalizes the
channel map to ffmpeg order, though. Prevent this by forcing the
original channel map when using the mp-pcm pseudo decoder entry (used by
demux_rawaudio and stream/tv.c only).
This helps passing the channel layout correctly from decoder to audio
filter chain. (Because that part "reuses" the demuxer level codec
parameters, which is very disgusting.)
Note that ffmpeg stuff already passed the channel layout via
mp_copy_lav_codec_headers(). So other than easier dealing with the
demuxer/decoder parameters mess, there's no real advantage to doing
this.
Make the --channels option accept a channel map. Since simple numbers
map to standard layouts with the given number of channels, this is
downwards compatible. Likewise for demux_rawaudio.
This is done in af_lavrresample now, and as part of format negotiation.
Also remove the remaining reorder_channel calls. They were redundant
and did nothing.
The old names have been deprecated a while ago, but were needed for
supporting older ffmpeg/libav versions. The deprecated identifiers
have been removed from recent Libav and FFmpeg git.
This change breaks compatibility with Libav 0.8.x and equivalent
FFmpeg releases.
Instead of putting codec header data into WAVEFORMATEX and
BITMAPINFOHEADER, pass it directly via AVCodecContext. To do this, we
add mp_copy_lav_codec_headers(), which copies the codec header data
from one AVCodecContext to another (originally, the plan was to use
avcodec_copy_context() for this, but it looks like this would turn
decoder initialization into an even worse mess).
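A sketch of the kind of copying involved (the real
mp_copy_lav_codec_headers() likely handles more fields;
copy_codec_headers is a made-up name):

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/mem.h>

    static void copy_codec_headers(AVCodecContext *dst, const AVCodecContext *src)
    {
        dst->codec_tag   = src->codec_tag;
        dst->bit_rate    = src->bit_rate;
        dst->sample_rate = src->sample_rate;
        dst->channels    = src->channels;
        if (src->extradata && src->extradata_size > 0) {
            // extradata carries the out-of-band codec headers from the
            // container; libavcodec wants it padded
            dst->extradata = av_mallocz(src->extradata_size +
                                        FF_INPUT_BUFFER_PADDING_SIZE);
            if (dst->extradata) {
                memcpy(dst->extradata, src->extradata, src->extradata_size);
                dst->extradata_size = src->extradata_size;
            }
        }
    }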
Get rid of the silly CodecID <-> codec_tag mapping. This was originally
needed for codecs.conf: codec tags were used to identify codecs, but
libavformat didn't always return useful codec tags (different file
formats can have different, overlapping tag numbers). Since we don't
go through WAVEFORMATEX etc. and pass all header data directly via
AVCodecContext, we can be absolutely sure that the codec tag mapping is
not needed anymore.
Note that this also destroys the "standard" MPlayer method of exporting
codec header data. WAVEFORMATEX and BITMAPINFOHEADER made sure that
other non-libavcodec decoders could be initialized. However, all these
decoders have been removed, so this is just cruft full of old hacks that
are not needed anymore. There's still ad_spdif and ad_mpg123, but neither
of these need codec header data. Should we ever add non-libavcodec
decoders, better data structures without the past hacks could be added
to export the headers.
Rearrange some code to make it more readable. Remove some dead code,
and stop printing AVI headers in demux_lavf. (These are not actual AVI
headers, just for internal use.)
There should be no functional changes, other than reducing output in
verbose mode.
Use codec names instead of FourCCs to identify codecs. Rewrite how
codecs are selected and initialized. Now each decoder exports a list
of decoders (and the codec it supports) via add_decoders(). The order
matters, and the first decoder for a given codec is preferred over
the other decoders. E.g. all ad_mpg123 decoders are preferred over
ad_lavc, because it comes first in the mpcodecs_ad_drivers array.
Likewise, decoders within ad_lavc that are enumerated first by
libavcodec (using av_codec_next()) are preferred. (This is actually
critical to select h264 software decoding by default instead of vdpau.
libavcodec and ffmpeg/avconv use the same method to select decoders by
default, so we hope this is sane.)
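The registration scheme boils down to something like this (illustrative
only; these are not the actual mpv types or functions):

    struct decoder_entry { const char *codec, *decoder; };
    struct decoder_list  { struct decoder_entry entries[64]; int num; };

    static void add_decoder(struct decoder_list *list,
                            const char *codec, const char *decoder)
    {
        // earlier entries win: the first decoder registered for a codec
        // is the one that gets tried first; ad_mpg123 registers before
        // ad_lavc (mpcodecs_ad_drivers order), so its mp3 entry wins
        if (list->num < 64)
            list->entries[list->num++] =
                (struct decoder_entry){ codec, decoder };
    }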
The codec names follow libavcodec's codec names as defined by
AVCodecDescriptor.name (see libavcodec/codec_desc.c). Some decoders
have names different from the canonical codec name. The AVCodecDescriptor
API is relatively new, so we need a compatibility layer for older
libavcodec versions for codec names that are referenced internally,
and which are different from the decoder name. (Add a configure check
for that, because checking versions is getting way too messy.)
demux/codec_tags.c is generated from the former codecs.conf (minus
"special" decoders like vdpau, and excluding the mappings that are the
same as the mappings in libavformat's exported RIFF tables). It contains
all the mappings from FourCCs to codec name. This is needed for
demux_mkv, demux_mpg, demux_avi and demux_asf. demux_lavf will set the
codec as determined by libavformat, while the other demuxers have to do
this on their own, using the mp_set_audio/video_codec_from_tag()
functions. Note that the sh_audio/video->format members don't uniquely
identify the codec anymore, and sh->codec takes over this role.
Replace the --ac/--vc/--afm/--vfm options with new --vd/--ad options,
which cover the functionality of the removed switches.
Note: there's no CODECS_FLAG_FLIP flag anymore. This means some obscure
container/video combinations (e.g. the sample Film_200_zygo_pro.mov)
are played flipped. ffplay/avplay doesn't handle this properly either,
so we don't care and blame ffmpeg/libav instead.
Uses the same trick as the planarization code to turn per-sample memcpy
calls into mov instructions. Speeds up decoding of a ~25 min 48000 Hz 2ch
floatle audio file from 3.8s to 2.7s.
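The trick is the usual constant-size-memcpy one (a sketch, not the
actual code; copy_samples is a made-up name):

    #include <string.h>

    // A constant memcpy size lets the compiler emit a single mov per
    // sample instead of a libc call.
    static void copy_samples(char *dst, const char *src, size_t sample_size,
                             size_t count, size_t dst_stride, size_t src_stride)
    {
    #define COPY_LOOP(size)                                           \
            for (size_t i = 0; i < count; i++)                        \
                memcpy(dst + i * dst_stride, src + i * src_stride, (size))
        switch (sample_size) {
        case 1: COPY_LOOP(1); break;
        case 2: COPY_LOOP(2); break;
        case 4: COPY_LOOP(4); break;
        case 8: COPY_LOOP(8); break;
        default: COPY_LOOP(sample_size); break;  // generic fallback
        }
    #undef COPY_LOOP
    }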