Apparently using the stream index is the best way to refer to the same
streams across multiple FFmpeg-using programs, even if the stream index
itself is rarely meaningful in any way.
For Matroska, there are some possible problems, depending on how FFmpeg
actually adds streams. Normally they seem to match, though.
See previous commits. This finally replaces reading the file data
directly into a struct with reading the fields manually. In theory this
is more portable (no alignment issues and other things). Mostly, it's
just nice to see this gone.
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
For a while, we used this to transfer PCM from the demuxer to the
filter chain. We had a special "codec" that mapped what MPlayer used to
do (MPlayer passes the AF sample format via an extra field to ad_pcm,
which interprets it specially).
Do this by providing a mp_set_pcm_codec() function, which describes a
sample format in a generic way, and sets the appropriate demuxer header
fields so that libavcodec interprets it correctly. We use the fact that
libavcodec has separate PCM decoders for each format. These are
systematically named, so we can easily map them.
This has the advantage that we can change the audio filter chain as we
like, without losing features from the "rawaudio" demuxer. In fact, this
commit also gets rid of the audio filter chain formats completely;
instead, there is an explicit list of PCM formats. (We could even just
have the user pass libavcodec PCM decoder names directly, but that
would be annoying in other ways.)
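For illustration, the systematic naming makes the mapping almost
mechanical. A minimal sketch (the real mp_set_pcm_codec() sets demuxer
header fields rather than returning a string; the function below is
hypothetical):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    // Derive a libavcodec PCM decoder name such as "pcm_s16le" or
    // "pcm_f32be" from a generic sample format description.
    static void pcm_decoder_name(char *buf, size_t size, bool is_float,
                                 bool is_signed, int bits, bool big_endian)
    {
        char t = is_float ? 'f' : (is_signed ? 's' : 'u');
        if (bits == 8) {
            // 8 bit formats have no endianness suffix (pcm_s8/pcm_u8)
            snprintf(buf, size, "pcm_%c8", t);
        } else {
            snprintf(buf, size, "pcm_%c%d%ce", t, bits,
                     big_endian ? 'b' : 'l');
        }
    }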
Until now, the audio chain could handle both little-endian and
big-endian formats. This actually doesn't make much sense, since the audio
API and the HW will most likely prefer native formats. Or at the very
least, it should be trivial for audio drivers to do the byte swapping
themselves.
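A sketch of how trivial such a conversion is for a driver (hypothetical
helper, 16-bit samples):

    #include <stddef.h>
    #include <stdint.h>

    // Byte-swap 16 bit samples in place - the kind of conversion an
    // audio driver can easily do itself if its API wants the
    // non-native endianness.
    static void swap_samples_16(uint16_t *s, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            s[i] = (uint16_t)((s[i] >> 8) | (s[i] << 8));
    }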
From now on, the audio chain contains native-endian formats only. All
AOs and some filters are adjusted. af_convertsignendian.c is now
misnamed, but the filter name has been adjusted. In some cases, the audio
infrastructure was reused on the demuxer side, but that is relatively
easy to rectify.
This is a quite intrusive and radical change. It's possible that it will
break some things (especially on obscure or non-Linux platforms), so
watch out for regressions. It's probably still better to do it the
bulldozer way, since a slow transition and researching foreign platforms
would take a lot of time and effort.
--hls-bitrate=min/max lets you select the min or max bitrate. That's it.
Something more sophisticated might be possible, but is probably not even
worth the effort.
This inserts an automatic conversion filter if a Matroska file is marked
as 3D (StereoMode element). The basic idea is similar to video rotation
and colorspace handling: the 3D mode is added as a property to the video
params. Depending on this property, a video filter can be inserted.
As of this commit, extending mp_image_params is actually completely
unnecessary - but the idea is that it will make it easier to integrate
with VOs supporting stereo 3D mogrification. Although vo_opengl does
support some stereo rendering, it didn't support the mode my sample file
used, so I'll leave that part for later.
Note that most mappings from Matroska mode to vf_stereo3d mode are
probably wrong, and some are missing.
Assuming that Matroska modes, vf_stereo3d input modes, and output modes
are all the same might be an oversimplification - we'll see.
See issue #1045.
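For reference, such a mapping might look like the following sketch
(Matroska StereoMode values per the spec, vf_stereo3d mode names as
commonly used - treat the table as illustrative, given the caveat
above about wrong and missing mappings):

    struct stereo_mapping { int mkv_stereo_mode; const char *vf_in_mode; };

    static const struct stereo_mapping mkv_to_stereo3d[] = {
        {1,  "sbsl"}, // side by side, left eye first
        {2,  "abr"},  // top-bottom, right eye first
        {3,  "abl"},  // top-bottom, left eye first
        {11, "sbsr"}, // side by side, right eye first
    };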
This adds a thread to the demuxer which reads packets asynchronously.
It will do so until a configurable minimum packet queue size is
reached. (See options.rst additions.)
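The core of the idea, as a rough sketch with hypothetical names (the
real demux.c code is considerably more involved):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct demux_internal {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;      // signaled when packets are consumed
        size_t queue_size, min_queue_size;
        bool thread_terminate;
    };

    static void read_packet(struct demux_internal *in)
    {
        // Placeholder: the real code does blocking demuxer/stream I/O
        // here, then appends the packet to the queue under the lock.
        pthread_mutex_lock(&in->lock);
        in->queue_size += 1;
        pthread_mutex_unlock(&in->lock);
    }

    static void *demux_thread(void *arg)
    {
        struct demux_internal *in = arg;
        pthread_mutex_lock(&in->lock);
        while (!in->thread_terminate) {
            if (in->queue_size < in->min_queue_size) {
                // Don't hold the lock during blocking I/O.
                pthread_mutex_unlock(&in->lock);
                read_packet(in);
                pthread_mutex_lock(&in->lock);
            } else {
                pthread_cond_wait(&in->wakeup, &in->lock);
            }
        }
        pthread_mutex_unlock(&in->lock);
        return NULL;
    }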
For now, the thread is disabled by default. There are some corner cases
that have to be fixed, such as fixing cache behavior with webradios.
Note that most interaction with the demuxer is still blocking, so if
e.g. network dies, the player will still freeze. But this change will
make it possible to remove most causes for freezing.
Most of the new code in demux.c actually consists of weird caches to
compensate for thread-safety issues (with the previously single-threaded
design), or to avoid blocking by having to wait on the demuxer thread.
Most of the changes in the player are due to the fact that we must not
access the source stream directly: the demuxer thread already accesses
it, and the stream code is not thread-safe.
For timeline stuff (like ordered chapters), we enable the thread for the
current segment only. We also clear its packet queue on seek, so that
the remaining (unconsumed) readahead buffer doesn't waste memory.
Keep in mind that insane subtitles (such as ASS typesetting muxed into
mkv files) will practically disable the readahead, because the total
queue size is considered when checking whether the minimum queue size
was reached.
It's unlikely that files with multiple audio tracks and with replaygain
actually happen, but this change might help avoid minor corner cases
with later changes.
The i_bps members of the sh_audio and sh_video structs are mostly used
for displaying the average audio and video bitrates. Keeping them in
bits per second avoids truncating them to bytes per second and changing
them back later on.
Instead of parsing the ASS file in demux_libass.c and trying to pass the
ASS_Track to the subtitle renderer, just read all file data in
demux_libass.c, and let the subtitle renderer pass the file contents to
ass_process_codec_private(). (This happens to parse full files too.)
Makes the code simpler, though it also relies more heavily on the
(messy) probe logic in demux_libass.c.
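The gist, as a sketch (ass_process_codec_private() is the real libass
entry point; the wrapper function and its name are hypothetical):

    #include <ass/ass.h>

    // Feed complete ASS file contents to libass as "codec private"
    // data; ass_process_codec_private() happens to parse full ASS
    // files as well.
    static ASS_Track *track_from_file_contents(ASS_Library *lib,
                                               char *data, int size)
    {
        ASS_Track *track = ass_new_track(lib);
        if (track)
            ass_process_codec_private(track, data, size);
        return track;
    }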
The mplayer decoder (spudec.c) actually handled this. There was explicit
code for binary palettes (sixteen 32-bit values), and the subtitle resolution
was handled by video resolution coincidentally matching the subtitle
resolution.
Whoever puts vobsub into mp4 should be punished.
Fixes the sample gundam_sample.mp4, closes github issue #547.
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
This is similar to the sh_audio commit.
This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).
Also remove all the stheader.h fields that are not needed anymore.
sh_audio is supposed to contain file headers, not whatever was decoded.
Fix this, and write the decoded format to separate fields in the decoder
context, the dec_audio.decoded field. (Note that this field is really
only needed to communicate the audio format from decoder driver to the
generic code, so no other code accesses it.)
Move all state that basically changes during decoding or is needed in
order to manage decoding itself into a new struct (dec_audio).
sh_audio (defined in stheader.h) is supposed to be the audio stream
header. This should reflect the file headers for the stream. Putting the
decoder context there is strange design, to say the least.
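Roughly, the split looks like this (simplified stand-ins, not the exact
mpv definitions):

    // Simplified stand-ins for the real mpv types:
    struct mp_audio { int rate, channels, format; };
    struct sh_stream;
    struct ad_functions;

    struct dec_audio {
        const struct ad_functions *ad_driver; // decoder driver in use
        struct sh_stream *header;             // file-level stream header
        void *priv;                           // decoder driver's state
        struct mp_audio decoded;              // format actually decoded
    };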
Most libavcodec decoders output non-interleaved audio. Add direct
support for this, and remove the hack that repacked non-interleaved
audio back to packed audio.
Remove the minlen argument from the decoder callback. Instead of
forcing every decoder to have its own decode loop to fill the buffer
until minlen is reached, leave this to the caller. So if a decoder
doesn't return enough data, it's simply called again. (In the future, I
even want to change it so that decoders don't read packets directly,
but instead the caller has to pass packets to the decoders. This fits
well with this change, because now the decoder callback typically
decodes at most one packet.)
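On the caller side, the loop then reduces to something like this sketch
(struct and field names are simplified stand-ins):

    struct dec_audio;
    struct ad_functions {
        int (*decode)(struct dec_audio *da); // decodes at most one packet
    };
    struct dec_audio {
        const struct ad_functions *ad_driver;
        int buffered_samples; // samples currently in the output buffer
    };

    // Keep calling the decoder until the buffer holds enough samples;
    // a return value <= 0 means error or end of stream.
    static int decode_at_least(struct dec_audio *da, int min_samples)
    {
        while (da->buffered_samples < min_samples) {
            int r = da->ad_driver->decode(da);
            if (r <= 0)
                return r;
        }
        return min_samples;
    }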
ad_mpg123.c receives some heavy refactoring. The main problem is that
it wanted to handle format changes when there was no data in the decode
output buffer yet. This sounds reasonable, but actually it would write
data into a buffer prepared for old data, since the caller doesn't know
about the format change yet. (I.e. the best place for a format change
would be _after_ writing the last sample to the output buffer.) It's
possible that this code was not perfectly sane before this commit,
and perhaps lost one frame of data after a format change, but I didn't
confirm this. Trying to fix this, I ended up rewriting the decoding
and also the probing.
This member was redundant. sh_audio->sample_format indicates the sample
size already.
The TV code is a bit strange: the redundant sample size was part of the
internal TV interface. Assume it's really redundant and not something
else. The PCM decoder ignores the sample size anyway.
There are some Microsoft Windows symbols which are traditionally used by
the mplayer core, because it used to be convenient (avi was the big
format, using binary windows decoders made sense...). So these symbols
have the exact same definition as the Windows one, and if mplayer is
compiled on Windows, the symbols from windows.h are used.
This broke recently just because some files were shuffled around, and
the symbols defined in ms_hdr.h collided with windows.h ones. Since we
don't have windows binary decoders anymore, there's not the slightest
reason our symbols should have the same names. Rename them to reduce the
risk for collision, and to fix the recent regression.
Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
its own version if the windows headers don't define it, and ao_wasapi is
not available on systems where this symbol is missing.
Also reindent ms_hdr.h.
The --deinterlace option does on playback start what the "deinterlace"
property normally does at runtime. You could do this before by using the
--vf option or by messing with the vo_vdpau default options, but this
new option is supposed to be a "foolproof" way.
The main motivation for adding this is so that the deinterlace property
can be restored when using the video resume functionality
(quit_watch_later command).
Implementation-wise, this is a bit messy. The video chain is rebuilt in
mpcodecs_reconfig_vo(), where we don't have access to MPContext, so the
usual mechanism for enabling deinterlacing can't be used. Further,
mpcodecs_reconfig_vo() is called by the video decoder, which doesn't
have access to MPContext either. Moving this call to mplayer.c isn't
currently possible either (see below). So we just do this before frames
are filtered, which potentially means setting the deinterlacing every
frame. Fortunately, setting deinterlacing is stable and idempotent, so
this is hopefully not a problem. We also add a counter that is
incremented on each reconfig to reduce the amount of additional work per
frame to nearly zero.
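A sketch of the counter trick (names hypothetical):

    #include <stdbool.h>

    struct deint_state {
        int last_reconfig_seen; // counter value seen at the last frame
    };

    // Stable, idempotent setter; stands in for the real property code.
    static void set_deinterlacing(bool enable)
    {
        (void)enable;
    }

    // Called before frames are filtered: re-applies the deinterlace
    // setting only if the video chain was reconfigured since the last
    // frame, so the per-frame cost is one integer comparison.
    static void maybe_apply_deinterlace(struct deint_state *st,
                                        int reconfig_counter, bool enable)
    {
        if (st->last_reconfig_seen != reconfig_counter) {
            set_deinterlacing(enable);
            st->last_reconfig_seen = reconfig_counter;
        }
    }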
The reason we can't move mpcodecs_reconfig_vo() to mplayer.c is because
of hardware decoding: we need to check whether the video chain works
before we decide that we can use hardware decoding. Changing it so that
this can be decided in advance without building a filter chain sounds
like a good idea and should be done, but we aren't there yet.
Move the decoder parts from vo_vdpau.c to a new file, vdpau_old.c. The
file is named so because it's written against the "old" libavcodec
vdpau pseudo-decoder API (e.g. "h264_vdpau").
Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without a deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)
Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)
Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.
Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size changes anymore, we allow allocating surfaces
of arbitrary size, instead of locking them to the size the VO was
configured with.
The non-hwdec code also has slightly different allocation behavior now.
Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
Matroska has an output sample rate (OutputSamplingFrequency), which in
theory should be forced instead of whatever the decoder outputs. But it
appears no software (other than mplayer2 and mpv until now) actually
respects this. Even worse, there were broken files around, which played
correctly with (in theory) broken software, but not mplayer2/mpv. Hacks
were added to our code to play these files correctly, but they didn't
catch all cases.
Simplify this by doing what everyone else does, and always use the
decoder's sample rate instead. In particular, we try to handle all
sample rate issues like libavformat's Matroska demuxer does.
Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in
set_video_colorspace(). The difference is that the latter function just
makes the video filter chain (and VOs) force the detected colorspace,
and then throws it away, while the former is a bit more general and
central. Not really a big difference and it doesn't matter much in
practice, but it guarantees that there is no internal disagreement about
the colorspace.
Move codec_tags.h include to demux_mkv.c, because this is the only file
which still uses it.
Move new_sh_stream() to demux.h, because that is where it properly belongs.
Before this commit, we tried to play along with libavformat and tried
to pretend that attached pictures are video streams with a single
frame, and that the frame magically appeared at the seek position when
seeking. The playback core would then switch to a mode where the video
has ended, and the "remaining" audio is played.
This didn't work very well:
- we needed a hack in demux.c, because we tried to read more packets in
order to find the "next" video frame (libavformat doesn't tell us if
a stream has ended)
- switching the video stream didn't work, because we can't tell
libavformat to send the packet again
- seeking and resuming after was hacky (for some reason libavformat sets
the returned packet's PTS to that of the previously returned audio
packet in generic code not related to attached pictures, and this
happened to work)
- if the user did something stupid and e.g. inserted a deinterlacer by
default, a picture was never displayed, only an inactive VO window
- same when using a command that reconfigured the VO (like switching
aspect or video filters)
- hr-seek didn't work
For this reason, handle attached pictures as separate case with a
separate video decoding function, which doesn't read packets. Also,
do not synchronize audio to video start in this case.
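Schematically, the separate path looks like this (video_decode() and
the other names here are illustrative, not mpv's exact interface):

    struct dec_video;
    struct demux_packet;
    struct mp_image;

    // Provided by the video decoder wrapper; passing NULL flushes
    // delayed frames out of the decoder.
    struct mp_image *video_decode(struct dec_video *dv,
                                  struct demux_packet *pkt, int drop_frame);

    // Decode an attached picture from the single packet stored with
    // the stream - never read packets from the demuxer.
    static struct mp_image *decode_attached_picture(struct dec_video *dv,
                                                    struct demux_packet *pkt)
    {
        struct mp_image *img = video_decode(dv, pkt, 0);
        if (!img)
            img = video_decode(dv, NULL, 0);
        return img;
    }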
Generally remove all accesses to demux_stream from all the code, except
inside of demux.c. Make it completely private to demux.c.
This simplifies the code because it removes an extra concept. In demux.c
it is reduced to a simple packet queue. There were other uses of
demux_stream, but they were removed or are removed with this commit.
Remove the extra "ds" argument to demux fill_buffer callback. It was
used by demux_avi and the TV pseudo-demuxer only.
Remove usage of d_video->last_pts from the no-correct-pts code. This
field contains the last PTS retrieved after a packet that is not NOPTS.
We can easily get this value manually because we read the packets
ourselves. Reuse sh_video->last_pts to store the packet PTS values. It
was used only by the correct-pts code before, and like d_video->last_pts,
it is reset on seek. The behavior should be exactly the same.
This is not directly related to the handling of format changes itself,
but to playing audio normally after the change. This was broken: the output
byte rate was not recalculated, so audio-video sync was simply broken.
Fix this by calculating the byte rate on the fly, instead of storing it
in sh_audio.
Format changes are relatively common (switches between stereo and 5.1
in TV recordings), so this fixes a somewhat critical bug.
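The on-the-fly computation itself is trivial; schematically, for packed
interleaved PCM:

    // Bytes per second for the current decoded format, recomputed on
    // the fly instead of being cached in sh_audio.
    static int audio_byte_rate(int samplerate, int channels,
                               int bytes_per_sample)
    {
        return samplerate * channels * bytes_per_sample;
    }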
These separate arrays were used by the old demuxers and are not needed
anymore. We can simplify track switching as well.
One interesting thing is that stream/tv.c (which is a demuxer) won't
respect --no-audio anymore. It will probably work as expected, but it
will still open an audio device etc. - this is because track selection
is now always done with the runtime track switching mechanism. Maybe
the TV code could be updated to do proper runtime switching, but I
can't test this stuff.
The audio parser was needed only by the "old" demuxers, and
demux_rawaudio. All other demuxers output already parsed packets.
demux_rawaudio is usually for raw audio, so using a parser with it
doesn't usually make sense. But you can also force it to read
compressed formats with fixed packet sizes, in which case the parser
would have been used. This use case is probably broken now, but you
will be able to do the same thing with libavformat demuxers.
Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does
better than them (except in rare corner cases), and the demuxers have
a bad influence on the rest of the code. Often they don't output
proper packets, and require additional audio and video parsing. Most
work only in --no-correct-pts mode.
Remove them to facilitate further cleanups.
The filter chain and the video outputs have config() functions. They are
strictly limited to transferring the video size and format. Other
parameters (like color levels) have to be transferred separately.
Improve upon this by introducing a separate set of reconfig() functions,
which use mp_image_params to carry format parameters. This struct
contains all image format related parameters from config(), plus
additional parameters such as colorspace.
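Sketched out, the struct carries roughly the following (field names
approximate mp_image_params; the details are illustrative):

    struct mp_image_params {
        int imgfmt;      // image format (IMGFMT_*)
        int w, h;        // coded image size
        int d_w, d_h;    // display size (aspect-corrected)
        int colorspace;  // enum mp_csp in the real code (BT.601/709/...)
        int colorlevels; // enum mp_csp_levels (TV vs. PC range)
    };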
Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
an example/test case, but vo_opengl will need it later.
The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
information is now handed to the VOs via reconfig(). The getter,
VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
subreader.c (before this commit renamed to demux_subreader.c) was
special-cased to the -sub option. The plan is to use the normal demuxer
codepath for all subtitle formats (so we can prefer libavformat demuxers
for most formats).
There are some subtle changes. The probe size is restricted to 32 KB
(instead of unlimited size plus giving up after 100 lines of input). For
formats like MicroDVD, the video FPS isn't used anymore, because it's
not available on the subtitle demuxer level. Instead, hardcode it to
23.976 FPS (libavformat seems to do the same). The user can probably
still use -sub-fps to fix the timing. Checking the file extension for
".utf"/".utf8"/".utf-8" is simply removed (seems worthless, was in the
way, and I've never seen this anywhere).