The main problem is that this m_struct stuff uses pointers for offsets
(why...), so we mangle them via intptr_t. This stuff really should use
ints (or in theory ptrdiff_t) for offsets, but changing it would be too
much effort, and hopefully this m_struct stuff will go away and be
replaced by the common option parser mechanism instead.
Shuts up warnings on Windows.
Patch suggested by jon_y and rossy on IRC.
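To illustrate the pattern: a minimal, self-contained sketch of how a
pointer-typed "offset" ends up being applied through an intptr_t cast.
The struct, macro and function names below are made up for illustration
and are not the actual m_struct ones.

    #include <stdint.h>
    #include <stdio.h>

    struct example_priv { int brightness; };

    /* Offset encoded as a pointer (the m_struct approach); offsetof()
     * or a plain int would normally be used for this. */
    #define EXAMPLE_OFF(type, member) ((void *)&(((type *)0)->member))

    static void set_int_field(void *obj, void *off, int val)
    {
        /* The pointer-typed "offset" has to be mangled through intptr_t
         * before it can be added to the object's base address. */
        int *field = (int *)((char *)obj + (intptr_t)off);
        *field = val;
    }

    int main(void)
    {
        struct example_priv p = {0};
        set_int_field(&p, EXAMPLE_OFF(struct example_priv, brightness), 50);
        printf("%d\n", p.brightness);
        return 0;
    }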
Move codec_tags.h include to demux_mkv.c, because this is the only file
which still uses it.
Move new_sh_stream() to demux.h, because that is where it properly belongs.
Before this commit, we tried to play along with libavformat and tried
to pretend that attached pictures are video streams with a single
frame, and that the frame magically appeared at the seek position when
seeking. The playback core would then switch to a mode where the video
has ended, and the "remaining" audio is played.
This didn't work very well:
- we needed a hack in demux.c, because we tried to read more packets in
order to find the "next" video frame (libavformat doesn't tell us if
a stream has ended)
- switching the video stream didn't work, because we can't tell
libavformat to send the packet again
- seeking and resuming afterwards was hacky (for some reason libavformat
sets the returned packet's PTS to that of the previously returned audio
packet in generic code not related to attached pictures, and this
happened to work)
- if the user did something stupid and e.g. inserted a deinterlacer by
default, a picture was never displayed, only an inactive VO window
- same when using a command that reconfigured the VO (like switching
aspect or video filters)
- hr-seek didn't work
For this reason, handle attached pictures as separate case with a
separate video decoding function, which doesn't read packets. Also,
do not synchronize audio to video start in this case.
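Roughly, the separate code path amounts to the sketch below. All names
are hypothetical and only illustrate the idea: the single picture is
decoded exactly once, and no packets are read for it.

    #include <stdbool.h>

    struct packet;                      /* the picture's single packet */
    struct track {
        bool attached_picture;          /* stream is an attached picture */
        bool picture_shown;             /* already decoded and displayed */
        struct packet *picture_packet;  /* known up front from the stream */
    };

    void video_decode_and_display(struct packet *pkt);  /* provided elsewhere */

    /* Separate decoding function for attached pictures: decode the one
     * frame once, never enter the normal packet-reading loop. */
    static void handle_attached_picture(struct track *t)
    {
        if (!t->attached_picture || t->picture_shown)
            return;
        video_decode_and_display(t->picture_packet);
        t->picture_shown = true;
    }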
The code touched by this commit makes sure that DVD subtitle tracks
known by libdvdread but not known by demux_lavf can be selected and
displayed properly. These subtitle tracks have their first packet
relatively late in the packet stream, so libavformat won't recognize
them immediately, and will add the track only when the first packet is
seen during normal demuxing.
demux_mpg used to handle this elegantly: you just set the MPEG ID of
the stream you wanted. demux_lavf couldn't do this, so it was emulated
with a DEMUXER_CTRL. This commit changes it so that new streams are
selected by default (if autoselect is enabled), and the playloop can
simply take appropriate action before the lower layer throws away the
first packet.
This also changes the demux_lavf behavior of always demuxing subtitle
packets, even if not needed. (They were immediately thrown away,
so there was no advantage to this.)
Further, this adds to demux.c the ability to deal with demuxing more
than one stream of a kind at once. (Though currently this isn't useful.)
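The autoselect part boils down to something like this sketch (the field
names here are hypothetical, not the actual demux.c ones):

    #include <stdbool.h>

    struct sh_stream { bool selected; };
    struct demuxer { bool stream_autoselect; };

    /* When a stream appears mid-playback (e.g. a late DVD subtitle track),
     * select it right away so its first packet is queued instead of being
     * thrown away before the playloop has a chance to react. */
    static void add_new_stream(struct demuxer *d, struct sh_stream *sh)
    {
        if (d->stream_autoselect)
            sh->selected = true;
    }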
AVDISCARD_DEFAULT is probably a bit better for normal decoding.
AVDISCARD_NONE would (as per the documentation) include "useless" packets
too, while DEFAULT filters these out.
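For reference, the switch is just the per-stream discard field that
libavformat exposes; a minimal sketch:

    #include <libavformat/avformat.h>

    /* Let libavformat drop only packets it considers useless for decoding,
     * instead of keeping absolutely everything (AVDISCARD_NONE). */
    static void enable_stream(AVStream *st)
    {
        st->discard = AVDISCARD_DEFAULT;
    }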
It turns out that some code that was removed earlier was still needed.
avcodec_decode_audio4() can decode packets "partially". In that case,
you have to "slice" the packet and call the decode function again.
Codecs which need this are obscure and few in number. One sample that
needs it is here:
rsync://fate-suite.ffmpeg.org/fate-suite/lossless-audio/luckynight-partial.shn
(This one decodes in rather small increments.)
The new code is much simpler than what has been removed earlier,
though. The fact that we own the packet returned by the demuxer helps
a lot.
Not sure what should happen if avcodec_decode_audio4() returns 0.
Currently, we throw away the packet in this case. We don't want to be
stuck in an endless loop (could happen if the decoder produces no
output either).
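The slicing loop follows the usual libavcodec pattern of the time; a
sketch with error handling reduced (the helper name is made up):

    #include <libavcodec/avcodec.h>

    /* Feed one demuxer packet to the decoder, slicing it if the decoder
     * consumes it only partially. Returns 0 on success, <0 on error. */
    static int decode_audio_packet(AVCodecContext *avctx, AVFrame *frame,
                                   const AVPacket *pkt)
    {
        AVPacket in = *pkt;              /* local copy we can advance */
        while (in.size > 0) {
            int got_frame = 0;
            int len = avcodec_decode_audio4(avctx, frame, &got_frame, &in);
            if (len < 0)
                return len;              /* decode error */
            if (got_frame) {
                /* ...output frame->nb_samples decoded samples... */
            }
            if (len == 0)
                break;                   /* no progress: drop the rest */
            in.data += len;              /* "slice" the packet, decode again */
            in.size -= len;
        }
        return 0;
    }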
Generally remove all accesses to demux_stream from all the code, except
inside of demux.c. Make it completely private to demux.c.
This simplifies the code because it removes an extra concept. In demux.c
it is reduced to a simple packet queue. There were other uses of
demux_stream, but they were removed or are removed with this commit.
Remove the extra "ds" argument to the demuxer fill_buffer callback. It was
used by demux_avi and the TV pseudo-demuxer only.
Remove usage of d_video->last_pts from the no-correct-pts code. This
field contains the PTS of the most recently read packet that had a
valid (non-NOPTS) PTS.
We can easily get this value manually because we read the packets
ourselves. Reuse sh_video->last_pts to store the packet PTS values. It
was used only by the correct-pts code before, and like d_video->last_pts,
it is reset on seek. The behavior should be exactly the same.
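What "getting the value manually" boils down to, as a sketch (the NOPTS
definition is a stand-in, and the structs are reduced to just the fields
used here):

    #define MP_NOPTS_VALUE (-1e300)      /* stand-in for the real definition */

    struct sh_video { double last_pts; };
    struct demux_packet { double pts; };

    /* The no-correct-pts path reads packets itself, so it can remember
     * the last valid packet PTS directly in sh_video->last_pts. */
    static void note_packet_pts(struct sh_video *sh, struct demux_packet *pkt)
    {
        if (pkt->pts != MP_NOPTS_VALUE)
            sh->last_pts = pkt->pts;
    }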
Currently, all demuxer fill_buffer functions have a demux_stream
parameter. We want to remove that, but the TV code still depends on
it. Add a hack to remove that dependency.
The problem with the TV code is that reading video and audio frames
blocks, so in order to avoid a deadlock, you should read either of
them only if the decoder actually requests new data.
For now, we want to get rid of the demux->sub access, because this
field will become private to demux.c in a later commit. So replace the
current hack with another hack.
The need for the hack will be removed sooner or later. (Instead of
autoselecting a specific stream, all new streams will be enabled by
default, so that no packets can get lost. The frontend will then be
responsible for deselecting unwanted streams.)
This is not directly related to the handling of format changes itself,
but to playing audio normally after the change. That was broken: the
output byte rate was not recalculated, so audio-video sync was simply off.
Fix this by calculating the byte rate on the fly, instead of storing it
in sh_audio.
Format changes are relatively common (switches between stereo and 5.1
in TV recordings), so this fixes a somewhat critical bug.
pts_bytes can't just be changed at the end. It must be an offset relative
to the pts value, which is reset with each packet read from the demuxer.
Make sure the pts_bytes field is effectively reset after receiving a new
PTS, i.e. increment it only after actually writing to the output buffer.
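A sketch of both points: the byte rate is recomputed from the current
format instead of being cached, and pts_bytes is applied as an offset on
top of the last packet PTS (structs reduced to the fields used here;
helper names are made up):

    struct sh_audio {
        int samplerate;
        int channels;
        int samplesize;     /* bytes per sample */
        double pts;         /* PTS of the last packet, reset on each packet */
        double pts_bytes;   /* bytes output since that PTS was set */
    };

    /* Recomputing the byte rate on the fly means a mid-stream format
     * switch (e.g. stereo <-> 5.1) keeps audio-video sync correct. */
    static double current_byte_rate(struct sh_audio *sh)
    {
        return (double)sh->samplerate * sh->channels * sh->samplesize;
    }

    /* Playback position: pts_bytes is incremented only after data is
     * actually written to the output buffer, so it is effectively reset
     * whenever a new packet PTS arrives. */
    static double playback_pts(struct sh_audio *sh)
    {
        return sh->pts + sh->pts_bytes / current_byte_rate(sh);
    }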
Flush the AVFormatContext's write buffer, because otherwise the audio
PTS will jump around too much: the calculation doesn't use the exact
output buffer size if there's still data in the avio buffer.
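The flush itself is just the avio call; a sketch (the context variable
name is only illustrative):

    #include <libavformat/avformat.h>

    /* Flush buffered muxer output so the amount of data actually written
     * is known exactly when the audio PTS is computed. */
    static void flush_spdif_output(AVFormatContext *lavf_ctx)
    {
        avio_flush(lavf_ctx->pb);
    }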
Removing this code doesn't change anything. All remaining audio decoders
are well-behaved enough to not overwrite sh_audio->pts if they don't
know the PTS. And if they don't know the PTS, the d_audio->last_pts
field can't contain any usable value either, because both fields contain
the same value: the last known valid PTS found in an audio packet.
As the comment in the removed code says, this was once needed for
something subtitle related. This code has been cleaned up long ago,
so at least the original reason for it is gone.
Partial packet reads were needed because the video/audio parsers were
working on top of them. So it could happen that a parser read a part of
a packet, and returned that to the decoder. With libavformat/libavcodec,
packets are already parsed, and everything is much simpler.
Most of the simplifications in ad_spdif could have been done earlier.
Remove some other stuff as well, like the questionable slave mode start
time reporting (could be replaced by proper code, but we don't bother).
Remove the unused skip_audio_frame() functionality as well (it was used
by old demuxers). Some functions become private to demux.c, like
demux_fill_buffer(). Introduce new packet read functions, which have
simpler semantics. Packets returned from them are owned by the caller,
and all packets in the demux.c packet queue are considered unread.
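The new semantics can be summarized roughly as follows (hypothetical
signatures, not necessarily the exact ones that were added):

    struct sh_stream;
    struct demux_packet;

    /* Return the next queued packet for the given stream, or NULL if none
     * is available. Ownership transfers to the caller, which must free it;
     * the queue inside demux.c only ever holds unread packets. */
    struct demux_packet *demux_read_packet(struct sh_stream *sh);

    /* Caller releases the packet when done with it. */
    void free_demux_packet(struct demux_packet *pkt);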
Remove the special code that dropped subtitle packets with size 0. This
used to be needed because such packets triggered special cases in the
old code.
Add this option, which lets users set the cache size without forcing it
even when playing from the local filesystem.
Also document the default value explicitly.
The Matroska linked segments case is slightly simplified: they can
never come from network (mostly because it'd be insane, and we can't
even list files from network sources), so the cache will never be
enabled automatically.
This code used to be part of the demux_mpg and vobsub specific code
path. Then (just recently) the different code paths for subtitles were
merged, so this code became active even for demux_lavf and demux_mkv.
As far as I can tell, this code won't help much, and at least sd_lavc
(which will be used for DVD subs and other potentially weird things) can
deal with NOPTS values.
Remove the special handling for mng/mkv. These don't benefit at all from
no-correct-pts mode, and even removing the mkv specific code makes mkv
work better (wow!).
Don't check for (int)fps == 1000. I don't know where this value comes
from. Maybe it was once a special value which triggered certain
behavior, but the code for that must have gone away. The only way to
trigger this value would be by coincidence if two frames are 1 ms apart.
Otherwise, the behavior should be exactly the same, except for some
removed messages.
We don't need to deal with partial packet reads, manually using an audio
parser, or having to call the libavcodec decoder multiple times per
packet.
Actually, I'm not sure about the last point. ffplay still does this, but
the ffmpeg demuxing.c example doesn't.
This was missing from the previous commit. It worked by luck, because
the sub-commands weren't freed either (as long as the original command
was around), but this is the proper fix.
Also, set the original string for command lists (needed for input-test
only).
This is a regression caused by 854303a. That commit removed the include of
`sys/time.h`, which was pulled into `cache.c` through a chain of recursive
includes.
This doesn't help if -pthread is omitted. (Apparently, glibc 2.17, on
which I tested the previous commit, doesn't require -lpthread in order
to use pthreads either.)