Until now (and in mplayer traditionally), AVI timestamps were handled
with a timestamp FIFO. AVI timestamps are essentially just strictly
increasing frame numbers and are not reordered like normal timestamps.
Limiting the FIFO is required because frames can be dropped. To make
it worse, a dropped frame can't be distinguished from the decoder not
returning output because it is still increasing the buffering it needs
for B-frames.
("Measuring" the buffering at playback start seems like an interesting
idea, but won't work as the buffering could be increased mid-playback.)
Another problem is skipped frames (packets with data, but which do not
contain a video frame).
Besides dropped and skipped frames, there is the problem that we can't
always know the delay. External decoders like MMAL are not going to
tell us. (And later perhaps others, like direct VideoToolbox usage.)
In general, this does not work well enough, so I prefer the solution of
passing through AVI timestamps as DTS. This is slightly incorrect,
because most decoders treat DTS as MPEG-style timestamps, which already
include a B-frame delay, so the output will be shifted by a few frames.
This means there will be A/V sync problems in some situations.
Note that the FFmpeg AVI demuxer shifts timestamps by an additional
amount (which increases after the first seek!?!?), which makes the
situation worse. It works well with VfW-muxed Matroska files, though.
On RPI, the first X timestamps are broken until the MMAL decoder "locks
on".
It would make some sense for libcs which don't implement locales at
all, such as Bionic. Beyond that, setlocale() is specified to be allowed
to return NULL, and we shouldn't crash if that happens.
Unfortunately I see no better solution.
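A hedged sketch of the defensive pattern (not the actual call site):

    #include <locale.h>
    #include <string.h>

    // setlocale() may legitimately return NULL (e.g. on libcs without
    // locale support, such as Bionic), so check before dereferencing.
    static int locale_is_utf8(void)
    {
        const char *loc = setlocale(LC_CTYPE, NULL);
        return loc && strstr(loc, "UTF-8") != NULL;
    }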
The refresh seek is skipped if the amount of buffered audio is not
overly huge.
Unfortunately, softvol af_volume insertion can still cause this issue,
because it happens outside of the normal code for dynamic filter chain
changes. Move the video refresh call to reinit_video_filters() to make
it consistent with the audio code.
The ctx->redrawing field signals whether flip_page() should block. Do
not block if a black frame (i.e. nothing) is to be rendered.
Also, frame==NULL can never happen.
To be more specific, enable vo_opengl and vo_opengl_cb, if libmpv is
compiled, and the GL headers happen to be in the default search paths.
Although platform-specific code can be useful for libmpv (for window
embedding, and even with vo_opengl_cb for certain forms of hardware
decoding), it's not a requirement to use the opengl_cb API.
Enabling vo_opengl is not useful here, but the rest of the build system
doesn't distinguish vo_opengl and vo_opengl_cb, and I see no reason to.
We either need to be on Windows, or have posix_spawn available.
If someone can come up with a system that is POSIX, but does not provide
posix_spawn, we could make it optional.
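For illustration, roughly what the dependency provides (a hedged
example, not mpv's subprocess code):

    #include <spawn.h>
    #include <stddef.h>
    #include <sys/wait.h>

    extern char **environ;

    // Minimal posix_spawn() usage: start a child process and wait for it.
    static int run_child(void)
    {
        char *argv[] = {"echo", "hello", NULL};
        pid_t pid;
        if (posix_spawnp(&pid, "echo", NULL, NULL, argv, environ) != 0)
            return -1;
        int status = 0;
        waitpid(pid, &status, 0);
        return status;
    }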
This was dumb. Could make it burn 100% CPU and not exit at the end.
(Because it would retry as instructed, instead of terminating playback.)
It also needs to consider EOF as waiting for input. lavfi_process() will
decide if it's really EOF, or if further input might come in the future.
Without this, it would think that it does not need to wait for input,
i.e. that new input will be available immediately.
(Not so fond of the duplication of subtle logic.)
fd339e3f53 introduced a regression that caused a segfault while
uninitializing the dxva2 decoder (and possibly vdpau too). The problem
was that it freed the avctx earlier, before calling the backend-specific
uninit which referenced it.
Revert some of the changes of that commit, and avoid calling flush by
checking whether the codec is open instead.
(Based on a PR by Kevin Mitchell.)
Signed-off-by: wm4 <wm4@nowhere>
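The check amounts to something along these lines (a sketch, assuming
avcodec_is_open() is what is used to test the state):

    // Only flush a codec context that was actually opened; flushing a
    // half-initialized context is what triggered the undefined behavior.
    if (avctx && avcodec_is_open(avctx))
        avcodec_flush_buffers(avctx);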
It doesn't provide this function. The code is not really designed to
work without it, so it will probably mess up big time, but at least
make it compile again.
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
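For example, mixing two selected audio tracks into a single output
could look roughly like this (following the documented [aid1]/[ao]
label convention; exact syntax may differ):

    mpv --lavfi-complex='[aid1] [aid2] amix [ao]' file.mkv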
The complex filter support that will be added makes much more complex
use of libavfilter, and I'm not going to bother with adding hacks to
keep libavfilter optional.
track can't be NULL at this point, so the if is redundant. Remove it and
unindent the block. Also, make the function check whether the track is
selected at all, which makes it safer and idempotent.
This makes it possible to set video size and position using the
--geometry and/or --autofit options. It's also possible to switch
between fullscreen/non-fullscreen playback during runtime.
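For example (a hedged illustration; see the manual for the exact
--geometry/--autofit syntax):

    mpv --geometry=50%:50% --autofit=640x360 file.mkv

This would roughly center the window and limit its size to 640x360.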
It can be "dangerous". In particular, the decoder might have failed to
initialize, and is now in a broken state. avcodec_flush_buffers() is not
expected to be called in this state, and could trigger undefined
behavior.
I can't explain this, but it seems to be a similar case to the ALSA HDMI
one. I find it hard to tell because of the slightly different names and
conventions in use in libavcodec, WAVEEXT channel masks, decoders, codec
specifications, HDMI, and platform audio APIs.
The fix is the same as the one for ao_alsa (see commit be49da72). This
should fix at least playing 7.1 sources on OSX with 7.1(rear) selected
in Audio MIDI Setup. The ao_alsa commit mentions XBMC, but I couldn't
find out where it does that or if it also does that for CoreAudio. It's
worth noting that PHT (essentially an old XBMC fork) also exhibited the
incorrect behavior (i.e. side and back speakers were swapped).
Remove flc-frc <-> sl-sr. This was just plain wrong, and a mistaken
change to make 7.1 work properly on CoreAudio with 7.1(rear) layout.
Also see the following commit.
Add bl-br <-> sl-sr, because we decided that it makes sense.
Note that this "fudging" is applied only if the channel pairs are
replaced, i.e. they would get dropped and be replaced with silence. This
is done to compensate for libswresample's default rematrixing (which
takes care of some more common cases).
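A rough sketch of the pair-replacement idea (speaker names and the
helper are illustrative, not mpv's actual chmap_sel code):

    #include <stdbool.h>

    enum spk { SL, SR, BL, BR, SPK_COUNT };

    // If a side/back speaker would otherwise be dropped to silence, and
    // the opposite pair exists in the output layout, reroute it there
    // instead. This only applies when the pair is being replaced anyway.
    static enum spk fudge_speaker(enum spk s, const bool out_has[SPK_COUNT])
    {
        if (s == SL && !out_has[SL] && out_has[BL]) return BL;
        if (s == SR && !out_has[SR] && out_has[BR]) return BR;
        if (s == BL && !out_has[BL] && out_has[SL]) return SL;
        if (s == BR && !out_has[BR] && out_has[SR]) return SR;
        return s;
    }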
Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
if the input timestamp was NOPTS.
Don't do it for other decoders. Ideally, we will at some point in the
future switch to integer fractions for timestamps at least up until the
filter layer. But this would be a larger change, and for now I'd prefer
keeping the not-rounded demuxer timestamps (if we have them).
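A sketch of the conversion in question (illustrative; MP_NOPTS_VALUE is
mpv's "no timestamp" sentinel, and the placeholder define below exists
only to keep the sketch self-contained):

    #include <math.h>
    #include <stdint.h>
    #include <libavutil/avutil.h>     /* AV_NOPTS_VALUE */
    #include <libavutil/rational.h>   /* AVRational, av_q2d() */

    #ifndef MP_NOPTS_VALUE
    #define MP_NOPTS_VALUE (-1e300)   /* placeholder for this sketch */
    #endif

    // Round mpv's double timestamps to integer stream time only where a
    // decoder (e.g. MMAL) needs it, and keep "no timestamp" intact so
    // that NOPTS input yields NOPTS output.
    static int64_t pts_to_int(double pts, AVRational tb)
    {
        if (pts == MP_NOPTS_VALUE)
            return AV_NOPTS_VALUE;
        return llrint(pts / av_q2d(tb));
    }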
This is the "unicode" version of it. It appears Firefox uses it now?
I'm not sure if we still need to support the old variant, but hopefully
not.
Fixes #2782.