This VA_FOURCC isn't even defined by the latest drivers, so I'm just
assuming it doesn't exist and never existed. For planar 4:2:0,
VA_FOURCC_YV12 is normally preferred, and there's even a VA_FOURCC_IYUV
for 4:2:0 with unswapped planes.
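For reference, a rough sketch of how these fourccs relate (assuming the usual
libva definitions from <va/va.h>; the fallback define is only illustrative):

    #include <va/va.h>

    #ifndef VA_FOURCC_IYUV
    /* Older headers may lack it; it can be built with the generic macro. */
    #define VA_FOURCC_IYUV VA_FOURCC('I', 'Y', 'U', 'V')
    #endif

    /* VA_FOURCC_YV12: planar 4:2:0, chroma planes swapped (Y, V, U) */
    /* VA_FOURCC_IYUV: planar 4:2:0, normal plane order (Y, U, V)    */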
libva 0.34 and 0.35 don't have the buffer API (such as
vaAcquireBufferHandle). This is only needed for the EGL interop, but why
bother staying compatible with such old versions (0.36 was released over
a year ago). We can also drop some minor compatibility ifdeffery.
MKV files can very well start with timestamps other than 0. While mpv
has support for such files in general, and demux_lavf enables this
feature, demux_mkv didn't export a start time.
Implement this by simply reading the first cluster timestamp. This in
turn is done by reading 1 block. While we don't need the block for this
purpose at all, it's the easiest way to get the cluster timestamp read
correctly without code duplication. In theory this could be wrong, and
a packet could start at a much later time, but in practice this won't
happen.
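Roughly, the idea looks like this (a sketch with made-up names, not the
actual demux_mkv code):

    /* Read a single block so that the enclosing cluster header (and thus its
     * timecode) gets parsed by the normal code path, then export that
     * timecode as the file's start time. */
    static void probe_start_time(struct demuxer *demuxer)
    {
        struct mkv_demuxer *mkv = demuxer->priv;
        if (read_next_block(mkv) >= 0)
            demuxer->start_time = mkv->cluster_tc * mkv->tc_scale;
    }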
This commit also adds an option to disable this feature. It's not
documented because nobody should use it. (But I happen to have a need
for this.)
Let's hope this doesn't confuse client API users too much. It's still
the best solution to get rid of corner cases where it actually returns
the wrong timestamp on start, and then suddenly jumps.
matplotlib is pathetically slow, which makes using stats-conv.py to view
dumps longer than a few seconds a huge pain.
pyqtgraph is slightly faster than matplotlib. Other than that, it seems
to be worse in every aspect (at least for plotting), but such is life.
When seeking, the current frame will have the wrong timestamp, until the
seek is done and a new frame from the seek target is shown. We still use
the "old" frame for redrawing during seeks. This frame has to be
explicitly marked with still=true in order not to confuse the
interpolation queue. If this is not done, it will ignore frames until a
frame with approximately the same timestamp as the "old" frame is
reached. (Does this mean interpolation handles timestamp resets
incorrectly? I have no idea.)
Of course we also have to clear possibly queued frames on seeks.
Also, when pausing, explicitly trigger a frame redraw.
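In pseudo-C, the intended seek/redraw behavior is roughly this (field and
function names are illustrative, not mpv's exact API):

    /* Redraw during a seek: the frame still carries its pre-seek timestamp. */
    frame->still = true;          /* keep it out of the interpolation queue   */
    clear_queued_frames(vo);      /* queued frames belong to the old position */
    vo_redraw_frame(vo, frame);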
This adjustment is supposed to improve the audio speed calculation in
case of unexpected desync. The flipped sign made it actually worse,
although the total impact of this bug was very minor.
Small adjustments to the playback speed use swr_set_compensation()
to stretch the audio as required. But since large adjustments
are now handled by actually reinitializing libswresample, the small
adjustments get rounded off completely with typical frame sizes.
Compensate for this by accounting for the rounding error and keeping
track of fractional samples that should have been output to achieve
the correct ratio.
This fixes display sync mode behavior, which requires these adjustments
to be relatively accurate.
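The fractional-sample bookkeeping is essentially this (a simplified sketch,
not the exact filter code):

    /* How many samples we would ideally output for this chunk, including the
     * rounding error carried over from previous chunks. */
    double ideal = num_in / speed + s->frac_error;
    int out_samples = (int)(ideal + 0.5);
    s->frac_error = ideal - out_samples;   /* carry the remainder forward */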
Newer nVidia drivers support EGL, but they seem to work badly: they
apparently don't support some needed features, or not in a form we want
(such as swap control), and vdpau interop is not available. Disable it
by default, because I'm tired of explaining this issue.
Can be reverted as soon as nVidia releases working drivers.
The libavcodec h264 decoder contains some idiotic code with unknown
purpose (no sample or explanation known that necessitates its
existence), that causes the AVCodecContext.get_format callback to be
invoked at a time when hwaccels can't be initialized. By definition, the
get_format callback is supposed to initialize hwaccels (another idiotic
thing now part of the API, but different story). This causes hwdec
initialization sometimes to fail (WolfensteinTwitch.mp4): the first
get_format callback will mark it as failed, so the second get_format
(the "proper" normal one) will not bother restoring the state, and hwdec
init fails.
While this should be fixed in libavcodec (good luck with that), it's
quite easy to work around.
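One possible shape of such a workaround (simplified, illustrative names; not
necessarily the exact fix that landed):

    static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                            const enum AVPixelFormat *fmts)
    {
        struct vd_ctx *ctx = avctx->opaque;
        ctx->hwdec_failed = false;  /* an earlier, spurious call must not be fatal */
        for (int i = 0; fmts[i] != AV_PIX_FMT_NONE; i++) {
            if (fmts[i] == ctx->hwdec_fmt && init_hwdec(ctx, avctx) >= 0)
                return fmts[i];     /* hwaccel init worked this time around */
        }
        /* Otherwise pick the last entry, which is normally a software format. */
        int n = 0;
        while (fmts[n] != AV_PIX_FMT_NONE)
            n++;
        return n ? fmts[n - 1] : AV_PIX_FMT_NONE;
    }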
This affects the subtitle preroll mode during seeking. It could matter
somewhat with insane files with tens of thousands of subtitle events, which
now seem to pop up, and will avoid packet queue overflow.
swr/avresample_set_compensation() was made for small speed adjustments.
Non-documentation says it should be used for changes not larger than 1%,
so reinitialize the sampler if the change is larger than that.
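In code, the decision is essentially this (sketch only; the threshold and
names are illustrative):

    double ratio = new_speed / base_speed;
    if (fabs(ratio - 1.0) > 0.01) {
        reinit_resampler(s, new_speed);   /* too big: set a new effective rate */
    } else {
        set_compensation(s, new_speed);   /* small: adjust on the fly */
    }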
This fixes a regression since commit f4d62da8. The original code ran
vo_cocoa_config_window() once without creating the window, which had the
effect that the last part of the function was run at least once before
the actual window was created. Fix the regression by moving it to before
the window is created.
The regression itself is hard to describe. One test case: start mpv from
a fullscreened terminal window. It should switch to another desktop,
with the mpv window visible. This didn't happen anymore.
mpv_opengl_cb_uninit_gl() still needs the OpenGL context. It makes calls
to free OpenGL objects. Strictly speaking, this is probably unnecessary,
because the OpenGL context is destroyed afterwards (implicitly freeing
all related objects). But mpv_opengl_cb_uninit_gl() does not require the
destruction of the OpenGL context, and thus has to free resources
manually.
It's also true that OpenGL normally simply ignores API calls (or returns
errors) if no context is set, but doing it properly is cleaner.
That makeCurrent() can be called in the destructor is explicitly allowed
and recommended for freeing GL resources in the Qt docs.
This fixes an mpv error message on exit.
Thanks to rcombs, ffmpeg now properly supports DASH and we can
remove our hacks for it and use it by default whenever
available. If you don't like this for whatever reason, you
can get the "normal" streams back with --ytdl-format=best .
Closes #579
Closes #1321
Closes #2359
If video EOF happens during playback restart, and audio is syncing, and
the demuxer packet queue overflows (i.e. no new packets will be read),
then it could happen that the player accidentally enters sleep mode, and
only continues playback after e.g. user input wakes it up.
libass 0.13.0 breaks this due to removal of fontconfig from its core
(instead, fontconfig is one possible backend, and pattern lookup is
apparently not possible anymore).
FFmpeg supports all formats the old subreader code does, and is better
at it. On the other hand, subreader.c's probing is bad and can lead to
false positives easily.
swr_set_compensation() changes the apparent sample rate on the fly (who
would have guessed). It is thus very well-suited for adjusting audio
speed on the fly during playback (like needed by the display-sync mode).
It skips the relatively slow resampler reinitialization.
If this doesn't work (libswresample soxr backend), then fall back to the
old method.
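A minimal sketch of this path, assuming a hypothetical fallback helper (the
real code derives the delta from the required speed change):

    #include <libswresample/swresample.h>

    /* Stretch/compress 'delta' samples over the next 'distance' output
     * samples. If the active backend (e.g. soxr) rejects it, fall back to a
     * full resampler reinit with an adjusted rate. */
    int r = swr_set_compensation(swr, delta, distance);
    if (r < 0)
        r = reinit_resampler_with_new_rate(s);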
The previous commit handled not falling back to normal decoding if the
AO was reloaded (I think...), and this tries to re-engage spdif pass-
through if it was previously falling back to normal decoding (e.g.
because it temporarily switched to an audio device incapable of
passthrough).
The stop command didn't always stop. In this case, opening an HLS URL and
then sending "stop" during loading would actually make it fall back to
parsing it as a playlist, and then continue to play the playlist items.
(This corner case makes several unfortunate factors come together to
produce this really odd behavior.)
Another issue is that the "stop" was not always explicitly set. This
could be a problem when sending several commands at once. Only the
"quit" command should have priority over the "stop" command, so this is
still checked.
The Copyright file explains the whole license mess. The earlier change
was apparently confusing, because the link reading "details" merely
linked to the GPLv2 license instead of explaining anything. In fact, I
meant to link to the Copyright file in the first place.
Use the first encountered packet PTS/DTS as base, instead of the last
one. This does not add the number of frames buffered in the codec to the
PTS offset, and thus is better.
Also, don't add the frame time if there was no decoded frame yet. The
first frame should obviously have the timestamp of the first packet
(going by this heuristic).
While b-frame reordering limits the maximum required number to around
16, the number of additionally buffered frames can be much higher.
Guess when this actually matters? (For the libavcodec MMAL wrapper.)
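The heuristic then boils down to something like this (illustrative names;
MP_NOPTS_VALUE is mpv's "no timestamp" marker):

    if (!ctx->has_base && pkt_pts != MP_NOPTS_VALUE) {
        ctx->pts_base = pkt_pts;        /* first encountered packet timestamp */
        ctx->has_base = true;
    }
    /* The first decoded frame gets the base directly; only later frames add
     * accumulated frame durations. */
    double pts = ctx->pts_base + ctx->num_decoded * frame_duration;
    ctx->num_decoded += 1;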
Useless. Sometimes it might be useful to make some extremely broken
files work, but on the other hand --no-correct-pts is sufficient for
these cases.
While we still need some of the code for AVI, the "auto" mode in
particular inflated the size of the code.
This can't be handled correctly at all. Other cases when the decoder
might drop a frame (such as completely failing to decode a frame) will
shift timestamps by a frame, and it can't be avoided.
While we could maybe find a better way to handle this with libavcodec's
main decoders, this seems to be much harder if it should work with
certain HW decoders, which don't passthrough the DTS field (such as
MMAL). Another problem is .avi files with b-frames. So just leave it
as it is.
The manpage entry explains this.
(Maybe this behavior could always be enabled, and the option removed. I
don't quite remember what valid use-cases there are for just disabling audio
entirely, other than that this is also needed for audio decoder init
failure.)
Make the code a bit more uniform. Always build a "dummy" audio output
list before probing, which means that opening preferred devices and
pure auto-probing is done with the same code. We can drop the second
ao_init() call.
This also makes the next commit easier, which wants to selectively
fall back to ao_null. This could have been implemented by passing a
different requested audio output list (instead of reading it from
MPOptions), but I think it's better if this rather special feature
is handled internally in the AO code. This also makes sure the AO
code can handle its own options (such as the audio output list) in
a self-contained way.
This was used only by the timestamp sorting code, which is a fallback
for avi files (as well as avi-muxed mkv files). This was supposed to
prevent accumulating timestamps in case the decoder consumes more
packets than it outputs frames (i.e. frames are dropped). This didn't
work very well (timestamps could be off by a large amount), the
estimation of the delay was fragile, and the interdependencies with the
decoder were annoying, so kill it.