Don't wait for WASAPI to send another feed event if we detect an underfull
buffer. It seems that WASAPI doesn't always send extra feed events if
something causes rendering to fall behind. This causes every subsequent playback
buffer to under-run until playback is reset. The fix is simply to do a one-shot
double feed when this happens, which allows rendering to catch up with playback.
This was observed to happen when using MsgWaitForMultipleObjects to wait for the
feed event and toggling fullscreen with vo=opengl:backend=win. This commit
improves the behaviour in that specific case and more generally makes exclusive
mode significantly more robust.
This commit also moves the logic that avoids *over*filling the exclusive-mode
buffer into thread_feed, right next to the above-described underfill logic.
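A minimal sketch of how the combined logic in thread_feed could look (illustrative only; feed_buffer and the state fields are hypothetical names, not mpv's actual code):

    #define COBJMACROS
    #include <audioclient.h>

    struct wasapi_state {
        IAudioClient *pAudioClient;
        AUDCLNT_SHAREMODE share_mode;
        UINT32 bufferFrameCount;
    };

    // Hypothetical helper: writes 'frames' frames into the device buffer.
    void feed_buffer(struct wasapi_state *state, UINT32 frames);

    static void thread_feed(struct wasapi_state *state)
    {
        UINT32 frame_count = state->bufferFrameCount;

        if (state->share_mode == AUDCLNT_SHAREMODE_EXCLUSIVE) {
            UINT32 padding = 0;
            IAudioClient_GetCurrentPadding(state->pAudioClient, &padding);
            // Avoid *over*filling: only write into the space that is free.
            frame_count -= padding;
            // One-shot double feed: if the device drained the buffer
            // completely, rendering fell behind and WASAPI may not send an
            // extra feed event, so feed twice to let it catch up.
            if (padding == 0)
                feed_buffer(state, frame_count);
        }
        feed_buffer(state, frame_count);
    }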
OpenSL ES is used on Android. At the moment only stereo output is
supported. Two options are supported: 'frames-per-buffer' and
'sample-rate'. To get better latency, the user of libmpv should pass
values obtained from AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER)
and AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).
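For illustration, a libmpv user might wire this up roughly as follows (a sketch; the sub-option syntax is an assumption based on the option names above, and the numeric values stand in for whatever the AudioManager calls return on the Java side):

    #include <mpv/client.h>

    static void setup_android_audio(mpv_handle *mpv)
    {
        // 256 and 48000 stand in for the values obtained from
        // AudioManager.getProperty(...) and passed down via JNI.
        mpv_set_option_string(mpv, "ao",
                "opensles:frames-per-buffer=256:sample-rate=48000");
    }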
_Of course_ the previous commit broke --force-window behavior (like it
does every single time I touch it).
vo_has_frame() gets cleared after a seek, so e.g. stopping playback of a
file and skipping to the next one by holding down the seek key will pass
through a short moment without video at the end of the first file, which
sets the stalled_video variable to true. Prevent this by using the indication
whether the window was properly created (which is probably exactly what
we want here).
This function is also responsible for destroying the window when needed,
and obviously we should never do that while video is active. (This is
the actual bug, although the other change in this commit already hides
the common breakage it caused.)
If a stream is marked as EOF (because no packets could be found within
reach), we need to wake up the decoder. This is especially important if no packets
are found at the start of the file, so the A/V sync logic actually
starts playback, instead of waiting for packets that will never come.
(It would randomly start playback when running the playback loop due to
arbitrary external events like user input.)
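Sketched, the idea is simply to signal the waiters when flagging EOF (names invented for illustration; the real wakeup path in demux.c differs):

    #include <stdbool.h>

    struct demux_stream {
        bool eof;
        // ...
    };

    void wakeup_ds(struct demux_stream *ds); // hypothetical wakeup hook

    static void mark_stream_eof(struct demux_stream *ds)
    {
        ds->eof = true;
        // Wake the decoder so the A/V sync logic re-evaluates immediately
        // instead of blocking until some unrelated event arrives.
        wakeup_ds(ds);
    }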
Some oddities that are not needed anymore. The only thing that still
referenced them was the code for avoiding loading external files more than
once, which now works by checking the list of tracks instead.
Why was this done so stupidly, with so many complicated special cases,
before? Declare it once so the shader bits don't have to figure out where
and when to do so themselves.
Commit 943f76e6, which already tried this, was very stupid: it didn't
actually override the samplerate for Opus, but overrode it for all
codecs other than Opus. And even then, it failed to use the overridden
samplerate. (Sigh...)
Fixes relative seeks. Without this, a seek back could skip so much data
that the seek would effectively jump forward. (Or insert silence for
files with video.)
There's the question whether the frontend should do this instead (by
using information from the decoders), but for now this seems more
proper.
demux_mkv.c does this already, sort of.
libavformat doesn't do this for seeks in .ogg (aka .opus), but might be doing it
for mkv. Seems to be a mess as well.
Fixes correctness_trimming_nobeeps.opus. One nasty thing is that this
mechanism interferes with the container-signalled mechanism with
AV_FRAME_DATA_SKIP_SAMPLES. So apply it only if that is apparently not
present. It's a mess, and it's still broken in FFmpeg CLI, so I'm sure
this will get fucked up later again.
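The presence check amounts to something like this (a sketch using the public FFmpeg side-data API; the function name is invented):

    #include <stdbool.h>
    #include <libavutil/frame.h>

    // Fall back to our own trimming only when libavcodec didn't already
    // signal it via side data; otherwise we'd trim twice.
    static bool should_apply_own_trimming(const AVFrame *frame)
    {
        return !av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);
    }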
I'm not quite sure what the FFmpeg AV_FRAME_DATA_SKIP_SAMPLES API
demands here. The code so far assumed that skipping can be more than a
frame, but not trimming. Extend it to trimming too.
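The side data is two little-endian 32-bit counts (skip from the start, trim from the end); a sketch of reading them and clipping both to the current frame follows (actual sample removal, and carrying the remainder over to later frames, omitted):

    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    static void get_skip_trim(const AVFrame *frame,
                              uint32_t *skip, uint32_t *trim)
    {
        *skip = *trim = 0;
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);
        if (!sd || sd->size < 8)
            return;
        *skip = AV_RL32(sd->data);      // samples to drop from the start
        *trim = AV_RL32(sd->data + 4);  // samples to drop from the end
        // Both values may exceed this frame; clip, and remember that the
        // remainder must be applied to subsequent frames.
        if (*skip > (uint32_t)frame->nb_samples)
            *skip = frame->nb_samples;
        if (*trim > (uint32_t)frame->nb_samples - *skip)
            *trim = frame->nb_samples - *skip;
    }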
This is actually already done by dec_audio.c. But if
AV_FRAME_DATA_SKIP_SAMPLES is applied, this happens too late here. The
problem is that this will slice off samples, and make it impossible for
later code to reconstruct the timestamp properly.
Missing timestamps can still happen with some demuxers, e.g. demux_mkv.c
with Opus tracks. (Although libavformat interpolates these itself.)
I think the conclusion is that AV_PKT_DATA_SKIP_SAMPLES is misdesigned
(at least for some formats), and an alternative mechanism using
durations would be better. (Combining it with a proper timebase would
keep sample-accuracy.)
This happens only if the new segment wasn't read yet.
This is not quite proper, and is a problem with dec_sub.c internals.
Ideally, it'd wait with rendering until a new enough segment has been
read. Normally, the new segment is available immediately, so the end
will be automatically clipped by switching to the right segment in the
exact moment it's supposed to become effective.
Usually shouldn't cause any problems, though.
Doing --hwdec=auto ends up picking dxva2, creating a decoder, and then
sending D3D frames down the video chain, which immediately fails and
falls back to software.
Consider dxva2 only if the VO provides a context. If this fails,
autoprobing will proceed to try dxva2-copy as usual.
Fixes #2844.
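Roughly, the probe gate looks like this (a sketch; the struct and error-code names approximate mpv's hwdec interface of the time and should be treated as assumptions):

    // Reject non-copy dxva2 when the VO didn't provide a D3D context, so
    // that --hwdec=auto falls through to dxva2-copy instead of failing.
    static int dxva2_probe(struct mp_hwdec_info *info)
    {
        if (!info || !info->hwctx || !info->hwctx->d3d_ctx)
            return HWDEC_ERR_NO_CTX; // autoprobing proceeds to dxva2-copy
        return 0;
    }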
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
    packet->buffer += ret;
    packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
There is some strange code that sets the DTS of the packet to its PTS (but
only if it's not AVI), which apparently helps with timestamp determination
for some broken files. This code is annoying because it tries to avoid
mutating the packet (which it logically doesn't own). Move it to a place
that does own the packet, and get rid of the packet_copy mess.
Needed for the following commit.
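After the move, the workaround can simply mutate the demuxer-owned packet in place; schematically (the AVI check is a hypothetical stand-in for however the demuxer detects the format):

    #include <stdbool.h>
    #include <libavformat/avformat.h>

    // Sketch: no packet_copy needed once this runs where the packet is owned.
    static void dts_from_pts_hack(AVPacket *pkt, bool is_avi)
    {
        if (!is_avi)
            pkt->dts = pkt->pts;
    }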
This tries to determine whether packet PTS values are accurate and can
be used for frame dropping during seeking. Move both checks (PTS is
missing; PTS is non-monotonic) to the earliest place where they can be
done.
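Schematically, the two checks look like this (a sketch; the struct and flag names are invented, and MP_NOPTS_VALUE is redefined here only for self-containment):

    #include <stdbool.h>

    #define MP_NOPTS_VALUE (-0x1p+63) // mpv's missing-timestamp marker

    struct pts_check {
        double last_pts;
        bool reliable_pts; // false: don't use PTS for seek frame dropping
    };

    static void check_packet_pts(struct pts_check *c, double pts)
    {
        if (pts == MP_NOPTS_VALUE || pts < c->last_pts)
            c->reliable_pts = false; // missing or non-monotonic PTS
        else
            c->last_pts = pts;
    }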
The WGL_NV_DX_interop spec says that a shared IDirect3DSurface9 must not
be lockable, but off-screen plain surfaces are always lockable and using
them causes Nvidia drivers to crash. Use a rendertarget for the shared
surface instead.
This also renames the DX_interop handle for the rendertarget after the
DirectX object (rather than the GL one), matching the convention used in
context_dxinterop.c.
This file was rewritten from scratch in 0cef033, so it should be okay.
As mentioned in #730, it's a complete rewrite referencing only MSDN and
POSIX, rather than the original code.
Apple crap (namely hardware decoding interop) forces us to use rectangle
textures for input. But after that we continue with normal textures.
This was not considered for debanding, and the sampler type used for it
can be different depending on the exact render chain. Simply use the
target type of the input texture.
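In practice this boils down to deriving the GLSL sampler type from the input texture's target rather than assuming GL_TEXTURE_2D; a sketch (function name invented):

    #include <GL/gl.h>
    #ifndef GL_TEXTURE_RECTANGLE
    #define GL_TEXTURE_RECTANGLE 0x84F5
    #endif

    // Map the GL texture target of the input to the sampler type that the
    // deband shader must declare.
    static const char *sampler_for_target(GLenum target)
    {
        return target == GL_TEXTURE_RECTANGLE ? "sampler2DRect" : "sampler2D";
    }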