Commit 943f76e6, which already tried this, was very stupid: it didn't
actually override the samplerate for Opus, but overrode it for all
codecs other than Opus. And even then, it failed to use the overridden
samplerate. (Sigh...)
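For reference, the intended override amounts to something like the
following (a hedged sketch, not the actual mpv change; field names follow
current libavformat):

    #include <libavformat/avformat.h>

    // Opus always decodes to 48 kHz, no matter what rate the container
    // header advertises, so report that rate to the rest of the player.
    static void override_opus_samplerate(AVStream *st)
    {
        if (st->codecpar->codec_id == AV_CODEC_ID_OPUS)
            st->codecpar->sample_rate = 48000;
    }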
Fixes relative seeks. Without this, a seek back could skip so much data
that the seek would effectively jump forward. (Or insert silence for
files with video.)
There's the question of whether the frontend should do this instead (by
using information from the decoders), but for now this seems more
proper.
demux_mkv.c does this already, sort of.
libavformat doesn't do this for seeks in .ogg (aka .opus), but might be
doing it for mkv. Seems to be a mess as well.
Fixes correctness_trimming_nobeeps.opus. One nasty thing is that this
mechanism interferes with the container-signalled mechanism using
AV_FRAME_DATA_SKIP_SAMPLES. So apply it only if that is apparently not
present. It's a mess, and it's still broken in FFmpeg CLI, so I'm sure
this will get fucked up later again.
I'm not quite sure what the FFmpeg AV_FRAME_DATA_SKIP_SAMPLES API
demands here. The code so far assumed that skipping can be more than a
frame, but not trimming. Extend it to trimming too.
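For context, the side data has a fixed layout (per FFmpeg's
AV_PKT_DATA_SKIP_SAMPLES documentation); a hedged sketch of reading it,
without the mpv-specific plumbing:

    #include <libavutil/frame.h>
    #include <libavutil/intreadwrite.h>

    // Returns the number of samples to skip from the start and to trim
    // from the end of this frame, or 0/0 if no side data is present.
    static void get_skip_trim(const AVFrame *frame,
                              uint32_t *skip, uint32_t *trim)
    {
        *skip = *trim = 0;
        AVFrameSideData *sd =
            av_frame_get_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES);
        if (sd && sd->size >= 10) {
            *skip = AV_RL32(sd->data);      // u32le: skip at start
            *trim = AV_RL32(sd->data + 4);  // u32le: trim at end
        }
    }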
This is actually already done by dec_audio.c. But if
AV_FRAME_DATA_SKIP_SAMPLES is applied, this happens too late here. The
problem is that this will slice off samples, and make it impossible for
later code to reconstruct the timestamp properly.
Missing timestamps can still happen with some demuxers, e.g. demux_mkv.c
with Opus tracks. (Although libavformat interpolates these itself.)
I think the conclusion is that AV_PKT_DATA_SKIP_SAMPLES is misdesigned
(at least for some formats), and an alternative mechanism using
durations would be better. (Combining it with a proper timebase would
keep sample-accuracy.)
This happens only if the new segment wasn't read yet.
This is not quite proper, and is a problem with dec_sub.c internals.
Ideally, it'd wait with rendering until a new enough segment has been
read. Normally, the new segment is available immediately, so the end
will be automatically clipped by switching to the right segment in the
exact moment it's supposed to become effective.
Usually shouldn't cause any problems, though.
Doing --hwdec=auto ends up picking dxva2, creating a decoder, and then
sending D3D frames down the video chain, which immediately fails and
falls back to software.
Consider dxva2 only if the VO provides a context. If this fails,
autoprobing will proceed to try dxva2-copy as usual.
Fixes #2844.
This is in preparation for a hypothetical API change in libavcodec,
which would allow the decoder to return multiple video frames before
accepting a new input packet.
In theory, the body of the if() added to vd_lavc.c could be replaced
with this code:
  packet->buffer += ret;
  packet->len -= ret;
but currently this is not needed, as libavformat already outputs one
frame per packet. Also, using libavcodec this way could lead to a
"deadlock" if the decoder refuses to consume e.g. garbage padding, so
enabling this now would introduce bugs.
(Adding this now for easier testing, and for symmetry with the audio
code.)
There is some strange code which sets the DTS of the packet to PTS (but
only if it's not AVI), which apparently helps with timestamp
determination with some broken files. This code is annoying because it
tries to avoid mutating the packet (which it logically doesn't own).
Move it to a place where the code does own the packet, and get rid of
the packet_copy mess.
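Roughly, the hack in question is (hedged sketch; condition and placement
are illustrative, not the real dec_video.c code):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    // Some broken files carry bogus or missing DTS, so force it to PTS
    // before decoding - except for AVI, where DTS is authoritative.
    static void fix_dts(AVPacket *pkt, bool is_avi)
    {
        if (!is_avi && pkt->pts != AV_NOPTS_VALUE)
            pkt->dts = pkt->pts;
    }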
Needed for the following commit.
This tries to determine whether packet PTS values are accurate and can
be used for frame dropping during seeking. Move both checks (PTS is
missing; PTS is non-monotonic) to the earliest place where they can be
done.
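A hedged sketch of what such a check looks like (names are illustrative,
not mpv's actual fields):

    #include <stdbool.h>

    struct pts_check {
        bool have_last;
        double last_pts;
        bool reliable;      // starts out true
    };

    // Packet PTS is only trusted for framedropping if no PTS is missing
    // and PTS never goes backwards.
    static void feed_pts(struct pts_check *c, bool has_pts, double pts)
    {
        if (!has_pts) {
            c->reliable = false;                // missing PTS
            return;
        }
        if (c->have_last && pts < c->last_pts)
            c->reliable = false;                // non-monotonic PTS
        c->have_last = true;
        c->last_pts = pts;
    }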
The WGL_NV_DX_interop spec says that a shared IDirect3DSurface9 must not
be lockable, but off-screen plain surfaces are always lockable and using
them causes Nvidia drivers to crash. Use a rendertarget for the shared
surface instead.
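In essence (a hedged sketch; parameters are illustrative and the real
code is in context_dxinterop.c):

    #define COBJMACROS
    #include <d3d9.h>

    // Create the shared surface as a non-lockable render target instead
    // of an off-screen plain surface (which is always lockable).
    static HRESULT create_shared_target(IDirect3DDevice9Ex *dev,
                                        UINT w, UINT h,
                                        IDirect3DSurface9 **out,
                                        HANDLE *share)
    {
        return IDirect3DDevice9Ex_CreateRenderTarget(dev, w, h,
            D3DFMT_X8R8G8B8, D3DMULTISAMPLE_NONE, 0,
            FALSE /* Lockable */, out, share);
    }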
This also renames the DX_interop handle for the render target after the
DirectX object (rather than the GL one), matching the convention used in
context_dxinterop.c.
This file was rewritten from scratch in 0cef033, so it should be okay.
As mentioned in #730, it's a complete rewrite referencing only MSDN and
POSIX, rather than the original code.
Apple crap (namely hardware decoding interop) forces us to use rectangle
textures for input. But after that we continue with normal textures.
This was not considered for debanding, and the sampler type used for it
can be different depending on the exact render chain. Simply use the
target type of the input texture.
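Something along these lines (hedged sketch, not the actual
video_shaders.c code):

    #include <GL/gl.h>
    #include <GL/glext.h>

    // Derive the GLSL sampler type from the input texture's target
    // instead of assuming sampler2D, so debanding also works on
    // rectangle textures.
    static const char *sampler_for_target(GLenum target)
    {
        return target == GL_TEXTURE_RECTANGLE ? "sampler2DRect"
                                              : "sampler2D";
    }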
* use mp_HRESULT_to_str/mp_LastError_to_str
* make some messages non-identical
* replace "GL" -> "OpenGL"
* change some MP_FATAL to MP_ERR where they don't actually kill the VO
This is useful in particular for GetLastError. Unfortunately, it's still
pretty dumb with regard to WASAPI- or D3D-specific errors, so keep the
hresult_to_string switch.
Apparently, some drivers require you to allocate all of the decoder d3d surfaces
at once. This commit changes the strategy from allocating surfaces as needed via
mp_image_pool_set_allocator, to allocating all the surfaces in one call to
IDirectXVideoDecoderService_CreateSurface and adding them to the pool with
mp_image_pool_add.
Fixes #2822.
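The new strategy is roughly (hedged sketch; error handling and the
mp_image wrapping are omitted):

    #define COBJMACROS
    #include <d3d9.h>
    #include <dxva2api.h>

    // Allocate all decoder surfaces with one call; each surface would
    // then be wrapped in an mp_image and handed to mp_image_pool_add().
    static HRESULT alloc_decoder_surfaces(IDirectXVideoDecoderService *svc,
                                          UINT w, UINT h, UINT count,
                                          IDirect3DSurface9 **surfaces)
    {
        // "BackBuffers" means the number of surfaces minus one here.
        return IDirectXVideoDecoderService_CreateSurface(svc, w, h,
            count - 1, (D3DFORMAT)MAKEFOURCC('N', 'V', '1', '2'),
            D3DPOOL_DEFAULT, 0, DXVA2_VideoDecoderRenderTarget,
            surfaces, NULL);
    }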
Provide a way for the user to add mp_images to the pool. This is required for
dxva2, for which using set_allocator is extremely awkward since all the d3d9
surfaces must be allocated in advance and all together.
I mistakenly copied the wrong license text into these files when
I created them. Since I'm the only one to have touched these files,
it should be OK to change them.
This is achieved indirectly by deselecting all streams for the
non-current segment (and only if that segment doesn't share the demuxer
with the currently active one).
Restores functionality added with commit 46bcdb70.
This uses a different method to piece segments together. The old
approach basically changes to a new file (with a new start offset) any
time a segment ends. This meant waiting for audio/video end on segment
end, and then changing to the new segment all at once. It had a very
weird impact on the playback core, and some things (like truly gapless
segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else. It is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as allowing the
concatenation of files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanisms to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
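A hedged sketch of the second point (purely illustrative; the real code
clips whole sample/pixel buffers, not just timestamps):

    // Trim the part of a decoded frame that lies outside the segment.
    struct segment_range { double start, end; };

    static void clip_to_segment(struct segment_range seg,
                                double *pts, double *dur)
    {
        if (*pts < seg.start) {             // drop leading part
            *dur -= seg.start - *pts;
            *pts = seg.start;
        }
        if (*pts + *dur > seg.end)          // drop trailing part
            *dur = seg.end - *pts;
        if (*dur < 0)
            *dur = 0;
    }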
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out of the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shared - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.