The WGL_NV_DX_interop spec says that a shared IDirect3DSurface9 must not
be lockable, but off-screen plain surfaces are always lockable and using
them causes Nvidia drivers to crash. Use a rendertarget for the shared
surface instead.
This also renames the DX_interop handle for the rendertarget after the
DirectX object (rather than the GL one), following the convention used
in context_dxinterop.c.
This file was rewritten from scratch in 0cef033, so it should be okay.
As mentioned in #730, it's a complete rewrite referencing only MSDN and
POSIX, rather than the original code.
Apple crap (namely hardware decoding interop) forces us to use rectangle
textures for input. But after that we continue with normal textures.
This was not considered for debanding, and the sampler type used for it
can be different depending on the exact render chain. Simply use the
target type of the input texture.
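A minimal illustration of the last point, not the actual shader
generator code:

    static const char *sampler_for_target(GLenum target)
    {
        // Pick the GLSL sampler type from the input texture's target
        // instead of assuming a fixed one.
        switch (target) {
        case GL_TEXTURE_RECTANGLE: return "sampler2DRect";
        case GL_TEXTURE_3D:        return "sampler3D";
        default:                   return "sampler2D";
        }
    }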
* use mp_HRESULT_to_str/mp_LastError_to_str
* make some messages non-identical
* replace "GL" -> "OpenGL"
* change some MP_FATAL to MP_ERR that don't actually kill the vo
This is useful in particular for GetLastError. Unfortunately, it's still
pretty dumb with regard to WASAPI- or D3D-specific errors, so keep the
hresult_to_string switch.
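A hedged sketch of the intended style (the surrounding variables are
made up; MP_ERR is used where the failure doesn't kill the vo):

    if (FAILED(hr))
        MP_ERR(ctx, "Couldn't create OpenGL interop texture: %s\n",
               mp_HRESULT_to_str(hr));
    if (!DestroyWindow(hwnd))
        MP_ERR(ctx, "Couldn't destroy window: %s\n", mp_LastError_to_str());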
Apparently, some drivers require you to allocate all of the decoder d3d surfaces
at once. This commit changes the strategy from allocating surfaces as needed via
mp_image_pool_set_allocator, to allocating all the surfaces in one call to
IDirectXVideoDecoderService_CreateSurface and adding them to the pool with
mp_image_pool_add.
Fixes #2822.
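Roughly, the new allocation path looks like this (a sketch, not the
commit's code; MAX_SURFACES, the size/format variables and
wrap_surface_in_mp_image() are placeholders):

    IDirect3DSurface9 *surfaces[MAX_SURFACES] = {0};
    HRESULT hr = IDirectXVideoDecoderService_CreateSurface(
            decoder_service, w, h,
            MAX_SURFACES - 1,   // BackBuffers: creates BackBuffers + 1 surfaces
            target_format, D3DPOOL_DEFAULT, 0,
            DXVA2_VideoDecoderRenderTarget,
            surfaces, NULL);
    if (FAILED(hr))
        return -1;
    for (int i = 0; i < MAX_SURFACES; i++) {
        // hypothetical helper that wraps the surface in an mp_image
        struct mp_image *img = wrap_surface_in_mp_image(surfaces[i]);
        mp_image_pool_add(pool, img);   // the pool takes ownership
    }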
Provide a way for the user to add mp_images to the pool. This is required for
dxva2, for which using set_allocator is extremely awkward since all the d3d9
surfaces must be allocated in advance and all together.
I mistakenly copied the wrong license text into these files when
I created them. Since I'm the only one to have touched these files,
it should be OK to change them.
This is achieved indirectly by deselecting all streams for the
non-current segment (and only if that segment doesn't share the demuxer
with the currently active one).
Restores functionality added with commit 46bcdb70.
This uses a different method to piece segments together. The old
approach basically switched to a new file (with a new start offset) any
time a segment ended. This meant waiting for audio/video to end on
segment end, and then changing to the new segment all at once. It had a
very weird impact on the playback core, and some things (like truly
gapless segment transitions, or frame backstepping) just didn't work.
The new approach adds the demux_timeline pseudo-demuxer, which presents
a uniform packet stream from the many segments. This is pretty similar
to how ordered chapters are implemented everywhere else, and is also
reminiscent of the FFmpeg concat pseudo-demuxer.
The "pure" version of this approach doesn't work though. Segments can
actually have different codec configurations (different extradata), and
subtitles are most likely broken too. (Subtitles have multiple corner
cases which break the pure stream-concatenation approach completely.)
To counter this, we do two things:
- Reinit the decoder with each segment. We go as far as to allow
concatenating files with completely different codecs for the sake
of EDL (which also uses the timeline infrastructure). A "lighter"
approach would try to make use of decoder mechanisms to update e.g.
the extradata, but that seems fragile.
- Clip decoded data to segment boundaries. This is equivalent to
normal playback core mechanisms like hr-seek, but now the playback
core doesn't need to care about these things.
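A minimal sketch of the clipping idea, with hypothetical names (the
real code lives in the decoder wrappers):

    static bool frame_within_segment(double pts, double seg_start,
                                     double seg_end)
    {
        // Frames before the segment start exist only for preroll/hr-seek;
        // frames at or past the segment end belong to the next segment.
        return pts >= seg_start && pts < seg_end;
    }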
These two mechanisms are equivalent to what happened in the old
implementation, except they don't happen in the playback core anymore.
In other words, the playback core is completely relieved from timeline
implementation details. (Which honestly is exactly what I'm trying to
do here. I don't think ordered chapter behavior deserves improvement,
even if it's bad - but I want to get it out from the playback core.)
There is code duplication between audio and video decoder common code.
This is awful and could be shareable - but this will happen later.
Note that the audio path has some code to clip audio frames for the
purpose of codec preroll/gapless handling, but it's not shared as
sharing it would cause more pain than it would help.
When playback of a video ends, and the next file has no video at all (no
cover art or anything), then the window must be cleared.
This also resizes the window forcibly, which is by design.
Fixes#2825.
A client API user is allowed to call mpv_opengl_cb_uninit_gl() followed
by mpv_opengl_cb_init_gl(). This crashed; fix it by fixing the lifetime
of ctx->gl.
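A hedged usage sketch of the sequence this fixes (opengl-cb API;
my_get_proc_address and the GL context handling belong to the client):

    mpv_opengl_cb_context *gl = mpv_get_sub_api(mpv, MPV_SUB_API_OPENGL_CB);
    mpv_opengl_cb_init_gl(gl, NULL, my_get_proc_address, NULL);
    // ... render some frames ...
    mpv_opengl_cb_uninit_gl(gl);
    // Re-initializing afterwards is allowed and must not crash:
    mpv_opengl_cb_init_gl(gl, NULL, my_get_proc_address, NULL);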
This is required so that the individual surfaces can travel beyond the
dxva2 decoder and be passed on to the vo.
This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2, which is
required for maintaining and releasing the surface even if the decoder code is
uninited.
The IDirectXVideoDecoder itself is encapsulated together with its surface pool
and configuration in a dxva2_decoder structure whose creation and destruction is
managed by talloc.
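Roughly, the ownership model is something like this (field names are
illustrative; cleanup hangs off the talloc destructor):

    struct dxva2_decoder {
        DXVA2_ConfigPictureDecode config;
        IDirectXVideoDecoder     *decoder;
        struct mp_image_pool     *pool;   // holds the pre-allocated surfaces
    };

    static int dxva2_decoder_destructor(struct dxva2_decoder *d)
    {
        if (d->decoder)
            IDirectXVideoDecoder_Release(d->decoder);
        return 0;
    }

    // creation: d = talloc_zero(parent, struct dxva2_decoder);
    //           talloc_set_destructor(d, dxva2_decoder_destructor);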
Especially useful to see what video formats are involved on the various
filter links.
I suspect this function is not available on Libav, so add necessary
ifdeffery preemptively.
Don't allow rounding to let it underflow to 0. A width or height of 0 is
simply not allowed and could cause problems elsewhere.
Indirectly fixes CID 1350057, which complains about not checking the
resulting output size values before using them in divisions.
Coverity thinks that integer_conv_fbo[index] can be accessed with index
up to 5. Although that is theoretical only, it has a point in that this
makes no sense. Use the same constant for the array allocation, to make
it more uniform and robust.
Fixes CID 1350060.
The code is shared with the --vd-lavc-threads option, so using 0 for
auto-detection just works.
But no, this is not useful. Just change it for orthogonality.