gl_video_set_options() does not acquire ownership of the opts parameter
or its contents. In the case of vo_cmdline, opts will point to temporary
memory. This memory will be freed at a later point, and p->opts will
point to freed memory on the next reinitialization.
The fix is pretty ugly, but it's a quick bug fix. This can probably be
removed once VO sub-options are exposed as properties.
Fixes #2035.
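A minimal sketch of the idea behind the quick fix, with invented field
names and stand-in option fields (the real code uses mpv's talloc-based
allocation): copy the caller's struct into storage owned by the
renderer, so p->opts never aliases the temporary vo_cmdline memory.

    struct gl_video_opts { int scaler; float gamma; };  // stand-in fields

    struct gl_video {
        struct gl_video_opts *opts;      // what the renderer reads
        struct gl_video_opts opts_copy;  // owned storage (invented name)
    };

    void gl_video_set_options(struct gl_video *p,
                              const struct gl_video_opts *opts)
    {
        p->opts_copy = *opts;     // struct copy into owned storage; any
                                  // pointers inside would need duplicating
        p->opts = &p->opts_copy;  // never keep the caller's pointer
    }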
Absence of license header implies LGPL, as mentioned in the "Copyright"
file. But vaapi.h contains some code taken from the mplayer-vaapi
patch, which was under the typical MPlayer license.
All vo_gl.c related code has been GPL+LGPL dual-licensed. The OSD code
is no exception and is also derived from vo_gl.c. Thus it should have
the same license (although I think technically speaking sub-licensing
it by removing one of the licenses is ok).
Commits 92b27be and f4ce99d removed the high-fps logic due to a bug. That
bug was a missing parenthesis around everything after duration >= 0 && ...
in the removed code.
This patch restores the removed code, fixes the bug and then refactors the
code a bit.
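The bug class in isolation (hypothetical identifiers, not the actual mpv
expression): && binds tighter than ||, so dropping the parenthesis around
everything after duration >= 0 silently changes the condition.

    #include <stdio.h>

    int main(void)
    {
        double duration = -1;         // "unknown duration" case
        int hi_fps = 0, fallback = 1;

        // Buggy: parsed as (duration >= 0 && hi_fps) || fallback,
        // so the branch is taken even though duration < 0.
        if (duration >= 0 && hi_fps || fallback)
            printf("buggy: taken\n");

        // Fixed: parenthesize everything after duration >= 0.
        if (duration >= 0 && (hi_fps || fallback))
            printf("fixed: taken\n");   // not reached: duration < 0
        return 0;
    }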
This reverts commit f1746741de.
Together with the other revert, this fixes #2023 (the reason being
broken framedrop handling - it was dropping frames when it shouldn't).
read_output_surface() could fail and return NULL.
Also, make sure we don't set the image to a size larger than the
allocated size. Normally this shouldn't happen, but in theory it could
in corner cases; this is for robustness.
We can't do much in this case, but at least we can avoid calling the
vdpau API functions with too large sizes. Apparently the API considers this
undefined behavior, and random stuff might happen.
The previous code was not wrong, but I'd claim this makes the code more
robust. If a situation could happen in which the passed surface size is
incorrect, we could have passed a too small image, and
VdpOutputSurfaceGetBitsNative could have randomly overwritten memory.
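A hedged sketch of the clamping, with an invented mp_image stand-in: the
size passed on to the vdpau read call never exceeds the allocation.

    struct mp_image { int w, h; };   // stand-in for the real struct

    // Returns -1 if read_output_surface() failed; otherwise clamps the
    // requested size so VdpOutputSurfaceGetBitsNative can never be
    // handed a size larger than the allocated image.
    static int clamp_to_allocation(struct mp_image *img, int *w, int *h)
    {
        if (!img)
            return -1;          // allocation failed, skip this frame
        if (*w > img->w)
            *w = img->w;
        if (*h > img->h)
            *h = img->h;
        return 0;
    }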
Check the maximum size of video surfaces, and refuse initialization if
the video is too large for them.
Maybe we could do something more sophisticated, like inserting a
software scaler. On the other hand, this would have a very questionable
benefit, as it would be guaranteed to be too slow.
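The check itself is a single query; a sketch, assuming query_caps was
fetched via VdpGetProcAddress with
VDP_FUNC_ID_VIDEO_SURFACE_QUERY_CAPABILITIES:

    #include <vdpau/vdpau.h>

    static int check_video_size(VdpDevice dev,
                                VdpVideoSurfaceQueryCapabilities *query_caps,
                                uint32_t w, uint32_t h)
    {
        VdpBool supported = VDP_FALSE;
        uint32_t max_w = 0, max_h = 0;
        if (query_caps(dev, VDP_CHROMA_TYPE_420, &supported,
                       &max_w, &max_h) != VDP_STATUS_OK || !supported)
            return -1;
        if (w > max_w || h > max_h)
            return -1;   // video too large: refuse initialization
        return 0;
    }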
This was missing for the extended deinterlacers.
Unfortunately, these deinterlacers still do not work. The provided future
frame (which is all the deinterlacers want) seems to be correct, though.
One minor behavioral change is that this always keeps the previous frame
for PTS computations. This could be avoided (in order to keep exactly
the same behavior as before), but it seems more elegant and should not
do any harm. (Also, if we really cared about reducing hw frame refs,
a more worthy goal is producing the field output incrementally.)
The mapped data (pointed to by the param variable) is not needed before,
so the call can be moved down. Also, this prevents the buffer from
remaining mapped forever if the other vaMapBuffer() call above fails
(the cleanup code forgets to unmap the buffer; this commit makes that
unnecessary).
This used a do-while loop, which runs only once, as a replacement for a
cleanup goto. While this is ok, doing a goto directly is easier to
follow and closer to idiomatic C. But mainly, remove it so that the
indentation can be reduced.
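Both shapes side by side (hypothetical code, not the actual function):

    #include <stdlib.h>

    // Run-once do-while as a cleanup mechanism: "break" jumps to the
    // cleanup, but the whole body pays with an extra indentation level.
    static int with_do_while(void)
    {
        int err = -1;
        void *buf = NULL;
        do {
            buf = malloc(64);
            if (!buf)
                break;
            err = 0;
        } while (0);
        free(buf);
        return err;
    }

    // The same logic with a direct goto: flat and closer to idiomatic C.
    static int with_goto(void)
    {
        int err = -1;
        void *buf = malloc(64);
        if (!buf)
            goto done;
        err = 0;
    done:
        free(buf);
        return err;
    }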
Reconfiguring with the same video size should never cause the window to
resize back to the video size (if the user changed its size). This was
broken and it resized anyway.
We do not fill them, so we would pass random IDs to the driver. The code
was originally written to handle bob deinterlacing only, so I guess it
originally always passed 0 anyway, despite having code for reference
surface list allocation.
Also, move down the vaUnmapBuffer() call. This call actually "unmaps"
the param pointer, so accessing it after the unmap call would be
undefined behavior. The "example" in <va/va_vpp.h> does this too, but
it's most likely an error.
(Additionally, not even bob deinterlacing worked correctly in my test,
sigh.)
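The safe ordering in a sketch (the field writes are illustrative, not
the actual filter setup): every access to the mapped memory happens
strictly between vaMapBuffer() and vaUnmapBuffer().

    #include <va/va.h>
    #include <va/va_vpp.h>

    static int fill_pipeline_params(VADisplay display, VABufferID buffer)
    {
        VAProcPipelineParameterBuffer *param = NULL;
        if (vaMapBuffer(display, buffer, (void **)&param)
                != VA_STATUS_SUCCESS)
            return -1;                       // nothing mapped yet

        param->num_forward_references = 0;   // all writes happen while
        param->num_backward_references = 0;  // the mapping is live

        vaUnmapBuffer(display, buffer);      // param is dead past this
        return 0;
    }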
This must have been some nonsense in the original vaapi mplayer patch.
While I still have no good idea what this "direct mapping" business is
about, it appears to be pretty much pointless. Nothing can hold
additional "real" surface references (due to how the API and mpv/lavc
refcounting work), so removing the additional surfaces won't break
anything. It still could be that this was for achieving additional
buffering (not reusing surfaces as soon), but we buffer some additional
data anyway. Plus, the original intention of the vaapi mplayer code was
probably to increase the surface count just by 1 or 2, not to actually
double it, and/or it was a "trick" to get to the maximum count of 21 when
h264
is in use.
gstreamer-vaapi uses "ref_frames + SCRATCH_SURFACES_COUNT" here, with
SCRATCH_SURFACES_COUNT defined to 4. It doesn't appear to check the
overlay attributes at all in the decoder.
In any case, remove this nonsense.
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of fallback. We must go through the trouble of
determining the first actual sw format.
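A sketch of the scan (this is the general libavutil way, not necessarily
the exact mpv code): walk the get_format() list and take the first entry
whose descriptor lacks the hwaccel flag.

    #include <libavutil/pixdesc.h>

    static enum AVPixelFormat first_sw_format(const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[i];   // first non-hwaccel entry wins
        }
        return AV_PIX_FMT_NONE;  // no sw format in the list
    }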
This somewhat reduces log spam while preempted.
The remaining message, "hardware accelerator failed to decode picture"
on every frame, cannot be prevented because it's hardcoded in
libavcodec.
If gl_hwdec_driver.map_image fails, all textures will be set to 0. This
in turn makes pass_prepare_src_tex() skip generation of the texture
uniforms, which leads to a shader compilation error: e.g. texture0 is
accessed by the shader but never declared.
Set the textures to an invalid non-0 ID instead. OpenGL can deal with
it.
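A sketch with an invented struct layout: any non-0 value works, since the
uniforms only have to exist for the shader to compile.

    struct plane_tex { unsigned gl_tex; };   // invented stand-in

    // On map_image failure, point the planes at an invalid non-0 name so
    // pass_prepare_src_tex() still generates the texture uniforms.
    static void invalidate_planes(struct plane_tex *planes, int n)
    {
        for (int i = 0; i < n; i++)
            planes[i].gl_tex = (unsigned)-1;   // invalid, but non-0
    }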
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This also will give us support for high bit depth profiles, and possibly
HEVC once libavcodec supports it.
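The entry point in question, sketched (the flags argument is optional;
using AV_HWACCEL_FLAG_IGNORE_LEVEL here is just an example):

    #include <libavcodec/vdpau.h>

    static int setup_vdpau(AVCodecContext *avctx, VdpDevice device,
                           VdpGetProcAddress *get_proc_address)
    {
        // libavcodec creates the VdpDecoder and maps the codec profile
        // itself; the caller only supplies the device and proc address.
        return av_vdpau_bind_context(avctx, device, get_proc_address,
                                     AV_HWACCEL_FLAG_IGNORE_LEVEL);
    }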
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
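The rounding itself is a one-liner; for example, with 16x16 macroblocks:

    #include <assert.h>

    #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
        // A 1918x1078 video with a 1920x1088 coded size: aligning the
        // display size up to 16x16 macroblocks recovers the coded size.
        assert(ALIGN_UP(1918, 16) == 1920);
        assert(ALIGN_UP(1078, 16) == 1088);
        return 0;
    }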
Always configure the vdpau mixer based on the current surface sent to
it. Before this, we just hardcoded the chroma type, and the surface size
was essentially a guess.
Calling VdpVideoSurfaceGetParameters() on every surface is a bit
suspicious, but it appears it's a cheap function (just requiring some
locks and a table lookup). This way we avoid creating another
complicated mechanism to carry around the actual surface parameters
with a mp_image/AVFrame.
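The per-surface query, sketched (get_params is the pointer obtained via
VdpGetProcAddress with VDP_FUNC_ID_VIDEO_SURFACE_GET_PARAMETERS):

    #include <vdpau/vdpau.h>

    static int query_surface(VdpVideoSurfaceGetParameters *get_params,
                             VdpVideoSurface surface, VdpChromaType *chroma,
                             uint32_t *w, uint32_t *h)
    {
        // Cheap per-frame call; saves carrying the parameters around
        // with each mp_image/AVFrame.
        return get_params(surface, chroma, w, h) == VDP_STATUS_OK ? 0 : -1;
    }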
There's not much of a reason to keep get_surface_hwdec() and
get_buffer2_hwdec() separate. Actually, the way the mpi->AVFrame
referencing is done makes this confusing. The separation is probably
an artifact of the pre-libavcodec-refcounting compatibility glue.
Most of hardware decoding is initialized lazily. When the first packet
is parsed, libavcodec will call get_format() to check whether hw or sw
decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
get_format() if hw decoder initialization failed. This caused the
avcodec_decode_video2() call to fail, which in turn let us trigger the
fallback. We didn't return a sw format from get_format(), because we
didn't want to continue decoding at all. (The reason being that full
reinitialization is more robust when continuing sw decoding.)
This has some disadvantages. libavcodec vomited some unwanted error
messages. Sometimes the failures are more severe, like it happened with
HEVC. In this case, the error code path simply acted up in a way that
was extremely inconvenient (and had to be fixed by myself). In general,
libavcodec is not designed to fall back this way.
Make it a bit less violent from the API usage point of view. Return a sw
format if hw decoder initialization fails. In this case, we let
get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
allowed to perform its own sw fallback. But once the decode function
returns, we do the full reinitialization we wanted to do.
The result is that the fallback is more robust, and doesn't trigger any
decoder error codepaths or messages either. Change our own fallback
message to a warning, since there are no other messages with error
severity anymore.
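A sketch of the resulting get_format() shape (init_hw_decoder is a
hypothetical stand-in for the hw initialization; the rest is plain
libavcodec API):

    #include <libavcodec/avcodec.h>
    #include <libavutil/pixdesc.h>

    static int init_hw_decoder(AVCodecContext *avctx,
                               enum AVPixelFormat fmt);  // hypothetical

    static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                            const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && (d->flags & AV_PIX_FMT_FLAG_HWACCEL)) {
                if (init_hw_decoder(avctx, fmt[i]) >= 0)
                    return fmt[i];   // hw decoding works, use it
                continue;            // hw init failed, keep looking
            }
            // First sw format: libavcodec decodes with its default
            // buffers, and we do the full reinit after the decode call.
            return fmt[i];
        }
        return AV_PIX_FMT_NONE;      // nothing usable at all
    }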
They're completely orthogonal concepts, merged in the past due to
convenience and ease of implementing it in the old #ifdef hell renderer.
Especially after the CMS stuff was generalized by 634b4a, this was a
trivial change to implement. It also means color management will be of
much higher quality when enabled with vo=opengl (which had quantization
issues in the past due to the 8 bit FBO format and upscaling), since it
can now be done in a single pass.