iive agreed to LGPL 3.0+ only. All of his relevant changes are for
XvMC, for which mpv completely dropped support back in mplayer2 times.
But you could claim that the get_format code represents some residual
copyright (everything else added by iive was removed, only get_format
still is around in some form). While I doubt that this is really
copyright-relevant, assume it is for now.
michael is the original author of vd_lavc.c, so this file can become
LGPL only after the core becomes LGPL.
cehoyos did not agree with the LGPL relicensing, but all of his code is
gone.
Some others could not be reached, but their code is gone as well. In
particular, vdpau support, which was originally done by Nvidia, had
larger impact on vd_lavc.c, but vdpau support was first refactored a few
times (for the purpose of modularization) and moved to different files,
and then decoding was completely moved to libavcodec.
Lastly, assigning the "opaque" field was moved by Gwenole Beauchesne in
commit 8e5edec13e. Agreement is pending (due to copyright apparently
owned by the author's employer). So just undo the change; that way we
don't have to think about whether it is copyrightable.
Unfortunately quite a mess, in particular due to the need to have some
compatibility with the old API. (The old API will be supported only in
short term.)
The new API has literally no advantages (other than that we can drop
mp_vt_download_image and other things later), but it's sort-of uniform
with the other hwaccels.
"--videotoolbox-format=no" is not supported with the new API, because it
doesn't "fit in". Probably could be added later again.
The iOS code change is untested (no way to test).
It's not really guaranteed that other components always set this (e.g.
on subtle errors), so check it explicitly. Although I'm not aware of a
failing case.
If vo_opengl is used, and vo_opengl already created the vdpau interop
(for whatever reason), and then preemption happens, and then you tried
to enable hw decoding, it failed. The reason was that preemption recovery
is not run at any point before libavcodec accesses the vdpau device.
The actual impact was that with libmpv + opengl-cb use, hardware
decoding was permanently broken after display mode switching (something
that caused the display to get preempted at least with older drivers).
With mpv CLI, you can for example enable hw decoding during playback,
then disable it, VT switch to console, switch back to X, and try to
enable hw decoding again.
This is mostly because libav* does not deal with preemption, and
because NVIDIA's driver preemption behavior is horrible garbage. On top
of being a misdesigned API, the preemption callback is not called before
you try to access the vdpau API, and then only with _some_ accesses.
In summary, the preemption callback was never called, neither before nor
after libavcodec tried to init the decoder. So we have to get
mp_vdpau_handle_preemption() called before libavcodec accesses it. This
in turn will do a dummy API access which usually triggers the preemption
callback immediately (with NVIDIA's drivers).
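Roughly, the fix amounts to something like this (a hedged sketch; the
exact call site and argument use are from memory and may differ):

    // Before letting libavcodec touch the vdpau device, force a dummy
    // vdpau API call so NVIDIA's driver finally delivers the pending
    // preemption callback, and recover if it does.
    if (mp_vdpau_handle_preemption(vdpau_ctx, NULL) < 0)
        return -1; // device lost and could not be recreated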
In addition, we have to update the AVHWDeviceContext's device. In theory
it could change (in practice it usually seems to use handle "0").
Creating a new device would cause chaos, as we don't have a concept of
switching the device context on the fly. So we simply update it
directly. I'm fairly sure this violates the libav* API, but it's the
best we can do.
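A sketch of the in-place update (field names assumed, not checked
against the real code):

    // Patch the existing AVHWDeviceContext instead of creating a new
    // device - mpv has no way to swap device contexts on the fly.
    AVHWDeviceContext *hwctx = (void *)ctx->device_ref->data;
    AVVDPAUDeviceContext *vdctx = hwctx->hwctx;
    vdctx->device = vdpau_ctx->vdp_device;  // may be a new handle now
    vdctx->get_proc_address = vdpau_ctx->get_proc_address;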
Until now, the texture pointer was stored in plane 1, and the texture
array index was in plane 2. Move this down to plane 0 and plane 1. This
is to align it to the new WIP D3D11 decoding API in Libav, where we
decided that there is no reason to avoid setting plane 0, and that it
would be less weird to start at plane 0.
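In terms of mp_image plane assignments, the new layout looks roughly
like this (variable names made up):

    // Plane 0: the ID3D11Texture2D; plane 1: the texture array slice.
    mpi->planes[0] = (uint8_t *)texture;
    mpi->planes[1] = (uint8_t *)(intptr_t)array_index;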
Sigh...
The functionality is not actually needed for vdpau, but if the vdpau
hwaccel is present, the FFmpeg version is new enough that it includes
the field.
These decoders can select the decoding device with hw_device_ctx, but
don't use hw_frames_ctx (at least not in a meaningful way).
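For such decoders, setup reduces to handing libavcodec a device
reference, approximately:

    // device_ref was created with av_hwdevice_ctx_create(); the
    // decoder derives everything else internally.
    avctx->hw_device_ctx = av_buffer_ref(device_ref);
    if (!avctx->hw_device_ctx)
        return AVERROR(ENOMEM);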
Currently unused, but intended to be used for cuvid, as soon as it hits
ffmpeg git master.
Also make the vdpau and vaapi hwaccel definition structs static, as we
have removed the old code which would have had clashing external
declarations.
This drops support for the old libavcodec APIs. Now FFmpeg 3.3 or
FFmpeg git is required. Libav has no release with the new APIs yet, so
Libav git from a few weeks or months ago (or newer) is required if you
want to use Libav.
Not much actually changes in hwdec_vaegl.c - some code is removed, but
the reindentation inflates the diff.
If vaapi was found, but neither the old nor the new libavcodec vaapi hwaccel
API, then HAVE_VAAPI_HWACCEL will be defined, but not _OLD or _NEW. The
HAVE_VAAPI_HWACCEL define pretty much exists only for acrobatics with
our own waf dependency checker helper code.
The new API works like the new vaapi API, using generic hwaccel support.
One minor detail is the error message that will be printed when using
non-4:2:0 surfaces (which, as far as I can tell, are completely broken
in the nVidia drivers and thus not supported by mpv). The HEVC warning
(which is completely broken in the nVidia drivers but should work with
Mesa) had to be added to the generic hwaccel code.
This also trashes display preemption recovery. Fuck that. It never
really worked. If someone complains, I might attempt to add it back
somehow.
This is the 4th iteration of the libavcodec vdpau API (after the
separate decoder API, the manual hwaccel API, and the automatic vdpau
hwaccel API). Fortunately, further iterations will be generic, and not
require many vdpau-specific changes (if any at all).
I guess that's the full extent I still care about nVidia's broken
garbage. In theory, we could always force the video mixer (which is the
only method getting the video data that works), but why bother.
FFmpeg could crash with vaapi (new) and --vo=opengl + interpolation.
It seems the actual surface count the old vaapi code used (and which
usually never exceeded the preallocated amount) was higher than what
was used for the new vaapi code, so just correct that. The d3d helpers
also had weird code that bumped the real pool size, fix them as well.
Why this would result in an assertion failure instead of a proper error,
who knows.
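The gist of the fix, as a sketch (the exact count and constant name are
assumptions):

    // The pool must cover the decoder's surfaces plus everything mpv
    // itself may hold on to, e.g. queued frames for interpolation.
    AVHWFramesContext *fctx = (void *)frames_ref->data;
    fctx->initial_pool_size = decoder_surfaces + EXTRA_SURFACES;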
hw_vaapi.c didn't do anything interesting anymore. Other than the function
to create a device for decoding with vaapi-copy, everything can be done
by generic code. Other libavcodec hwaccels are planned to provide the
same API as vaapi. It will be possible to drop the other hw_ files in
the future. They will use this generic code instead.
Originally, there was probably some sort of intention to restrict it to
formats supported by the interop, or something. But in the end it was
overcomplicated nonsense.
In the future, we could use mp_hwdec_ctx.supported_formats or other
mechanisms to handle this in a better way.
mp_hwdec_ctx.ctx is now set to a dummy pointer: hwdec_devices_load() is
only used to detect whether the vo_opengl interop is present, and the
common hwdec code expects that the .ctx field is not NULL.
This also changes videotoolbox-copy to use --videotoolbox-format,
instead of the FFmpeg-set default.
The code for copying a videotoolbox surface to mp_image was duplicated
(with some minor differences - I picked the hw_videotoolbox.c version,
because it was "better"). mp_imgfmt_from_cvpixelformat() is somewhat
duplicated with the vt_formats[] table, but this will be fixed in a
later commit, and moving the function to shared code is preparation.
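A hedged sketch of the kind of mapping the shared helper performs (the
real table in mp_imgfmt_from_cvpixelformat() covers more formats):

    static int imgfmt_from_cvpixfmt(uint32_t cvpixfmt)
    {
        switch (cvpixfmt) {
        case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange:
            return IMGFMT_NV12;
        case kCVPixelFormatType_420YpCbCr8Planar:
            return IMGFMT_420P;
        case kCVPixelFormatType_32BGRA:
            return IMGFMT_BGRA;
        }
        return 0; // unknown
    }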
This should allow us to create the device in situations when
Direct3DCreate9 normally fails, for example if no user is logged in.
While the latter use case is not very interesting, I hope it works in
some other situations as well, for example while certain drivers are in
exclusive full screen mode.
This is available since Windows 7, so I'm removing the old call
completely.
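A minimal sketch of the new call (error handling elided):

    // Direct3DCreate9Ex can succeed in situations where plain
    // Direct3DCreate9 fails, e.g. when no user is logged in.
    IDirect3D9Ex *d3d9ex = NULL;
    HRESULT hr = Direct3DCreate9Ex(D3D_SDK_VERSION, &d3d9ex);
    if (FAILED(hr))
        return -1;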
The lock was disabled recently. This commit gets rid of the dummied out
calls. The main reason for removing it is that there is no apparent need
for it anymore, and the new FFmpeg vaapi code does not use or provide
such a lock (there are some places which we cannot control and which do
vaapi API calls, like frame destructors).
Apparently this is the maximum that can be preserved. There is also
something about the decoder being able to use only 3 frames at a time,
and I'm assuming these are part of the 8 frames.
This can be useful in other contexts.
Note that we end up setting AVCodecContext.width/height instead of
coded_width/coded_height now. AVCodecParameters can't set coded_width,
but this is probably more correct anyway.
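For illustration (where the dimensions come from is assumed),
AVCodecParameters only carries width/height, so that is what ends up on
the decoder context:

    AVCodecParameters *par = avcodec_parameters_alloc();
    par->width  = w;  // AVCodecParameters has no coded_width field
    par->height = h;
    avcodec_parameters_to_context(avctx, par);
    avcodec_parameters_free(&par);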
The FFmpeg versions we support all have the APIs we were checking for.
Only Libav missed them. Simplify this by explicitly checking for FFmpeg
in the code, instead of trying to detect the presence of the API.
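The usual trick for telling the two apart at build time is the micro
version, which FFmpeg deliberately keeps at 100 or above (the macro
name here is made up):

    #include <libavcodec/version.h>
    #if LIBAVCODEC_VERSION_MICRO >= 100
    #define HAVE_FFMPEG 1   // FFmpeg
    #else
    #define HAVE_FFMPEG 0   // Libav
    #endif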
Tried to decode a High 4:2:2 file, since libavcodec code seemed to
indicate that it's supported. Well, it decodes to garbage.
I couldn't find out why ffmpeg.c actually appears to reject this
correctly. The API seems to be fine with it; it's just that the output
is garbage.
Add a hack for now.
Successful decoding of a frame resets ctx->hwdec_fail_count to 0 -
which is ok, but prevents fallback if decoding fails again later and
--vd-lavc-software-fallback is set to something higher than 1.
Just fail it immediately, since failing here always indicates some real
error (or OOM), not e.g. a video parsing error or such, which we try to
tolerate via the error counter.
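As a sketch (field names assumed), the error path becomes:

    if (ret < 0) {
        // Not a normal decode error: make the fallback trigger at
        // once, regardless of --vd-lavc-software-fallback's threshold.
        ctx->hwdec_fail_count = INT_MAX;
        return -1;
    }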
Use the libavutil vdpau frame allocation code instead of our own "old"
code. This also uses its code for copying a video surface to normal
memory (used by vdpau-copy).
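The copy path boils down to the generic libavutil transfer, roughly:

    // Download the vdpau surface into a normal system-memory frame.
    AVFrame *sw = av_frame_alloc();
    if (!sw)
        return -1;
    sw->format = AV_PIX_FMT_YUV420P; // the sw format libavutil reports
    if (av_hwframe_transfer_data(sw, hw_frame, 0) < 0) {
        av_frame_free(&sw);
        return -1;
    }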
Since vdpau doesn't really have an internal pixel format, 4:2:0 can be
accessed as both nv12 and yuv420p - and libavutil prefers to report
yuv420p. The OpenGL interop has to be adjusted accordingly.
Preemption is a potential problem, but it doesn't break it more than it
already is.
This requires a bug fix to FFmpeg's vdpau code, or vdpau-copy (as well
as taking screenshots) will fail. Libav has fixed this bug ages ago.
In a way it can be reused. For now, sw_format and initial_pool_size
determination are still vaapi-specific. I'm hoping this can eventually be
moved to libavcodec in some way. Checking the supported_formats array is
not really vaapi-specific, and could be moved to the generic code path
too, but for now it would make things more complex.
hw_cuda.c can't use this, but hw_vdpau.c will in the following commit.
If hardware decoding is enabled (via any --hwdec value), the player was
printing an informational message that software decoding is used.
Basically, this confuses users, because they think there is a problem or
such. Just disable the message, it's semi-useless anyway.
This was suggested on IRC, after yet another user was asking why this
message was shown (with a follow up discussion which CPUs can decode
what kind of video codecs).
There are going to be users who have a Mesa installation which does not
support 10 bit, but a GPU which can decode to 10 bit. So it's probably
better not to hardcode whether it is supported.
Introduce a more general way to signal supported formats from renderer
to decoder. Obviously this is imperfect, because it still isn't part of
proper format negotiation (for example, what if there's a vavpp filter,
which accepts anything). Still slightly better than before.
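As a sketch of the signaling (exact field layout assumed):

    // The vo_opengl interop fills in the formats it can map; the
    // decoder then restricts its sw_format choice to this list.
    static const int formats[] = {IMGFMT_NV12, IMGFMT_P010, 0};
    ctx->hwctx.supported_formats = formats; // 0-terminated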
I don't know any way to probe for vaapi dmabuf/EGL dmabuf support
properly (in particular testing specific formats, not just general
availability). So we stay with the current approach and try to create
and map dummy surfaces on init to probe for support. Overdo it and check
all formats that AVHWFramesConstraints reports, instead of only NV12 and
P010 surfaces.
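The probing loop looks roughly like this (the two helpers are
placeholders for the actual create-and-map attempt):

    AVHWFramesConstraints *cs =
        av_hwdevice_get_hwframe_constraints(device_ref, NULL);
    for (int i = 0; cs && cs->valid_sw_formats &&
                    cs->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++)
    {
        // Try to create a dummy surface and map it via EGL/dmabuf.
        if (try_map_dummy_surface(device_ref, cs->valid_sw_formats[i]))
            mark_format_supported(cs->valid_sw_formats[i]);
    }
    av_hwframe_constraints_free(&cs);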
Since we can support unknown formats now, add explicit checks to the
EGL/dmabuf mapper code to reject unsupported formats. I also noticed
that libavutil signals support for RGB0/BGR0, but couldn't get it to
work. Remove the DRM formats that are unused/didn't work the way I tried
to use them.
With this, 10 bit decoding + rendering should work, provided you have
a capable CPU and a patched Mesa. The required Mesa patch adds support
for the R16 and GR32 formats. It was sent by a Kodi developer to the
Mesa developer mailing list and was not accepted yet.
For convenience. Since we still have code that works even if creating
an AVHWDeviceContext fails, failure is ignored. (Although currently,
creation succeeds even with the stale/abandoned vdpau wrapper driver.)
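That is, approximately:

    // Failure is deliberately not fatal - the older code paths still
    // work without an AVHWDeviceContext.
    AVBufferRef *dev = NULL;
    if (av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_VAAPI,
                               NULL, NULL, 0) < 0)
        dev = NULL; // continue without it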
Rendering support in Mesa probably doesn't exist yet. In theory it might
be possible to use VPP to convert the surfaces to 8 bit (like we do it
with dxva2/d3d11va as ANGLE doesn't support rendering 10 bit surface
either), but that too would require explicit mechanisms. This can't be
implemented either until I have a GPU with actual support.
Other hwdecs will also be able to use this (as soon as they are switched
to use AVHWFramesContext).
As an additional feature, failing to copy back the frame counts as
hardware decoding failure and can trigger fallback. This can be done
easily now, because no extra mechanism is needed to communicate this
from the hwaccel glue code to the common code.
The old code is still required for the old decode API, until we either
drop or rewrite it. vo_vaapi.c's OSD code (fuck...) also uses these
surface functions to a higher degree.