Requires a bunch of hacks:
- we access AVFilterLink.hw_frames_ctx. This is not a public API in
FFmpeg and Libav. Newer FFmpeg provides an accessor
(av_buffersink_get_hw_frames_ctx), but it's not available in Libav or
in the current FFmpeg release. We need this value after filter graph
creation, so we have no choice but to access it directly (see the
sketch below).
One alternative is making filter creation and format negotiation
fully lazy (i.e. delay it and do it as filters are output), but this
would be a huge change.
So for now, we knowingly violate FFmpeg's and Libav's ABI and API
constraints because they don't provide anything better.
On newer FFmpeg, we use the (quite ugly) accessor, though.
- mp_image_params doesn't (and can't) have a field for the frames
context AVBufferRef. So we pass it via vf_set_proto_frame(), which
requires even more hacks.
- if a filter needs a hw context, but we haven't created one yet
(because normally we create them lazily), it will fail at init.
- we allow any hw format now, although this could go horribly wrong.
Why all this effort? We could move hw deinterlacing filters etc. to
FFmpeg, which is a very worthy goal.
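A rough sketch of the accessor-vs-direct-access fallback described in
the first point above (HAVE_BUFFERSINK_HW_ACCESSOR is a hypothetical
build-system check, not an actual define):

#include <libavfilter/avfilter.h>
#include <libavfilter/buffersink.h>

static AVBufferRef *get_output_hw_frames_ctx(AVFilterContext *sink)
{
#if HAVE_BUFFERSINK_HW_ACCESSOR /* hypothetical waf/configure check */
    // Newer FFmpeg: use the (quite ugly) accessor.
    return av_buffersink_get_hw_frames_ctx(sink);
#else
    // Older FFmpeg and Libav: knowingly poke at the link directly.
    return sink->inputs[0]->hw_frames_ctx;
#endif
}
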
Instead of using the awful older "API" that passed the parameters
formatted as a string. AVBufferSrcParameters is also a prerequisite for
hardware frame filtering support.
Because it allows easier testing of filters + hwdec.
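A minimal sketch of the AVBufferSrcParameters path mentioned above (the
setup_buffersrc() helper and its arguments are illustrative, not actual
mpv code):

#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/error.h>
#include <libavutil/mem.h>

static int setup_buffersrc(AVFilterContext *src, int w, int h,
                           enum AVPixelFormat fmt, AVRational time_base,
                           AVBufferRef *hw_frames_ctx)
{
    AVBufferSrcParameters *p = av_buffersrc_parameters_alloc();
    if (!p)
        return AVERROR(ENOMEM);
    p->format = fmt;
    p->width = w;
    p->height = h;
    p->time_base = time_base;
    p->hw_frames_ctx = hw_frames_ctx; // borrowed; the call takes its own ref
    int ret = av_buffersrc_parameters_set(src, p);
    av_free(p);
    return ret;
}
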
Make the texture setup code a bit more generic so it doesn't get too
much of a mess. We also use the GL renderer utility function
gl_find_unorm_format(), which saves us additional work with OpenGL's
semi-redundant format specifiers.
Can break things quite badly.
Example: reloading a plugin linked against GTK 3.x can cause a segfault
if you call dlclose() on it. According to GTK developers, unloading the
GTK library is unsupported.
Give scripting backends a proper name, instead of calling everything
"scripts".
Log client exit directly in client.c, as that is more general (doesn't
change actual output).
If hardware decoding is enabled (via --hwdec anything), the player was
printing an informational message that software decoding is used.
Basically, this confuses users, because they think there is a problem or
such. Just disable the message, it's semi-useless anyway.
This was suggested on IRC, after yet another user was asking why this
message was shown (with a follow up discussion which CPUs can decode
what kind of video codecs).
EGL rendering + new decode API didn't work, because a certain libva bug
triggered by sort-of legacy API use was hit again: it reports the wrong
vaapi pixel format. It's old code and always NV12 anyway, so stop
worrying about it.
There are going to be users who have a Mesa installation which do not
support 10 bit, but a GPU which can decode to 10 bit. So it's probably
better not to hardcode whether it is supported.
Introduce a more general way to signal supported formats from renderer
to decoder. Obviously this is imperfect, because it still isn't part of
proper format negotiation (for example, what if there's a vavpp filter,
which accepts anything). Still slightly better than before.
I don't know any way to probe for vaapi dmabuf/EGL dmabuf support
properly (in particular testing specific formats, not just general
availability). So we stay with the current approach and try to create
and map dummy surfaces on init to probe for support. Overdo it and check
all formats that AVHWFramesConstraints reports, instead of only NV12 and
P010 surfaces.
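A sketch of that probing idea (try_map_dummy_surface() stands in for the
actual create-and-map test and is hypothetical):

#include <stdbool.h>
#include <stdio.h>
#include <libavutil/hwcontext.h>
#include <libavutil/pixdesc.h>

static void probe_mappable_formats(AVBufferRef *vaapi_device_ref,
                    bool (*try_map_dummy_surface)(enum AVPixelFormat fmt))
{
    AVHWFramesConstraints *cts =
        av_hwdevice_get_hwframe_constraints(vaapi_device_ref, NULL);
    if (!cts)
        return;
    for (int n = 0; cts->valid_sw_formats &&
                    cts->valid_sw_formats[n] != AV_PIX_FMT_NONE; n++)
    {
        enum AVPixelFormat fmt = cts->valid_sw_formats[n];
        if (try_map_dummy_surface(fmt))
            printf("can map %s\n", av_get_pix_fmt_name(fmt));
    }
    av_hwframe_constraints_free(&cts);
}
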
Since we can support unknown formats now, add explicit checks to the
EGL/dmabuf mapper code to reject unsupported formats. I also noticed
that libavutil signals support for RGB0/BGR0, but couldn't get it to
work. Remove the DRM formats that are unused/didn't work the way I tried
to use them.
With this, 10 bit decoding + rendering should work, provided you have
a capable CPU and a patched Mesa. The required Mesa patch adds support
for the R16 and GR32 formats. It was sent by a Kodi developer to the
Mesa developer mailing list, but has not been accepted yet.
For convenience. Since we still have code that works even if creating
an AVHWDeviceContext fails, failure is ignored. (Although currently,
creation even succeeds with the stale/abandoned vdpau wrapper driver.)
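A sketch of what the ignored-failure device creation amounts to (the
helper name is made up):

#include <libavutil/hwcontext.h>

static AVBufferRef *try_create_vaapi_device(void)
{
    AVBufferRef *dev = NULL;
    // Failure is tolerated; older code paths keep working without a device.
    if (av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_VAAPI,
                               NULL, NULL, 0) < 0)
        return NULL;
    return dev;
}
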
Application of options in the default section is "delayed" until the
whole config file is read in order to allow profile forward references.
This was run at the end of parsing a config file - but because of
"include" options, this means it's not always called at the end of the
main config file.
Use the recursion counter to prevent it from being processed after each
"include" option. This also gets rid of the resulting unintended
infinite recursion (which eventually stopped and failed loading the
config file) due to m_config_finish_default_profile() processing the
"include" option again.
Fixes #4024.
For surfaces allocated by libavutil, we assume that the sw_format (i.e.
in hw_subfmt in mp_image_params) is always correct. The API guarantees
that it explicitly sets the equivalent vaapi format on surface
allocation.
For surfaces allocated by mpv's old vaapi code, we explicitly retrieve
the format right after decoding. Unless the driver magically changes the
format asynchronously, it will still be correct once the surface reaches
the renderer.
In both cases, checking the format again is obviously redundant. In
addition, dropping the check means we no longer need to maintain a
libva fourcc <-> mpfmt table and the va_fourcc_to_imgfmt() function.
This also unbreaks 10 bit rendering support (still disabled by
default).
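Where the trusted format comes from in the libavutil case (sketch; the
helper is illustrative):

#include <libavutil/hwcontext.h>

static enum AVPixelFormat frames_ctx_sw_format(AVBufferRef *hw_frames_ref)
{
    AVHWFramesContext *fctx = (AVHWFramesContext *)hw_frames_ref->data;
    return fctx->sw_format; // fixed at allocation time, e.g. AV_PIX_FMT_NV12
}
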
This basically reuses the scripting infrastructure.
Note that this needs to be explicitly enabled at compilation. For one,
enabling export for certain symbols from an executable seems to be quite
toolchain-specific. It might not work outside of Linux and cause random
problems within Linux.
If C plugins actually become commonly used and this approach is starting
to turn out as a problem, we can build mpv CLI as a wrapper for libmpv,
which would remove the requirement that plugins pick up host symbols.
I'm being lazy, so implementation/documentation are parked in existing
files, even if that stuff doesn't necessarily belong there. Sue me, or
better send patches.
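A minimal sketch of such a C plugin (assuming mpv_open_cplugin() as the
entry point; build it as a shared object without linking against libmpv,
since the symbols come from the mpv binary itself):

#include <mpv/client.h>

int mpv_open_cplugin(mpv_handle *handle)
{
    // Runs on its own thread, like a scripting backend instance.
    while (1) {
        mpv_event *event = mpv_wait_event(handle, -1);
        if (event->event_id == MPV_EVENT_SHUTDOWN)
            break;
    }
    return 0;
}
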
We have to perform some silly acrobatics to make the playback_thread()
exit, as the mpv_command() will error out with MPV_ERROR_UNINITIALIZED.
Test case: mpv_terminate_destroy(mpv_create())
Reported by a user on IRC.
This does not work, because Mesa has no support for the proposed
DRM_FORMAT_R16 and DRM_FORMAT_GR16 formats. It's also untested of
course.
As long as video/decode/vaapi.c doesn't hand down P010 surfaces, this is
fine anyway.
This can be tested by removing the code that disables P010 output:
diff --git a/video/decode/vaapi.c b/video/decode/vaapi.c
--- a/video/decode/vaapi.c
+++ b/video/decode/vaapi.c
@@ -55,13 +55,6 @@ static int init_decoder(struct lavc_ctx *ctx, int w, int h)
assert(!ctx->avctx->hw_frames_ctx);
- // If we use direct rendering, disallow 10 bit - it's probably not
- // implemented yet, and our downstream components can't deal with it.
- if (!p->own_ctx && required_sw_format != AV_PIX_FMT_NV12) {
- MP_WARN(ctx, "10 bit surfaces are currently supported.\n");
- return -1;
- }
-
Rendering support in Mesa probably doesn't exist yet. In theory it might
be possible to use VPP to convert the surfaces to 8 bit (like we do it
with dxva2/d3d11va as ANGLE doesn't support rendering 10 bit surface
either), but that too would require explicit mechanisms. This can't be
implemented either until I have a GPU with actual support.
Other hwdecs will also be able to use this (as soon as they are switched
to use AVHWFramesContext).
As an additional feature, failing to copy back the frame counts as
hardware decoding failure and can trigger fallback. This can be done
easily now, because no separate way to communicate the failure from the
hwaccel glue code to the common code is needed.
The old code is still required for the old decode API, until we either
drop or rewrite it. vo_vaapi.c's OSD code (fuck...) also uses these
surface functions to a higher degree.
mp_image_hw_download() is a libavutil wrapper added in the previous
commit. We drop our own code completely, as everything is provided by
libavutil and our helper wrapper.
This breaks the screenshot code, so that has to be adjusted as well.
Makes va_surface_download() call mp_image_hw_download() for
libavutil-allocated surfaces, which in turn calls
av_hwframe_transfer_data().
mp_image_hw_download() is actually not specific to vaapi, and can be
used for any hw surface allocated by libavutil.
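Roughly what such a wrapper boils down to at the AVFrame level
(simplified sketch; the real helper also has to deal with mpv-side
image metadata):

#include <libavutil/frame.h>
#include <libavutil/hwcontext.h>

static AVFrame *hw_download_frame(AVFrame *hw_frame)
{
    AVFrame *sw = av_frame_alloc();
    if (!sw)
        return NULL;
    // libavutil picks a supported transfer format and copies the data back.
    if (av_hwframe_transfer_data(sw, hw_frame, 0) < 0 ||
        av_frame_copy_props(sw, hw_frame) < 0)
    {
        av_frame_free(&sw);
        return NULL;
    }
    return sw;
}
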
Mostly affects conversion of the colorimetric parameters.
Not changing AV_FRAME_DATA_MASTERING_DISPLAY_METADATA handling - that's
too messy, as decoders typically output it for keyframes only, and would
require weird caching that can't even be done on the level of the frame
rewrapping functions.
This fixes direct rendering with hwdec_vaegl.c.
The code duplication between update_image_params() and
mp_image_copy_fields_from_av_frame() is quite annoying,
but will have to be resolved in another commit.
AVHWDeviceContext.user_opaque is reserved to libavutil under certain
circumstances, while AVHWFramesContext.user_opaque is truly free for use
by us. It's slightly simpler too.
Dummy out the locking around all libva calls, which in theory shouldn't
be needed anyway. Two years ago, these were added to prevent crashes
with vaapi decoding and direct rendering (vo_opengl/vo_vaapi) active.
It's not clear whether these are really needed - theory strongly points
towards no. Some developers familiar with vaapi expressed similar
thoughts. But past experience says differently. So let's try
without the locking for a while and see what happens.
A recent commit added code that checks some HAVE_ symbols in this file.
No config.h include was added, so they could be unavailable and break
compilation (in practice, just --hwdec=vaapi-copy would break).
Not sure how I missed this, maybe waf defined these symbols on the
compiler command line for some reason.
The old API is deprecated, and libavcodec prints a warning at runtime.
The new API is a bit nicer and does many things for you, such as
managing the underlying hwaccel decoder. libavutil also provides code
for managing surfaces (we use their surface pool).
The new code does not contain any code from the original MPlayer VAAPI
patch (that was used as base for some of the vaapi code in mpv). Thus
the new code is LGPL.
The new API actually does not add any visible symbols, so the only way
to detect it is a version check. Of course, the versions overlap
between FFmpeg and Libav, which requires additional care. The new
API did not get merged into FFmpeg yet, so there's no check for
FFmpeg.
We usually attach some significant metadata and context to "our"
surfaces. Surfaces created by libavutil (such as we plan to do it when
using the new vaapi decode API in the following commit) don't have this
context, so e.g. copy decoding mode won't work.
Add tons of hacks to make this somehow work.
Eventually we will use libavutil's mechanisms and drop the hacks.
The display refresh rate can't be estimated correctly in some cases
and just increases until it turns off display-resample. Such cases are
off-screen rendering (different space), mpv being completely hidden
behind another window, or the Mission Control view.
This utilises the otherwise unused displaylink callback to limit the
refresh rate to the actual display refresh rate.
This flips the y-coordinate to be consistent with other platforms and
the manual. Furthermore, it fixes an unwanted behaviour of Cocoa's
convertRectFromBacking method by manually calculating the new position:
that method divided the x- and y-coordinates by the same factor as the
width and height, instead of placing the new scaled rectangle at the
same relative position as the original unscaled rectangle.
Fixes #3867
Same deal as with video. Including the EOF handling.
(It would be nice if this code were not duplicated, but right now we're
not even close to unifying the audio and video code paths.)
This is simpler and more robust, especially for the hwdec fallback case.
The most annoying issue is that C doesn't support multiple return values
(or sum types), so the decode call gets all awkward.
The hwdec fallback case does not need to try to produce some output
after the fallback anymore. Instead, it can use the normal "replay"
code path.
We invert the "eof" bool that vd_lavc.c used internally. The
receive_frame decoder API returns the inverse of EOF, because
returning "true" from the decode function if EOF was reached
feels awkward.
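For context, a similar send/receive pattern exists in libavcodec itself;
a simplified sketch (a real loop also has to re-send the packet when
avcodec_send_packet() returns EAGAIN, and handle draining/fallback):

#include <stdbool.h>
#include <libavcodec/avcodec.h>

static int decode_step(AVCodecContext *avctx, AVPacket *pkt, AVFrame *frame,
                       bool *eof)
{
    int ret = avcodec_send_packet(avctx, pkt); // pkt == NULL starts draining
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    ret = avcodec_receive_frame(avctx, frame);
    if (ret == AVERROR_EOF) {
        *eof = true;     // drained: no more frames will come
        return 0;
    }
    if (ret == AVERROR(EAGAIN))
        return 0;        // needs more input before a frame comes out
    return ret;          // 0: "frame" is valid; <0: error
}
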
Usually they happen at the same time, but conflating them is still a bit
unclean and could possibly cause problems in the future. It's also
really unnecessary.