Add the upstream symbolic names as comments. Normally, these should be
defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and
DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland
headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge
Linux kernel headers, so this will have to do.
Also, just for completeness, add fourccs for the 3 and 4 channel
formats. I didn't manage to test them, though.
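For reference, the fallback defines boil down to something like this (the
values follow drm_fourcc.h as far as I can tell; the guards are only there
so that newer headers win):

    #include <stdint.h>

    /* Same packing as the kernel's fourcc_code() macro. */
    #ifndef fourcc_code
    #define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                                     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
    #endif

    #ifndef DRM_FORMAT_R8        /* 1 channel, e.g. the NV12 luma plane */
    #define DRM_FORMAT_R8        fourcc_code('R', '8', ' ', ' ')
    #endif
    #ifndef DRM_FORMAT_GR88      /* 2 channels, e.g. the NV12 chroma plane */
    #define DRM_FORMAT_GR88      fourcc_code('G', 'R', '8', '8')
    #endif
    #ifndef DRM_FORMAT_RGB888    /* 3 channels */
    #define DRM_FORMAT_RGB888    fourcc_code('R', 'G', '2', '4')
    #endif
    #ifndef DRM_FORMAT_ARGB8888  /* 4 channels */
    #define DRM_FORMAT_ARGB8888  fourcc_code('A', 'R', '2', '4')
    #endif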
Drastically reduces the number of hardcoded assumptions about the
layout. (Adding yuv420 support would now just be a matter of adjusting
an if, if you ignore the other problems, such as determining the hw
format early enough at all.)
Don't call eglDestroyImageKHR() on the same ID possibly more than once.
Clear the image reference on termination, or we would leak up to 1 image
per VO recreation.
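A minimal sketch of the guard this amounts to (the struct and field names
are illustrative, not the actual mpv ones):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    struct priv {
        EGLDisplay display;
        EGLImageKHR image;
        PFNEGLDESTROYIMAGEKHRPROC DestroyImageKHR;
    };

    static void unref_image(struct priv *p)
    {
        /* Destroy the image only if we actually hold one, then clear the
         * handle, so calling this again (e.g. on termination) is a no-op. */
        if (p->image != EGL_NO_IMAGE_KHR) {
            p->DestroyImageKHR(p->display, p->image);
            p->image = EGL_NO_IMAGE_KHR;
        }
    }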
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:
--vo=opengl:backend=x11egl
Should it turn out that the new interop works well, we will try to
autodetect EGL by default.
This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)
This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The Kodi code was also helpful (the magic FourCCs it uses for
EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does
not actually exist).
(This is the 3rd VAAPI GL interop that was implemented in this player.)
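For reference, the per-plane import roughly looks like this (a sketch using
the EGL_EXT_image_dma_buf_import extension; where fd/offset/pitch come from
on the VAAPI side - vaDeriveImage()/vaAcquireBufferHandle() - is glossed
over here):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Import one dmabuf plane as an EGLImage and attach it to a texture.
     * fourcc is e.g. DRM_FORMAT_R8 (luma) or DRM_FORMAT_GR88 (chroma). */
    static EGLImageKHR import_plane(EGLDisplay display, int fourcc, int w, int h,
                                    int fd, int offset, int pitch, GLuint texture)
    {
        PFNEGLCREATEIMAGEKHRPROC CreateImageKHR =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC EGLImageTargetTexture2DOES =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
                eglGetProcAddress("glEGLImageTargetTexture2DOES");

        EGLint attribs[] = {
            EGL_LINUX_DRM_FOURCC_EXT,      fourcc,
            EGL_WIDTH,                     w,
            EGL_HEIGHT,                    h,
            EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
            EGL_NONE
        };
        EGLImageKHR image = CreateImageKHR(display, EGL_NO_CONTEXT,
                                           EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
        glBindTexture(GL_TEXTURE_2D, texture);
        EGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
        return image;
    }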
The VAAPI EGL interop code will need access to the X11 Display. While
GLX could return it from the current GLX context, EGL has no such
mechanism. (At least no standard one supported by all implementations.)
So mpv makes up such a mechanism.
For internal purposes, this is a rather awkward solution, but it's
needed for libmpv anyway.
These extensions use a bunch of EGL types, so we need to include the EGL
headers in common.h to use our GL function loader with this.
In the future, we should probably require presence of the EGL headers to
reduce the hacks. This might not be so simple, at least on OSX, so for
now this has to do.
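Concretely this is just the following (the HAVE_EGL guard is an assumption
about how the build configuration is wired up):

    #if HAVE_EGL
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #endif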
Surfaces used by hardware decoding formats can be mapped exactly like a
specific software pixel format, e.g. RGBA or NV12. p->image_params is
supposed to be set to this format, but it wasn't.
(How did this ever work?)
Also, setting params->imgfmt in the hwdec interop drivers is pointless
and redundant. (Change them to asserts, because why not.)
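I.e. roughly, with NV12 as the example mapping from above:

    /* old: params->imgfmt = IMGFMT_NV12;      (redundant assignment) */
    assert(params->imgfmt == IMGFMT_NV12);  /* new: just check the invariant */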
This is a pseudo-OpenGL extension for letting libmpv query native
windowing system handles from the API user. (It uses the OpenGL
extension mechanism because I'm lazy. In theory it would be nicer to let
the user pass them with mpv_opengl_cb_init_gl(), but this would require
a more intrusive API change to extend its argument list.)
The naming of the extension and associated function was unnecessarily
Windows specific (using "D3D"), even though it would work just fine for
other platforms. So deprecate the old names and introduce new ones. The
old ones still work.
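On the API user's side this ends up looking roughly like the sketch below.
The function name ("glMPGetNativeDisplay") and the "x11" lookup key are
written from memory of the opengl_cb header; treat them as assumptions and
check the header.

    #include <string.h>
    #include <X11/Xlib.h>

    static Display *x11_display;    /* the Display the API user created */

    /* Resolver for the pseudo-extension: mpv passes a name like "x11" and
     * gets the corresponding native display handle back. */
    static void *get_native_display(const char *name)
    {
        if (strcmp(name, "x11") == 0)
            return x11_display;
        return NULL;
    }

    /* The get_proc_address callback given to mpv_opengl_cb_init_gl(); the
     * pseudo-extension's entry point is resolved like any other GL symbol. */
    static void *my_get_proc_address(void *fn_ctx, const char *name)
    {
        if (strcmp(name, "glMPGetNativeDisplay") == 0)
            return (void *)get_native_display;
        return NULL;  /* real code would defer to glXGetProcAddress() etc. */
    }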
This turns the old scalers (inherited from MPlayer) into a pre-
processing step (after color conversion and before scaling). The code
for the "sharpen5" scaler is reused for this.
The main reason MPlayer implemented this as scalers was perhaps because
FBOs were too expensive, and making it a scaler allowed implementing it
in a single pass. But unsharp masking is not really a scaler, and I would
guess the result is more like combining bilinear scaling and unsharp
masking.
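Per sample, the operation is simply this (plain C, not the actual shader):

    /* Unsharp masking: push the center sample away from a local average
     * ("blur"). strength > 0 sharpens; 0 is a no-op. */
    static float unsharp_sample(float center, float blur, float strength)
    {
        return center + (center - blur) * strength;
    }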
Usually, libavcodec ignores errors reported by the hardware decoding
API, so it's not like we can actually escape if the hardware is somehow
acting up.
For normal fallback purposes, or if the parts of the hw decoding API that
we actually check fail, we do this by setting and checking the
hwdec_failed flag anyway.
This can happen if the hw decoder allocates padded surfaces (e.g.
mod16), but the VPP output surface was allocated with the exact size.
Apparently VPP requires matching input and output sizes, or it will add
artifacts. In this case, it added mirrored pixels to the bottom few
lines.
Note that the previous commit should have fixed this. But it didn't
work, while this commit does.
Fixes #2320.
If not set, VPP will use the whole surface. This is a problem if the
surfaces are padded, and especially if the surfaces are padded by
different amounts.
This is an attempt to fix #2320, but it appears to do nothing at all.
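For reference, setting the regions amounts to something like this (libva's
video processing API; the surrounding pipeline setup is omitted, and w/h
are the visible frame size):

    #include <va/va.h>
    #include <va/va_vpp.h>

    /* Restrict VPP to the visible part of a possibly padded surface. */
    static void setup_vpp_regions(VAProcPipelineParameterBuffer *param,
                                  VARectangle *crop, int w, int h,
                                  VASurfaceID in_surface)
    {
        *crop = (VARectangle){ .x = 0, .y = 0, .width = w, .height = h };
        param->surface        = in_surface;
        param->surface_region = crop;   /* which part of the input to read */
        param->output_region  = crop;   /* where to write it in the output */
    }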
The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the case when the hw decoder initialization succeeded (in
get_format), but get_buffer2 for some reason requests something
unexpected, we can fall back more gracefully and in the same way.
Window classes are process-wide (or at least DLL-wide), so you can't
have 2 classes with the same name. Our code attempted to do this when
for example 2 libmpv instances were created within the same process.
This failed, because RegisterClassEx() fails if the class already
exists.
Fix this by ignoring RegisterClassEx() errors. If the class can really
not be registered, we will fail on CreateWindowEx() instead. Of course
we also can't unregister the class, as another thread might be using it.
Windows will free the class automatically if the DLL is unloaded or the
process terminates.
Fixes #2319 (hopefully).
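A self-contained sketch of the approach (plain Win32; mpv's real window
procedure and class name are not shown):

    #include <windows.h>

    static HWND create_player_window(HINSTANCE hinst)
    {
        WNDCLASSEXW wc = {
            .cbSize        = sizeof(wc),
            .lpfnWndProc   = DefWindowProcW,
            .hInstance     = hinst,
            .lpszClassName = L"mpv",
        };
        /* May fail if another instance in this process already registered
         * the class - that's fine, we simply reuse it. */
        RegisterClassExW(&wc);

        /* If the class really could not be registered, the error shows up
         * here. The class is never unregistered; Windows frees it when the
         * DLL is unloaded or the process terminates. */
        return CreateWindowExW(0, L"mpv", L"mpv", WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, CW_USEDEFAULT,
                               CW_USEDEFAULT, CW_USEDEFAULT,
                               NULL, NULL, hinst, NULL);
    }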
It's possible to build vf_vapoursynth with either the Python or Lua
backend (or both or none). The check for the vapoursynth core itself was
hidden away and couldn't be disabled, which would link mpv with
vapoursynth even if all backends were disabled.
Rearrange the checks so that the core will be disabled if no backend is
found (or both are disabled). This duplicates the check for
vapoursynth.pc, but since it's trivial, this is not that bad.
Caused by one of the --force-window commits. The unconditional
uninit_video_out() call (which normally should be idempotent) raised
sporadic MPV_EVENT_VIDEO_RECONFIG events. This is ok, except for the
fact that clients (like a Lua script or libmpv users) would cause the
event loop to run again after receiving it, triggering a feedback loop.
Fix it by sending the events really only on a change.
Sigh... After the recent changes, another regression appeared. This
time, the VO window wasn't cleared when changing from video to a non-
video file (such as audio-only with no cover art). Fix this by properly
taking the handle_force_window() bool parameter into account.
Also, the info message could be printed twice, which is harmless but
ugly. So just remove the message.
Also, do some more minor cleanups (like fixing the comment, which was
completely outdated).
If --force-window wasn't used, this would destroy the VO while a file
is still being loaded, resulting in flicker and other interruptions
when switching from one playlist entry to another. Recent regression.
The condition used here is pretty tricky, but it boils down to this: it
should trigger either in idle mode, or once loading has fully finished
(at these points we definitely know whether the VO will be needed).
This was in sub/, because the code used to be specific to subtitles. It
was extended to automatically load external audio files too, and moving
the file and renaming it was long overdue.
The previous commit was incomplete (and I didn't notice due to a broken
test procedure).
The annoying part is that actually creating the VO was separate; redo
this and merge the code for this into handle_force_window() as well.
This will also make implementing proper reaction to runtime option
changes easier. (Only the part for actually listening to option changes
is missing.)
This is a bad hack; the correct way to handle this would be implementing
profiles differently, and then listen to option changes and act on them
dynamically.
This flag was used by some filters and made sure none of these filters
were inserted twice. This triggers only if the user explicitly tries to
add multiple filters (and not e.g. due to auto-insertion), so at best
this warned the user against doing something potentially pointless. At
worst, it blocked some (mildly) legitimate use-cases. Get rid of it.
Also see #2322.
I took this out because I thought the filter chain would auto-negotiate
using nv12 without the explicit hint, and it does in the basic case
with no intermediate filter, but once you start adding filters, it
can end up negotiating a different format and then failing.
Today, vdpaurb will fail if it's used with non-hardware-decoded
content. This creates work for the user, as they have to explicitly
add it or not add it, depending on the content.
As an improvement, we can make vdpaurb pass through any frames
that aren't hardware decoded, so that it can always be present in the
filter chain, if desired.
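The pass-through itself is tiny; roughly (using mpv's internal vf API from
video/filter/vf.h - the readback helper here is hypothetical):

    static int filter_ext(struct vf_instance *vf, struct mp_image *mpi)
    {
        if (mpi->imgfmt != IMGFMT_VDPAU) {
            /* Not hardware-decoded: forward the frame unchanged instead of
             * failing, so vdpaurb can stay in the chain unconditionally. */
            vf_add_output_frame(vf, mpi);
            return 0;
        }
        /* Otherwise read the VDPAU surface back to system memory as before. */
        return readback_and_output(vf, mpi);
    }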
I see no point in keeping these around. Keeping wrappers for some select
libavfilter filters just because MPlayer had these filters is not a good
reason.
Ultimately, all real filtering work should go to libavfilter, and users
should get used to using vf_lavfi directly. One day we might not even
require the awful double-nested syntax for using libavfilter.
vf_rotate, vf_yadif, vf_stereo3d are kept because mpv uses them
internally. (They all extend the lavfi filters or change their
defaults.) vf_mirror is kept for symmetry with vf_flip. vf_gradfun and
vf_pullup are probably semi-popular, so I won't remove them yet - only
after some more discussion.
The reason MPlayer traditionally duplicated them all over the place is
that it wanted every component to be a self-contained library (e.g.
audio filters were in "libaf"). But this is not necessarily helpful, and
this change makes the following commit a bit simpler.
The "PWD" enviornment variable is described by POSIX. We don't go to
length to verify its contents, but just trust it.
This affects the logic for resuming playback.
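In effect (a trivial sketch; the getcwd() fallback is an assumption about
the surrounding code):

    #include <stdlib.h>
    #include <unistd.h>

    static const char *get_logical_cwd(char *buf, size_t size)
    {
        /* POSIX says the shell maintains PWD as the logical working
         * directory; we trust it without verifying that it really names
         * the current directory. */
        const char *pwd = getenv("PWD");
        if (pwd && pwd[0])
            return pwd;
        return getcwd(buf, size);
    }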
* (de)planarize     -1
* pad 1 byte        -8
* truncate 1 byte   -1024
* float -> int      -1048576 * (8 - dst_bytes)
* int -> float      -512
Now the score is negative if and only if the conversion is lossy
(e.g. previously, s24 -> float was given a negative (lossy) score).
However, int -> float is still considered bad
(s16 -> float is worse than s16 -> s32).
This penalizes any loss of precision more than performance / bandwidth hits.
For example, previously s24->s16p was considered equal to s24->u8.
Finally, we penalize padding more than (de)planarizing, as it will
increase the output size (for example with ao_lavc).
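Put into code, the adjustments amount to roughly the following (a
self-contained sketch, not the actual scoring function; the per-byte
scaling of pad/truncate is my reading of the list, and in the real code
these penalties are applied to a positive base score, which is what makes
only the lossy cases end up negative):

    #include <stdbool.h>

    struct sample_fmt {
        int  bytes;     /* bytes per sample: 2 for s16, 3 for s24, 4 for float */
        bool is_float;
        bool planar;
    };

    static int conversion_penalty(struct sample_fmt src, struct sample_fmt dst)
    {
        int penalty = 0;
        if (src.planar != dst.planar)
            penalty -= 1;                                   /* (de)planarize */
        if (!src.is_float && !dst.is_float) {
            if (dst.bytes > src.bytes)
                penalty -= 8 * (dst.bytes - src.bytes);     /* pad */
            if (dst.bytes < src.bytes)
                penalty -= 1024 * (src.bytes - dst.bytes);  /* truncate */
        }
        if (src.is_float && !dst.is_float)
            penalty -= 1048576 * (8 - dst.bytes);           /* float -> int */
        if (!src.is_float && dst.is_float)
            penalty -= 512;                                  /* int -> float */
        return penalty;
    }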