The aspect ratio calculations are cached (mainly so that aspect ratio
related messages are not logged on every frame). The cache is no longer
cleared when video filters are reconfigured, but changing the
video-aspect-ratio property relied on that clearing. Make the cache reset
explicit.
Fixes #2714.
Lots of noise to remove the vfilter/vo fields from dec_video.
From now on, video filtering and output will still be done together,
summarized under struct vo_chain.
There is the question where exactly the vf_chain should go in such a
decoupled architecture. The end goal is being able to place a "complex"
filter between video decoders and output (which will culminate in
natural integration of A->V filters for libavfilter audio
visualizations). The vf_chain is still useful for
"final" processing, such as format conversions and deinterlacing. Also,
there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems
ideal, since otherwise there would be no natural way to handle all these
existing options and mechanisms.
There is still some work required to truly decouple decoding.
Instead of handling this on filter chain reinit, do it directly after
the decoder. This makes the code less entangled. In particular, this
gets rid of the really weird "override params" concept in the video
filter code.
The last_format/fixed_formats fields have some redundancy with
decoder_output, but unfortunately the latter has a slightly different use.
This is mainly a refactor. I'm hoping it will make some things easier
in the future due to cleanly separating codec metadata and stream
metadata.
Also, declare that the "codec" field can no longer be NULL. demux.c
will set it to "" if it's NULL when the stream is added. This gets rid of
a corner case everything had to handle, but which rarely happened.
AVComponentDescriptor.offset was introduced relatively recently. On
older releases, you have to use AVComponentDescriptor.offset_plus1,
which is now deprecated.
Instead of adding ifdeffery, assume AV_PIX_FMT_NV21 is the only format
for which this applies (and will remain the only case), which is
probably true enough.
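A hedged sketch of how that special case could look (the helper and its
scope are illustrative, not the actual mpv code):
#include <stdbool.h>
#include <libavutil/pixfmt.h>

// Byte offset of a chroma component within the interleaved chroma plane
// of NV formats: NV12 stores U,V and NV21 stores V,U, so only NV21 needs
// the swapped offsets.
static int chroma_offset(enum AVPixelFormat fmt, bool is_v)
{
    bool swapped = fmt == AV_PIX_FMT_NV21;
    return (is_v != swapped) ? 1 : 0;
}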
Should take care of the planned FFmpeg AV_PIX_FMT_P010 addition. (This
will eventually be needed when doing HEVC Main 10 decoding with DXVA2
copyback.)
A format could declare that some or all LSBs in a component are padding
bits by setting a non-0 AVComponentDescriptor.shift value. This means we
would interpret it incorrectly, because until now we always assumed all
regular formats have the padding in the MSBs.
Not a single format that does this actually exists yet, though. But an
NV12 variant will be added to FFmpeg later.
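For illustration, what a non-zero shift means when reading a raw sample
(a hedged sketch; the P010-style layout mentioned above stores 10
significant bits above 6 padding LSBs in a 16 bit word):
#include <stdint.h>

// The significant bits sit above 'shift' padding LSBs; shift them down,
// then mask to 'depth' bits.
static unsigned read_component(uint16_t raw, int depth, int shift)
{
    return (raw >> shift) & ((1u << depth) - 1);
}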
Commit 3909e4cd ended up losing the ability to tune the gaussian window,
which this commit trivially reintroduces.
The constant scaling factor (present in the code copied from Glumpy)
also goes against the filter_kernels.c convention that f(0.0) = 1 (the
invoking code takes care of normalization), so it has been removed.
A given parameter value now selects a different gaussian function than
the same value did with the old version. Translating an old value p1 to
a new value p2 requires solving 2^(e/p1) = e^(2/p2), i.e.
p2 = p1 * 2/(e * ln(2)) ≈ p1 * 1.0615
In other words, getting the old default with the new function requires
setting scale-param1 to 1.0615. (The new function is *slightly* sharper
by default.)
(Though most users should probably not notice the change.)
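For reference, a small numeric check of this translation, using the two
forms implied by the equation above (old: 2^(-(e/p1) * x^2), new:
exp(-(2/p2) * x^2)); this is an illustration, not the literal
filter_kernels.c code:
#include <math.h>
#include <stdio.h>

#ifndef M_E
#define M_E 2.718281828459045
#endif

int main(void)
{
    double p1 = 1.0;                          // old default
    double p2 = p1 * 2.0 / (M_E * log(2.0));  // approx. 1.0615
    for (double x = 0.0; x <= 2.0; x += 0.5) {
        double old_w = pow(2.0, -(M_E / p1) * x * x);
        double new_w = exp(-(2.0 / p2) * x * x);
        printf("x=%.1f old=%f new=%f\n", x, old_w, new_w);
    }
    return 0;
}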
To get a uniform license for this file, relicense the mpv parts to BSD
as well.
But leave the door open for a later change to LGPL. (All non-Glumpy code
was written within mpv, and all mpv authors have agreed to LGPL
relicensing.)
Closes #2688.
This file claims to be based on the "MPlayer VA-API patch", but this is
untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue
code was never in MPlayer or the MPlayer VA-API patch in any form; it
was instead part of the mpv-original way we do hardware decoding OpenGL
interop. The EGL interop method didn't exist at the time the MPlayer
VA-API patch was created either.
This commit replaces code based on AGG, taken from this source file:
http://vector-agg.cvs.sourceforge.net/viewvc/vector-agg/agg-2.5/include/agg_image_filters.h
The intention is that filter_kernels.c can be relicensed to LGPL or BSD.
Because the AGG author died, full replacement is the only way to achieve
it.
This affects only some filter functions. These are exclusively
mathematical functions for computing filter coefficients. (Other parts
in filter_kernels.c were originally written by me, with heavy additions
and refactoring done by other mpv contributors.) While the code is
mostly just well-known mathematical formulas written down in C form,
AGG copyright could perhaps be claimed anyway.
To remove the AGG code, I replaced it with the filter functions from:
https://github.com/glumpy/glumpy/blob/master/glumpy/library/build-spatial-filters.py
These functions conveniently compute exactly the same thing in mpv,
Glumpy, AGG (and just about anything that will filter images using the
same mathematical principles).
First I ported the Python code in the file to C. Then I replaced all
functions in filter_kernels.c that could be replaced with this ported code.
Then I investigated whether the remaining functions were based on AGG
code and took appropriate action:
hanning(), hamming(), quadric(), bicubic(), kaiser(), blackman(),
spline16(), spline36(), gaussian(), sinc() were taken straight from
Glumpy.
For sinc(), re-add the "fabs(x) < 1e-8" check, which was added in commit
586dc557 for unknown reasons.
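For illustration, roughly what the ported sinc() with that check looks
like (signature simplified; the real function also takes mpv's internal
filter parameter struct):
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static double sinc(double x)
{
    if (fabs(x) < 1e-8)   // avoid 0/0 at x = 0
        return 1.0;
    x *= M_PI;
    return sin(x) / x;
}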
gaussian() loses its filter parameter for some reason. (Well, who cares,
not my problem.)
The really awkward thing is that the text for hanning() and hamming()
does not change. In theory these functions are now based on Glumpy code,
but it seems like this can be neither proven nor denied. (The same
happened in some other cases with at least a few lines of code.)
sphinx() was added in commit 586dc557, and looks suspiciously like
sinc() as well. Replace the first 3 lines of the body with the ported
function (of which 2 lines do not change; the first uses code only in
mpv, and the second is just "return 1.0;"). The 4th line is only similar
on an abstract level (and only because of the mathematical relation
between these functions). Although the original sinc() was probably used
as a template for it, with the other lines replaced, I don't think you
could make the claim that it falls under AGG copyright.
jinc() was added in commit 26baf5b9, but the code for it might be based
on sinc(). Rewrite it based on the "new" sinc(). Some of the same
remarks as with sphinx() apply.
cubic_bc() was ported from Glumpy's Mitchell(). (As far as I'm aware,
with the default parameters it's called "the" Mitchell-Netravali filter,
but in mpv this function is used to generate a whole group of filters.)
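As a hedged sketch of what this family of functions computes (the
standard BC-spline form; the real cubic_bc() takes its parameters from
mpv's filter parameter struct, and x is assumed non-negative here):
// b = c = 1/3 gives the Mitchell filter; other (b, c) pairs give the
// related filters mpv derives from this function.
static double cubic_bc(double b, double c, double x)
{
    double x2 = x * x, x3 = x2 * x;
    if (x < 1.0)
        return ((12.0 - 9.0 * b - 6.0 * c) * x3 +
                (-18.0 + 12.0 * b + 6.0 * c) * x2 +
                (6.0 - 2.0 * b)) / 6.0;
    if (x < 2.0)
        return ((-b - 6.0 * c) * x3 +
                (6.0 * b + 30.0 * c) * x2 +
                (-12.0 * b - 48.0 * c) * x +
                (8.0 * b + 24.0 * c)) / 6.0;
    return 0.0;
}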
spline64() was added in commit a8b67c66, and was probably derived from
spline36(). Re-derive it from the "new" spline36().
triangle() could be considered derived from the original bilinear().
This is what it looked like in the original commit:
static double bilinear(kernel *k, double x)
{
    return 1.0 - x;
}
This _might_ be based on AGG's image_filter_bilinear:
struct image_filter_bilinear
{
    static double radius() { return 1.0; }
    static double calc_weight(double x)
    {
        return 1.0 - x;
    }
};
Considering that the "framework" was written by me, and the only part
taken from AGG is "return 1.0 - x;", and this part is trivial and was
later thoroughly replaced, this is probably not under the AGG copyright.
I'm hoping this doesn't introduce regressions. But the main focus here
is not productivity anyway, and I didn't rigorously check for unintended
changes in functionality.
Apparently, the firmware will ignore pixel_x/pixel_y if their numeric
values get too high (even if they indicate a square pixel aspect
ratio). Even worse, the destination rectangle is ignored completely,
and the video frame is simply stretched to the screen. I suspect this
is an overflow or weird sanity check within the firmware.
Work it around by limiting the fields to 16000, which is an arbitrary
but apparently working limit.
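A minimal sketch of the workaround (field names follow the description
above, not necessarily the actual MMAL structures):
#include <stdint.h>

// Halve both terms of the pixel aspect fraction until they fit under the
// empirically found limit; the ratio itself stays essentially the same.
static void limit_par(uint32_t *pixel_x, uint32_t *pixel_y)
{
    while ((*pixel_x > 16000 || *pixel_y > 16000) &&
           *pixel_x > 1 && *pixel_y > 1) {
        *pixel_x >>= 1;
        *pixel_y >>= 1;
    }
}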
GLSL in GLES 2.0 did not have line continuation in its preprocessor.
This broke shader compilation. It also broke subtitle rendering in
vo_rpi, which reuses some of the OpenGL code.
Line continuation was finally added in GLES 3.0, which is perhaps the
reason why ANGLE accepted it.
Untested, but should be fine. Broken by commit 0a0bb905.
Also fix the include statement in context_rpi.c, which caused another
compilation failure. Also untested. (Because I'm lazy.)
Fixes #2638.
This removes the need to define IMGFMT_GBRAP, which fixes compilation
with the current Libav release.
This also makes it automatically pick up a GBRP format with the same bit
width. (Unfortunately, it seems libswscale does not support conversion
to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, which loses
precision for video covered by subtitles in cases where this code is used.)
Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
Whether this is good or bad, it fixes behavior with alpha, although I'm
not sure if the alpha range is really correct ([0,2^16-1] vs.
[0,255*256]). Keep in mind that libswscale doesn't even agree with the
way we do it.
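To illustrate the two conventions in question (a hedged example, not the
actual conversion code):
#include <stdint.h>

// Upsampling an 8 bit alpha value to 16 bit:
static uint16_t alpha_shifted(uint8_t a) { return (uint16_t)(a << 8); }  // 255 -> 65280 (255*256)
static uint16_t alpha_full(uint8_t a)    { return (uint16_t)(a * 257); } // 255 -> 65535 (2^16-1)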
This actually treats destination alpha correctly, and gives much better
results than before. I don't know if this is perfectly correct yet,
though. A slight difference from vo_opengl's behavior suggests it might
not be.
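For reference, the standard "over" operator with destination alpha, i.e.
the kind of calculation meant here (normalized, straight alpha; a hedged
sketch rather than the actual code):
// src over dst, all values in [0,1], non-premultiplied alpha
static void blend_over(double sc, double sa, double dc, double da,
                       double *out_c, double *out_a)
{
    *out_a = sa + da * (1.0 - sa);
    *out_c = *out_a > 0 ? (sc * sa + dc * da * (1.0 - sa)) / *out_a : 0;
}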
Note that this does not affect VOs with true alpha support. vo_opengl
does not use this code at all, and does the alpha calculations in OpenGL
instead.
gcc 4.8 does not support C11 thread local storage. This is a bit
annoying, so add a hack to use the gcc specific __thread extension if
C11 TLS is not available.
(This is used for the extremely silly mpv-internal way hwdec modules
access some platform specific handles. Disabling it simply made
hwdec_vaegl.c always fail initialization.)
Fixes #2631.
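A minimal sketch of the fallback (HAVE_C11_TLS stands in for whatever
feature test the build system provides; the macro and variable names are
illustrative):
#if HAVE_C11_TLS
#define MP_THREAD_LOCAL _Thread_local
#elif defined(__GNUC__)
#define MP_THREAD_LOCAL __thread
#else
#error "no thread-local storage available"
#endif

// e.g. the per-thread handle a hwdec module might stash:
static MP_THREAD_LOCAL void *current_gl_context;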
Add a "blend-tiles" choice to the "alpha" sub-option. This is pretty
simplistic and uses the GL raster position to derive the tiles. A weird
consequence is that using --vo=opengl and --vo=opengl-hq gives different
scaling behavior (16x16 tiles sized in screenspace pixels vs. in source
video pixels), but it seems we don't have easy access to the original
texture coordinates. Using the rasterpos is probably simpler.
Make this option the default.
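Roughly the pattern being generated, as a hedged sketch (the real code
derives it from the raster position in the fragment shader; the gray
levels here are illustrative):
// Checkerboard of 16x16 tiles derived from a (screenspace) pixel position.
static float tile_background(int x, int y)
{
    int on = ((x / 16) + (y / 16)) & 1;
    return on ? 1.0f : 0.75f;
}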
This is for the sake of command.c and the "deinterlace" option/property.
Instead of forcing certain "better" defaults when inserting yadif,
change the actual "yadif" defaults.
I pondered not changing vf_yadif, and instead adding a trivial "yadif-
auto" wrapper filter, which would merely have different defaults. But
thinking about it, it doesn't make any sense for "deinterlace" to have
different defaults from vf_yadif, with vf_yadif having the "worse"
defaults. If someone wants the old behavior, the old behavior can be
forced in a backward and forward compatible way by setting the
suboptions.
Fixes #2539 (kind of).
long is 64 bits on x86_64 on Linux, which means the corner-case check
used when computing the depth mask is wrong.
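A hedged sketch of a mask computation that handles the corner case
regardless of the width of long (illustrative, not the actual code):
// Build a mask with 'depth' low bits set, without shifting by the full
// width of the type (which would be undefined behavior).
static unsigned long depth_mask(int depth)
{
    if (depth >= (int)(sizeof(unsigned long) * 8))
        return ~0UL;
    return (1UL << depth) - 1;
}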
Also, X11 compositors seem to expect premultiplied alpha.
MPlayer traditionally always used the display aspect ratio, e.g. 16:9,
while FFmpeg uses the sample (aka pixel) aspect ratio.
Both have a bunch of advantages and disadvantages. Actually, it seems
using sample aspect ratio is generally nicer. The main reason for the
change is making mpv closer to how FFmpeg works in order to make life
easier. It's also nice that everything uses integer fractions instead
of floats now (except for the --video-aspect option/property).
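A hedged example of the difference, using libavutil's av_reduce() to get
the integer fraction (the numbers are the classic PAL anamorphic case,
not something taken from this change):
#include <limits.h>
#include <libavutil/rational.h>

static void example_dar_to_sar(void)
{
    // A 720x576 image with a 16:9 display aspect ratio:
    // SAR = DAR * height / width = (16*576) : (9*720), which reduces to 64:45.
    int sar_num, sar_den;
    av_reduce(&sar_num, &sar_den, 16 * 576, 9 * 720, INT_MAX);
}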
Note that there is at least 1 user-visible change: vf_dsize now does
not set the display size, only the display aspect ratio. This is
because the image_params d_w/d_h fields did not just set the display
aspect, but also the size (except in encoding mode).
Since alpha isn't pulled through the colormatrix (maybe it should?), we
reject alpha formats with odd sizes, such as yuva444p10.
But the awful tex_mul path in vo_opengl does this anyway (at some points
even explicitly), which means there will be a subtle difference in
handling of 16 bit yuv alpha formats. Make it consistent and always
apply the range adjustment to the alpha component. This also means odd
sizes like 10 bit are supported now.
This assumes alpha uses the same "shifted" range as the yuv color
channels for depths larger than 8 bit. I'm not sure whether this is
actually the case.
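(As a hedged numeric illustration of what that assumption implies: with
10 bit depth, a "shifted" full-opacity alpha would be 255 << 2 = 1020,
while the texture maximum corresponds to 1023, so the range adjustment
scales by 1023/1020, about 1.003, to land exactly on 1.0.)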
Now common.c only contains the code for the function loader, while
context.c contains the backend loader/dispatcher.
Not calling it "backend.c", because the central struct is called
MPGLContext.
This is used for dithering, although I'm not aware of anyone who got
higher than 8 bit depth support to work on Linux.
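A hedged sketch of what requesting such a surface could look like
(attribute values illustrative; whether a driver actually hands out a
10 bit config is another matter):
#include <EGL/egl.h>

static EGLConfig choose_10bit_config(EGLDisplay display)
{
    static const EGLint attribs[] = {
        EGL_RED_SIZE, 10,
        EGL_GREEN_SIZE, 10,
        EGL_BLUE_SIZE, 10,
        EGL_ALPHA_SIZE, 2,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_NONE,
    };
    EGLConfig config = NULL;
    EGLint num = 0;
    if (!eglChooseConfig(display, attribs, &config, 1, &num) || !num)
        return NULL;
    return config;
}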
Also put this into egl_helpers.c. Since EGL is pseudo-portable at best,
I have no hope that the EGL context creation code in all the backends can
be fully shared. But some self-contained functionality can definitely be
shared.