/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
#include <sys/types.h>

#include <libswscale/swscale.h>

#include "config.h"
#include "common/msg.h"
#include "options/options.h"

#include "video/img_format.h"
#include "video/mp_image.h"
#include "vf.h"
#include "video/fmt-conversion.h"

#include "video/sws_utils.h"

#include "video/csputils.h"
#include "video/out/vo.h"

#include "options/m_option.h"

static struct vf_priv_s {
    int w, h;
    int cfg_w, cfg_h;
    int v_chr_drop;
    double param[2];
    struct mp_sws_context *sws;
    int noup;
    int accurate_rnd;
} const vf_priv_dflt = {
    0, 0,
    -1, -1,
    0,
    {SWS_PARAM_DEFAULT, SWS_PARAM_DEFAULT},
};

static int find_best_out(vf_instance_t *vf, int in_format)
{
    int best = 0;
    for (int out_format = IMGFMT_START; out_format < IMGFMT_END; out_format++) {
        if (!vf_next_query_format(vf, out_format))
            continue;
        if (sws_isSupportedOutput(imgfmt2pixfmt(out_format)) < 1)
            continue;
        if (best) {
            int candidate = mp_imgfmt_select_best(best, out_format, in_format);
            if (candidate)
                best = candidate;
        } else {
            best = out_format;
        }
    }
    return best;
}

static int reconfig(struct vf_instance *vf, struct mp_image_params *in,
                    struct mp_image_params *out)
{
    int width = in->w, height = in->h;
    int d_width, d_height;
    mp_image_params_get_dsize(in, &d_width, &d_height);

    unsigned int best = find_best_out(vf, in->imgfmt);
    int round_w = 0, round_h = 0;

    if (!best) {
        MP_WARN(vf, "no supported output format found\n");
        return -1;
    }

    vf->next->query_format(vf->next, best);

    vf->priv->w = vf->priv->cfg_w;
    vf->priv->h = vf->priv->cfg_h;

    // Values <= -8 mean: apply the corresponding rule below, but round the
    // result up to a multiple of 16.
    if (vf->priv->w <= -8) {
        vf->priv->w += 8;
        round_w = 1;
    }
    if (vf->priv->h <= -8) {
        vf->priv->h += 8;
        round_h = 1;
    }

    if (vf->priv->w < -3 || vf->priv->h < -3 ||
        (vf->priv->w < -1 && vf->priv->h < -1))
    {
        MP_ERR(vf, "invalid parameters\n");
        return -1;
    }

    // -1: use the source size, 0: use the scaled (display) size
    if (vf->priv->w == -1)
        vf->priv->w = width;
    if (vf->priv->w == 0)
        vf->priv->w = d_width;

    if (vf->priv->h == -1)
        vf->priv->h = height;
    if (vf->priv->h == 0)
        vf->priv->h = d_height;

    // -3: derive from the other dimension and the source aspect ratio,
    // -2: same, but using the scaled (display) aspect ratio
    if (vf->priv->w == -3)
        vf->priv->w = vf->priv->h * width / height;
    if (vf->priv->w == -2)
        vf->priv->w = vf->priv->h * d_width / d_height;

    if (vf->priv->h == -3)
        vf->priv->h = vf->priv->w * height / width;
    if (vf->priv->h == -2)
        vf->priv->h = vf->priv->w * d_height / d_width;

    if (round_w)
        vf->priv->w = ((vf->priv->w + 8) / 16) * 16;
    if (round_h)
        vf->priv->h = ((vf->priv->h + 8) / 16) * 16;

    // check for upscaling, now that all parameters have been applied
    if (vf->priv->noup) {
        if ((vf->priv->w > width) + (vf->priv->h > height) >= vf->priv->noup) {
            vf->priv->w = width;
            vf->priv->h = height;
        }
    }

    MP_DBG(vf, "scaling %dx%d to %dx%d\n", width, height, vf->priv->w, vf->priv->h);

    // Compute new d_width and d_height, preserving aspect
    // while ensuring that both are >= output size in pixels.
    if (vf->priv->h * d_width > vf->priv->w * d_height) {
        d_width = vf->priv->h * d_width / d_height;
        d_height = vf->priv->h;
    } else {
        d_height = vf->priv->w * d_height / d_width;
        d_width = vf->priv->w;
    }

    *out = *in;
    out->w = vf->priv->w;
    out->h = vf->priv->h;
    mp_image_params_set_dsize(out, d_width, d_height);
    out->imgfmt = best;

    // Second-guess what libswscale is going to output and what not.
    // It depends what libswscale supports for in/output, and what makes sense.
    struct mp_imgfmt_desc s_fmt = mp_imgfmt_get_desc(in->imgfmt);
    struct mp_imgfmt_desc d_fmt = mp_imgfmt_get_desc(out->imgfmt);
    // keep colorspace settings if the data stays in yuv
    if (!(s_fmt.flags & MP_IMGFLAG_YUV) || !(d_fmt.flags & MP_IMGFLAG_YUV)) {
        out->color.space = MP_CSP_AUTO;
        out->color.levels = MP_CSP_LEVELS_AUTO;
    }
    mp_image_params_guess_csp(out);

    mp_sws_set_from_cmdline(vf->priv->sws, vf->chain->opts->vo->sws_opts);
    vf->priv->sws->flags |= vf->priv->v_chr_drop << SWS_SRC_V_CHR_DROP_SHIFT;
    vf->priv->sws->flags |= vf->priv->accurate_rnd * SWS_ACCURATE_RND;
    vf->priv->sws->src = *in;
    vf->priv->sws->dst = *out;

    if (mp_sws_reinit(vf->priv->sws) < 0) {
        MP_WARN(vf, "Couldn't init libswscale for this setup\n");
        return -1;
    }
    return 0;
}

static struct mp_image *filter(struct vf_instance *vf, struct mp_image *mpi)
{
    struct mp_image *dmpi = vf_alloc_out_image(vf);
    if (!dmpi)
        return NULL;
    mp_image_copy_attributes(dmpi, mpi);

    mp_sws_scale(vf->priv->sws, dmpi, mpi);

    talloc_free(mpi);
    return dmpi;
}

static int control(struct vf_instance *vf, int request, void *data)
{
    struct mp_sws_context *sws = vf->priv->sws;

    switch (request) {
    case VFCTRL_GET_EQUALIZER:
        if (mp_sws_get_vf_equalizer(sws, data) < 1)
            break;
        return CONTROL_TRUE;
    case VFCTRL_SET_EQUALIZER:
        if (mp_sws_set_vf_equalizer(sws, data) < 1)
            break;
        return CONTROL_TRUE;
    }

    return CONTROL_UNKNOWN;
}

static int query_format(struct vf_instance *vf, unsigned int fmt)
{
    if (IMGFMT_IS_HWACCEL(fmt) || sws_isSupportedInput(imgfmt2pixfmt(fmt)) < 1)
        return 0;
    return !!find_best_out(vf, fmt);
}

static void uninit(struct vf_instance *vf)
{
}

static int vf_open(vf_instance_t *vf)
{
    vf->reconfig = reconfig;
    vf->filter = filter;
    vf->query_format = query_format;
    vf->control = control;
    vf->uninit = uninit;
    vf->priv->sws = mp_sws_alloc(vf);
    vf->priv->sws->log = vf->log;
    vf->priv->sws->params[0] = vf->priv->param[0];
    vf->priv->sws->params[1] = vf->priv->param[1];
    return 1;
}

#define OPT_BASE_STRUCT struct vf_priv_s
static const m_option_t vf_opts_fields[] = {
    OPT_INT("w", cfg_w, M_OPT_MIN, .min = -11),
    OPT_INT("h", cfg_h, M_OPT_MIN, .min = -11),
    OPT_DOUBLE("param", param[0], M_OPT_RANGE, .min = 0.0, .max = 100.0),
    OPT_DOUBLE("param2", param[1], M_OPT_RANGE, .min = 0.0, .max = 100.0),
    OPT_INTRANGE("chr-drop", v_chr_drop, 0, 0, 3),
    OPT_INTRANGE("noup", noup, 0, 0, 2),
    OPT_FLAG("arnd", accurate_rnd, 0),
    {0}
};

const vf_info_t vf_info_scale = {
    .description = "software scaling",
    .name = "scale",
    .open = vf_open,
    .priv_size = sizeof(struct vf_priv_s),
    .priv_defaults = &vf_priv_dflt,
    .options = vf_opts_fields,
};

//===========================================================================//