/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <limits.h>
#include <assert.h>

#include <libavutil/mem.h>
#include <libavutil/common.h>
#include <libavutil/display.h>
#include <libavutil/dovi_meta.h>
#include <libavutil/bswap.h>
#include <libavutil/hwcontext.h>
#include <libavutil/intreadwrite.h>
#include <libavutil/rational.h>
#include <libavcodec/avcodec.h>
#include <libavutil/mastering_display_metadata.h>
#include <libplacebo/utils/libav.h>

#include "mpv_talloc.h"

#include "common/av_common.h"
#include "common/common.h"
#include "fmt-conversion.h"
#include "hwdec.h"
#include "mp_image.h"
#include "osdep/threads.h"
#include "sws_utils.h"
#include "out/placebo/utils.h"

// Determine strides, plane sizes, and total required size for an image
// allocation. Returns total size on success, <0 on error. Unused planes
// have out_stride/out_plane_size set to 0, and out_plane_offset set to -1 up
// until MP_MAX_PLANES-1.
static int mp_image_layout(int imgfmt, int w, int h, int stride_align,
                           int out_stride[MP_MAX_PLANES],
                           int out_plane_offset[MP_MAX_PLANES],
                           int out_plane_size[MP_MAX_PLANES])
{
    struct mp_imgfmt_desc desc = mp_imgfmt_get_desc(imgfmt);

    w = MP_ALIGN_UP(w, desc.align_x);
    h = MP_ALIGN_UP(h, desc.align_y);

    struct mp_image_params params = {.imgfmt = imgfmt, .w = w, .h = h};

    if (!mp_image_params_valid(&params) || desc.flags & MP_IMGFLAG_HWACCEL)
        return -1;

    // Note: for non-mod-2 4:2:0 YUV frames, we have to allocate an additional
    // top/right border. This is needed for correct handling of such
    // images in filter and VO code (e.g. vo_vdpau or vo_gpu).

    for (int n = 0; n < MP_MAX_PLANES; n++) {
        int alloc_w = mp_chroma_div_up(w, desc.xs[n]);
        int alloc_h = MP_ALIGN_UP(h, 32) >> desc.ys[n];
        int line_bytes = (alloc_w * desc.bpp[n] + 7) / 8;
        out_stride[n] = MP_ALIGN_NPOT(line_bytes, stride_align);
        out_plane_size[n] = out_stride[n] * alloc_h;
    }
    if (desc.flags & MP_IMGFLAG_PAL)
        out_plane_size[1] = AVPALETTE_SIZE;

    int sum = 0;
    for (int n = 0; n < MP_MAX_PLANES; n++) {
        out_plane_offset[n] = out_plane_size[n] ? sum : -1;
        sum += out_plane_size[n];
    }

    return sum;
}

// Return the total size needed for an image allocation of the given
// configuration (imgfmt, w, h must be set). Returns -1 on error.
// Assumes the allocation is already aligned on stride_align (otherwise you
// need to add padding yourself).
int mp_image_get_alloc_size(int imgfmt, int w, int h, int stride_align)
{
    int stride[MP_MAX_PLANES];
    int plane_offset[MP_MAX_PLANES];
    int plane_size[MP_MAX_PLANES];
    return mp_image_layout(imgfmt, w, h, stride_align, stride, plane_offset,
                           plane_size);
}

// Fill the mpi->planes and mpi->stride fields of the given mpi with data
// from buffer according to the mpi's w/h/imgfmt fields. See mp_image_from_buffer
// for remarks on how to allocate/use buffer/buffer_size.
// This does not free the data. You are expected to set up refcounting by
// setting mp_image.bufs before or after this function is called.
// Returns true on success, false on failure.
static bool mp_image_fill_alloc(struct mp_image *mpi, int stride_align,
                                void *buffer, int buffer_size)
{
    int stride[MP_MAX_PLANES];
    int plane_offset[MP_MAX_PLANES];
    int plane_size[MP_MAX_PLANES];
    int size = mp_image_layout(mpi->imgfmt, mpi->w, mpi->h, stride_align,
                               stride, plane_offset, plane_size);
    if (size < 0 || size > buffer_size)
        return false;

    int align = MP_ALIGN_UP((uintptr_t)buffer, stride_align) - (uintptr_t)buffer;
    if (buffer_size - size < align)
        return false;
    uint8_t *s = buffer;
    s += align;

    for (int n = 0; n < MP_MAX_PLANES; n++) {
        mpi->planes[n] = plane_offset[n] >= 0 ? s + plane_offset[n] : NULL;
        mpi->stride[n] = stride[n];
    }

    return true;
}

// Create a mp_image from the provided buffer. The mp_image is filled according
// to the imgfmt/w/h parameters, and respecting the stride_align parameter to
// align the plane start pointers and strides. Once the last reference to the
// returned image is destroyed, free(free_opaque, buffer) is called. (Be aware
// that this can happen from any thread.)
// The allocated size of buffer must be given by buffer_size. buffer_size should
// be at least the value returned by mp_image_get_alloc_size(). If buffer is not
// already aligned to stride_align, the function will attempt to align the
// pointer itself by incrementing the buffer pointer until the alignment is
// achieved (if buffer_size is not large enough to allow aligning the buffer
// safely, the function fails). To be safe, you may want to overallocate the
// buffer by stride_align bytes, and include the overallocation in buffer_size.
// Returns NULL on failure. On failure, the free() callback is not called.
struct mp_image *mp_image_from_buffer(int imgfmt, int w, int h, int stride_align,
                                      uint8_t *buffer, int buffer_size,
                                      void *free_opaque,
                                      void (*free)(void *opaque, uint8_t *data))
{
    struct mp_image *mpi = mp_image_new_dummy_ref(NULL);
    mp_image_setfmt(mpi, imgfmt);
    mp_image_set_size(mpi, w, h);

    if (!mp_image_fill_alloc(mpi, stride_align, buffer, buffer_size))
        goto fail;

    mpi->bufs[0] = av_buffer_create(buffer, buffer_size, free, free_opaque, 0);
    if (!mpi->bufs[0])
        goto fail;

    return mpi;

fail:
    talloc_free(mpi);
    return NULL;
}

static bool mp_image_alloc_planes(struct mp_image *mpi)
{
    assert(!mpi->planes[0]);
    assert(!mpi->bufs[0]);

    int align = MP_IMAGE_BYTE_ALIGN;

    int size = mp_image_get_alloc_size(mpi->imgfmt, mpi->w, mpi->h, align);
    if (size < 0)
        return false;

    // Note: mp_image_pool assumes this creates only 1 AVBufferRef.
    mpi->bufs[0] = av_buffer_alloc(size + align);
    if (!mpi->bufs[0])
        return false;

    if (!mp_image_fill_alloc(mpi, align, mpi->bufs[0]->data, mpi->bufs[0]->size)) {
        av_buffer_unref(&mpi->bufs[0]);
        return false;
    }

    return true;
}
|
2010-04-15 05:39:36 +00:00
|
|
|
|
2014-03-17 17:19:57 +00:00
|
|
|
void mp_image_setfmt(struct mp_image *mpi, int out_fmt)
|
2012-12-31 00:58:25 +00:00
|
|
|
{
|
|
|
|
struct mp_imgfmt_desc fmt = mp_imgfmt_get_desc(out_fmt);
|
2023-11-23 04:36:53 +00:00
|
|
|
mpi->params.imgfmt = fmt.id;
|
2012-12-31 00:58:25 +00:00
|
|
|
mpi->fmt = fmt;
|
|
|
|
mpi->imgfmt = fmt.id;
|
|
|
|
mpi->num_planes = fmt.num_planes;
|
2010-04-15 05:39:36 +00:00
|
|
|
}

static void mp_image_destructor(void *ptr)
{
    mp_image_t *mpi = ptr;
    for (int p = 0; p < MP_MAX_PLANES; p++)
        av_buffer_unref(&mpi->bufs[p]);
    av_buffer_unref(&mpi->hwctx);
    av_buffer_unref(&mpi->icc_profile);
    av_buffer_unref(&mpi->a53_cc);
    av_buffer_unref(&mpi->dovi);
    av_buffer_unref(&mpi->film_grain);
    for (int n = 0; n < mpi->num_ff_side_data; n++)
        av_buffer_unref(&mpi->ff_side_data[n].buf);
    talloc_free(mpi->ff_side_data);
}

int mp_chroma_div_up(int size, int shift)
{
    return (size + (1 << shift) - 1) >> shift;
}

// Return the storage width in pixels of the given plane.
int mp_image_plane_w(struct mp_image *mpi, int plane)
{
    return mp_chroma_div_up(mpi->w, mpi->fmt.xs[plane]);
}

// Return the storage height in pixels of the given plane.
int mp_image_plane_h(struct mp_image *mpi, int plane)
{
    return mp_chroma_div_up(mpi->h, mpi->fmt.ys[plane]);
}

// Caller has to make sure this doesn't exceed the allocated plane data/strides.
void mp_image_set_size(struct mp_image *mpi, int w, int h)
{
    assert(w >= 0 && h >= 0);
    mpi->w = mpi->params.w = w;
    mpi->h = mpi->params.h = h;
}

void mp_image_set_params(struct mp_image *image,
                         const struct mp_image_params *params)
{
    // possibly initialize other stuff
    mp_image_setfmt(image, params->imgfmt);
    mp_image_set_size(image, params->w, params->h);
    image->params = *params;
}
|
|
|
|
|
2014-03-17 17:19:57 +00:00
|
|
|
struct mp_image *mp_image_alloc(int imgfmt, int w, int h)
{
    struct mp_image *mpi = talloc_zero(NULL, struct mp_image);
    talloc_set_destructor(mpi, mp_image_destructor);

    mp_image_set_size(mpi, w, h);
    mp_image_setfmt(mpi, imgfmt);
    if (!mp_image_alloc_planes(mpi)) {
        talloc_free(mpi);
        return NULL;
    }
    return mpi;
}
int mp_image_approx_byte_size(struct mp_image *img)
{
    int total = sizeof(*img);

    for (int n = 0; n < MP_MAX_PLANES; n++) {
        struct AVBufferRef *buf = img->bufs[n];
        if (buf)
            total += buf->size;
    }

    return total;
}

struct mp_image *mp_image_new_copy(struct mp_image *img)
{
    struct mp_image *new = mp_image_alloc(img->imgfmt, img->w, img->h);
    if (!new)
        return NULL;
    mp_image_copy(new, img);
    mp_image_copy_attributes(new, img);
    return new;
}

// Make dst take over the image data of src, and free src.
// This is basically a safe version of *dst = *src; free(src);
// Only works with ref-counted images, and can't change image size/format.
void mp_image_steal_data(struct mp_image *dst, struct mp_image *src)
{
    assert(dst->imgfmt == src->imgfmt && dst->w == src->w && dst->h == src->h);
    assert(dst->bufs[0] && src->bufs[0]);

    mp_image_destructor(dst); // unref old
    talloc_free_children(dst);

    *dst = *src;
    *src = (struct mp_image){0};
    talloc_free(src);
}

// Unref most data buffers (and clear the data array), but leave other fields
// allocated. In particular, mp_image.hwctx is preserved.
void mp_image_unref_data(struct mp_image *img)
{
    for (int n = 0; n < MP_MAX_PLANES; n++) {
        img->planes[n] = NULL;
        img->stride[n] = 0;
        av_buffer_unref(&img->bufs[n]);
    }
}

static void ref_buffer(AVBufferRef **dst)
{
    if (*dst) {
        *dst = av_buffer_ref(*dst);
        MP_HANDLE_OOM(*dst);
    }
}

// Return a new reference to img. The returned reference is owned by the caller,
// while img is left untouched.
struct mp_image *mp_image_new_ref(struct mp_image *img)
{
    if (!img)
        return NULL;

    if (!img->bufs[0])
        return mp_image_new_copy(img);

    struct mp_image *new = talloc_ptrtype(NULL, new);
    talloc_set_destructor(new, mp_image_destructor);
    *new = *img;

    for (int p = 0; p < MP_MAX_PLANES; p++)
        ref_buffer(&new->bufs[p]);

    ref_buffer(&new->hwctx);
    ref_buffer(&new->icc_profile);
    ref_buffer(&new->a53_cc);
    ref_buffer(&new->dovi);
    ref_buffer(&new->film_grain);

    new->ff_side_data = talloc_memdup(NULL, new->ff_side_data,
                            new->num_ff_side_data * sizeof(new->ff_side_data[0]));
    for (int n = 0; n < new->num_ff_side_data; n++)
        ref_buffer(&new->ff_side_data[n].buf);

    return new;
}

struct free_args {
    void *arg;
    void (*free)(void *arg);
};

static void call_free(void *opaque, uint8_t *data)
{
    struct free_args *args = opaque;
    args->free(args->arg);
    talloc_free(args);
}

// Create a new mp_image based on img, but don't set any buffers.
// Using this is only valid until the original img is unreferenced (including
// implicit unreferencing of the data by mp_image_make_writeable()), unless
// a new reference is set.
struct mp_image *mp_image_new_dummy_ref(struct mp_image *img)
{
    struct mp_image *new = talloc_ptrtype(NULL, new);
    talloc_set_destructor(new, mp_image_destructor);
    *new = img ? *img : (struct mp_image){0};
    for (int p = 0; p < MP_MAX_PLANES; p++)
        new->bufs[p] = NULL;
    new->hwctx = NULL;
    new->icc_profile = NULL;
    new->a53_cc = NULL;
    new->dovi = NULL;
    new->film_grain = NULL;
    new->num_ff_side_data = 0;
    new->ff_side_data = NULL;
    return new;
}

// Return a reference counted reference to img. If the reference count reaches
// 0, call free(free_arg). The data passed by img must not be free'd before
// that. The new reference will be writeable.
// On allocation failure, unref the frame and return NULL.
// This is only used for hw decoding; this is important, because libav* expects
// all plane data to be accounted for by AVBufferRefs.
struct mp_image *mp_image_new_custom_ref(struct mp_image *img, void *free_arg,
                                         void (*free)(void *arg))
{
    struct mp_image *new = mp_image_new_dummy_ref(img);

    struct free_args *args = talloc_ptrtype(NULL, args);
    *args = (struct free_args){free_arg, free};
    new->bufs[0] = av_buffer_create(NULL, 0, call_free, args,
                                    AV_BUFFER_FLAG_READONLY);
    if (new->bufs[0])
        return new;
    talloc_free(new);
    return NULL;
}

bool mp_image_is_writeable(struct mp_image *img)
{
    if (!img->bufs[0])
        return true; // not ref-counted => always considered writeable
    for (int p = 0; p < MP_MAX_PLANES; p++) {
        if (!img->bufs[p])
            break;
        if (!av_buffer_is_writable(img->bufs[p]))
            return false;
    }
    return true;
}

// Make the image data referenced by img writeable. This allocates new data
// if the data wasn't already writeable, and img->planes[] and img->stride[]
// will be set to the copy.
// Returns success; if false is returned, the image could not be made writeable.
bool mp_image_make_writeable(struct mp_image *img)
{
    if (mp_image_is_writeable(img))
        return true;

    struct mp_image *new = mp_image_new_copy(img);
    if (!new)
        return false;
    mp_image_steal_data(img, new);
    assert(mp_image_is_writeable(img));
    return true;
}
|
|
|
|
|
video: introduce failure path for image allocations
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
2014-06-17 20:43:43 +00:00
|
|
|

// Helper function: unrefs *p_img, and sets *p_img to a new ref of new_value.
// Only unrefs *p_img and sets it to NULL if out of memory.
void mp_image_setrefp(struct mp_image **p_img, struct mp_image *new_value)
{
    if (*p_img != new_value) {
        talloc_free(*p_img);
        *p_img = new_value ? mp_image_new_ref(new_value) : NULL;
    }
}

// Mere helper function (mp_image can be directly free'd with talloc_free)
void mp_image_unrefp(struct mp_image **p_img)
{
    talloc_free(*p_img);
    *p_img = NULL;
}

void memcpy_pic(void *dst, const void *src, int bytesPerLine, int height,
                int dstStride, int srcStride)
{
    if (bytesPerLine == dstStride && dstStride == srcStride && height) {
        if (srcStride < 0) {
            src = (uint8_t*)src + (height - 1) * srcStride;
            dst = (uint8_t*)dst + (height - 1) * dstStride;
            srcStride = -srcStride;
        }

        memcpy(dst, src, srcStride * (height - 1) + bytesPerLine);
    } else {
        for (int i = 0; i < height; i++) {
            memcpy(dst, src, bytesPerLine);
            src = (uint8_t*)src + srcStride;
            dst = (uint8_t*)dst + dstStride;
        }
    }
}
void mp_image_copy(struct mp_image *dst, struct mp_image *src)
mp_image: simplify image allocation
mp_image_alloc_planes() allocated images with minimal stride, even if
the resulting stride was unaligned. It was the responsibility of
vf_get_image() to set an image's width to something larger than
required to get an aligned stride, and then crop it. Always allocate
with aligned strides instead.
Get rid of IMGFMT_IF09 special handling. This format is not used
anymore. (IF09 has 4x4 chroma sub-sampling, and that is what it was
mainly used for - this is still supported.) Get rid of swapped chroma
plane allocation. This is not used anywhere, and VOs like vo_xv,
vo_direct3d and vo_sdl do their own swapping.
Always round chroma width/height up instead of down. Consider 4:2:0 and
an uneven image size. For luma, the size was left uneven, and the chroma
size was rounded down. This doesn't make sense, because chroma would be
missing for the bottom/right border.
Remove mp_image_new_empty() and mp_image_alloc_planes(), they were not
used anymore, except in draw_bmp.c. (It's still allowed to setup
mp_images manually, you just can't allocate image data with them
anymore - this is also done in draw_bmp.c.)
{
    assert(dst->imgfmt == src->imgfmt);
    assert(dst->w == src->w && dst->h == src->h);
    assert(mp_image_is_writeable(dst));
    for (int n = 0; n < dst->num_planes; n++) {
        int line_bytes = (mp_image_plane_w(dst, n) * dst->fmt.bpp[n] + 7) / 8;
        int plane_h = mp_image_plane_h(dst, n);
        memcpy_pic(dst->planes[n], src->planes[n], line_bytes, plane_h,
                   dst->stride[n], src->stride[n]);
    }
    if (dst->fmt.flags & MP_IMGFLAG_PAL)
        memcpy(dst->planes[1], src->planes[1], AVPALETTE_SIZE);
}

static enum pl_color_system mp_image_params_get_forced_csp(struct mp_image_params *params)
{
    int imgfmt = params->hw_subfmt ? params->hw_subfmt : params->imgfmt;
    return mp_imgfmt_get_forced_csp(imgfmt);
}

static void assign_bufref(AVBufferRef **dst, AVBufferRef *new)
{
    av_buffer_unref(dst);
    if (new) {
        *dst = av_buffer_ref(new);
        MP_HANDLE_OOM(*dst);
    }
}

void mp_image_copy_attributes(struct mp_image *dst, struct mp_image *src)
{
    assert(dst != src);

    dst->pict_type = src->pict_type;
    dst->fields = src->fields;
    dst->pts = src->pts;
    dst->dts = src->dts;
    dst->pkt_duration = src->pkt_duration;
    dst->params.rotate = src->params.rotate;
    dst->params.stereo3d = src->params.stereo3d;
    dst->params.p_w = src->params.p_w;
    dst->params.p_h = src->params.p_h;
    dst->params.color = src->params.color;
    dst->params.repr = src->params.repr;
    dst->params.light = src->params.light;
    dst->params.chroma_location = src->params.chroma_location;
    dst->params.crop = src->params.crop;
    dst->nominal_fps = src->nominal_fps;

    // ensure colorspace consistency
    enum pl_color_system dst_forced_csp = mp_image_params_get_forced_csp(&dst->params);
    if (mp_image_params_get_forced_csp(&src->params) != dst_forced_csp) {
        dst->params.repr.sys = dst_forced_csp != PL_COLOR_SYSTEM_UNKNOWN ?
                                    dst_forced_csp :
                                    mp_csp_guess_colorspace(src->w, src->h);
    }

    if ((dst->fmt.flags & MP_IMGFLAG_PAL) && (src->fmt.flags & MP_IMGFLAG_PAL)) {
        if (dst->planes[1] && src->planes[1]) {
            if (mp_image_make_writeable(dst))
                memcpy(dst->planes[1], src->planes[1], AVPALETTE_SIZE);
        }
    }
    assign_bufref(&dst->icc_profile, src->icc_profile);
    assign_bufref(&dst->dovi, src->dovi);
    assign_bufref(&dst->film_grain, src->film_grain);
    assign_bufref(&dst->a53_cc, src->a53_cc);

    for (int n = 0; n < dst->num_ff_side_data; n++)
        av_buffer_unref(&dst->ff_side_data[n].buf);

    MP_RESIZE_ARRAY(NULL, dst->ff_side_data, src->num_ff_side_data);
    dst->num_ff_side_data = src->num_ff_side_data;

    for (int n = 0; n < dst->num_ff_side_data; n++) {
        dst->ff_side_data[n].type = src->ff_side_data[n].type;
        dst->ff_side_data[n].buf = av_buffer_ref(src->ff_side_data[n].buf);
        MP_HANDLE_OOM(dst->ff_side_data[n].buf);
    }
}

// Crop the given image to (x0, y0)-(x1, y1) (bottom/right border exclusive)
// x0/y0 must be naturally aligned.
void mp_image_crop(struct mp_image *img, int x0, int y0, int x1, int y1)
{
    assert(x0 >= 0 && y0 >= 0);
    assert(x0 <= x1 && y0 <= y1);
    assert(x1 <= img->w && y1 <= img->h);
    assert(!(x0 & (img->fmt.align_x - 1)));
    assert(!(y0 & (img->fmt.align_y - 1)));

    for (int p = 0; p < img->num_planes; ++p) {
        img->planes[p] += (y0 >> img->fmt.ys[p]) * img->stride[p] +
                          (x0 >> img->fmt.xs[p]) * img->fmt.bpp[p] / 8;
    }
    mp_image_set_size(img, x1 - x0, y1 - y0);
}

void mp_image_crop_rc(struct mp_image *img, struct mp_rect rc)
{
    mp_image_crop(img, rc.x0, rc.y0, rc.x1, rc.y1);
}

// Repeatedly write count patterns of src[0..src_size] to p.
static void memset_pattern(void *p, size_t count, uint8_t *src, size_t src_size)
{
    assert(src_size >= 1);

    if (src_size == 1) {
        memset(p, src[0], count);
    } else if (src_size == 2) { // >8 bit YUV => common, be slightly less naive
        uint16_t val;
        memcpy(&val, src, 2);
        uint16_t *p16 = p;
        while (count--)
            *p16++ = val;
    } else {
        while (count--) {
            memcpy(p, src, src_size);
            p = (char *)p + src_size;
        }
    }
}

static bool endian_swap_bytes(void *d, size_t bytes, size_t word_size)
{
    if (word_size != 2 && word_size != 4)
        return false;

    size_t num_words = bytes / word_size;
    uint8_t *ud = d;

    switch (word_size) {
    case 2:
        for (size_t x = 0; x < num_words; x++)
            AV_WL16(ud + x * 2, AV_RB16(ud + x * 2));
        break;
    case 4:
        for (size_t x = 0; x < num_words; x++)
            AV_WL32(ud + x * 4, AV_RB32(ud + x * 4));
        break;
    default:
        MP_ASSERT_UNREACHABLE();
    }

    return true;
}

// Bottom/right border is allowed not to be aligned, but it might implicitly
// overwrite pixel data until the alignment (align_x/align_y) is reached.
// Alpha is cleared to 0 (fully transparent).
void mp_image_clear(struct mp_image *img, int x0, int y0, int x1, int y1)
{
    assert(x0 >= 0 && y0 >= 0);
    assert(x0 <= x1 && y0 <= y1);
    assert(x1 <= img->w && y1 <= img->h);
    assert(!(x0 & (img->fmt.align_x - 1)));
    assert(!(y0 & (img->fmt.align_y - 1)));

    struct mp_image area = *img;
    struct mp_imgfmt_desc *fmt = &area.fmt;
    mp_image_crop(&area, x0, y0, x1, y1);

    // "Black" color for each plane.
    uint8_t plane_clear[MP_MAX_PLANES][8] = {0};
    int plane_size[MP_MAX_PLANES] = {0};
    int misery = 1; // pixel group width

    // YUV integer chroma needs special consideration, and technically luma is
    // usually not 0 either.
    if ((fmt->flags & (MP_IMGFLAG_HAS_COMPS | MP_IMGFLAG_PACKED_SS_YUV)) &&
        (fmt->flags & MP_IMGFLAG_TYPE_MASK) == MP_IMGFLAG_TYPE_UINT &&
        (fmt->flags & MP_IMGFLAG_COLOR_MASK) == MP_IMGFLAG_COLOR_YUV)
    {
        uint64_t plane_clear_i[MP_MAX_PLANES] = {0};

        // Need to handle "multiple" pixels with packed YUV.
        uint8_t luma_offsets[4] = {0};
        if (fmt->flags & MP_IMGFLAG_PACKED_SS_YUV) {
            misery = fmt->align_x;
            if (misery <= MP_ARRAY_SIZE(luma_offsets)) // ignore if out of bounds
                mp_imgfmt_get_packed_yuv_locations(fmt->id, luma_offsets);
        }

        for (int c = 0; c < 4; c++) {
            struct mp_imgfmt_comp_desc *cd = &fmt->comps[c];
            int plane_bits = fmt->bpp[cd->plane] * misery;
            if (plane_bits <= 64 && plane_bits % 8u == 0 && cd->size) {
                plane_size[cd->plane] = plane_bits / 8u;
                int depth = cd->size + MPMIN(cd->pad, 0);
                double m, o;
                mp_get_csp_uint_mul(area.params.repr.sys,
                                    area.params.repr.levels,
                                    depth, c + 1, &m, &o);
                uint64_t val = MPCLAMP(lrint((0 - o) / m), 0, 1ull << depth);
                plane_clear_i[cd->plane] |= val << cd->offset;
                for (int x = 1; x < (c ? 0 : misery); x++)
                    plane_clear_i[cd->plane] |= val << luma_offsets[x];
            }
        }

        for (int p = 0; p < MP_MAX_PLANES; p++) {
            if (!plane_clear_i[p])
                plane_size[p] = 0;
            memcpy(&plane_clear[p][0], &plane_clear_i[p], 8); // endian dependent

            if (fmt->endian_shift) {
                endian_swap_bytes(&plane_clear[p][0], plane_size[p],
                                  1 << fmt->endian_shift);
            }
        }
    }

    for (int p = 0; p < area.num_planes; p++) {
        int p_h = mp_image_plane_h(&area, p);
        int p_w = mp_image_plane_w(&area, p);
        for (int y = 0; y < p_h; y++) {
            void *ptr = area.planes[p] + (ptrdiff_t)area.stride[p] * y;
            if (plane_size[p]) {
                memset_pattern(ptr, p_w / misery, plane_clear[p], plane_size[p]);
            } else {
                memset(ptr, 0, mp_image_plane_bytes(&area, p, 0, area.w));
            }
        }
    }
}

void mp_image_clear_rc(struct mp_image *mpi, struct mp_rect rc)
{
    mp_image_clear(mpi, rc.x0, rc.y0, rc.x1, rc.y1);
}

// Clear the area of the image _not_ covered by rc.
void mp_image_clear_rc_inv(struct mp_image *mpi, struct mp_rect rc)
{
    struct mp_rect clr[4];
    int cnt = mp_rect_subtract(&(struct mp_rect){0, 0, mpi->w, mpi->h}, &rc, clr);
    for (int n = 0; n < cnt; n++)
        mp_image_clear_rc(mpi, clr[n]);
}

void mp_image_vflip(struct mp_image *img)
{
    for (int p = 0; p < img->num_planes; p++) {
        int plane_h = mp_image_plane_h(img, p);
        img->planes[p] = img->planes[p] + img->stride[p] * (plane_h - 1);
        img->stride[p] = -img->stride[p];
    }
}

bool mp_image_crop_valid(const struct mp_image_params *p)
{
    return p->crop.x1 > p->crop.x0 && p->crop.y1 > p->crop.y0 &&
           p->crop.x0 >= 0 && p->crop.y0 >= 0 &&
           p->crop.x1 <= p->w && p->crop.y1 <= p->h;
}

// Display size derived from image size and pixel aspect ratio.
void mp_image_params_get_dsize(const struct mp_image_params *p,
                               int *d_w, int *d_h)
{
    if (mp_image_crop_valid(p)) {
        *d_w = mp_rect_w(p->crop);
        *d_h = mp_rect_h(p->crop);
    } else {
        *d_w = p->w;
        *d_h = p->h;
    }

    if (p->p_w > p->p_h && p->p_h >= 1)
        *d_w = MPCLAMP(*d_w * (int64_t)p->p_w / p->p_h, 1, INT_MAX);
    if (p->p_h > p->p_w && p->p_w >= 1)
        *d_h = MPCLAMP(*d_h * (int64_t)p->p_h / p->p_w, 1, INT_MAX);
}

void mp_image_params_set_dsize(struct mp_image_params *p, int d_w, int d_h)
{
    AVRational ds = av_div_q((AVRational){d_w, d_h}, (AVRational){p->w, p->h});
    p->p_w = ds.num;
    p->p_h = ds.den;
}

char *mp_image_params_to_str_buf(char *b, size_t bs,
                                 const struct mp_image_params *p)
{
    if (p && p->imgfmt) {
        snprintf(b, bs, "%dx%d", p->w, p->h);
        if (p->p_w != p->p_h || !p->p_w)
            mp_snprintf_cat(b, bs, " [%d:%d]", p->p_w, p->p_h);
        mp_snprintf_cat(b, bs, " %s", mp_imgfmt_to_name(p->imgfmt));
        if (p->hw_subfmt)
            mp_snprintf_cat(b, bs, "[%s]", mp_imgfmt_to_name(p->hw_subfmt));
        mp_snprintf_cat(b, bs, " %s/%s/%s/%s/%s",
                        m_opt_choice_str(pl_csp_names, p->repr.sys),
                        m_opt_choice_str(pl_csp_prim_names, p->color.primaries),
                        m_opt_choice_str(pl_csp_trc_names, p->color.transfer),
                        m_opt_choice_str(pl_csp_levels_names, p->repr.levels),
                        m_opt_choice_str(mp_csp_light_names, p->light));
        mp_snprintf_cat(b, bs, " CL=%s",
                        m_opt_choice_str(pl_chroma_names, p->chroma_location));
        if (mp_image_crop_valid(p)) {
            mp_snprintf_cat(b, bs, " crop=%dx%d+%d+%d", mp_rect_w(p->crop),
                            mp_rect_h(p->crop), p->crop.x0, p->crop.y0);
        }
        if (p->rotate)
            mp_snprintf_cat(b, bs, " rot=%d", p->rotate);
        if (p->stereo3d > 0) {
            mp_snprintf_cat(b, bs, " stereo=%s",
                            MP_STEREO3D_NAME_DEF(p->stereo3d, "?"));
        }
        if (p->repr.alpha) {
            mp_snprintf_cat(b, bs, " A=%s",
                            m_opt_choice_str(pl_alpha_names, p->repr.alpha));
        }
    } else {
        snprintf(b, bs, "???");
    }
    return b;
}

// Return whether the image parameters are valid.
// Some non-essential fields are allowed to be unset (like colorspace flags).
bool mp_image_params_valid(const struct mp_image_params *p)
{
    // av_image_check_size has similar checks and triggers around 16000*16000
    // It's mostly needed to deal with the fact that offsets are sometimes
    // ints. We also should (for now) do the same as FFmpeg, to be sure large
    // images don't crash with libswscale or when wrapping with AVFrame and
    // passing the result to filters.
    if (p->w <= 0 || p->h <= 0 || (p->w + 128LL) * (p->h + 128LL) >= INT_MAX / 8)
        return false;

    if (p->p_w < 0 || p->p_h < 0)
        return false;

    if (p->rotate < 0 || p->rotate >= 360)
        return false;

    struct mp_imgfmt_desc desc = mp_imgfmt_get_desc(p->imgfmt);
    if (!desc.id)
        return false;

    if (p->hw_subfmt && !(desc.flags & MP_IMGFLAG_HWACCEL))
        return false;

    return true;
}

bool mp_image_params_equal(const struct mp_image_params *p1,
                           const struct mp_image_params *p2)
{
    return p1->imgfmt == p2->imgfmt &&
           p1->hw_subfmt == p2->hw_subfmt &&
           p1->w == p2->w && p1->h == p2->h &&
           p1->p_w == p2->p_w && p1->p_h == p2->p_h &&
           p1->force_window == p2->force_window &&
           pl_color_space_equal(&p1->color, &p2->color) &&
           pl_color_repr_equal(&p1->repr, &p2->repr) &&
           p1->light == p2->light &&
           p1->chroma_location == p2->chroma_location &&
           p1->rotate == p2->rotate &&
           p1->stereo3d == p2->stereo3d &&
           mp_rect_equals(&p1->crop, &p2->crop);
}

bool mp_image_params_static_equal(const struct mp_image_params *p1,
                                  const struct mp_image_params *p2)
{
    // Compare only static video parameters, excluding dynamic metadata.
    struct mp_image_params a = *p1;
    struct mp_image_params b = *p2;
    a.repr.dovi = b.repr.dovi = NULL;
    a.color.hdr = b.color.hdr = (struct pl_hdr_metadata){0};
    return mp_image_params_equal(&a, &b);
}

// Restore color system, transfer, and primaries to their original values
// before dovi mapping.
void mp_image_params_restore_dovi_mapping(struct mp_image_params *params)
{
    if (!params->primaries_orig || !params->transfer_orig || !params->sys_orig)
        return;
    params->color.primaries = params->primaries_orig;
    params->color.transfer = params->transfer_orig;
    params->repr.sys = params->sys_orig;
    if (!pl_color_transfer_is_hdr(params->transfer_orig))
        params->color.hdr = (struct pl_hdr_metadata){0};
    if (params->transfer_orig != PL_COLOR_TRC_PQ)
        params->color.hdr.max_pq_y = params->color.hdr.avg_pq_y = 0;
}

// Set most image parameters, but not image format or size.
// Display size is used to set the PAR.
void mp_image_set_attributes(struct mp_image *image,
                             const struct mp_image_params *params)
{
    struct mp_image_params nparams = *params;
    nparams.imgfmt = image->imgfmt;
    nparams.w = image->w;
    nparams.h = image->h;
    if (nparams.imgfmt != params->imgfmt) {
        nparams.repr = (struct pl_color_repr){0};
        nparams.color = (struct pl_color_space){0};
    }
    mp_image_set_params(image, &nparams);
}

static enum pl_color_levels infer_levels(enum mp_imgfmt imgfmt)
{
    switch (imgfmt2pixfmt(imgfmt)) {
    case AV_PIX_FMT_YUVJ420P:
    case AV_PIX_FMT_YUVJ411P:
    case AV_PIX_FMT_YUVJ422P:
    case AV_PIX_FMT_YUVJ444P:
    case AV_PIX_FMT_YUVJ440P:
    case AV_PIX_FMT_GRAY8:
    case AV_PIX_FMT_YA8:
    case AV_PIX_FMT_GRAY9LE:
    case AV_PIX_FMT_GRAY9BE:
    case AV_PIX_FMT_GRAY10LE:
    case AV_PIX_FMT_GRAY10BE:
    case AV_PIX_FMT_GRAY12LE:
    case AV_PIX_FMT_GRAY12BE:
    case AV_PIX_FMT_GRAY14LE:
    case AV_PIX_FMT_GRAY14BE:
    case AV_PIX_FMT_GRAY16LE:
    case AV_PIX_FMT_GRAY16BE:
    case AV_PIX_FMT_YA16BE:
    case AV_PIX_FMT_YA16LE:
        return PL_COLOR_LEVELS_FULL;
    default:
        return PL_COLOR_LEVELS_LIMITED;
    }
}

// If details like params->colorspace/colorlevels are missing, guess them from
// the other settings. Also, even if they are set, make them consistent with
// the colorspace as implied by the pixel format.
void mp_image_params_guess_csp(struct mp_image_params *params)
{
    enum pl_color_system forced_csp = mp_image_params_get_forced_csp(params);
    if (forced_csp == PL_COLOR_SYSTEM_UNKNOWN) { // YUV/other
        if (params->repr.sys != PL_COLOR_SYSTEM_BT_601 &&
            params->repr.sys != PL_COLOR_SYSTEM_BT_709 &&
            params->repr.sys != PL_COLOR_SYSTEM_BT_2020_NC &&
            params->repr.sys != PL_COLOR_SYSTEM_BT_2020_C &&
            params->repr.sys != PL_COLOR_SYSTEM_BT_2100_PQ &&
            params->repr.sys != PL_COLOR_SYSTEM_BT_2100_HLG &&
            params->repr.sys != PL_COLOR_SYSTEM_DOLBYVISION &&
            params->repr.sys != PL_COLOR_SYSTEM_SMPTE_240M &&
            params->repr.sys != PL_COLOR_SYSTEM_YCGCO)
        {
            // Makes no sense, so guess instead
            // YCGCO should be separate, but libavcodec disagrees
            params->repr.sys = PL_COLOR_SYSTEM_UNKNOWN;
        }
        if (params->repr.sys == PL_COLOR_SYSTEM_UNKNOWN)
            params->repr.sys = mp_csp_guess_colorspace(params->w, params->h);
        if (params->repr.levels == PL_COLOR_LEVELS_UNKNOWN) {
            if (params->color.transfer == PL_COLOR_TRC_V_LOG) {
                params->repr.levels = PL_COLOR_LEVELS_FULL;
            } else {
                params->repr.levels = infer_levels(params->imgfmt);
            }
        }
        if (params->color.primaries == PL_COLOR_PRIM_UNKNOWN) {
            // Guess based on the colormatrix as a first priority
            if (params->repr.sys == PL_COLOR_SYSTEM_BT_2020_NC ||
                params->repr.sys == PL_COLOR_SYSTEM_BT_2020_C) {
                params->color.primaries = PL_COLOR_PRIM_BT_2020;
            } else if (params->repr.sys == PL_COLOR_SYSTEM_BT_709) {
                params->color.primaries = PL_COLOR_PRIM_BT_709;
            } else {
                // Ambiguous colormatrix for BT.601, guess based on res
                params->color.primaries = mp_csp_guess_primaries(params->w, params->h);
            }
        }
        if (params->color.transfer == PL_COLOR_TRC_UNKNOWN)
            params->color.transfer = PL_COLOR_TRC_BT_1886;
    } else if (forced_csp == PL_COLOR_SYSTEM_RGB) {
        params->repr.sys = PL_COLOR_SYSTEM_RGB;
        params->repr.levels = PL_COLOR_LEVELS_FULL;

        // The majority of RGB content is either sRGB or (rarely) some other
        // color space which we don't even handle, like AdobeRGB or
        // ProPhotoRGB. The only reasonable thing we can do is assume it's
        // sRGB and hope for the best, which should usually just work out fine.
        // Note: sRGB primaries = BT.709 primaries
        if (params->color.primaries == PL_COLOR_PRIM_UNKNOWN)
            params->color.primaries = PL_COLOR_PRIM_BT_709;
        if (params->color.transfer == PL_COLOR_TRC_UNKNOWN)
            params->color.transfer = PL_COLOR_TRC_SRGB;
    } else if (forced_csp == PL_COLOR_SYSTEM_XYZ) {
        params->repr.sys = PL_COLOR_SYSTEM_XYZ;
        params->repr.levels = PL_COLOR_LEVELS_FULL;
        // Force gamma to ST428 as this is the only correct for DCDM X'Y'Z'
        params->color.transfer = PL_COLOR_TRC_ST428;
        // Don't care about primaries, they shouldn't be used, or if anything
        // MP_CSP_PRIM_ST428 should be defined.
    } else {
        // We have no clue.
        params->repr.sys = PL_COLOR_SYSTEM_UNKNOWN;
        params->repr.levels = PL_COLOR_LEVELS_UNKNOWN;
        params->color.primaries = PL_COLOR_PRIM_UNKNOWN;
        params->color.transfer = PL_COLOR_TRC_UNKNOWN;
    }

    if (!params->color.hdr.max_luma) {
        if (params->color.transfer == PL_COLOR_TRC_HLG) {
            params->color.hdr.max_luma = 1000; // reference display
        } else {
            // If the signal peak is unknown, we're forced to pick the TRC's
            // nominal range as the signal peak to prevent clipping
            params->color.hdr.max_luma = pl_color_transfer_nominal_peak(params->color.transfer) * MP_REF_WHITE;
        }
    }

    if (!pl_color_space_is_hdr(&params->color)) {
        // Some clips have leftover HDR metadata after conversion to SDR, so to
        // avoid blowing up the tone mapping code, strip/sanitize it
        params->color.hdr = pl_hdr_metadata_empty;
    }

    if (params->chroma_location == PL_CHROMA_UNKNOWN) {
        if (params->repr.levels == PL_COLOR_LEVELS_LIMITED)
            params->chroma_location = PL_CHROMA_LEFT;
        if (params->repr.levels == PL_COLOR_LEVELS_FULL)
            params->chroma_location = PL_CHROMA_CENTER;
    }

    if (params->light == MP_CSP_LIGHT_AUTO) {
        // HLG is always scene-referred (using its own OOTF), everything else
        // we assume is display-referred by default.
|
2023-11-04 02:55:38 +00:00
|
|
|
if (params->color.transfer == PL_COLOR_TRC_HLG) {
|
|
|
|
params->light = MP_CSP_LIGHT_SCENE_HLG;
|
2017-06-14 18:06:56 +00:00
|
|
|
} else {
|
2023-11-04 02:55:38 +00:00
|
|
|
params->light = MP_CSP_LIGHT_DISPLAY;
|
2017-06-14 18:06:56 +00:00
|
|
|
}
|
|
|
|
}
|
2012-10-27 16:01:51 +00:00
|
|
|
}
|
// Create a new mp_image reference to av_frame.
struct mp_image *mp_image_from_av_frame(struct AVFrame *src)
{
    struct mp_image *dst = &(struct mp_image){0};
    AVFrameSideData *sd;

    for (int p = 0; p < MP_MAX_PLANES; p++)
        dst->bufs[p] = src->buf[p];

    dst->hwctx = src->hw_frames_ctx;

    mp_image_setfmt(dst, pixfmt2imgfmt(src->format));
    mp_image_set_size(dst, src->width, src->height);

    dst->params.p_w = src->sample_aspect_ratio.num;
    dst->params.p_h = src->sample_aspect_ratio.den;

    for (int i = 0; i < 4; i++) {
        dst->planes[i] = src->data[i];
        dst->stride[i] = src->linesize[i];
    }

    dst->pict_type = src->pict_type;

    dst->params.crop.x0 = src->crop_left;
    dst->params.crop.y0 = src->crop_top;
    dst->params.crop.x1 = src->width - src->crop_right;
    dst->params.crop.y1 = src->height - src->crop_bottom;

    dst->fields = 0;
    if (src->flags & AV_FRAME_FLAG_INTERLACED)
        dst->fields |= MP_IMGFIELD_INTERLACED;
    if (src->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST)
        dst->fields |= MP_IMGFIELD_TOP_FIRST;
    if (src->repeat_pict == 1)
        dst->fields |= MP_IMGFIELD_REPEAT_FIRST;

    dst->params.repr = (struct pl_color_repr){
        .sys = pl_system_from_av(src->colorspace),
        .levels = pl_levels_from_av(src->color_range),
    };

    dst->params.color = (struct pl_color_space){
        .primaries = pl_primaries_from_av(src->color_primaries),
        .transfer = pl_transfer_from_av(src->color_trc),
    };

    dst->params.chroma_location = pl_chroma_from_av(src->chroma_location);

    if (src->opaque_ref) {
        struct mp_image_params *p = (void *)src->opaque_ref->data;
        dst->params.stereo3d = p->stereo3d;
        // Might be incorrect if colorspace changes.
        dst->params.light = p->light;
        dst->params.repr.alpha = p->repr.alpha;
    }

    sd = av_frame_get_side_data(src, AV_FRAME_DATA_DISPLAYMATRIX);
    if (sd) {
        double r = av_display_rotation_get((int32_t *)(sd->data));
        if (!isnan(r))
            dst->params.rotate = (((int)(-r) % 360) + 360) % 360;
    }

    sd = av_frame_get_side_data(src, AV_FRAME_DATA_ICC_PROFILE);
    if (sd)
        dst->icc_profile = sd->buf;

    AVFrameSideData *mdm = av_frame_get_side_data(src, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
    AVFrameSideData *clm = av_frame_get_side_data(src, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
    AVFrameSideData *dhp = av_frame_get_side_data(src, AV_FRAME_DATA_DYNAMIC_HDR_PLUS);
    pl_map_hdr_metadata(&dst->params.color.hdr, &(struct pl_av_hdr_metadata) {
        .mdm = (void *)(mdm ? mdm->data : NULL),
        .clm = (void *)(clm ? clm->data : NULL),
        .dhp = (void *)(dhp ? dhp->data : NULL),
    });

    sd = av_frame_get_side_data(src, AV_FRAME_DATA_A53_CC);
    if (sd)
        dst->a53_cc = sd->buf;

    dst->params.primaries_orig = dst->params.color.primaries;
    dst->params.transfer_orig = dst->params.color.transfer;
    dst->params.sys_orig = dst->params.repr.sys;

    AVBufferRef *dovi = NULL;
    sd = av_frame_get_side_data(src, AV_FRAME_DATA_DOVI_METADATA);
    if (sd) {
#ifdef PL_HAVE_LAV_DOLBY_VISION
        const AVDOVIMetadata *metadata = (const AVDOVIMetadata *)sd->buf->data;
        const AVDOVIRpuDataHeader *header = av_dovi_get_header(metadata);
        if (header->disable_residual_flag) {
            dst->dovi = dovi = av_buffer_alloc(sizeof(struct pl_dovi_metadata));
            MP_HANDLE_OOM(dovi);
#if PL_API_VER >= 343
            pl_map_avdovi_metadata(&dst->params.color, &dst->params.repr,
                                   (void *)dst->dovi->data, metadata);
#else
            struct pl_frame frame;
            frame.repr = dst->params.repr;
            frame.color = dst->params.color;
            pl_frame_map_avdovi_metadata(&frame, (void *)dst->dovi->data, metadata);
            dst->params.repr = frame.repr;
            dst->params.color = frame.color;
#endif
        }
#endif
    }

    sd = av_frame_get_side_data(src, AV_FRAME_DATA_DOVI_RPU_BUFFER);
    if (sd) {
        pl_hdr_metadata_from_dovi_rpu(&dst->params.color.hdr, sd->buf->data,
                                      sd->buf->size);
    }

    sd = av_frame_get_side_data(src, AV_FRAME_DATA_FILM_GRAIN_PARAMS);
    if (sd)
        dst->film_grain = sd->buf;

    for (int n = 0; n < src->nb_side_data; n++) {
        sd = src->side_data[n];
        struct mp_ff_side_data mpsd = {
            .type = sd->type,
            .buf = sd->buf,
        };
        MP_TARRAY_APPEND(NULL, dst->ff_side_data, dst->num_ff_side_data, mpsd);
    }

    if (dst->hwctx) {
        AVHWFramesContext *fctx = (void *)dst->hwctx->data;
        dst->params.hw_subfmt = pixfmt2imgfmt(fctx->sw_format);
    }

    struct mp_image *res = mp_image_new_ref(dst);

    // Allocated, but non-refcounted data.
    talloc_free(dst->ff_side_data);
    av_buffer_unref(&dovi);

    return res;
}

// Convert the mp_image reference to an AVFrame reference.
struct AVFrame *mp_image_to_av_frame(struct mp_image *src)
{
    struct mp_image *new_ref = mp_image_new_ref(src);
    AVFrame *dst = av_frame_alloc();
    if (!dst || !new_ref) {
        talloc_free(new_ref);
        av_frame_free(&dst);
        return NULL;
    }

    for (int p = 0; p < MP_MAX_PLANES; p++) {
        dst->buf[p] = new_ref->bufs[p];
        new_ref->bufs[p] = NULL;
    }

    dst->hw_frames_ctx = new_ref->hwctx;
    new_ref->hwctx = NULL;

    dst->format = imgfmt2pixfmt(src->imgfmt);
    dst->width = src->w;
    dst->height = src->h;

    dst->crop_left = src->params.crop.x0;
    dst->crop_top = src->params.crop.y0;
    dst->crop_right = dst->width - src->params.crop.x1;
    dst->crop_bottom = dst->height - src->params.crop.y1;

    dst->sample_aspect_ratio.num = src->params.p_w;
    dst->sample_aspect_ratio.den = src->params.p_h;

    for (int i = 0; i < 4; i++) {
        dst->data[i] = src->planes[i];
        dst->linesize[i] = src->stride[i];
    }
    dst->extended_data = dst->data;

    dst->pict_type = src->pict_type;
    if (src->fields & MP_IMGFIELD_INTERLACED)
        dst->flags |= AV_FRAME_FLAG_INTERLACED;
    if (src->fields & MP_IMGFIELD_TOP_FIRST)
        dst->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST;
    if (src->fields & MP_IMGFIELD_REPEAT_FIRST)
        dst->repeat_pict = 1;

    pl_avframe_set_repr(dst, src->params.repr);

    dst->chroma_location = pl_chroma_to_av(src->params.chroma_location);

    dst->opaque_ref = av_buffer_alloc(sizeof(struct mp_image_params));
    MP_HANDLE_OOM(dst->opaque_ref);
    *(struct mp_image_params *)dst->opaque_ref->data = src->params;

    if (src->icc_profile) {
        AVFrameSideData *sd =
            av_frame_new_side_data_from_buf(dst, AV_FRAME_DATA_ICC_PROFILE,
                                            new_ref->icc_profile);
        MP_HANDLE_OOM(sd);
        new_ref->icc_profile = NULL;
    }

    pl_avframe_set_color(dst, src->params.color);

    {
        AVFrameSideData *sd = av_frame_new_side_data(dst,
                                                     AV_FRAME_DATA_DISPLAYMATRIX,
                                                     sizeof(int32_t) * 9);
        MP_HANDLE_OOM(sd);
        av_display_rotation_set((int32_t *)sd->data, src->params.rotate);
    }

    // Add back side data, but only for types which are not specially handled
    // above. Keep in mind that the types above will be out of sync anyway.
    for (int n = 0; n < new_ref->num_ff_side_data; n++) {
        struct mp_ff_side_data *mpsd = &new_ref->ff_side_data[n];
        if (!av_frame_get_side_data(dst, mpsd->type)) {
            AVFrameSideData *sd = av_frame_new_side_data_from_buf(dst, mpsd->type,
                                                                  mpsd->buf);
            MP_HANDLE_OOM(sd);
            mpsd->buf = NULL;
        }
    }

    talloc_free(new_ref);

    if (dst->format == AV_PIX_FMT_NONE)
        av_frame_free(&dst);
    return dst;
}

// Same as mp_image_to_av_frame(), but unref img. (It does so even on failure.)
struct AVFrame *mp_image_to_av_frame_and_unref(struct mp_image *img)
{
    AVFrame *frame = mp_image_to_av_frame(img);
    talloc_free(img);
    return frame;
}

void memset_pic(void *dst, int fill, int bytesPerLine, int height, int stride)
{
    if (bytesPerLine == stride && height) {
        memset(dst, fill, stride * (height - 1) + bytesPerLine);
    } else {
        for (int i = 0; i < height; i++) {
            memset(dst, fill, bytesPerLine);
            dst = (uint8_t *)dst + stride;
        }
    }
}

void memset16_pic(void *dst, int fill, int unitsPerLine, int height, int stride)
{
    if (fill == 0) {
        memset_pic(dst, 0, unitsPerLine * 2, height, stride);
    } else {
        for (int i = 0; i < height; i++) {
            uint16_t *line = dst;
            uint16_t *end = line + unitsPerLine;
            while (line < end)
                *line++ = fill;
            dst = (uint8_t *)dst + stride;
        }
    }
}

// Pixel at the given luma position on the given plane. x/y always refer to
// non-subsampled coordinates (even if plane is chroma).
// The coordinates must be aligned to mp_imgfmt_desc.align_x/y (these are byte
// and chroma boundaries).
// You cannot access e.g. individual luma pixels on the luma plane with yuv420p.
void *mp_image_pixel_ptr(struct mp_image *img, int plane, int x, int y)
{
    assert(MP_IS_ALIGNED(x, img->fmt.align_x));
    assert(MP_IS_ALIGNED(y, img->fmt.align_y));
    return mp_image_pixel_ptr_ny(img, plane, x, y);
}

// Like mp_image_pixel_ptr(), but do not require alignment on Y coordinates if
// the plane does not require it. Use with care.
// Useful for addressing luma rows.
void *mp_image_pixel_ptr_ny(struct mp_image *img, int plane, int x, int y)
{
    assert(MP_IS_ALIGNED(x, img->fmt.align_x));
    assert(MP_IS_ALIGNED(y, 1 << img->fmt.ys[plane]));
    return img->planes[plane] +
           img->stride[plane] * (ptrdiff_t)(y >> img->fmt.ys[plane]) +
           (x >> img->fmt.xs[plane]) * (size_t)img->fmt.bpp[plane] / 8;
}

// Return size of pixels [x0, x0+w-1] in bytes. The coordinates refer to non-
// subsampled pixels (basically plane 0), and the size is rounded to chroma
// and byte alignment boundaries for the entire image, even if plane!=0.
// x0!=0 is useful for rounding (e.g. 1 bpp, x0=7, w=7 => 0..15 => 2 bytes).
size_t mp_image_plane_bytes(struct mp_image *img, int plane, int x0, int w)
{
    int x1 = MP_ALIGN_UP(x0 + w, img->fmt.align_x);
    x0 = MP_ALIGN_DOWN(x0, img->fmt.align_x);
    size_t bpp = img->fmt.bpp[plane];
    int xs = img->fmt.xs[plane];
    return (x1 >> xs) * bpp / 8 - (x0 >> xs) * bpp / 8;
}