/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv.  If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef MPLAYER_IMG_FORMAT_H
#define MPLAYER_IMG_FORMAT_H

#include <inttypes.h>

#include "config.h"
#include "osdep/endian.h"
#include "misc/bstr.h"
#include "video/csputils.h"

#define MP_MAX_PLANES 4
#define MP_NUM_COMPONENTS 4

// mp_imgfmt_desc.comps[] is set to useful values. Some types of formats will
// use comps[], but not set this flag, because it doesn't cover all requirements
// (for example MP_IMGFLAG_PACKED_SS_YUV).
#define MP_IMGFLAG_HAS_COMPS (1 << 0)

// all components start on byte boundaries
#define MP_IMGFLAG_BYTES (1 << 1)

// all pixels start on byte boundaries
#define MP_IMGFLAG_BYTE_ALIGNED (1 << 2)

// set if in little endian, or endian independent
#define MP_IMGFLAG_LE (1 << 3)

// set if in big endian, or endian independent
#define MP_IMGFLAG_BE (1 << 4)

// set if in native (host) endian, or endian independent
#define MP_IMGFLAG_NE MP_SELECT_LE_BE(MP_IMGFLAG_LE, MP_IMGFLAG_BE)

// set if an alpha component is included
#define MP_IMGFLAG_ALPHA (1 << 5)

// color class flags - test the individual bits, or mask with
// MP_IMGFLAG_COLOR_MASK and compare
#define MP_IMGFLAG_COLOR_MASK (15 << 6)
#define MP_IMGFLAG_COLOR_YUV (1 << 6)
#define MP_IMGFLAG_COLOR_RGB (2 << 6)
#define MP_IMGFLAG_COLOR_XYZ (4 << 6)
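
// For instance (illustrative): a format is YUV if (flags & MP_IMGFLAG_COLOR_YUV)
// is set, or equivalently if (flags & MP_IMGFLAG_COLOR_MASK) equals
// MP_IMGFLAG_COLOR_YUV, since a format belongs to one color class.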

// component type flags (same access conventions as MP_IMGFLAG_COLOR_*)
#define MP_IMGFLAG_TYPE_MASK (15 << 10)
#define MP_IMGFLAG_TYPE_UINT (1 << 10)
#define MP_IMGFLAG_TYPE_FLOAT (2 << 10)
#define MP_IMGFLAG_TYPE_PAL8 (4 << 10)
#define MP_IMGFLAG_TYPE_HW (8 << 10)

#define MP_IMGFLAG_YUV MP_IMGFLAG_COLOR_YUV
#define MP_IMGFLAG_RGB MP_IMGFLAG_COLOR_RGB
#define MP_IMGFLAG_PAL MP_IMGFLAG_TYPE_PAL8
#define MP_IMGFLAG_HWACCEL MP_IMGFLAG_TYPE_HW

// 1 component format (or 2 components if MP_IMGFLAG_ALPHA is set).
// This should probably be a separate MP_IMGFLAG_COLOR_GRAY, but for now it
// is too much of a mess.
#define MP_IMGFLAG_GRAY (1 << 14)

// Packed, sub-sampled YUV format. Does not apply to packed non-subsampled YUV.
// These formats pack multiple pixels into one sample with strange organization.
// In this specific case, mp_imgfmt_desc.align_x gives the size of a "full"
// pixel, which has align_x luma samples, and 1 chroma sample of each Cb and Cr.
// mp_imgfmt_desc.comps describes the chroma samples, and the first luma sample.
// All luma samples have the same configuration as the first one, and you can
// get their offsets with mp_imgfmt_get_packed_yuv_locations(). Note that the
// component offsets can be >= bpp[0]; the actual range is bpp[0]*align_x.
// These formats have no alpha.
#define MP_IMGFLAG_PACKED_SS_YUV (1 << 15)
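
// Example (an illustrative sketch, not normative): IMGFMT_UYVY packs U Y0 V Y1
// into 4 bytes, so one would expect align_x == 2 and bpp[0] == 16; a full pixel
// group then covers align_x * bpp[0] / 8 == 4 bytes, and the last luma sample's
// offset (24) is >= bpp[0] == 16.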

// set if the format is in a standard YUV format:
// - planar and yuv colorspace
// - chroma shift 0-2
// - 1-4 planes (1: gray, 2: gray/alpha, 3: yuv, 4: yuv/alpha)
// - 8-16 bit per pixel/plane, all planes have same depth,
//   each plane has exactly one component
#define MP_IMGFLAG_YUV_P (1 << 16)

// Like MP_IMGFLAG_YUV_P, but RGB. This can be e.g. AV_PIX_FMT_GBRP. The planes
// are always shuffled (G - B - R [- A]).
#define MP_IMGFLAG_RGB_P (1 << 17)

// Semi-planar YUV formats, like AV_PIX_FMT_NV12.
#define MP_IMGFLAG_YUV_NV (1 << 18)

struct mp_imgfmt_comp_desc {
    // Plane on which this component is.
    uint8_t plane;
    // Bit offset of first sample, from start of the pixel group (little endian).
    uint8_t offset : 6;
    // Number of bits used by each sample.
    uint8_t size : 6;
    // Internal padding. See mp_regular_imgfmt.component_pad.
    int8_t pad : 4;
};

struct mp_imgfmt_desc {
    int id;                     // IMGFMT_*
    int flags;                  // MP_IMGFLAG_* bitfield
    int8_t num_planes;
    int8_t chroma_xs, chroma_ys; // chroma shift (i.e. log2 of chroma pixel size)
    int8_t align_x, align_y;    // pixel count to get byte alignment and to get
                                // to a pixel pos where luma & chroma align;
                                // always a power of 2
    int8_t bpp[MP_MAX_PLANES];  // bits per pixel (may be "average"; the real
                                // byte value is determined by align_x*bpp/8
                                // for align_x pixels)

    // chroma shifts per plane (provided for convenience with planar formats)
    // Packed YUV always uses xs[0]=ys[0]=0, because plane 0 contains luma in
    // addition to chroma, and thus is not sub-sampled (uses align_x=2 instead).
    int8_t xs[MP_MAX_PLANES];
    int8_t ys[MP_MAX_PLANES];

    // Description of each component. Generally valid only if flags has
    // MP_IMGFLAG_HAS_COMPS set.
    // This is indexed by component_type-1 (so 0=R, 1=G, etc.), see
    // mp_regular_imgfmt_plane.components[x] for component_type. Components not
    // present use size=0. Bits not covered by any component are random and not
    // interpreted by any software.
    // In particular, don't make the mistake of indexing this by plane.
    struct mp_imgfmt_comp_desc comps[MP_NUM_COMPONENTS];

    // log(2) of the word size in bytes for endian swapping that needs to be
    // performed for converting to native endian. This is performed before any
    // other unpacking steps, and for all data covered by bits.
    // Always 0 if MP_IMGFLAG_NE is set.
    uint8_t endian_shift : 2;
};

struct mp_imgfmt_desc mp_imgfmt_get_desc(int imgfmt);
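
// Usage sketch (illustrative; assumes IMGFMT_420P is plain 8 bit 4:2:0 YUV):
//
//   struct mp_imgfmt_desc d = mp_imgfmt_get_desc(IMGFMT_420P);
//   if (d.id && (d.flags & MP_IMGFLAG_HAS_COMPS)) {
//       // comps[] is indexed by component type - 1, not by plane:
//       int luma_bits = d.comps[0].size;    // 8 for this format
//   }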

// Return the number of component types, or 0 if unknown.
int mp_imgfmt_desc_get_num_comps(struct mp_imgfmt_desc *desc);

// For MP_IMGFLAG_PACKED_SS_YUV formats (packed sub-sampled YUV): positions of
// further luma samples. luma_offsets must be an array of align_x size, and the
// function will return the offset (like in mp_imgfmt_comp_desc.offset) of each
// luma pixel. luma_offsets[0] == mp_imgfmt_desc.comps[0].offset.
bool mp_imgfmt_get_packed_yuv_locations(int imgfmt, uint8_t *luma_offsets);
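
// Usage sketch (illustrative, continuing the UYVY example above):
//
//   uint8_t luma[2]; // 2 == align_x for UYVY
//   if (mp_imgfmt_get_packed_yuv_locations(IMGFMT_UYVY, luma)) {
//       // One would expect luma[0] == 8 (== comps[0].offset) and
//       // luma[1] == 24, the bit offsets of Y0 and Y1.
//   }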

// PL_COLOR_SYSTEM_UNKNOWN for YUV formats, PL_COLOR_SYSTEM_RGB or
// PL_COLOR_SYSTEM_XYZ otherwise.
// (Because IMGFMT/AV_PIX_FMT conflate format and csp for RGB and XYZ.)
enum pl_color_system mp_imgfmt_get_forced_csp(int imgfmt);
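
// E.g. (illustrative): IMGFMT_RGB24 should yield PL_COLOR_SYSTEM_RGB, while
// IMGFMT_420P yields PL_COLOR_SYSTEM_UNKNOWN (its colorspace comes from
// mp_image_params instead).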

enum mp_component_type {
    MP_COMPONENT_TYPE_UNKNOWN = 0,
    MP_COMPONENT_TYPE_UINT,
    MP_COMPONENT_TYPE_FLOAT,
};

enum mp_component_type mp_imgfmt_get_component_type(int imgfmt);
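
// E.g. (illustrative): one would expect MP_COMPONENT_TYPE_UINT for IMGFMT_420P,
// and MP_COMPONENT_TYPE_FLOAT for the float formats like IMGFMT_444PF.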

struct mp_regular_imgfmt_plane {
    uint8_t num_components;
    // 1 is red/luminance/gray, 2 is green/Cb, 3 is blue/Cr, 4 is alpha.
    // 0 is used for padding (undefined contents).
    // It is guaranteed that non-0 values occur only once in the whole format.
    uint8_t components[MP_NUM_COMPONENTS];
};

// This describes pixel formats that are byte aligned, have byte aligned
// components, native endian, etc.
struct mp_regular_imgfmt {
    // Type of each component.
    enum mp_component_type component_type;

    // See mp_imgfmt_get_forced_csp(). Normally code should use
    // mp_image_params.colors. This field is only needed to map the format
    // unambiguously to FFmpeg formats.
    enum pl_color_system forced_csp;

    // Size of each component in bytes.
    uint8_t component_size;

    // If >0, LSB padding, if <0, MSB padding. The padding bits are always 0.
    // This applies: bit_depth = component_size * 8 - abs(component_pad)
    //               bit_size  = component_size * 8 + MPMIN(0, component_pad)
    // E.g. P010: component_pad=6 (LSBs are always implied 0, all data in MSBs)
    //      => has a "depth" of 10 bit, but is usually treated as a 16 bit value
    //      yuv420p10: component_pad=-6 (like a 10 bit value 0-extended to 16)
    //      => has a depth of 10 bit, needs <<6 to get a 16 bit value
    int8_t component_pad;

    uint8_t num_planes;
    struct mp_regular_imgfmt_plane planes[MP_MAX_PLANES];

    // Chroma shifts for chroma planes. 0/0 is 4:4:4 YUV or RGB. If not 0/0,
    // then this is always a yuv format, with components 2/3 on separate planes
    // (reduced by the shift), and planes for components 1/4 are full sized.
    uint8_t chroma_xs, chroma_ys;
};
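
// Example (an illustrative sketch, not normative): NV12 would be described
// roughly as:
//
//   struct mp_regular_imgfmt nv12 = {
//       .component_type = MP_COMPONENT_TYPE_UINT,
//       .component_size = 1,                // 8 bit per component
//       .num_planes     = 2,
//       .planes = {
//           {1, {1}},                       // plane 0: luminance only
//           {2, {2, 3}},                    // plane 1: interleaved Cb, Cr
//       },
//       .chroma_xs = 1, .chroma_ys = 1,     // 2x2 (4:2:0) subsampling
//   };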

bool mp_get_regular_imgfmt(struct mp_regular_imgfmt *dst, int imgfmt);
int mp_find_regular_imgfmt(struct mp_regular_imgfmt *src);
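
// Usage sketch (illustrative):
//
//   struct mp_regular_imgfmt r;
//   if (mp_get_regular_imgfmt(&r, IMGFMT_NV12)) {
//       // r should now match the NV12 sketch above.
//   }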

// If imgfmt is valid, and there exists a format that is exactly the same, but
// has inverse endianness, return this other format. Otherwise return 0.
int mp_find_other_endian(int imgfmt);

enum mp_imgfmt {
    IMGFMT_NONE = 0,

    // Offset to make it harder to confuse these with ffmpeg formats.
    IMGFMT_START = 1000,

    // Planar YUV formats
    IMGFMT_444P,                // 1x1
    IMGFMT_420P,                // 2x2

    // Gray
    IMGFMT_Y8,
    IMGFMT_Y16,

    // Packed YUV formats (components are byte-accessed)
    IMGFMT_UYVY,                // U  Y0  V  Y1

    // Y plane + packed plane for chroma
    IMGFMT_NV12,

    // Like IMGFMT_NV12, but with 10 bits per component (and 6 bits of padding)
    IMGFMT_P010,

    // RGB/BGR Formats

    // Byte accessed (low address to high address)
    IMGFMT_ARGB,
    IMGFMT_BGRA,
    IMGFMT_ABGR,
    IMGFMT_RGBA,
    IMGFMT_BGR24,               // 3 bytes per pixel
    IMGFMT_RGB24,

    // Like e.g. IMGFMT_ARGB, but has a padding byte instead of alpha
    IMGFMT_0RGB,
    IMGFMT_BGR0,
    IMGFMT_0BGR,
    IMGFMT_RGB0,

    // Like IMGFMT_RGBA, but 2 bytes per component.
    IMGFMT_RGBA64,

    // Accessed with bit-shifts after endian-swapping the uint16_t pixel
    IMGFMT_RGB565,              // 5r 6g 5b (MSB to LSB)

    // AV_PIX_FMT_PAL8
    IMGFMT_PAL8,

    // Hardware accelerated formats. Plane data points to special data
    // structures, instead of pixel data.
    IMGFMT_VDPAU,               // VdpVideoSurface
    // plane 0: ID3D11Texture2D
    // plane 1: slice index cast to a pointer
    IMGFMT_D3D11,
    IMGFMT_DXVA2,               // IDirect3DSurface9 (NV12/P010/P016)
    IMGFMT_MMAL,                // MMAL_BUFFER_HEADER_T
    IMGFMT_MEDIACODEC,          // AVMediaCodecBuffer
    IMGFMT_CUDA,                // CUDA Buffer

    // Not an actual format; base for mpv-specific descriptor table.
    // Some may still map to AV_PIX_FMT_*.
    IMGFMT_CUST_BASE,

    // Planar gray/alpha.
    IMGFMT_YAP8,
    IMGFMT_YAP16,

    // Planar YUV/alpha formats. Sometimes useful for internal processing. There
    // should be one for each subsampling factor, with and without alpha, gray.
    IMGFMT_YAPF,                // Note: non-alpha version exists in ffmpeg
    IMGFMT_444PF,
    IMGFMT_444APF,
    IMGFMT_420PF,
    IMGFMT_420APF,
    IMGFMT_422PF,
    IMGFMT_422APF,
    IMGFMT_440PF,
    IMGFMT_440APF,
    IMGFMT_410PF,
    IMGFMT_410APF,
    IMGFMT_411PF,
    IMGFMT_411APF,

    // Accessed with bit-shifts, uint32_t units.
    IMGFMT_RGB30,               // 2pad 10r 10g 10b (MSB to LSB)

    // Fringe formats for fringe RGB format repacking.
    IMGFMT_Y1,                  // gray with 1 bit per pixel
    IMGFMT_GBRP1,               // planar RGB with N bits per color component
    IMGFMT_GBRP2,
    IMGFMT_GBRP3,
    IMGFMT_GBRP4,
    IMGFMT_GBRP5,
    IMGFMT_GBRP6,

    // Hardware accelerated formats (again).
    IMGFMT_VDPAU_OUTPUT,        // VdpOutputSurface
    IMGFMT_VAAPI,
    IMGFMT_VIDEOTOOLBOX,        // CVPixelBufferRef
    IMGFMT_VULKAN,              // VkImage
    IMGFMT_DRMPRIME,            // AVDRMFrameDescriptor

    // Generic pass-through of AV_PIX_FMT_*. Used for formats which don't have
    // a corresponding IMGFMT_ value.
    IMGFMT_AVPIXFMT_START,
    IMGFMT_AVPIXFMT_END = IMGFMT_AVPIXFMT_START + 500,

    IMGFMT_END,
};

#define IMGFMT_IS_HWACCEL(fmt) (!!(mp_imgfmt_get_desc(fmt).flags & MP_IMGFLAG_HWACCEL))
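
// E.g. IMGFMT_IS_HWACCEL(IMGFMT_VAAPI) evaluates to 1, while
// IMGFMT_IS_HWACCEL(IMGFMT_420P) evaluates to 0.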

int mp_imgfmt_from_name(bstr name);
char *mp_imgfmt_to_name_buf(char *buf, size_t buf_size, int fmt);
#define mp_imgfmt_to_name(fmt) mp_imgfmt_to_name_buf((char[16]){0}, 16, (fmt))
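
// E.g. (illustrative; assumes the name table follows FFmpeg naming):
// mp_imgfmt_from_name(bstr0("yuv420p")) should return IMGFMT_420P (0, i.e.
// IMGFMT_NONE, for unknown names), and mp_imgfmt_to_name(IMGFMT_420P) maps it
// back to the same name.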

char **mp_imgfmt_name_list(void);

#define vo_format_name mp_imgfmt_to_name

int mp_imgfmt_select_best(int dst1, int dst2, int src);
int mp_imgfmt_select_best_list(int *dst, int num_dst, int src);
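
// Usage sketch (illustrative): pick the best conversion target for a 4:4:4
// source, given the formats an output claims to support:
//
//   int dst[] = {IMGFMT_420P, IMGFMT_RGB24};
//   int best = mp_imgfmt_select_best_list(dst, 2, IMGFMT_444P);
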
#endif /* MPLAYER_IMG_FORMAT_H */