/*
 * This file is part of mpv.
 *
 * mpv is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * mpv is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with mpv. If not, see <http://www.gnu.org/licenses/>.
 */

#include <libavutil/pixdesc.h>
#include <libavutil/avutil.h>

#include "video/img_format.h"
#include "fmt-conversion.h"

static const struct {
    int fmt;
    enum AVPixelFormat pix_fmt;
} conversion_map[] = {
    {IMGFMT_ARGB, AV_PIX_FMT_ARGB},
    {IMGFMT_BGRA, AV_PIX_FMT_BGRA},
    {IMGFMT_BGR24, AV_PIX_FMT_BGR24},
    {IMGFMT_RGB565, AV_PIX_FMT_RGB565},
    {IMGFMT_ABGR, AV_PIX_FMT_ABGR},
    {IMGFMT_RGBA, AV_PIX_FMT_RGBA},
    {IMGFMT_RGB24, AV_PIX_FMT_RGB24},
    {IMGFMT_PAL8, AV_PIX_FMT_PAL8},
    {IMGFMT_UYVY, AV_PIX_FMT_UYVY422},
    {IMGFMT_NV12, AV_PIX_FMT_NV12},
    {IMGFMT_Y8, AV_PIX_FMT_GRAY8},
    {IMGFMT_Y16, AV_PIX_FMT_GRAY16},
    {IMGFMT_420P, AV_PIX_FMT_YUV420P},
    {IMGFMT_444P, AV_PIX_FMT_YUV444P},

    // YUVJ are YUV formats that use the full Y range. Decoder color range
    // information is used instead. Deprecated in ffmpeg.
    {IMGFMT_420P, AV_PIX_FMT_YUVJ420P},
    {IMGFMT_444P, AV_PIX_FMT_YUVJ444P},

    {IMGFMT_BGR0, AV_PIX_FMT_BGR0},
    {IMGFMT_0RGB, AV_PIX_FMT_0RGB},
    {IMGFMT_RGB0, AV_PIX_FMT_RGB0},
    {IMGFMT_0BGR, AV_PIX_FMT_0BGR},

    {IMGFMT_RGBA64, AV_PIX_FMT_RGBA64},

#ifdef AV_PIX_FMT_X2RGB10
    {IMGFMT_RGB30, AV_PIX_FMT_X2RGB10},
#endif

    {IMGFMT_VDPAU, AV_PIX_FMT_VDPAU},
    {IMGFMT_VIDEOTOOLBOX, AV_PIX_FMT_VIDEOTOOLBOX},
    {IMGFMT_MEDIACODEC, AV_PIX_FMT_MEDIACODEC},
    {IMGFMT_VAAPI, AV_PIX_FMT_VAAPI},
    {IMGFMT_DXVA2, AV_PIX_FMT_DXVA2_VLD},
    {IMGFMT_D3D11, AV_PIX_FMT_D3D11},
    {IMGFMT_MMAL, AV_PIX_FMT_MMAL},
    {IMGFMT_CUDA, AV_PIX_FMT_CUDA},
    {IMGFMT_P010, AV_PIX_FMT_P010},
    {IMGFMT_DRMPRIME, AV_PIX_FMT_DRM_PRIME},
#if HAVE_VULKAN_INTEROP
    {IMGFMT_VULKAN, AV_PIX_FMT_VULKAN},
#endif

    {0, AV_PIX_FMT_NONE}
};

enum AVPixelFormat imgfmt2pixfmt(int fmt)
{
    if (fmt == IMGFMT_NONE)
        return AV_PIX_FMT_NONE;

    if (fmt >= IMGFMT_AVPIXFMT_START && fmt < IMGFMT_AVPIXFMT_END) {
        enum AVPixelFormat pixfmt = fmt - IMGFMT_AVPIXFMT_START;
        // Avoid duplicate format - each format must be unique.
        int mpfmt = pixfmt2imgfmt(pixfmt);
        if (mpfmt == fmt && av_pix_fmt_desc_get(pixfmt))
            return pixfmt;
        return AV_PIX_FMT_NONE;
    }

    for (int i = 0; conversion_map[i].fmt; i++) {
        if (conversion_map[i].fmt == fmt)
            return conversion_map[i].pix_fmt;
    }
    return AV_PIX_FMT_NONE;
}

int pixfmt2imgfmt(enum AVPixelFormat pix_fmt)
{
    if (pix_fmt == AV_PIX_FMT_NONE)
        return IMGFMT_NONE;

    for (int i = 0; conversion_map[i].pix_fmt != AV_PIX_FMT_NONE; i++) {
        if (conversion_map[i].pix_fmt == pix_fmt)
            return conversion_map[i].fmt;
    }

    int generic = IMGFMT_AVPIXFMT_START + pix_fmt;
    if (generic < IMGFMT_AVPIXFMT_END && av_pix_fmt_desc_get(pix_fmt))
        return generic;

    return 0;
}