Commit Graph

39 Commits

Author SHA1 Message Date
wm4 a7686e86ff video: remove swapped-endian image format aliases
Like the previous commit, this removes names only, not actual support
for these formats.
2014-11-05 01:52:20 +01:00
wm4 91471deecc video: remove aliases for some rarely referenced image formats
These formats are still supported; you just can't reference them via
defined constants directly. They are now handled via the generic
passthrough.

(If you want to use such a format, you either have to add the entry
back, or use AV_PIX_FMT_* directly.)
2014-11-05 01:52:19 +01:00
wm4 7333bc6536 video: passthrough unknown AVPixelFormats
This is a rather radical change: instead of maintaining a whitelist of
FFmpeg formats we support, we automatically support all formats.

In general, a format which doesn't have an explicit IMGFMT_* name will
be converted to a known format through libswscale, or will be handled
by code which can treat pixel formats in a generic way using the pixel
format description, like vo_opengl.

AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma
subsampled by 4 horizontally. Its component order is documented as
"Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples.
This means each pixel uses 1.5 bytes on average (4 pixels share one UV
sample, so 4 Y bytes + 2 chroma bytes = 6 bytes). FFmpeg can actually
handle this format with its
generic mechanism in an extremely awkward way, but it doesn't work for
us. Blacklist it, and hope no similar formats will be added in the
future.

Currently, the passed-through AV_PIX_FMT_* values are limited to a numeric
value of at most 500; higher values are not allowed, because there are some
fixed-size arrays that need to be able to contain any possible format (look
for IMGFMT_END dependencies).

We could have this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
through the whole codebase. But for now, this is better, because we
can compensate for formats missing in Libav or older FFmpeg versions,
like AV_PIX_FMT_RGB0 and others.
2014-11-05 01:52:19 +01:00
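A minimal sketch of what such a generic passthrough mapping can look like (IMGFMT_AVPIXFMT_START and the helper name are illustrative assumptions, not mpv's actual code; only the 500 cap and the UYYVYY411 blacklist come from the commit above):

    #include <libavutil/pixfmt.h>

    // Hypothetical layout: passed-through formats get their own IMGFMT range.
    #define IMGFMT_AVPIXFMT_START 1000   // illustrative base offset
    #define IMGFMT_AVPIXFMT_MAX   500    // the numeric cap mentioned above

    static int imgfmt_from_avformat(enum AVPixelFormat fmt)
    {
        if (fmt < 0 || fmt >= IMGFMT_AVPIXFMT_MAX)
            return 0;                        // out of range -> unsupported
        if (fmt == AV_PIX_FMT_UYYVYY411)
            return 0;                        // blacklisted special case
        return IMGFMT_AVPIXFMT_START + fmt;  // generic passthrough
    }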
wm4 ffe9c03502 video: get hwaccel flag from pixdesc 2014-11-05 01:41:34 +01:00
wm4 bb80204de2 video: clarify what IMGFMT_DXVA2 is 2014-10-26 02:35:38 +02:00
wm4 423a7de676 video: initial dxva2 support
Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
it yourself.
2014-10-25 19:25:22 +02:00
wm4 68ff8a0484 Move compat/ and bstr/ directory contents somewhere else
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.

The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
2014-08-29 12:31:52 +02:00
wm4 1a1e631ccd build: deal with endian mess
There is no standard mechanism for detecting endianness. Doing it at
compile time in a portable way is probably hard. Doing it properly
with a configure check is probably hard too. Using the endian
definitions in <sys/types.h> (usually includes <endian.h>, which is
not available everywhere) works under some circumstances, but the
previous commit broke it on OSX.

Ideally, all code would be endian independent, but that is not possible
due to the dependencies (such as FFmpeg, some video output APIs, some
audio output APIs).

Create a header osdep/endian.h, which contains various fallbacks.
Note that the last fallback uses libavutil; however, it's not clear
whether AV_HAVE_BIGENDIAN is a public symbol, or whether including
<libavutil/bswap.h> really makes it visible. And in fact we don't want
to pollute the namespace with libavutil definitions either. Thus it's
only the last fallback.
2014-07-10 00:58:56 +02:00
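A condensed sketch of such a fallback chain (the real osdep/endian.h may differ in detail; the branch order and the specific macros checked here are assumptions):

    // osdep/endian.h (sketch)
    #if defined(BYTE_ORDER) && defined(BIG_ENDIAN) && defined(LITTLE_ENDIAN)
    // Already provided by a system header such as <sys/types.h>/<endian.h>.
    #elif defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__)
    // Compiler-provided macros (gcc/clang).
    #define BYTE_ORDER    __BYTE_ORDER__
    #define BIG_ENDIAN    __ORDER_BIG_ENDIAN__
    #define LITTLE_ENDIAN __ORDER_LITTLE_ENDIAN__
    #else
    // Last fallback: libavutil. See the caveat above about AV_HAVE_BIGENDIAN.
    #include <libavutil/bswap.h>
    #define LITTLE_ENDIAN 1234
    #define BIG_ENDIAN    4321
    #if AV_HAVE_BIGENDIAN
    #define BYTE_ORDER BIG_ENDIAN
    #else
    #define BYTE_ORDER LITTLE_ENDIAN
    #endif
    #endif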
wm4 a9538e17ad video: synchronize mpv rgb pixel format names with ffmpeg names
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.

Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases, the name stays the same, even
though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
affects the user-visible names too; this might cause confusion.
2014-06-14 10:03:04 +02:00
wm4 6ab72f9760 video: automatically strip "le" and "be" suffix from pixel format names
These suffixes are annoying when they're redundant, so strip them
automatically. On little endian machines, always strip the "le" suffix,
and on big endian machines vice versa (although I don't think anyone
ever tried to run mpv on a big endian machine).

Since pixel format strings are returned by a certain function and we
can't just change static strings, use a trick to pass a stack buffer
transparently. But this also means the string can't be permanently
stored by the caller, so vf_dlopen.c has to be updated. There seems
to be no other case where this is done, though.
2014-06-14 09:58:48 +02:00
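A rough sketch of the stack-buffer trick (the function name and signature are illustrative, not mpv's actual helper):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    // The caller provides a stack buffer; the returned pointer is only valid
    // as long as that buffer is, which is why it must not be stored permanently.
    static const char *imgfmt_name_stripped(const char *name, bool host_is_be,
                                            char *buf, size_t buf_size)
    {
        const char *suffix = host_is_be ? "be" : "le";
        snprintf(buf, buf_size, "%s", name);
        size_t len = strlen(buf);
        if (len > 2 && strcmp(buf + len - 2, suffix) == 0)
            buf[len - 2] = '\0';  // e.g. "yuv420p10le" -> "yuv420p10" on LE hosts
        return buf;
    }

A caller passes e.g. `char buf[64]` and must copy the result if it needs to keep it, which is exactly the constraint that forced the vf_dlopen.c update.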
wm4 7b7e15a460 vdpau: move RGB surface management out of the VO
Integrate it with the existing surface allocator in vdpau.c. The changes
are a bit violent, because the vdpau API is so non-orthogonal: compared
to video surfaces, output surfaces use a different ID type, different
format types, and different API functions.

Also, introduce IMGFMT_VDPAU_OUTPUT for VdpOutputSurfaces wrapped in
mp_image, rather than hacking it. This is a bit cleaner.
2014-05-22 20:59:31 +02:00
wm4 186fd0311d video: change image format names, prefer mostly FFmpeg names
The most user visible change is that "420p" is now displayed as
"yuv420p". This is what FFmpeg uses (almost), and is also less confusing
since "420p" is often confused with "420 pixels vertical resolution".

In general, we return the FFmpeg pixel format name. We still use our own
old mechanism to keep a list of exceptions to provide compatibility for
a while.

Also, never return NULL for image format names. If the format is unset
(0/IMGFMT_NONE), return "none". If the format has no name (probably
never happens, FFmpeg seems to guarantee that a name is set), return
"unknown".
2014-04-14 20:51:27 +02:00
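The fallback logic boils down to something like this (sketched directly in terms of AVPixelFormat for brevity; mpv additionally consults its own table of compatibility name exceptions first):

    #include <libavutil/pixdesc.h>

    static const char *fmt_to_name(enum AVPixelFormat fmt)
    {
        if (fmt == AV_PIX_FMT_NONE)
            return "none";
        const char *name = av_get_pix_fmt_name(fmt);
        return name ? name : "unknown";   // probably never happens
    }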
wm4 6389507314 vdpau: remove legacy pixel formats
They were used by ancient libavcodec versions. This also removes the
need to distinguish vdpau image formats at all (since there is only
one), and some code can be simplified.
2014-03-17 18:21:11 +01:00
wm4 5ed24862c0 video: change image format from unsigned int to int in some places
Image formats used to be FourCCs, so unsigned int was better. But now
it's annoying, and the only practical difference is that unsigned int
takes more typing than int.
2014-03-17 18:19:57 +01:00
wm4 0112143fda Split mpvcore/ into common/, misc/, bstr/ 2013-12-17 02:39:45 +01:00
wm4 f30c2c99d1 mp_image: deal with FFmpeg PSEUDOPAL braindeath
We got a crash in libavutil when encoding with Y8 (GRAY8). The reason
was that libavutil was copying an Y8 image allocated by us, and expected
a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear
what PSEUDOPAL means, and it makes literally no sense at all. However,
libavutil expects a palette to be allocated even for some formats that
are not actually paletted, and it crashed when trying to access the
non-existent palette.
2013-12-01 20:51:38 +01:00
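For reference, detecting such formats and giving libavutil the dummy palette it expects looks roughly like this (a sketch, not mpv's actual workaround; AV_PIX_FMT_FLAG_PSEUDOPAL has since been deprecated in FFmpeg):

    #include <stdint.h>
    #include <libavutil/mem.h>
    #include <libavutil/pixdesc.h>

    // PSEUDOPAL formats are not actually paletted, but libavutil still expects
    // data[1] to point at an AVPALETTE_SIZE (1024 byte) palette buffer.
    static int maybe_alloc_dummy_palette(uint8_t *data[4], enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (desc && (desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL) && !data[1]) {
            data[1] = av_mallocz(AVPALETTE_SIZE);
            if (!data[1])
                return -1;
        }
        return 0;
    }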
wm4 60cd300558 vaapi: remove unused hw image formats, simplify
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.

Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
2013-11-29 14:19:29 +01:00
wm4 d6de87d1d3 video: make IMGFMT_RGB0 etc. exist even if libavutil doesn't support it
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code will break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
2013-11-05 22:05:23 +01:00
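The compatibility shim can be pictured like this (HAVE_AV_PIX_FMT_RGB0 is a hypothetical name for whatever the build system's feature test defines; it is not mpv's actual macro):

    #include <libavutil/pixfmt.h>

    #if HAVE_AV_PIX_FMT_RGB0               // hypothetical configure result
    #define MP_SELECTED_RGB0 AV_PIX_FMT_RGB0
    #else
    // Old Libav/FFmpeg: fall back to RGBA; the mp_imgfmt_desc entry for
    // IMGFMT_RGB0 then has its alpha flag cleared, so the rest of the player
    // keeps treating the surface as opaque.
    #define MP_SELECTED_RGB0 AV_PIX_FMT_RGBA
    #endif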
Stefano Pigozzi a9cb2dc1b8 video: add vda decode support (with hwaccel) and direct rendering
Decoding H.264 using Video Decode Acceleration previously used the custom
'vda_h264_dec' decoder in FFmpeg.

The Good: This new implementation has some advantages over the previous one:

 - It works with Libav: vda_h264_dec never got into Libav since they prefer
   client applications to use the hwaccel API.

 - It is way more efficient: in my tests this implementation yields a
   reduction in CPU usage of roughly 50% compared to using `vda_h264_dec` and
   ~65-75% compared to h264 software decoding. This is mainly because
   `vo_corevideo` was adapted to perform direct rendering of the
   `CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.

The Bad:
  - `vo_corevideo` is required to use VDA decoding acceleration.
  - only works with versions of ffmpeg/libav new enough (needs frame
    refcounting). That is FFmpeg 2.0+ and Libav's git master currently.

The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture.
On the one hand this keeps the code simple, since Apple's OpenGL implementation
supports this format out of the box. It would be nice to support other
output image formats and choose the best format depending on the input, or at
least to make it configurable. My tests indicate that CPU usage actually
increases with a 420p IMGFMT output, which is not what I would have expected.

NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a problem,
since we rewrap the CVPixelBufferRef in our mp_image, which properly forwards
CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
CVPixelBufferRef.

So, for FFmpeg use something more recent than `b3d63995`. For Libav, the
patch was posted to the dev ML in July and has been in review since then,
apparently because the proposed fix is rather hacky.
2013-08-22 12:13:30 +02:00
wm4 2827295703 video: add vaapi decode and output support
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.

This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).

On the other hand, some features not present in the original patch were
added, like screenshot support.

VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and probably rather slow. In some cases, only one of OSD
and subtitles can be shown at a time (because OSD is drawn first, OSD
is preferred).

Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.

Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of being
accepted.)

Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on a nvidia system; however this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
2013-08-12 01:12:02 +02:00
Stefano Pigozzi 406241005e core: move contents to mpvcore (2/2)
Followup commit. Fixes all the files references.
2013-08-06 22:52:31 +02:00
wm4 5accc5e7c1 vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because it's written against the "old" libavcodec
vdpau pseudo-decoder (e.g. "h264_vdpau").

Add support for the "new" libavcodec vdpau API. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)

Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.

Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)

Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.

Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size, instead of locking them to the size the VO was
configured with. The non-hwdec code also has slightly different
allocation behavior now.

Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
2013-07-28 19:25:07 +02:00
wm4 a1fd8c6953 img_format: comment on some pixel formats 2013-07-18 13:49:33 +02:00
wm4 fcdb681822 img_format: add a mask for color class
Using the term "color class" to avoid confusion with the other
colorspace related concepts.

Also get rid of MP_IMGFLAG_FMT_MASK, since it was unused.
2013-07-18 13:49:28 +02:00
wm4 9e0b68a385 video: add XYZ support
Needed for the ffmpeg j2k decoder.
2013-05-01 16:26:45 +02:00
wm4 90efe7cf48 demux_mf: support .xbm
And support the PIX_FMT_MONOWHITE pixel format. (This is really weird:
unlike PIX_FMT_MONOBLACK, it uses white pixels. I have no idea why
libavcodec doesn't just convert the pixel format on the fly, instead of
bothering everyone with really special pixel formats.)
2013-02-24 16:51:29 +01:00
wm4 1800761a65 mp_image: remove mp_image.bpp
This field contained the "average" bit depth per pixel. It serves
almost no purpose anymore; only vo_opengl_old still uses it in order to
allocate a buffer that is shared between all planes. Remove it.
2013-01-13 20:04:13 +01:00
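Should the average bit depth ever be needed again, libavutil can compute it from the format descriptor; a trivial sketch:

    #include <libavutil/pixdesc.h>

    static int avg_bits_per_pixel(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        // e.g. 12 for AV_PIX_FMT_YUV420P: 8 bits luma + (8+8)/4 bits chroma.
        return desc ? av_get_bits_per_pixel(desc) : 0;
    }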
wm4 61e59cd92c imgfmt: add more ffmpeg pixel formats
Most of these probably don't have much actual use, but at least allow
images of these formats to be handed to swscale, should any decoder
output them.
2013-01-13 20:04:13 +01:00
wm4 4950513ffe img_format: change meaning of MP_IMGFLAG_PLANAR
This used to mean that there is more than one plane. This is not very
useful: IMGFMT_Y8 was not considered planar, but it's just a Y plane,
and could be treated the same as actual planar formats. On the other
hand, IMGFMT_NV12 is partially packed, and usually requires special
handling, but was considered planar.

Change its meaning. Now the flag is set if the format has a separate
plane for each component. IMGFMT_Y8 is now planar, IMGFMT_NV12 is not.
As an odd special case, IMGFMT_MONO (1 bit per pixel) is like a planar
RGB format with a single plane.
2013-01-13 20:04:13 +01:00
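Using libavutil's pixel format descriptors, the new meaning of the flag can be computed roughly like this (a sketch under that assumption; mpv's actual code differs):

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // "Planar" in the new sense: every component has its own plane.
    // Y8 (1 component, 1 plane) qualifies; NV12 (U and V share plane 1) does not.
    static bool is_planar(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
        if (!d || d->nb_components == 0)
            return false;
        for (int i = 0; i < d->nb_components; i++) {
            for (int j = 0; j < i; j++) {
                if (d->comp[i].plane == d->comp[j].plane)
                    return false;   // two components share a plane
            }
        }
        return true;
    }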
wm4 717d904bbc mp_image: add mp_image_crop()
Actually stolen from draw_bmp.c.
2013-01-13 20:04:12 +01:00
wm4 5830d639b8 video: remove img_format compat hacks
Remove the strange things the old mp_image_setfmt() code did to the
image format parameters. This includes setting chroma shift to 31 for
gray (Y8) formats and more.

Y8 + vo_opengl_old didn't actually work for unknown reasons (regression
in this branch). Fix this. The difference is that Y8 is now interpreted
as gray RGB (LUMINANCE texture) instead of involving YUV (and levels)
conversion.

Get rid of distinguishing RGB and BGR. There wasn't really any good
reason for this.

Remove mp_get_chroma_shift() and IMGFMT_IS_YUVP16*(). mp_imgfmt_desc
gives the same information and more.
2013-01-13 20:04:11 +01:00
wm4 3791c226b7 draw_bmp: better way to find 444 format
Even though #ifdef ACCURATE is removed, the result should be about the
same. The fallback is only used by packed YUV formats (YUYV, NV12), and
doing 16 bit for them instead of 8 bit is not useful.

A side effect is that Y8 (gray) is not converted when drawing subs, and for
alpha formats, the alpha plane is not removed. This means the number of
planes after upsampling can be 1-4 (1: gray, 2: gray+alpha, 3: planar,
4: planar+alpha). The code has to be adjusted accordingly to work on the
color planes only. Also remove the workaround for the chroma shift 31
hack.
2013-01-13 20:04:11 +01:00
wm4 8751a0e261 video: decouple internal pixel formats from FourCCs
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)

Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers, that output raw video, identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.cfg entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)

The TV code has not been tested.

Some corrective changes regarding endian and other image format flags
creep in.
2013-01-13 20:04:11 +01:00
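For context, a FourCC of this kind is just four characters packed into one integer; the macro in img_fourcc.h looks roughly like this (the exact definition may differ):

    #include <stdint.h>

    #define MP_FOURCC(a, b, c, d)                         \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) |           \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

    // The old IMGFMT_YV12 was literally the string 'YV12' as a number:
    #define MP_FOURCC_YV12 MP_FOURCC('Y', 'V', '1', '2')  // 0x32315659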
wm4 aa6ba6372c mp_image: change how palette is handled
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats
are supposed to be palettized:

    IMGFMT_BGR8
    IMGFMT_RGB8
    IMGFMT_BGR4_CHAR
    IMGFMT_RGB4_CHAR
    IMGFMT_BGR4
    IMGFMT_RGB4

Of these, only BGR8 and RGB8 are actually treated as palettized in some
way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB
formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy
hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale
does not support AV_PIX_FMT_PAL8 output in the first place.)

Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps
to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
without any palette use. Enabling it in vo_x11 or using it as vf_scale
input seems to give correct results.
2013-01-13 20:04:11 +01:00
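The relevant part of the format mapping then looks roughly like this (the table layout and the numeric IMGFMT values are illustrative only):

    #include <libavutil/pixfmt.h>

    enum { IMGFMT_PAL8 = 1, IMGFMT_BGR8 };   // illustrative values only

    static const struct { int imgfmt; enum AVPixelFormat avfmt; } fmt_map[] = {
        {IMGFMT_PAL8, AV_PIX_FMT_PAL8},  // the only truly palettized format
        {IMGFMT_BGR8, AV_PIX_FMT_RGB8},  // packed RGB 3:3:2, no palette involved
    };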
wm4 00653a3eb0 video: use libavutil pixel format descriptors
Replace the internal pixel format stuff with code that queries the
libavutil list of pixel format descriptors.

Trying to map IMGFMT_IS_RGB() etc. turned out extremely hacky.
2013-01-13 20:04:10 +01:00
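A small example of what querying libavutil's descriptor table looks like (current libavutil names; mpv's own mp_imgfmt_desc derives additional flags on top of this):

    #include <stdio.h>
    #include <libavutil/pixdesc.h>

    int main(void)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(AV_PIX_FMT_YUV420P);
        printf("%s: %d components, chroma shift %dx%d, %s\n",
               d->name, d->nb_components, d->log2_chroma_w, d->log2_chroma_h,
               (d->flags & AV_PIX_FMT_FLAG_RGB) ? "RGB" : "not RGB");
        return 0;
    }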
Stephen Hutchinson c082240c62 video: add support for 12 and 14 bit YUV pixel formats
Based on a patch by qyot27. Add the missing parts in mp_get_chroma_shift(),
which allow allocation of such images, and which make vo_opengl
automatically accept the new formats. Change the IMGFMT_IS_YUVP16_LE/BE
macros to properly report IMGFMT_444P14 as supported: this pixel format
has the highest numerical bit width identifier (0x55), which is not
covered by the mask ~0xfc. Remove 1 bit from the mask (makes it 0xf8) so
that IMGFMT_IS_YUVP16(IMGFMT_444P14) is 1. This is slightly risky, as
the organization of the image format IDs (actually FourCCs + mplayer
internal IDs) is messy at best, but it should be ok.
2012-12-03 21:08:51 +01:00
wm4 e5f7976000 video: add IMGFMT_Y16/PIX_FMT_GRAY16
This pixel format is sometimes used with yuv4mpeg.

vo_direct3d used its own IMGFMT_Y16 internally for some reason.

vo_opengl, vo_opengl_old, and vo_direct3d should be able to display
this pixel format natively.
2012-11-14 11:50:02 +01:00
wm4 4873b32c59 Rename directories, move files (step 2 of 2)
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.

The two commits are separate, because git is bad at tracking renames
and content changes at the same time.

Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
2012-11-12 20:08:18 +01:00
wm4 d4bdd0473d Rename directories, move files (step 1 of 2) (does not compile)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.

Renames the following directories:
    libaf -> audio/filter
    libao2 -> audio/out
    libvo -> video/out
    libmpdemux -> demux

Split libmpcodecs:
    vf* -> video/filter
    vd*, dec_video.* -> video/decode
    mp_image*, img_format*, ... -> video/
    ad*, dec_audio.* -> audio/decode

libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.

Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time mplayer used
ffmpeg internals.

sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).

Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
contains all helper and common code.
2012-11-12 20:06:14 +01:00