Commit Graph

58 Commits

Author SHA1 Message Date
wm4 2aff6f8c95 vo_direct3d: remove non-working nv12 shader support
It never worked. It relied on some obscure texture format to provide the
equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware ever seemed to
report support for it. No idea what the correct way to do this is.
On D3D11 it exists, of course.

(Actually I'd like to remove the whole VO.)
2017-06-30 18:13:01 +02:00
wm4 1ad036a2ef video: get rid of swapped packed YUV
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
2017-06-30 18:01:29 +02:00
wm4 300f6a3344 video: drop some more IMGFMT aliases
For vo_opengl and vo_direct3d, these are supported in a generic way.

For vf_vapoursynth, we could probably map its VSFormat struct in a
generic way, but for now do some bullshit.

vf_eq.c actually loses support for these formats. We could add generic
support too (anything that has 8 bit planes will work), but why bother.
The filter is deprecated anyway.
2017-06-29 21:30:10 +02:00
wm4 af5a71f52e video: drop some unused IMGFMT aliases
These formats are supported in a generic way.

To get rid of IMGFMT_NV21, remove support from vo_direct3d.c completely.
2017-06-29 20:56:29 +02:00
wm4 5ea851feae video/fmt-conversion, img_format: change license to LGPL
The problem with fmt-conversion.h is that "lucabe", who disagreed with
LGPL, originally wrote it. But it was actually rewritten by "reimar"
later. The original switch statement was replaced with a lookup table.
No code other than the imgfmt2pixfmt() function signature survives.
Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping
them, can be copyrighted.

So changing the license should be fine, because reimar and all other
authors involved with the new code agreed to LGPL.

We also don't consider format pairs added later as copyrightable.

(The direct-mapping idea mentioned in the "Copyright" file seems
attractive, and I might implement it later anyway.)

Likewise, there might be some format names added to img_format.h, which
are not covered by relicensing agreements. These all affect "later"
additions, and they follow either the FFmpeg PIXFMT naming or some other
pre-existing logic, so this should be fine.
2017-06-18 15:15:07 +02:00
wm4 456c2ad222 img_format: drop unused aliases
vo_opengl uses those automatically via pixdesc.
2017-06-18 15:15:07 +02:00
wm4 0754cbc83e d3d: add support for new libavcodec hwaccel API
Unfortunately quite a mess, in particular due to the need to have some
compatibility with the old API. (The old API will be supported only in
the short term.)
2017-06-08 21:51:25 +02:00
wm4 3eceac2eab Remove compatibility things
Possible with bumped FFmpeg/Libav.

These are just the simple cases.
2016-12-07 19:53:11 +01:00
Philip Langdale 585c5c34f1 vo_opengl: hwdec_cuda: Support P016 output surfaces
The latest 375.xx nvidia drivers add support for P016 output
surfaces. In combination with an ffmpeg change to return those
surfaces, we can display them.

The bulk of the work is related to knowing which format you're
dealing with at the right time. Once you know, it's straightforward.
2016-11-22 20:19:58 +01:00
wm4 73a5bde518 img_format: remove some unneeded format definitions
They're still supported; they just have no IMGFMT_ alias.
2016-09-28 14:21:32 +02:00
Philip Langdale b83bfea05d hwdec_cuda: Rename config variable to be more consistent
'cuda-gl' isn't right - you can turn this on without any GL and
get some non-zero benefit (with the cuda-copy hwaccel). So
'cuda-hwaccel' seems more consistent with everything else.
2016-09-16 14:26:30 +02:00
Philip Langdale 2048ad2b8a hwdec/opengl: Add support for CUDA and cuvid/NvDecode
Nvidia's "NvDecode" API (up until recently called "cuvid" is a cross
platform, but nvidia proprietary API that exposes their hardware
video decoding capabilities. It is analogous to their DXVA or VDPAU
support on Windows or Linux but without using platform specific API
calls.

As a rule, you'd rather use DXVA or VDPAU as these are more mature
and well supported APIs, but on Linux, VDPAU is falling behind the
hardware capabilities, and there's no sign that nvidia are making
the investments to update it.

Most concretely, this means that there is no VP8/9 or HEVC Main10
support in VDPAU. On the other hand, NvDecode does support VP8/9 and
partially supports HEVC Main10 (more on that below).

ffmpeg already has support in the form of the "cuvid" family of
decoders. Due to the design of the API, it is best exposed as a full
decoder rather than an hwaccel. As such, there are decoders like
h264_cuvid, hevc_cuvid, etc.

These decoders support two output paths today - in both cases, NV12
frames are returned, either in CUDA device memory or regular system
memory.

In the case of the system memory path, the decoders can be used
as-is in mpv today with a command line like:

mpv --vd=lavc:h264_cuvid foobar.mp4

Doing this will take advantage of hardware decoding, but the cost
of the memcpy to system memory adds up, especially for high
resolution video (4K etc).

To avoid that, we need an hwdec that takes advantage of CUDA's
OpenGL interop to copy from device memory into OpenGL textures.

That is what this change implements.

The process is relatively simple, as only basic device context
acquisition needs to be done by us - the CUDA buffer pool is managed
by the decoder - thankfully.

The hwdec looks a bit like the vdpau interop one - the hwdec
maintains a single set of plane textures and each output frame
is repeatedly mapped into these textures to pass on.

The frames are always in NV12 format, at least until 10-bit output
support emerges.

The only slightly interesting part of the copying process is that the
CUDA/OpenGL interop works by associating PBOs with the textures, so we
need to create one for each of the textures.
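
Not part of this change, just an illustration: a minimal sketch of the
PBO-based copy described above, using the CUDA driver API's OpenGL interop.
The function name, the plane chosen (NV12 luma) and the lack of error
handling are all simplifications; the real hwdec code is structured
differently.

    #include <GL/gl.h>
    #include <cuda.h>
    #include <cudaGL.h>

    // Copy one decoded plane from CUDA device memory into a GL texture,
    // going through a pixel buffer object registered with CUDA.
    static void copy_plane_via_pbo(GLuint pbo, GLuint tex, CUdeviceptr src,
                                   size_t src_pitch, int width, int height)
    {
        CUgraphicsResource res;
        cuGraphicsGLRegisterBuffer(&res, pbo,
                                   CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);
        cuGraphicsMapResources(1, &res, 0);

        CUdeviceptr dst;
        size_t dst_size;
        cuGraphicsResourceGetMappedPointer(&dst, &dst_size, res);

        // Device-to-device 2D copy; the PBO holds the plane tightly packed.
        CUDA_MEMCPY2D cpy = {
            .srcMemoryType = CU_MEMORYTYPE_DEVICE,
            .srcDevice     = src,
            .srcPitch      = src_pitch,
            .dstMemoryType = CU_MEMORYTYPE_DEVICE,
            .dstDevice     = dst,
            .dstPitch      = width,
            .WidthInBytes  = width,
            .Height        = height,
        };
        cuMemcpy2D(&cpy);

        cuGraphicsUnmapResources(1, &res, 0);
        cuGraphicsUnregisterResource(res);

        // Upload from the PBO into the texture.
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RED, GL_UNSIGNED_BYTE, NULL);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }

In practice the PBO would be registered once at init time and only
mapped/unmapped per frame.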

TODO Items:
* I need to add a download_image function for screenshots. This
  would do the same copy to system memory that the decoder's
  system memory output does.
* There are items to investigate on the ffmpeg side. There appears
  to be a problem with timestamps for some content.

Final note: I mentioned HEVC Main10. While there is no 10-bit output
support, NvDecode can return dithered 8-bit NV12, so you can still take
advantage of the hardware acceleration.

This particular mode requires compiling ffmpeg with a modified
header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg
yet.

Usage:

You will need to specify vo=opengl and hwdec=cuda.

Note that hwdec=auto will probably not work as it will try to use
vdpau first.

mpv --hwdec=cuda --vo=opengl foobar.mp4

If you want to use filters that require frames in system memory,
just use the decoder directly without the hwdec, as documented
above.
2016-09-08 16:06:12 +02:00
wm4 fd82e14888 build: merge d3d11va and dxva2 hwaccel checks
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
2016-05-11 15:40:31 +02:00
wm4 bda111018c video: add IMGFMT_P010 alias
Gets rid of some silliness, and might be useful in the future.
2016-04-29 22:38:54 +02:00
Kevin Mitchell a7110862c8 vd_lavc: add d3d11va hwdec
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
API. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
2016-03-30 09:01:27 -07:00
wm4 3a015b9ec7 video: remove some useless old RGB formats
Some VOs had support for these - remove them.

Typically, these formats will have only some use in cases where using
RGB software conversion with libswscale is faster than letting the
VO/GPU do it (i.e. almost never). For the sake of testing this case,
keep IMGFMT_RGB565. This is the least messy format, because it has no
padding/alpha bits with unknown semantics.

Note that decoding to these formats still works. We'll let libswscale
repack the data to whatever the VO in use can take.
2016-01-25 10:43:35 +01:00
wm4 3973a953df sub: find GBRP format automatically when rendering to RGB
This removes the need to define IMGFMT_GBRAP, which fixes compilation
with the current Libav release.

This also makes it automatically pick up a GBRP format with the same bit
width. (Unfortunately, it seems libswscale does not support conversion
to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing
precision for video covered by subtitles in cases where this code is used.)

Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
Whether this is good or bad, it fixes behavior with alpha. Although I'm
not sure if the alpha range is really correct ([0,2^16-1] vs.
[0,255*256]). Keep in mind that libswscale doesn't even agree with the
way we do it.
2015-12-24 16:42:21 +01:00
wm4 6bec6ac558 sub: better alpha blending when rendering to alpha surfaces
This actually treats destination alpha correctly, and gives much better
results than before. I don't know if this is perfectly correct yet,
though. A slight difference from vo_opengl's behavior suggests it might
not be.
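
For reference (not taken from the commit itself): blending onto a
destination that has its own alpha channel normally boils down to the
Porter-Duff "over" operator on premultiplied values; the message doesn't
say which exact variant the new code uses.

    // Porter-Duff "over" with premultiplied alpha, per pixel.
    // src/dst hold premultiplied RGB in [0,1]; index 3 is alpha.
    static void blend_over(float out[4], const float src[4], const float dst[4])
    {
        for (int i = 0; i < 3; i++)
            out[i] = src[i] + dst[i] * (1.0f - src[3]);
        // Destination alpha is composited with the same formula; handling
        // it at all is the "treats destination alpha correctly" part.
        out[3] = src[3] + dst[3] * (1.0f - src[3]);
    }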

Note that this does not affect VOs with true alpha support. vo_opengl
does not use this code at all, and does the alpha calculations in OpenGL
instead.
2015-12-24 14:43:23 +01:00
wm4 1dd7b7bddc video: remove VDA support
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA
unusable with older FFmpeg versions in some newer mpv releases.

VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
2015-09-28 22:03:14 +02:00
Sebastien Zwickert 31b5a211f4 hwdec: add VideoToolbox support
VDA is being deprecated in OS X 10.11, so this is needed to keep hwdec working.
The code needs libavcodec support, which was added recently (to FFmpeg git;
Libav doesn't support it).

Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
2015-08-05 17:47:30 +02:00
Marcin Kurczewski f43017bfe9 Update license headers
Signed-off-by: wm4 <wm4@nowhere>
2015-04-13 12:10:01 +02:00
wm4 8fff125422 RPI support
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.

Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.

This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.

vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.

This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hardware supports only up to the High
profile, and will e.g. return garbage for Hi10P video.

This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully will not be the case
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding). So I can live with this compromise
of having different defaults depending on the platform.

Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
2015-03-29 16:09:56 +02:00
wm4 be5994a781 video: work around libswscale for PNG pixel formats
The intention is that we can test vo_opengl with high bit depth PNGs
better. This throws libswscale completely out of the loop; it was
previously needed to convert from big endian to little endian.

Also apply a minimal cleanup to fmt-conversion.c (unrelated).
2015-02-06 23:22:16 +01:00
wm4 a0caadd512 vo_opengl: handle grayscale input better, add YA16 support
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.

Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
2015-01-21 19:29:18 +01:00
wm4 a7686e86ff video: remove swapped-endian image format aliases
Like the previous commit, this removes names only, not actual support
for these formats.
2014-11-05 01:52:20 +01:00
wm4 91471deecc video: remove aliases for some rarely referenced image formats
These formats are still supported; you just can't reference them via
defined constants directly. They are now handled via the generic
passthrough.

(If you want to use such a format, you either have to add the entry
back, or use AV_PIX_FMT_* directly.)
2014-11-05 01:52:19 +01:00
wm4 7333bc6536 video: passthrough unknown AVPixelFormats
This is a rather radical change: instead of maintaining a whitelist of
FFmpeg formats we support, we automatically support all formats.

In general, a format which doesn't have an explicit IMGFMT_* name will
be converted to a known format through libswscale, or will be handled
by code which can treat pixel formats in a generic way using the pixel
format description, like vo_opengl.
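
A rough sketch (not mpv's actual code) of what "treating pixel formats in
a generic way" can look like with libavutil's descriptor API; the helper
name is made up, only av_pix_fmt_desc_get() and the flags are real:

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // Decide whether a format can be handled generically, or whether it
    // should be converted to a known format through libswscale.
    static bool can_handle_generically(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (!desc)
            return false;
        // Hardware, bitstream and palettized formats aren't plain pixel data.
        if (desc->flags & (AV_PIX_FMT_FLAG_HWACCEL |
                           AV_PIX_FMT_FLAG_BITSTREAM |
                           AV_PIX_FMT_FLAG_PAL))
            return false;
        // Everything else describes its component layout and chroma
        // subsampling through the descriptor, so generic code can use it.
        return desc->nb_components > 0;
    }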

AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma
subsampling by 4 in both directions. Its component order is documented
as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples.
This means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so
4 bytes + 2 bytes). FFmpeg can actually handle this format with its
generic mechanism in an extremely awkward way, but it doesn't work for
us. Blacklist it, and hope no similar formats will be added in the
future.

Currently, the AV_PIX_FMT_*s allowed are limited to numeric values up to
500. Higher values are not allowed, and there are some fixed-size arrays
that need to be able to contain any possible format (look for IMGFMT_END
dependencies).

We could make this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
throughout the whole codebase. But for now, this is better, because we
can compensate for formats missing in Libav or older FFmpeg versions,
like AV_PIX_FMT_RGB0 and others.
2014-11-05 01:52:19 +01:00
wm4 423a7de676 video: initial dxva2 support
Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
it yourself.
2014-10-25 19:25:22 +02:00
Stefano Pigozzi 6aac17cebb vda: only support the new hwaccel 1.2 API (remove old code)
Since the new hwaccel API is now merged in ffmpeg's stable release, we can
finally remove support for the old API.

I pretty much kept lu_zero's new code unchanged and just added some error
printing (that we had with the old glue code) to make the life of our users
less miserable.
2014-08-01 10:38:18 +02:00
wm4 a9538e17ad video: synchronize mpv rgb pixel format names with ffmpeg names
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.

Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases a name keeps existing, but now
refers to the opposite byte order, e.g. RGB8->BGR8 and BGR8->RGB8; this
affects the user-visible names too and might cause confusion.
2014-06-14 10:03:04 +02:00
Luca Barbato e0e79a2e7e vda: Hwaccel 1.2 support
Use the new context and the default functions provided.
2014-05-12 12:59:16 +02:00
wm4 3db3653227 video: fix FFmpeg or Libav being a special snowflake 2014-03-16 16:23:12 +01:00
wm4 3ec7f528c4 vd_lavc: remove compatibility crap
All this code was needed for compatibility with very old libavcodec
versions only (such as Libav 9).

Includes some now-possible simplifications too.
2014-03-16 13:19:19 +01:00
wm4 64278128d8 video/fmt-conversion.c: remove unknown pixel format messages
This removes the messages printed on unknown pixel formats.
Passing a mp_log to them would be too messy. Actually, this is a good
change, because in the past we often had trouble with these messages
being printed too often (causing terminal spam etc.), and printing
warnings or error messages on the caller side is much cleaner.

vd_lavc.c had a change earlier to print an error message if a decoder
outputs an unsupported pixel format.
2013-12-21 20:50:11 +01:00
wm4 2c08bf1bd7 Reduce recursive config.h inclusions in headers
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
2013-12-18 17:12:21 +01:00
wm4 0112143fda Split mpvcore/ into common/, misc/, bstr/ 2013-12-17 02:39:45 +01:00
wm4 597b8a3550 Take care of some libavutil deprecations, drop support for FFmpeg 1.0
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks in certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get
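
Illustrative before/after for the renames listed above (the pixel format
chosen is just an example):

    #include <libavutil/pixdesc.h>

    static const AVPixFmtDescriptor *example_desc(void)
    {
        // Before (FFmpeg 1.0 era):
        //   enum PixelFormat fmt = PIX_FMT_YUV420P;
        //   return &av_pix_fmt_descriptors[fmt];

        // After:
        enum AVPixelFormat fmt = AV_PIX_FMT_YUV420P;
        return av_pix_fmt_desc_get(fmt);
    }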

This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.

Mostly untested; it does compile with Libav 9.9.
2013-11-29 17:39:57 +01:00
wm4 60cd300558 vaapi: remove unused hw image formats, simplify
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.

Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
2013-11-29 14:19:29 +01:00
wm4 d6de87d1d3 video: make IMGFMT_RGB0 etc. exist even if libavutil doesn't support it
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code will break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
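
A hypothetical sketch of that fallback; the config macro is a placeholder,
not mpv's actual build check:

    #include <libavutil/pixfmt.h>

    static enum AVPixelFormat map_rgb0(void)
    {
    #if HAVE_AV_PIX_FMT_RGB0   /* placeholder: libavutil provides the format */
        return AV_PIX_FMT_RGB0;
    #else
        /* Fall back to RGBA; the caller must then clear the alpha bit in
         * its own format descriptor, as described above. */
        return AV_PIX_FMT_RGBA;
    #endif
    }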
2013-11-05 22:05:23 +01:00
Stefano Pigozzi 37388ebb0e configure: uniform the defines to #define HAVE_xxx (0|1)
The configure followed 5 different conventions for defines because the
next guy always wanted to introduce a new, better way to unify them[1].
For a hypothetical feature 'hurr' you could have had:

  * #define HAVE_HURR 1   / #undef HAVE_DURR
  * #define HAVE_HURR     / #undef HAVE_DURR
  * #define CONFIG_HURR 1 / #undef CONFIG_DURR
  * #define HAVE_HURR 1   / #define HAVE_DURR 0
  * #define CONFIG_HURR 1 / #define CONFIG_DURR 0

All is now uniform and uses:
  * #define HAVE_HURR 1
  * #define HAVE_DURR 0

We like defining to 0 as opposed to `undef` because it can help spot typos
and is very helpful when doing big reorganizations in the code.
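
A small illustration of the typo-spotting point, reusing the hypothetical
feature names from above:

    #include <stdio.h>

    #define HAVE_HURR 1
    #define HAVE_DURR 0

    int main(void)
    {
        // With the 0/1 convention, checks use #if. A misspelled macro
        // (e.g. HAVE_HUR) is simply undefined, which -Wundef turns into a
        // warning, instead of silently disabling the feature like #ifdef.
    #if HAVE_HURR
        printf("hurr enabled\n");
    #endif
    #if HAVE_DURR
        printf("durr enabled\n");
    #endif
        return 0;
    }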

[1]: http://xkcd.com/927/ related
2013-11-03 21:59:54 +01:00
Stefano Pigozzi a9cb2dc1b8 video: add vda decode support (with hwaccel) and direct rendering
Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec'
decoder in FFmpeg.

The Good: This new implementation has some advantages over the previous one:

 - It works with Libav: vda_h264_dec never got into Libav since they prefer
   client applications to use the hwaccel API.

 - It is way more efficient: in my tests this implementation yields a
   reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec` and
   ~65-75% compared to h264 software decoding. This is mainly because
   `vo_corevideo` was adapted to perform direct rendering of the
   `CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.

The Bad:
  - `vo_corevideo` is required to use VDA decoding acceleration.
  - only works with versions of ffmpeg/libav new enough (needs refcounted
    frames). That is FFmpeg 2.0+ and Libav's git master currently.

The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture.
On the one hand this makes the code simple since Apple's OpenGL implementation
actually supports this out of the box. It would be nice to support other
output image formats and choose the best format depending on the input, or at
least make it configurable. My tests indicate that CPU usage actually
increases with a 420p IMGFMT output, which is not what I would have expected.

NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a problem
since we rewrap the CVPixelBufferRef in our mp_image, which properly forwards
CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
CVPixelBufferRef.

So, for FFmpeg use something more recent than `b3d63995`. For Libav the patch
was posted to the dev ML in July and has been in review since then; apparently,
the proposed fix is rather hacky.
2013-08-22 12:13:30 +02:00
wm4 2827295703 video: add vaapi decode and output support
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.

This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).

On the other hand, some features not present in the original patch were
added, like screenshot support.

VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and should be rather slow. In some cases, only one of OSD
and subtitles can be shown at the same time (because OSD is drawn first,
OSD is preferred).

Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.

Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of being
accepted.)

Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on an Nvidia system; however, this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
2013-08-12 01:12:02 +02:00
Stefano Pigozzi 406241005e core: move contents to mpvcore (2/2)
Followup commit. Fixes all the files references.
2013-08-06 22:52:31 +02:00
wm4 5accc5e7c1 vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").

Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without a deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)

Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.

Some details might be handled differently. Display preemption in particular
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be
driver-specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)

Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.

Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size instead of locking them to what the VO was configured with.
The non-hwdec code also has slightly different allocation behavior now.

Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
2013-07-28 19:25:07 +02:00
wm4 0d1cd116d7 Fix compilation with Libav 2013-05-01 17:02:06 +02:00
wm4 9e0b68a385 video: add XYZ support
Needed for the ffmpeg j2k decoder.
2013-05-01 16:26:45 +02:00
wm4 90efe7cf48 demux_mf: support .xbm
And support the PIX_FMT_MONOWHITE pixel format. (This is really weird:
unlike PIX_FMT_MONOBLACK, it uses white pixels. I have no idea why
libavcodec doesn't just convert the pixel format on the fly, instead of
bothering everyone with really special pixel formats.)
2013-02-24 16:51:29 +01:00
wm4 4a2e4b684a build: make it work on somewhat older ffmpeg versions
Tested with n0.10.4. All these version checks are rather tricky,
because Libav and FFmpeg change the same thing at slightly different
versions.
2013-01-31 17:42:21 +01:00
wm4 52d1f3cc53 options: also accept ffmpeg pixel format names
Options that take pixel format names now also accept ffmpeg names.
mpv internal names are preferred. We leave this undocumented
intentionally, and it may be removed once libswscale stops printing
ffmpeg pixel format names to the terminal (or if we stop passing the
SWS_PRINT_INFO flag to it, which makes it print these).

(We insist on keeping the mpv-specific names instead of dropping them
in favor of ffmpeg's names due to NIH, and also because ffmpeg always
appends the endian suffixes "le" and "be".)
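
A sketch of the lookup order this describes; lookup_mpv_imgfmt() and
imgfmt_from_avformat() are stand-ins for mpv's internal helpers, while
av_get_pix_fmt() is the real libavutil call:

    #include <libavutil/pixdesc.h>
    #include <libavutil/pixfmt.h>

    int lookup_mpv_imgfmt(const char *name);            // hypothetical
    int imgfmt_from_avformat(enum AVPixelFormat fmt);   // hypothetical

    // Resolve an option value: prefer the mpv name, otherwise fall back
    // to the ffmpeg name.
    static int imgfmt_from_option(const char *name)
    {
        int fmt = lookup_mpv_imgfmt(name);
        if (fmt)
            return fmt;
        enum AVPixelFormat avfmt = av_get_pix_fmt(name);
        if (avfmt == AV_PIX_FMT_NONE)
            return 0;   // unknown name
        return imgfmt_from_avformat(avfmt);
    }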
2013-01-17 16:39:26 +01:00
wm4 61e59cd92c imgfmt: add more ffmpeg pixel formats
Most of these probably don't have much actual use, but at least they allow
images of these formats to be handed to swscale, should any decoder
output them.
2013-01-13 20:04:13 +01:00