mirror of https://github.com/mpv-player/mpv synced 2024-12-30 11:02:10 +00:00
Commit Graph

69 Commits

Author SHA1 Message Date
wm4
c61520b6bd img_format: drop some unused things 2017-06-30 18:38:23 +02:00
wm4
1dffcb0167 vo_opengl: rely on FFmpeg pixdesc a bit more
Add something that allows us to extract the component order from various
RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.

It introduces a new struct mp_regular_imgfmt, that hopefully will
eventually replace struct mp_imgfmt_desc. The latter is still needed by
a lot of code though, especially generic code. Also vo_opengl still uses
the old one, so this commit is sort of incomplete.

Due to its genericness, it's also possible that this commit introduces
rendering bugs, or accepts formats it shouldn't accept.
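
For illustration, the "component order" idea could be captured roughly like
this (a sketch; field names are approximate, not necessarily what the new
struct actually uses):

    #include <stdint.h>

    struct mp_regular_imgfmt_plane {
        uint8_t num_components;
        // 1=R/Y, 2=G/Cb, 3=B/Cr, 4=alpha, 0=unused/padding
        uint8_t components[4];
    };

    struct mp_regular_imgfmt {
        uint8_t component_size;             // bytes per component
        int8_t component_pad;               // bits of padding per component
        uint8_t num_planes;
        struct mp_regular_imgfmt_plane planes[4];
        uint8_t chroma_w, chroma_h;         // chroma subsampling factors
    };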
2017-06-29 20:52:05 +02:00
wm4
5ea851feae video/fmt-conversion, img_format: change license to LGPL
The problem with fmt-conversion.h is that "lucabe", who disagreed with
LGPL, originally wrote it. But it was actually rewritten by "reimar"
later. The original switch statement was replaced with a lookup table.
No code other than the imgfmt2pixfmt() function signature survives.
Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping
them, can be copyrighted.

So changing the license should be fine, because reimar and all other
authors involved with the new code agreed to LGPL.

We also don't consider format pairs added later as copyrightable.

(The direct-mapping idea mentioned in the "Copyright" file seems
attractive, and I might implement it later anyway.)

Likewise, there might be some format names added to img_format.h, which
are not covered by relicensing agreements. These all affect "later"
additions, and they follow either the FFmpeg PIXFMT naming or some other
pre-existing logic, so this should be fine.
2017-06-18 15:15:07 +02:00
wm4
32833fa3d1 img_format: minor simplification 2017-06-18 13:58:42 +02:00
wm4
937dcc25ad img_format: drop legacy name mappings
Not needed anymore.
2017-06-18 13:46:34 +02:00
wm4
e85d06baad img_format: stop setting some fields to dummy values for hwaccel formats
Flags like MP_IMGFLAG_YUV were meaningless for hwaccel formats, and
setting fields like component_bits made even less sense.
2017-02-21 10:35:38 +01:00
wm4
3eceac2eab Remove compatibility things
Possible with bumped FFmpeg/Libav.

These are just the simple cases.
2016-12-07 19:53:11 +01:00
wm4
0348cd080f video: remove d3d11 video processor use from OpenGL interop
We now have a video filter that uses the d3d11 video processor, so it
makes no sense to have one in the VO interop code. The VO uses it for
formats not directly supported by ANGLE (so the video data is converted
to a RGB texture, which ANGLE can take in).

Change this so that the video filter is automatically inserted if
needed. Move the code that maps RGB surfaces to its own interop backend.
Add a bunch of new image formats, which are used to enforce the new
constraints, and to automatically insert the filter only when needed.

The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
and primitive, and will work only for this specific purpose. The format
negotiation mechanism in the filter chain is generally not very pretty,
and mostly broken as well. (libavfilter has a different mechanism, and
these mechanisms don't match well, so vf_lavfi uses some sort of hack.
It only works because hwaccel and non-hwaccel formats are strictly
separated.)

The RGB interop is now only used with older ANGLE versions. The only
reason I'm keeping it is because it's relatively isolated (uses only
existing mechanisms and adds no new concepts), and because I want to be
able to compare the behavior of the old code with the new one for
testing. It will be removed eventually.

If ANGLE has NV12 interop, P010 is now handled by converting to NV12
with the video processor, instead of converting it to RGB and using the
old mechanism to import that as a texture.
2016-05-29 19:00:55 +02:00
Niklas Haas
93546f0c2f vo_opengl: refactor pass_read_video and texture binding
This is a pretty major rewrite of the internal texture binding
mechanic, which makes it more flexible.

In general, the difference between the old and current approaches is
that now, all texture description is held in a struct img_tex and only
explicitly bound with pass_bind. (Once bound, a texture unit is assumed
to be set in stone and no longer tied to the img_tex)
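
A heavily stripped-down sketch of the mechanism (mpv's real structs carry
much more state; names here are simplified):

    // A self-contained description of a texture to be sampled in a pass.
    struct img_tex {
        unsigned gl_tex;    // GL texture object name
        int w, h;           // logical size used for coordinate transforms
        float multiplier;   // scale applied to the sampled values
    };

    struct pass_state {
        struct img_tex bound[16];
        int num_bound;
    };

    // Returns the texture unit index. Once bound, the unit is considered
    // fixed for the rest of the pass and no longer tied to the img_tex.
    static int pass_bind(struct pass_state *p, struct img_tex tex)
    {
        int unit = p->num_bound++;
        p->bound[unit] = tex;
        return unit;
    }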

This approach makes the code inside pass_read_video significantly more
flexible and cuts down on the number of weird special cases and
spaghetti logic.

It also has some improvements, e.g. cutting down greatly on the number
of unnecessary conversion passes inside pass_read_video (which was
previously mostly done to cope with the fact that the alternative would
have resulted in a combinatorial explosion of code complexity).

Some other notable changes (and potential improvements):

- texture expansion is now *always* handled in pass_read_video, and the
  colormatrix never does this anymore. (Which means the code could
  probably be removed from the colormatrix generation logic, modulo some
  other VOs)

- struct fbo_tex now stores both its "physical" and "logical"
  (configured) size, which cuts down on the amount of width/height
  baggage on some function calls

- vo_opengl can now technically support textures with different bit
  depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries
  inside img_format.c don't export this (nor does ffmpeg support it,
  really) so the status quo of using the same tex_mul for all planes is
  kept.

- dumb_mode is now only needed because of the indirect_fbo being in the
  main rendering pipeline. If we reintroduce p->use_indirect and thread
  a transform through the entire program this could be skipped where
  unnecessary, allowing for the removal of dumb_mode. But I'm not sure
  how to do this in a clean way. (Which is part of why it got introduced
  to begin with)

- It would be trivial to resurrect source-shader now (it would just be
  one extra 'if' inside pass_read_video).
2016-03-05 13:08:38 +01:00
wm4
e4a1086cfb img_format: fix padding calculation with P010
This was broken during the refactoring in commit e2d90b38, before it was pushed.
2016-01-08 12:48:03 +01:00
wm4
6eccd4a573 img_format: fix compilation on older libavutil releases
AVComponentDescriptor.offset was introduced relatively recently. On
older releases, you have to use AVComponentDescriptor.offset_plus1,
which is now deprecated.

Instead of adding ifdeffery, assume AV_PIX_FMT_NV21 is the only format
for which this applies (and will remain the only case), which is
probably true enough.
2016-01-07 16:54:01 +01:00
wm4
b0c1455aa4 img_format: add a generic flag for semi-planar formats 2016-01-07 16:30:34 +01:00
wm4
e2d90b383b img_format: take care of pixfmts that declare padding
A format could declare that some or all LSBs in a component are padding
bits by setting a non-0 AVComponentDescriptor.shift value. This means we
would interpret it incorrectly, because until now we always assumed all
regular formats have the padding in the MSBs.

Not a single format that does this actually exists, though. But a NV12
variant will be added later in FFmpeg.
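
In other words, reading a component now has to honor the shift (shown here
with the current libavutil field names; older releases spell them
differently):

    #include <stdint.h>
    #include <libavutil/pixdesc.h>

    // Sketch: extract one component from a raw bit pattern.
    // .shift = bits of LSB padding, .depth = number of significant bits.
    static uint64_t read_component(uint64_t raw, const AVComponentDescriptor *c)
    {
        return (raw >> c->shift) & ((1ull << c->depth) - 1);
    }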
2016-01-07 16:30:34 +01:00
wm4
3973a953df sub: find GBRP format automatically when rendering to RGB
This removes the need to define IMGFMT_GBRAP, which fixes compilation
with the current Libav release.

This also makes it automatically pick up a GBRP format with the same bit
width. (Unfortunately, it seems libswscale does not support conversion
to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing
precision for video covered by subtitles in cases where this code is used.)

Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
Whether this is good or bad, it fixes behavior with alpha. Although I'm
not sure if the alpha range is really correct ([0,2^16-1] vs.
[0,255*256]). Keep in mind that libswscale doesn't even agree with the
way we do it.
2015-12-24 16:42:21 +01:00
wm4
663415b914 vo_opengl: fix issues with some obscure pixel formats
The computation of the tex_mul variable was broken in multiple ways.
This variable is used e.g. by debanding for moving expansion of 10 bit
fixed-point input to normalized range to another stage of processing.

One obvious bug was that the rgb555 pixel format was broken. This format
has component_bits=5, but obviously it's already sampled in normalized
range, and does not need expansion. The tex_mul-free code path avoids
this by not using the colormatrix. (The code was originally designed to
work around dealing with the generally complicated pixel formats by only
using the colormatrix in the YUV case.)

Another possible bug was with 10 bit input. It expanded the input by
bringing the [0,2^10) range to [0,1], and then treating the expanded
input as 16 bit input. I didn't bother to check what this actually
computed, but it's somewhat likely it was wrong anyway. Now it uses
mp_get_csp_mul(), and disables expansion when computing the YUV matrix.
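
For reference, the expansion factor in the full-range case is simply the
following (a simplified take on the kind of multiplier mp_get_csp_mul()
returns; limited-range YUV needs more care):

    // Maps N significant bits stored in an M-bit normalized texture back to
    // the nominal [0,1] range (full-range case only).
    static double tex_expand_mul(int component_bits, int texture_bits)
    {
        return ((1u << texture_bits) - 1) / (double)((1u << component_bits) - 1);
    }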
2015-12-07 23:48:59 +01:00
wm4
cdeb0e4c72 video: fix playback of pal8
PAL8 is the only format that is RGB, has only 1 component, is byte-
aligned. It was accidentally detected by the GBRP case as planar RGB.
(It would have been ok if it were gray; what ruins it is that it's
actually paletted, and the color values do not correspond to colors, but
to palette entries.)

Pseudo-pal formats are ok; in fact AV_PIX_FMT_GRAY is rightfully marked
as MP_IMGFLAG_YUV_P.
2015-11-01 14:11:43 +01:00
wm4
e3de309804 vo_opengl: support all kinds of GBRP formats
Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP, and
AV_PIX_FMT_GBRAP16.

(Not that it matters, because nobody uses these anyway.)
2015-10-18 18:37:24 +02:00
wm4
1dd7b7bddc video: remove VDA support
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
useable with older FFmpeg versions in some newer mpv releases.

VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
2015-09-28 22:03:14 +02:00
wm4
9e04e31906 video: do not use deprecated libavutil pixdesc fields
These were normalized and are saner now. We want to use the new fields,
and also get rid of the deprecation warnings, so use them. There's no
release yet which uses these, so some ifdeffery is unfortunately needed.
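
The kind of ifdeffery this needs looks roughly like this
(HAVE_NEW_PIXDESC_FIELDS is a placeholder for whatever version check the
build actually uses):

    #include <libavutil/pixdesc.h>

    static void get_component_desc(const AVComponentDescriptor *comp,
                                   int *depth, int *offset, int *step)
    {
    #if HAVE_NEW_PIXDESC_FIELDS  // placeholder feature test
        *depth  = comp->depth;
        *offset = comp->offset;
        *step   = comp->step;
    #else
        *depth  = comp->depth_minus1 + 1;
        *offset = comp->offset_plus1 - 1;
        *step   = comp->step_minus1 + 1;
    #endif
    }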
2015-09-10 22:13:52 +02:00
Sebastien Zwickert
31b5a211f4 hwdec: add VideoToolbox support
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support which was added recently (to FFmpeg git,
libav doesn't support it).

Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
2015-08-05 17:47:30 +02:00
Marcin Kurczewski
f43017bfe9 Update license headers
Signed-off-by: wm4 <wm4@nowhere>
2015-04-13 12:10:01 +02:00
wm4
cfa44f4e90 vo_opengl: move minor helper to common code
The generic image format code should carry most of the "knowledge" about
image formats.
2015-03-09 22:47:33 +01:00
wm4
2a691d1ede video: try to keep implied alpha when using conversion filters
Don't just discard alpha. This probably does the right thing, in the
rare situations when alpha matters at all.
2015-01-21 21:49:15 +01:00
wm4
a0caadd512 vo_opengl: handle grayscale input better, add YA16 support
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.

Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
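
The "clamp off" part amounts to something like the following (matrix layout
simplified; mpv's actual struct and code differ):

    // For luma-only input, zero the U/V coefficients so whatever the sampler
    // returns for those components cannot leak into the RGB output.
    struct colormatrix { float m[3][3]; float c[3]; };

    static void zero_chroma_columns(struct colormatrix *mat)
    {
        for (int row = 0; row < 3; row++) {
            mat->m[row][1] = 0;  // U coefficient
            mat->m[row][2] = 0;  // V coefficient
        }
    }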
2015-01-21 19:29:18 +01:00
wm4
30ca30c0a1 vf_scale: replace ancient fallback image format selection
If the video and the VO don't support the same format, a conversion
filter needs to be inserted. Since a VO can support multiple formats, and
the filter chain also can deal with multiple formats, you basically have
to pick from a huge matrix of possible conversions.

The old MPlayer code had a quite naive algorithm: it first checked
whether any conversion from the list of preferred conversions matched,
and if not, it was falling back on checking a hardcoded list of output
formats (more or less sorted by quality). This had some unintended side-
effects, like not using obvious "replacement" formats, selecting the
wrong colorspace, selecting a bit depth that is too high or too low, and
more.

Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This
function was made for this purpose, and should select the "best" format.

Libav provides a similar function, but with a different name - there is
a function with the same name in FFmpeg, but it has different semantics
(I'm not sure if Libav or FFmpeg fucked up here).

This also removes handling of VFCAP_CSP_SUPPORTED vs.
VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly
for filter chains with multiple scale filters.

Fixes #1494.
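
Usage is roughly the following (sketch; the VO's format list must be
terminated with AV_PIX_FMT_NONE):

    #include <libavcodec/avcodec.h>

    // Pick the VO-supported format "closest" to the source format.
    static enum AVPixelFormat pick_output_format(enum AVPixelFormat *vo_formats,
                                                 enum AVPixelFormat src,
                                                 int has_alpha)
    {
        int loss = 0;
        return avcodec_find_best_pix_fmt_of_list(vo_formats, src,
                                                 has_alpha, &loss);
    }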
2015-01-21 18:33:47 +01:00
wm4
e5f2072364 command: change properties added in previous commit
Make their meaning more exact, and don't pretend that there's a
reasonable definition for "bits-per-pixel". Also make unset fields
unavailable.

average_depth still might be inconsistent: for example, 10 bit 4:2:0 is
identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats
seemingly drop the per-component padding, while RGB formats do not.
Internally it's consistent though: 10 bit YUV components are read as
16 bit, and the padding must be 0 (it's basically like an odd fixed-
point representation, rather than a bitfield).
2015-01-10 19:13:16 +01:00
wm4
a7686e86ff video: remove swapped-endian image format aliases
Like the previous commit, this removes names only, not actual support
for these formats.
2014-11-05 01:52:20 +01:00
wm4
5fc29e459f video: add image format test program 2014-11-05 01:52:19 +01:00
wm4
7333bc6536 video: passthrough unknown AVPixelFormats
This is a rather radical change: instead of maintaining a whitelist of
FFmpeg formats we support, we automatically support all formats.

In general, a format which doesn't have an explicit IMGFMT_* name will
be converted to a known format through libswscale, or will be handled
by code which can treat pixel formats in a generic way using the pixel
format description, like vo_opengl.

AV_PIX_FMT_UYYVYY411 is a special-case. It's packed YUV with chroma
subsampling by 4 in both directions. Its component order is documented
as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples.
This means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so
4 bytes + 2 bytes). FFmpeg can actually handle this format with its
generic mechanism in an extremely awkward way, but it doesn't work for
us. Blacklist it, and hope no similar formats will be added in the
future.

Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value of
500. More is not allowed, and there are some fixed size arrays that need
to contain any possible format (look for IMGFMT_END dependencies).

We could have this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
through the whole codebase. But for now, this is better, because we
can compensate for formats missing in Libav or older FFmpeg versions,
like AV_PIX_FMT_RGB0 and others.
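
The mapping itself can be pictured as a plain offset; the constants below are
made up for illustration (mpv's real headers define their own ranges):

    #define IMGFMT_AVPIXFMT_START 1000   // hypothetical base of the reserved range
    #define IMGFMT_AVPIXFMT_MAX   500    // the numeric cap mentioned above

    static int pixfmt_to_imgfmt(int av_pix_fmt)
    {
        if (av_pix_fmt < 0 || av_pix_fmt >= IMGFMT_AVPIXFMT_MAX)
            return 0; // out of the reserved range -> unsupported
        return IMGFMT_AVPIXFMT_START + av_pix_fmt;
    }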
2014-11-05 01:52:19 +01:00
wm4
9548f63943 video: handle endian detection in a more generic way
FFmpeg has only a AV_PIX_FMT_FLAG_BE flag, not a LE one, which causes
problems for us: we want to have the LE flag too, so code can actually
detect whether a format is non-native endian. Basically, we want to
reconstruct the LE/BE suffix all AV_PIX_FMT_*s have.

Doing this is hard due to the (messed up) way AVPixFmtDescriptor works.
The worst is AV_PIX_FMT_RGB444: this group of formats describe an
endian-independent access (since no component actually spans 2 bytes,
you only need byte accesses with a fixed offset), so we have to go
through some pain.
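
Reconstructing the suffix can be sketched like this
(av_pix_fmt_swap_endianness() returns AV_PIX_FMT_NONE for formats without a
swapped counterpart; the RGB444-style formats still need the extra handling
described above):

    #include <libavutil/pixdesc.h>

    // 0 = not endian-specific, 'B' = the BE variant, 'L' = the LE variant.
    static int endian_suffix(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
        if (!d || av_pix_fmt_swap_endianness(fmt) == AV_PIX_FMT_NONE)
            return 0;
        return (d->flags & AV_PIX_FMT_FLAG_BE) ? 'B' : 'L';
    }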
2014-11-05 01:41:35 +01:00
wm4
ffe9c03502 video: get hwaccel flag from pixdesc 2014-11-05 01:41:34 +01:00
wm4
68ff8a0484 Move compat/ and bstr/ directory contents somewhere else
bstr.c doesn't really deserve its own directory, and compat had just
a few files, most of which may as well be in osdep. There isn't really
any justification for these extra directories, so get rid of them.

The compat/libav.h was empty - just delete it. We changed our approach
to API compatibility, and will likely not need it anymore.
2014-08-29 12:31:52 +02:00
wm4
f193d25844 video: cosmetics: reformat image format names table 2014-06-14 10:06:23 +02:00
wm4
a9538e17ad video: synchronize mpv rgb pixel format names with ffmpeg names
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.

Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases, the name stays the same, even
though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
affects the user-visible names too; this might cause confusion.
2014-06-14 10:03:04 +02:00
wm4
6ab72f9760 video: automatically strip "le" and "be" suffix from pixel format names
These suffixes are annoying when they're redundant, so strip them
automatically. On little endian machines, always strip the "le" suffix,
and on big endian machines vice versa (although I don't think anyone
ever tried to run mpv on a big endian machine).

Since pixel format strings are returned by a certain function and we
can't just change static strings, use a trick to pass a stack buffer
transparently. But this also means the string can't be permanently
stored by the caller, so vf_dlopen.c has to be updated. There seems
to be no other case where this is done, though.
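
The stack-buffer trick boils down to a compound literal hidden behind a macro
(a sketch, not necessarily the exact code):

    #include <stddef.h>

    char *mp_imgfmt_to_name_buf(char *buf, size_t buf_size, int imgfmt);

    // The compound literal is a buffer on the caller's stack, so existing
    // call sites keep their "just returns a string" shape - but the result
    // must not be stored past the enclosing block (hence the vf_dlopen.c fix).
    #define mp_imgfmt_to_name(fmt) mp_imgfmt_to_name_buf((char[16]){0}, 16, (fmt))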
2014-06-14 09:58:48 +02:00
wm4
7b7e15a460 vdpau: move RGB surface management out of the VO
Integrate it with the existing surface allocator in vdpau.c. The changes
are a bit violent, because the vdpau API is so non-orthogonal: compared
to video surfaces, output surfaces use a different ID type, different
format types, and different API functions.

Also, introduce IMGFMT_VDPAU_OUTPUT for VdpOutputSurfaces wrapped in
mp_image, rather than hacking it. This is a bit cleaner.
2014-05-22 20:59:31 +02:00
wm4
186fd0311d video: change image format names, prefer mostly FFmpeg names
The most user visible change is that "420p" is now displayed as
"yuv420p". This is what FFmpeg uses (almost), and is also less confusing
since "420p" is often confused with "420 pixels vertical resolution".

In general, we return the FFmpeg pixel format name. We still use our own
old mechanism to keep a list of exceptions to provide compatibility for
a while.

Also, never return NULL for image format names. If the format is unset
(0/IMGFMT_NONE), return "none". If the format has no name (probably
never happens, FFmpeg seems to guarantee that a name is set), return
"unknown".
2014-04-14 20:51:27 +02:00
wm4
6389507314 vdpau: remove legacy pixel formats
They were used by ancient libavcodec versions. This also removes the
need to distinguish vdpau image formats at all (since there is only
one), and some code can be simplified.
2014-03-17 18:21:11 +01:00
wm4
5ed24862c0 video: change image format from unsigned int to int in some places
Image formats used to be FourCCs, so unsigned int was better. But now
it's annoying and the only difference is that unsigned int is more to
type than int.
2014-03-17 18:19:57 +01:00
wm4
6aa2d1f120 img_format: AV_PIX_FMT_FLAG_ALPHA is always available
We no longer support ancient libavutil versions.
2014-03-17 18:19:28 +01:00
wm4
95d94238f4 img_format: drop message about unknown pixel formats
Too bad.
2013-12-21 20:50:11 +01:00
wm4
398bfbe4c1 compat: add compatibility kludge for Libav 9
Libav 9 still uses the unprefixed PIX_FMT_... symbols, but they will
probably be removed some time in the future.

There are some other deprecations we have yet to take care of, but
there are no clear replacements yet.
2013-12-08 23:51:39 +01:00
wm4
f30c2c99d1 mp_image: deal with FFmpeg PSEUDOPAL braindeath
We got a crash in libavutil when encoding with Y8 (GRAY8). The reason
was that libavutil was copying an Y8 image allocated by us, and expected
a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear
what PSEUDOPAL means, and it makes literally no sense at all. However,
it does expect a palette allocated for some formats that are not
paletted, and libavutil crashed when trying to access the non-existent
palette.
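
The workaround amounts to something along these lines (illustrative, not
necessarily mpv's exact change):

    #include <libavutil/frame.h>
    #include <libavutil/mem.h>
    #include <libavutil/pixdesc.h>

    // Give pseudo-paletted formats the dummy palette libavutil expects in
    // data[1], so its copy routines don't touch a NULL pointer.
    static int ensure_pseudo_palette(AVFrame *frame)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(frame->format);
        if (d && (d->flags & AV_PIX_FMT_FLAG_PSEUDOPAL) && !frame->data[1]) {
            frame->data[1] = av_mallocz(AVPALETTE_SIZE);
            if (!frame->data[1])
                return -1;
        }
        return 0;
    }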
2013-12-01 20:51:38 +01:00
wm4
597b8a3550 Take care of some libavutil deprecations, drop support for FFmpeg 1.0
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get

This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.

Mostly untested; it does compile with Libav 9.9.
2013-11-29 17:39:57 +01:00
wm4
60cd300558 vaapi: remove unused hw image formats, simplify
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.

Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
2013-11-29 14:19:29 +01:00
wm4
d6de87d1d3 video: make IMGFMT_RGB0 etc. exist even if libavutil doesn't support it
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code will break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
2013-11-05 22:05:23 +01:00
Stefano Pigozzi
a9cb2dc1b8 video: add vda decode support (with hwaccel) and direct rendering
Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec'
decoder in FFmpeg.

The Good: This new implementation has some advantages over the previous one:

 - It works with Libav: vda_h264_dec never got into Libav since they prefer
   client applications to use the hwaccel API.

 - It is way more efficient: in my tests this implementation yields a
   reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec` and
   ~65-75% compared to h264 software decoding. This is mainly because
   `vo_corevideo` was adapted to perform direct rendering of the
   `CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.

The Bad:
  - `vo_corevideo` is required to use VDA decoding acceleration.
  - only works with versions of ffmpeg/libav new enough (needs refcounted
    frames). That is FFmpeg 2.0+ and Libav's git master currently.

The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture.
On the one hand, this makes the code simple, since Apple's OpenGL implementation
actually supports this out of the box. It would be nice to support other
output image formats and choose the best format depending on the input, or at
least to make it configurable. My tests indicate that CPU usage actually
increases with a 420p IMGFMT output, which is not what I would have expected.

NOTE: There is a small memory leak with old versions of FFmpeg and with Libav
since the CVPixelBufferRef is not automatically released when the AVFrame is
deallocated. This can cause leaks inside libavcodec for decoded frames that
are discarded before mpv wraps them inside a refcounted mp_image (this only
happens on seeks).
For frames that enter mpv's refcounting facilities, this is not a problem
since we rewrap the CVPixelBufferRef in our mp_image that properly forwards
CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
CVPixelBufferRef.

So, for FFmpeg use something more recent than `b3d63995`; for Libav, the patch
was posted to the dev ML in July and has been in review since then, because,
apparently, the proposed fix is rather hacky.
2013-08-22 12:13:30 +02:00
wm4
2827295703 video: add vaapi decode and output support
This is based on the MPlayer VA API patches. To be exact it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.

This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).

On the other hand, some features not present in the original patch were
added, like screenshot support.

VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and should be rather slow. In some cases, only either OSD
or subtitles can be shown at the same time (because OSD is drawn first,
OSD is preferred).

Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.

Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of being
accepted.)

Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on a nvidia system; however this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
2013-08-12 01:12:02 +02:00
wm4
5accc5e7c1 vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API
Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
file is named so because because it's written against the "old"
libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").

Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without deprecation grace period,
so we have to support it now. Moreover, there will probably be no Libav
release which supports both, so the transition is even less smooth than
we could hope, and we have to support both the old and new API.)

Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.

Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)

Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.

Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size change anymore, we allow allocating surfaces
of arbitrary size instead of locking them to the size the VO was configured with.
The non-hwdec code also has slightly different allocation behavior now.

Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
2013-07-28 19:25:07 +02:00
wm4
b34338ac6f img_format: fix broken condition
This caused all formats with fewer than 8 bits per component to be marked as
little endian. (Normally, only some messed up packed RGB formats are
endian-specific in this case.)
2013-05-06 23:11:10 +02:00