Commit Graph

71 Commits

Author SHA1 Message Date
wm4 6ff20c71ab video: alias IMGFMT_RGB30 to AV_PIX_FMT_X2RGB10
IMGFMT_RGB30 was added first; FFmpeg added AV_PIX_FMT_X2RGB10 later.
The two are exactly the same format, so treat them as such. For some
reason, libswscale still seems to output incompatible data - not sure
what this is about, but I'm not going to debug it.
2020-06-17 19:43:09 +02:00
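
As a hedged illustration of the aliasing above (not mpv's actual code;
names are hypothetical), an alias boils down to two names referring to
the same AVPixelFormat entry:

    #include <stdio.h>
    #include <libavutil/pixdesc.h>
    #include <libavutil/pixfmt.h>

    /* Hypothetical alias table: an mpv-side format name next to the FFmpeg
     * pixel format it is treated as identical to. */
    static const struct {
        const char *imgfmt_name;
        enum AVPixelFormat pixfmt;
    } hyp_aliases[] = {
        { "rgb30", AV_PIX_FMT_X2RGB10 },
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof(hyp_aliases) / sizeof(hyp_aliases[0]); i++)
            printf("%s -> %s\n", hyp_aliases[i].imgfmt_name,
                   av_get_pix_fmt_name(hyp_aliases[i].pixfmt));
        return 0;
    }
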
wm4 0df0a847f4 video: drop NV24 alias
Caused build failures with still-supported FFmpeg versions. It's
unreferenced, so it's not needed.

Fixes: #7471
2020-02-18 18:03:42 +01:00
wm4 7d11eda72e Remove remains of Libav compatibility
Libav seems rather dead: no release for 2 years, no new git commits in
master for almost a year (with one exception ~6 months ago). From what I
can tell, some developers resigned themselves to the horrifying idea to
post patches to ffmpeg-devel instead, while the rest of the developers
went on to greener pastures.

Libav was a better project than FFmpeg. Unfortunately, FFmpeg won,
because it managed to keep the name and website. Libav was pushed more
and more into obscurity: while there was initially a big push for Libav,
FFmpeg just remained "in place" and visible for most people. FFmpeg was
slowly draining all manpower and energy from Libav. A big part of this
was that FFmpeg stole code from Libav (regular merges of the entire
Libav git tree), making it some sort of Frankenstein mirror of Libav,
think decaying zombie with additional legs ("features") nailed to it.
"Stealing" surely is the wrong word; I'm just aping the language that
some of the FFmpeg members used to use. All that is in the past now, I'm
probably the only person left who is annoyed by this, and with this
commit I'm putting this decade long problem finally to an end. I just
thought I'd express my annoyance about this fucking shitshow one last
time.

The most intrusive change in this commit is the resample filter, which
originally used libavresample. Since the FFmpeg developer refused to
enable libavresample by default for drama reasons, and the API was
slightly different, the filter used a big preprocessor mess to
keep it compatible with libswresample. All that falls away now. The
simplification to the build system is also significant.
2020-02-16 15:14:55 +01:00
wm4 d699893dbd img_format: add alias for ffmpeg pal8 format
For the next commit.
2020-02-10 18:59:59 +01:00
wm4 1edb3d061b test: add dumping of img_format metadata
This is fragile enough that it warrants getting "monitored".

This takes the commented test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.

Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.

This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.

The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
2019-11-08 21:22:49 +01:00
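
A minimal sketch of the comparison step described above, assuming
hypothetical file names; mpv's actual test code differs:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Write the current dump, then let diff(1) compare it against the ref
     * file stored in git. A non-zero exit status means the output changed
     * and the ref file needs manual inspection and updating. */
    static int compare_with_ref(const char *ref, const char *out)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            execl("/usr/bin/diff", "diff", "-u", ref, out, (char *)NULL);
            _exit(127); /* exec failed */
        }
        int status = 0;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

    int main(void)
    {
        FILE *f = fopen("img_formats.txt", "w");
        if (!f)
            return 1;
        fprintf(f, "format dump goes here\n"); /* placeholder for the real dump */
        fclose(f);
        return compare_with_ref("ref/img_formats.txt", "img_formats.txt") ? 1 : 0;
    }
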
wm4 ff67fbf328 build: lower required FFmpeg version
The required FFmpeg version was last bumped a long time ago, except in
commit 1638fa7b46, which bumped it for some obscure pixel
format.

This is pretty annoying, so make that dependency optional.
2019-10-20 19:41:18 +02:00
Philip Langdale 1638fa7b46 vo/gpu: hwdec_vdpau: Support direct mode for 4:4:4 content
New releases of VDPAU support decoding 4:4:4 content, and that comes
back as NV24 when using 'direct mode' in OpenGL Interop. That means we
need to be a little bit smarter about how we set up the OpenGL
textures.
2019-07-08 01:11:27 +02:00
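
A rough sketch of what "being smarter about the textures" amounts to,
assuming an R8/RG8 plane layout and made-up helper types; the real
interop code is more involved:

    #include <stdio.h>
    #include <libavutil/pixdesc.h>

    /* Both NV12 and NV24 map to one single-channel luma plane plus one
     * two-channel chroma plane; only the chroma plane's size differs. */
    struct hyp_plane { int is_two_channel; int width, height; };

    static void hyp_setup_planes(enum AVPixelFormat fmt, int w, int h,
                                 struct hyp_plane out[2])
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        out[0] = (struct hyp_plane){ 0, w, h };                  /* luma */
        out[1] = (struct hyp_plane){ 1,
                                     w >> desc->log2_chroma_w,   /* NV12: w/2, NV24: w */
                                     h >> desc->log2_chroma_h }; /* NV12: h/2, NV24: h */
    }

    int main(void)
    {
        struct hyp_plane p[2];
        hyp_setup_planes(AV_PIX_FMT_NV24, 1920, 1080, p);
        printf("chroma plane: %dx%d\n", p[1].width, p[1].height);
        return 0;
    }
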
wm4 9f595f3a80 vo_gpu: make screenshots use the GL renderer
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.

The logic for when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by checking whether the source video format has more
than 8 bits per component. We apply this logic even to window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 has 3 components and no padding, and is usually not
supported by GPUs for rendering, so we have to use RGBA64, which forces
an alpha channel.

Will break for users who use --target-trc and similar options.

I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.

This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.

Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
2018-02-11 17:45:51 -08:00
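
A minimal sketch of the "more than 8 bits per component" decision using
libavutil's pixel format descriptors; the function name is made up:

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    /* Illustrative only: request a 16 bit (RGBA64) screenshot if any
     * component of the source format has more than 8 bits. */
    static bool hyp_want_16bit_png(enum AVPixelFormat src_fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(src_fmt);
        if (!desc)
            return false;
        for (int i = 0; i < desc->nb_components; i++) {
            if (desc->comp[i].depth > 8)
                return true;
        }
        return false;
    }
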
wm4 76276c9210 video: rewrite filtering glue code
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.

This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.

vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that cannot be inserted by the user manually.

f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).

The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.

Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)

In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.

vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.

The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.

Exporting VO capabilities is still a big mess (mp_stream_info stuff).

The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.

f_async_queue is unused.
2018-01-30 03:10:27 -08:00
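
A tiny illustration of the "EOF is a frame" idea, with hypothetical
types; the real filter framework is far more elaborate:

    #include <stdbool.h>
    #include <stddef.h>

    /* EOF is not a separate filter state but a frame type that travels
     * through the graph and must be forwarded exactly once. */
    enum hyp_frame_type { HYP_FRAME_NONE, HYP_FRAME_VIDEO, HYP_FRAME_EOF };

    struct hyp_frame { enum hyp_frame_type type; void *data; };

    /* A pass-through filter's process step: drain internal state when the
     * EOF "frame" arrives, then pass that same EOF downstream. */
    static struct hyp_frame hyp_filter_process(struct hyp_frame in, bool *drained)
    {
        if (in.type == HYP_FRAME_EOF)
            *drained = true;   /* flush any queued frames here */
        return in;             /* normal frames and EOF both pass through once */
    }
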
Lionel CHAZALLON cfcee4cfe7 Add DRM_PRIME Format Handling and Display for RockChip MPP decoders
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in ffmpeg, which allows decoders to provide an AVDRMFrameDescriptor
struct.

That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.

This has been tested on a RockChip ROCK64 device.
2017-10-23 21:07:24 +02:00
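
A minimal sketch of how such a frame is inspected, using the
AVDRMFrameDescriptor layout from libavutil/hwcontext_drm.h; the actual
KMS/EGL import is omitted:

    #include <stdio.h>
    #include <libavutil/frame.h>
    #include <libavutil/hwcontext_drm.h>

    /* For AV_PIX_FMT_DRM_PRIME frames, data[0] points to an
     * AVDRMFrameDescriptor; each object carries a dmabuf fd that can be
     * imported into KMS/DRM or EGL for zero-copy display. */
    static void hyp_dump_drm_prime(const AVFrame *frame)
    {
        const AVDRMFrameDescriptor *desc =
            (const AVDRMFrameDescriptor *)frame->data[0];
        for (int i = 0; i < desc->nb_objects; i++)
            printf("object %d: dmabuf fd %d, size %zu\n",
                   i, desc->objects[i].fd, desc->objects[i].size);
        for (int i = 0; i < desc->nb_layers; i++)
            printf("layer %d: fourcc 0x%08x, %d plane(s)\n",
                   i, (unsigned)desc->layers[i].format, desc->layers[i].nb_planes);
    }
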
Aman Gupta 61a1612de9 hwdec: add mediacodec hardware decoder for IMGFMT_MEDIACODEC frames 2017-10-09 18:36:54 +02:00
wm4 1f79b26529 vaapi: use newer libavutil vaapi pixfmt name
AV_PIX_FMT_VAAPI_VLD is clearly deprecated - it just doesn't raise any
compiler warnings.
2017-09-29 17:53:00 +02:00
wm4 ae7db6503b video: drop old D3D11/DXVA2 support
Now you need FFmpeg git, or something.

This also gets rid of the last real use of gpu_memcpy(). libavutil does
that itself. (vaapi.c still used it, but it was essentially unused,
because the code path isn't really in use anymore. It wasn't even
included due to the d3d-hwaccel dependency in wscript.)
2017-09-26 18:58:45 +02:00
wm4 2aff6f8c95 vo_direct3d: remove non-working nv12 shader support
It never worked. It relied on some obscure texture format to provide the
equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware seemed to
report support for it ever. No idea what's the correct way to do this.
On D3D11 it exists, of course.

(Actually I'd like to remove the whole VO.)
2017-06-30 18:13:01 +02:00
wm4 1ad036a2ef video: get rid of swapped packed YUV
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
2017-06-30 18:01:29 +02:00
wm4 300f6a3344 video: drop some more IMGFMT aliases
For vo_opengl and vo_direct3d, these are supported in a generic way.

For vf_vapoursynth, we could probably map its VSFormat struct in a
generic way, but for now do some bullshit.

vf_eq.c actually loses support for these formats. We could add generic
support too (anything that has 8 bit planes will work), but why bother.
The filter is deprecated anyway.
2017-06-29 21:30:10 +02:00
wm4 af5a71f52e video: drop some unused IMGFMT aliases
These formats are supported in a generic way.

To get rid of IMGFMT_NV21, remove support from vo_direct3d.c completely.
2017-06-29 20:56:29 +02:00
wm4 5ea851feae video/fmt-conversion, img_format: change license to LGPL
The problem with fmt-conversion.h is that "lucabe", who disagreed with
LGPL, originally wrote it. But it was actually rewritten by "reimar"
later. The original switch statement was replaced with a lookup table.
No code other than the imgfmt2pixfmt() function signature survives.
Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping
them, can be copyrighted.

So changing the license should be fine, because reimar and all other
authors involved with the new code agreed to LGPL.

We also don't consider format pairs added later as copyrightable.

(The direct-mapping idea mentioned in the "Copyright" file seems
attractive, and I might implement it later anyway.)

Likewise, there might be some format names added to img_format.h, which
are not covered by relicensing agreements. These all affect "later"
additions, and they follow either the FFmpeg PIXFMT naming or some other
pre-existing logic, so this should be fine.
2017-06-18 15:15:07 +02:00
wm4 456c2ad222 img_format: drop unused aliases
vo_opengl uses those automatically via pixdesc.
2017-06-18 15:15:07 +02:00
wm4 0754cbc83e d3d: add support for new libavcodec hwaccel API
Unfortunately quite a mess, in particular due to the need to have some
compatibility with the old API. (The old API will be supported only in
the short term.)
2017-06-08 21:51:25 +02:00
wm4 3eceac2eab Remove compatibility things
Possible with bumped FFmpeg/Libav.

These are just the simple cases.
2016-12-07 19:53:11 +01:00
Philip Langdale 585c5c34f1 vo_opengl: hwdec_cuda: Support P016 output surfaces
The latest 375.xx nvidia drivers add support for P016 output
surfaces. In combination with an ffmpeg change to return those
surfaces, we can display them.

The bulk of the work is related to knowing which format you're
dealing with at the right time. Once you know, it's straightforward.
2016-11-22 20:19:58 +01:00
wm4 73a5bde518 img_format: remove some unneeded format definitions
They're still supported, just that they have no IMGFMT_ alias.
2016-09-28 14:21:32 +02:00
Philip Langdale b83bfea05d hwdec_cuda: Rename config variable to be more consistent
'cuda-gl' isn't right - you can turn this on without any GL and
get some non-zero benefit (with the cuda-copy hwaccel). So
'cuda-hwaccel' seems more consistent with everything else.
2016-09-16 14:26:30 +02:00
Philip Langdale 2048ad2b8a hwdec/opengl: Add support for CUDA and cuvid/NvDecode
Nvidia's "NvDecode" API (up until recently called "cuvid") is a
cross-platform but nvidia-proprietary API that exposes their hardware
video decoding capabilities. It is analogous to their DXVA or VDPAU
support on Windows or Linux, but without using platform-specific API
calls.

As a rule, you'd rather use DXVA or VDPAU as these are more mature
and well supported APIs, but on Linux, VDPAU is falling behind the
hardware capabilities, and there's no sign that nvidia are making
the investments to update it.

Most concretely, this means that there is no VP8/9 or HEVC Main10
support in VDPAU. On the other hand, NvDecode does expose VP8/9 and
partial HEVC Main10 support (more on that below).

ffmpeg already has support in the form of the "cuvid" family of
decoders. Due to the design of the API, it is best exposed as a full
decoder rather than an hwaccel. As such, there are decoders like
h264_cuvid, hevc_cuvid, etc.

These decoders support two output paths today - in both cases, NV12
frames are returned, either in CUDA device memory or regular system
memory.

In the case of the system memory path, the decoders can be used
as-is in mpv today with a command line like:

mpv --vd=lavc:h264_cuvid foobar.mp4

Doing this will take advantage of hardware decoding, but the cost
of the memcpy to system memory adds up, especially for high
resolution video (4K etc).

To avoid that, we need an hwdec that takes advantage of CUDA's
OpenGL interop to copy from device memory into OpenGL textures.

That is what this change implements.

The process is relatively simple as only basic device context
acquisition needs to be done by us - the CUDA buffer pool is managed
by the decoder - thankfully.

The hwdec looks a bit like the vdpau interop one - the hwdec
maintains a single set of plane textures and each output frame
is repeatedly mapped into these textures to pass on.

The frames are always in NV12 format, at least until 10bit output
support emerges.

The only slightly interesting part of the copying process is that
the CUDA/GL interop works through registered PBOs, so we need to define
these for each of the textures.

TODO Items:
* I need to add a download_image function for screenshots. This
  would do the same copy to system memory that the decoder's
  system memory output does.
* There are items to investigate on the ffmpeg side. There appears
  to be a problem with timestamps for some content.

Final note: I mentioned HEVC Main10. While there is no 10bit output
support, NvDecode can return dithered 8bit NV12 so you can take
advantage of the hardware acceleration.

This particular mode requires compiling ffmpeg with a modified
header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg
yet.

Usage:

You will need to specify vo=opengl and hwdec=cuda.

Note that hwdec=auto will probably not work as it will try to use
vdpau first.

mpv --hwdec=cuda --vo=opengl foobar.mp4

If you want to use filters that require frames in system memory,
just use the decoder directly without the hwdec, as documented
above.
2016-09-08 16:06:12 +02:00
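
A rough sketch of the per-plane PBO copy using the CUDA driver API,
assuming the PBO was already registered with cuGraphicsGLRegisterBuffer;
error handling and the GL texture upload from the PBO are omitted, and
this is not mpv's actual code:

    #include <cuda.h>

    /* Copy one NV12 plane from CUDA device memory into a GL pixel buffer
     * object that was registered with CUDA beforehand. */
    static void hyp_copy_plane(CUdeviceptr src, size_t src_pitch,
                               CUgraphicsResource pbo_res,
                               size_t width_bytes, size_t height,
                               CUstream stream)
    {
        CUdeviceptr dst = 0;
        size_t dst_size = 0;

        cuGraphicsMapResources(1, &pbo_res, stream);
        cuGraphicsResourceGetMappedPointer(&dst, &dst_size, pbo_res);

        CUDA_MEMCPY2D cpy = {
            .srcMemoryType = CU_MEMORYTYPE_DEVICE,
            .srcDevice     = src,
            .srcPitch      = src_pitch,
            .dstMemoryType = CU_MEMORYTYPE_DEVICE,
            .dstDevice     = dst,
            .dstPitch      = width_bytes,
            .WidthInBytes  = width_bytes,
            .Height        = height,
        };
        cuMemcpy2DAsync(&cpy, stream);

        cuGraphicsUnmapResources(1, &pbo_res, stream);
    }
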
wm4 fd82e14888 build: merge d3d11va and dxva2 hwaccel checks
We don't have any reason to disable either. Both are loaded dynamically
at runtime anyway. There is also no reason why dxva2 would disappear
from libavcodec any time soon.
2016-05-11 15:40:31 +02:00
wm4 bda111018c video: add IMGFMT_P010 alias
Gets rid of some silliness, and might be useful in the future.
2016-04-29 22:38:54 +02:00
Kevin Mitchell a7110862c8 vd_lavc: add d3d11va hwdec
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
api. Functions in common with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
2016-03-30 09:01:27 -07:00
wm4 3a015b9ec7 video: remove some useless old RGB formats
Some VOs had support for these - remove them.

Typically, these formats will have only some use in cases where using
RGB software conversion with libswscale is faster than letting the
VO/GPU do it (i.e. almost never). For the sake of testing this case,
keep IMGFMT_RGB565. This is the least messy format, because it has no
padding/alpha bits with unknown semantics.

Note that decoding to these formats still works. We'll let libswscale
repack the data to whatever the VO in use can take.
2016-01-25 10:43:35 +01:00
wm4 3973a953df sub: find GBRP format automatically when rendering to RGB
This removes the need to define IMGFMT_GBRAP, which fixes compilation
with the current Libav release.

This also makes it automatically pick up a GBRP format with the same bit
width. (Unfortunately, it seems libswscale does not support conversion
to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing
precision for video covered by subtitles in cases where this code is used.)

Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
Whether this is good or bad, it fixes behavior with alpha. Although I'm
not sure if the alpha range is really correct ([0,2^16-1] vs.
[0,255*256]). Keep in mind that libswscale doesn't even agree with the
way we do it.
2015-12-24 16:42:21 +01:00
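
A hedged sketch of such a descriptor-based search using libavutil's
pixdesc API; endianness handling and the exact fallback logic are
simplified:

    #include <libavutil/pixdesc.h>

    /* Find a planar RGB ("GBRP"-style) format whose component depth
     * matches the source, falling back to 8 bit if nothing matches.
     * Big-endian variants are not filtered out here. */
    static enum AVPixelFormat hyp_find_gbrp(int depth, int want_alpha)
    {
        const AVPixFmtDescriptor *d = NULL;
        while ((d = av_pix_fmt_desc_next(d))) {
            if (!(d->flags & AV_PIX_FMT_FLAG_RGB) ||
                !(d->flags & AV_PIX_FMT_FLAG_PLANAR))
                continue;
            if (!!(d->flags & AV_PIX_FMT_FLAG_ALPHA) != !!want_alpha)
                continue;
            if (d->comp[0].depth != depth)
                continue;
            return av_pix_fmt_desc_get_id(d);
        }
        return depth > 8 ? hyp_find_gbrp(8, want_alpha) : AV_PIX_FMT_NONE;
    }
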
wm4 6bec6ac558 sub: better alpha blending when rendering to alpha surfaces
This actually treats destination alpha correctly, and gives much better
results than before. I don't know if this is perfectly correct yet,
though. Slight difference with vo_opengl behavior suggests it might not
be.

Note that this does not affect VOs with true alpha support. vo_opengl
does not use this code at all, and does the alpha calculations in OpenGL
instead.
2015-12-24 14:43:23 +01:00
wm4 1dd7b7bddc video: remove VDA support
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
usable with older FFmpeg versions in some newer mpv releases.

VideoToolbox is supported even on slightly older OSX versions, and if
not, you still can run mpv without hw decoding.
2015-09-28 22:03:14 +02:00
Sebastien Zwickert 31b5a211f4 hwdec: add VideoToolbox support
VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working.
The code needs libavcodec support, which was added recently (to FFmpeg git;
libav doesn't support it).

Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
2015-08-05 17:47:30 +02:00
Marcin Kurczewski f43017bfe9 Update license headers
Signed-off-by: wm4 <wm4@nowhere>
2015-04-13 12:10:01 +02:00
wm4 8fff125422 RPI support
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.

Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.

This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.

vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.

This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.

This sets a precedent of enabling hw decoding by default, but only
if RPI support is compiled in (which hopefully it won't be
on desktop Linux platforms). While it's more or less required to use
hw decoding on the weak RPI, it causes more problems than it solves
on real platforms (Linux has the Intel GPU problem, OSX still has
some cases with broken decoding). So I can live with this compromise
of having different defaults depending on the platform.

Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
2015-03-29 16:09:56 +02:00
wm4 be5994a781 video: work around libswscale for PNG pixel formats
The intention is that we can test vo_opengl with high bit depth PNGs
better. This throws libswscale completely out of the loop; it was
previously needed in order to convert from big endian to little endian.

Also apply a minimal cleanup to fmt-conversion.c (unrelated).
2015-02-06 23:22:16 +01:00
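
A minimal sketch of the byte swap that replaces the libswscale round
trip, assuming 16 bit big-endian samples and a made-up helper name:

    #include <stddef.h>
    #include <stdint.h>

    /* Swap one row of 16 bit big-endian samples (e.g. RGB48BE from a PNG
     * decoder) to little endian in place. */
    static void hyp_swap16_row(uint8_t *row, size_t num_samples)
    {
        for (size_t i = 0; i < num_samples; i++) {
            uint8_t tmp = row[2 * i];
            row[2 * i] = row[2 * i + 1];
            row[2 * i + 1] = tmp;
        }
    }
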
wm4 a0caadd512 vo_opengl: handle grayscale input better, add YA16 support
Simply clamp off the U/V components in the colormatrix, instead of doing
something special in the shader.

Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
calculation overflowed with 32, add a component_bits field to the image
format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had
no bad consequences otherwise).
2015-01-21 19:29:18 +01:00
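
A tiny illustration of "clamping off" the U/V components, assuming a
3x4 row-major matrix (columns Y, U, V, offset); this is not vo_opengl's
actual matrix representation:

    /* Zero the chroma coefficients so undefined U/V input cannot affect
     * the RGB output of a grayscale source. */
    static void hyp_clamp_off_chroma(float m[3][4])
    {
        for (int row = 0; row < 3; row++) {
            m[row][1] = 0.0f; /* U coefficient */
            m[row][2] = 0.0f; /* V coefficient */
        }
    }
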
wm4 a7686e86ff video: remove swapped-endian image format aliases
Like the previous commit, this removes names only, not actual support
for these formats.
2014-11-05 01:52:20 +01:00
wm4 91471deecc video: remove aliases for some rarely referenced image formats
These formats are still supported; you just can't reference them
directly via defined constants. They are now handled via the generic
passthrough.

(If you want to use such a format, you either have to add the entry
back, or use AV_PIX_FMT_* directly.)
2014-11-05 01:52:19 +01:00
wm4 7333bc6536 video: passthrough unknown AVPixelFormats
This is a rather radical change: instead of maintaining a whitelist of
FFmpeg formats we support, we automatically support all formats.

In general, a format which doesn't have an explicit IMGFMT_* name will
be converted to a known format through libswscale, or will be handled
by code which can treat pixel formats in a generic way using the pixel
format description, like vo_opengl.

AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma
subsampling by 4 in both directions. Its component order is documented
as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples.
This means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so
4 bytes + 2 bytes). FFmpeg can actually handle this format with its
generic mechanism in an extremely awkward way, but it doesn't work for
us. Blacklist it, and hope no similar formats will be added in the
future.

Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value of
500. More is not allowed, and there are some fixed size arrays that need
to contain any possible format (look for IMGFMT_END dependencies).

We could make this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
through the whole codebase. But for now, this is better, because we
can compensate for formats missing in Libav or older FFmpeg versions,
like AV_PIX_FMT_RGB0 and others.
2014-11-05 01:52:19 +01:00
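
A hedged sketch of such an offset-based passthrough mapping; the base
value and helper names are made up, and only the 500 cap comes from the
text above:

    #include <libavutil/pixfmt.h>

    /* FFmpeg pixel formats without an explicit IMGFMT_* name are mapped
     * into a reserved IMGFMT range by a fixed offset, capped at 500 so
     * fixed-size tables stay bounded. */
    #define HYP_IMGFMT_AVPIXFMT_BASE 1000
    #define HYP_AVPIXFMT_MAX 500

    static int hyp_pixfmt_to_imgfmt(enum AVPixelFormat fmt)
    {
        if (fmt <= AV_PIX_FMT_NONE || fmt >= HYP_AVPIXFMT_MAX)
            return 0; /* unsupported or out of range */
        return HYP_IMGFMT_AVPIXFMT_BASE + fmt;
    }

    static enum AVPixelFormat hyp_imgfmt_to_pixfmt(int imgfmt)
    {
        int fmt = imgfmt - HYP_IMGFMT_AVPIXFMT_BASE;
        if (fmt <= AV_PIX_FMT_NONE || fmt >= HYP_AVPIXFMT_MAX)
            return AV_PIX_FMT_NONE;
        return (enum AVPixelFormat)fmt;
    }
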
wm4 423a7de676 video: initial dxva2 support
Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
it yourself.
2014-10-25 19:25:22 +02:00
Stefano Pigozzi 6aac17cebb vda: only support the new hwaccel 1.2 API (remove old code)
Since the new hwaccel API is now merged in ffmpeg's stable release, we can
finally remove support for the old API.

I pretty much kept lu_zero's new code unchanged and just added some error
printing (that we had with the old glue code) to make the life of our users
less miserable.
2014-08-01 10:38:18 +02:00
wm4 a9538e17ad video: synchronize mpv rgb pixel format names with ffmpeg names
This affects packed RGB formats up to 16 bits per pixel. The old mplayer
names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
MSB-to-LSB.

Nothing should change with this commit, i.e. no bit order or endian bugs
should be added or fixed. In some cases, the name stays the same, even
though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
affects the user-visible names too; this might cause confusion.
2014-06-14 10:03:04 +02:00
Luca Barbato e0e79a2e7e vda: Hwaccel 1.2 support
Use the new context and the default functions provided.
2014-05-12 12:59:16 +02:00
wm4 3db3653227 video: fix FFmpeg or Libav being a special snowflake 2014-03-16 16:23:12 +01:00
wm4 3ec7f528c4 vd_lavc: remove compatibility crap
All this code was needed for compatibility with very old libavcodec
versions only (such as Libav 9).

Includes some now-possible simplifications too.
2014-03-16 13:19:19 +01:00
wm4 64278128d8 video/fmt-conversion.c: remove unknown pixel format messages
This removes the messages printed on unknown pixel formats.
Passing a mp_log to them would be too messy. Actually, this is a good
change, because in the past we often had trouble with these messages
printed too often (causing terminal spam etc.), and printing warnings or
error messages on the caller side is much cleaner.

vd_lavc.c had a change earlier to print an error message if a decoder
outputs an unsupported pixel format.
2013-12-21 20:50:11 +01:00
wm4 2c08bf1bd7 Reduce recursive config.h inclusions in headers
In my opinion, config.h inclusions should be kept to a minimum. MPlayer
code really liked including config.h everywhere, though, even in often
used header files. Try to reduce this.
2013-12-18 17:12:21 +01:00
wm4 0112143fda Split mpvcore/ into common/, misc/, bstr/ 2013-12-17 02:39:45 +01:00
wm4 597b8a3550 Take care of some libavutil deprecations, drop support for FFmpeg 1.0
PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
enum PixelFormat -> enum AVPixelFormat
Loosen some version checks for certain newer pixel formats.
av_pix_fmt_descriptors -> av_pix_fmt_desc_get

This removes support for FFmpeg 1.0.x, which is even older than
Libav 9.x. Support for it probably was already broken, and its
libswresample was rejected by our build system anyway because it's
broken.

Mostly untested; it does compile with Libav 9.9.
2013-11-29 17:39:57 +01:00
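
A small before/after sketch of the descriptor lookup change, with a
made-up helper function:

    #include <libavutil/pixdesc.h>

    /* old: const AVPixFmtDescriptor *d = &av_pix_fmt_descriptors[fmt];
     * new: look the descriptor up through av_pix_fmt_desc_get(). */
    static int hyp_bits_per_pixel(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
        return d ? av_get_bits_per_pixel(d) : 0;
    }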