Commit Graph

18 Commits

wm4 de22d2b1ba vf_vavpp: make it work with vo_opengl and software decoding
vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
has to be called to possibly load it. This makes vf_vavpp work with
software decoding. (Hardware decoding loads the backend before the
filter is initialized, so this case is different.)

Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
fails, the info will be left blank.
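
A rough sketch of the lazy-load pattern this describes; only hwdec_request_api() and VFCTRL_GET_HWDEC_INFO are taken from the text above, the surrounding filter code is simplified:

    // On filter init: query the hwdec info, then request the API,
    // which makes vo_opengl load its hwdec backend on demand.
    static int vf_open(struct vf_instance *vf)
    {
        struct mp_hwdec_info hwdec = {0};
        // Return value deliberately unchecked: if the call fails,
        // the info simply stays blank and we bail out below.
        vf_control(vf, VFCTRL_GET_HWDEC_INFO, &hwdec);
        hwdec_request_api(&hwdec, "vaapi");
        if (!hwdec.vaapi_ctx)
            return 0; // no VA-API context available; init fails
        return 1;
    }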
2013-11-22 18:06:34 +01:00
wm4 571e697a7c vo_opengl: add infrastructure for hardware decoding OpenGL interop
Most hardware decoding APIs provide some OpenGL interop. This allows
using vo_opengl without having to read the video data back from the GPU.

This requires adding a backend for each hardware decoding API. (Each
backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
as a set of OpenGL textures.

Add infrastructure to support this. The next commit will add support for
VA-API.
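
Conceptually, each backend is a small driver exposing create/map/destroy hooks. A hedged sketch (struct and field names are illustrative; the text only specifies that backends expose video as a set of GL textures):

    #include <GL/gl.h>

    struct gl_hwdec;   // per-instance interop state (opaque here)
    struct mp_image;   // decoded frame carrying a hw surface

    struct gl_hwdec_driver {
        const char *api_name;               // e.g. "vaapi"
        int (*create)(struct gl_hwdec *hw); // set up the GL interop
        // Map the frame's hw surface, writing one GL texture per
        // plane into out_textures.
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                         GLuint *out_textures);
        void (*destroy)(struct gl_hwdec *hw);
    };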
2013-11-04 00:11:07 +01:00
wm4 24897eb94c video: check profiles with hardware decoding
We had some code for checking profiles earlier, which was removed in
commits 2508f38 and adfb71b. These commits mentioned that (working) hw
decoding was sometimes prevented due to profile checking, but I can no
longer find the samples that showed this behavior. Also, I have changed
my opinion: checking the profiles is something that should be done,
because it allows a cleaner fallback to software decoding.

The checks roughly follow VLC's vdpau profile checks, although we do
not check codec levels. (VLC's profile checks aren't necessarily
completely correct, but they're a welcome help anyway.)

Add a --vd-lavc-check-hw-profile option, which skips the profile check.
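
As a rough illustration of the check (table contents and helper names are made up for the sketch, not mpv's actual code):

    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    static const struct {
        enum AVCodecID codec;
        int profile;
    } hw_profiles[] = {
        {AV_CODEC_ID_H264, FF_PROFILE_H264_MAIN},
        {AV_CODEC_ID_H264, FF_PROFILE_H264_HIGH},
        // ... further per-codec entries, roughly following VLC's list
    };

    // Returns false to trigger the fallback to software decoding.
    static bool hw_profile_ok(bool check_profile, AVCodecContext *avctx)
    {
        if (!check_profile) // --vd-lavc-check-hw-profile=no
            return true;
        for (size_t i = 0; i < sizeof(hw_profiles) / sizeof(hw_profiles[0]); i++) {
            if (hw_profiles[i].codec == avctx->codec_id &&
                hw_profiles[i].profile == avctx->profile)
                return true;
        }
        return false;
    }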
2013-11-01 17:33:33 +01:00
wm4 641e94cd27 vaapi: allow GPU read-back with --hwdec=vaapi-copy
This code is actually quite inefficient: it reuses the (slow, simple)
screenshot code. It uses an inefficient method to read the image
(vaGetImage() instead of vaDeriveImage()), allocates new memory for
each frame that is read, and it tries all image formats again each
time.

Also, in my tests it always picked NV12 as image format, which is not
ideal if you actually want to filter the video, and vo_xv can't handle
this format without conversion either.

However, a user confirmed that it worked for him, so everything is fine.
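
The slow path boils down to something like this (a sketch with error handling trimmed; the actual code also negotiates the image format):

    #include <va/va.h>

    // Read one decoded surface back to system memory the slow way:
    // a fresh VAImage per call, filled via vaGetImage() rather than
    // mapped in place via vaDeriveImage().
    static void read_back_frame(VADisplay display, VASurfaceID surface,
                                VAImageFormat *fmt, int w, int h)
    {
        VAImage image;
        if (vaCreateImage(display, fmt, w, h, &image) != VA_STATUS_SUCCESS)
            return;
        if (vaGetImage(display, surface, 0, 0, w, h, image.image_id)
                == VA_STATUS_SUCCESS) {
            void *data;
            if (vaMapBuffer(display, image.buf, &data) == VA_STATUS_SUCCESS) {
                // ... copy the planes into a newly allocated mp_image ...
                vaUnmapBuffer(display, image.buf);
            }
        }
        vaDestroyImage(display, image.image_id); // redone for every frame
    }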
2013-09-25 13:53:42 +02:00
wm4 7c3f1ffc44 vd_lavc: allow process_image to change image format
This will allow GPU read-back with process_image.

We have to restructure how init_vo() works. Instead of initializing the
VO before process_image is called, rename init_vo() to
update_image_params(), and let it update the params only. Then we really
initialize the VO after process_image.

As a consequence of these changes, already decoded hw frames are
correctly unreferenced if creation of the filter chain fails. This
could trigger assertions on VO uninitialization, because it's not
allowed to reference hw frames past VO lifetime.
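
In outline, the reordered path looks roughly like this (function names other than update_image_params/process_image are illustrative):

    // After the restructuring: update params first, let process_image
    // transform the frame (possibly changing its format), and only
    // then initialize the VO against the resulting format.
    static struct mp_image *finish_frame(struct dec_video *vd,
                                         struct mp_image *img)
    {
        update_image_params(vd, img);     // formerly init_vo(): params only
        img = process_image(vd, img);     // may return a different imgfmt
        if (reinit_vo_chain(vd, img) < 0) {
            talloc_free(img);             // unref hw frame before VO teardown
            return NULL;
        }
        return img;
    }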
2013-09-25 13:53:42 +02:00
wm4 006c2f66e1 video/decode: change fix_image callback
This might make it slightly easier when trying to implement surface
read-back for hardware decoding.
2013-08-15 23:40:02 +02:00
wm4 0da9638576 video/decode: pass parameters directly to hwdec allocate_image callback
Instead of passing an AVFrame. This also moves the mysterious logic about
the size of the allocated image to common code, instead of duplicating
it everywhere.
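
The shape of the change, sketched (signatures are illustrative):

    struct vd_lavc_hwdec_functions {
        // Before: struct mp_image *(*allocate_image)(struct lavc_ctx *,
        //                                            AVFrame *);
        // After: common code computes the (padded) allocation size once
        // and passes plain parameters instead of the AVFrame.
        struct mp_image *(*allocate_image)(struct lavc_ctx *ctx,
                                           int imgfmt, int w, int h);
    };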
2013-08-15 23:40:02 +02:00
wm4 2827295703 video: add vaapi decode and output support
This is based on the MPlayer VA API patches. To be exact, it's based on
a very stripped down version of commit f1ad459a263f8537f6c from
git://gitorious.org/vaapi/mplayer.git.

This doesn't contain useless things like benchmarking hacks and the
demo code for GLX interop. Also, unlike in the original patch, decoding
and video output are split into separate source files (the separation
between decoding and display also makes pixel format hacks unnecessary).

On the other hand, some features not present in the original patch were
added, like screenshot support.

VA API is rather bad for actual video output. Dealing with older libva
versions or the completely broken vdpau backend doesn't help. OSD is
low quality and should be rather slow. In some cases, only one of OSD
and subtitles can be shown at a time (because OSD is drawn first, OSD
is preferred).

Also, libva can't decide whether it accepts straight or premultiplied
alpha for OSD sub-pictures: the vdpau backend seems to assume
premultiplied, while a native vaapi driver uses straight. So I picked
straight alpha. It doesn't matter much, because the blending code for
straight alpha I added to img_convert.c is probably buggy, and ASS
subtitles might be blended incorrectly.
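
For reference, these are the two blending conventions being mixed up (per-channel scalar code, alpha in [0,1], purely illustrative):

    // Straight (non-premultiplied) alpha: the source color is scaled
    // by alpha at blend time.
    static inline float blend_straight(float src, float dst, float a)
    {
        return src * a + dst * (1.0f - a);
    }

    // Premultiplied alpha: the source color already contains the
    // alpha factor, so it is only added.
    static inline float blend_premult(float src_pm, float dst, float a)
    {
        return src_pm + dst * (1.0f - a);
    }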

Really good video output with VA API would probably use OpenGL and the
GL interop features, but at this point you might just use vo_opengl.
(Patches for making HW decoding work with vo_opengl have a chance of
being accepted.)

Despite these issues, decoding seems to work ok. I still got tearing
on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
tested with the vdpau vaapi wrapper on a nvidia system; however this
was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
support over native VDPAU.)
2013-08-12 01:12:02 +02:00
wm4 8fe4790ec8 video: redo hw decoding initialization, add --hwdec=auto
Change how the HW decoding stuff is organized, the way it's initialized
in particular. Instead of duplicating the list of supported codecs for
hwaccel decoders, add a probe function which allows each decoder to
report whether it supports a given codec.

Add an "auto" choice to the --hwdec option, which automatically enables
hardware decoding if libavcodec and/or the VO supports it.

What mpv prints on the terminal changes a bit. Now it will just print
a single line whether hw decoding is used or not (and nothing at all if
no hw decoding at all was requested). The pretty violent fallback from
hw decoding to software decoding is still quite verbose and evil-looking
though.
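
A hedged sketch of the probing (the backend list and the probe() signature are illustrative; the text only specifies a per-decoder probe function):

    struct vd_lavc_hwdec {
        // Returns >= 0 if this backend can handle the given codec.
        int (*probe)(struct vd_lavc_hwdec *hwdec, const char *codec);
    };

    // --hwdec=auto: pick the first backend whose probe() succeeds,
    // otherwise stay with software decoding.
    static struct vd_lavc_hwdec *probe_hwdec(struct vd_lavc_hwdec **all,
                                             const char *codec)
    {
        for (int i = 0; all[i]; i++) {
            if (all[i]->probe(all[i], codec) >= 0)
                return all[i];
        }
        return NULL; // software decoding
    }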
2013-08-11 23:59:18 +02:00
wm4 5accc5e7c1 vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API
Move the decoder parts from vo_vdpau.c to a new file, vdpau_old.c. The
file is named this way because it's written against the "old"
libavcodec vdpau pseudo-decoder API (e.g. "h264_vdpau").

Add support for the "new" libavcodec vdpau support. This was recently
added and replaces the "old" vdpau parts. (In fact, Libav is about to
deprecate and remove the "old" API without a deprecation grace period,
so we have to support the new one now. Moreover, there will probably be
no Libav release which supports both, so the transition is even less
smooth than we could hope, and we have to support both the old and the
new API.)

Whether the old or new API is used is checked by a configure test: if
the new API is found, it is used, otherwise the old API is assumed.
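
Sketched, the switch might look like this (the HAVE_* symbol stands in for whatever the configure test actually defines):

    struct vd_lavc_hwdec;
    extern const struct vd_lavc_hwdec mp_vd_lavc_vdpau;     // "new" path
    extern const struct vd_lavc_hwdec mp_vd_lavc_vdpau_old; // "old" path

    static const struct vd_lavc_hwdec *vdpau_hwdec(void)
    {
    #if HAVE_AV_CODEC_NEW_VDPAU_API // set by the configure test
        return &mp_vd_lavc_vdpau;
    #else
        return &mp_vd_lavc_vdpau_old;
    #endif
    }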

Some details might be handled differently. Especially display preemption
is a bit problematic with the "new" libavcodec vdpau support: it wants
to keep a pointer to a specific vdpau API function (which can be driver
specific, because preemption might switch drivers). Also, surface IDs
are now directly stored in AVFrames (and mp_images), so they can't be
forced to VDP_INVALID_HANDLE on preemption. (This changes even with
older libavcodec versions, because mp_image always uses the newer
representation to make vo_vdpau.c simpler.)

Decoder initialization in the new code tries to deal with codec
profiles, while the old code always uses the highest profile per codec.

Surface allocation changes. Since the decoder won't call config() in
vo_vdpau.c on video size changes anymore, we allow allocating surfaces
of arbitrary size instead of locking them to the size the VO was
configured with. The non-hwdec code also has slightly different
allocation behavior now.

Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
doesn't work anymore (a warning suggesting the --hwdec option is
printed instead).
2013-07-28 19:25:07 +02:00
wm4 3382a6f6e4 video: add a new method to configure filters and VOs
The filter chain and the video outputs have config() functions. They
are strictly limited to transferring the video size and format. Other
parameters (like color levels) have to be transferred separately.

Improve upon this by introducing a separate set of reconfig() functions,
which use mp_image_params to carry format parameters. This struct
contains all image format related parameters from config(), plus
additional parameters such as colorspace.

Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
an example/test case, but vo_opengl will need it later.

The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
information is now handed to the VOs via reconfig(). The getter,
VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
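
In outline (a sketch; the fields shown are the ones the text names, plus plausible companions):

    // One struct carries what config() used to take as separate
    // arguments, plus the extra parameters like colorspace.
    struct mp_image_params {
        int imgfmt;              // IMGFMT_* pixel format
        int w, h;                // coded size
        int d_w, d_h;            // display size (aspect applied)
        enum mp_csp colorspace;  // replaces VOCTRL_SET_YUV_COLORSPACE
        enum mp_csp_levels colorlevels;
    };

    int vf_reconfig(struct vf_instance *vf, struct mp_image_params *p);
    int vo_reconfig(struct vo *vo, struct mp_image_params *p, int flags);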
2013-06-28 20:34:46 +02:00
wm4 514d8a7c9d video: make use of libavcodec refcounting
Now lavc_dr1.c is not used anymore if libavcodec is recent enough.
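
What this means in practice, as a hedged sketch (the wrapper function is assumed, not necessarily mpv's exact API):

    #include <libavutil/frame.h>

    struct mp_image;
    struct mp_image *mp_image_from_av_frame(AVFrame *frame); // assumed

    // With refcounted frames (avctx->refcounted_frames = 1), decoder
    // buffers can be wrapped instead of deep-copied.
    static struct mp_image *wrap_frame(AVFrame *frame)
    {
        struct mp_image *img = mp_image_from_av_frame(frame);
        av_frame_unref(frame); // img holds its own AVBufferRef references
        return img;
    }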
2013-03-13 23:51:30 +01:00
wm4 4d016a92c8 core: redo how codecs are mapped, remove codecs.conf
Use codec names instead of FourCCs to identify codecs. Rewrite how
codecs are selected and initialized. Now each decoder module exports a
list of decoders (and the codec each supports) via add_decoders(). The
order matters: the first decoder listed for a given codec is preferred
over the others. E.g. all ad_mpg123 decoders are preferred over
ad_lavc's, because ad_mpg123 comes first in the mpcodecs_ad_drivers
array. Likewise, decoders within ad_lavc that are enumerated first by
Likewise, decoders within ad_lavc that are enumerated first by
libavcodec (using av_codec_next()) are preferred. (This is actually
critical to select h264 software decoding by default instead of vdpau.
libavcodec and ffmpeg/avconv use the same method to select decoders by
default, so we hope this is sane.)
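
A hedged sketch of the registration hook (the helper's exact signature is assumed):

    // Each decoder module appends (family, codec, decoder) entries to
    // a shared list; list order encodes preference.
    static void add_decoders(struct mp_decoder_list *list)
    {
        mp_add_decoder(list, "mpg123", "mp3", "mpg123",
                       "mpg123 MPEG audio decoder");
        // ad_mpg123 precedes ad_lavc in mpcodecs_ad_drivers, so this
        // entry beats ad_lavc's "mp3" decoder.
    }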

The codec names follow libavcodec's codec names as defined by
AVCodecDescriptor.name (see libavcodec/codec_desc.c). Some decoders
have names different from the canonical codec name. The AVCodecDescriptor
API is relatively new, so we need a compatibility layer for older
libavcodec versions for codec names that are referenced internally,
and which are different from the decoder name. (Add a configure check
for that, because checking versions is getting way too messy.)

demux/codec_tags.c is generated from the former codecs.conf (minus
"special" decoders like vdpau, and excluding the mappings that are the
same as those in libavformat's exported RIFF tables). It contains
all the mappings from FourCCs to codec name. This is needed for
demux_mkv, demux_mpg, demux_avi and demux_asf. demux_lavf will set the
codec as determined by libavformat, while the other demuxers have to do
this on their own, using the mp_set_audio/video_codec_from_tag()
functions. Note that the sh_audio/video->format members don't uniquely
identify the codec anymore, and sh->codec takes over this role.
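
On the demuxer side, the lookup is a single call (sketch; the function name is from the text, its signature is assumed):

    // Non-lavf demuxers resolve the container tag themselves, via the
    // table generated into demux/codec_tags.c.
    static void set_codec_from_tag(struct sh_video *sh)
    {
        // sh->format (the tag) no longer identifies the codec by
        // itself; this fills sh->codec with the canonical codec name.
        mp_set_video_codec_from_tag(sh);
    }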

Replace the --ac/--vc/--afm/--vfm options with new --vd/--ad options,
which cover the functionality of the removed switches.

Note: there's no CODECS_FLAG_FLIP flag anymore. This means some obscure
container/video combinations (e.g. the sample Film_200_zygo_pro.mov)
are played flipped. ffplay/avplay doesn't handle this properly either,
so we don't care and blame ffmpeg/libav instead.
2013-02-10 17:25:56 +01:00
wm4 4056765643 vd_lavc: remove -lavdopts vstats suboption
This printed per-frame statistics into a file, like bitrate or frame
type. Not very useful and accesses obscure AVCodecContext fields
(danger of deprecation/breakage), so get rid of it.
2013-01-13 23:30:12 +01:00
wm4 8751a0e261 video: decouple internal pixel formats from FourCCs
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the
string 'YV12' interpreted as unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)

Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.conf entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)
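
The split, sketched (enum values are illustrative; MP_FOURCC follows the usual little-endian FourCC packing):

    #include <stdint.h>

    // img_fourcc.h side: FourCCs remain available where they are
    // genuinely needed (demuxers, raw video).
    #define MP_FOURCC(a, b, c, d) \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

    #define MP_FOURCC_YV12 MP_FOURCC('Y', 'V', '1', '2')

    // Pixel format side: a plain enum, with redundant aliases
    // (YV12/I420/IYUV) collapsed into a single entry.
    enum mp_imgfmt {
        IMGFMT_NONE = 0,
        IMGFMT_420P,   // was IMGFMT_YV12 (and I420/IYUV)
        IMGFMT_420P10, // bit depth no longer encoded in the value
    };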

The TV code has not been tested.

Some corrective changes regarding endian and other image format flags
creep in.
2013-01-13 20:04:11 +01:00
wm4 c54fc507da video/filter: change filter API, use refcounting, remove filter DR
Change the entire filter API to use reference counted images instead
of vf_get_image().

Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).

Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check if the
image is writeable (with mp_image_is_writeable()), and do true in-place
if that's the case. This affects filters like vf_gradfun and vf_sub.
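
The replacement logic in a nutshell (mp_image_is_writeable() is from the text; the copy helper and the per-pixel operation are illustrative):

    // True in-place filtering: write into the frame only if we hold
    // the sole reference, otherwise copy first.
    static struct mp_image *filter(struct vf_instance *vf,
                                   struct mp_image *img)
    {
        if (!mp_image_is_writeable(img)) {
            struct mp_image *copy = mp_image_new_copy(img);
            talloc_free(img);
            img = copy;
        }
        modify_pixels(img); // hypothetical in-place operation
        return img;
    }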

Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
2013-01-13 20:04:10 +01:00
wm4 881f82dd33 vd_lavc: use refcounting
Note that if the codec doesn't support DR1, the image has to be copied.
There is no other way to guarantee that the image will be valid after
decoding the next image.

The only important codec that doesn't support DR1 yet is rawvideo. It's
likely that ffmpeg/Libav will fix this at some time. For now, this
decoder uses an evil hack and puts pointers to the packet data into the
returned frame. This means the image actually becomes invalid as soon
as the corresponding video packet is freed, so copying the image is the
only reasonable thing anyway.
2013-01-13 17:39:32 +01:00
wm4 a8e69707f7 vd_lavc: add DR1 support
Replace libavcodec's native buffer allocation with code taken from
ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is directly
copied from cmdutils.c. Note that this is quite arcane code, which
contains some workarounds for decoder bugs and the like. This is not
really a maintenance burden, since fixes from ffmpeg can be directly
applied to the code in lavc_dr1.c.
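
Hooking it up amounts to installing the old-API callbacks (a sketch; the callback names here are assumed, the fields are the pre-refcounting libavcodec API):

    #include <libavcodec/avcodec.h>

    static void setup_dr1(AVCodecContext *avctx, void *pool)
    {
        // The implementations behind these pointers are the ones
        // copied into lavc_dr1.c.
        avctx->get_buffer = mp_codec_get_buffer;         // assumed name
        avctx->release_buffer = mp_codec_release_buffer; // assumed name
        avctx->opaque = pool; // per-decoder buffer-pool state
    }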

It's unknown why libavcodec doesn't provide such a function directly.
avcodec_default_get_buffer() can't be reused for various reasons.
There's some hope that the work known as The Evil Plan [1] will make
custom get_buffer implementations unneeded.

The DR1 support as of this commit does nothing. A future commit will
use it to implement ref-counting for mp_image (similar to how AVFrame
will be ref-counted with The Evil Plan.)

[1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html
2013-01-13 17:39:32 +01:00