mirror of https://github.com/mpv-player/mpv synced 2025-01-01 04:12:25 +00:00
Commit Graph

22 Commits

Author SHA1 Message Date
wm4
8e172afc8f vd_lavc: remove lowres decoding
This was a "broken misfeature" according to Libav developers. It wasn't
implemented for modern codecs (like h264), and was removed from Libav a
while ago (the AVCodecContext field has been marked as deprecated and its
value is ignored). FFmpeg still supports it, but it isn't very useful for
the aforementioned reasons.

Remove the code to enable it.
2013-01-13 23:29:30 +01:00
wm4
bd6470ec6a vd_lavc: prefer AVFrame over AVCodecContext fields
This is more correct. Not all frame-specific fields are available in
AVFrame; some, such as colorspace and color_range, are still queried
through the decoder context.
2013-01-13 20:04:13 +01:00
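
As a rough illustration of the split described above (using simplified
stand-in structs, not the real libavcodec types): per-frame properties are
read from the decoded frame itself, and only fields the frame doesn't
carry, such as colorspace and color_range, are read from the decoder
context.

    #include <stdio.h>

    /* Simplified stand-ins; the field names mirror the libavcodec ones
     * discussed above, but this is not the real API. */
    struct frame   { int width, height, format; };
    struct context { int width, height, pix_fmt, colorspace, color_range; };

    /* Prefer the frame's own fields where they exist; colorspace and
     * color_range only live on the decoder context here. */
    static void describe(const struct frame *f, const struct context *c)
    {
        int w   = f->width  ? f->width  : c->width;
        int h   = f->height ? f->height : c->height;
        int fmt = f->format >= 0 ? f->format : c->pix_fmt;
        printf("%dx%d fmt=%d colorspace=%d range=%d\n",
               w, h, fmt, c->colorspace, c->color_range);
    }

    int main(void)
    {
        struct context c = { 1280, 720, 0, 1, 1 };
        struct frame f   = { 1280, 720, 0 };
        describe(&f, &c);
        return 0;
    }
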
wm4
8751a0e261 video: decouple internal pixel formats from FourCCs
mplayer's video chain traditionally used FourCCs for pixel formats. For
example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined as the
string 'YV12' interpreted as an unsigned int. Additionally, it used to
encode information into the numeric values of some formats. The RGB
formats had their bit depth and endian encoded into the least
significant byte. Extended planar formats (420P10 etc.) had chroma
shift, endian, and component bit depth encoded. (This has been removed
in recent commits.)

Replace the FourCC mess with a simple enum. Remove all the redundant
formats like YV12/I420/IYUV. Replace some image format names by
something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

Add img_fourcc.h, which contains the old IDs for code that actually uses
FourCCs. Change the way demuxers that output raw video identify the
video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
format. Like the previous hack, this is supposed to avoid the need for
a complete codecs.conf entry per format, or other lookup tables. (Note
that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT
raw video, but this is still considered better than adding a raw video
decoder - even if trivial, it would be full of annoying lookup tables.)

The TV code has not been tested.

Some corrective changes regarding endian and other image format flags
creep in.
2013-01-13 20:04:11 +01:00
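
A self-contained sketch of the difference between the two schemes; the
FOURCC macro and the enum members below are illustrative stand-ins, not
the actual mpv definitions.

    #include <stdio.h>

    /* Old scheme: a pixel format was a FourCC, i.e. four characters packed
     * into an unsigned int (a generic MKTAG-style macro, shown only to
     * illustrate the encoding). */
    #define FOURCC(a, b, c, d) \
        ((unsigned)(a) | ((unsigned)(b) << 8) | \
         ((unsigned)(c) << 16) | ((unsigned)(d) << 24))
    #define OLD_IMGFMT_YV12 FOURCC('Y', 'V', '1', '2')

    /* New scheme: a plain enum, with no information encoded in the values.
     * The member names are examples, not the full real list. */
    enum mp_imgfmt {
        IMGFMT_NONE = 0,
        IMGFMT_420P,        /* replaces YV12/I420/IYUV */
        IMGFMT_PAL8,
        IMGFMT_RGB24,
    };

    int main(void)
    {
        printf("old IMGFMT_YV12 as FourCC: 0x%08x\n", OLD_IMGFMT_YV12);
        printf("new IMGFMT_420P enum value: %d\n", (int)IMGFMT_420P);
        return 0;
    }
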
wm4
aa6ba6372c mp_image: change how palette is handled
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats
are supposed to be palettized:

    IMGFMT_BGR8
    IMGFMT_RGB8
    IMGFMT_BGR4_CHAR
    IMGFMT_RGB4_CHAR
    IMGFMT_BGR4
    IMGFMT_RGB4

Of these, only BGR8 and RGB8 are actually treated as palettized in some
way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB
formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy
hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale
does not support AV_PIX_FMT_PAL8 output in the first place.)

Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps
to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
without any palette use. Enabling it in vo_x11 or using it as vf_scale
input seems to give correct results.
2013-01-13 20:04:11 +01:00
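
For context, a rough model of how a palettized image is laid out (this
mirrors AV_PIX_FMT_PAL8: one plane of 8-bit indices plus a table of 256
packed 32-bit colors); the struct and helper names are stand-ins, not the
mp_image API.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative PAL8-style image: one plane holds indices, the other
     * holds the palette. */
    struct pal8_image {
        int w, h, stride;
        uint8_t  *indices;   /* w*h palette indices (with stride) */
        uint32_t *palette;   /* 256 packed colors */
    };

    static uint32_t lookup(const struct pal8_image *img, int x, int y)
    {
        return img->palette[img->indices[y * img->stride + x]];
    }

    int main(void)
    {
        struct pal8_image img = { 4, 4, 4,
                                  calloc(16, 1),
                                  calloc(256, sizeof(uint32_t)) };
        img.palette[0] = 0xFF000000u;   /* index 0 -> opaque black */
        printf("pixel (0,0) -> 0x%08x\n", (unsigned)lookup(&img, 0, 0));
        free(img.indices);
        free(img.palette);
        return 0;
    }
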
wm4
ab94c64ed2 mp_image: simplify image allocation
mp_image_alloc_planes() allocated images with minimal stride, even if
the resulting stride was unaligned. It was the responsibility of
vf_get_image() to set an image's width to something larger than
required to get an aligned stride, and then crop it. Always allocate
with aligned strides instead.

Get rid of IMGFMT_IF09 special handling. This format is not used
anymore. (IF09 has 4x4 chroma sub-sampling, and that is what it was
mainly used for - this is still supported.) Get rid of swapped chroma
plane allocation. This is not used anywhere, and VOs like vo_xv,
vo_direct3d and vo_sdl do their own swapping.

Always round chroma width/height up instead of down. Consider 4:2:0 and
an odd image size. For luma, the size was left odd, and the chroma
size was rounded down. This doesn't make sense, because chroma would be
missing for the bottom/right border.

Remove mp_image_new_empty() and mp_image_alloc_planes(); they were not
used anymore, except in draw_bmp.c. (It's still allowed to set up
mp_images manually; you just can't allocate image data with them
anymore - this is also done in draw_bmp.c.)
2013-01-13 20:04:10 +01:00
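
A small standalone sketch of the two rules described above (strides rounded
up to an alignment, chroma sizes rounded up rather than down); the alignment
value and helper names are illustrative, not the actual allocation code.

    #include <stdio.h>

    #define ALIGNMENT 16
    #define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

    /* Chroma plane size for a given shift (1 in both directions for 4:2:0),
     * rounded up so the border is covered for odd luma sizes. */
    static int chroma_size(int luma, int shift)
    {
        return (luma + (1 << shift) - 1) >> shift;
    }

    int main(void)
    {
        int w = 853, h = 481;   /* odd-sized 4:2:0 image */
        printf("luma:   %dx%d stride=%d\n", w, h, ALIGN_UP(w, ALIGNMENT));
        printf("chroma: %dx%d stride=%d\n",
               chroma_size(w, 1), chroma_size(h, 1),
               ALIGN_UP(chroma_size(w, 1), ALIGNMENT));
        return 0;
    }
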
wm4
1f32250629 vd_lavc: make non-reference frames writeable
This allows avoiding a copy.

The logic of using the AVFrame.reference field when buffer hints are
not available is similar to mplayer-svn's direct rendering code. This is
needed because h264 decoding doesn't set buffer hints.

<wm4> michaelni: couldn't it set buffer hints from AVFrame.reference then?
<michaelni> i see no reason ATM why that wouldnt be possible

OK...
2013-01-13 20:04:10 +01:00
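
A sketch of the decision described above, with made-up flag values standing
in for libavcodec's buffer hint constants: a frame counts as non-reference
(and therefore safe to write to) only if the decoder will not read from it
again.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in hint flags; the real code uses libavcodec's constants. */
    #define HINT_VALID     1
    #define HINT_PRESERVE  2
    #define HINT_REUSABLE  4

    static bool frame_is_non_reference(int buffer_hints, int frame_reference)
    {
        if (buffer_hints & HINT_VALID)
            return !(buffer_hints & (HINT_PRESERVE | HINT_REUSABLE));
        /* h264 doesn't set buffer hints, so fall back to AVFrame.reference. */
        return !frame_reference;
    }

    int main(void)
    {
        printf("%d\n", frame_is_non_reference(HINT_VALID, 0)); /* 1: writeable */
        printf("%d\n", frame_is_non_reference(0, 1));          /* 0: reference */
        return 0;
    }
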
wm4
c54fc507da video/filter: change filter API, use refcounting, remove filter DR
Change the entire filter API to use reference counted images instead
of vf_get_image().

Remove filter "direct rendering". This was useful for vf_expand and (in
rare cases) vf_sub: DR allowed these filters to pass a cropped image to
the filters before them. Then, on filtering, the image was "uncropped",
so that black bars could be added around the image without copying. This
means that in some cases, vf_expand will be slower (-vf gradfun,expand
for example).

Note that another form of DR used for in-place filters has been replaced
by simpler logic. Instead of trying to do DR, filters can check if the
image is writeable (with mp_image_is_writeable()), and do true in-place
if that's the case. This affects filters like vf_gradfun and vf_sub.

Everything has to support strides now. If something doesn't, making a
copy of the image data is required.
2013-01-13 20:04:10 +01:00
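
A minimal sketch of the in-place pattern mentioned above, with stand-in
types and a simplified writeability check (the real mp_image and
mp_image_is_writeable() are more involved): the filter modifies the frame
directly only when it holds the sole reference.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct image { int refcount; int w, h, stride; uint8_t *data; };

    static bool image_is_writeable(const struct image *img)
    {
        return img->refcount == 1;   /* sole owner -> safe to modify */
    }

    static void filter_invert(struct image *img)
    {
        if (!image_is_writeable(img)) {
            /* A real filter would allocate a new image and copy into it;
             * omitted here for brevity. */
            printf("shared reference: copy before filtering\n");
            return;
        }
        for (int y = 0; y < img->h; y++)
            for (int x = 0; x < img->w; x++)
                img->data[y * img->stride + x] ^= 0xFF;   /* true in-place */
    }

    int main(void)
    {
        uint8_t buf[16] = {0};
        struct image img = { 1, 4, 4, 4, buf };
        filter_invert(&img);
        printf("first byte after inversion: 0x%02x\n", img.data[0]);
        return 0;
    }
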
wm4
d77d9fb933 mp_image: require using mp_image_set_size() for setting w/h
Setting the size of an mp_image must be done with mp_image_set_size()
now. Do this to guarantee that the redundant fields (like chroma_width)
are updated consistently. Replacing the redundant fields with function
calls would probably be better, but there are too many uses of them,
and it would be a bit less convenient.

Most code actually called mp_image_setfmt(), which did this as well.
This commit just makes things a bit more explicit.

Warning: the video filter chain still sets up mp_images manually,
and vf_get_image() is not updated.
2013-01-13 17:39:32 +01:00
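
A simplified sketch of why a setter is required here, assuming a stand-in
struct with redundant chroma size fields (not the real mp_image layout):
setting w/h through one function keeps the derived fields consistent.

    #include <stdio.h>

    struct image {
        int w, h;
        int chroma_x_shift, chroma_y_shift;
        int chroma_width, chroma_height;   /* redundant, derived from w/h */
    };

    static void image_set_size(struct image *img, int w, int h)
    {
        img->w = w;
        img->h = h;
        /* Keep the redundant fields in sync with the new size. */
        img->chroma_width  =
            (w + (1 << img->chroma_x_shift) - 1) >> img->chroma_x_shift;
        img->chroma_height =
            (h + (1 << img->chroma_y_shift) - 1) >> img->chroma_y_shift;
    }

    int main(void)
    {
        struct image img = { 0, 0, 1, 1, 0, 0 };   /* 4:2:0 subsampling */
        image_set_size(&img, 1280, 720);
        printf("chroma: %dx%d\n", img.chroma_width, img.chroma_height);
        return 0;
    }
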
wm4
881f82dd33 vd_lavc: use refcounting
Note that if the codec doesn't support DR1, the image has to be copied.
There is no other way to guarantee that the image will be valid after
decoding the next image.

The only important codec that doesn't support DR1 yet is rawvideo. It's
likely that ffmpeg/Libav will fix this at some time. For now, this
decoder uses an evil hack and puts pointers to the packet data into the
returned frame. This means the image will actually become invalid as soon
as the corresponding video packet is freed, so copying the image is the
only reasonable thing anyway.
2013-01-13 17:39:32 +01:00
wm4
a8e69707f7 vd_lavc: add DR1 support
Replace libavcodec's native buffer allocation with code taken from
ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is directly
copied from cmdutils.c. Note that this is quite arcane code, which
contains some workarounds for decoder bugs and the like. This is not
really a maintenance burden, since fixes from ffmpeg can be directly
applied to the code in lavc_dr1.c.

It's unknown why libavcodec doesn't provide such a function directly.
avcodec_default_get_buffer() can't be reused for various reasons.
There's some hope that the work known as The Evil Plan [1] will make
custom get_buffer implementations unneeded.

The DR1 support as of this commit does nothing. A future commit will
use it to implement ref-counting for mp_image (similar to how AVFrame
will be ref-counted with The Evil Plan.)

[1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html
2013-01-13 17:39:32 +01:00
wm4
58d196c07e video: different way to enable hardware decoding, add software fallback
Deprecate the hardware-specific video codec entries (like ffh264vdpau).
Replace them with the --hwdec switch, which requests that a specific
hardware decoding API should be used. The codecs.conf entries will be
removed at a later time, but for now they are useful for testing and
compatibility.

Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.

Add a fallback if hardware decoding fails. Most hardware decoders
(including vdpau) support only a subset of h264, and having such a
fallback is supposed to enable a better user experience.
2013-01-13 17:39:32 +01:00
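
A rough sketch of the fallback idea only; the function names are
hypothetical and the real vd_lavc initialization is considerably more
involved. If the requested hardware decoder cannot be set up (or fails),
decoding is retried with the software decoder.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helpers standing in for the real decoder setup. */
    static bool init_hwdec_decoder(const char *api) { (void)api; return false; }
    static bool init_sw_decoder(void)               { return true; }

    static bool init_decoder(const char *hwdec_api)
    {
        if (hwdec_api && init_hwdec_decoder(hwdec_api))
            return true;                /* e.g. --hwdec=vdpau worked */
        if (hwdec_api)
            printf("hardware decoding failed, falling back to software\n");
        return init_sw_decoder();
    }

    int main(void)
    {
        return init_decoder("vdpau") ? 0 : 1;
    }
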
wm4
dab286a159 vd_lavc: remove codec DR
This was buggy and didn't even work in the simplest cases. It was
disabled when multithreading was used, and always disabled for h264.

A better alternative (reference counting) will be added later.

Hardware decoding still uses the ffmpeg DR mechanism, but has been
decoupled from mpv's DR in the previous commit.
2013-01-13 17:39:32 +01:00
wm4
2d8fb838d7 video: make vdpau hardware decoding not use DR code path
vdpau hardware decoding used the DR (direct rendering) path to let the
decoder query a surface from the VO. Special-case the HW decoding path
instead, to make it separate from DR.
2013-01-13 17:39:32 +01:00
wm4
0d1aca1289 vd_lavc: do not mutate global threads option
This mutated the variable for the thread count option
(lavc_param->threads) on decoder initialization. This didn't have any
practical relevance, unless formats supporting hardware video decoding
and other formats were played in the same mpv instance. In this case,
hardware decoding would set threads to 1, and all files played after
that would use only one thread as well even with software decoding.

Remove XvMC leftover (CODEC_CAP_HWACCEL).
2013-01-13 17:39:31 +01:00
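
A standalone illustration of the bug class being fixed, with stand-in
types: the decoder derives a local thread count instead of writing back
into the shared options struct, so later files are unaffected.

    #include <stdbool.h>
    #include <stdio.h>

    struct opts { int threads; };   /* shared, must not be mutated */

    static int pick_thread_count(const struct opts *opts, bool hwdec)
    {
        int threads = opts->threads;   /* local copy */
        if (hwdec)
            threads = 1;               /* hardware decoding: single thread */
        return threads;
    }

    int main(void)
    {
        struct opts opts = { 8 };
        printf("hwdec file:   %d thread(s)\n", pick_thread_count(&opts, true));
        printf("next sw file: %d thread(s)\n", pick_thread_count(&opts, false));
        return 0;
    }
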
wm4
fcd6e74e48 vd_lavc: cosmetics: move debugging code out of the way
Relatively useless statistics printing code was stuffed directly into
the middle of the central decode() function. Move it away.
2013-01-13 17:39:31 +01:00
wm4
06ccd9f671 video: simplify decoder pixel format handling
Simplify the decoder pixel format handling by making it handle only
the case vd_lavc needs: a video stream always decodes to a single
pixel format.

Remove the handling for multiple pixel formats, and remove the
codecs.conf pixel format declarations that are left.

Remove the handling of "ambiguous" pixel formats like YV12 vs. I420 (via
VDCTRL_QUERY_FORMAT etc.). This is only a problem if the video chain
supports I420, but not YV12, which doesn't seem to be the case anywhere,
and in fact would not have any advantage.

Make the "flip" flag a global per-codec flag, rather than a pixel format
specific flag. (Some ffmpeg decoders still return a flipped image, so
this has to be done manually.) Also fix handling of the flip operation:
do not overwrite the global flip option, and make the --flip option
invert the codec flip option rather than overriding it.
2013-01-13 17:39:31 +01:00
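
A minimal sketch of the described flip semantics, with illustrative names:
the user's --flip option inverts the per-codec flip flag instead of
overriding it.

    #include <stdbool.h>
    #include <stdio.h>

    static bool effective_flip(bool codec_flips_output, bool user_flip_option)
    {
        return codec_flips_output ^ user_flip_option;   /* invert, not override */
    }

    int main(void)
    {
        printf("codec flips, no --flip: %d\n", effective_flip(true,  false));
        printf("codec flips, --flip:    %d\n", effective_flip(true,  true));
        printf("normal codec, --flip:   %d\n", effective_flip(false, true));
        return 0;
    }
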
wm4
23ab098969 video: remove slice based filtering and video output
Slices allowed filtering or drawing video in horizontal bands or
blocks, so the video could be processed in smaller units. In theory,
this could bring a performance win by lowering cache pressure, as you
didn't have to keep the whole video frame in cache while filtering,
only the slice.

In practice, the slice code path was barely used for the following
reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg
  slice callback was disabled, because it can be called from another
  thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate
  slices, so slices were rarely used.
- Most filters didn't actually support slices.

On the other hand, supporting slices led to code duplication and more
complex code in general. I ran some experiments and didn't find any
measurable performance improvements when using slices. Even
ffmpeg removed slice-based filtering from libavfilter in favor of
simpler code.

The most broken thing about the slice code path is that slices can't
be queued the way images are in vo.c.
2013-01-13 17:39:31 +01:00
wm4
cfa1f9e082 video: make vdpau hardware decoding not use slices code path
For some reason, libavcodec abuses the slice rendering code path for
hardware decoding: in that case, the only purpose of the draw callback
is to pass a vdpau video surface object to the video output. (It is unclear
to me why this had to use the slice code instead of just returning an
AVFrame with the required vdpau state.)

Make this code separate within mpv, so that the internal slices code
path is not used for hardware decoding. Pass the vdpau state with
VOCTRL_HWDEC_DECODER_RENDER instead.

Remove the mencoder specific VOCTRLs.
2013-01-13 17:39:31 +01:00
wm4
53ee9aa6ae options, vo_x11: remove -zoom option, make it default
The -zoom option enabled scaling with vo_x11. Remove the -zoom option,
and make its behavior default. Since vo_x11 has to use libswscale for
colorspace conversion anyway, which doesn't do actual extra scaling when
vo_x11 is run in windowed mode, there should be no speed difference with
this change.

The code removed from vf_scale attempted to scale the video to d_width/
d_height, which matters for anamorphic video and the --xy option only.
vo_x11 can handle these natively. The only case for which the removed
vf_scale code could matter is encoding with vo_lavc, but since that
didn't set VOFLAG_SWSCALE, nothing actually changes.
2012-11-16 21:21:14 +01:00
wm4
46cf722d80 Add missing compat/libav.h includes
For avcodec_free_frame().
2012-11-12 20:08:24 +01:00
wm4
4873b32c59 Rename directories, move files (step 2 of 2)
Finish renaming directories and moving files. Adjust all include
statements to make the previous commit compile.

The two commits are separate, because git is bad at tracking renames
and content changes at the same time.

Also take this as an opportunity to remove the separation between
"common" and "mplayer" sources in the Makefile. ("common" used to be
shared between mplayer and mencoder.)
2012-11-12 20:08:18 +01:00
wm4
d4bdd0473d Rename directories, move files (step 1 of 2) (does not compile)
This drops the silly lib prefixes, and attempts to organize the tree in
a more logical way. Make the top-level directory less cluttered as
well.

Renames the following directories:
    libaf -> audio/filter
    libao2 -> audio/out
    libvo -> video/out
    libmpdemux -> demux

Split libmpcodecs:
    vf* -> video/filter
    vd*, dec_video.* -> video/decode
    mp_image*, img_format*, ... -> video/
    ad*, dec_audio.* -> audio/decode

libaf/format.* is moved to audio/ - this is similar to how mp_image.*
is located in video/.

Move most top-level .c/.h files to core. (talloc.c/.h is left on top-
level, because it's external.) Park some of the more annoying files
in compat/. Some of these are relics from the time when mplayer used
ffmpeg internals.

sub/ is not split, because it's too much of a mess (subtitle code is
mixed with OSD display and rendering).

Maybe the organization of core is not ideal: it mixes playback core
(like mplayer.c) and utility helpers (like bstr.c/h). Should the need
arise, the playback core will be moved somewhere else, while core
will keep all the helper and common code.
2012-11-12 20:06:14 +01:00