NOTE: If you want to implement a new native codec, please add it to
libavcodec. libmpcodecs is considered mostly deprecated, except for wrappers
around external libraries and codecs requiring binary support.

The libMPcodecs API details, hints - by A'rpi
=============================================
See also: colorspaces.txt, codec-devel.txt, dr-methods.txt, codecs.conf.txt

The VIDEO path:
===============

  [MPlayer core]
        | (1)
   _____V______  (2)    /~~~~~~~~~~\  (3,4)   |~~~~~~|
  |            | -----> | vd_XXX.c | -------> | vd.c |
  |  decvideo  |        \__________/ <-(3a)-- |______|
  |            | -----,   ,.............(3a,4a).....:
   ~~~~~~~~~~~~   (6) V   V
                  /~~~~~~~~\     /~~~~~~~~\  (8)
                  | vf_X.c | --> | vf_Y.c | ----> vf_vo.c / ve_XXX.c
                  \________/     \________/
                      |            ^
                  (7) |  |~~~~~~|  : (7a)
                      `->| vf.c |..:
                         |______|

Short description of video path:
1. MPlayer/MEncoder core requests the decoding of a compressed video frame:
calls decvideo.c::decode_video()
2. decode_video() calls the previously ( init_video() ) selected video codec
(vd_XXX.c file, where XXX == vfm name, see the 'driver' line of codecs.conf)
3. The codec should initialize the output device before decoding the first
frame; this may happen in init() or in the middle of the first decode(), see
3a. It means calling vd.c::mpcodecs_config_vo() with the image dimensions
and the _preferred_ (meaning: internal, native, best) colorspace.
NOTE: This colorspace may not be equal to the actually used colorspace; it's
just a _hint_ for the csp matching algorithm, and it is mainly used _only_
when csp conversion is required, as the input format of the converter.
3a. Selecting the best output colorspace:
The vd.c::mpcodecs_config_vo() function will go through the outfmt list
defined by codecs.conf's 'out' lines, and query both the vd (codec) and the
vo (output device/filter/encoder) to see whether each format is supported.
For the vo, it calls the query_format() function of vf_XXX.c or ve_XXX.c.
It should return a set of feature flags, the most important ones for this
stage are: VFCAP_CSP_SUPPORTED (csp supported directly or by conversion)
and VFCAP_CSP_SUPPORTED_BY_HW (csp supported WITHOUT any conversion).
For the vd (codec), control() with VDCTRL_QUERY_FORMAT will be called.
If it doesn't implement VDCTRL_QUERY_FORMAT (i.e. it answers CONTROL_UNKNOWN
or CONTROL_NA), the answer will be assumed to be CONTROL_TRUE (csp supported)!
So, by default, if the list of supported colorspaces is constant and doesn't
depend on the actual file's/stream's header, it's enough to list them
in codecs.conf ('out' field) and not implement VDCTRL_QUERY_FORMAT.
This is the case for most codecs.
If the supported csp list depends on the file being decoded, list the
possible out formats (colorspaces) in codecs.conf, and implement the
VDCTRL_QUERY_FORMAT to test the availability of the given csp for the
given video file/stream.
The vd.c core will find the best matching colorspace, depending on the
VFCAP_CSP_SUPPORTED_BY_HW flag (see vfcap.h). If no match at all, it
will try again with the 'scale' filter inserted between vd and vo.
If still no match, it will fail :(
4. Requesting buffer for the decoded frame:
The codec has to call mpcodecs_get_image() with proper imgtype & imgflag.
It will find the optimal buffering setup (preferred stride, alignment etc)
and return a pointer to the allocated and filled up mpi (mp_image_t*) struct.
The 'imgtype' controls the buffering setup, i.e. STATIC (just one buffer,
it 'remembers' its contents between frames), TEMP (write-only, full update),
EXPORT (memory allocation is done by the codec, not recommended) and so on.
The 'imgflags' set up the limits for the buffer, i.e. stride limitations,
readability, remembering content etc. See mp_image.h for the short
description. See dr-methods.txt for the explanation of buffer
importing and mpi imgtypes.
Always try to implement stride support! (stride == bytes per line)
If no stride support, then stride==bytes_per_pixel*image_width.
If you have stride support in your decoder, use the mpi->stride[] value
for the byte_per_line for each plane.
Also take care of other imgflags, like MP_IMGFLAG_PRESERVE and
MP_IMGFLAG_READABLE, MP_IMGFLAG_COMMON_STRIDE and MP_IMGFLAG_COMMON_PLANE!
The file mp_image.h contains flag descriptions in comments, read it!
Ask for help on dev-eng, describing the behaviour of your codec, if unsure.
(A minimal decoder sketch illustrating steps 3-5 follows after step 8.)
4.a. buffer allocation, vd.c::mpcodecs_get_image():
If the requested buffer imgtype!=EXPORT, then vd.c will try to do
direct rendering, i.e. asks the next filter/vo for the buffer allocation.
It's done by calling get_image() of the vf_XXX.c file.
If it was successful, the imgflag MP_IMGFLAG_DIRECT will be set, and one
memcpy() will be saved when passing the data from vd to the next filter/vo.
See dr-methods.txt for details and examples.
5. Decode the frame, to the mpi structure requested in 4., then return the mpi
to decvideo.c. Return NULL if the decoding failed or skipped the frame.
6. decvideo.c::decode_video() will now pass the 'mpi' to the next filter (vf_X).
7. The filter's (vf_X) put_image() then requests a new mpi buffer by calling
vf.c::vf_get_image().
7.a. vf.c::vf_get_image() will try to get direct rendering by asking the
next filter to do the buffer allocation (calls vf_Y's get_image()).
If it fails, it will fall back on normal system memory allocation.
8. When we're past the whole filter chain (multiple filters can be connected,
even the same filter multiple times) then the last, 'leaf' filters will be
called. The only difference between leaf and non-leaf filters is that leaf
filters have to implement the whole filter API.
Currently leaf filters are: vf_vo.c (wrapper over libvo) and ve_XXX.c
(video encoders used by MEncoder).
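
To make steps 3-5 more concrete, here is a minimal decoder-side sketch. It is
not a real codec: the vd_foo name, the fooctx context and the
foo_decode_frame() helper are hypothetical placeholders, the usual
vd_internal.h boilerplate (info struct, LIBVD_EXTERN, init()/uninit()) is
omitted, and the codec is assumed to always output YV12.

// hypothetical vd_foo.c - sketch only
static int control(sh_video_t *sh, int cmd, void* arg, ...){
    switch(cmd){
    case VDCTRL_QUERY_FORMAT:
        // this imaginary codec can only output YV12:
        if(*((int*)arg) == IMGFMT_YV12) return CONTROL_TRUE;
        return CONTROL_FALSE;
    }
    return CONTROL_UNKNOWN;
}

// step 5: decode one compressed frame ('data'/'len', from the demuxer)
static mp_image_t* decode(sh_video_t *sh, void* data, int len, int flags){
    struct fooctx *ctx = sh->context;   // allocated in init() (hypothetical)
    mp_image_t* mpi;

    if(len <= 0) return NULL;           // skipped frame

    // step 3: configure the vo before the first decoded frame
    if(!ctx->vo_initialized){
        if(!mpcodecs_config_vo(sh, sh->disp_w, sh->disp_h, IMGFMT_YV12))
            return NULL;
        ctx->vo_initialized = 1;
    }

    // step 4: request an output buffer (write-only, full update, any stride)
    mpi = mpcodecs_get_image(sh, MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE,
                             sh->disp_w, sh->disp_h);
    if(!mpi) return NULL;

    // decode into the buffer, honoring mpi->stride[] for every plane
    // (foo_decode_frame() stands in for the real bitstream decoder):
    if(foo_decode_frame(ctx, data, len, mpi->planes, mpi->stride) < 0)
        return NULL;                    // decoding failed

    return mpi;                         // decvideo.c passes it to the filters
}
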
Video Filters
=============
Video filters are plugin-like code modules implementing the interface
defined in vf.h.
Basically it means video output manipulation, i.e. these plugins can
modify the image and the image properties (size, colorspace, etc) between
the video decoders (vd.h) and the output layer (libvo or video encoders).
The actual API is a mixture of the video decoder (vd.h) and libvo
(video_out.h) APIs.
Main differences:
- vf plugins may be "loaded" multiple times, with different parameters
and context - this is new in MPlayer, the old APIs weren't reentrant.
- vf plugins don't have to implement all functions - every function has a
'fallback' version, so a plugin only overrides the ones it wants to.
- Each vf plugin has its own get_image context, and they can interchange
images/buffers using these get_image/put_image calls.

The VIDEO FILTER API:
=====================
filename: vf_FILTERNAME.c

vf_info_t* info;
  pointer to the filter description structure:
    const char *info;    // description of the filter
    const char *name;    // short name of the filter, must be FILTERNAME
    const char *author;  // name and email/url of the author(s)
    const char *comment; // comment, url to papers describing algo etc.
    int (*open)(struct vf_instance_s* vf,char* args);
                         // pointer to the open() function
Sample:
vf_info_t vf_info_foobar = {
    "Universal Foo and Bar filter",
    "foobar",
    "Ms. Foo Bar",
    "based on algorithm described at http://www.foo-bar.org",
    open
};
The open() function:
open() is called when the filter is appended/inserted in the filter chain.
It receives the filter instance handle (vf) and the optional filter parameters
as a char* string. Note that the encoders (ve_*) and the vo wrapper (vf_vo.c)
receive a non-string argument, but that case is handled specially by
MPlayer/MEncoder.
The open() function should fill the vf_instance_t structure with the pointers
to the implemented functions (see below).
It can optionally allocate memory for its internal data (vf_priv_t) and
store the pointer in vf->priv.
The open() func should parse (or at least check the syntax of) the parameters,
and fail (return 0) on error.
Sample:
static int open(vf_instance_t *vf, char* args){
    vf->query_format=query_format;
    vf->config=config;
    vf->put_image=put_image;
    // allocate local storage:
    vf->priv=malloc(sizeof(struct vf_priv_s));
    vf->priv->w=
    vf->priv->h=-1;
    if(args) // parse args:
        if(sscanf(args, "%d:%d", &vf->priv->w, &vf->priv->h)!=2) return 0;
    return 1;
}
Functions in vf_instance_s:
NOTE: All these are optional, their function pointer is either NULL or points
to a default implementation. If you implement them, don't forget to set
vf->FUNCNAME in your open() !
int (*query_format)(struct vf_instance_s* vf,
unsigned int fmt);
The query_format() function is called one or more times before the config(),
to find out the capabilities and/or support status of a given colorspace (fmt).
For the return values, see vfcap.h!
Normally, a filter should return at least VFCAP_CSP_SUPPORTED for all supported
colorspaces it accepts as input, and 0 for the unsupported ones.
If your filter does a simple 1:1 format conversion, it should query the next
filter with its output format and merge in the returned capability flags.
Note: You should always ensure that the next filter will accept at least one
of your possible output colorspaces!
Sample:
static int query_format(struct vf_instance_s* vf, unsigned int fmt){
    switch(fmt){
    case IMGFMT_YV12:
    case IMGFMT_I420:
    case IMGFMT_IYUV:
    case IMGFMT_422P:
        return vf_next_query_format(vf,IMGFMT_YUY2) & (~VFCAP_CSP_SUPPORTED_BY_HW);
    }
    return 0;
}
For the more complex case, when you have an N->M colorspace mapping matrix,
see vf_scale or vf_rgb2bgr for examples.
int (*config)(struct vf_instance_s* vf,
int width, int height, int d_width, int d_height,
unsigned int flags, unsigned int outfmt);
The config() is called to initialize/configure the filter before using it.
Its parameters are already well-known from libvo:
width, height: size of the coded image
d_width, d_height: wanted display size (usually aspect corrected w/h)
Filters should use width,height as input image dimension, but the
resizing filters (crop, expand, scale, rotate, etc) should update
d_width/d_height (display size) to preserve the correct aspect ratio!
Filters should not rely on d_width, d_height as input parameters,
the only exception is when a filter replaces some libvo functionality
(like -vf scale with -zoom, or OSD rendering with -vf expand).
flags: the "good" old libvo flag set:
0x01 - force fullscreen (-fs)
0x02 - allow mode switching (-vm)
0x04 - allow software scaling (-zoom)
0x08 - flipping (-flip)
(Usually you don't have to worry about flags, just pass it to next config.)
outfmt: the selected colorspace/pixelformat. You'll receive images in this
format.
Sample:
static int config(struct vf_instance_s* vf,
        int width, int height, int d_width, int d_height,
        unsigned int flags, unsigned int outfmt){
    // use d_width/d_height if not set by user:
    if(vf->priv->w==-1) vf->priv->w=d_width;
    if(vf->priv->h==-1) vf->priv->h=d_height;
    // initialize your filter code
    ...
    // OK, now config the rest of the filter chain, with our output parameters:
    return vf_next_config(vf,vf->priv->w,vf->priv->h,d_width,d_height,flags,outfmt);
}
void (*uninit)(struct vf_instance_s* vf);
Okay, uninit() is the simplest: it's called at the end. You can free your
private buffers etc. here.
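
For the foobar sample above this is a single free(); remember to also set
vf->uninit=uninit in open() if you add it:

static void uninit(struct vf_instance_s* vf){
    free(vf->priv);   // release what open() allocated
    vf->priv=NULL;
}
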
int (*put_image)(struct vf_instance_s* vf,
mp_image_t *mpi);
Ah, put_image(). This is the main filter function, it should convert/filter/
transform the image data from one format/size/color/whatever to another.
Its input parameter is an mpi (mplayer image) structure, see mp_image.h.
Your filter has to request a new image buffer for the output, using the
vf_get_image() function. NOTE: Even if you don't want to modify the image,
just pass it to the next filter, you have to either
- not implement put_image() at all - then it will be skipped
- request a new image with type==EXPORT and copy the pointers
NEVER pass the mpi as-is, it's local to the filters and may cause trouble.
If you completely copy/transform the image, then you probably want this:
dmpi=vf_get_image(vf->next,mpi->imgfmt,
MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE,
vf->priv->w, vf->priv->h);
It will allocate a new image, and return an mp_image structure filled by
buffer pointers and stride (bytes per line) values, in size of vf->priv->w
times vf->priv->h. If your filter cannot handle strides, then leave out
MP_IMGFLAG_ACCEPT_STRIDE. You can do that, but it isn't recommended: the
whole video path is designed to use strides to get optimal throughput.
If your filter allocates output image buffers, then use MP_IMGTYPE_EXPORT,
and fill the returned dmpi's planes[], stride[] with your buffer parameters.
Note that this is not recommended (it prevents direct rendering), so if you
can, use vf_get_image() for buffer allocation!
For other image types and flags see mp_image.h, it has comments.
If you are unsure, feel free to ask on the -dev-eng mailing list. Please
describe the behavior of your filter, and its limitations, so we can
suggest the optimal buffer type + flags for your code.
Now that you have the input (mpi) and output (dmpi) buffers, you can do
the conversion. If you didn't notice yet, mp_image has some useful info
fields which may help you a lot when writing if() or for() constructs:
flags: MP_IMGFLAG_PLANAR, MP_IMGFLAG_YUV, MP_IMGFLAG_SWAPPED
help you to handle various pixel formats in a single code path.
bpp: bits per pixel
WARNING! It's number of bits _allocated_ to store a pixel,
it is not the number of bits actually used to keep colors!
So it's 16 for both 15 and 16 bit color depth, and is 32 for
32bpp (actually 24 bit color depth) mode!
It's 1 for 1bpp, 9 for YVU9, and is 12 for YV12 mode. Get it?
For planar formats, you also have chroma_width, chroma_height and
chroma_x_shift, chroma_y_shift too; they specify the chroma subsampling
for yuv formats:
chroma_width = luma_width >>chroma_x_shift;
chroma_height= luma_height>>chroma_y_shift;
(For example, YV12 (4:2:0) has chroma_x_shift=chroma_y_shift=1, so a 640x480
luma plane comes with two 320x240 chroma planes.)
If you're done, call the rest of the filter chain to process your output
image:
return vf_next_put_image(vf,dmpi);
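
For reference, here is what a complete (but deliberately boring) put_image()
could look like. It is a sketch, not taken from a real filter: it assumes an
8-bit planar input format (e.g. YV12), does no resizing (so it simply keeps
mpi->w/mpi->h), and the "filtering" is just a plane-by-plane copy that honors
the strides of both images (usual vf_*.c includes assumed):

static int put_image(struct vf_instance_s* vf, mp_image_t *mpi){
    mp_image_t *dmpi;
    int p, y;
    // get a new (write-only, full update) output buffer from the next filter:
    dmpi=vf_get_image(vf->next, mpi->imgfmt,
        MP_IMGTYPE_TEMP, MP_IMGFLAG_ACCEPT_STRIDE,
        mpi->w, mpi->h);
    // "filter" the image plane by plane, row by row, using the stride of
    // BOTH images (this is where a real filter would do its work):
    for(p=0; p<mpi->num_planes; p++){
        int h = (p==0) ? mpi->h : mpi->chroma_height;
        int w = (p==0) ? mpi->w : mpi->chroma_width;
        for(y=0; y<h; y++)
            memcpy(dmpi->planes[p] + y*dmpi->stride[p],
                    mpi->planes[p] + y* mpi->stride[p], w);
    }
    // hand the result over to the rest of the filter chain:
    return vf_next_put_image(vf, dmpi);
}
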
Ok, the rest is for advanced functionality only:
int (*control)(struct vf_instance_s* vf,
int request, void* data);
You can control the filter at runtime from MPlayer/MEncoder/dec_video:
#define VFCTRL_QUERY_MAX_PP_LEVEL 4 /* test for postprocessing support (max level) */
#define VFCTRL_SET_PP_LEVEL 5 /* set postprocessing level */
#define VFCTRL_SET_EQUALIZER 6 /* set color options (brightness,contrast etc) */
#define VFCTRL_GET_EQUALIZER 8 /* get color options (brightness,contrast etc) */
#define VFCTRL_DRAW_OSD 7
#define VFCTRL_CHANGE_RECTANGLE 9 /* Change the rectangle boundaries */
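
A minimal control() sketch in the spirit of vf_pp: it answers the two
postprocessing requests (assuming a hypothetical 'pp' field in this filter's
vf_priv_s) and forwards everything else down the chain with vf_next_control():

static int control(struct vf_instance_s* vf, int request, void* data){
    switch(request){
    case VFCTRL_QUERY_MAX_PP_LEVEL:
        return 6;                              // highest level we support
    case VFCTRL_SET_PP_LEVEL:
        vf->priv->pp=*((unsigned int*)data);   // hypothetical 'pp' field
        return CONTROL_TRUE;
    }
    // not our request - let the rest of the chain handle it:
    return vf_next_control(vf, request, data);
}
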
void (*get_image)(struct vf_instance_s* vf,
mp_image_t *mpi);
This is for direct rendering support, works the same way as in libvo drivers.
It makes in-place pixel modifications possible.
If you implement it (vf->get_image!=NULL) then it will be called to do the
buffer allocation. You SHOULD check the buffer restrictions (stride, type,
readability etc.) and, if everything is OK, allocate the requested buffer
using the vf_get_image() function and copy the buffer pointers.
NOTE: You HAVE TO save the dmpi pointer, as you'll need it in put_image()
later on. It is not guaranteed that you'll get the same mpi for put_image() as
in get_image() (think of out-of-order decoding, get_image is called in decoding
order, while put_image is called for display) so the only safe place to save
it is in the mpi struct itself: mpi->priv=(void*)dmpi;
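
Putting this together, a minimal get_image() sketch for a hypothetical
pass-through-style filter: it refuses requests it cannot satisfy (the two
checks below are just this sketch's conservative policy), lets the next
filter allocate the buffer, exports the pointers/strides to the requester and
remembers dmpi in mpi->priv:

static void get_image(struct vf_instance_s* vf, mp_image_t *mpi){
    mp_image_t *dmpi;
    if(!(mpi->flags & MP_IMGFLAG_ACCEPT_STRIDE)) return; // can't force strides
    if(mpi->flags & MP_IMGFLAG_READABLE) return;  // can't promise readability
    // let the next filter (or vf_vo.c/libvo) allocate the buffer:
    dmpi=vf_get_image(vf->next, mpi->imgfmt, mpi->type, mpi->flags,
                      mpi->w, mpi->h);
    // export its pointers and strides to the requester:
    mpi->planes[0]=dmpi->planes[0]; mpi->stride[0]=dmpi->stride[0];
    if(mpi->flags & MP_IMGFLAG_PLANAR){
        mpi->planes[1]=dmpi->planes[1]; mpi->stride[1]=dmpi->stride[1];
        mpi->planes[2]=dmpi->planes[2]; mpi->stride[2]=dmpi->stride[2];
    }
    mpi->flags|=MP_IMGFLAG_DIRECT;
    mpi->priv=(void*)dmpi;   // the only safe place to remember it
}

In the matching put_image() you would then check MP_IMGFLAG_DIRECT, fetch the
saved dmpi back from mpi->priv and skip the copy.
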
void (*draw_slice)(struct vf_instance_s* vf,
unsigned char** src, int* stride, int w,int h, int x, int y);
It's the good old draw_slice callback, already known from libvo.
If your filter can operate on partial images, you can implement this one
to improve performance (cache utilization).
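
The mechanism is easiest to see in a pass-through sketch (hypothetical, not a
real filter): each slice is simply forwarded to the next filter with
vf_next_draw_slice(); a real filter would first process the slice data in
src[] (honoring stride[]), or translate x/y/w/h if it moves pixels around:

static void draw_slice(struct vf_instance_s* vf,
        unsigned char** src, int* stride, int w, int h, int x, int y){
    // process src[] here, then pass the slice down the filter chain:
    vf_next_draw_slice(vf, src, stride, w, h, x, y);
}
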
Ah, and there are two sets of capability/requirement flags (vfcap.h type)
in vf_instance_t, used by the default query_format() implementation, and by
the automatic colorspace/stride matching code (vf_next_config()).
// caps:
unsigned int default_caps; // used by default query_format()
unsigned int default_reqs; // used by default config()
BTW, you should avoid using global or static variables to store
filter-instance-specific stuff, as filters might be used multiple times, and
in the future even multiple streams might be possible.

The AUDIO path:
===============
TODO!!!