Description of what all those libavcodec options do ...

WARNING: I am no encoding expert, so the recommendations might be bad ...
If you find any errors or missing stuff, send a patch or commit to CVS if
you have a CVS account. :)


lavcopts: (encoder options)
---------------------------

vqmin 1-31 (minimum quantizer) for pass1/2
    1 is not recommended (much larger file, little quality difference
      (if you are lucky) and other weird things (if you are less lucky))
      weird things: msmpeg4 and h263 will be very low quality
                    ratecontrol will be confused -> lower quality
                    some decoders will not be able to decode it
    2 is recommended for normal mpeg4/mpeg1video encoding (default)
    3 is recommended for h263(p)/msmpeg4
      the reason for 3 instead of 2 is that 2 could lead to overflows
      (this will be fixed for h263(p) by changing the quantizer per MB in
      the future, but msmpeg4 does not support that, so it cannot be fixed
      there)

vqscale 1-31 quantizer for constant quantizer / constant quality encoding
    1 is not recommended (much larger file, little quality difference and
      possibly other weird things)
    lower means better quality but larger files
    see vqmin

vqmax 1-31 (maximum quantizer) for pass1/2
    31 is the default
    10-31 should be a sane range

mbqmin 1-31 (minimum macroblock quantizer) for pass1/2
    2 is the default

mbqmax 1-31 (maximum macroblock quantizer) for pass1/2
    31 is the default

vqdiff 1-31 (maximum quantizer difference between consecutive I or P frames)
       for pass1/2
    3 is the default

vmax_b_frames 0-4 (maximum number of B frames between non-B frames)
    0 no B frames (default)
    0-2 is a sane range for mpeg4

vme 0-5 (motion estimation)
    0 none  (not recommended, very low quality)
    1 full  (not recommended, too slow)
    2 log   (not recommended, low quality)
    3 phods (not recommended, low quality)
    4 EPZS  (default)
    5 X1    (experimental, might change from time to time or be just broken)

vhq (high quality mode)
    encode each MB in all modes and choose the best
    (this is slow but gives a better filesize/quality ratio)
    disabled by default

v4mv
    allow 4 MVs per MB (little difference in filesize/quality)
    disabled by default

keyint 0-300 (maximum interval between keyframes)
    keyframes are needed for seeking, as seeking is only possible to a
    keyframe, but they need more space than non-keyframes, so larger
    numbers here mean slightly smaller files but less precise seeking
    0 no keyframes
    >300 is not recommended as the quality might be bad (depends upon the
    decoder, the encoder and luck)
    for strict mpeg1/2/4 compliance this would have to be <=132

vb_strategy 0-1 (for pass 2)
    0 always use the maximum number of B frames (default)
    1 avoid B frames in high motion scenes (this will cause bitrate
      misprediction)

vpass
    1 first pass
    2 second pass
    (only needs to be specified if two-pass encoding is used; see the
    example below)
    Tip: you can try to use constant quantizer mode for pass 1 (vqscale=<quantizer>)
    for huffyuv:
        pass 1 saves statistics
        pass 2 encodes with an optimal Huffman table based upon the pass 1
        statistics

vbitrate (kbits per second) for pass1/2
    800 is the default
    (if the value is bigger than 16000 it is interpreted as bit, not kbit!)

vratetol (filesize tolerance in kbit) for pass1/2
    this is just an approximation, the real difference can be much smaller
    or larger
    1000-100000 is a sane range
    8000 is the default

vrc_maxrate (maximum bitrate in kbit/sec) for pass1/2
vrc_minrate (minimum bitrate in kbit/sec) for pass1/2
vrc_buf_size (buffer size in kbit) for pass1/2
    this is for stuff like VCD
    VCD: FIXME
    SVCD: ...
    DVD: ...

Note: vratetol should not be too large during the second pass or there
might be problems if vrc_(min|max)rate is used
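Example: a rough sketch of a two-pass encode, following the example syntax
at the end of this file (file names and the bitrate are placeholders; the
first-pass output is normally thrown away, and depending on your MEncoder
version you may also need to select the codec explicitly with -ovc lavc and
handle the audio):

    mencoder foobar.avi -lavcopts vcodec=mpeg4:vpass=1:vbitrate=800:vhq -o /dev/null
    mencoder foobar.avi -lavcopts vcodec=mpeg4:vpass=2:vbitrate=800:vhq -o new-foobar.avi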
vb_qfactor (-31.0-31.0) for pass1/2
    1.25 is the default
vi_qfactor (-31.0-31.0) for pass1/2
    0.8 is the default
vb_qoffset (-31.0-31.0) for pass1/2
    1.25 is the default
vi_qoffset (-31.0-31.0) for pass1/2
    0.0 is the default
    if v{b|i}_qfactor > 0:
        I/B-frame quantizer = P-frame quantizer * v{b|i}_qfactor + v{b|i}_qoffset
    else:
        do normal ratecontrol (do not lock to the next P-frame quantizer)
        and set q = -q * v{b|i}_qfactor + v{b|i}_qoffset
    Tip: to do constant quantizer encoding with different quantizers for
    I/P and B frames you can use
    vqmin=<ip_quant>:vqmax=<ip_quant>:vb_qfactor=<b_quant/ip_quant>
    (see the example further below)

vqblur (0.0-1.0) quantizer blur (pass 1)
    0.0 qblur disabled
    0.5 is the default
    1.0 average the quantizer over all previous frames
    larger values will average the quantizer more over time, so that it
    changes more slowly

vqblur (0.0-99.0) quantizer blur (pass 2)
    gaussian blur (a gaussian blur cannot be done during pass 1, as the
    future quantizers are not known yet)
    0.5 is the default
    larger values will average the quantizer more over time, so that it
    changes more slowly

vqcomp quantizer compression (for pass1/2)
    depends upon vrc_eq

vrc_eq the main ratecontrol equation (for pass1/2)
    1                       constant bitrate
    tex                     constant quality
    1+(tex/avgTex-1)*qComp  approximately the equation of the old
                            ratecontrol code
    tex^qComp               with qcomp 0.5 or something like that (default)

    infix operators: +,-,*,/,^
    variables:
        tex             texture complexity
        iTex,pTex       intra, non-intra texture complexity
        avgTex          average texture complexity
        avgIITex        average intra texture complexity in I frames
        avgPITex        average intra texture complexity in P frames
        avgPPTex        average non-intra texture complexity in P frames
        avgBPTex        average non-intra texture complexity in B frames
        mv              bits used for MVs
        fCode           maximum length of MV in log2 scale
        iCount          number of intra MBs / number of MBs
        var             spatial complexity
        mcVar           temporal complexity
        qComp           qcomp from the command line
        isI, isP, isB   is 1 if picture type is I/P/B, else 0
        Pi,E            see your favorite math book
    functions:
        max(a,b),min(a,b)   maximum / minimum
        gt(a,b)             is 1 if a>b, 0 otherwise
        lt(a,b)             is 1 if a<b, 0 otherwise

vrc_override user-specified quality for specific parts (for pass1/2)
    the format is
    <start-frame>,<end-frame>,<quality>[/<start-frame>,<end-frame>,<quality>[/...]]
        quality  2..31  -> quantizer
        quality -500..0 -> quality correction in %

vrc_init_cplx (0-1000) initial complexity for pass 1

vqsquish (0 or 1) for pass1/2
    how to keep the quantizer between qmin & qmax
    0 use clipping
    1 use a nice differentiable function (default)

vlelim (-1000-1000) single coefficient elimination threshold for luminance
    0 disabled (default)
    -4 (JVT recommendation)
    negative values will also consider the DC coefficient
    should be at least -4 or lower for encoding at quant=1

vcelim (-1000-1000) single coefficient elimination threshold for chrominance
    0 disabled (default)
    7 (JVT recommendation)
    negative values will also consider the DC coefficient
    should be at least -4 or lower for encoding at quant=1

vstrict strict standard compliance
    only recommended if you want to feed the output into the mpeg4
    reference decoder
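Example: a sketch of the constant quantizer tip from above, assuming you
want quantizer 2 for I/P frames and roughly twice that for B frames (file
names are placeholders, and the exact B-frame quantizer also depends on
vb_qoffset):

    mencoder foobar.avi -lavcopts vcodec=mpeg4:vmax_b_frames=2:vqmin=2:vqmax=2:vb_qfactor=2 -o new-foobar.avi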
vdpart data partitioning
    adds 2 bytes per video packet
    each video packet will be encoded in 3 separate partitions:
        1. MVs             (= movement)
        2. DC coefficients (= low-res picture)
        3. AC coefficients (= details)
    the MVs & DC are the most important, losing them looks far worse than
    losing the AC, and the 1st & 2nd partitions (MV & DC) are far smaller
    than the 3rd partition (AC) -> errors will hit the AC partition much
    more often than the MV & DC -> the picture will look better with
    partitioning than without, as without partitioning an error will trash
    AC/DC/MV equally
    improves error resistance when transferring over unreliable channels
    (e.g. streaming over the internet)

vpsize (0-10000) video packet size
    0 disabled (default)
    100-1000 good choice
    improves error resistance (see vdpart for more info)

gray grayscale-only encoding (a bit faster than with color ...)

vfdct (0-99) dct algorithm
    0 automatically select a good one (default)
    1 fast integer
    2 accurate integer
    3 mmx
    4 mlib

idct (0-99) idct algorithm
    0 automatically select a good one (default)
    1 jpeg reference integer
    2 simple
    3 simplemmx
    4 libmpeg2mmx (inaccurate, DO NOT USE for encoding with keyint >100)
    5 ps2
    6 mlib
    7 arm
    note: all these IDCTs do pass the IEEE1180 tests AFAIK

lumi_mask (0.0-1.0) luminance masking
    0.0 disabled (default)
    0.0-0.3 should be a sane range
    warning: be careful, too large values can cause disastrous things
    warning2: large values might look good on some monitors but may look
    horrible on other monitors

dark_mask (0.0-1.0) darkness masking
    0.0 disabled (default)
    0.0-0.3 should be a sane range
    warning: be careful, too large values can cause disastrous things
    warning2: large values might look good on some monitors but may look
    horrible on other monitors / TVs / TFTs

tcplx_mask (0.0-1.0) temporal complexity masking
    0.0 disabled (default)

scplx_mask (0.0-1.0) spatial complexity masking
    0.0 disabled (default)
    0.0-0.5 should be a sane range
    larger values help against blockiness, if no deblocking filter is used
    for decoding
    Tip: crop any black borders completely away, as they will reduce the
    quality of the MBs there (this is true even if scplx_mask is not used
    at all)

naq normalize adaptive quantization (experimental)
    when using adaptive quantization (*_mask), the average per-MB quantizer
    may no longer match the requested frame-level quantizer. naq will
    attempt to adjust the per-MB quantizers to maintain the proper average.

ildct use interlaced dct

format
    YV12 (default)
    422P (for huffyuv)

pred (for huffyuv)
    0 left prediction
    1 plane/gradient prediction
    2 median prediction

qpel use quarter pel motion compensation
    Tip: this seems only useful for high bitrate encodings

precmp comparison function for the motion estimation pre-pass
cmp    comparison function for full pel motion estimation
subcmp comparison function for sub pel motion estimation
    0 SAD  (sum of absolute differences) (default)
    1 SSE  (sum of squared errors)
    2 SATD (sum of absolute hadamard transformed differences)
    3 DCT  (sum of absolute dct transformed differences)
    4 PSNR (sum of the squared quantization errors)
    7 ZERO (0)
    +256   use chroma too (does not work with B frames currently)
    Tip: SAD is fast, SATD is good
    Tip2: when using SATD for the full pel search you should use a larger
    diamond, something like dia=2 or dia=4 (see the example below)

predia (-99 - 6) diamond type & size for the motion estimation pre-pass
dia    (-99 - 6) diamond type & size for motion estimation
    ...
    -3 shape adaptive diamond with size 3
    -2 shape adaptive diamond with size 2
    -1 experimental
     1 normal size=1 diamond (default) = EPZS type diamond
           0
          000
           0
     2 normal size=2 diamond
           0
          000
         00000
          000
           0
    ...
    Tip: the shape adaptive variants seem to be faster at the same quality
    Note: the sizes of the normal diamonds and the shape adaptive ones do
    not have the same meaning
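Example: an illustrative (not tuned) combination of the tips above, trading
speed for quality with SATD comparison and a larger diamond; the bitrate
and file names are placeholders:

    mencoder foobar.avi -lavcopts vcodec=mpeg4:vbitrate=800:vhq:v4mv:cmp=2:subcmp=2:dia=2 -o new-foobar.avi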
trell trellis quantization
    this will find the optimal encoding for each 8x8 block
    trellis quantization is quite simply an optimal quantization in the
    PSNR-vs-bitrate sense (assuming that no rounding errors are introduced
    by the IDCT, which is obviously not the case); it simply finds a block
    for the minimum of error + lambda*bits
        lambda is a qp-dependent constant
        bits is the amount of bits needed to encode the block
        error is simply the sum of squared errors of the quantization

last_pred (0-99) amount of motion predictors from the previous frame
    0 (default)
    a value of a will use a 2a+1 x 2a+1 MB square of MV predictors from
    the previous frame

preme (0-2) motion estimation pre-pass
    0 disabled
    1 only after I frames (default)
    2 always

subq (1-8) subpel refinement quality (for qpel)
    8 (default)
    Note: this has a significant effect on the speed


lavdopts: (decoder options)
---------------------------

ec error concealment
    1 use strong deblock filter for damaged MBs
    2 iterative MV search (slow)
    3 all (default)
    Note: just add the ones you want to enable

er error resilience
    0 disabled
    1 careful (should work with broken encoders)
    2 normal (default) (works with compliant encoders)
    3 aggressive (more checks, but might cause problems even for valid
      bitstreams)
    4 very aggressive

bug manually work around encoder bugs (autodetection is not foolproof for these)
    0  nothing
    1  autodetect bugs (default)
    2  for msmpeg4v3: some old lavc-generated msmpeg4v3 files (no autodetection)
    4  for mpeg4: xvid interlacing bug (autodetected if fourcc==XVIX)
    8  for mpeg4: UMP4 (autodetected if fourcc==UMP4)
    16 for mpeg4: padding bug
    32 for mpeg4: illegal vlc bug (autodetected per fourcc)
    64 for mpeg4: XVID & DIVX qpel bug (autodetected)
    Note: just add the ones you want to enable

gray grayscale-only decoding (a bit faster than with color ...)

idct see lavcopts
    note: the decoding quality is highest if the same idct algorithm is
    used for decoding as for encoding, though this is often not the most
    accurate one

Notes:
1. lavc will strictly follow the quantizer limits vqmin, vqmax and vqdiff,
   even if this violates the bitrate / bitrate tolerance
2. changing some options between pass 1 & 2 can reduce the quality

FAQ:
Q: Why is the filesize much too small?
A: Try to lower vqmin, e.g. to 2 or even 1 (be careful with 1, it could
   cause strange things to happen).

Q: What provides better error recovery while keeping the filesize low?
   Should I use data partitioning or increase the number of video packets?
A: Data partitioning is better in this case.

Glossary:
MB    Macroblock (16x16 luminance and 8x8 chrominance samples)
MV    Motion vector
ME    Motion estimation
MC    Motion compensation
RC    Rate control
DCT   Discrete Cosine Transform
IDCT  Inverse Discrete Cosine Transform
DC    The coefficient of the constant term in the DCT (average value of the block)
JVT   Joint Video Team standard -- http://www.itu.int/ITU-T/news/jvtpro.html

Examples:
mencoder foobar.avi -lavcopts vcodec=mpeg4:vhq:keyint=300:vqscale=2 -o new-foobar.avi
mplayer foobar.avi -lavdopts bug=1

Links:
short intro to mpeg coding:
    http://www.eecs.umich.edu/~amarathe/mpeg.html
longer intro to jpeg/mpeg coding:
    http://www.cs.sfu.ca/undergrad/CourseMaterials/CMPT479/material/notes/Chap4/Chap4.2/Chap4.2.html

--
Written 2002 by Michael Niedermayer and reviewed by Felix Buenemann.
Check the MPlayer documentation for contact addresses.