\input texinfo @c -*- texinfo -*-

@settitle FFmpeg Documentation
@titlepage
@sp 7
@center @titlefont{FFmpeg Documentation}
@sp 3
@end titlepage


@chapter Introduction

FFmpeg is a very fast video and audio converter. It can also grab from
a live audio/video source.

The command line interface is designed to be intuitive, in the sense
that FFmpeg tries to figure out all parameters that can possibly be
derived automatically. You usually only have to specify the target
bitrate you want.

FFmpeg can also convert from any sample rate to any other, and resize
video on the fly with a high quality polyphase filter.

@chapter Quick Start

@c man begin EXAMPLES
@section Video and Audio grabbing

FFmpeg can grab video and audio from devices, given that you specify the
input format and device.

@example
ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg
@end example

Note that before launching FFmpeg you must activate the right video
source and channel with a TV viewer such as xawtv
(@url{http://bytesex.org/xawtv/}) by Gerd Knorr. You also have to set
the audio recording levels correctly with a standard mixer.

@section X11 grabbing

FFmpeg can grab the X11 display.

@example
ffmpeg -f x11grab -s cif -i :0.0 /tmp/out.mpg
@end example

0.0 is the display.screen number of your X11 server, the same as the
DISPLAY environment variable.

@example
ffmpeg -f x11grab -s cif -i :0.0+10,20 /tmp/out.mpg
@end example

As above, 0.0 is the display.screen number of your X11 server; 10 is the
x-offset and 20 the y-offset for the grabbing.

@section Video and Audio file format conversion

* FFmpeg can use any supported file format and protocol as input:

Examples:

* You can use YUV files as input:

@example
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg
@end example

It will use the files:
@example
/tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
/tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
@end example

The Y files use twice the resolution of the U and V files. They are
raw files, without headers. They can be generated by all decent video
decoders. You must specify the size of the image with the @option{-s} option
if FFmpeg cannot guess it.

* You can input from a raw YUV420P file:

@example
ffmpeg -i /tmp/test.yuv /tmp/out.avi
@end example

test.yuv is a file containing raw YUV planar data. Each frame is composed
of the Y plane followed by the U and V planes at half vertical and
horizontal resolution.

* You can output to a raw YUV420P file:

@example
ffmpeg -i mydivx.avi hugefile.yuv
@end example

* You can set several input files and output files:

@example
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
@end example

This converts the audio file a.wav and the raw YUV video file a.yuv
to the MPEG file a.mpg.

* You can also do audio and video conversions at the same time:

@example
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
@end example

This converts a.wav to MPEG audio at a 22050 Hz sample rate.

* You can encode to several formats at the same time and define a
mapping from input stream to output streams:

@example
ffmpeg -i /tmp/a.wav -ab 64k /tmp/a.mp2 -ab 128k /tmp/b.mp2 -map 0:0 -map 0:0
@end example

This converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s. '-map
file:index' specifies which input stream is used for each output
stream, in the order of the definition of output streams.

* You can transcode decrypted VOBs:

@example
ffmpeg -i snatch_1.vob -f avi -vcodec mpeg4 -b 800k -g 300 -bf 2 -acodec libmp3lame -ab 128k snatch.avi
@end example

This is a typical DVD ripping example; the input is a VOB file, the
output an AVI file with MPEG-4 video and MP3 audio. Note that in this
command we use B-frames so the MPEG-4 stream is DivX5 compatible, and
the GOP size is 300, which means one intra frame every 10 seconds for
29.97 fps input video. Furthermore, the audio stream is MP3-encoded, so
you need to enable LAME support by passing @code{--enable-libmp3lame}
to configure. The mapping is particularly useful for DVD transcoding
to get the desired audio language.

NOTE: To see the supported input formats, use @code{ffmpeg -formats}.

* You can extract images from a video:

@example
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
@end example

This will extract one video frame per second from the video and will
output them in files named @file{foo-001.jpeg}, @file{foo-002.jpeg},
etc. Images will be rescaled to fit the new WxH values.

The syntax @code{foo-%03d.jpeg} specifies to use a decimal number
composed of three digits padded with zeroes to express the sequence
number. It is the same syntax supported by the C printf function, but
only formats accepting a normal integer are suitable.

If you want to extract just a limited number of frames, you can use the
above command in combination with the -vframes or -t option, or in
combination with -ss to start extracting from a certain point in time.

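For instance, a minimal sketch of such a combination (the file name and
the chosen offsets are placeholders, not part of the original example):
extract 10 frames starting 30 seconds into the input.

@example
ffmpeg -i foo.avi -ss 00:00:30 -vframes 10 -f image2 foo-%03d.jpeg
@end example
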
* You can put many streams of the same type in the output:

@example
ffmpeg -i test1.avi -i test2.avi -vcodec copy -acodec copy -vcodec copy -acodec copy test12.avi -newvideo -newaudio
@end example

In addition to the first video and audio streams, the resulting
output file @file{test12.avi} will contain the second video
and the second audio stream found in the input streams list.

The @code{-newvideo}, @code{-newaudio} and @code{-newsubtitle}
options have to be specified immediately after the name of the output
file to which you want to add them.
@c man end

@chapter Invocation

@section Syntax

The generic syntax is:

@example
@c man begin SYNOPSIS
ffmpeg [[infile options][@option{-i} @var{infile}]]... @{[outfile options] @var{outfile}@}...
@c man end
@end example
@c man begin DESCRIPTION
As a general rule, options are applied to the next specified
file. Therefore, order is important, and you can have the same
option on the command line multiple times. Each occurrence is
then applied to the next input or output file.

* To set the video bitrate of the output file to 64 kbit/s:
@example
ffmpeg -i input.avi -b 64k output.avi
@end example

* To force the frame rate of the output file to 24 fps:
@example
ffmpeg -i input.avi -r 24 output.avi
@end example

* To force the frame rate of the input file (valid for raw formats only)
to 1 fps and the frame rate of the output file to 24 fps:
@example
ffmpeg -r 1 -i input.m2v -r 24 output.avi
@end example

The format option may be needed for raw input files.

By default, FFmpeg tries to convert as losslessly as possible: it
uses the same audio and video parameters for the outputs as the ones
specified for the inputs.
@c man end

@c man begin OPTIONS
@section Main options

@table @option
@item -L
Show license.

@item -h
Show help.

@item -version
Show version.

@item -formats
Show available formats, codecs, protocols, ...

@item -f @var{fmt}
Force format.

@item -i @var{filename}
Input file name.

@item -y
Overwrite output files.

@item -t @var{duration}
Restrict the transcoded/captured video sequence
to the duration specified in seconds.
@code{hh:mm:ss[.xxx]} syntax is also supported.

@item -fs @var{limit_size}
Set the file size limit.

@item -ss @var{position}
Seek to given time position in seconds.
@code{hh:mm:ss[.xxx]} syntax is also supported.

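As an illustrative sketch (file names and time values are placeholders),
combining @option{-ss} with @option{-t} re-encodes a 60-second excerpt
starting two minutes into the input:

@example
ffmpeg -i input.avi -ss 00:02:00 -t 60 excerpt.avi
@end example
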
@item -itsoffset @var{offset}
Set the input time offset in seconds.
@code{[-]hh:mm:ss[.xxx]} syntax is also supported.
This option affects all the input files that follow it.
The offset is added to the timestamps of the input files.
Specifying a positive offset means that the corresponding
streams are delayed by 'offset' seconds.

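A hedged sketch of delaying one input relative to another (the file
names and the half-second offset are placeholders, and the explicit
@option{-map} selection is an assumption added for clarity): the offset
applies only to the second input, which follows it on the command line.

@example
ffmpeg -i video.avi -itsoffset 00:00:00.500 -i audio.wav -map 0:0 -map 1:0 out.avi
@end example
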
@item -title @var{string}
Set the title.

@item -timestamp @var{time}
Set the timestamp.

@item -author @var{string}
Set the author.

@item -copyright @var{string}
Set the copyright.

@item -comment @var{string}
Set the comment.

@item -album @var{string}
Set the album.

@item -track @var{number}
Set the track.

@item -year @var{number}
Set the year.

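Several of these metadata options can be combined in one command; a
small sketch (the values are placeholders, and whether a given field is
stored depends on the output container):

@example
ffmpeg -i input.avi -title "My Clip" -author "Jane Doe" -year 2008 output.avi
@end example
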
@item -v @var{number}
Set the logging verbosity level.

@item -target @var{type}
Specify target file type ("vcd", "svcd", "dvd", "dv", "dv50", "pal-vcd",
"ntsc-svcd", ... ). All the format options (bitrate, codecs,
buffer sizes) are then set automatically. You can just type:

@example
ffmpeg -i myfile.avi -target vcd /tmp/vcd.mpg
@end example

Nevertheless you can specify additional options as long as you know
they do not conflict with the standard, as in:

@example
ffmpeg -i myfile.avi -target vcd -bf 2 /tmp/vcd.mpg
@end example

@item -dframes @var{number}
Set the number of data frames to record.

@item -scodec @var{codec}
Force subtitle codec ('copy' to copy stream).

@item -newsubtitle
Add a new subtitle stream to the current output stream.

@item -slang @var{code}
Set the ISO 639 language code (3 letters) of the current subtitle stream.

@end table

@section Video Options

@table @option
@item -b @var{bitrate}
Set the video bitrate in bit/s (default = 200 kb/s).
@item -vframes @var{number}
Set the number of video frames to record.
@item -r @var{fps}
Set frame rate (Hz value, fraction or abbreviation), (default = 25).
@item -s @var{size}
Set frame size. The format is @samp{wxh} (ffserver default = 160x128, ffmpeg default = same as source).
The following abbreviations are recognized:
@table @samp
@item sqcif
128x96
@item qcif
176x144
@item cif
352x288
@item 4cif
704x576
@item qqvga
160x120
@item qvga
320x240
@item vga
640x480
@item svga
800x600
@item xga
1024x768
@item uxga
1600x1200
@item qxga
2048x1536
@item sxga
1280x1024
@item qsxga
2560x2048
@item hsxga
5120x4096
@item wvga
852x480
@item wxga
1366x768
@item wsxga
1600x1024
@item wuxga
1920x1200
@item woxga
2560x1600
@item wqsxga
3200x2048
@item wquxga
3840x2400
@item whsxga
6400x4096
@item whuxga
7680x4800
@item cga
320x200
@item ega
640x350
@item hd480
852x480
@item hd720
1280x720
@item hd1080
1920x1080
@end table

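For instance, an abbreviation can be used in place of an explicit
resolution; a minimal sketch (file names are placeholders):

@example
ffmpeg -i input.avi -s hd720 output.avi
@end example
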
@item -aspect @var{aspect}
Set aspect ratio (4:3, 16:9 or 1.3333, 1.7777).
@item -croptop @var{size}
Set top crop band size (in pixels).
@item -cropbottom @var{size}
Set bottom crop band size (in pixels).
@item -cropleft @var{size}
Set left crop band size (in pixels).
@item -cropright @var{size}
Set right crop band size (in pixels).
@item -padtop @var{size}
Set top pad band size (in pixels).
@item -padbottom @var{size}
Set bottom pad band size (in pixels).
@item -padleft @var{size}
Set left pad band size (in pixels).
@item -padright @var{size}
Set right pad band size (in pixels).
@item -padcolor @var{hex_color}
Set the color of the padded bands. The value for padcolor is expressed
as a six-digit hexadecimal number where the first two digits
represent red, the middle two digits green and the last two digits
blue (default = 000000 (black)).
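As a hedged sketch of combining these options (the band sizes and the
pad color are arbitrary placeholders): crop 8 pixels from the top and
bottom, then pad 16 black pixels back on each side:

@example
ffmpeg -i input.avi -croptop 8 -cropbottom 8 -padtop 16 -padbottom 16 -padcolor 000000 output.avi
@end example
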
@item -vn
Disable video recording.
@item -bt @var{tolerance}
Set video bitrate tolerance (in bits, default 4000k).
Has a minimum value of: (target_bitrate/target_framerate).
In 1-pass mode, bitrate tolerance specifies how far ratecontrol is
willing to deviate from the target average bitrate value. This is
not related to min/max bitrate. Lowering tolerance too much has
an adverse effect on quality.
@item -maxrate @var{bitrate}
Set max video bitrate (in bit/s).
Requires -bufsize to be set.
@item -minrate @var{bitrate}
Set min video bitrate (in bit/s).
Most useful in setting up a CBR encode:
@example
ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
@end example
It is of little use otherwise.
@item -bufsize @var{size}
Set video buffer verifier buffer size (in bits).
@item -vcodec @var{codec}
Force video codec to @var{codec}. Use the @code{copy} special value to
specify that the raw codec data must be copied as is.
@item -sameq
Use the same video quality as the source (implies VBR).

@item -pass @var{n}
Select the pass number (1 or 2). It is used to do two-pass
video encoding. The statistics of the video are recorded in the first
pass into a log file (see also the option -passlogfile),
and in the second pass that log file is used to generate the video
at the exact requested bitrate.
On pass 1, you may just deactivate audio and set the output to null;
examples for Windows and Unix:
@example
ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y NUL
ffmpeg -i foo.mov -vcodec libxvid -pass 1 -an -f rawvideo -y /dev/null
@end example

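The second pass then reads that log file to produce the final output; a
sketch continuing the example above (the bitrate and output name are
placeholders):

@example
ffmpeg -i foo.mov -vcodec libxvid -pass 2 -b 800k foo.avi
@end example
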
@item -passlogfile @var{file}
Set two-pass log file name to @var{file}. Default name is
@file{ffmpeg2pass-N.log}, where N is a number specific to the output
stream.

@item -newvideo
Add a new video stream to the current output stream.

@end table

@section Advanced Video Options

@table @option
@item -pix_fmt @var{format}
Set pixel format. Use 'list' as parameter to show all the supported
pixel formats.
@item -sws_flags @var{flags}
Set SwScaler flags (only available when compiled with swscale support).
@item -g @var{gop_size}
Set the group of pictures size.
@item -intra
Use only intra frames.
@item -vdt @var{n}
Discard threshold.
@item -qscale @var{q}
Use fixed video quantizer scale (VBR).
@item -qmin @var{q}
minimum video quantizer scale (VBR)
@item -qmax @var{q}
maximum video quantizer scale (VBR)
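A small sketch of constraining the quantizer range around a target
bitrate (the values are arbitrary placeholders, not recommendations):

@example
ffmpeg -i input.avi -b 1000k -qmin 2 -qmax 15 output.avi
@end example
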
@item -qdiff @var{q}
maximum difference between the quantizer scales (VBR)
@item -qblur @var{blur}
video quantizer scale blur (VBR) (range 0.0 - 1.0)
@item -qcomp @var{compression}
video quantizer scale compression (VBR) (default 0.5).
Constant of the ratecontrol equation. Recommended range for the default rc_eq: 0.0-1.0

@item -lmin @var{lambda}
minimum video Lagrange factor (VBR)
@item -lmax @var{lambda}
maximum video Lagrange factor (VBR)
@item -mblmin @var{lambda}
minimum macroblock quantizer scale (VBR)
@item -mblmax @var{lambda}
maximum macroblock quantizer scale (VBR)

These four options (lmin, lmax, mblmin, mblmax) use 'lambda' units,
but you may use the QP2LAMBDA constant to easily convert from 'q' units:
@example
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
@end example

@item -rc_init_cplx @var{complexity}
initial complexity for single pass encoding
@item -b_qfactor @var{factor}
qp factor between P- and B-frames
@item -i_qfactor @var{factor}
qp factor between P- and I-frames
@item -b_qoffset @var{offset}
qp offset between P- and B-frames
@item -i_qoffset @var{offset}
qp offset between P- and I-frames
@item -rc_eq @var{equation}
Set rate control equation (@pxref{FFmpeg formula
evaluator}) (default = @code{tex^qComp}).
@item -rc_override @var{override}
rate control override for specific intervals
@item -me_method @var{method}
Set motion estimation method to @var{method}.
Available methods are (from lowest to highest quality):
@table @samp
@item zero
Try just the (0, 0) vector.
@item phods
@item log
@item x1
@item hex
@item umh
@item epzs
(default method)
@item full
exhaustive search (slow and marginally better than epzs)
@end table

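For instance, a sketch selecting the exhaustive search (file names are
placeholders; expect it to be noticeably slower than the default):

@example
ffmpeg -i input.avi -me_method full output.avi
@end example
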
@item -dct_algo @var{algo}
Set DCT algorithm to @var{algo}. Available values are:
@table @samp
@item 0
FF_DCT_AUTO (default)
@item 1
FF_DCT_FASTINT
@item 2
FF_DCT_INT
@item 3
FF_DCT_MMX
@item 4
FF_DCT_MLIB
@item 5
FF_DCT_ALTIVEC
@end table

@item -idct_algo @var{algo}
Set IDCT algorithm to @var{algo}. Available values are:
@table @samp
@item 0
FF_IDCT_AUTO (default)
@item 1
FF_IDCT_INT
@item 2
FF_IDCT_SIMPLE
@item 3
FF_IDCT_SIMPLEMMX
@item 4
FF_IDCT_LIBMPEG2MMX
@item 5
FF_IDCT_PS2
@item 6
FF_IDCT_MLIB
@item 7
FF_IDCT_ARM
@item 8
FF_IDCT_ALTIVEC
@item 9
FF_IDCT_SH4
@item 10
FF_IDCT_SIMPLEARM
@end table

@item -er @var{n}
Set error resilience to @var{n}.
@table @samp
@item 1
FF_ER_CAREFUL (default)
@item 2
FF_ER_COMPLIANT
@item 3
FF_ER_AGGRESSIVE
@item 4
FF_ER_VERY_AGGRESSIVE
@end table

@item -ec @var{bit_mask}
Set error concealment to @var{bit_mask}. @var{bit_mask} is a bit mask of
the following values:
@table @samp
@item 1
FF_EC_GUESS_MVS (default = enabled)
@item 2
FF_EC_DEBLOCK (default = enabled)
@end table

@item -bf @var{frames}
Use 'frames' B-frames (supported for MPEG-1, MPEG-2 and MPEG-4).
@item -mbd @var{mode}
macroblock decision
@table @samp
@item 0
FF_MB_DECISION_SIMPLE: Use mb_cmp (cannot change it yet in FFmpeg).
@item 1
FF_MB_DECISION_BITS: Choose the one which needs the fewest bits.
@item 2
FF_MB_DECISION_RD: rate distortion
@end table

@item -4mv
Use four motion vectors per macroblock (MPEG-4 only).
@item -part
Use data partitioning (MPEG-4 only).
@item -bug @var{param}
Work around encoder bugs that are not auto-detected.
@item -strict @var{strictness}
How strictly to follow the standards.
@item -aic
Enable advanced intra coding (H.263+).
@item -umv
Enable unlimited motion vectors (H.263+).

@item -deinterlace
Deinterlace pictures.
@item -ilme
Force interlacing support in the encoder (MPEG-2 and MPEG-4 only).
Use this option if your input file is interlaced and you want
to keep the interlaced format for minimum losses.
The alternative is to deinterlace the input stream with
@option{-deinterlace}, but deinterlacing introduces losses.
@item -psnr
Calculate PSNR of compressed frames.
@item -vstats
Dump video coding statistics to @file{vstats_HHMMSS.log}.
@item -vstats_file @var{file}
Dump video coding statistics to @var{file}.
@item -vhook @var{module}
Insert video processing @var{module}. @var{module} contains the module
name and its parameters separated by spaces.
@item -top @var{n}
top=1/bottom=0/auto=-1 field first
@item -dc @var{precision}
Set intra DC precision.
@item -vtag @var{fourcc/tag}
Force video tag/fourcc.
@item -qphist
Show QP histogram.
@item -vbsf @var{bitstream_filter}
Bitstream filters available are "dump_extra", "remove_extra", "noise", "h264_mp4toannexb", "imxdump", "mjpegadump".
@example
ffmpeg -i h264.mp4 -vcodec copy -vbsf h264_mp4toannexb -an out.h264
@end example
@end table

@section Audio Options

@table @option
@item -aframes @var{number}
Set the number of audio frames to record.
@item -ar @var{freq}
Set the audio sampling frequency (default = 44100 Hz).
@item -ab @var{bitrate}
Set the audio bitrate in bit/s (default = 64k).
@item -ac @var{channels}
Set the number of audio channels (default = 1).
@item -an
Disable audio recording.
@item -acodec @var{codec}
Force audio codec to @var{codec}. Use the @code{copy} special value to
specify that the raw codec data must be copied as is.
@item -newaudio
Add a new audio track to the output file. If you want to specify parameters,
do so before @code{-newaudio} (@code{-acodec}, @code{-ab}, etc.).

Mapping is done automatically if the number of output streams is equal to
the number of input streams; otherwise it picks the first one that matches. You
can override the mapping using @code{-map} as usual.

Example:
@example
ffmpeg -i file.mpg -vcodec copy -acodec ac3 -ab 384k test.mpg -acodec mp2 -ab 192k -newaudio
@end example
@item -alang @var{code}
Set the ISO 639 language code (3 letters) of the current audio stream.
@end table

@section Advanced Audio options:

@table @option
@item -atag @var{fourcc/tag}
Force audio tag/fourcc.
@item -absf @var{bitstream_filter}
Bitstream filters available are "dump_extra", "remove_extra", "noise", "mp3comp", "mp3decomp".
@end table

@section Subtitle options:

@table @option
@item -scodec @var{codec}
Force subtitle codec ('copy' to copy stream).
@item -newsubtitle
Add a new subtitle stream to the current output stream.
@item -slang @var{code}
Set the ISO 639 language code (3 letters) of the current subtitle stream.
@item -sbsf @var{bitstream_filter}
Bitstream filters available are "mov2textsub", "text2movsub".
@example
ffmpeg -i file.mov -an -vn -sbsf mov2textsub -scodec copy -f rawvideo sub.txt
@end example
@end table

@section Audio/Video grab options

@table @option
@item -vc @var{channel}
Set video grab channel (DV1394 only).
@item -tvstd @var{standard}
Set television standard (NTSC, PAL (SECAM)).
@item -isync
Synchronize read on input.
@end table

@section Advanced options

@table @option
@item -map @var{input_stream_id}[:@var{sync_stream_id}]
Set stream mapping from input streams to output streams.
Just enumerate the input streams in the order you want them in the output.
@var{sync_stream_id}, if specified, sets the input stream to sync
against.
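A short sketch of explicit mapping (file names are placeholders; it
assumes the first input's stream 0 is video and the second input's
stream 0 is audio): take the video from a.avi and the audio from b.wav.

@example
ffmpeg -i a.avi -i b.wav -map 0:0 -map 1:0 out.avi
@end example
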
@item -map_meta_data @var{outfile}:@var{infile}
Set meta data information of @var{outfile} from @var{infile}.
@item -debug
Print specific debug info.
@item -benchmark
Add timings for benchmarking.
@item -dump
Dump each input packet.
@item -hex
When dumping packets, also dump the payload.
@item -bitexact
Only use bit exact algorithms (for codec testing).
@item -ps @var{size}
Set packet size in bits.
@item -re
Read input at native frame rate. Mainly used to simulate a grab device.
@item -loop_input
Loop over the input stream. Currently it works only for image
streams. This option is used for automatic FFserver testing.
@item -loop_output @var{number_of_times}
Repeatedly loop output for formats that support looping such as animated GIF
(0 will loop the output infinitely).
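For example, a hedged sketch of producing an endlessly looping animated
GIF (the file names are placeholders, and the @option{-pix_fmt rgb24}
conversion is an assumption about what the GIF encoder expects):

@example
ffmpeg -i input.avi -pix_fmt rgb24 -loop_output 0 output.gif
@end example
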
@item -threads @var{count}
Thread count.
@item -vsync @var{parameter}
Video sync method. The video is stretched/squeezed to match the timestamps;
this is done by duplicating and dropping frames. With -map you can select from
which stream the timestamps should be taken. You can leave either video or
audio unchanged and sync the remaining stream(s) to the unchanged one.
@item -async @var{samples_per_second}
Audio sync method. "Stretches/squeezes" the audio stream to match the timestamps;
the parameter is the maximum number of samples per second by which the audio is changed.
-async 1 is a special case where only the start of the audio stream is corrected
without any later correction.
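A minimal sketch of correcting only the initial audio offset (the file
names are placeholders):

@example
ffmpeg -i input.avi -async 1 output.avi
@end example
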
@item -copyts
Copy timestamps from input to output.
@item -shortest
Finish encoding when the shortest input stream ends.
@item -dts_delta_threshold
Timestamp discontinuity delta threshold.
@item -muxdelay @var{seconds}
Set the maximum demux-decode delay.
@item -muxpreload @var{seconds}
Set the initial demux-decode delay.
@end table

@section Preset files

A preset file contains a sequence of @var{option}=@var{value} pairs,
one per line, specifying a sequence of options which would be
awkward to specify on the command line. Lines starting with the hash
('#') character are ignored and are used to provide comments. Check
the @file{ffpresets} directory in the FFmpeg source tree for examples.

Preset files are specified with the @code{vpre}, @code{apre} and
@code{spre} options. The options specified in a preset file are
applied to the currently selected codec of the same type as the preset
option.

The argument passed to the preset options identifies the preset file
to use according to the following rules.

First ffmpeg searches for a file named @var{arg}.ffpreset in the
directories @file{$HOME/.ffmpeg}, @file{/usr/local/share/ffmpeg} and
@file{/usr/share/ffmpeg} in that order. For example, if the argument
is @code{libx264-max}, it will search for the file
@file{libx264-max.ffpreset}.

If no such file is found, then ffmpeg will search for a file named
@var{codec_name}-@var{arg}.ffpreset in the above-mentioned
directories, where @var{codec_name} is the name of the codec to which
the preset file options will be applied. For example, if you select
the video codec with @code{-vcodec libx264} and use @code{-vpre max},
then it will search for the file @file{libx264-max.ffpreset}.

Finally, if the above rules fail and the argument specifies an
absolute pathname, ffmpeg will search for that filename. This way you
can specify the absolute and complete filename of the preset file, for
example @file{./ffpresets/libx264-max.ffpreset}.

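As an illustrative sketch (the file contents below are assumptions for
demonstration, not a recommended configuration), a preset named
@file{libx264-max.ffpreset} placed in @file{$HOME/.ffmpeg} could look
like this and be selected with @code{-vpre max}:

@example
# hypothetical contents of $HOME/.ffmpeg/libx264-max.ffpreset
coder=1
flags=+loop
subq=7
@end example

@example
ffmpeg -i input.avi -vcodec libx264 -vpre max output.mp4
@end example
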
@node FFmpeg formula evaluator
@section FFmpeg formula evaluator

When evaluating a rate control string, FFmpeg uses an internal formula
evaluator.

The following binary operators are available: @code{+}, @code{-},
@code{*}, @code{/}, @code{^}.

The following unary operators are available: @code{+}, @code{-},
@code{(...)}.

The following functions are available:
@table @var
@item sinh(x)
@item cosh(x)
@item tanh(x)
@item sin(x)
@item cos(x)
@item tan(x)
@item exp(x)
@item log(x)
@item squish(x)
@item gauss(x)
@item abs(x)
@item max(x, y)
@item min(x, y)
@item gt(x, y)
@item lt(x, y)
@item eq(x, y)
@item bits2qp(bits)
@item qp2bits(qp)
@end table

The following constants are available:
@table @var
@item PI
@item E
@item iTex
@item pTex
@item tex
@item mv
@item fCode
@item iCount
@item mcVar
@item var
@item isI
@item isP
@item isB
@item avgQP
@item qComp
@item avgIITex
@item avgPITex
@item avgPPTex
@item avgBPTex
@item avgTex
@end table

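Such a string is passed with @option{-rc_eq}; a minimal sketch that
simply spells out the default equation (the quoting keeps the shell
from interpreting the expression, and the bitrate is a placeholder):

@example
ffmpeg -i input.avi -b 1000k -rc_eq 'tex^qComp' output.mpg
@end example
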
@c man end

@ignore

@setfilename ffmpeg
@settitle FFmpeg video converter

@c man begin SEEALSO
ffserver(1), ffplay(1) and the HTML documentation of @file{ffmpeg}.
@c man end

@c man begin AUTHOR
Fabrice Bellard
@c man end

@end ignore

@section Protocols

The file name can be @file{-} to read from standard input or to write
to standard output.

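For example, a sketch that writes an MPEG-PS stream to standard output
and redirects it to a file (assuming the chosen container can be
written to a non-seekable pipe):

@example
ffmpeg -i input.avi -f mpeg - > output.mpg
@end example
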
FFmpeg also handles many protocols specified with a URL syntax.

Use 'ffmpeg -formats' to see a list of the supported protocols.

The protocol @code{http:} is currently used only to communicate with
FFserver (see the FFserver documentation). When FFmpeg becomes a
video player it will also be used for streaming :-)

@chapter Tips

@itemize
@item For streaming at very low bitrates, use a low frame rate
and a small GOP size. This is especially true for RealVideo where
the Linux player does not seem to be very fast, so it can miss
frames. An example is:

@example
ffmpeg -g 3 -r 3 -t 10 -b 50k -s qcif -f rv10 /tmp/b.rm
@end example

@item The parameter 'q' which is displayed while encoding is the current
quantizer. The value 1 indicates that a very good quality could
be achieved. The value 31 indicates the worst quality. If q=31 appears
too often, it means that the encoder cannot compress enough to meet
your bitrate. You must either increase the bitrate, decrease the
frame rate or decrease the frame size.

@item If your computer is not fast enough, you can speed up the
compression at the expense of the compression ratio. You can use
'-me_method zero' to speed up motion estimation, and '-intra' to disable
motion estimation completely (you have only I-frames, which means it
is about as good as JPEG compression).

@item To have very low audio bitrates, reduce the sampling frequency
(down to 22050 Hz for MPEG audio, 22050 or 11025 for AC-3).

@item To have a constant quality (but a variable bitrate), use the option
'-qscale n' where 'n' is between 1 (excellent quality) and 31 (worst
quality).

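For instance (the value 2 is just an illustration of a high-quality
setting, and the file names are placeholders):

@example
ffmpeg -i input.avi -qscale 2 output.avi
@end example
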
@item When converting video files, you can use the '-sameq' option, which
uses the same quality factor in the encoder as in the decoder.
It allows almost lossless encoding.

@end itemize

@bye