@chapter Protocols
@c man begin PROTOCOLS

Protocols are configured elements in Libav which allow access to
resources that require the use of a particular protocol.

When you configure your Libav build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option "--list-protocols".

You can disable all the protocols using the configure option
"--disable-protocols", and selectively enable a protocol using the
option "--enable-protocol=@var{PROTOCOL}", or you can disable a
particular protocol using the option
"--disable-protocol=@var{PROTOCOL}".
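
For instance, to list the available protocols and then build with only
the file and http protocols enabled (the protocols chosen here are only
illustrative):
@example
./configure --list-protocols
./configure --disable-protocols --enable-protocol=file --enable-protocol=http
@end example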

The option "-protocols" of the ff* tools will display the list of
supported protocols.
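
For example, to print that list with @file{ffmpeg}:
@example
ffmpeg -protocols
@end example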

A description of the currently available protocols follows.

@section applehttp

Read an Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.
HTTP is the default; a specific protocol can be declared by appending
"+@var{proto}" to the applehttp URI scheme name, where @var{proto}
is either "file" or "http".

@example
applehttp://host/path/to/remote/resource.m3u8
applehttp+http://host/path/to/remote/resource.m3u8
applehttp+file://path/to/local/resource.m3u8
@end example

@section concat

Physical concatenation protocol.

Allows reading and seeking from many resources in sequence as if they
were a unique resource.

A URL accepted by this protocol has the syntax:
@example
concat:@var{URL1}|@var{URL2}|...|@var{URLN}
@end example

where @var{URL1}, @var{URL2}, ..., @var{URLN} are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.

For example to read a sequence of files @file{split1.mpeg},
@file{split2.mpeg}, @file{split3.mpeg} with @file{ffplay} use the
command:
@example
ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
@end example

Note that you may need to escape the character "|" which is special for
many shells.

@section file

File access protocol.

Allows reading from or writing to a file.

For example to read from a file @file{input.mpeg} with @file{ffmpeg}
use the command:
@example
ffmpeg -i file:input.mpeg output.mpeg
@end example

The ff* tools default to the file protocol; that is, a resource
specified with the name "FILE.mpeg" is interpreted as the URL
"file:FILE.mpeg".

@section gopher

Gopher protocol.
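
As an illustration only (the host and path below are placeholders, not
a real resource), a Gopher resource is accessed with a standard
gopher URL:
@example
ffplay gopher://gopher.example.com/path/to/resource
@end example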

@section http

HTTP (Hyper Text Transfer Protocol).
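
For example, to read a remote resource over HTTP with @file{ffmpeg}
(the server name and path here are purely illustrative):
@example
ffmpeg -i http://server/path/to/input.mpeg output.mpeg
@end example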

@section mmst

MMS (Microsoft Media Server) protocol over TCP.
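
For example, to play an MMS stream over TCP with @file{ffplay} (the
server name and path are illustrative):
@example
ffplay mmst://server/path/to/stream
@end example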

@section mmsh

MMS (Microsoft Media Server) protocol over HTTP.

The required syntax is:
@example
mmsh://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example
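
For example, to play a stream with @file{ffplay} following the syntax
above (the server, application and play path are illustrative):
@example
ffplay mmsh://server/app/playpath
@end example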

@section md5

MD5 output protocol.

Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.

Some examples follow.
@example
# Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.

@section pipe

UNIX pipe access protocol.

Allows reading from and writing to UNIX pipes.

The accepted syntax is:
@example
pipe:[@var{number}]
@end example

@var{number} is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr). If @var{number}
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.

For example to read from stdin with @file{ffmpeg}:
@example
cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
@end example

For writing to stdout with @file{ffmpeg}:
@example
ffmpeg -i test.wav -f avi pipe:1 | cat > test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat > test.avi
@end example

Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.

@section rtmp

Real-Time Messaging Protocol.

The Real-Time Messaging Protocol (RTMP) is used for streaming
multimedia content across a TCP/IP network.

The required syntax is:
@example
rtmp://@var{server}[:@var{port}][/@var{app}][/@var{playpath}]
@end example

The accepted parameters are:
@table @option

@item server
The address of the RTMP server.

@item port
The number of the TCP port to use (by default 1935).

@item app
It is the name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. @file{/ondemand/}, @file{/flash/live/}, etc.).

@item playpath
It is the path or name of the resource to play with reference to the
application specified in @var{app}, and may be prefixed by "mp4:".

@end table

For example to read with @file{ffplay} a multimedia resource named
"sample" from the application "vod" from an RTMP server "myserver":
@example
ffplay rtmp://myserver/vod/sample
@end example

@section rtmp, rtmpe, rtmps, rtmpt, rtmpte

Real-Time Messaging Protocol and its variants supported through
librtmp.

Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
"--enable-librtmp". If enabled this will replace the native RTMP
protocol.

This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).

The required syntax is:
@example
@var{rtmp_proto}://@var{server}[:@var{port}][/@var{app}][/@var{playpath}] @var{options}
@end example

where @var{rtmp_proto} is one of the strings "rtmp", "rtmpt", "rtmpe",
"rtmps", "rtmpte", "rtmpts" corresponding to each RTMP variant, and
@var{server}, @var{port}, @var{app} and @var{playpath} have the same
meaning as specified for the RTMP native protocol.
@var{options} contains a list of space-separated options of the form
@var{key}=@var{val}.

See the librtmp manual page (man 3 librtmp) for more information.

For example, to stream a file in real-time to an RTMP server using
@file{ffmpeg}:
@example
ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
@end example

To play the same stream using @file{ffplay}:
@example
ffplay "rtmp://myserver/live/mystream live=1"
@end example

@section rtp

Real-time Transport Protocol.
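
As a minimal sketch, a single stream can be sent over RTP with
@file{ffmpeg} (the host and port below are illustrative; @code{-an}
drops the audio here, since the RTP muxer carries one stream at a time):
@example
ffmpeg -re -i @var{input} -an -f rtp rtp://remotehost:5004
@end example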

@section rtsp

RTSP is not technically a protocol handler in libavformat, it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).

The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock's
RTSP server, @url{http://github.com/revmischa/rtsp-server}).

The required syntax for an RTSP URL is:
@example
rtsp://@var{hostname}[:@var{port}]/@var{path}[?@var{options}]
@end example

@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item udp
Use UDP as lower transport protocol.

@item tcp
Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.

@item multicast
Use UDP multicast as lower transport protocol.

@item http
Use HTTP tunneling as lower transport protocol, which is useful for
passing proxies.

@item filter_src
Accept packets only from negotiated peer address and port.
@end table

Multiple lower transport protocols may be specified; in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the @code{tcp} and @code{udp} options are supported.

When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). In
order for this to be enabled, a maximum delay must be specified in the
@code{max_delay} field of AVFormatContext.

When watching multi-bitrate Real-RTSP streams with @file{ffplay}, the
streams to display can be chosen with @code{-vst} @var{n} and
@code{-ast} @var{n} for video and audio respectively, and can be switched
on the fly by pressing @code{v} and @code{a}.

Example command lines:

To watch a stream over UDP, with a max reordering delay of 0.5 seconds:
@example
ffplay -max_delay 500000 rtsp://server/video.mp4?udp
@end example

To watch a stream tunneled over HTTP:
@example
ffplay rtsp://server/video.mp4?http
@end example

To send a stream in realtime to an RTSP server, for others to watch:
@example
ffmpeg -re -i @var{input} -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
@end example

@section sap

Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat, it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.

@subsection Muxer

The syntax for a SAP URL given to the muxer is:
@example
sap://@var{destination}[:@var{port}][?@var{options}]
@end example

The RTP packets are sent to @var{destination} on port @var{port},
or to port 5004 if no port is specified.
@var{options} is a @code{&}-separated list. The following options
are supported:

@table @option

@item announce_addr=@var{address}
Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if @var{destination} is an IPv6 address.

@item announce_port=@var{port}
Specify the port to send the announcements on, defaults to
9875 if not specified.

@item ttl=@var{ttl}
Specify the time to live value for the announcements and RTP packets,
defaults to 255.

@item same_port=@var{0|1}
If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
@end table

Example command lines follow.

To broadcast a stream on the local subnet, for watching in VLC:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255?same_port=1
@end example

Similarly, for watching in ffplay:
@example
ffmpeg -re -i @var{input} -f sap sap://224.0.0.255
@end example

And for watching in ffplay, over IPv6:
@example
ffmpeg -re -i @var{input} -f sap sap://[ff0e::1:2:3:4]
@end example

@subsection Demuxer

The syntax for a SAP URL given to the demuxer is:
@example
sap://[@var{address}][:@var{port}]
@end example

@var{address} is the multicast address to listen for announcements on;
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. @var{port}
is the port that is listened on, 9875 if omitted.

The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.

Example command lines follow.

To play back the first stream announced on the normal SAP multicast address:
@example
ffplay sap://
@end example

To play back the first stream announced on the default IPv6 SAP multicast address:
@example
ffplay sap://[ff0e::2:7ffe]
@end example

@section tcp

Transmission Control Protocol.

The required syntax for a TCP URL is:
@example
tcp://@var{hostname}:@var{port}[?@var{options}]
@end example

@table @option

@item listen
Listen for an incoming connection.

@example
ffmpeg -i @var{input} -f @var{format} tcp://@var{hostname}:@var{port}?listen
ffplay tcp://@var{hostname}:@var{port}
@end example

@end table

@section udp

User Datagram Protocol.

The required syntax for a UDP URL is:
@example
udp://@var{hostname}:@var{port}[?@var{options}]
@end example

@var{options} contains a list of &-separated options of the form @var{key}=@var{val}.
The list of supported options follows.

@table @option

@item buffer_size=@var{size}
set the UDP buffer size in bytes

@item localport=@var{port}
override the local UDP port to bind with

@item pkt_size=@var{size}
set the size in bytes of UDP packets

@item reuse=@var{1|0}
explicitly allow or disallow reusing UDP sockets

@item ttl=@var{ttl}
set the time to live value (for multicast only)

@item connect=@var{1|0}
Initialize the UDP socket with @code{connect()}. In this case, the
destination address can't be changed with ff_udp_set_remote_url later.
If the destination address isn't known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if "destination
unreachable" is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
@end table

Some usage examples of the udp protocol with @file{ffmpeg} follow.

To stream over UDP to a remote endpoint:
@example
ffmpeg -i @var{input} -f @var{format} udp://@var{hostname}:@var{port}
@end example

To stream in mpegts format over UDP using 188-byte UDP packets and a large input buffer:
@example
ffmpeg -i @var{input} -f mpegts udp://@var{hostname}:@var{port}?pkt_size=188&buffer_size=65535
@end example

To receive over UDP from a remote endpoint:
@example
ffmpeg -i udp://[@var{multicast-address}]:@var{port}
@end example

@c man end PROTOCOLS