We still only support a single layer, but this allows receiving
streams that have this structure present even for single-layer
streams.
Signed-off-by: Martin Storsjö <martin@martin.st>
This codepath isn't quite as bad as it used to sound, if fragments
are cut automatically at video packets.
Signed-off-by: Martin Storsjö <martin@martin.st>
Restore alphabetical order in lists, break overly long lines, do some
pretty-printing, add some explanatory section comments, and group
parts that belong together logically.
The problem is that the argument 'q' is of type uint8_t. According
to the JPEG standard, if 1 <= q <= 50, the scale factor 'S' should
be 5000 / q. Because create_default_qtables() reuses the variable
'q' to store the result of this calculation, for small values of
q < 20, 'q' will subsequently overflow and give wrong results in the
calculated quantization tables.
Instead, use a new variable 'S' (same name as in RFC 2435) with the
proper range to store the result of the division.
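A minimal sketch of the fix, assuming a 128-entry default_quantizers
table as in RFC 2435 and a local clip helper standing in for
libavutil's av_clip:

    #include <stdint.h>

    /* Hypothetical stand-in for the 128-entry default quantizer table
     * from RFC 2435. */
    extern const uint8_t default_quantizers[128];

    static int clip(int v, int lo, int hi)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    static void create_default_qtables(uint8_t *qtables, uint8_t q)
    {
        int i, S;

        q = clip(q, 1, 99);
        if (q < 50)
            S = 5000 / q;    /* up to 5000 for q = 1; must not be stored
                              * back into the 8-bit 'q' */
        else
            S = 200 - q * 2;

        for (i = 0; i < 128; i++)
            qtables[i] = clip((default_quantizers[i] * S + 50) / 100,
                              1, 255);
    }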
Signed-off-by: Martin Storsjö <martin@martin.st>
Apply the default value for timeout in code instead of via the
AVOption, to allow distinguishing the default value from the user
not setting anything at all.
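A sketch of the pattern, with hypothetical names (Context, OFFSET,
DEFAULT_TIMEOUT, D, E): the AVOption declares -1 as an "unset"
sentinel, and the real default is applied in code:

    static const AVOption options[] = {
        /* -1 means "not set by the user", not the real default */
        { "timeout", "timeout of socket I/O operations", OFFSET(rw_timeout),
          AV_OPT_TYPE_INT, { .i64 = -1 }, -1, INT_MAX, D | E },
        { NULL }
    };

    /* Resolve the sentinel in code, so an explicitly set value can be
     * told apart from "nothing set at all". */
    static int64_t effective_timeout(const Context *s)
    {
        return s->rw_timeout >= 0 ? s->rw_timeout : DEFAULT_TIMEOUT;
    }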
Signed-off-by: Martin Storsjö <martin@martin.st>
Since all URLContexts have the same AVOptions, such AVOptions
will be applied on the outermost context only and removed from the
dict, while they probably make sense on all contexts.
This makes sure that rw_timeout gets propagated to the innermost
URLContext (to make sure it gets passed to the tcp protocol when
opening an HTTP connection, for instance).
Alternatively, such matching options could be kept in the dict
and only removed after the ffurl_connect call.
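A rough sketch of the idea using the AVDictionary API (the helper
itself is hypothetical): copy the matching entry into the options
handed to the nested context before it is opened:

    #include "libavutil/dict.h"

    /* Forward the rw_timeout entry from the outer options to the dict
     * used for a nested URLContext (e.g. tcp underneath http), so the
     * innermost protocol sees it too. */
    static void forward_rw_timeout(AVDictionary **inner,
                                   const AVDictionary *outer)
    {
        AVDictionaryEntry *e = av_dict_get(outer, "rw_timeout", NULL, 0);
        if (e)
            av_dict_set(inner, e->key, e->value, 0);
    }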
Signed-off-by: Martin Storsjö <martin@martin.st>
Using this requires setting the rw_timeout option to make it
terminate, or alternatively using the interrupt callback (if used
via the API).
Signed-off-by: Martin Storsjö <martin@martin.st>
If set to a non-zero value, this limits the duration of the
retry_transfer_wrapper() loop, thus affecting ffurl_read*() and
ffurl_write(). As soon as a single byte is successfully received
or transmitted, the timer restarts.
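A simplified model of those semantics (not the actual
retry_transfer_wrapper() code; now_us() and try_transfer() are
hypothetical):

    #include <errno.h>
    #include <stdint.h>

    int64_t now_us(void);                      /* hypothetical clock          */
    int try_transfer(uint8_t *buf, int size);  /* hypothetical single attempt */

    int transfer_with_timeout(uint8_t *buf, int size, int64_t rw_timeout_us)
    {
        int64_t deadline = now_us() + rw_timeout_us;
        int done = 0;

        while (done < size) {
            int ret = try_transfer(buf + done, size - done);
            if (ret > 0) {
                done += ret;
                deadline = now_us() + rw_timeout_us; /* progress: restart timer */
            } else if (rw_timeout_us && now_us() > deadline) {
                return -ETIMEDOUT;                   /* no byte within rw_timeout */
            }
        }
        return done;
    }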
This has further changes by Michael Niedermayer and Martin Storsjö.
Signed-off-by: Martin Storsjö <martin@martin.st>
Until now, the decoding API was restricted to outputting 0 or 1 frames
per input packet. It also enforced a somewhat rigid dataflow in general.
This new API seeks to relax these restrictions by decoupling input and
output. Instead of doing a single call on each decode step, which may
consume the packet and may produce output, the new API requires the user
to send input first, and then ask for output.
For now, there are no codecs supporting this API. The API can work with
codecs using the old API, and most code added here is to make them
interoperate. The reverse is not possible, although for audio it
might be.
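In terms of the send/receive calls this API introduces, the intended
dataflow for decoding looks roughly like this (error handling
trimmed):

    #include <libavcodec/avcodec.h>

    /* Feed one packet, then drain every frame the decoder has ready
     * before supplying more input. */
    int decode_packet(AVCodecContext *avctx, AVFrame *frame,
                      const AVPacket *pkt)
    {
        int ret = avcodec_send_packet(avctx, pkt); /* pkt == NULL flushes */
        if (ret < 0)
            return ret;

        while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
            /* ... use the decoded frame ... */
            av_frame_unref(frame);
        }
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   /* needs more input, or fully drained */
        return ret;     /* a real decoding error */
    }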
Signed-off-by: Anton Khirnov <anton@khirnov.net>
Check if the size is written in the first 4 bytes and read the next 4
as a fourcc candidate; fall back to checking the initial 4 bytes.
"The CodecPrivate contains all additional data that is stored in the
'stsd' (sample description) atom in the QuickTime file after the
mandatory video descriptor structure (starting with the size and FourCC
fields)"
CC: libav-stable@libav.org
Samples produced by Omneon (Harmonic) store external references with
paths ending with 0s. Such movs cannot be loaded properly, since every
0 is converted to '/' to keep the same parsing code for dref types 2
and 18: this makes the external reference point to a non-existing
directory rather than to the actual referenced file.
Add a brief trimming loop that drops all trailing 0s before trying to
parse the external reference path.
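The loop itself amounts to a few lines (sketch):

    #include <stdint.h>

    /* Drop trailing 0 bytes so Omneon-style padding does not turn into
     * bogus '/' path components when the dref entry is parsed. */
    static int trim_trailing_zeros(const uint8_t *path, int len)
    {
        while (len > 0 && path[len - 1] == 0)
            len--;
        return len;
    }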
Signed-off-by: Vittorio Giovara <vittorio.giovara@gmail.com>
Store the file duration in the same timebase it arrives in (i.e.
milliseconds) and only convert it to the file duration units (100ns)
when it's actually written, thus simplifying some calculations. Also,
store the duration as unsigned, since it cannot be negative.
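Concretely, the only conversion then happens at write time (sketch;
1 ms = 10000 units of 100 ns):

    #include <stdint.h>

    /* The duration is tracked in milliseconds, unsigned, and converted
     * to the file's 100 ns units only when the header is written. */
    static uint64_t duration_to_file_units(uint64_t duration_ms)
    {
        return duration_ms * 10000;
    }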
CC: libav-stable@libav.org
Bug-ID: CVE-2016-2326
Currently, AVStream contains an embedded AVCodecContext instance, which
is used by demuxers to export stream parameters to the caller and by
muxers to receive stream parameters from the caller. It is also used
internally as the codec context that is passed to parsers.
In addition, it is also widely used by the callers as the decoding (when
demuxing) or encoding (when muxing) context, though this has been
officially discouraged since Libav 11.
There are multiple important problems with this approach:
- the fields in AVCodecContext are in general one of
* stream parameters
* codec options
* codec state
However, it's not clear which ones are which. It is consequently
unclear which fields a demuxer is allowed to set or a muxer allowed to
read. This leads to erratic behaviour depending on whether decoding or
encoding is being performed or not (and whether it uses the AVStream
embedded codec context).
- various synchronization issues arising from the fact that the same
context is used by several different APIs (muxers/demuxers,
parsers, bitstream filters and encoders/decoders) simultaneously, with
there being no clear rules for who can modify what and the different
processes being typically delayed with respect to each other.
- avformat_find_stream_info() making it necessary to support opening
and closing a single codec context multiple times, thus
complicating the semantics of freeing various allocated objects in the
codec context.
Those problems are resolved by replacing the AVStream embedded codec
context with a newly added AVCodecParameters instance, which stores only
the stream parameters exported by the demuxers or read by the muxers.
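For callers, this means decoding with a context of their own, filled
from the stream's parameters; roughly:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Open a decoder from a stream's AVCodecParameters instead of
     * (ab)using the AVStream embedded codec context. */
    static AVCodecContext *open_decoder(const AVStream *st)
    {
        const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
        AVCodecContext *avctx = dec ? avcodec_alloc_context3(dec) : NULL;

        if (!avctx)
            return NULL;
        if (avcodec_parameters_to_context(avctx, st->codecpar) < 0 ||
            avcodec_open2(avctx, dec, NULL) < 0) {
            avcodec_free_context(&avctx);
            return NULL;
        }
        return avctx;
    }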
Instead of a linked list constructed at av_register_all(), store the
URL protocols in a constant array of pointers.
Since no registration is necessary now, this removes some global state
from lavf. This will also allow callers of the urlprotocol layer to limit
the available protocols in a simple and flexible way in the following
commits.
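Schematically, the change has this shape (names illustrative, not the
exact lavf internals):

    /* Before: a mutable global list built by av_register_all().
     * After: a constant array; no registration, no global state. */
    extern const URLProtocol ff_file_protocol;
    extern const URLProtocol ff_http_protocol;
    extern const URLProtocol ff_tcp_protocol;

    static const URLProtocol *const url_protocols[] = {
        &ff_file_protocol,
        &ff_http_protocol,
        &ff_tcp_protocol,
        NULL,
    };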