There are lots of files that don't need it: the number of object
files that actually need it went down from 2011 to 884 here.
Keep it for external users in order not to cause breakage.
Also improve the other headers a bit while at it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
C11 required the use of ATOMIC_VAR_INIT to statically initialize
atomic objects with static storage duration. Yet this macro
was unsuitable for initializing structures [1] and was actually
unneeded for all known implementations (this includes our
compatibility fallback implementations which simply wrap the value
in parentheses: #define ATOMIC_VAR_INIT(value) (value)).
Therefore C17 deprecated the macro and C23 actually removed it [2].
Since commit 5ff0eb34d2 we default
to C17 if the compiler supports it; Clang warns about ATOMIC_VAR_INIT
in this mode. Given that no implementation ever needed this macro,
this commit stops using it to avoid this warning.
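For illustration only (the variable name is made up, not taken from the
actual diff), the substitution amounts to:

    #include <stdatomic.h>

    /* before (deprecated in C17, removed in C23):
     *     static atomic_int nb_users = ATOMIC_VAR_INIT(0);
     * after: plain initialization, which every known implementation accepts */
    static atomic_int nb_users = 0;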
[1]: https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2396.htm#dr_485
[2]: https://en.cppreference.com/w/c/atomic/ATOMIC_VAR_INIT
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
lavfi does not require aligned buffers, so we can safely apply top/left
cropping by any amount, without passing any special flags to lavc.
Longer term, an even better solution would probably be auto-inserting
the crop filter (or its hwaccel versions) as needed.
Multiple FATE tests no longer need -flags unaligned.
Do not pass an options dictionary to avcodec_open2().
This should be equivalent to current behaviour, but will allow
overriding caller-supplied options in a cleaner and more robust manner.
We can now set the COPY_OPAQUE flag directly rather than going through
dec_opts.
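As a rough sketch (the helper below is illustrative, not the actual ffmpeg
code), the decoder-opening path now amounts to:

    #include <libavcodec/avcodec.h>

    static int open_decoder(AVCodecContext *dec_ctx, const AVCodec *codec)
    {
        /* previously routed through the dec_opts dictionary */
        dec_ctx->flags |= AV_CODEC_FLAG_COPY_OPAQUE;

        /* no caller-supplied options dictionary is passed anymore */
        return avcodec_open2(dec_ctx, codec, NULL);
    }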
There is only a single caller of filter_codec_opts() that passes
a NULL codec to it, which is streamcopy in ffmpeg CLI. In that case we
only want generic AVCodecContext options, not private options of any
specific encoder.
Do not return the return value of the last enc_send_to_dst()
call, as this would treat the last call differently from the
earlier calls; furthermore, sch_enc_send() is explicitly documented
to always return 0 on success.
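A sketch of the intended pattern; apart from enc_send_to_dst(), the names
and the call signature are illustrative:

    for (unsigned i = 0; i < nb_dsts; i++) {
        int ret = enc_send_to_dst(enc, i, frame);
        if (ret < 0)
            return ret;    /* real errors still propagate */
    }
    return 0;              /* success is always 0, matching sch_enc_send() */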
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
There is no need to free the already-added items; they will be freed
alongside the codec context. There is also little point in an error
message, as the only reason this can fail is malloc failure.
MATCH_PER_STREAM_OPT iterates over all options of a given
OptionDef and tests whether they apply to the current stream;
if so, ost->apad is set to the option's value; otherwise, the code
errors out. If no error happens, ost->apad is av_strdup'ed in order
to take ownership of this pointer.
But this means that setting ost->apad right away was premature,
as it leads to double-frees when an error happens later on.
This can simply be reproduced with
ffmpeg -filter_complex anullsrc -apad bar -apad:n baz -f null -
This is a regression since 83ace80bfd.
Fix this by using a temporary variable instead of directly
setting ost->apad. Also, only strdup the string if it is actually
non-NULL.
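Roughly (surrounding context omitted and slightly simplified):

    const char *apad = NULL;

    MATCH_PER_STREAM_OPT(apad, str, apad, oc, st); /* may error out internally */
    if (apad) {
        ost->apad = av_strdup(apad);               /* only now take ownership */
        if (!ost->apad)
            return AVERROR(ENOMEM);
    }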
Reviewed-by: Marth64 <marth64@proxyid.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since e0da916b8f the ffmpeg utility has
held multiple frames output by the decoder in internal queues without
telling the decoder that it is going to do so. When the decoder has a
fixed-size pool of frames (common in some hardware APIs where the output
frames must be stored as an array texture) this could lead to the pool
being exhausted and the decoder getting stuck. Fix this by telling the
decoder to allocate additional frames according to the queue size.
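A minimal sketch of the idea, assuming the caller knows how many frames its
queues can hold (the helper name and queue-size source are illustrative):

    #include <libavcodec/avcodec.h>

    static void reserve_queued_frames(AVCodecContext *dec_ctx, int queue_size)
    {
        /* ask the decoder to allocate extra frames beyond its own needs, so
         * frames parked in ffmpeg's internal queues cannot exhaust a fixed pool */
        dec_ctx->extra_hw_frames = queue_size;
    }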
OptionDef.u is only an offset (i.e. its off member) iff OPT_FLAG_OFFSET
is set. Otherwise, performing the pointer arithmetic can be undefined behaviour.
UBSan warns about this (on 32-bit arches):
src/fftools/ffmpeg_opt.c:102:15: runtime error: pointer index expression with base 0xffa4db10 overflowed to 0x56059a50
This commit fixes this by checking for OPT_FLAG_OFFSET first.
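Sketch of the fixed pattern; apart from OPT_FLAG_OFFSET and u.off, the names
are illustrative:

    #include <stdint.h>

    static void *opt_dst(const OptionDef *po, void *optctx)
    {
        /* only form the offset-based pointer when u actually holds an offset */
        return (po->flags & OPT_FLAG_OFFSET) ?
               (void *)((uint8_t *)optctx + po->u.off) : po->u.dst_ptr;
    }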
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This allows using WRAPPED_AVFRAME encoders with loopback decoders in
order to connect multiple filtergraphs together.
Clear the flag in muxers, since lavf does not need it for anything and
it would change the results of framecrc FATE tests.
The encoder timebase is equal to the frame timebase, so it does not need
to be passed separately.
Also, rename in_picture to frame, which is shorter and more accurate -
it always contains a frame, never a field.
These functions used to be passed directly to pthread_create(), which
required them to return void*. This is no longer the case, so they can
return a plain int.
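For example (the function name is illustrative):

    /* before, as required by pthread_create():
     *     static void *demux_thread(void *arg);
     * after: */
    static int demux_thread(void *arg);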
The current call stack looks like this:
* ifilter_bind_ist() (filter) calls ist_filter_add() (demuxer);
* ist_filter_add() opens the decoder, and then calls
dec_add_filter() (decoder);
* dec_add_filter() calls ifilter_parameters_from_dec() (i.e. back into
the filtering code) in order to give post-avcodec_open2() parameters
to the filter.
This is unnecessarily complicated. Pass the parameters as follows
instead:
* dec_init() (which opens the decoder) returns post-avcodec_open2()
parameters to its caller (i.e. the demuxer) in a parameter-only
AVFrame
* the demuxer passes these parameters to the filter in
InputFilterOptions, together with other filter options
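A rough sketch of such a parameters-only AVFrame (field selection
illustrative, not the actual dec_init() code):

    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    static int dec_params_frame(const AVCodecContext *dec_ctx, AVFrame **out)
    {
        AVFrame *par = av_frame_alloc();
        if (!par)
            return AVERROR(ENOMEM);

        /* copy post-avcodec_open2() values; deliberately no av_frame_get_buffer(),
         * as the frame carries only parameters, no data */
        par->format              = dec_ctx->pix_fmt;
        par->width               = dec_ctx->width;
        par->height              = dec_ctx->height;
        par->sample_aspect_ratio = dec_ctx->sample_aspect_ratio;

        *out = par;
        return 0;
    }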
The first of these binds inputs of complex filtergraphs to demuxer
streams (with a misleading comment claiming it *creates* complex
filtergraphs).
The second ensures that all filtergraph outputs are connected to an
encoder.
Merge them into a single function, which simplifies the ffmpeg_filter
API, is shorter, and will also be useful in following commits.
Also, rename the misleadingly-named init_input_filter() to
fg_complex_bind_input().
Rename dec_open() to dec_init(), as the new name is more descriptive of its
new purpose.
Will be useful in following commits, which will add a new path for
opening decoders.
Treat it analogously to stream parameters like format/dimensions/etc.
This is functionally different from the previous code in two ways:
* for non-CFR video, the frame timebase (set by the decoder) is used
rather than the demuxer timebase
* for sub2video, AV_TIME_BASE_Q is used, which is hardcoded by the
subtitle decoding API
These changes should avoid unnecessary and potentially lossy timestamp
conversions from decoder timebase into the demuxer one.
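A minimal sketch covering just the two cases above (the is_sub2video flag is
illustrative; CFR video is handled elsewhere):

    #include <libavutil/avutil.h>
    #include <libavutil/frame.h>

    static AVRational input_timebase(const AVFrame *frame, int is_sub2video)
    {
        return is_sub2video ? AV_TIME_BASE_Q     /* hardcoded by the subtitle API */
                            : frame->time_base;  /* decoder timebase, not the demuxer's */
    }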
Changes the timebases used in sub2video tests.
Apparently it can happen that avfilter_graph_request_oldest() returns
EAGAIN, yet av_buffersrc_get_nb_failed_requests() returns 0 for every
input.
Works around #10795, though the root issue is most likely in the
scale2ref filter.