When the --disable-ssse3 --disable-optimizations options were used,
the compiler did not skip the MC_LINKS as it does when the
optimizations are enabled, so it failed to find the function
prototypes. Hence, this commit adds prototypes for the functions
in the same way as the HEVC DSP code.
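As a rough illustration (the identifier below is hypothetical, not
an actual VVC DSP symbol), the prototypes are declared
unconditionally, as the HEVC DSP code does, so that unoptimized
builds still see them:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical example: declaring the prototype unconditionally
     * keeps --disable-ssse3 --disable-optimizations builds compiling,
     * even though the optimized function is never selected. */
    void ff_vvc_example_fn_ssse3(uint8_t *dst, const uint8_t *src,
                                 ptrdiff_t stride, int height);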
Signed-off-by: Wu Jianhua <toqsxw@outlook.com>
Avoids allocations and therefore error checks; also avoids
indirections and allows removing the boilerplate code
for creating an object with a dedicated free function.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids boilerplate code when creating the context
and avoids allocations, and therefore whole error paths,
when creating references to it. Also avoids an indirection
and improves type safety.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
After the AVBuffer that is put into frame->buf[0] has been created,
ownership of several objects (namely an AVDRMFrameDescriptor,
an MppFrame and the AVBufferRefs framecontextref and decoder_ref)
has passed to the AVBuffer and therefore to the frame.
Yet these objects were nevertheless freed manually on error
afterwards, which would lead to a double free as soon
as the AVFrame is unreferenced.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids allocations and therefore error checks and cleanup code;
also avoids indirections.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks. It also avoids explicitly allocating
the AVFrames (done implicitly when getting the buffer).
It also fixes a data race: The AVFrame's sample_aspect_ratio
is currently updated after ff_thread_finish_setup()
and this write is unsynchronized with the read in av_frame_ref().
Removing the implicit av_frame_ref() fixes this.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks. It also avoids explicitly allocating
the AVFrames (done implicitly when getting the buffer).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks. It also avoids explicitly allocating
the AVFrames (done implicitly when getting the buffer).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Before commit f025b8e110,
every frame-threaded decoder used ThreadFrames, even those
that did not have any inter-frame dependencies at all.
In order to distinguish those decoders that need the AVBuffer
for progress communication from those that do not (to avoid
the allocation for the latter), the former decoders were marked
with the FF_CODEC_CAP_ALLOCATE_PROGRESS internal codec cap.
Yet distinguishing these two can be done in a more natural way:
Don't use ThreadFrames when not needed and split ff_thread_get_buffer()
into a core function that calls the user's get_buffer2 callback
and a wrapper around it that also allocates the progress AVBuffer.
This has been done in 02220b88fc
and since that commit the ALLOCATE_PROGRESS cap was nearly redundant.
The only exceptions were WebP and VP8. WebP can contain VP8
and uses the VP8 decoder directly (i.e. they share the same
AVCodecContext). Both decoders are frame-threaded and VP8
has inter-frame dependencies (in general, not in valid WebP)
and therefore had the ALLOCATE_PROGRESS cap. In order to avoid
allocating progress in case of a frame-threaded WebP decoder,
the cap and the check for it have been kept in place.
Yet now that the VP8 decoder has been switched to use ProgressFrames,
there is no reason any more for this check and the cap.
This commit therefore removes both.
Also change the value of FF_CODEC_CAP_USES_PROGRESSFRAMES
to leave no gaps.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Also use the correct type limit SIZE_MAX; INT_MAX is a leftover
from the time when this code used av_buffer_allocz(), which
took an int back then.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The current code resets it all the time unless we are decoding
a DSD frame with identical parameters to the last frame.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It is more natural given that WavPack doesn't need the data of
the previous frame at all; it just needs the DSD context.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This is useful when the lifetime of the objects to be shared
is the whole decoding process, as it avoids having
to sync them every time in update_thread_context.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This part of the code is not slice-threaded and these stores
are semantically an initialization, so use atomic_init()
instead of the potentially expensive atomic_store()
(which uses sequentially consistent memory ordering).
Also remove the now-redundant initialization directly after
allocating this array.
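As a minimal sketch (assumed names, not the actual FFmpeg code):
in single-threaded setup code no other thread can see the array yet,
so atomic_init(), which performs no synchronization, is sufficient:

    #include <stdatomic.h>
    #include <stdlib.h>

    static atomic_int *alloc_progress_array(size_t n)
    {
        atomic_int *arr = malloc(n * sizeof(*arr));
        if (!arr)
            return NULL;
        for (size_t i = 0; i < n; i++)
            atomic_init(&arr[i], -1); /* initialization, not a synchronizing store */
        return arr;
    }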
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
ff_thread_progress_replace() can handle a blank ProgressFrame
as src (in which case it simply unreferences dst), but not
a NULL one. So add a blank frame to be used as the source
in this case, allowing us to use the replace functions
to simplify vp9_frame_replace().
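The resulting pattern is roughly the following (a sketch; the exact
signature of ff_thread_progress_replace() and the surrounding context
are assumed):

    /* Zero-initialized, so its AVFrame pointer is NULL ("blank"). */
    static const ProgressFrame blank_frame;

    static void vp9_frame_replace(ProgressFrame *dst, const ProgressFrame *src)
    {
        /* With a blank (but non-NULL) src, the replace helper
         * simply unreferences dst. */
        ff_thread_progress_replace(dst, src ? src : &blank_frame);
    }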
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
When outputting a show-existing frame, the VP9 decoder simply
created a reference to said frame and returned it immediately to
the caller, without waiting for its decoding to finish.
In case of frame-threading it is therefore possible that the frame
is still being decoded while it is waiting to be output.
This is normally benign.
But there is one case where it is not: If the user wants
video encoding parameters to be exported, said side data
will only be attached to the src AVFrame at the end of
decoding the frame that is actually being shown. Without
synchronisation, adding said side data in the decoder thread
and the reads in av_frame_ref() in the output thread
constitute a data race. This happens e.g. when using the
venc_data_dump tool with vp90-2-10-show-existing-frame.webm
from the FATE-suite.
Fix this by actually waiting for the frame to have finished
decoding before outputting it.
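Conceptually, the fix is along these lines (a sketch; names and the
surrounding context are assumed):

    /* Wait until the producer thread has reported that the
     * show-existing frame is fully decoded ... */
    ff_thread_progress_await(&frame_to_show->progress, INT_MAX);
    /* ... and only then hand a reference to it to the caller. */
    ret = av_frame_ref(out_frame, frame_to_show->f);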
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This already fixes a race in the vp9-encparams test. In this test,
side data is added to the current frame after it has been decoded
(and therefore after ff_thread_finish_setup() has been called).
Yet the update_thread_context callback called ff_thread_ref_frame()
and therefore av_frame_ref() with this frame as source frame and
the ensuing read was unsynchronised with adding the side data,
i.e. there was a data race.
By switching to the ProgressFrame API the implicit av_frame_ref()
is removed and the race is fixed, except when this frame is later
reused by a show-existing-frame, which uses an explicit av_frame_ref().
The vp9-encparams test does not cover this, so this commit
already fixes all the races in this test.
This decoder kept multiple references to the same ThreadFrames
in the same context and therefore performed lots of implicit
av_frame_ref() calls even when decoding single-threaded.
This incurred lots of small
allocations: When decoding an ordinary 10s video in single-threaded
mode the number of allocations reported by Valgrind went down
from 57,814 to 20,908; for 10 threads it went down from 84,223 to
21,901.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks. It also avoids explicitly allocating
the AVFrames (done implicitly when getting the buffer)
and it also allows reusing the flushing code for freeing
the ProgressFrames.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Avoids implicit av_frame_ref() and therefore allocations
and error checks.
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Frame-threaded decoders with inter-frame dependencies
use the ThreadFrame API for syncing. It works as follows:
During init each thread allocates an AVFrame for every
ThreadFrame.
Thread A reads the header of its packet and allocates
a buffer for an AVFrame with ff_thread_get_ext_buffer()
(which also allocates a small structure that is shared
with other references to this frame) and sets its fields,
including side data. Then said thread calls ff_thread_finish_setup().
From that moment onward it is no longer allowed to change any
of the AVFrame's fields, but it may change
fields which are an indirection away, like the contents of
AVFrame.data or already existing side data.
After thread A has called ff_thread_finish_setup(),
another thread (the user one) calls the codec's update_thread_context
callback which in turn calls ff_thread_ref_frame() which
calls av_frame_ref() which reads every field of A's
AVFrame; hence the above restriction on modifications
of the AVFrame (as any modification of the AVFrame by A after
ff_thread_finish_setup() would be a data race). Of course,
this av_frame_ref() also incurs allocations and therefore
needs to be checked. ff_thread_ref_frame() also references
the small structure used for communicating progress.
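In code, the workflow just described looks roughly like this
(simplified; cur_frame is a placeholder for whatever ThreadFrame
the codec context holds):

    /* Decoding thread A: */
    ret = ff_thread_get_ext_buffer(avctx, &s->cur_frame, AV_GET_BUFFER_FLAG_REF);
    if (ret < 0)
        return ret;
    /* ... set AVFrame fields, attach side data ... */
    ff_thread_finish_setup(avctx);
    /* From here on, A must not modify the AVFrame's fields. */

    /* User thread, in the codec's update_thread_context callback: */
    ret = ff_thread_ref_frame(&dst->cur_frame, &src->cur_frame);
    if (ret < 0) /* the av_frame_ref() inside may fail */
        return ret;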
This av_frame_ref() makes it awkward to propagate values that
only become known during decoding to later threads (in case of
frame reordering or other mechanisms of delayed output (like
show-existing-frames) it's not the decoding thread, but a later
thread that returns the AVFrame). E.g. for VP9 when exporting video
encoding parameters as side data the number of blocks only
becomes known during decoding, so one can't allocate the side data
before ff_thread_finish_setup(). It is currently being done afterwards
and this leads to a data race in the vp9-encparams test when using
frame-threading. Returning decode_error_flags is also complicated
by this.
To perform this exchange a buffer shared between the references
is needed (notice that simply giving the later threads a pointer
to the original AVFrame does not work, because said AVFrame will
be reused later on when thread A decodes the next packet given to it).
One could extend the buffer already used for progress for this
or use a new one (requiring yet another allocation), yet both
of these approaches have the drawback of being unnatural, ugly
and requiring quite a lot of ad-hoc code. E.g. in case of the VP9
side data mentioned above one could not simply use the helper
that allocates and adds the side data to an AVFrame in one go.
The ProgressFrame API meanwhile offers a different solution to all
of this. It is based on the idea that the most natural
shared object for sharing information about an AVFrame between
decoding threads is the AVFrame itself. To actually implement this,
the AVFrame needs to be reference counted. This is achieved by
putting an (owning) pointer into a shared (and opaque) structure
that is managed by the RefStruct API and which also contains
the stuff necessary for progress reporting.
The users get a pointer to this AVFrame with the understanding
that the owner may set all the fields until it has indicated
that it has finished decoding this AVFrame; then the users are
allowed to read everything. Every decoder may of course employ
a different contract than the one outlined above.
Given that there is no underlying av_frame_ref(), creating
references to a ProgressFrame can't fail. Only
ff_thread_progress_get_buffer() can fail, but given that
it will replace calls to ff_thread_get_ext_buffer(), it is
used at places where errors are already expected and properly
taken care of.
The ProgressFrames are empty (i.e. the AVFrame pointer is NULL
and the AVFrames are not allocated during init at all)
while not being in use; ff_thread_progress_get_buffer() both
sets up the actual ProgressFrame and already calls
ff_thread_get_buffer(). So instead of checking for
ThreadFrame.f->data[0] or ThreadFrame.f->buf[0] being NULL
for "this reference frame is non-existing" one should check for
ProgressFrame.f.
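For instance, an existence check changes roughly as follows
(field names assumed for illustration):

    /* ThreadFrame-based: the AVFrame is always allocated, so one
     * checks whether it has a buffer. */
    if (!ref->tf.f->buf[0])
        return AVERROR_INVALIDDATA;

    /* ProgressFrame-based: the AVFrame pointer itself may be NULL. */
    if (!ref->pf.f)
        return AVERROR_INVALIDDATA;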
This also implies that one can only set AVFrame properties
after having allocated the buffer. This restriction is not deep:
if it becomes onerous for any codec, ff_thread_progress_get_buffer()
can be broken up. The user would then have to get a buffer
himself.
In order to avoid unnecessary allocations, the shared structure
is pooled, so that both the structure as well as the AVFrame
itself are reused. This means that there won't be lots of
unnecessary allocations in case of non-frame-threaded decoding.
It might even turn out to have fewer allocations than the current
code (the current code allocates AVFrames for every DPB slot, but
these are often excessively large and not completely used;
the new code allocates them on demand). Pooling relies on the
reset function of the RefStruct pool API; it would be impossible
to implement with the AVBufferPool API.
Finally, ProgressFrames have no notion of owner; they are built
on top of the ThreadProgress API which also lacks such a concept.
Instead every ThreadProgress and every ProgressFrame contains
its own mutex and condition variable, making it completely independent
of pthread_frame.c. Just like with the ThreadFrame API, it is simply
presumed that only the actual owner/producer of a frame reports
progress on said frame.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The API is similar to the ThreadFrame API, with the exception
that it no longer has an included AVFrame and that it has its
own mutexes and condition variables, which makes it more independent
of pthread_frame.c. One can wait on anything via a ThreadProgress.
One just has to ensure that the lifetime of the object containing
the ThreadProgress is long enough. This will typically be solved
by putting a ThreadProgress in a refcounted structure that is
shared between threads.
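Typical usage might look like this (a sketch; the exact function
signatures are assumed):

    /* Refcounted structure whose lifetime spans all threads using it. */
    typedef struct SharedPicture {
        ThreadProgress progress;
        /* ... payload shared between the threads ... */
    } SharedPicture;

    /* Producer thread: publish how far decoding has gotten. */
    ff_thread_progress_report(&pic->progress, decoded_row);

    /* Consumer thread: block until at least `row` has been reported. */
    ff_thread_progress_await(&pic->progress, row);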
Reviewed-by: Anton Khirnov <anton@khirnov.net>
Reviewed-by: epirat07@gmail.com
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The native VVC decoder does not yet support quality/spatial/multiview
scalability. Bitstreams requiring this feature could cause crashes.
The patch fixes this by skipping NAL units which are not in the base layer,
warning the user while doing so.
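The check is conceptually along these lines (a sketch; nuh_layer_id
is the NAL header field, the surrounding parsing loop is assumed):

    /* Skip anything outside the base layer (nuh_layer_id != 0). */
    if (nal->nuh_layer_id > 0) {
        av_log(avctx, AV_LOG_WARNING,
               "Scalability is not yet supported; skipping NAL unit "
               "with nuh_layer_id %d.\n", nal->nuh_layer_id);
        continue;
    }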
Signed-off-by: Frank Plowman <post@frankplowman.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Some files with no image items contain such boxes, and were working
prior to the recent HEIF parsing overhaul.
Ignore these boxes instead, to recover the old behavior.
Fixes a regression since d9fed9df2a.
Tested-by: Wu Jianhua <toqsxw@outlook.com>
Signed-off-by: James Almer <jamrial@gmail.com>
Only the last 256 samples of each frame are used;
the encoder currently uses a buffer for 1536 + 256 samples
whose first 256 samples are the last 256 samples
from the previous frame and whose next 1536 are the samples
of the current frame.
Yet since 238b2d4155 all the
DSP functions only need 256 contiguous samples and this can
be achieved by only retaining the last 256 samples of each
frame. Doing so saves 6KiB per channel.
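At 4 bytes per sample this matches the stated saving: the 1536
samples no longer buffered take 1536 * 4 = 6144 bytes, i.e. 6KiB per
channel. A sketch of the retained-tail update (identifiers assumed):

    /* After encoding a frame, keep only its last 256 samples as
     * context for the next frame's DSP calls. */
    memcpy(s->last_samples[ch],
           cur_samples[ch] + 1536 - 256,
           256 * sizeof(*s->last_samples[ch]));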
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
By default, don't use the color properties from the input frame as
the output frame's properties when performing HDR to SDR conversion.
Signed-off-by: Haihao Xiang <haihao.xiang@intel.com>