Just because we try to put multiple units of block_align bytes
(the atomic units for APTX and APTX HD) into one packet
does not mean that packets containing fewer units than desired
are corrupt; only packets whose size is not a multiple of
block_align are.
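A minimal sketch of the corrected validation, assuming block_align
holds the unit size (4 bytes for APTX, 6 for APTX HD) and pkt is the
demuxed packet:

    /* Only a trailing partial unit makes a packet corrupt. */
    if (pkt->size % block_align)
        return AVERROR_INVALIDDATA;
    /* A packet with fewer whole units than requested is still valid. */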
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This field was misunderstood: It gives the number of samples
in a packet, not the number of bytes. Its usage was wrong for APTX HD.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Currently the APTX (HD) codecs set frame_size if unset and check
whether it is divisible by block_size (corresponding to block_align
as used by other codecs). But this is based upon a misunderstanding
of the API: frame_size is not in bytes, but in samples.
Said value is also not intended to be set by the user at all,
but by encoders and (possibly) decoders if the number of samples
in a frame is constant. The latter condition is not fulfilled here,
so only set it for encoders. Given that the encoder can handle any
number of samples divisible by four and given that setting a custom
frame size worked before, the encoders accept any multiple of four;
if unset, the value defaults to the value that it already had for
APTX: 1024 samples (per channel).
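A minimal sketch of the resulting encoder-side handling (field names
per AVCodecContext; the surrounding init function is assumed):

    if (!avctx->frame_size)
        avctx->frame_size = 1024;   /* default: 1024 samples per channel */
    else if (avctx->frame_size & 3)
        return AVERROR(EINVAL);     /* must be a multiple of four samples */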
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
APTX decodes four bytes of input to four stereo samples; APTX HD
does the same with six bytes of input. So it can be easily supported
in av_get_audio_frame_duration().
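Sketched against av_get_audio_frame_duration()'s per-codec switch
(an illustration of the mapping, not necessarily the literal patch):

    case AV_CODEC_ID_APTX:
        return 4 * (frame_bytes / 4); /* 4 bytes -> 4 samples per channel */
    case AV_CODEC_ID_APTX_HD:
        return 4 * (frame_bytes / 6); /* 6 bytes -> 4 samples per channel */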
This fixes invalid durations and (derived) timestamps of demuxed
APTX HD packets and therefore fixes the timestamp in the aptx-hd
FATE test.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
We have de- and encoders for APTX and APTX HD, yet no FATE tests.
This commit therefore adds a transcoding test to utilize them.
Furthermore, while creating these tests it turned out that
the duration is set incorrectly for APTX HD. This will be fixed
in a future commit.
(Thanks to Andriy Gelman for finding an issue in an earlier version
that used a 192kHz input sample, which does not work reliably across
platforms.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
This block was scheduled for removal, which means that no validation would have
taken place after the old API was removed.
It was also going to mistakenly remove an unrelated
bits_per_coded_sample check.
Signed-off-by: James Almer <jamrial@gmail.com>
The opaque parameter for the callback is set in videotoolbox_start(),
called when the hwaccel is initialized. When frame threading is used,
avctx will be the context corresponding to the frame thread currently
doing the decoding. Using this same codec context in all subsequent
invocations of the decoder callback (even those triggered by a different
frame thread) is unsafe, and broken after
cc867f2c09, since each frame thread now
cleans up its hwaccel state after decoding each frame.
Fix this by passing hwaccel_priv_data as the opaque parameter, which
exists in a single instance forwarded between all frame threads.
The only other use of AVCodecContext in the decoder output callback is
as a logging context. For this purpose, store a logging context in
hwaccel_priv_data.
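A sketch of the fix, assuming the usual
VTDecompressionOutputCallbackRecord setup in videotoolbox_start():

    VTDecompressionOutputCallbackRecord decoder_cb = {
        .decompressionOutputCallback = videotoolbox_decoder_callback,
        /* per-stream hwaccel state instead of the per-thread avctx: */
        .decompressionOutputRefCon   = avctx->internal->hwaccel_priv_data,
    };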
This is no longer used since 4608996772.
It also has no implementations other than the plain C one.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Since introducing the various packed formats used by VAAPI (and p012),
we've noticed that there's actually a gap in how
av_find_best_pix_fmt_of_2 works. It doesn't actually assign any value
to having the same bit depth as the source format, when comparing
against formats with a higher bit depth. This usually doesn't matter,
because av_get_padded_bits_per_pixel() will account for it.
However, as many of these formats use padding internally, we find that
av_get_padded_bits_per_pixel() actually returns the same value for the
10 bit, 12 bit, 16 bit flavours, etc. In these tied situations, we end
up just picking the first of the two provided formats, even if the
second one should be preferred because it matches the actual bit depth.
This bug already existed if you tried to compare yuv420p10 against p016
and p010, for example, but it simply hadn't come up before so we never
noticed.
But now we actually have a situation in the VAAPI VP9 decoder where
it offers both p010 and p012 (because Profile 3 content could be
either bit depth) and ends up picking p012 for 10 bit content due to
the order in which the formats are tested.
In addition, in the process of testing the fix, I realised we have
the same gap when it comes to chroma subsampling: we do not favour
a format with exactly the same subsampling over one with less
subsampling when all else is equal.
To fix this, I'm introducing a small score penalty if the bit depth or
subsampling doesn't exactly match the source format. This will break
the tie in favour of the format with the exact match, but not offset
any of the other scoring penalties we already have.
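A hypothetical sketch of the tie-breaker (score/src_desc/dst_desc are
illustrative names, not the literal patch):

    /* Nudge the score against candidates that do not match the source
     * exactly, without outweighing the existing penalties: */
    if (dst_desc->comp[0].depth != src_desc->comp[0].depth)
        score -= 1;
    if (dst_desc->log2_chroma_w != src_desc->log2_chroma_w ||
        dst_desc->log2_chroma_h != src_desc->log2_chroma_h)
        score -= 1;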
I have added a set of tests around these formats which will fail
without this fix.
Update the description of the ssim and ms_ssim libvmaf options to
specify feature=float_ssim and feature=float_ms_ssim, which are used
to request ssim and ms_ssim values in the latest versions of libvmaf.
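For illustration, requesting both features might look like this
(file names are placeholders):

    ffmpeg -i distorted.mp4 -i reference.mp4 \
           -lavfi libvmaf=feature='name=float_ssim|name=float_ms_ssim' \
           -f null -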
Signed-off-by: Yondon Fu <yondon.fu@gmail.com>
Fixes: signed integer overflow: -2147448926 + -198321 cannot be represented in type 'int'
Fixes: 48798/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APE_fuzzer-5739619273015296
Fixes: 48798/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_APE_fuzzer-6744428485672960
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: out of array access
Fixes: 50936/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_HDR_fuzzer-5423041009549312
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: -8427924 * 256 cannot be represented in type 'int'
Fixes: 48798/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_TTA_fuzzer-5409428670644224
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 127 + 2147483536 cannot be represented in type 'int'
Fixes: 48798/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MOBICLIP_fuzzer-6014034970804224
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes: signed integer overflow: 17121181824 * 538976288 cannot be represented in type 'long long'
Fixes: 48798/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_EXR_fuzzer-5915330316206080
Found-by: continuous fuzzing process https://github.com/google/oss-fuzz/tree/master/projects/ffmpeg
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
Fixes regression with tickets/4364/L1004220.DNG
Reviewed-by: Paul B Mahol <onemda@gmail.com>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch fixes the bugs that occurred when running
fate-checkasm-hevc_pel on the MIPS platform.
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
This patch fixes a bug where fate-checkasm-motion fails when
h is not a multiple of 8.
Reviewed-by: Shiyou Yin <yinshiyou-hf@loongson.cn>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
The code to initialize it takes more space (in .text) than
the table to be initialized (namely 86B vs 64B for GCC 11.2
with -O3 in an av_cold function), so hardcode the table.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Most of the VLCs used here have a max_depth of two;
some have a max_depth of one. Therefore one can just use two
and avoid the runtime check for whether one should
perform another round of LUT lookup in case the first read
did not read a complete codeword.
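Concretely, the reads become (table and context names assumed):

    /* max_depth = 2: at most one extra LUT round, never a third */
    sym = get_vlc2(&s->gb, vlc->table, vlc->bits, 2);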
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, initializing the dca VLC tables used ff_init_vlc_sparse()
with length tables of type uint8_t and code tables of type uint16_t
(except for the LBR tables, which use lengths and symbols of type
uint8_t; these tables are interleaved). In case of the quant index
codebooks these arrays were accessed via tables of pointers to the
individual tables.
This commit changes this: First, we switch to ff_init_vlc_from_lengths()
to replace the uint16_t code tables by uint8_t symbol tables
(this necessitates ordering the tables from left-to-right in the tree
first). These symbol tables are interleaved with the length tables.
Furthermore, these tables are combined in order to remove the table of
pointers to individual tables, thereby avoiding relocations (for x64
elf systems this amounts to 96*24B = 2304B saved in .rela.dyn) and
saving 1280B from .data.rel.ro (for 64bit systems). Meanwhile the
savings in .rodata amount to 2709 + 2 * 334 = 3377B. Due to padding
the actual savings are higher; taking this into account gives savings
of 4548B: part of these were from replacing uint16_t codes with
uint8_t symbols, the rest was due to the fact that combining the
tables eliminated padding (the ELF x64 ABI requires objects >= 16B to
be padded to 16B and lots of the tables have 2^n + 1 elements).
(GCC by default uses an even higher
alignment (controlled by -malign-data); for it the savings are 5748B.)
These changes also necessitated modifying the init code for
the encoder tables.
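An illustrative decoder-side call for one of the interleaved
length/symbol tables (counts and flags assumed; the signature is that
of ff_init_vlc_from_lengths()):

    ff_init_vlc_from_lengths(&vlc, nb_bits, nb_codes,
                             &tab[0].len, 2,    /* lengths, wrap       */
                             &tab[0].sym, 2, 1, /* symbols, wrap, size */
                             0, 0, NULL);       /* offset, flags, logctx */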
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Up until now, the encoder used the same tables that the decoder
uses to create its VLCs. These have the downside of requiring
the encoder to offset the tables at runtime, as well as having to
read the length and the code of the symbol to encode from separate
tables. The former are uint8_t, the latter uint16_t,
so using a joint table would require padding, but this doesn't
matter when these tables are generated at runtime, because they
live in the .bss segment.
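A sketch of the joint run-time table this enables (struct and names
assumed):

    typedef struct { uint16_t code; uint8_t len; } VLCPair; /* padded */
    static VLCPair enc_tab[256];      /* filled at init, lives in .bss */

    /* one lookup feeds put_bits() with both length and code: */
    put_bits(pb, enc_tab[sym].len, enc_tab[sym].code);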
Also move these init functions as well as the functions that
actually use them to dcaenc.c, because they are encoder-specific.
This also allows removing an inclusion of PutBitContext from
dcahuff.h (and indirectly from all dca-decoder files).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
It increases the size of one VLC from two to three bits, thereby
requiring four more VLCEntries (16 bytes of .bss), but it allows
inlining the number of bits used when reading them.
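I.e. every read can now pass a compile-time constant (names assumed):

    sym = get_vlc2(&s->gb, vlc_table, 3 /* inlined nb_bits */, 1);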
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
The ff_dca_vlc_transition_mode VLCs don't use an offset at all,
so just use ordinary VLCs for them.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>