Commit Graph

14 Commits

Author SHA1 Message Date
Lynne 3e098cca6e aarch64/yuv2rgb_neon: fix return value
We return 0 for this particular architecture, but should instead return the
number of lines.
Fixes users who check that the return value matches what they expect (see the
sketch below).
2020-07-09 10:33:14 +01:00
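
A minimal C sketch of the convention the fix restores, assuming a simplified
slice-converter signature (the names and argument list here are illustrative,
not the actual FFmpeg prototype):

    #include <stdint.h>

    /* Illustrative only: a swscale slice converter reports how many output
     * lines it produced instead of a bare 0. */
    static int yuv420_to_rgb_slice(void *ctx, const uint8_t *const src[],
                                   const int src_stride[], int slice_y,
                                   int slice_h, uint8_t *const dst[],
                                   const int dst_stride[])
    {
        /* ... convert slice_h lines with the NEON kernel ... */
        return slice_h;  /* callers compare this with the slice height */
    }
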
Martin Storsjö e0604d508e swscale: aarch64: Add a NEON implementation of interleaveBytes
This allows speeding up format conversions from yuv420 to nv12; a C sketch of
the byte interleave follows below.

                             Cortex A53      A72      A73
interleave_bytes_c:             86077.5  51433.0  66972.0
interleave_bytes_neon:          19701.7  23019.2  15859.2
interleave_bytes_aligned_c:     86603.0  52017.2  67484.2
interleave_bytes_aligned_neon:   9061.0   7623.0   6309.0

Signed-off-by: Martin Storsjö <martin@martin.st>
2020-05-15 23:38:17 +03:00
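
The NEON routine mirrors the plain C byte interleave; a rough sketch (not the
exact FFmpeg source) of what it computes:

    #include <stdint.h>

    /* Interleave two byte planes, e.g. the U and V planes of yuv420, into the
     * single interleaved UV plane of nv12. */
    static void interleave_bytes(const uint8_t *src1, const uint8_t *src2,
                                 uint8_t *dst, int width, int height,
                                 int src1_stride, int src2_stride,
                                 int dst_stride)
    {
        for (int h = 0; h < height; h++) {
            for (int w = 0; w < width; w++) {
                dst[2 * w]     = src1[w];  /* U */
                dst[2 * w + 1] = src2[w];  /* V */
            }
            src1 += src1_stride;
            src2 += src2_stride;
            dst  += dst_stride;
        }
    }
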
Josh de Kock 718c8f9aa5 swscale: fix NEON hscale init
The NEON hscale function only supports filter sizes that are a multiple of 8
and should only be selected when such sizes are in use. At the moment
filterAlign is set to 8, but when extra NEON assembly for specific sizes is
added in the future, those sizes will need checks here too.

The immediate use case for this change is making the hscale checkasm test
simpler and free of NEON-specific edge cases (x86 already has these guards).
A sketch of the resulting init guard follows below.

Signed-off-by: Josh de Kock <josh@itanimul.li>
2020-05-15 10:29:30 +01:00
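
Roughly the shape of the guard this adds in the aarch64 init code (a sketch
assuming the usual SwsContext field names, not a quote of the patch; the real
init function also wires up other NEON routines):

    /* Sketch: only install the NEON hscale when the horizontal filter sizes
     * are multiples of 8, which is all the assembly handles.
     * Assumes libswscale/swscale_internal.h and libavutil/aarch64/cpu.h. */
    av_cold void ff_sws_init_swscale_aarch64(SwsContext *c)
    {
        int cpu_flags = av_get_cpu_flags();

        if (have_neon(cpu_flags) &&
            c->srcBpc == 8 && c->dstBpc <= 14 &&
            c->hLumFilterSize % 8 == 0 && c->hChrFilterSize % 8 == 0)
            c->hyScale = c->hcScale = ff_hscale_8_to_15_neon;
    }
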
Martin Storsjö 9025d5c5ce swscale: aarch64: Don't clobber callee-saved registers v8-v15
Signed-off-by: Martin Storsjö <martin@martin.st>
2020-04-21 23:41:13 +03:00
Martin Storsjö 872790b1f9 swscale: aarch64: Avoid using the x18 register
x18 is a reserved platform register on Darwin and Windows.

x8/w8 seems to be unused in this function (as are x10 and x14), so there's
really no reason to use x18 here; just change the uses of x18/w18 into x8/w8
without any further rewrites.

Signed-off-by: Martin Storsjö <martin@martin.st>
2020-04-20 00:09:34 +03:00
Sebastian Pop c3a17ffff6 swscale/aarch64: use multiply accumulate and shift-right narrow
This patch rewrites the innermost loop of ff_yuv2planeX_8_neon to avoid zips and
horizontal adds by using fused multiply-adds. It also uses ld1r to load one
element and replicate it across all lanes of the vector, and improves the
clipping code by dropping the separate shift-right instructions and folding the
shift into the shift-right-narrow instructions. The scalar computation being
vectorized is sketched below.

I see an 8% improvement on an m6g instance with Neoverse-N1 CPUs:
$ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
before: t:0.014015 avg:0.014096 max:0.015018 min:0.013971
after:  t:0.012985 avg:0.013013 max:0.013996 min:0.012818

Tested with `make check` on aarch64-linux.

Signed-off-by: Sebastian Pop <spop@amazon.com>
Reviewed-by: Clément Bœsch <u@pkh.me>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2020-01-04 20:59:31 +01:00
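
For context, a rough C model of what ff_yuv2planeX_8 computes per output pixel
(written from memory of the swscale reference, not copied verbatim):

    /* filterSize-tap vertical filter over the 15-bit intermediates, plus
     * dithering, then shift down and clip to 8 bits.  The NEON version fuses
     * the final shift and clip into a saturating shift-right-narrow.
     * av_clip_uint8() is from libavutil/common.h. */
    static void yuv2planeX_8_ref(const int16_t *filter, int filterSize,
                                 const int16_t **src, uint8_t *dest, int dstW,
                                 const uint8_t *dither, int offset)
    {
        for (int i = 0; i < dstW; i++) {
            int val = dither[(i + offset) & 7] << 12;
            for (int j = 0; j < filterSize; j++)
                val += src[j][i] * filter[j];
            dest[i] = av_clip_uint8(val >> 19);
        }
    }
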
Sebastian Pop bd83191271 swscale/aarch64: use multiply accumulate and increase vector factor to 4
This patch implements ff_hscale_8_to_15_neon with NEON fused multiply-accumulate
and bumps the vectorization factor from 2 to 4; the scalar loop being vectorized
is sketched below.
The speedup is 25% on Graviton1 A1 instances based on Cortex-A72 CPUs:

$ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
before: t:0.040303 avg:0.040287 max:0.040371 min:0.039214
after:  t:0.032168 avg:0.032215 max:0.033081 min:0.032146

The speedup is 39% on Graviton2 m6g instances based on Neoverse-N1 CPUs:
$ ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -
before: t:0.019446 avg:0.019423 max:0.019493 min:0.019181
after:  t:0.014015 avg:0.014096 max:0.015018 min:0.013971

Tested with `make check` on aarch64-linux.

Signed-off-by: Sebastian Pop <spop@amazon.com>
Reviewed-by: Jean-Baptiste Kempf <jb@videolan.org>
Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
2019-12-17 23:41:47 +01:00
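
For context, a rough C model of the scalar loop being vectorized (again from
memory, not the literal swscale source):

    /* Each output sample is a filterSize-tap horizontal filter over 8-bit
     * input, kept as a 15-bit intermediate.  FFMIN() is from libavutil. */
    static void hscale_8_to_15_ref(int16_t *dst, int dstW, const uint8_t *src,
                                   const int16_t *filter,
                                   const int32_t *filterPos, int filterSize)
    {
        for (int i = 0; i < dstW; i++) {
            int srcPos = filterPos[i];
            int val = 0;
            for (int j = 0; j < filterSize; j++)
                val += src[srcPos + j] * filter[filterSize * i + j];
            dst[i] = FFMIN(val >> 7, (1 << 15) - 1);  /* clip to 15 bits */
        }
    }
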
Clément Bœsch c921f4f687 sws/aarch64: add ff_yuv2planeX_8_neon 2016-04-11 16:27:19 +02:00
Clément Bœsch cab9661dba sws/aarch64/yuv2rgb: honor iOS calling convention
Since y_offset and y_coeff are successive 32-bit integers passed on the stack,
iOS packs them into 8 bytes instead of 2x8 bytes; the two layouts are
illustrated below.

See https://developer.apple.com/library/ios/documentation/Xcode/Conceptual/iPhoneOSABIReference/Articles/ARM64FunctionCallingConventions.html

> iOS diverges from Procedure Call Standard for the ARM 64-bit
> Architecture in several ways
[...]
> In the generic procedure call standard, all function arguments passed
> on the stack consume slots in multiples of 8 bytes. In iOS, this
> requirement is dropped, and values consume only the space required.
[...]
> Padding is still inserted on the stack to satisfy arguments’ alignment
> requirements.
2016-04-08 17:58:43 +02:00
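
A hypothetical prototype to illustrate the difference (not the real signature
of the NEON entry points): with the first eight integer arguments in x0-x7,
the two trailing 32-bit values land on the stack, and the two ABIs lay them
out differently.

    #include <stdint.h>

    /* Hypothetical declaration, for illustration only. */
    void f(long a0, long a1, long a2, long a3,
           long a4, long a5, long a6, long a7,
           int32_t y_offset, int32_t y_coeff);
    /*
     * a0..a7 go in x0-x7; y_offset and y_coeff are passed on the stack.
     *   AAPCS64 (Linux):   y_offset at [sp], y_coeff at [sp + 8]  (8-byte slots)
     *   Apple arm64 (iOS): y_offset at [sp], y_coeff at [sp + 4]  (packed)
     * Code reading them must therefore use the stack offsets of the target ABI.
     */
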
Clément Bœsch 040598218f sws/aarch64: restore ff_hscale_8_to_15_neon()
Fix final scaling and required filter alignment. Pass FATE.
2016-04-05 12:00:36 +02:00
Clément Bœsch eadaef2a63 sws/aarch64: disable ff_hscale_8_to_15_neon temporarily
Looks broken.
2016-04-01 17:33:01 +02:00
Clément Bœsch 263eb76bdf sws/aarch64: add ff_hscale_8_to_15_neon
./ffmpeg -nostats -f lavfi -i testsrc2=4k:d=2 -vf bench=start,scale=1024x1024,bench=stop -f null -

    before: t:0.489726 avg:0.489883 max:0.491852 min:0.489482
    after:  t:0.256515 avg:0.256458 max:0.256999 min:0.253755
2016-03-31 10:12:55 +02:00
Clément Bœsch 277408b7f1 sws/aarch64/yuv2rgb: save a few mul and add
27ms to 26ms with UHD 2160 input.
2016-03-25 16:14:13 +01:00
Clément Bœsch f1148390d7 sws/aarch64: add {nv12,nv21,yuv420p,yuv422p}_to_{argb,rgba,abgr,bgra}_neon 2016-03-01 17:53:33 +01:00