x86_64/memset: avoid performing final store twice

The code does a potentially misaligned 8-byte store to fill the tail
of the buffer, then fills the initial part of the buffer, which is a
multiple of 8 bytes in size. Therefore, if the size is divisible by 8,
we were storing the last word twice.
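
A minimal C model of the store pattern described above may help; it is
only a sketch, not musl's code (the real routine is assembly), and the
function and variable names are invented for illustration:

    #include <stdint.h>
    #include <string.h>

    /* Old behaviour: one tail store, then a head fill in 8-byte words.
     * Assumes n >= 16, as guaranteed by the "cmp $16,%rdx; jb 1f" check. */
    static void fill_old(unsigned char *dst, uint64_t word, size_t n)
    {
        memcpy(dst + n - 8, &word, 8);      /* tail: mov %rax,-8(%rdi,%rdx) */
        size_t count = n >> 3;              /* old count: shr $3 on the full size */
        for (size_t i = 0; i < count; i++)  /* head fill, 8 bytes per step */
            memcpy(dst + 8 * i, &word, 8);  /* when n % 8 == 0, the last step
                                               rewrites the word the tail
                                               store already wrote */
    }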

This patch decrements the byte count before dividing it by 8, making
one less store in the "size is divisible by 8" case and changing
nothing in all other cases, all at the cost of replacing one MOV insn
with a LEA insn.
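
In the same hypothetical C model, the patched count computation
(mirroring the "lea -1(%rdx),%rcx; shr $3,%rcx" sequence) would be:

    size_t count = (n - 1) >> 3;  /* n % 8 == 0: gives n/8 - 1, so the head
                                     fill stops short of the tail word;
                                     any other n: (n-1)>>3 == n>>3, unchanged */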

Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com>
Author:    Denys Vlasenko, 2015-02-10 18:30:57 +01:00
Committer: Rich Felker
parent bf2071eda3
commit 74e334dcd1


@@ -9,7 +9,7 @@ memset:
 	cmp $16,%rdx
 	jb 1f
 
-	mov %rdx,%rcx
+	lea -1(%rdx),%rcx
 	mov %rdi,%r8
 	shr $3,%rcx
 	mov %rax,-8(%rdi,%rdx)