mips32r6 and mips64r6 are actually new isas at both the asm source and
opcode levels (pre-r6 code cannot run on r6) and thus need to be
treated as a new subarch. the following changes are made, some of
which yield code generation improvements for non-r6 targets too:
- add subarch logic in configure script and reloc.h files for dynamic
linker name.
- suppress use of .set mips2 asm directives (used to allow mips2
atomic instructions on baseline mips1 builds; the kernel has to
emulate them on mips1) except when actually needed. they cause wrong
instruction encodings on r6, and pessimize inlining on at least some
compilers.
- only hard-code sync instruction encoding on mips1.
- use "ZC" constraint instead of "m" constraint for llsc memory
operands on r6, where the ll/sc instructions no longer accept full
16-bit offsets (see the sketch after this list).
- only hard-code rdhwr instruction encoding with .word on targets
(pre-r2) where it may need trap-and-emulate by the kernel.
otherwise, just use the instruction mnemonic, and allow an arbitrary
destination register to be used.
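for illustration, the constraint selection for the llsc memory operands
can be sketched roughly as follows (macro and function names here are
assumed for the example, not taken verbatim from the source):

    #if __mips_isa_rev < 6
    #define LLSC_M "m"
    #else
    #define LLSC_M "ZC"
    #endif

    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("ll %0, %1" : "=r"(v) : LLSC_M(*p));
        return v;
    }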
commits e24984efd5 and
16b55298dc inadvertently disabled the
a_spin implementations for i386, x86_64, and x32 by defining a macro
named a_pause instead of a_spin. this should not have caused any
functional regression, but it inhibited cpu relaxation while spinning
for locks.
bug reported by George Kulakowski.
it was introduced for offloading copying between regular files
in linux commit 29732938a6289a15e907da234d6692a2ead71855
(microblaze and sh do not yet have the syscall number.)
currently five targets use the same mman.h constants and the rest
share most constants too, so move them to sys/mman.h before the
bits/mman.h include where the differences can be corrected by
redefinition of the macros.
this fixes two minor bugs: POSIX_MADV_DONTNEED was wrong on most
targets (it should be the same as MADV_DONTNEED), and sh defined
the x86-only MAP_32BIT mmap flag.
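the resulting header layout is roughly the following sketch (values
shown are the common linux ones; a divergent arch corrects them in its
bits header):

    /* sys/mman.h: shared defaults first, then arch corrections */
    #define MAP_NORESERVE 0x4000
    #define MADV_DONTNEED 4
    #define POSIX_MADV_DONTNEED MADV_DONTNEED
    #include <bits/mman.h>

    /* bits/mman.h on an arch with a different value (e.g. mips) */
    #undef MAP_NORESERVE
    #define MAP_NORESERVE 0x0400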
these changes should not affect generated code, but they reflect that
the underlying objects operated on by a_cas_p are supposed to have
type volatile void *, not volatile long. in theory a compiler could
treat the effective type mismatch in the "m" memory operands as
undefined behavior.
apparently clang does not accept matching-register input and output
constraints that differ in size (32-bit vs 64-bit).
based on patch by Jaydeep Patil.
Some PowerPC CPUs (e.g. Freescale MPC85xx) have a completely different
instruction set for floating point operations (SPE).
Executing regular PowerPC floating point instructions results in
"Illegal instruction" errors.
Make it possible to run these devices in soft-float mode.
at present this is done only for consistency, since this file defines
its own a_cas_p rather than using the new generic one from atomic.h
added in commit 225f6a6b5b. these
definitions may however be useful if we ever need to add other
pointer-sized atomic operations.
the workaround was for a bug that botched .gpword references to local
labels, applying a nonsensical random offset of -0x4000 to them.
this reverses commit 5e396fb996 and
removes a similar hack that was added to syscall_cp.s in the later
commit 756c8af858. it turns out one
additional instance of the same idiom, the GETFUNCSYM macro in
arch/mips/reloc.h, was still affected by the assembler bug and does
not admit an easy workaround without making assumptions about how the
macro is used. the previous workarounds made static linking work but
left the early-stage dynamic linker broken and thus had limited
usefulness.
instead, affected users (using binutils versions older than 2.20) will
need to fix the bug on the binutils side; the trivial patch is commit
453f5985b13e35161984bf1bf657bbab11515aa4 in the binutils-gdb
repository.
"Q" input constraint was used for the written object, instead of "=Q"
output constraint. this should not cause problems because "memory"
is on the clobber list, but "=Q" better documents the intent and is
more consistent with the actual asm code.
this changes the generated code, because different registers are used,
but other than the register names nothing should change.
all bits headers that were identical for a number of 'clean' archs are
moved to the new arch/generic tree. in addition, a few headers that
differed only cosmetically from the new generic version are removed.
additional deduplication may be possible in mman.h and in several
headers (limits.h, posix.h, stdint.h) that mostly depend on whether
the arch is 32- or 64-bit, but they are left alone for now because
greater gains are likely possible with more invasive changes to header
logic, which is beyond the scope of this commit.
vdso support is available on mips starting with kernel 4.4; see kernel
commit a7f4df4e21 "MIPS: VDSO: Add implementations of gettimeofday()
and clock_gettime()" for details.
In Linux kernel 4.4.0 the mips code returns -ENOSYS in case it cannot
handle the vdso call and assumes the libc will fall back to the original
syscall in this case. Handle this case in musl. Currently Linux kernel
4.4.0 handles the following types: CLOCK_REALTIME_COARSE,
CLOCK_MONOTONIC_COARSE, CLOCK_REALTIME and CLOCK_MONOTONIC.
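The fallback amounts to something like the following sketch, using a
hypothetical function pointer name for the vdso entry (the real musl
plumbing differs in detail):

    #include <time.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* hypothetical pointer, filled in from the vdso at startup */
    static int (*vdso_cgt)(clockid_t, struct timespec *);

    int clock_gettime(clockid_t clk, struct timespec *ts)
    {
        if (vdso_cgt) {
            int r = vdso_cgt(clk, ts);
            if (r != -ENOSYS) {
                if (r) { errno = -r; return -1; }
                return 0;
            }
            /* kernel vdso cannot handle this clock; use the syscall */
        }
        return syscall(SYS_clock_gettime, clk, ts);
    }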
si_errno and si_code are swapped in mips siginfo_t compared to other
archs and some si_code values are different. This fix is required
for POSIX timers to work.
based on patch by Dmitry Ivanov.
they lock faulted pages into memory (useful when a small part of a
large mapped file needs efficient access), new in linux v4.4, commit
b0f205c2a3082dd9081f9a94e50658c5fa906ff1
MLOCK_* is not in the POSIX reserved namespace for sys/mman.h
this is mlock with a flags argument, new in linux commit
a8ca5d0ecbdde5cc3d7accacbd69968b0c98764e
as usual, microblaze and sh don't have an allocated syscall number yet.
new in linux v4.3, added for aarch64, arm, i386, mips, or1k, powerpc,
x32 and x86_64.
membarrier is a system-wide memory barrier that moves most of the
synchronization cost to one side; new in kernel commit
5b25b13ab08f616efd566347d809b4ece54570d1
userfaultfd is useful for qemu and is new in kernel commit
8d2afd96c20316d112e04d935d9e09150e988397
switch_endian is powerpc-only, for switching endianness; new in commit
529d235a0e190ded1d21ccc80a73e625ebcad09b
new in linux v4.3 commit 9dea5dc921b5f4045a18c63eb92e84dc274d17eb
direct calls instead of socketcall allow better seccomp filtering.
musl continues to use socketcalls internally on i386. (older kernels
would need a fallback mechanism if the direct calls were used.)
only use SYS_socketcall if SYSCALL_USE_SOCKETCALL is defined
internally, otherwise use direct syscalls.
this commit does not change the current behaviour, it is
preparation for adding direct syscall numbers for i386.
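the dispatch is roughly the following (a sketch close to, but not
necessarily identical to, the actual macro):

    #ifdef SYSCALL_USE_SOCKETCALL
    #define __socketcall(nm,a,b,c,d,e,f) \
        __syscall(SYS_socketcall, __SC_##nm, \
        ((long [6]){ (long)(a), (long)(b), (long)(c), \
                     (long)(d), (long)(e), (long)(f) }))
    #else
    #define __socketcall(nm,a,b,c,d,e,f) \
        __syscall(SYS_##nm, a, b, c, d, e, f)
    #endif

so e.g. __socketcall(connect, fd, addr, len, 0, 0, 0) becomes either a
SYS_socketcall with an argument array or a direct SYS_connect syscall.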
contrary to commit 89e149d275, big
endian arm does need the instruction bytes in big endian order. rather
than trying to use a special encoding that works as arm or thumb,
simply encode the simplest/canonical undefined instructions dependent
on whether __thumb__ is defined.
the .byte directive encodes a guaranteed-undefined instruction, the
same one Linux fills the kuser helper page with when it's disabled.
the udf mnemonic and the .insn directive are not supported by old
binutils versions, and larger-than-byte integer directives would
produce the wrong output on big-endian.
a_ll/a_sc inline asm used 64-bit register operands (%0) instead of
32-bit ones (%w0); this at least broke a_and_64 (which always cleared
the top 32 bits, leaking memory in malloc).
aarch64 provides ll/sc variants with acquire/release memory order,
freeing us from the need to have full barriers both before and after
the ll/sc operation. previously they were not used because the a_cas
can fail without performing a_sc, in which case half of the barrier
would be omitted. instead, define a custom version of a_cas for
aarch64 which uses a_barrier explicitly when aborting the cas
operation. aside from cas, other operations built on top of ll/sc are
not affected since they never abort but rather loop until they
succeed.
a split ll/sc version of the pointer-sized a_cas_p is also introduced
using the same technique.
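the int-sized case looks roughly like this (a sketch of the technique;
not copied verbatim from the source):

    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("ldaxr %w0, %1" : "=r"(v) : "Q"(*p));
        return v;
    }

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        __asm__ __volatile__ ("stlxr %w0, %w2, %1"
            : "=&r"(r), "=Q"(*p) : "r"(v) : "memory");
        return !r; /* stlxr writes 0 on success */
    }

    static inline void a_barrier(void)
    {
        __asm__ __volatile__ ("dmb ish" : : : "memory");
    }

    static inline int a_cas(volatile int *p, int t, int s)
    {
        int old;
        do {
            old = a_ll(p);
            if (old != t) {
                /* the sc (release half) is skipped, so issue
                   an explicit barrier before aborting */
                a_barrier();
                break;
            }
        } while (!a_sc(p, s));
        return old;
    }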
patch by Szabolcs Nagy.
commit f3ddd17380, the dynamic linker
bootstrap overhaul, silently disabled the definition of __fpscr_values
in this file since libc.so's copy of __fpscr_values now comes from
crt_arch.h, the same place the public definition in the main program's
crt1.o ultimately comes from. remove this file which is no longer in
use.
previously powerpc had a_cas defined in terms of its native ll/sc
style operations, but all other atomics were defined in terms of
a_cas. instead define a_ll and a_sc so the compiler can generate
optimized versions of all the atomic ops and perform better inlining
of a_cas.
extracting the result of the sc (stwcx.) instruction is rather awkward
because it's natively stored in a condition flag, which is not
representable in inline asm. but even with this limitation the new
code still seems significantly better.
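the resulting primitives look roughly like this (a sketch based on the
description above, not the verbatim file):

    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("lwarx %0, 0, %2"
            : "=r"(v) : "m"(*p), "r"(p));
        return v;
    }

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        /* stwcx. records success in cr0[eq]; since a condition
           flag cannot be named as an asm output, copy the whole
           cr out with mfcr and test that bit */
        __asm__ __volatile__ ("stwcx. %2, 0, %3 ; mfcr %0"
            : "=r"(r), "=m"(*p) : "r"(v), "r"(p) : "memory", "cc");
        return r & 0x20000000;
    }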
this commit mostly makes consistent things like spacing, function
ordering in atomic_arch.h, argument names, use of volatile, etc.
a_ctz_l was also removed from x86_64 since atomic.h provides it
automatically using a_ctz_64.
this commit mostly makes consistent things like spacing, function
ordering in atomic_arch.h, argument names, use of volatile, etc. the
fake 64-bit and/or atomics are also removed because the shared
atomic.h does a better job of implementing them; it avoids making two
atomic memory accesses when only one 32-bit half needs to be touched.
no major overhaul is needed or possible because x86 actually has
native versions of all the usual atomic operations, rather than using
ll/sc or needing cas loops.
this is possible with the new build system that allows src/*/$(ARCH)/*
files which do not shadow a file in the parent directory, and yields a
more logical organization. eventually it will be possible to remove
arch/*/src from the build system.
switch to ll/sc model so that new atomic.h can provide optimized
versions of all the atomic primitives without needing an ll/sc loop
written in asm for each one.
all isa levels which use ldrex/strex now use the inline ll/sc model
even if the type of barrier to use is not known until runtime (v6).
the cas model is only used for arm v5 and earlier, and it has been
optimized to make the call via inline asm with custom constraints
rather than as a C function call.
sh needs runtime-selected atomic backends since there are a number of
supported models that use non-forwards-compatible (non-smp-compatible)
atomic mechanisms. previously, the code paths for this were highly
inefficient since they involved C function calls with multiple
branches in the callee and heavy spills in the caller. the new code
performs the call to the runtime-selected asm fragment from inline asm with
extremely minimal clobbers, rather than using a function call.
for the sh4a case where the atomic mechanism is known and there is no
forward-compatibility issue, the movli.l and movco.l instructions are
provided as a_ll and a_sc, allowing the new shared atomic.h to
generate efficient inline versions of all the basic atomic operations
without needing a cas loop.
rather than having each arch provide its own atomic.h, there is a new
shared atomic.h in src/internal which pulls arch-specific definitions
from arch/$(ARCH)/atomic_arch.h. the latter can be extremely minimal,
defining only a_cas or new ll/sc type primitives which the shared
atomic.h will use to construct everything else.
this commit avoids making heavy changes to the individual archs'
atomic implementations. definitions which are identical or
near-identical to what the new shared atomic.h would produce have been
removed, but otherwise the changes made are just hooking up the
arch-specific files to the new infrastructure. major changes to take
advantage of the new system will come in subsequent commits.
commit 2f853dd6b9 failed to replicate
the old makefile logic that caused arch/arm/src/arm/atomics.s to be
built. since this was the only .s file under arch/*/src, rather than
trying to reproduce the old logic, I'm just moving it up a level and
adjusting the glob pattern in the makefile to catch it. eventually
arch/*/src will probably be removed in favor of moving all these files
to appropriate src/*/$(ARCH) locations.
the __SOFTFP__ macro which was wrongly being used does not reflect the
ABI (arm vs armhf) but just the availability of floating point
instructions/registers, so -mfloat-abi=softfp was wrongly being
treated as armhf. __ARM_PCS_VFP is the correct predefined macro to
check for the armhf EABI variant. this macro usage was corrected for
the build process in commit 4918c2bb20
but reloc.h was apparently overlooked at the time.
apparently the .gpword directive does not work reliably with local
text labels; values produced were offset by 64k from the correct
value, resulting in incorrect computation of the got pointer at
runtime. instead, use an external label so that the assembler does not
munge the relocation; the linker will then get it right.
commit 6fef8cafbd exposed this issue by
removing the old, non-PIE-compatible handwritten crt1.s, which was not
affected. presumably mips PIE executables (using Scrt1.o produced from
crt_arch.h) were already affected at the time.
at least gcc 4.7 claims c++11 support but does not accept the alignas
keyword, causing breakage when stddef.h is included in c++11 mode.
instead, prefer using __attribute__((__aligned__)) on any compiler
with GNU extensions, and only use the alignas keyword as a fallback
for other C++ compilers.
C code should not be affected by this patch.
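the affected definition is max_align_t; the preference described above
amounts to roughly the following (a sketch, not the verbatim header):

    #ifdef __GNUC__
    typedef struct { long long __ll; long double __ld; }
        __attribute__((__aligned__)) max_align_t;
    #else
    typedef struct { alignas(long long) long long __ll;
                     alignas(long double) long double __ld; } max_align_t;
    #endif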
commit 8a8fdf6398 was intended to remove
all such usage, but these arch-specific files were overlooked, leading
to inconsistent declarations and definitions.
on linux/nommu, non-writable private mappings of files may actually
use memory shared with other processes or the fs cache. the old nommu
loader code (used when mmap with MAP_FIXED fails) simply wrote over
top of the original file mapping, possibly clobbering this shared
memory. no such breakage was observed in practice, but it should have
been possible.
the new code starts by mapping anonymous writable memory on archs that
might support nommu, then maps load segments over top of it, falling
back to read if MAP_FIXED fails. we use an anonymous map rather than a
writable file map to avoid reading more data from disk than needed.
since pages cannot be loaded lazily on fault, in case of large
data/bss, mapping the full file may read a lot of data that will
subsequently be thrown away when processing additional LOAD segments.
as a result, we cannot skip the first LOAD segment when operating in
this mode.
these changes affect only non-FDPIC nommu support.
these files are all accepted as legacy arm syntax when producing arm
code, but legacy syntax cannot be used for producing thumb2 with
access to the full ISA. even after switching to UAL, some asm source
files contain instructions which are not valid in thumb mode, so these
will need to be addressed separately.
the idea of the three-instruction sequence being removed was to be
able to return to thumb code when used on armv4t+ from a thumb caller,
but also to be able to run on armv4 without the bx instruction
available (in which case the low bit of lr would always be 0).
however, without compiler support for generating such a sequence from
C code, which does not exist and which there is unlikely to be
interest in implementing, there is little point in having it in the
asm, and it would likely be easier to add pre-armv4t support via
enhanced linker handling of R_ARM_V4BX than at the compiler level.
removing this code simplifies adding support for building libc in
thumb2-only form (for cortex-m).
this assumption is borderline-unsafe to begin with, and fails badly
with -ffunction-sections since the linker can move the callee
arbitrarily far away when it lies in a different section.
using the actual mcontext_t definition rather than an overlaid pointer
array both improves correctness/readability and eliminates some ugly
hacks for archs with 64-bit registers but 32-bit program counter.
also fix UB due to comparison of pointers not in a common array
object.
other archs use asm for the thread pointer load, so making that asm
volatile is sufficient to inform the compiler that it has a "side
effect" (crashing or giving the wrong result if the thread pointer was
not yet initialized) that prevents reordering. however, powerpc and
or1k have dedicated general purpose registers for the thread pointer
and did not need to use any asm to access it; instead, "local register
variables with a specified register" were used. however, there is no
specification for ordering constraints on this type of usage, and
presumably use of the thread pointer could be reordered across its
initialization.
to impose an ordering, I have added empty volatile asm blocks that
produce the "local register variable with a specified register" as
an output constraint.
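concretely, the idea is along these lines (a powerpc-flavored sketch;
the register name and thread-structure type are placeholders):

    struct pthread;

    static inline struct pthread *__pthread_self(void)
    {
        register struct pthread *tp __asm__("r2");
        /* empty volatile asm naming tp as an output: the read of
           the register is now ordered with respect to other
           volatile asm, including the tp-setup code */
        __asm__ __volatile__ ("" : "=r"(tp));
        return tp;
    }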
this builds on commits a603a75a72 and
0ba35d69c0 to ensure that a compiler
cannot conclude that it's valid to reorder the asm to a point before
the thread pointer is set up, or to treat the inline function as if it
were declared with attribute((const)).
other archs already use volatile asm for thread pointer loading.
the restorer function pointer provided in the kernel sigaction
structure is interpreted by the kernel as a raw code address, not a
function descriptor.
this commit moves the declarations of the __restore and __restore_rt
symbols to ksigaction.h so that arch versions of the file can override
them, and introduces a version for sh which declares them as objects
rather than functions.
an alternate solution would have been defining SA_RESTORER to 0 so
that the functions are not used, but this both requires executable
stack (since the sh kernel does not have a vdso page with permanent
restorer functions) and crashes on qemu user-level emulation.
the entry point code supports being loaded by a loader which is not
fdpic-aware (in practice, either kernel with mmu or qemu without fdpic
support). this mostly just works, but signal handling will wrongly use
a function descriptor address as a code address if the personality is
not adjusted to fdpic.
ideally this code could be placed with sigaction so that it's not
needed except if/when a signal handler is installed. however,
personality is incorrectly maintained per-thread by the kernel, rather
than per-process, so it's necessary to correct the personality before
any threads are started. also, in order to skip the personality
syscall when an fdpic-aware loader is used, we need to be able to
detect how the program was loaded, and this information is only
readily available at the entry point.
previously, the normal ELF library loading code was used even for
fdpic, so only the kernel-loaded dynamic linker and main app could
benefit from separate placement of segments and shared text.
the __fdpic_fixup code is not needed for ET_DYN executables, which
instead use relocations, so we can omit it from the dynamic linker and
static-pie entry point and save some code size.
the C implementation of __unmapself used for potentially-nommu sh
assumed CRTJMP takes a function descriptor rather than a code address;
however, the actual dynamic linker needs a code address, and so commit
7a9669e977 changed the definition of the
macro in reloc.h. this commit puts the old macro back in a place where
it only affects __unmapself.
this is an ugly workaround and should be cleaned up at some point, but
at least it's well isolated.
at this point not all functionality is complete. the dynamic linker
itself, and main app if it is also loaded by the kernel, take
advantage of fdpic and do not need constant displacement between
segments, but additional libraries loaded by the dynamic linker still
follow normal ELF semantics for mapping. this fully works, but does not
admit shared text on nommu.
in terms of actual functional correctness, dlsym's results are
presently incorrect for function symbols, RTLD_NEXT fails to identify
the caller correctly, and dladdr fails almost entirely.
with the dynamic linker entry point working, support for static pie is
automatically included, but linking the main application as ET_DYN
(pie) probably does not make sense for fdpic anyway. ET_EXEC is
equally relocatable but more efficient at representing relocations.
previously, the call into stage 2 was made by looking up the symbol
name "__dls2" (which was chosen short to be easy to look up) from the
dynamic symbol table. this was no problem for the dynamic linker,
since it always exports all its symbols. in the case of the static pie
entry point, however, the dynamic symbol table does not contain the
necessary symbol unless -rdynamic/-E was used when linking. this
linking requirement is a major obstacle both to practical use of
static-pie as a nommu binary format (since it greatly enlarges the
file) and to upstream toolchain support for static-pie (adding -E to
default linking specs is not reasonable).
this patch replaces the runtime symbolic lookup with a link-time
lookup via an inline asm fragment, which reloc.h is responsible for
providing. in this initial commit, the asm is provided only for i386,
and the old lookup code is left in place as a fallback for archs that
have not yet transitioned.
modifying crt_arch.h to pass the stage-2 function pointer as an
argument was considered as an alternative, but such an approach would
not be compatible with fdpic, where it's impossible to compute
function pointers without already having performed relocations. it was
also deemed desirable to keep crt_arch.h as simple/minimal as
possible.
in principle, archs with pc-relative or got-relative addressing of
static variables could instead load the stage-2 function pointer from
a static volatile object. that does not work for fdpic, and is not
safe against reordering on mips-like archs that use got slots even for
static functions, but it's valid on i386 and many others, and could
provide a reasonable default implementation in the future.
with this commit it should be possible to produce a working
static-linked fdpic libc and application binaries for sh.
the changes in reloc.h are largely unused at this point since dynamic
linking is not supported, but the CRTJMP macro is used one place
outside of dynamic linking, in __unmapself.
this version of the entry point is only suitable for static linking in
ET_EXEC form. neither dynamic linking nor pie is supported yet. at
some point in the future the fdpic and non-fdpic versions of this code
may be unified but for now it's easiest to work with them separately.
clone calls back to a function pointer provided by the caller, which
will actually be a pointer to a function descriptor on fdpic. the
obvious solution is to have a separate version of clone for fdpic, but
I have taken a simpler approach to get around the problem. instead of
calling the pointed-to function from asm, a direct call is made to an
internal C function which then calls the pointed-to function. this
lets the C compiler generate the appropriate calling convention for an
indirect call with no need for ABI-specific assembly.
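the shape of the workaround, with hypothetical names (the real entry
point and exit path differ):

    #include <unistd.h>
    #include <sys/syscall.h>

    /* called directly from the asm clone trampoline; the compiler
       emits the ABI-correct (fdpic-aware) indirect call to fn */
    void __clone_start_child(int (*fn)(void *), void *arg)
    {
        syscall(SYS_exit, fn(arg));
    }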
this error was only found by reading the code, but it seems to have
been causing gcc to produce wrong code in malloc: the same register
was used for the output and the high word of the input. in principle
this could have caused an infinite loop searching for an available
bin, but in practice most x86 models seem to implement the "undefined"
result of the bsf instruction as "unchanged".
these functions are part of the ARM EABI, meaning compilers may
generate references to them. known versions of gcc do not use them,
but llvm does. they are not provided by libgcc, and the de facto
standard seems to be that libc provides them.
commit 3c43c0761e fixed missing
synchronization in the atomic store operation for i386 and x86_64, but
opted to use mfence for the barrier on x86_64 where it's always
available. however, in practice mfence is significantly slower than
the barrier approach used on i386 (a nop-like lock orl operation).
this commit changes x86_64 (and x32) to use the faster barrier.
On 32-bit systems, long long arguments are passed in a special way
to some syscalls; this accidentally got copied to the AArch64 port.
The following interfaces were broken: fallocate, fanotify, ftruncate,
posix_fadvise, posix_fallocate, pread, pwrite, readahead,
sync_file_range, truncate.
despite being strongly ordered, the x86 memory model does not preclude
reordering of loads across earlier stores. while a plain store
suffices as a release barrier, we actually need a full barrier, since
users of a_store subsequently load a waiter count to determine whether
to issue a futex wait, and using a stale count will result in soft
(fail-to-wake) deadlocks. these deadlocks were observed in malloc and
possible with stdio locks and other libc-internal locking.
on i386, an atomic operation on the caller's stack is used as the
barrier rather than performing the store itself using xchg; this
avoids the need to read the cache line on which the store is being
performed. mfence is used on x86_64 where it's always available, and
could be used on i386 with the appropriate cpu model checks if it's
shown to perform better.
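the i386 barrier described above is essentially the following (a sketch
assumed to match the description, not the verbatim source):

    static inline void a_store(volatile int *p, int x)
    {
        __asm__ __volatile__ (
            "movl %1, %0 ; lock ; orl $0,(%%esp)"
            : "=m"(*p) : "r"(x) : "memory" );
    }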
the TLS ABI spec for mips, powerpc, and some other (presently
unsupported) RISC archs has the return value of __tls_get_addr offset
by +0x8000 and the result of DTPOFF relocations offset by -0x8000. I
had previously assumed this part of the ABI was actually just an
implementation detail, since the adjustments cancel out. however, when
the local dynamic model is used for accessing TLS that's known to be
in the same DSO, either of the following may happen:
1. the -0x8000 offset may already be applied to the argument structure
passed to __tls_get_addr at ld time, without any opportunity for
runtime relocations.
2. __tls_get_addr may be used with a zero offset argument to obtain a
base address for the module's TLS, to which the caller then applies
immediate offsets for individual objects accessed using the local
dynamic model. since the immediate offsets have the -0x8000 adjustment
applied to them, the base address they use needs to include the
+0x8000 offset.
it would be possible, but more complex, to store the pointers in the
dtv[] array with the +0x8000 offset pre-applied, to avoid the runtime
cost of adding 0x8000 on each call to __tls_get_addr. this change
could be made later if measurements show that it would help.
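with the offset folded in at call time, the lookup is essentially the
following sketch (DTP_OFFSET being 0x8000 on affected archs and 0
elsewhere; self/dtv refer to the internal thread structure):

    void *__tls_get_addr(size_t *v)
    {
        pthread_t self = __pthread_self();
        /* v[0] = module id, v[1] = offset within the module's TLS,
           already biased by -0x8000 on affected archs */
        return (char *)self->dtv[v[0]] + v[1] + DTP_OFFSET;
    }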
nominally the low bits of the trap number on sh are the number of
syscall arguments, but they have never been used by the kernel, and
some code making syscalls does not even know the number of arguments
and needs to pass an arbitrary high number anyway.
sh3/sh4 traditionally used the trap range 16-31 for syscalls, but part
of this range overlapped with hardware exceptions/interrupts on sh2
hardware, so an incompatible range 32-47 was chosen for sh2.
using trap number 31 everywhere, since it's in the existing sh3/sh4
range and does not conflict with sh2 hardware, is a proposed
unification of the kernel syscall convention that will allow binaries
to be shared between sh2 and sh3/sh4. if this is not accepted into the
kernel, we can refit the sh2 target with runtime selection mechanisms
for the trap number, but doing so would be invasive and would entail
non-trivial overhead.
due to the way the interrupt and syscall trap mechanism works,
userspace on sh2 must never set the stack pointer to an invalid value.
thus, the approach used on most archs, where __unmapself executes with
no stack for the interval between SYS_munmap and SYS_exit, is not
viable on sh2.
in order not to pessimize sh3/sh4, the sh asm version of __unmapself
is not removed. instead it's renamed and redirected through code that
calls either the generic (safe) __unmapself or the sh3/sh4 asm,
depending on compile-time and run-time conditions.
the sh2 target is being considered an ISA subset of sh3/sh4, in the
sense that binaries built for sh2 are intended to be usable on later
cpu models/kernels with mmu support. so rather than hard-coding
sh2-specific atomics, the runtime atomic selection mechanism that was
already in place has been extended to add sh2 atomics.
at this time, the sh2 atomics are not SMP-compatible; since the ISA
lacks actual atomic operations, the new code instead masks interrupts
for the duration of the atomic operation, producing an atomic result
on single-core. this is only possible because the kernel/hardware does
not impose protections against userspace doing so. additional changes
will be needed to support future SMP systems.
care has been taken to avoid producing significant additional code
size in the case where it's known at compile-time that the target is
not sh2 and does not need sh2-specific code.
the instruction used to align the stack, "and $sp, $sp, -8", does not
actually exist; it's expanded to 2 instructions using the 'at'
(assembler temporary) register, and thus cannot be used in a branch
delay slot. since alignment mod 16 commutes with subtracting 8, simply
swapping these two operations fixes the problem.
crt1.o was not affected because it's still being generated from a
dedicated asm source file. dlstart.lo was not affected because the
stack pointer it receives is already aligned by the kernel. but
Scrt1.o was affected in cases where the dynamic linker gave it a
misaligned stack pointer.
i386 and x86_64 versions already had the .text directive; other archs
did not. normally, top-level (file scope) __asm__ starts in the .text
section anyway, but problems were reported with some versions of
clang, and it seems preferable to set it explicitly, at least
for the sake of consistency between archs.
conceptually, and on other archs, these functions take a pointer to
int, but in the i386, x86_64, and x32 versions of atomic.h, they took
a pointer to void instead.
If we're building for sh4a, the compiler is already free to use
instructions only available on sh4a, so we can do the same and inline the
llsc atomics. If we're building for an older processor, we still do the
same runtime atomics selection as before.
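For the sh4a case this means inline definitions along these lines (a
sketch; constraint and clobber details are assumed):

    static inline int a_ll(volatile int *p)
    {
        int v;
        __asm__ __volatile__ ("movli.l @%1, %0"
            : "=z"(v) : "r"(p) : "memory");
        return v;
    }

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        /* movco.l sets the T bit on success; movt copies it out */
        __asm__ __volatile__ ("movco.l %2, @%3 ; movt %0"
            : "=r"(r), "=m"(*p) : "z"(v), "r"(p) : "memory", "cc");
        return r;
    }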
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the instructions this patch omits in thumb mode are needed only for
non-thumb versions of armv4 or earlier, which are not supported by any
current compilers/toolchains and thus rather pointless to have. at
some point these compatibility return sequences may be removed from
all asm source files, and in that case it would make sense to remove
them here too and remove the ifdef.
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the changes made here avoid operating directly on the sp register,
which is not possible in thumb code, and address an issue with the way
the address of _DYNAMIC is computed.
previously, the relative address of _DYNAMIC was stored with an
additional offset of -8 versus the pc-relative add instruction, since
on arm the pc register evaluates to ".+8". in thumb code, it instead
evaluates to ".+4". both are two (normal-size) instructions beyond "."
in the current execution mode, so the numbered label 2 used in the
relative address expression is simply moved two instructions ahead to
be compatible with both instruction sets.
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack TLS canary ABI) the
choice does not matter.
the lifetime of compound literals is the block in which they appear.
the temporary struct __timespec_kernel objects created as compound
literals no longer existed at the time their addresses were passed to
the kernel.