Commit Graph

129 Commits

Author SHA1 Message Date
Szabolcs Nagy fe39aaae0e add bits/hwcap.h and include it in sys/auxv.h
aarch64, arm, mips, mips64, mipsn32, powerpc, powerpc64 and sh have
cpu feature bits defined in linux for the AT_HWCAP auxv entry, so
expose those in sys/auxv.h

it seems the mips hwcaps were never exposed to userspace by either
linux or glibc, but that's most likely an oversight.
2016-10-20 01:28:25 -04:00
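
For illustration, a minimal sketch of what the new header enables:
reading the kernel-provided feature mask via getauxval. The specific
HWCAP_* bit names come from bits/hwcap.h and differ per arch
(HWCAP_FP below is the aarch64 spelling, shown as an example only).

    #include <sys/auxv.h>
    #include <stdio.h>

    int main(void)
    {
        /* AT_HWCAP is the kernel-provided cpu feature bit mask */
        unsigned long hw = getauxval(AT_HWCAP);
    #ifdef HWCAP_FP /* aarch64 name; other archs define different bits */
        printf("fp: %s\n", (hw & HWCAP_FP) ? "yes" : "no");
    #endif
        return 0;
    }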
Rich Felker befa5866ee make brace placement in public header struct definitions consistent
placing the opening brace on the same line as the struct keyword/tag
is the style I prefer and seems to be the prevailing practice in more
recent additions.

these changes were generated by the command:

find include/ arch/*/bits -name '*.h' \
-exec sed -i '/^struct [^;{]*$/{N;s/\n/ /;}' {} +

and subsequently checked by hand to ensure that the regex did not pick
up any false positives.
2016-07-03 15:02:25 -04:00
Szabolcs Nagy cae8ac485f fix CBAUDEX in powerpc termios.h
it seems it was a typo.
2016-07-03 15:02:24 -04:00
Szabolcs Nagy 3bda42ac4b fix powerpc termios.h macro exposure/namespace issues
same changes as in the generic header.

BOTHER and IBSHIFT were removed (present in the linux uapi but not
in glibc), and TIOCSER_TEMT was added (present in glibc).
2016-07-03 15:02:24 -04:00
Szabolcs Nagy 058c0b2d70 remove mips and powerpc ioctls that are missing from linux uapi
mips and powerpc use their own asm/ioctls.h rather than asm-generic/ioctls.h,
and they lack the termiox macros that are available on other targets.
see kernel commit 1d65b4a088de407e99714fdc27862449db04fb5c
2016-07-03 14:54:34 -04:00
Szabolcs Nagy 5ce901279e add missing TIOC* macros to ioctl.h
these are defined in linux asm/ioctls.h.
(powerpc64 and powerpc bits/ioctl.h are now identical)
2016-07-03 14:54:34 -04:00
Szabolcs Nagy 8735a921d0 add missing SIOCSIFNAME from linux/sockios.h to ioctl.h
glibc ioctl.h has it too.
2016-07-03 14:54:33 -04:00
Szabolcs Nagy 2df9ae9161 remove ioctl macros that were removed from linux uapi
TIOCTTYGSTRUCT, TIOCGHAYESESP, TIOCSHAYESESP and TIOCM_MODEM_BITS
were removed from the linux uapi and are not present in glibc ioctl.h
2016-07-03 14:54:33 -04:00
Rich Felker 3dd27f3aab fix posix_fadvise syscall args on powerpc, unify with arm fix
commit 6d38c9cf80 provided an
arm-specific version of posix_fadvise to address the alternate
argument order the kernel expects on arm, but neglected to address
that powerpc (32-bit) has the same issue. instead of having arch
variant files in duplicate, simply put the alternate version in the
top-level file under the control of a macro defined in syscall_arch.h.
2016-07-01 13:32:35 -04:00
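
A sketch of what the unified top-level file can look like. The names
SYSCALL_FADVISE_6_ARG and __SYSCALL_LL_E follow my reading of the commit
and musl's internal conventions and are shown for illustration only;
this is libc-internal code, so the helpers come from musl's internal
syscall.h rather than public headers.

    #include <fcntl.h>
    #include "syscall.h" /* musl-internal: __syscall, SYS_fadvise, __SYSCALL_LL_E */

    int posix_fadvise(int fd, off_t base, off_t len, int advice)
    {
    #ifdef SYSCALL_FADVISE_6_ARG
        /* arm and 32-bit powerpc expect the advice before the 64-bit offset/len pair */
        return -__syscall(SYS_fadvise, fd, advice,
            __SYSCALL_LL_E(base), __SYSCALL_LL_E(len));
    #else
        return -__syscall(SYS_fadvise, fd, __SYSCALL_LL_E(base),
            __SYSCALL_LL_E(len), advice);
    #endif
    }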
Szabolcs Nagy 78b1f3cb14 add preadv2 and pwritev2 syscall numbers for linux v4.6
the syscalls take an additional flag argument; they were added in commit
f17d8b35452cab31a70d224964cd583fb2845449, and an RWF_HIPRI priority hint
flag was added to linux/fs.h in 97be7ebe53915af504fb491fb99f064c7cf3cb09.

the syscall is not allocated for microblaze and sh yet.
2016-06-09 13:38:41 -04:00
Bobby Bingham 63e3a1661f deduplicate __NR_* and SYS_* syscall number definitions 2016-05-12 00:34:05 -05:00
Rich Felker 49631b7b6c fix spurious trailing whitespace in powerpc & powerpc64 bits/errno.h 2016-05-08 23:16:14 -04:00
Szabolcs Nagy 84d4f5eee5 add copy_file_range syscall numbers from linux v4.5
it was introduced for offloading copying between regular files
in linux commit 29732938a6289a15e907da234d6692a2ead71855

(microblaze and sh do not yet have the syscall number.)
2016-03-19 11:30:49 -04:00
Szabolcs Nagy e9f1c7981a deduplicate bits/mman.h
currently five targets use the same mman.h constants and the rest
share most constants too, so move them to sys/mman.h before the
bits/mman.h include where the differences can be corrected by
redefinition of the macros.

this fixes two minor bugs: POSIX_MADV_DONTNEED was wrong on most
targets (it should be the same as MADV_DONTNEED), and sh defined
the x86-only MAP_32BIT mmap flag.
2016-03-18 22:40:28 -04:00
Felix Fietkau 5a92dd95c7 add powerpc soft-float support
Some PowerPC CPUs (e.g. Freescale MPC85xx) have a completely different
instruction set for floating point operations (SPE).
Executing regular PowerPC floating point instructions results in
"Illegal instruction" errors.

Make it possible to run these devices in soft-float mode.
2016-03-06 17:03:01 -05:00
Rich Felker 4dfac11538 deduplicate the bulk of the arch bits headers
all bits headers that were identical for a number of 'clean' archs are
moved to the new arch/generic tree. in addition, a few headers that
differed only cosmetically from the new generic version are removed.

additional deduplication may be possible in mman.h and in several
headers (limits.h, posix.h, stdint.h) that mostly depend on whether
the arch is 32- or 64-bit, but they are left alone for now because
greater gains are likely possible with more invasive changes to header
logic, which is beyond the scope of this commit.
2016-01-27 21:52:14 -05:00
Szabolcs Nagy 789ff6a9f8 add MCL_ONFAULT and MLOCK_ONFAULT mlockall and mlock2 flags
they lock faulted pages into memory (useful when a small part of a
large mapped file needs efficient access), new in linux v4.4, commit
b0f205c2a3082dd9081f9a94e50658c5fa906ff1

MLOCK_* is not in the POSIX reserved namespace for sys/mman.h
2016-01-26 18:31:05 -05:00
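
Since no mlock2() wrapper exists yet, the new flag can be exercised
with a raw syscall. A sketch; lock_on_fault is just an illustrative
helper name.

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <errno.h>

    /* lock pages of a large mapping only as they are actually faulted in */
    static int lock_on_fault(void *addr, size_t len)
    {
    #if defined(SYS_mlock2) && defined(MLOCK_ONFAULT)
        return syscall(SYS_mlock2, addr, len, MLOCK_ONFAULT);
    #else
        errno = ENOSYS;
        return -1;
    #endif
    }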
Szabolcs Nagy 51d5f139ca add mlock2 syscall number from linux v4.4
this is mlock with a flags argument, new in linux commit
a8ca5d0ecbdde5cc3d7accacbd69968b0c98764e

as usual, microblaze and sh don't have an allocated syscall number yet.
2016-01-26 18:30:50 -05:00
Szabolcs Nagy 09001a8f97 add new membarrier, userfaultfd and switch_endian syscalls
new in linux v4.3; added for aarch64, arm, i386, mips, or1k, powerpc,
x32 and x86_64.

membarrier is a system-wide memory barrier that moves most of the
synchronization cost to one side; new in kernel commit
5b25b13ab08f616efd566347d809b4ece54570d1

userfaultfd is useful for qemu and is new in kernel commit
8d2afd96c20316d112e04d935d9e09150e988397

switch_endian is powerpc-only, for switching endianness; new in commit
529d235a0e190ded1d21ccc80a73e625ebcad09b
2016-01-26 18:28:20 -05:00
Szabolcs Nagy bc443c3fe3 clean powerpc syscall.h
remove ifdefs for powerpc64.
2016-01-24 19:08:57 -05:00
Szabolcs Nagy f9c3a2e048 add missing powerpc specific PROT_SAO memory protection flag
this flag for strong access ordering was added in linux v2.6.27
commit aba46c5027cb59d98052231b36efcbbde9c77a1d
2016-01-24 19:08:40 -05:00
Szabolcs Nagy 2f6f3dccb4 fix powerpc MCL_* mlockall flags in bits/mman.h
the definitions didn't match the linux uapi headers.
2016-01-24 19:08:19 -05:00
Rich Felker 513c043694 overhaul powerpc atomics for new atomics framework
previously powerpc had a_cas defined in terms of its native ll/sc
style operations, but all other atomics were defined in terms of
a_cas. instead define a_ll and a_sc so the compiler can generate
optimized versions of all the atomic ops and perform better inlining
of a_cas.

extracting the result of the sc (stwcx.) instruction is rather awkward
because it's natively stored in a condition flag, which is not
representable in inline asm. but even with this limitation the new
code still seems significantly better.
2016-01-22 02:58:32 +00:00
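
For context, the awkward part looks roughly like this (a sketch, not
the exact source): stwcx. reports success in cr0's EQ bit, so the
whole condition register is read back with mfcr and masked.

    static inline int a_sc(volatile int *p, int v)
    {
        int r;
        __asm__ __volatile__ ("stwcx. %2, 0, %3 ; mfcr %0"
            : "=r"(r), "=m"(*p)
            : "r"(v), "r"(p)
            : "memory", "cc");
        return r & 0x20000000; /* cr0 EQ bit: set if the store-conditional succeeded */
    }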
Rich Felker 1315596b51 refactor internal atomic.h
rather than having each arch provide its own atomic.h, there is a new
shared atomic.h in src/internal which pulls arch-specific definitions
from arch/$(ARCH)/atomic_arch.h. the latter can be extremely minimal,
defining only a_cas or new ll/sc type primitives which the shared
atomic.h will use to construct everything else.

this commit avoids making heavy changes to the individual archs'
atomic implementations. definitions which are identical or
near-identical to what the new shared atomic.h would produce have been
removed, but otherwise the changes made are just hooking up the
arch-specific files to the new infrastructure. major changes to take
advantage of the new system will come in subsequent commits.
2016-01-21 19:08:54 +00:00
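
The construction the shared header performs is roughly the following
sketch. a_ll and a_sc are the arch-provided primitives; the a_barrier
hooks stand in for wherever the real header inserts ordering and are
an assumption here.

    /* synthesize compare-and-swap from arch-provided ll/sc primitives */
    static inline int a_cas(volatile int *p, int t, int s)
    {
        int old;
        a_barrier();
        do {
            old = a_ll(p);      /* load-linked (e.g. lwarx on powerpc) */
            if (old != t) break;
        } while (!a_sc(p, s));  /* store-conditional (e.g. stwcx.) */
        a_barrier();
        return old;
    }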
Rich Felker cb1bf2f321 properly access mcontext_t program counter in cancellation handler
using the actual mcontext_t definition rather than an overlaid pointer
array both improves correctness/readability and eliminates some ugly
hacks for archs with 64-bit registers but 32-bit program counter.

also fix UB due to comparison of pointers not in a common array
object.
2015-11-02 12:41:49 -05:00
Rich Felker 92637bb0d8 prevent reordering of or1k and powerpc thread pointer loads
other archs use asm for the thread pointer load, so making that asm
volatile is sufficient to inform the compiler that it has a "side
effect" (crashing or giving the wrong result if the thread pointer was
not yet initialized) that prevents reordering. however, powerpc and
or1k have dedicated general purpose registers for the thread pointer
and did not need to use any asm to access it; instead, "local register
variables with a specified register" were used. however, there is no
specification for ordering constraints on this type of usage, and
presumably use of the thread pointer could be reordered across its
initialization.

to impose an ordering, I have added empty volatile asm blocks that
produce the "local register variable with a specified register" as
an output constraint.
2015-10-15 12:08:51 -04:00
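
On powerpc that looks roughly like the sketch below (as it would appear
inside the arch header, where struct pthread is already defined; the
0x7000/sizeof bias is the usual TLS ABI adjustment, shown only for
illustration).

    static inline struct pthread *__pthread_self()
    {
        register char *tp __asm__("r2"); /* r2 is the dedicated thread pointer */
        /* the empty volatile asm "produces" the register variable, so the
           read cannot be reordered before thread-pointer initialization */
        __asm__ __volatile__ ("" : "=r"(tp));
        return (struct pthread *)(tp - 0x7000 - sizeof(struct pthread));
    }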
Rich Felker c16182680c new dlstart stage-2 chaining for powerpc 2015-09-17 07:20:58 +00:00
Felix Janda 64b6684ddd reindent powerpc's bits/termios.h to be consistent with other archs 2015-09-15 14:30:08 -04:00
Roman Yeryomin 3975577922 socket.h: cleanup/reorder mips and powerpc bits/socket.h
...to be somewhat consistent and easily comparable with asm/socket.h

Signed-off-by: Roman Yeryomin <roman@ubnt.com>
2015-07-21 19:14:58 -04:00
Roman Yeryomin 29ec7677a7 socket.h: fix SO_* for mips
Signed-off-by: Roman Yeryomin <roman@ubnt.com>
2015-07-21 19:14:26 -04:00
Rich Felker 6ba5517a46 fix local-dynamic model TLS on mips and powerpc
the TLS ABI spec for mips, powerpc, and some other (presently
unsupported) RISC archs has the return value of __tls_get_addr offset
by +0x8000 and the result of DTPOFF relocations offset by -0x8000. I
had previously assumed this part of the ABI was actually just an
implementation detail, since the adjustments cancel out. however, when
the local dynamic model is used for accessing TLS that's known to be
in the same DSO, either of the following may happen:

1. the -0x8000 offset may already be applied to the argument structure
passed to __tls_get_addr at ld time, without any opportunity for
runtime relocations.

2. __tls_get_addr may be used with a zero offset argument to obtain a
base address for the module's TLS, to which the caller then applies
immediate offsets for individual objects accessed using the local
dynamic model. since the immediate offsets have the -0x8000 adjustment
applied to them, the base address they use needs to include the
+0x8000 offset.

it would be possible, but more complex, to store the pointers in the
dtv[] array with the +0x8000 offset pre-applied, to avoid the runtime
cost of adding 0x8000 on each call to __tls_get_addr. this change
could be made later if measurements show that it would help.
2015-06-25 22:22:00 +00:00
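
In code, the fix amounts to applying the bias when forming the returned
address. A sketch of the relevant path only: DTP_OFFSET is assumed to
be 0x8000 on mips/powerpc and 0 elsewhere, and the dtv bounds check and
lazy-allocation handling are omitted.

    void *__tls_get_addr(size_t *v)
    {
        pthread_t self = __pthread_self();
        /* v[0] = module id, v[1] = offset already biased by -0x8000 */
        return (char *)self->dtv[v[0]] + v[1] + DTP_OFFSET;
    }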
Rich Felker 63caf1d207 add .text section directive to all crt_arch.h files missing it
i386 and x86_64 versions already had the .text directive; other archs
did not. normally, top-level (file scope) __asm__ starts in the .text
section anyway, but problems were reported with some versions of
clang, and it seems preferable to set it explicitly anyway, at least
for the sake of consistency between archs.
2015-05-22 01:50:05 -04:00
Rich Felker 484194dbf4 fix stack protector crashes on x32 & powerpc due to misplaced TLS canary
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.

in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack TLS canary ABI) the
choice does not matter.
2015-05-06 18:37:19 -04:00
Rich Felker f3ddd17380 dynamic linker bootstrap overhaul
this overhaul further reduces the amount of arch-specific code needed
by the dynamic linker and removes a number of assumptions, including:

- that symbolic function references inside libc are bound at link time
  via the linker option -Bsymbolic-functions.

- that libc functions used by the dynamic linker do not require
  access to data symbols.

- that static/internal function calls and data accesses can be made
  without performing any relocations, or that arch-specific startup
  code handled any such relocations needed.

removing these assumptions paves the way for allowing libc.so itself
to be built with stack protector (among other things), and is achieved
by a three-stage bootstrap process:

1. relative relocations are processed with a flat function.
2. symbolic relocations are processed with no external calls/data.
3. main program and dependency libs are processed with a
   fully-functional libc/ldso.

reduction in arch-specific code is achieved through the following:

- crt_arch.h, used for generating crt1.o, now provides the entry point
  for the dynamic linker too.

- asm is no longer responsible for skipping the beginning of argv[]
  when ldso is invoked as a command.

- the functionality previously provided by __reloc_self for heavily
  GOT-dependent RISC archs is now the arch-agnostic stage-1.

- arch-specific relocation type codes are mapped directly as macros
  rather than via an inline translation function/switch statement.
2015-04-13 03:04:42 -04:00
Rich Felker fd427c4eae move O_PATH definition back to arch bits
while it's the same for all presently supported archs, it differs at
least on sparc, and conceptually it's no less arch-specific than the
other O_* macros. O_SEARCH and O_EXEC are still defined in terms of
O_PATH in the main fcntl.h.
2015-04-01 19:31:06 -04:00
Rich Felker d5a5045382 fix MINSIGSTKSZ values for archs with large signal contexts
the previous values (2k min and 8k default) were too small for some
archs. aarch64 reserves 4k in the signal context for future extensions
and requires about 4.5k total, and powerpc reportedly uses over 2k.
the new minimums are chosen to fit the saved context and also allow a
minimal signal handler to run.

since the default (SIGSTKSZ) has always been 6k larger than the
minimum, it is also increased to maintain the 6k usable by the signal
handler. this happens to be able to store one pathname buffer and
should be sufficient for calling any function in libc that doesn't
involve conversion between floating point and decimal representations.

x86 (both 32-bit and 64-bit variants) may also need a larger minimum
(around 2.5k) in the future to support avx-512, but the values on
these archs are left alone for now pending further analysis.

the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ
at this time. this is so as not to preclude applications from using
extremely small thread stacks when they know they will not be handling
signals. unfortunately cancellation and multi-threaded set*id() use
signals as an implementation detail and therefore require a stack
large enough for a signal context, so applications which use extremely
small thread stacks may still need to avoid using these features.
2015-03-18 00:31:37 -04:00
Szabolcs Nagy 559de8f5f0 fix FLT_ROUNDS to reflect the current rounding mode
Implemented as a wrapper around fegetround introducing a new function
to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h)
2015-03-07 12:05:28 -05:00
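
The wrapper is essentially a translation from the fenv rounding modes
to the FLT_ROUNDS encoding; a minimal sketch consistent with the
description above (float.h can then define FLT_ROUNDS as (__flt_rounds())).

    #include <fenv.h>

    int __flt_rounds()
    {
        switch (fegetround()) {
        case FE_TOWARDZERO: return 0;
        case FE_TONEAREST:  return 1;
        case FE_UPWARD:     return 2;
        case FE_DOWNWARD:   return 3;
        }
        return -1; /* indeterminable */
    }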
Trutz Behn f5011c62c3 fix POLLWRNORM and POLLWRBAND on mips
these macros have the same distinct definition on blackfin, frv, m68k,
mips, sparc and xtensa kernels. POLLMSG and POLLRDHUP additionally
differ on sparc.
2015-03-04 12:09:37 -05:00
Rich Felker 56fbaa3bbe make all objects used with atomic operations volatile
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.

with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.

in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
2015-03-03 22:50:02 -05:00
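
Spelled out as code, the hazardous pattern and its legal-but-breaking
transformation look like the sketch below; a_cas is the internal cas
primitive, declared here only so the fragment stands alone.

    int a_cas(volatile int *p, int t, int s); /* internal primitive (assumed) */

    static int f(int x) { return x + 1; }

    static void atomic_update(volatile int *p)
    {
        int tmp;
        do tmp = *p;                          /* one snapshot feeds both cas arguments */
        while (a_cas(p, tmp, f(tmp)) != tmp);
        /* on a non-volatile object the compiler could rewrite the loop body
           as a_cas(p, *p, f(*p)); its two loads may observe different values,
           breaking the atomicity of the read-modify-write */
    }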
Szabolcs Nagy f54c28cba2 add syscall numbers for the new execveat syscall
this syscall allows fexecve to be implemented without /proc; it is new
in linux v3.19, added in commit 51f39a1f0cea1cacf8c787f652f26dfee9611874
(sh and microblaze do not have allocated syscall numbers yet)

added a x32 fix as well: the io_setup and io_submit syscalls are no
longer common with x86_64, so use the x32 specific numbers.
2015-02-09 23:00:56 +01:00
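
For illustration, the /proc-free fexecve this enables is roughly the
sketch below, using a raw syscall: AT_EMPTY_PATH with an empty pathname
executes the fd itself. fexecve_sketch is an illustrative name, not the
libc implementation.

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <fcntl.h>
    #include <errno.h>

    static int fexecve_sketch(int fd, char *const argv[], char *const envp[])
    {
    #ifdef SYS_execveat
        syscall(SYS_execveat, fd, "", argv, envp, AT_EMPTY_PATH);
    #else
        errno = ENOSYS;
    #endif
        return -1; /* reached only if the exec failed */
    }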
Trutz Behn 2d67ae923d move MREMAP_MAYMOVE and MREMAP_FIXED out of bits
the definitions are generic for all kernel archs. exposure of these
macros now only occurs on the same feature test as for the function
accepting them, which is believed to be more correct.
2015-01-30 22:02:23 -05:00
Szabolcs Nagy f90fafea3c add new syscall numbers for bpf and kexec_file_load
these syscalls are new in linux v3.18; bpf is present on all
supported archs except sh, and kexec_file_load is so far only
allocated for x86_64 and x32.

bpf was added in linux commit 99c55f7d47c0dc6fc64729f37bf435abf43f4c60

kexec_file_load syscall number was allocated in commit
f0895685c7fd8c938c91a9d8a6f7c11f22df58d2
2014-12-23 01:44:19 -05:00
Rich Felker 91f15e2d0d move wint_t definition to the shared part of alltypes.h.in 2014-12-21 02:43:35 -05:00
Rich Felker 4134c68dd4 unify non-inline version of syscall code across archs
on all archs except powerpc (which still lacks inline syscalls simply
because nobody has written the code), these are fallbacks used to work
around a clang bug that probably does not exist in versions of clang
that can compile musl. however, it's useful to have the generic
non-inline code anyway, as it eases the task of porting to new archs:
writing inline syscall code is now optional. this approach could also
help support compilers which don't understand inline asm or lack
support for the needed register constraints.

mips could not be unified because it has special fixup code for the
broken layout of the kernel's struct stat.
2014-11-22 21:50:13 -05:00
Rich Felker 867b1822f3 add explicit barrier operation to internal atomic.h API 2014-10-10 18:17:09 -04:00
Szabolcs Nagy 4ffc39c654 add new syscall numbers for seccomp, getrandom, memfd_create
these syscalls are new in linux v3.17 and present on all supported
archs except sh.

seccomp was added in commit 48dc92b9fc3926844257316e75ba11eb5c742b2c
it has operation, flags and pointer arguments (if flags==0 then it is
the same as prctl(PR_SET_SECCOMP,...)); the uapi header for flag
definitions is linux/seccomp.h

getrandom was added in commit c6e9d6f38894798696f23c8084ca7edbf16ee895
it provides an entropy source when open("/dev/urandom",..) would fail;
the uapi header for flags is linux/random.h

memfd_create was added in commit 9183df25fe7b194563db3fec6dc3202a5855839c
it allows an anonymous mmap to have an fd that can be shared and sealed
and that needs no mount point; the uapi header for flags is linux/memfd.h
2014-10-08 10:25:04 -04:00
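
A minimal use of the new entropy syscall, done raw since no libc
wrapper exists yet (a sketch; flags value 0 reads from the urandom
pool, and get_entropy is an illustrative name).

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <errno.h>

    static ssize_t get_entropy(void *buf, size_t len)
    {
    #ifdef SYS_getrandom
        return syscall(SYS_getrandom, buf, len, 0);
    #else
        errno = ENOSYS;
        return -1;
    #endif
    }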
Rich Felker b7cf71a190 add threads.h and needed per-arch types for mtx_t and cnd_t
based on patch by Jens Gustedt.

mtx_t and cnd_t are defined in such a way that they are formally
"compatible types" with pthread_mutex_t and pthread_cond_t,
respectively, when accessed from a different translation unit. this
makes it possible to implement the C11 functions using the pthread
functions (which will dereference them with the pthread types) without
having to use the same types, which would necessitate either namespace
violations (exposing pthread type names in threads.h) or incompatible
changes to the C++ name mangling ABI for the pthread types.

for the rest of the types, things are much simpler; using identical
types is possible without any namespace considerations.
2014-09-06 20:44:30 -04:00
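
A toy version of the trick, with illustrative names and field sizes
rather than the real per-arch layout: repeat the pthread_mutex_t
representation without naming it, then cast inside the wrapper.

    #include <pthread.h>

    /* layout-compatible stand-in for pthread_mutex_t (sizes illustrative) */
    typedef struct { union { int __i[10]; volatile void *volatile __p[5]; } __u; } my_mtx_t;

    int my_mtx_lock(my_mtx_t *m)
    {
        /* the pthread function accesses the object through its own,
           layout-identical definition */
        return pthread_mutex_lock((pthread_mutex_t *)m);
    }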
Rich Felker ea818ea834 add working a_spin() atomic for non-x86 targets
conceptually, a_spin needs to be at least a compiler barrier, so the
compiler will not optimize out loops (and the load on each iteration)
while spinning. it should also be a memory barrier, or the spinning
thread might keep spinning without noticing stores from other threads,
thus delaying for longer than it should.

ideally, an optimal a_spin implementation that avoids unnecessary
cache/memory contention should be chosen for each arch, but for now,
the easiest thing is to perform a useless a_cas on the calling
thread's stack.
2014-08-25 15:43:40 -04:00
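
The stopgap described above is tiny; a sketch, with a_cas being the
arch-provided full-barrier cas, declared here only to make the fragment
stand alone.

    int a_cas(volatile int *p, int t, int s); /* arch primitive (assumed) */

    /* compiler + memory barrier via a useless cas on the caller's own stack */
    static inline void a_spin(void)
    {
        volatile int dummy = 0;
        a_cas(&dummy, 0, 0);
    }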
Rich Felker 321f4fa906 add max_align_t definition for C11 and C++11
unfortunately this needs to be able to vary by arch, because of a huge
mess GCC made: the GCC definition, which became the ABI, depends on
quirks in GCC's definition of __alignof__, which does not match the
formal alignment of the type.

GCC's __alignof__ unexpectedly exposes an implementation detail,
its "preferred alignment" for the type, rather than the formal/ABI
alignment of the type, which it only actually uses in structures. on
most archs the two values are the same, but on some (at least i386)
the preferred alignment is greater than the ABI alignment.

I considered using _Alignas(8) unconditionally, but on at least one
arch (or1k), the alignment of max_align_t with GCC's definition is
only 4 (even the "preferred alignment" for these types is only 4).
2014-08-20 17:20:14 -04:00
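
The resulting per-arch definitions are small; a typical one looks like
the sketch below. On archs such as i386, where GCC's preferred
alignment exceeds the ABI alignment, an explicit alignment attribute
would additionally be needed on top of this.

    typedef struct {
        long long   __ll;
        long double __ld;
    } max_align_t;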
Rich Felker de7e99c585 make pointers used in robust list volatile
when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.

previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that they could not be
reordered incorrectly would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.

in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
2014-08-17 00:46:26 -04:00
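
For reference, the kernel's robust-list ABI that the new circular
format matches (declarations as in the linux uapi futex header); after
this commit, musl's own copies of these pointers are volatile-qualified.

    struct robust_list {
        struct robust_list *next;
    };

    struct robust_list_head {
        /* when the list is empty, list.next points back at &head.list */
        struct robust_list list;
        long futex_offset;
        struct robust_list *list_op_pending;
    };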