on old kernels, there's no way to detect errors; we must assume
negative syscall return values are pgrp ids. but if the F_GETOWN_EX
fcntl works, we can get a reliable answer.
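roughly, the idea is the following (a sketch only, with a made-up
wrapper name; the real fcntl code differs in details):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <errno.h>

    /* illustrative: ask via F_GETOWN_EX first, which reports the owner
     * type separately so negative values are never ambiguous; fall
     * back to plain F_GETOWN only on kernels that lack it */
    static int get_owner(int fd)
    {
        struct f_owner_ex ex;
        if (fcntl(fd, F_GETOWN_EX, &ex) == 0)
            return ex.type == F_OWNER_PGRP ? -ex.pid : ex.pid;
        if (errno != EINVAL)
            return -1;
        /* old kernel: a negative result may be an error or a pgrp id */
        return fcntl(fd, F_GETOWN);
    }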
this is actually rather ugly, and would get even uglier if we ever
want to support further feature test macros. at some point i may
factor the bits headers into separate files for C base, POSIX base,
and nonstandard extensions (the only distinctions that seem to matter
now) and then the logic for which to include can go in the main header
rather than being duplicated for each arch. the downside of this is
that it would result in more files having to be opened during
compilation, so as long as the ugliness does not grow, i'm inclined to
leave it alone for now.
fcntl values 1024 and up are universal, arch-independent. later i'll
add some of the other linux-specific ones (notify, leases, pipe size,
etc.) here too.
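for reference, the relevant range looks roughly like this (values taken
from the linux uapi headers; worth re-checking before relying on them):

    #define F_SETLEASE      1024
    #define F_GETLEASE      1025
    #define F_NOTIFY        1026
    #define F_SETPIPE_SZ    1031
    #define F_GETPIPE_SZ    1032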
when the "r" (register) constraint is used to let gcc choose a
register, gcc will sometimes assign the same register that was used
for one of the other fixed-register operands, if it knows the values
are the same. one common case is multiple zero arguments to a syscall.
this horribly breaks the intended usage, which is swapping the GOT
pointer from ebx into the temp register and back to perform the
syscall.
presumably there is a way to fix this with advanced usage of register
constraints on the inline asm, but given bad memories of hellish
compatibility issues across different gcc versions, for the time being
i'm just going to hard-code specific registers to be used. this may
hurt the compiler's ability to optimize, but it will fix serious
miscompilation issues.
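as an illustration only (not the exact syscall_arch.h code), a
3-argument i386 syscall with every operand pinned to a named register
looks something like this:

    static inline long syscall3(long n, long a1, long a2, long a3)
    {
        long ret;
        /* a1 is pinned to esi and swapped into ebx (the GOT register
         * under PIC) around the syscall; with no "r" constraints, gcc
         * cannot alias two operands onto the same register */
        __asm__ __volatile__ (
            "xchg %%esi,%%ebx ; int $0x80 ; xchg %%esi,%%ebx"
            : "=a"(ret)
            : "a"(n), "S"(a1), "c"(a2), "d"(a3)
            : "memory");
        return ret;
    }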
so far the only function i know of that was compiled incorrectly is
getrlimit.c, and naturally the bug only applies to shared (PIC)
builds, but it may be more extensive and may have gone undetected..
DECIMAL_DIG is not the same as LDBL_DIG
type_DIG is the maximum number of decimal digits that can survive a
round trip from decimal to type and back to decimal.
DECIMAL_DIG is the minimum number of decimal digits required in order
for any floating point type to survive the round trip to decimal and
back, and it is generally larger than LDBL_DIG. since the exact
formula is non-trivial, and defining it larger than necessary may be
legal but wasteful, just define the right value in bits/float.h.
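for example, assuming the x86 80-bit long double (64-bit mantissa), the
values work out as:

    #define LDBL_DIG    18  /* floor((64-1) * log10(2)) */
    #define DECIMAL_DIG 21  /* ceil(1 + 64 * log10(2))  */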
the old abi was intended to duplicate glibc's abi at the expense of
being ugly and slow, but it turns out glibc was not even using that abi
except on non-gcc-compatible compilers (which it doesn't even support)
and was instead using an exceptions-in-c/unwind-based approach whose
abi we could not duplicate anyway without nasty dwarf2/unwind
integration.
the new abi is copied from a very old glibc abi, which seems to still
be supported/present in current glibc. it avoids all unwinding,
whether by sjlj or exceptions, and merely maintains a linked list of
cleanup functions to be called from the context of pthread_exit. i've
taken some care to ensure that longjmp out of a cleanup function should
work, even though it is not required to.
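the shape of the abi is roughly the following (a sketch; the struct,
field, and function names here are made up):

    /* each pthread_cleanup_push opens a block, places a record on the
     * stack, and links it onto a per-thread list; pop (or pthread_exit)
     * unlinks it and calls the function */
    struct cleanup {
        void (*fn)(void *);
        void *arg;
        struct cleanup *next;
    };

    void __register_cleanup(struct cleanup *, void (*)(void *), void *);
    void __unregister_cleanup(struct cleanup *, int run);

    #define pthread_cleanup_push(f, x) \
        do { struct cleanup __cb; __register_cleanup(&__cb, (f), (x));
    #define pthread_cleanup_pop(run) \
        __unregister_cleanup(&__cb, (run)); } while (0)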
this change breaks abi compatibility with programs which were using
pthread cancellation, which is unfortunate, but that's why i'm making
the change now rather than later. considering that most pthread
features have not been usable until recently anyway, i don't see it as
a major issue at this point.
wchar_t is a keyword in c++ (wtf). i'm not sure this is the cleanest
solution; it might be better to avoid ever defining __NEED_wchar_t on
c++. but in any case, this works for now.
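concretely, the guard amounts to something like this (sketch; the
underlying type is arch-specific):

    #if defined(__NEED_wchar_t) && !defined(__cplusplus)
    typedef int wchar_t;  /* in c++ it's a builtin keyword, no typedef */
    #endif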
actually this is just to avoid gcc being stupid and refusing to inline
the function version, even when the size cost is essentially identical
whether it's inlined or not.
the arm syscall abi requires 64-bit arguments to be aligned on an even
register boundary. these new macros facilitate meeting the abi
requirement without imposing significant ugliness on the code.
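the macros are along these lines (sketch with made-up names): one
splits a 64-bit value into two syscall words, and the other prepends a
padding word so the pair starts on an even register:

    /* little-endian shown; the two words are swapped on big-endian */
    #define SYSCALL_LL(x) \
        (long)(x), (long)((unsigned long long)(x) >> 32)
    #define SYSCALL_LL_PAD(x) 0, SYSCALL_LL(x)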
really wchar_t should never vary, but the ARM EABI defines it as an
unsigned 32-bit int instead of a signed one, and gcc follows this
nonsense. thus, to give a conformant environment, we have to follow
(otherwise L""[0] and L'\0' would be 0U rather than 0, but the
application would be unaware due to a mismatched definition for
WCHAR_MIN and WCHAR_MAX, and Bad Things could happen with respect to
signed/unsigned comparisons, promotions, etc.).
fortunately no rules are imposed by the C standard on the relationship
between wchar_t and wint_t, and WEOF has type wint_t, so we can still
make wint_t always-signed and use -1 for WEOF.
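in terms of visible definitions this works out to roughly (sketch):

    /* arm eabi: wchar_t is unsigned 32-bit, so the limits must match */
    #define WCHAR_MIN 0U
    #define WCHAR_MAX 0xffffffffU
    /* wint_t stays signed on all archs, so WEOF can simply be -1 */
    #define WEOF ((wint_t)-1)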
this port assumes eabi calling conventions, eabi linux syscall
convention, and presence of the kernel helpers at 0xffff0f?0 needed
for threads support. otherwise it makes very few assumptions, and the
code should work even on armv4 without thumb support, as well as on
systems with thumb interworking. the bits headers declare this a
little endian system, but as far as i can tell the code should work
equally well on big endian.
some small details are probably broken; so far, testing has been
limited to qemu/aboriginal linux.
this behavior (opening fds 0-2 for a suid program) is explicitly
allowed (but not required) by POSIX to protect badly-written suid
programs from clobbering files they later open.
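a sketch of the startup check (hypothetical helper; the real code may
detect closed fds differently, e.g. via poll):

    #include <fcntl.h>
    #include <unistd.h>

    /* if running set-id and any of fds 0-2 is closed, open /dev/null
     * on it so later open() calls cannot land on a standard stream */
    static void secure_std_fds(int is_suid)
    {
        if (!is_suid) return;
        for (int fd = 0; fd < 3; fd++)
            if (fcntl(fd, F_GETFD) < 0 && open("/dev/null", O_RDWR) != fd)
                _exit(127);
    }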
this commit does add some cost in startup code, but the availability
of auxv and the security flag will be useful elsewhere in the future.
in particular auxv is needed for static-linked vdso support, which is
still waiting to be committed (sorry nik!)
if gcc decided to move this across a conditional that checks validity
of the thread register, an invalid thread-register-based read could be
performed and raise sigsegv.
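one way to keep the read from being hoisted is to make it an explicit
volatile asm; an i386-flavored sketch (the thread pointer lives at
%gs:0 there):

    static inline void *thread_ptr(void)
    {
        void *p;
        /* volatile: gcc may not move this above the validity check */
        __asm__ __volatile__ ("movl %%gs:0,%0" : "=r"(p));
        return p;
    }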
some notes:
- library search path is hard coded
- x86_64 code is untested and may not work
- dlopen/dlsym is not yet implemented
- relocations in read-only memory won't work
unfortunately traditional i386 practice was to use "long" rather than
"int" for wchar_t, despite the latter being much more natural and
logical. we followed this practice, but it seems some compilers (clang
and maybe certain gcc builds or others too..?) have switched to using
int, resulting in spurious pointer type mismatches when L"..." wide
strings are used. the best solution i could find is to use the
compiler's definition of wchar_t if it exists, and otherwise fall back
to the traditional definition.
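the resulting definition is along these lines (sketch):

    #if defined(__WCHAR_TYPE__)
    typedef __WCHAR_TYPE__ wchar_t;  /* whatever the compiler says */
    #else
    typedef long wchar_t;            /* traditional i386 definition */
    #endif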
there's no point in duplicating this approach on 64-bit archs, as
their only 32-bit type is int.
this slightly cuts down on the degree musl "fights with" gcc, but more
importantly, it fixes a critical bug when gcc inlines a variadic
function and optimizes out the variadic arguments due to noticing that
they were "not used" (by __builtin_va_arg).
we leave the old code in place if __GNUC__ >= 3 is false; it seems
like it might be necessary at least for tinycc support and perhaps if
anyone ever gets around to fixing gcc 2.95.3 enough to make it work..
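the new definitions just defer to the compiler builtins; a sketch of
the __GNUC__ >= 3 branch:

    #if __GNUC__ >= 3
    typedef __builtin_va_list va_list;
    #define va_start(v, l)  __builtin_va_start(v, l)
    #define va_arg(v, t)    __builtin_va_arg(v, t)
    #define va_copy(d, s)   __builtin_va_copy(d, s)
    #define va_end(v)       __builtin_va_end(v)
    #else
    /* old struct/pointer-based definitions kept here */
    #endif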
strictly speaking this and a few other ops should be factored into
asm.h or the file should just be renamed to asm.h, but whatever. clean
it up someday.
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
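as a rough illustration of the mechanism (struct, field, and symbol
names below are all made up, and the x86_64 ucontext access is just for
concreteness; it is not the actual implementation):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <ucontext.h>

    /* hypothetical per-thread record the syscall asm fills in before
     * the syscall instruction and clears after it returns */
    struct cp_ctx {
        volatile uintptr_t sp;  /* stack pointer at the syscall site */
        volatile uintptr_t ip;  /* address of the syscall instruction */
        volatile sig_atomic_t cancel_requested;
    };
    extern struct cp_ctx __cp_ctx;  /* really thread-local in practice */

    static void cancel_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        uintptr_t ip = (uintptr_t)uc->uc_mcontext.gregs[REG_RIP];
        uintptr_t sp = (uintptr_t)uc->uc_mcontext.gregs[REG_RSP];

        /* cancellable only if we interrupted the registered frame and
         * the syscall instruction has not executed yet; otherwise the
         * request stays pending until the next cancellation point */
        if (__cp_ctx.cancel_requested && sp == __cp_ctx.sp && ip <= __cp_ctx.ip) {
            /* act on the cancellation, e.g. by redirecting the saved
             * ip to the cancellation exit path before returning */
        }
    }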
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.