unlike the previous definition, NSIG/_NSIG is supposed to be one more
than the highest signal number. adding this will allow simplifying
libc-internal code that makes signal-related syscalls, which can be
done as a later step. some apps might use it too; while this usage is
questionable, it's at least not insane.
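for illustration, a minimal sketch (not the libc-internal code) of the
kind of loop this enables; it assumes _NSIG is visible from signal.h
under the usual feature test macros, and the function name is made up:

  #define _GNU_SOURCE
  #include <signal.h>

  /* _NSIG is one past the highest signal number, so it works directly
   * as an exclusive loop bound over all signal numbers. */
  void add_all_signals(sigset_t *set)
  {
      sigemptyset(set);
      for (int i = 1; i < _NSIG; i++)
          sigaddset(set, i); /* may fail for reserved rt signals; ignored */
  }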
also handle the non-GNUC case, where the alignment attribute is not
available, by simply omitting it. this will not cause problems except
for inclusion of mcontext_t/ucontext_t in application-defined structures,
since the natural alignment of the uc_mcontext member relative to the
start of ucontext_t is already correct. and shame on whoever designed
this for making it impossible to satisfy the ABI requirements without
GNUC extensions.
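the pattern, sketched with made-up member names rather than the real
mcontext_t contents:

  /* the member that needs extra alignment gets the attribute only
   * under GNU C; otherwise it is simply omitted. */
  typedef struct {
      unsigned long gregs[48];
      double fpregs[33];
  #ifdef __GNUC__
      __attribute__((__aligned__(16)))
  #endif
      unsigned vregs[34][4];
  } fake_mcontext_t;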
it's essential to decrement the stack pointer before writing to new
stack space, rather than afterwards. otherwise there is a race
condition during which asynchronous code (signals) could clobber the
data being stored.
it may be possible to optimize the code further using stwu, but I
wanted to avoid making any changes to the actual stack layout in this
commit. further improvements can be made separately if desired.
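the ordering constraint can be sketched in C with a software stand-in
for the stack pointer (the real change is a register update in the
asm, not C; names here are illustrative):

  extern char *sp; /* stands in for the downward-growing stack pointer;
                      anything below it may be clobbered by a signal */

  void push_unsafe(long v)
  {
      *(long *)(sp - sizeof v) = v; /* BAD: slot is still below sp */
      sp -= sizeof v;               /* a signal in between wipes it */
  }

  void push_safe(long v)
  {
      sp -= sizeof v;  /* reserve the space first */
      *(long *)sp = v; /* slot is now covered by sp, so it is safe */
  }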
apparently some other archs have sys/io.h and should not break just
because they don't have the x86 port io functions. provide a blank
bits/io.h everywhere for now.
based on proposal by Isaac Dunham. nonexistence of bits/io.h will
cause inclusion of sys/io.h to produce an error on archs that are not
supposed to have it. this is probably the desired behavior, but the
error message may be a bit unusual.
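roughly, the intended layout is the following (hypothetical sketch,
not the actual headers):

  /* include/sys/io.h */
  #include <bits/io.h> /* archs without a bits/io.h fail right here */
  /* ... declarations shared by all archs that do have sys/io.h ... */

  /* arch/somearch/bits/io.h: intentionally blank on an arch that has
     sys/io.h but none of the x86 port io functions */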
put some macros that do not differ between architectures in the
main header and remove them from bits.
restructure the mips header so it has the same structure as the others.
similar to exp.c cleanup: use scalbnf, don't return excess precision,
drop some optimizations.
exp.c was changed to be more consistent with expf.c code.
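the scaling step follows the usual pattern, sketched here with
made-up names rather than the actual expf internals:

  #include <math.h>

  /* after reducing x to k*ln2 + r, the final exp(r)*2^k is produced
   * with scalbnf instead of multiplying by a hand-constructed power
   * of two; scalbnf also gets the overflow/underflow of the scaling
   * step right. */
  static float scale_result(float exp_r, int k)
  {
      return scalbnf(exp_r, k);
  }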
fortunately the memory corruption could not hurt anything, but it
prevented clearing the final newline and thus prevented the last path
element from working.
priority inheritance is not yet supported, and priority protection
probably will not be supported ever unless there's serious demand for
it (it's a fairly heavy-weight feature).
per-thread cpu clocks would be nice to have, but to my knowledge linux
is still not capable of supporting them. glibc fakes them by using the
_process_ cpu-time clock and subtracting the thread creation time,
which gives seriously incorrect semantics (worse than not supporting
the feature at all), so until there's a way to do it right, it will
remain as a stub that always fails.
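so portable callers should treat both features as optional and check
for failure; an illustrative sketch using only standard POSIX calls
(nothing here is musl-specific):

  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <time.h>

  int main(void)
  {
      pthread_mutexattr_t a;
      clockid_t cid;
      int r;

      pthread_mutexattr_init(&a);
      r = pthread_mutexattr_setprotocol(&a, PTHREAD_PRIO_INHERIT);
      if (r) fprintf(stderr, "PI mutexes unavailable: %s\n", strerror(r));

      r = pthread_getcpuclockid(pthread_self(), &cid);
      if (r) fprintf(stderr, "thread cpu clock unavailable: %s\n", strerror(r));
      else printf("thread cpu clock id: %ld\n", (long)cid);

      pthread_mutexattr_destroy(&a);
      return 0;
  }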
overflow and underflow were incorrect when the result was not stored.
an optimization for the 0.5*ln2 < |x| < 1.5*ln2 domain was removed.
did various cleanups around static constants and made the comments
consistent with the code.
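the failure mode is the usual excess-precision one: if the
overflowing/underflowing product is only evaluated and never stored
in a float object, the flags may not be raised and the wider value is
kept. a minimal illustration, assuming x87-style evaluation (this is
not the libm source):

  /* huge*huge and tiny*tiny fit in the wider evaluation format, so
   * nothing is raised until the value is forced into a float. */
  static const float huge = 0x1p127f, tiny = 0x1p-126f;

  float raise_overflow(void)
  {
      volatile float r = huge*huge; /* store rounds to float: +inf, overflow */
      return r;
  }

  float raise_underflow(void)
  {
      volatile float r = tiny*tiny; /* rounds to 0: underflow, inexact */
      return r;
  }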
incomplete but at least partly working. requires all files to be
compiled in the new "secure" plt model, not the old one that put plt
code in the data segment. TLS is untested but may work. invoking the
dynamic linker explicitly to load a program does not yet handle argv
correctly.
although a number is reserved for it, this option is not implemented
on Linux and does not work. defining it causes some applications to
use it and subsequently break due to its failure.
the volatile hack in STRICT_ASSIGN is only needed if
assignment is not respected and excess precision is kept.
gcc -fexcess-precision=standard and -ffloat-store both
respect assignment, and musl uses these flags by default.
I kept the macro for now so the workaround may be used
for bad compilers in the future.
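for reference, the macro amounts to the usual volatile-store idiom,
roughly like this (the exact definition may differ):

  #define STRICT_ASSIGN(type, lval, rval) do { \
      volatile type __tmp = (rval);            \
      (lval) = __tmp;                          \
  } while (0)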
old code was correct only if the result was stored (without the
excess precision) or musl was compiled with -ffloat-store.
now we use STRICT_ASSIGN to work around the issue.
(see note 160 in c11 section 6.8.6.4)
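an illustrative use of the pattern (made-up function, not the actual
source): the sum is forced through a float object before being
returned, since a plain return is allowed to keep the excess
precision:

  float f(float x, float y)
  {
      float z;
      STRICT_ASSIGN(float, z, x + y);
      return z;
  }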
old code was correct only if the result was stored (without the
excess precision) or musl was compiled with -ffloat-store.
(see note 160 in n1570.pdf section 6.8.6.4)