when the "r" (register) constraint is used to let gcc choose a
register, gcc will sometimes assign the same register that was used
for one of the other fixed-register operands, if it knows the values
are the same. one common case is multiple zero arguments to a syscall.
this horribly breaks the intended usage, which is swapping the GOT
pointer from ebx into the temp register and back to perform the
syscall.
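for illustration, a hedged sketch of the problematic pattern (i386; not the actual musl macro):

    /* a1 is parked in a gcc-chosen temp and swapped with ebx (the
       GOT pointer) around the syscall. if gcc knows a1 == a2
       (e.g. both are 0), it may satisfy the "r" constraint with
       ecx, and the xchg then clobbers the argument held in ecx. */
    static inline long syscall2(long n, long a1, long a2)
    {
        long ret;
        __asm__ volatile (
            "xchg %%ebx,%2 ; int $0x80 ; xchg %%ebx,%2"
            : "=a"(ret)
            : "a"(n), "r"(a1), "c"(a2)
            : "memory");
        return ret;
    }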
presumably there is a way to fix this with advanced usage of register
constraints on the inline asm, but having bad memories of hellish
compatibility issues with different gcc versions, for the time being
i'm just going to hard-code specific registers to be used. this may
hurt the compiler's ability to optimize, but it will fix serious
miscompilation issues.
so far the only affected function i know of is in getrlimit.c, and
naturally the bug only applies to shared (PIC) builds, but it may be
more extensive and may have gone undetected.
the buffer in getaddrinfo really only matters when /etc/hosts is huge,
but in that case, the huge number of syscalls resulting from a tiny
buffer would seriously impact the performance of every name lookup.
the buffer in __dns.c has also been enlarged a bit so that typical
resolv.conf files will fit fully in the buffer. there's no need to
make it so large as to dominate the syscall overhead for large files,
because resolv.conf should never be large.
special care is taken to avoid any inexact computations when either arg
is zero (in which case the exact absolute value of the other arg
should be returned) and to support the special condition that
hypot(±inf,nan) yields inf.
hypotl is not yet implemented since avoiding overflow is nontrivial.
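for reference, a hedged C sketch of the special cases described above
(hypothetical name; the real code also rescales to avoid overflow in
the sum):

    #include <math.h>

    double hypot_sketch(double x, double y)
    {
        x = fabs(x);
        y = fabs(y);
        /* exact, inexact-free results when either arg is zero */
        if (x == 0) return y;
        if (y == 0) return x;
        /* test inf before nan so hypot(+-inf,nan) yields inf */
        if (isinf(x) || isinf(y)) return INFINITY;
        if (isnan(x) || isnan(y)) return NAN;
        return sqrt(x*x + y*y); /* naive, overflow-prone core */
    }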
the error status is required to be sticky after failure of dlopen or
dlsym until cleared by dlerror. applications and especially libraries
should never rely on this since it is not thread-safe and subject to
race conditions, but glib does anyway.
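for illustration, a hedged sketch of the sticky semantics (the library
names are just examples):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        void *bad = dlopen("libdoesnotexist.so", RTLD_NOW); /* fails */
        void *ok = dlopen("libm.so.6", RTLD_NOW); /* assume this succeeds */
        /* despite the intervening successful dlopen, the earlier
           failure is still reported until dlerror() clears it */
        if (!bad)
            printf("%s\n", dlerror());
        if (ok) dlclose(ok);
        return 0;
    }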
the old formula atan2(1,sqrt((1+x)/(1-x))) was faster but could give
a nan result at x=1 when the rounding mode is FE_DOWNWARD (so
1-1 == -0 and 2/-0 == -inf); the new formula gives -0 at x=+-1 with
downward rounding.
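the hazard is easy to reproduce with a small standalone program (a
hedged demo, compile with -lm):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        fesetround(FE_DOWNWARD);
        volatile double x = 1.0; /* volatile blocks constant folding */
        /* 1-x is -0 in downward rounding, (1+x)/(1-x) is -inf,
           sqrt(-inf) is nan, and atan2(1,nan) is nan */
        printf("%f\n", atan2(1, sqrt((1+x)/(1-x))));
        return 0;
    }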
DECIMAL_DIG is not the same as LDBL_DIG
type_DIG is the maximum number of decimal digits that can survive a
round trip from decimal to type and back to decimal.
DECIMAL_DIG is the minimum number of decimal digits required in order
for any floating point type to survive the round trip to decimal and
back, and it is generally larger than LDBL_DIG. since the exact
formula is non-trivial, and defining it larger than necessary may be
legal but wasteful, just define the right value in bits/float.h.
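a hedged illustration, assuming x86 with 80-bit long double (64-bit
mantissa):

    #include <float.h>

    /* LDBL_DIG    = floor((64-1) * log10(2)) = 18
       DECIMAL_DIG = ceil(1 + 64 * log10(2))  = 21 */
    _Static_assert(LDBL_DIG == 18, "round-trip digits of long double");
    _Static_assert(DECIMAL_DIG == 21, "digits to represent any long double");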
this has not been tested heavily, but it's known to at least assemble
and run in basic usage cases. it's nearly identical to the
corresponding i386 code, and thus expected to be just as correct or
just as incorrect.
the main practical results of this change are
1. the regex code is no longer subject to LGPL; it's now 2-clause BSD
2. most (all?) popular nonstandard regex extensions are supported
I hesitate to call this a "sync" since both the old and new code are
heavily modified. in one sense, the old code was "more severely"
modified, in that it was actively hostile to non-strictly-conforming
expressions. on the other hand, the new code has eliminated the
useless translation of the entire regex string to wchar_t prior to
compiling, and now only converts multibyte character literals as
needed.
in the future i may use this modified TRE as a basis for writing the
long-planned new regex engine that will avoid multibyte-to-wide
character conversion entirely by compiling multibyte bracket
expressions specific to UTF-8.
the old code saved and restored the entire fenv; the new code is only
as slow as that in the rare case where inexact is not set before the
call, but some other flag is set and the rounding is inexact.
before:
bench_nearbyint_exact 5000000 N 261 ns/op
bench_nearbyint_inexact_set 5000000 N 262 ns/op
bench_nearbyint_inexact_unset 5000000 N 261 ns/op
after:
bench_nearbyint_exact 10000000 N 94.99 ns/op
bench_nearbyint_inexact_set 25000000 N 65.81 ns/op
bench_nearbyint_inexact_unset 10000000 N 94.97 ns/op
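for reference, a C sketch of the technique described above
(hypothetical name):

    #include <fenv.h>
    #include <math.h>

    double nearbyint_sketch(double x)
    {
        /* instead of saving/restoring the whole fenv, test inexact
           up front and clear it afterwards only if it was not
           already set */
        int e = fetestexcept(FE_INEXACT);
        x = rint(x);
        if (!e)
            feclearexcept(FE_INEXACT);
        return x;
    }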
the fscale instruction is slow everywhere, probably because it
involves a costly and unnecessary integer truncation operation that
ends up being a no-op in common usages. instead, construct a floating
point scale value with integer arithmetic and simply multiply by it,
when possible.
for float and double, this is always possible by going to the
next-larger type. we use some cheap but effective saturating
arithmetic tricks to make sure even very large-magnitude exponents
fit. for long double, if the scaling exponent is too large to fit in
the exponent of a long double value, we simply fall back to the
expensive fscale method.
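a hedged C sketch of the core idea (the commit itself changes asm):

    #include <stdint.h>

    /* build the scale factor 2^n directly in a double's exponent
       field and multiply, instead of using fscale. assumes n has
       already been saturated into the normal range [-1022,1023]. */
    static double mul_pow2(double x, int n)
    {
        union { double f; uint64_t i; } u;
        u.i = (uint64_t)(0x3ff + n) << 52;
        return x * u.f;
    }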
on atom cpu, these changes speed up scalbn by over 30%. (min rdtsc
timing dropped from 110 cycles to 70 cycles.)
exponents (base 2) near 16383 were broken due to (1) wrong cutoff, and
(2) inability to fit the necessary range of scalings into a long
double value.
as a solution, we fall back to using frndint/fscale for insanely large
exponents, and also have to special-case infinities here to avoid
inf-inf generating nan.
thankfully the costly code never runs in normal usage cases.
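a hedged C illustration of the infinity hazard (the actual fix is in
asm; the name is hypothetical):

    #include <math.h>

    long double exp2l_sketch(long double x)
    {
        /* without this early return, the reduction below would
           compute inf - inf = nan for infinite x */
        if (isinf(x))
            return x < 0 ? 0 : x; /* 2^-inf == +0, 2^+inf == +inf */
        long double i = rintl(x);
        long double f = x - i; /* safe: x is finite here */
        return scalblnl(exp2l(f), (long)i); /* assumes i fits in long */
    }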
zero, one, two, half are replaced by const literals
The policy was to use the f suffix for float consts (1.0f), but no
suffix for long double consts (these consts can be exactly
represented as double).
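a short before/after illustration of the policy:

    /* before: named constants */
    static const float onef = 1.0f;
    static const long double half = 0.5;

    /* after: plain literals; f suffix for float, no suffix for
       long double when the value is exact as a double */
    float inc(float x) { return x + 1.0f; }
    long double halve(long double x) { return x * 0.5; }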