aio threads not using SIGEV_THREAD notification are created with small
stacks and no guard page, which is possible since they only run the
code for the requested io operation, not any application code. the
motivation is to avoid creating a large number of VMAs. however, the io thread needs
to be able to receive a cancellation signal in case aio_cancel
(implemented via pthread_cancel) is called. this requires sufficient
stack space for a signal frame, which PTHREAD_STACK_MIN does not
necessarily include.
in principle MINSIGSTKSZ from signal.h should give us sufficient space
for a signal frame, but the value is incorrect on some existing archs
due to kernel addition of new vector register support without
consideration for impact on ABI. some powerpc models exceed
MINSIGSTKSZ by about 0.5k, and x86[_64] with AVX-512 can exceed it by
up to about 1.5k. so use MINSIGSTKSZ+2048 to allow for the discrepancy
plus some working space.
unfortunately, it's possible that signal frame sizes could continue to
grow, and some archs (aarch64) explicitly specify that they may.
passing of a runtime value for MINSIGSTKSZ via AT_MINSIGSTKSZ in the
aux vector was added to aarch64 linux, and presumably other archs will
use this mechanism to report if they further increase the signal frame
size. when AT_MINSIGSTKSZ is present, assume it's correct, so that we
only need a small amount of working space in addition to it; in this
case just add 512.
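
as a rough sketch (not the actual aio code), the sizing policy described
above could look like the following, assuming the signal-frame allowance
is simply added on top of PTHREAD_STACK_MIN and that AT_MINSIGSTKSZ may
be missing from older headers:

    #include <stddef.h>
    #include <signal.h>
    #include <limits.h>
    #include <sys/auxv.h>

    /* illustrative only */
    static size_t io_thread_stack_size(void)
    {
        size_t sigframe = MINSIGSTKSZ + 2048;  /* known discrepancies + working space */
    #ifdef AT_MINSIGSTKSZ
        size_t v = getauxval(AT_MINSIGSTKSZ);
        if (v) sigframe = v + 512;             /* kernel-reported value is trusted */
    #endif
        return PTHREAD_STACK_MIN + sigframe;
    }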
new in linux commit 76b7f670730e87974f71df9f6129811e2769666e
in struct signalfd_siginfo the pad member is changed to __pad to keep
the namespace clean; it's not part of the public api.
add UDP_NO_CHECK6_* to restrict zero UDP6 checksums, new in linux commit
1c19448c9ba6545b80ded18488a64a7f3d8e6998 (pre-v4.18 change, was missed)
add UDP_SEGMENT to support generic segmentation offload for udp datagrams,
new in linux commit bec1f6f697362c5bc635dacd7ac8499d0a10a4e7 (v4.18)
add packet delivery info to tcp_info,
new in linux commit feb5f2ec646483fb66f9ad7218b1aad2a93a2a5c
add TCP_ZEROCOPY_RECEIVE socket option for zerocopy receive,
new in linux commit 05255b823a6173525587f29c4e8f1ca33fd7677d
add TCP_INQ socket option and TCP_CM_INQ cmsg type to get in-queue bytes
via cmsg upon read (see the usage sketch below),
new in linux commit b75eba76d3d72e2374fac999926dafef2997edd2
add TCP_REPAIR_* values from the repair socket window probe fix,
new in linux commit 31048d7aedf31bf0f69c54a662944632f29d82f2
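
as a usage illustration for TCP_INQ/TCP_CM_INQ (not from the musl tree;
error handling omitted), the option is enabled with setsockopt and the
in-queue byte count then arrives as a control message on recvmsg:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    /* assumes fd is an already-connected TCP socket */
    static ssize_t recv_with_inq(int fd, void *buf, size_t len, int *inq)
    {
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_INQ, &one, sizeof one);

        char cbuf[CMSG_SPACE(sizeof(int))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr mh = { .msg_iov = &iov, .msg_iovlen = 1,
                             .msg_control = cbuf, .msg_controllen = sizeof cbuf };
        ssize_t n = recvmsg(fd, &mh, 0);

        for (struct cmsghdr *c = CMSG_FIRSTHDR(&mh); c; c = CMSG_NXTHDR(&mh, c))
            if (c->cmsg_level == IPPROTO_TCP && c->cmsg_type == TCP_CM_INQ)
                memcpy(inq, CMSG_DATA(c), sizeof *inq);
        return n;
    }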
commit b9410061e2 inadvertently omitted
optopt from the "dynamic list", causing it to be split into separate
objects that don't share their value if the main program contains a
copy relocation for it (for non-PIE executables that access it, and
some PIE ones, depending on arch and toolchain versions/options).
first, the condition (mem && k < p) is redundant, because mem being
nonzero implies the needle is periodic with period exactly p, in which
case any byte that appears in the needle must appear in the last p
bytes of the needle, bounding the shift (k) by p.
second, the whole point of replacing the shift k by mem (=l-p) is to
prevent shifting by less than mem when discarding the memory on shift,
in which case linear time could not be guaranteed. but as written, the
check also replaced shifts greater than mem by mem, needlessly
shortening the advance. since mem is being cleared anyway, the full
shift is valid and more optimal, so only replace the shift by mem when
it would be less than mem.
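
expressed as code, with the names used above (k the computed shift,
mem = l-p), the corrected rule is just a clamp from below; this only
illustrates the rule, not the surrounding two-way machinery:

    #include <stddef.h>

    /* illustrative only: shift to apply when discarding the memory */
    static size_t memory_shift(size_t k, size_t mem)
    {
        return k < mem ? mem : k;   /* only raise short shifts up to mem */
    }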
commit ddc947eda3 fixed the
corresponding bug for exit, which was introduced when commit
0b80a7b040 added support for
caller-provided buffers, making it possible for stderr to be a
buffered stream.
fflush(NULL) and __stdio_exit lock individual FILEs while holding the
open file list lock to walk the list. since fclose first locked the
FILE to be closed, then the ofl lock, it could deadlock with these
functions.
also, because fclose removed the FILE to be closed from the open file
list before flushing and closing it, a concurrent fclose or exit could
complete successfully before fclose flushed the FILE it was closing,
resulting in data loss.
reorder the body of fclose to first flush and close the file, then
remove it from the open file list only after unlocking it. this
creates a window where consumers of the open file list can see dead
FILE objects, but in the absence of undefined behavior on the part of
the application, such objects will be in an inactive-buffer state and
processing them will have no side effects.
__unlist_locked_file is also moved so that it's performed only for
non-permanent files. this change is not necessary, but preserves
consistency (and thereby provides safety/hardening) in the case where
an application uses one of the standard streams after closing it while
holding an explicit lock on it. such usage is of course undefined
behavior.
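
a simplified sketch of the reordered body, assuming musl-style internals
(stdio_impl.h helpers and FILE fields); the details here are
illustrative, not an exact copy of the source:

    int fclose(FILE *f)
    {
        int r;

        /* flush and close first, under the FILE lock only */
        FLOCK(f);
        r = fflush(f);
        r |= f->close(f);
        FUNLOCK(f);

        /* past this point the FILE is dead; further application use is UB */

        if (f->flags & F_PERM) return r;   /* permanent (std) streams stay listed */

        __unlist_locked_file(f);

        /* only now take the open file list lock and unlink the dead FILE */
        FILE **head = __ofl_lock();
        if (f->prev) f->prev->next = f->next;
        if (f->next) f->next->prev = f->prev;
        if (*head == f) *head = f->next;
        __ofl_unlock();

        free(f->getln_buf);
        free(f);
        return r;
    }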
Use "+r" in the asm instead of implementing a non-transparent copy by
applying "0" constraint to the source value. Introduce a typedef for
the function type to avoid spelling it out twice.
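
a generic illustration of the difference between the two constraint
styles (hypothetical names, not the actual fragment):

    typedef void thunk_ft(void *);

    static thunk_ft *barrier(thunk_ft *fn)
    {
        /* old form: a separate output tied to the input via the "0"
         * constraint, i.e. an explicit non-transparent copy:
         *     __asm__ ( "" : "=r"(fn) : "0"(fn) );
         * new form: a single read-write operand */
        __asm__ ( "" : "+r"(fn) );
        return fn;
    }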
commit aeeac9ca54 introduced fail-safe
invariants that creating a locale_t object for the C locale or C.UTF-8
locale will always succeed. extend the guarantee to also cover the
following:
- newlocale(LC_ALL_MASK, "", 0)
- newlocale(LC_ALL_MASK-LC_CTYPE_MASK, "C", 0)
provided that the LANG/LC_* environment variables have not been
changed by the program. these usages are idiomatic for getting the
default locale, and for getting a locale that behaves as the C locale
except for honoring the default locale's character encoding.
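
for reference, the two idioms look like this in application code (usage
sketch only, assuming the environment has not been altered):

    #include <locale.h>

    static locale_t default_locale(void)
    {
        return newlocale(LC_ALL_MASK, "", (locale_t)0);
    }

    static locale_t c_locale_with_default_charset(void)
    {
        return newlocale(LC_ALL_MASK-LC_CTYPE_MASK, "C", (locale_t)0);
    }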
unify the code paths for allocated and non-allocated locale objects,
always using a tmp object. this is necessary to avoid clobbering the
base locale object too soon if we allow for the possibility that
looking up an explicitly requested locale name may fail, and makes the
code simpler and cleaner anyway.
eliminate the complex and fragile logic for checking whether one of
the non-allocated locale objects can be used for the result, and
instead just memcmp against each of them.
commit 63c188ec42 missed making this
change when switching from atomics to locking for modification of the
global locale, leaving access to locale structures unnecessarily
burdened with the restrictions of volatile.
the volatile qualification was originally added in commit
56fbaa3bbe.
introduce a new LOC_MAP_FAILED sentinel for errors, since null
pointers for a category's locale map indicate the C locale. at this
time, __get_locale does not fail, so there should be no functional
change by this commit.
the choice of signed char for lbf was a theoretically space-saving hack
that did not actually help, and was needlessly expensive. while
comparing bytes against a byte-sized member sounds easy, the trick
here was that the byte to be compared was unsigned while the lbf
member was signed, making it possible to set lbf negative to disable
line buffering. however, this imposed a requirement to promote both
operands, zero-extending one and sign-extending the other, in order to
compare them.
to fix this, repurpose the waiters count slot (unused since commit
c21f750727). while we're at it, switch
mode (orientation) from signed char to int as well. this makes no
semantic difference (its only possible values are -1, 0, and 1) but it
might help on archs where byte access is awkward.
to check whether flush due to line buffering is needed, the int-type
character argument must be truncated to unsigned char for comparison.
if the original value is subsequently passed to __overflow, it must be
preserved, adding to register pressure. since __overflow only uses the
value as an unsigned char anyway, truncate all uses so the original
value is no longer live.
the internal putc_unlocked macro was wrongly returning a meaningless
boolean result rather than the written character or EOF.
bug was found by reading (very surprising) asm.
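
the corrected fast path has roughly the following shape, shown with
musl-style field names (wpos/wend write buffer cursors, lbf the
line-buffering byte) as an illustration rather than the exact source;
note both the truncation and the non-boolean result:

    /* store the truncated byte and yield it when it's not the line-buffer
     * byte and there is room; otherwise defer to __overflow, which returns
     * the written character or EOF */
    #define putc_unlocked(c, f) \
        ( ((unsigned char)(c) != (f)->lbf && (f)->wpos != (f)->wend) \
          ? *(f)->wpos++ = (unsigned char)(c) : __overflow((f), (c)) )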
check whether the lock is free before loading the calling thread's
tid. if so, just use a dummy tid value that cannot compare equal to
any actual thread id (because it's one bit wider). this also avoids
the need to save the tid and pass it to locking_getc or locking_putc,
reducing register pressure.
this change might slightly hurt the case where the caller already
holds the lock, but it does not affect the single-threaded case, and
may significantly improve the multi-threaded case, especially on archs
where loading the thread pointer is disproportionately expensive like
early mips and arm ISA levels. but even on i386 it helps, at least on
some machines; I measured roughly a 10-15% improvement.
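
a structural sketch of this check (the lock encoding, MAYBE_WAITERS
value and field names are stand-ins, not musl's definitions; the slow
path is sketched further below):

    #include <stdatomic.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define MAYBE_WAITERS 0x40000000

    struct stream { _Atomic int lock; const unsigned char *rpos, *rend; };

    static int self_tid(void) { return (int)syscall(SYS_gettid); }

    static int getc_unlocked_(struct stream *f)
    {
        return f->rpos < f->rend ? *f->rpos++ : -1;
    }

    static int locking_getc(struct stream *f);   /* out-of-line slow path */

    static inline int do_getc(struct stream *f)
    {
        int l = atomic_load_explicit(&f->lock, memory_order_relaxed);
        /* load the tid only when the lock word is nonzero; otherwise use a
         * dummy one bit wider than any real tid so the compare cannot match */
        int tid = l ? self_tid() : 0x40000000;
        if ((l & ~MAYBE_WAITERS) == tid)
            return getc_unlocked_(f);   /* caller already holds the lock */
        return locking_getc(f);         /* slow path takes the lock itself */
    }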
this is not needed for correctness, but doesn't hurt, and without a
prototype the compiler may in some cases pessimize the call on the
assumption that the callee might be variadic.
commit 4390383b32 inadvertently used "r"
instead of "0" for the input constraint, which only happened to work
for the configuration I tested it on because it usually makes sense
for the compiler to choose the same input and output register.
by ABI, the public stdin/out/err macros use extern pointer objects,
and this is necessary to avoid copy relocations that would be
expensive and make the size of the FILE structure part of the ABI.
however, internally it makes sense to access the underlying FILE
objects directly. this avoids both an indirection through the GOT to
find the address of the stdin/out/err pointer objects (which can't be
computed PC-relative because they may have been moved to the main
program by copy relocations) and an indirection through the resulting
pointer object.
in most places this is just a minor optimization, but in the case of
getchar and putchar (and the unlocked versions thereof), ipa constant
propagation makes all accesses to members of stdin/out PC-relative or
GOT-relative, possibly reducing register pressure as well.
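
schematically, with hypothetical internal names, the split between the
public ABI view and the internal view is:

    #include <stdio.h>

    /* public ABI (unchanged): an extern pointer object, roughly
     *     extern FILE *const stdout;
     * internal view: a hidden FILE object addressed directly, avoiding both
     * the GOT load of the pointer object and the pointer dereference */
    extern __attribute__((__visibility__("hidden"))) FILE __stdout_FILE;
    #undef stdout
    #define stdout (&__stdout_FILE)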
with these changes, in a program that has not created any threads
besides the main thread and that has not called f[try]lockfile, getc
performs indistinguishably from getc_unlocked. this was measured on
several i386 and x86_64 models, and should hold on other archs too
simply by the properties of the code generation.
the case where the caller already holds the lock (via flockfile) is
improved significantly as well (40-60% reduction in time on machines
tested) and the case where locking is needed is improved somewhat
(roughly 10%).
the key technique used here is forcing the non-hot path out-of-line
and enabling it to be a tail call. a static noinline function
(conditional on __GNUC__) is used rather than the extern hiddens used
elsewhere for this purpose, so that the compiler can choose
non-default calling conventions, making it possible to tail-call to a
callee that takes more arguments than the caller on archs where
arguments are passed on the stack or must have space reserved on the
stack for spilling them. the tid could just be reloaded via the thread
pointer in locking_getc, but that would be ridiculously expensive on
some archs where thread pointer load requires a trap or syscall.
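
continuing the sketch from the lock-check entry above, the slow path can
be written as a static noinline function so the hot caller simply
tail-calls it; futex wait/wake is replaced by a spin here for brevity:

    #ifdef __GNUC__
    __attribute__((__noinline__))
    #endif
    static int locking_getc(struct stream *f)
    {
        int tid = self_tid();                 /* reloaded here; this path is cold */
        for (int e = 0; !atomic_compare_exchange_weak(&f->lock, &e, tid); e = 0)
            ;                                 /* spin; the real code futex-waits */
        int c = getc_unlocked_(f);
        atomic_store(&f->lock, 0);            /* the real code also wakes waiters */
        return c;
    }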
on multiple occasions I've started to flatten/inline the code in
__init_libc, only to rediscover the reason it was not inlined: GCC
fails to deallocate its stack (and now, with the changes in commit
4390383b32, fails to produce a tail call
to the stage 2 function; see PR #87639) before calling main if it was
inlined.
document this with a comment and use an explicit noinline attribute if
__GNUC__ is defined so that even with CFLAGS that heavily favor
inlining it won't get inlined.
this is the analog of commit 1c84c99913
for static linking. unlike with dynamic linking, we don't have
symbolic lookup to use as a barrier. use a dummy (target-agnostic)
degenerate inline asm fragment instead. this technique has precedent
in commit 05ac345f89 where it's used for
explicit_bzero. if it proves problematic in any way, loading the
address of the stage 2 function from a pointer object whose address
leaks to kernelspace during thread pointer init could be used as an
even stronger barrier.
this will allow the compiler to cache and reuse the result, meaning we
no longer have to take care not to load it more than once for the sake
of archs where the load may be expensive.
depends on commit 1c84c99913 for
correctness, since otherwise the compiler could hoist loads during
stage 3 of dynamic linking before the initial thread-pointer setup.
revert commit a603a75a72.
as a result of commit 1c84c99913 this is
now safe, assuming an interpretation of the somewhat-underspecified
attribute((const)) consistent with real-world usage.
commit a603a75a72 removed attribute
const from __errno_location and pthread_self, and the same reasoning
forced arch definitions of __pthread_self to use volatile asm,
significantly impacting code generation and imposing manual caching of
pointers where the impact might be noticeable.
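
for instance, the errno accessor follows this pattern, letting the
compiler fold repeated errno accesses into a single call (sketch of the
declaration, not necessarily verbatim):

    #ifdef __GNUC__
    __attribute__((const))
    #endif
    int *__errno_location(void);
    #define errno (*__errno_location())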
reorder the thread pointer setup and place it across a strong barrier
(symbolic function lookup) so that there is no assumed ordering
between the initialization and the accesses to the thread pointer in
stage 3.
fma is only available on recent x86_64 cpus and is much faster than a
software fma, so ideally this would be selected with a runtime check;
however, that requires more changes. this patch just adds the code so it
can be tested when musl is compiled with -mfma or -mfma4.
vfma is available in the vfpv4 fpu and above, the ACLE standard feature
test for double precision hardware fma support is
__ARM_FEATURE_FMA && __ARM_FP&8
we need further checks to work around clang bugs (fixed in clang >=7.0)
&& !__SOFTFP__
because __ARM_FP is defined even with -mfloat-abi=soft
&& !BROKEN_VFP_ASM
to disable the single precision code when inline asm handling is broken.
For runtime selection the HWCAP_ARM_VFPv4 hwcap flag can be used, but
that requires further work.
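
putting the pieces together, the compile-time gate for the double
precision code could be written as follows (hypothetical result macro;
BROKEN_VFP_ASM additionally gates the single precision code):

    #if __ARM_FEATURE_FMA && (__ARM_FP & 8) && !__SOFTFP__
    #define HAVE_HW_FMA 1
    #else
    #define HAVE_HW_FMA 0
    #endif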
previously (before and after rewrite), spurious escaping of path
separators as \/ was not treated the same as /, but rather got split
as an unpaired \ at the end of the fnmatch pattern and an unescaped /,
resulting in a mismatch/error.
for the case of \/ as part of the maximal literal prefix, remove the
explicit rejection of it and move the handling of / below escape
processing.
for the case of \/ after a proper glob pattern, it's hard to parse the
pattern, so don't. instead cheat and count repetitions of \ prior to
the already-found / character. if there are an odd number, the last is
escaping the /, so back up the split position by one. now the
char clobbered by null termination is variable, so save it and restore
as needed.
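
concretely, the counting trick can be sketched as follows (illustrative
names, not the actual glob code; z points at the '/' already found
within the pattern p0):

    static char *split_point(char *p0, char *z)
    {
        char *q = z;
        while (q > p0 && q[-1] == '\\') q--;
        if ((z - q) % 2) z--;   /* odd count: the last backslash escapes the '/' */
        return z;
    }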
this code has been long overdue for a rewrite, but the immediate cause
that necessitated it was total failure to see past unreadable path
components. for example, A/B/* would fail to match anything, even
though it should succeed, when both A and A/B are searchable but only
A/B is readable. this problem was both caught in conformance testing
and impacted real users.
the old glob implementation insisted on searching the listing of each
path component for a match, even if the next component was a literal.
it also used considerable stack space, up to the length of the pattern,
per recursion level, and relied on an artificial PATH_MAX bound on the
pattern length, which was incorrect because a pattern can be much
longer than PATH_MAX while still having matches shorter than that (for
example, with necessarily long bracket expressions, or with redundancy).
in the new implementation, each level of recursion starts by consuming
the maximal literal (possibly escaped-literal) path prefix remaining
in the pattern, and only opening a directory to read when there is a
proper glob pattern in the next path component. it then recurses into
each matching entry. the top-level glob function provides automatic
storage (up to PATH_MAX) for construction of candidate/result strings,
and allocates a duplicate of the pattern that can be modified in-place
with temporary null-termination to pass to fnmatch. this allocation is
not a big deal since glob already has to perform allocation, and already
has to be able to free everything it has allocated in order to clean up
if it experiences an allocation failure or other error after some
results have already been allocated.
care is taken to use the d_type field from iterated dirents when
possible; stat is called only when there are literal path components
past the last proper-glob component, or when needed to disambiguate
symlinks for the purpose of GLOB_MARK.
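
a sketch of the kind of d_type use described, with a stat fallback for
unknown or symlink entries (hypothetical helper, not the actual glob
code):

    #include <dirent.h>
    #include <sys/stat.h>

    static int entry_is_dir(int fd, const struct dirent *de)
    {
        if (de->d_type != DT_UNKNOWN && de->d_type != DT_LNK)
            return de->d_type == DT_DIR;   /* trust the dirent when possible */
        struct stat st;
        if (fstatat(fd, de->d_name, &st, 0)) return 0;
        return S_ISDIR(st.st_mode);        /* fall back to stat, following links */
    }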
one peculiarity with the new implementation is the manner in which the
error handling callback will be called. if attempting to match */B/C/D
where a directory A exists that is inaccessible, the error reported
will be a stat error for A/B/C/D rather than (previous and wrong
implementation) an opendir error for A, or (likely on other
implementations) a stat error for A/B. such behavior does not seem to
be non-conforming, but if it turns out to be undesirable for any
reason, backtracking could be done on error to report the first
component producing it.
also, redundant slashes are no longer normalized, but preserved as
they appear in the pattern; this is probably more correct, and falls
out naturally from the algorithm used. since trailing slashes (which
force all matches to be directories) are preserved as well, the
behavior of GLOB_MARK has been adjusted not to append an additional
slash to results that already end in slash.
commit 6ba5517a46 modified
__tls_get_addr to offset the address by +DTP_OFFSET (0x8000 on
powerpc, mips, etc.) and adjusted the result of DTPREL relocations by
-DTP_OFFSET to compensate, but missed changing the argument setup for
calls to __tls_get_addr from dlsym.