the old idiom, f->mode |= f->mode+1, was adapted from the idiom for
setting byte orientation, f->mode |= f->mode-1, but the adaptation was
incorrect. unless the stream was already set byte-oriented, this code
incremented f->mode each time it was executed, which would eventually
lead to overflow. it could be fixed by changing it to f->mode |= 1,
but upcoming changes will require slightly more work at the time of
wide orientation, so it makes sense to just call fwide. as an
optimization in the single-character functions, fwide is only called
if the stream is not already wide-oriented.
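as an illustration, the corrected logic in the single-character
functions is roughly the following (f->mode is the internal
orientation field, positive meaning wide-oriented):

    /* old, broken: grows f->mode on every call unless the stream
     * was already byte-oriented (mode < 0) */
    f->mode |= f->mode+1;

    /* new: delegate to fwide, skipping the call when the stream
     * is already wide-oriented */
    if (f->mode <= 0) fwide(f, 1);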
this can be used to put off writing an asm version of __unmapself for
new archs, or as a permanent solution on archs where it's not
practical or even possible to run momentarily with no stack.
the concept here is simple: the caller takes a lock on a global shared
stack and uses it to make the munmap and exit syscalls. the only trick
is unlocking, which must be done after the thread exits, and this is
achieved by using the set_tid_address syscall to have the kernel zero
and futex-wake the lock word as part of the exit syscall.
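a simplified sketch of the mechanism, using the public syscall()
wrapper (the stack switch itself is arch-specific asm and omitted
here; all names are illustrative, not musl's actual internals):

    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static volatile int lock;       /* zeroed/woken by the kernel at exit */
    static char shared_stack[512];  /* the global shared stack */

    void unmapself_sketch(void *base, size_t size)
    {
        /* take the lock on the shared stack; 0 is FUTEX_WAIT */
        while (__sync_lock_test_and_set((int *)&lock, 1))
            syscall(SYS_futex, &lock, 0, 1, 0);
        /* have the kernel zero and futex-wake the lock word as
         * part of this thread's exit */
        syscall(SYS_set_tid_address, &lock);
        /* switch onto shared_stack here (arch-specific), then: */
        syscall(SYS_munmap, base, size);
        syscall(SYS_exit, 0);
    }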
the linux/nommu fdpic ELF loader sets up the brk range to overlap
entirely with the main thread's stack (but growing from opposite
ends), so that the resulting failure mode for malloc is not to return
a null pointer but to start returning pointers to memory that overlaps
with the caller's stack. needless to say, this is extremely dangerous and
makes brk unusable.
since it's non-trivial to detect execution environments that might be
affected by this kernel bug, and since the severity of the bug makes
any sort of detection that might yield false-negatives unsafe, we
instead check the proximity of the brk to the stack pointer each time
the brk is to be expanded. both the main thread's stack (where the
real known risk lies) and the calling thread's stack are checked. an
arbitrary gap distance of 8 MB is imposed, chosen to be larger than
linux default main-thread stack reservation sizes and larger than any
reasonable stack configuration on nommu.
the effectiveness of this patch relies on the assumption that the amount
by which the brk is being grown is smaller than the gap limit, which
is always true for malloc's use of brk. reliance on this assumption is
why the check is being done in malloc-specific code and not in __brk.
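the check can be sketched as follows, approximating the main
thread's stack by a pointer known to lie within it (musl uses the
location of the aux vector) and the calling thread's stack by the
address of a local variable:

    #include <stdint.h>

    #define GAP (8<<20) /* 8 MB, above default stack reservations */

    /* nonzero if growing the brk from old to new would come
     * within GAP below either stack */
    static int traverses_stack(uintptr_t old, uintptr_t new,
                               uintptr_t main_stack_addr)
    {
        uintptr_t a, b;
        b = main_stack_addr;            /* main thread's stack */
        a = b > GAP ? b-GAP : 0;
        if (new > a && old < b) return 1;
        b = (uintptr_t)&b;              /* calling thread's stack */
        a = b > GAP ? b-GAP : 0;
        if (new > a && old < b) return 1;
        return 0;
    }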
for several pwd/grp functions, the only way the caller can distinguish
between a successful negative result ("no such user/group") and an
internal error is by clearing errno before the call and checking errno
afterwards. the nscd backend support code correctly simulated a
not-found response on systems where such a backend is not running, but
failed to restore errno.
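for reference, the caller-side idiom this preserves:

    #include <errno.h>
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        errno = 0;
        struct passwd *pw = getpwnam("alice");
        if (!pw && errno) perror("getpwnam"); /* internal error */
        else if (!pw) puts("no such user");   /* negative result */
        return 0;
    }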
this commit also fixes an outdated/incorrect comment.
the arm atomics/TLS runtime selection code is called from
__set_thread_area and depends on having libc.auxv and __hwcap
available. commit 71f099cb7d moved the
first call to __set_thread_area to the top of dynamic linking stage 3,
before this data is made available, causing the runtime detection code
to always see __hwcap as zero and thereby select the atomics/TLS
implementations based on kuser helper.
upcoming work on superh will use similar runtime detection.
ideally this early-init code should be cleanly refactored and shared
between the dynamic linker and static-linked startup.
unless/until the byte-based C locale is implemented, defining
MB_CUR_MAX to 1 in the C locale is wrong. no internal code currently
uses the MB_CUR_MAX macro, but having it defined inconsistently is
error-prone. applications get the value from stdlib.h and were
unaffected.
aside from being invalid, the early check only optimized the error
case, and likely pessimized the common case by separating the
two branches on isascii(c) at opposite ends of the function.
commit f3ddd17380 inadvertently removed
the early check for "none" type relocations, causing the address
dso->base+0 to be dereferenced to obtain an addend. shared libraries
(including libc.so) and PIE executables were unaffected, since their
base addresses are the actual address of their mappings and are
readable. non-PIE main executables, however, have a base address of 0
because their load addresses are absolute and not offset at load time.
in practice none-type relocations do not arise with toolchains that
are in use except on mips, and on mips it's moderately rare for a
non-PIE executable to have a relocation table, since the mips-specific
got processing serves in its place for most purposes.
these functions were written to handle clearing eof status, but failed
to account for the __toread function's handling of eof. with this
patch applied, __toread still returns EOF when the file is in eof
status, so that read operations will fail, but it also sets up valid
buffer pointers for read mode, which are set to the end of the buffer
rather than the beginning in order to make the whole buffer available
to ungetc/ungetwc.
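in terms of the internal FILE fields, the eof-status case of
__toread now ends roughly as:

    /* keep read mode valid but the buffer empty, positioned at
     * the end so ungetc/ungetwc can back up through all of it */
    f->rpos = f->rend = f->buf + f->buf_size;
    return (f->flags & F_EOF) ? EOF : 0;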
minor changes to __uflow were needed since it's now possible to have
non-zero buffer pointers while in eof status. as made, these changes
remove a 'fast path' bypassing the function call to __toread, which
could be reintroduced with slightly different logic, but since
ordinary files have a syscall in f->read, optimizing the code path
does not seem worthwhile.
the __stdio_read function is also updated not to zero the read buffer
pointers on eof/error. while not necessary for correctness, this
change avoids the overhead of calling __toread in ungetc after
reaching eof, and it also reduces code size and increases consistency
with the fmemopen read operation which does not zero the pointers.
some compilers (such as clang) accept unknown options without error,
but then print warnings on each invocation, cluttering the build
output and burying meaningful warnings. this patch makes configure's
tryflag and tryldflag functions use additional options to turn the
unknown-option warnings into errors, if available, but only at check
time. these options are not output in config.mak to avoid the risk of
spurious build breakage; if they work, they will have already done
their job at configure time.
this frees applications which need to make temporary use of the C
locale (via uselocale) from the possibility that newlocale might fail.
the C.UTF-8 locale is also provided as a static locale. presently they
behave the same, but this may change in the future.
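a typical use of the guarantee, temporarily switching to the C
locale around a locale-sensitive operation:

    #include <locale.h>

    void with_c_locale(void (*fn)(void))
    {
        /* cannot fail: "C" resolves to a static locale object */
        locale_t c = newlocale(LC_ALL_MASK, "C", (locale_t)0);
        locale_t old = uselocale(c);
        fn();
        uselocale(old);
        freelocale(c);
    }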
previously, LC_MESSAGES was treated specially as the only category
which could be set to a locale name without a definition file, in
order to facilitate gettext message translations when no libc locale
was available. LC_NUMERIC was completely un-settable, and LC_CTYPE
stored a flag intended to be used for a possible future byte-based C
locale, instead of storing a __locale_map pointer like the other
categories use.
this patch changes all categories to be represented by pointers to
__locale_map structures, and allows locale names without definition
files to be treated as valid locales with trivial definition when used
in any category. outwardly visible functional changes should be minor,
limited mainly to the strings read back from setlocale and the way
gettext handles translations in categories other than LC_MESSAGES.
various internal refactoring has also been performed, and improvements
in const correctness have been made.
this is part of a general program of removing direct use of atomics
where they are not necessary to meet correctness or performance needs,
but in this case it's also an optimization. only the global locale
needs synchronization; allocated locales referenced with locale_t
handles are immutable during their lifetimes, and using atomics to
initialize them increases their cost of setup.
static-linked PIE files need startup code to relocate themselves, much
like the dynamic linker does. rcrt1.c reuses the code in dlstart.c,
stage 1 of the dynamic linker, which in turn reuses crt_arch.h, to
achieve static PIE with no new code. only relative relocations are
supported.
existing toolchains that don't yet support static PIE directly can be
repurposed by passing "-shared -Wl,-Bstatic -Wl,-Bsymbolic" instead of
"-static -pie" and substituting rcrt1.o in place of crt1.o.
all libraries being linked must be built as PIC/PIE; TEXTRELs are not
supported at this time.
commit de2b67f8d4 attempted to avoid
having vis.h affect crt files, but the Makefile variable used,
CRT_LIBS, refers to the final output copies in the lib directory, not
the copies in the crt build directory, and thus the -DCRT was not
applied.
while unlikely to be noticed, this regression probably broke
production of PIE executables whose main functions are not in the
executable itself but rather in a shared library.
commit f3ddd17380 introduced early
relocations and subsequent reprocessing as part of the dynamic linker
bootstrap overhaul, to allow use of arbitrary libc functions before
the main application and libraries are loaded, but only reprocessed
GOT/PLT relocation types.
commit c093e2e820 added reprocessing of
non-GOT/PLT relocations to fix an actual regression that was observed
on powerpc, but only for RELA format tables with out-of-line addends.
REL table (inline addends at the relocation address) reprocessing is
trickier because the first relocation pass clobbers the addends.
this patch extends symbolic relocation reprocessing for libc/ldso to
support all relocation types, whether REL or RELA format tables are
used. it is believed not to alter behavior on any existing archs for
the current dynamic linker and libc code. the motivations for this
change are consistency and future-proofing. it ensures that behavior
does not differ depending on whether REL or RELA tables are used,
which could lead to undetected arch-specific bugs. it also ensures
that, if in the future code depending on additional relocation types
is added to libc.so, either at the source level or as part of the
compiler runtime that gets pulled in (for example, soft-float with TLS
for fenv), the new code will work properly.
the implementation concept is simple: stage 2 of the dynamic linker
counts the number of symbolic relocations in the libc/ldso REL table
and allocates a VLA to save their addends into; stage 3 then uses the
saved addends in place of the inline ones which were clobbered. for
stack safety, a hard limit (currently 4k) is imposed on the number of
such addends; this should be a couple orders of magnitude larger than
the actual need. this number is not a runtime variable that could
break fail-safety; it is constant for a given libc.so build.
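in outline (helper names here are illustrative, not musl's actual
internals):

    #include <stddef.h>
    #include <elf.h>

    #define ADDEND_LIMIT 4096

    int is_symbolic(int type); /* arch-specific classification */

    /* stage 2: stash the addends of symbolic REL relocations
     * before the first pass clobbers them; stage 3 reads them
     * back in place of the inline values */
    static size_t save_addends(unsigned char *base, Elf32_Rel *rel,
                               size_t n, size_t *addends)
    {
        size_t i, j;
        for (i=j=0; i<n; i++)
            if (is_symbolic(ELF32_R_TYPE(rel[i].r_info)))
                addends[j++] = *(size_t *)(base + rel[i].r_offset);
        return j; /* must not exceed ADDEND_LIMIT */
    }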
this move eliminates a duplicate "by-hand" symbol lookup loop from the
stage-1 code and replaces it with a call to find_sym, which can be
used once we're in stage 2. it reduces the size of the stage 1 code,
which is helpful because stage 1 will become the crt start file for
static-PIE executables, and it will allow stage 3 to access stage 2's
automatic storage, which will be important in an upcoming commit.
the outer-loop approach made sense when we were also processing
DT_JMPREL, which might be in REL or RELA form, to avoid major code
duplication. commit 09db855b35 removed
processing of DT_JMPREL, and in the remaining two tables, the format
(REL or RELA) is known by the name of the table. simply writing two
versions of the loop results in smaller and simpler code.
the DT_JMPREL relocation table necessarily consists entirely of
JMP_SLOT (REL_PLT in internal nomenclature) relocations, which are
symbolic; they cannot be resolved in stage 1, so there is no point in
processing them.
the instruction used to align the stack, "and $sp, $sp, -8", does not
actually exist; it's expanded to 2 instructions using the 'at'
(assembler temporary) register, and thus cannot be used in a branch
delay slot. since alignment mod 8 commutes with subtracting 16 (a
multiple of 8), simply swapping the order of these two operations
fixes the problem.
crt1.o was not affected because it's still being generated from a
dedicated asm source file. dlstart.lo was not affected because the
stack pointer it receives is already aligned by the kernel. but
Scrt1.o was affected in cases where the dynamic linker gave it a
misaligned stack pointer.
i386 and x86_64 versions already had the .text directive; other archs
did not. normally, top-level (file scope) __asm__ starts in the .text
section anyway, but problems were reported with some versions of
clang, and it seems preferable to set it explicitly, at least for
the sake of consistency between archs.
while not a requirement, it's common convention in other iconv
implementations to accept "CHAR" as an alias for nl_langinfo(CODESET),
meaning the encoding used for char[] strings in the current locale,
and also "" as an alternate form. supporting this is not costly and
improves compatibility.
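for example, after this change both of the following request
conversion into the current locale's char encoding:

    #include <iconv.h>

    int main(void)
    {
        iconv_t a = iconv_open("CHAR", "ISO-8859-1");
        iconv_t b = iconv_open("", "ISO-8859-1"); /* alternate form */
        if (a != (iconv_t)-1) iconv_close(a);
        if (b != (iconv_t)-1) iconv_close(b);
        return 0;
    }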
conceptually, and on other archs, these functions take a pointer to
int, but in the i386, x86_64, and x32 versions of atomic.h, they took
a pointer to void instead.
If we're building for sh4a, the compiler is already free to use
instructions only available on sh4a, so we can do the same and inline the
llsc atomics. If we're building for an older processor, we still do the
same runtime atomics selection as before.
this fixes a regression on powerpc that was introduced in commit
f3ddd17380. global data accesses on
powerpc seem to be using a translation-unit-local GOT filled via
R_PPC_ADDR32 relocations rather than R_PPC_GLOB_DAT. being a non-GOT
relocation type, these were not reprocessed after adding the main
application and its libraries to the chain, causing libc code not to
see copy relocations in the main program, and therefore to use the
pre-copy-relocation addresses for global data objects (like environ).
the motivation for the dynamic linker only reprocessing GOT/PLT
relocation types in stage 3 is that these types always have a zero
addend, making them safe to process again even if the storage for the
addend has been clobbered. other relocation types which can be used
for address constants in initialized data objects may have non-zero
addends which will be clobbered during the first pass of relocation
processing if they're stored inline (REL form) rather than out-of-line
(RELA form).
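for reference, the two table-entry formats, as declared in elf.h:

    /* REL: the addend is stored inline at r_offset in the data
     * being relocated, so applying the relocation clobbers it */
    typedef struct {
        Elf32_Addr r_offset;
        Elf32_Word r_info;
    } Elf32_Rel;

    /* RELA: the addend is carried out of line and survives */
    typedef struct {
        Elf32_Addr r_offset;
        Elf32_Word r_info;
        Elf32_Sword r_addend;
    } Elf32_Rela;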
powerpc generally uses only RELA, so this patch is sufficient to fix
the regression in practice, but is not fully general, and would not
suffice if an alternate toolchain generated REL for powerpc.
if setlocale has not been called, the current locale's messages_name
may be a null pointer. the code path where it's assumed to be non-null
was only reachable if bindtextdomain had already been called, which is
normally not done in programs which do not call setlocale, so the
omitted check went unnoticed.
patch from Void Linux, with description rewritten.
the code being removed used atomics to track whether any threads might
be using a locale other than the current global locale, and whether
any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a
feature which was never committed (still pending). the motivations
were to support early execution prior to setup of the thread pointer,
to partially support systems (ancient kernels) where thread pointer
setup is not possible, and to avoid high performance cost on archs
where accessing the thread pointer may be very slow.
since commit 19a1fe670a, the thread
pointer is always available, so these hacks are no longer needed.
removing them greatly simplifies the affected code.
commit f630df09b1 added logic to handle
the case where __set_thread_area is called more than once by reusing
the GDT slot already in the %gs register, and only setting up a new
GDT slot when %gs is zero. this created a hidden assumption that %gs
is zero when a new process image starts, which is true in practice on
Linux, but does not seem to be documented ABI, and fails to hold under
qemu app-level emulation.
while it would in theory be possible to zero %gs in the entry point
code, this code is shared between static and dynamic binaries, and
dynamic binaries must not clobber the value of %gs already set up by
the dynamic linker.
the alternative solution implemented in this commit simply uses global
data to store the GDT index that's selected. __set_thread_area should
only be called in the initial thread anyway (subsequent threads get
their thread pointer set up by __clone), but even if it were called by
another thread, it would simply read and write back the same GDT index
that was already assigned to the initial thread, and thus (in the x86
memory model) there is no data race.
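a sketch of the resulting logic (i386-specific; struct user_desc
comes from the kernel headers, the remaining names are
illustrative):

    #include <asm/ldt.h>      /* struct user_desc */
    #include <sys/syscall.h>
    #include <unistd.h>

    static int gdt_index = -1; /* global data instead of reading %gs */

    static int set_thread_area_sketch(void *base)
    {
        struct user_desc desc = {
            .entry_number = gdt_index, /* -1: kernel allocates */
            .base_addr = (unsigned long)base,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
            .useable = 1,
        };
        if (syscall(SYS_set_thread_area, &desc) < 0) return -1;
        gdt_index = desc.entry_number; /* reused on later calls */
        /* caller loads %gs with gdt_index*8+3 (asm, omitted) */
        return 0;
    }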
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the instructions this patch omits in thumb mode are needed only for
non-thumb versions of armv4 or earlier, which are not supported by any
current compilers/toolchains and thus rather pointless to have. at
some point these compatibility return sequences may be removed from
all asm source files, and in that case it would make sense to remove
them here too and remove the ifdef.
compilers targeting armv7 may be configured to produce thumb2 code
instead of arm code by default, and in the future we may wish to
support targets where only the thumb instruction set is available.
the changes made here avoid operating directly on the sp register,
which is not possible in thumb code, and address an issue with the way
the address of _DYNAMIC is computed.
previously, the relative address of _DYNAMIC was stored with an
additional offset of -8 versus the pc-relative add instruction, since
on arm the pc register evaluates to ".+8". in thumb code, it instead
evaluates to ".+4". both are two (normal-size) instructions beyond "."
in the current execution mode, so the numbered label 2 used in the
relative address expression is simply moved two instructions ahead to
be compatible with both instruction sets.
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack a TLS canary ABI) the
choice does not matter.
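the arrangement can be sketched as follows (simplified to two
slots; field and macro names are modeled on the description above,
not necessarily the exact internals):

    #include <stdint.h>

    /* pthread_impl.h: canary storage at each candidate location */
    struct pthread_sketch {
        struct pthread_sketch *self;
        uintptr_t canary;        /* default location */
        /* ... other fields ... */
        uintptr_t canary_at_end; /* alternate ABI location */
    };

    /* pthread_arch.h, on an arch whose ABI uses the alternate slot: */
    #define CANARY canary_at_end

    /* pthread_impl.h fallback when the arch defines nothing: */
    #ifndef CANARY
    #define CANARY canary
    #endif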