apparently the original kernel commit's i386 version of siginfo.h
defined this field as unsigned int, but the asm-generic file always
had void *. unsigned int is obviously not a suitable type for an
address, in a non-arch-specific file, and glibc also has void * here,
so I think void * is the right type for it.
also fix redundant type specifiers.
linux commit 8d36eb01da5d371feffa280e501377b5c450f5a5 (2013-05-29)
added PF_IB for InfiniBand
linux commit d021c344051af91f42c5ba9fdedc176740cbd238 (2013-02-06)
added PF_VSOCK for VMware sockets
linux commit a0727e8ce513fe6890416da960181ceb10fbfae6 (2012-04-12)
added siginfo fields for SIGSYS (seccomp uses it)
linux commit ad5fa913991e9e0f122b021e882b0d50051fbdbc (2009-09-16)
added siginfo field and si_code values for SIGBUS (hwpoison signal)
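for reference, a sketch of the user-visible additions implied by the kernel
commits above; the numeric values here are assumptions to be checked against
the kernel's uapi headers rather than taken as authoritative:

    #define PF_IB     27        /* infiniband (assumed value) */
    #define PF_VSOCK  40        /* vmware vm sockets (assumed value) */
    #define AF_IB     PF_IB
    #define AF_VSOCK  PF_VSOCK

    /* hwpoison SIGBUS si_code values (assumed values) */
    #define BUS_MCEERR_AR 4     /* action required */
    #define BUS_MCEERR_AO 5     /* action optional */

    /* new siginfo_t members: si_addr_lsb (SIGBUS), and si_call_addr,
       si_syscall, si_arch (SIGSYS) */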
this is a cheat since the _l versions take an extra argument, but
since these functions are only here for ABI purposes, it doesn't
really matter as long as the ABI matches. if the non-__-prefixed
versions are eventually made public, they should probably be real
functions rather than hacks like this.
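a minimal sketch of the kind of cheat meant here, with illustrative names
rather than the actual musl source: the _l symbol is just an alias for the
plain function, and the extra locale argument is never examined, which is
fine as long as only the ABI has to match (gcc may warn about the mismatched
prototypes, and that mismatch is exactly the cheat):

    typedef struct __demo_locale *demo_locale_t;

    int demo_strcoll(const char *l, const char *r)
    {
        for (; *l && *l == *r; l++, r++);
        return (unsigned char)*l - (unsigned char)*r;
    }

    /* the "_l" ABI symbol: same code, extra argument ignored */
    int demo_strcoll_l(const char *, const char *, demo_locale_t)
        __attribute__((alias("demo_strcoll")));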
unlike the strftime commit, this one is purely an ABI compatibility
issue. the previous version of the code would have worked just as well
with LC_TIME once LC_TIME support is added.
based on a patch by orc, with indexing and flow control cleaned up a
little bit. this code is all going to be replaced at some point in the
near future.
these are needed for some C++ library binaries including most builds
of libstdc++. I'm not entirely clear on the rationale. this patch does
not implement any special semantics for them, but as far as I can
tell, no special treatment is needed in correctly-linked programs;
this binding seems to exist only for catching incorrectly-linked
programs.
this is necessary to meet the C++ ABI target. alternatives were
considered to avoid the size increase for non-sig jmp_buf objects, but
they seemed to have worse properties. moreover, the relative size
increase is only extreme on x86[_64]; one way of interpreting this is
that, if the size increase from this patch makes jmp_buf use too much
memory, then the program was already using too much memory when built
for non-x86 archs.
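the shape of the change is roughly the following; member names and the
128-byte mask reservation are illustrative, not necessarily musl's exact
definitions:

    typedef unsigned long long demo__jmp_buf[8];   /* arch register save area */

    typedef struct demo__jmp_buf_tag {
        demo__jmp_buf __jb;
        unsigned long __fl;                    /* "signal mask was saved" flag */
        unsigned long __ss[128/sizeof(long)];  /* room for the saved mask */
    } demo_jmp_buf[1], demo_sigjmp_buf[1];

with one shared tag and size, the C++ mangling and object layout match the
target ABI; the cost is that every jmp_buf carries the mask reservation,
which is proportionally largest on x86[_64] where the register save area is
small.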
this bug was caught by the new footer-corruption check in realloc and
free.
if the block returned by malloc was already aligned to the desired
alignment, memalign's logic to split off the misaligned head was
incorrect; rather than writing to a point inside the allocated block,
it was overwriting the footer of the previous block on the heap with
the value 1 (length 0 plus an in-use flag).
fortunately, the impact of this bug was fairly low. (this is probably
why it was not caught sooner.) due to the way the heap works, malloc
will never return a block whose previous block is free. (doing so would
be harmful because it would increase fragmentation with no benefit.)
the footer is actually not needed for in-use blocks, except that its
in-use bit needs to remain set so that it does not get merged with
free blocks, so there was no harm in it being set to 1 instead of the
correct value.
however, there is one case where this bug could have had an impact: in
multi-threaded programs, if another thread freed the previous block
after memalign's call to malloc returned, but before memalign
overwrote the previous block's footer, the resulting block in the free
list could be left in a corrupt state. I have not analyzed the impact
of this bad state and whether it could lead to more serious
malfunction.
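the essential fix is a guard of this shape (a sketch of the condition only,
not a working allocator):

    #include <stddef.h>
    #include <stdint.h>

    /* split off a misaligned head only when there actually is one; when
       mem is already aligned, nothing is written below it, so the
       previous block's footer is never touched */
    static int must_split_head(void *mem, size_t align)
    {
        return ((uintptr_t)mem & (align - 1)) != 0;
    }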
the motivation for this patch is that the vast majority of libc is
code that does not benefit at all from optimizations, but that certain
components like string/memory operations can be major performance
bottlenecks.
at the same time, the old -falign-*=1 options are removed, since they
were only beneficial for avoiding bloat when global -O3 was used, and
in that case, they may have prevented some of the performance gains.
to be most useful, this patch will need further tuning. in
particular, research is needed to determine which components should be
built with -O3 by default, and it may be desirable to remove the
hard-coded -O3 and instead allow more customization of the
optimization level used for selected modules.
this patch is something of a compromise for a compatibility
regression discovered after the header refactoring: libtiff uses
_Int64 for its own purposes. this is absolutely wrong, invalid C, and
should not be supported, but it's also frustrating for users when code
that used to work suddenly breaks.
rather than leave the breakage in place or change musl internals to
accommodate broken software, I've found a change that makes the
problem go away and improves musl. by undefining these macros at the
end of alltypes.h, the temptation to use them in other headers is
removed. (for example, I almost used _Int64 in sys/types.h to define
u_int64_t rather than adding it back to alltypes.h.) by confining use
of these macros to alltypes.h, we keep it easy to go back and change
the implementation of alltypes later, if needed.
during the header refactoring, I had moved u_int64_t out of alltypes
under the assumption that we could just use long long everywhere.
however, it seems some broken applications make inconsistent mixed use
of u_int64_t and uint64_t, resulting in build errors when the
underlying type differs.
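the resulting shape of alltypes.h is roughly this (a sketch, not its literal
contents):

    #define _Addr  int        /* arch-provided pointer-sized type */
    #define _Int64 long long  /* arch-provided 64-bit type */

    /* ...all the type definitions that need the macros, including... */
    typedef unsigned _Int64 u_int64_t;

    /* ...and then, at the end of alltypes.h, the macros go away so no
       other header can be tempted to use them */
    #undef _Addr
    #undef _Int64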
rather than moving nlink_t back to the arch-specific file, I've added
a macro _Reg defined to the canonical type for register-size values on
the arch. this is not the same as _Addr for (not-yet-supported)
32-on-64 pseudo-archs like x32 and mips n32, so a new macro was
needed.
for regoff_t, it's impossible to match on 64-bit archs because glibc
defined the type in a non-conforming way. however this change makes
the type match on 32-bit archs.
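as an illustration, with hypothetical values for a 32-on-64 arch like x32:
_Addr tracks pointer width while _Reg tracks register width, and nlink_t
(and, per this change, regoff_t) follow _Reg:

    #define _Addr int        /* 32-bit pointers */
    #define _Reg  long long  /* 64-bit registers */

    typedef unsigned _Reg nlink_t;
    typedef _Reg regoff_t;

on an ordinary 32-bit arch both macros would typically expand to int, so
nothing changes there.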
since the old, poorly-thought-out musl approach to init/fini arrays on
ARM (when it was the only arch that needed them) was to put the code
in crti/crtn and have the legacy _init/_fini code run the arrays,
adding proper init/fini array support caused the arrays to get
processed twice on ARM. I'm not sure skipping legacy init/fini
processing is the best solution to the problem, but it works, and it
shouldn't break anything since the legacy init/fini system was never
used for ARM EABI.
aside from the obvious C++ ABI purpose for this change, it also brings
musl into alignment with the compiler's idea of the definition of
wint_t (as used by -Wformat), and makes the situation less awkward on ARM,
where wchar_t is unsigned.
internal code using wint_t and WEOF was checked against this change,
and while a few cases of storing WEOF into wchar_t were found, they
all seem to operate properly with the natural conversion from unsigned
to signed.
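the property being relied on is small enough to state directly (assuming the
usual targets, where the conversions are modular):

    #include <wchar.h>

    int weof_survives_wchar(void)
    {
        wchar_t wc = WEOF;          /* store WEOF into a possibly-signed wchar_t */
        return (wint_t)wc == WEOF;  /* converting back still compares equal */
    }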
the arch-specific bits/alltypes.h.sh has been replaced with a generic
alltypes.h.in and minimal arch-specific bits/alltypes.h.in.
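the generated output presumably keeps the same guarded-typedef pattern the
old scripts produced; one entry, sketched with an illustrative type:

    /* a header that needs a type requests it before including alltypes: */
    #define __NEED_ino_t
    #include <bits/alltypes.h>

    /* and the generated bits/alltypes.h contains, per type, roughly: */
    #if defined(__NEED_ino_t) && !defined(__DEFINED_ino_t)
    typedef unsigned long long ino_t;
    #define __DEFINED_ino_t
    #endif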
this commit is intended to have no functional changes except:
- exposing additional symbols that POSIX allows but does not require
- changing the C++ name mangling for some types
- fixing the signedness of blksize_t on powerpc (POSIX requires signed)
- fixing the limit macros for sig_atomic_t on x86_64
- making dev_t an unsigned type (ABI matching goal, and more logical)
in addition, some types that were wrongly defined with long on 32-bit
archs were changed to int, and vice versa; this change is
non-functional except for the possibility of making pointer types
mismatch, and only affects programs that were using them incorrectly,
and only at build-time, not runtime.
the following changes were made in the interest of moving
non-arch-specific types out of the alltypes system and into the
headers they're associated with, and also will tend to improve
application compatibility:
- netdb.h now includes netinet/in.h (for socklen_t and uint32_t)
- netinet/in.h now includes sys/socket.h and inttypes.h
- sys/resource.h now includes sys/time.h (for struct timeval)
- sys/wait.h now includes signal.h (for siginfo_t)
- langinfo.h now includes nl_types.h (for nl_item)
for the types in stdint.h:
- types which are of no interest to other headers were moved out of
the alltypes system.
- fast types for 8- and 64-bit are hard-coded (at least for now); only
the 16- and 32-bit ones have reason to vary by arch.
and the following types have been changed for C++ ABI purposes:
- mbstate_t now has a struct tag, __mbstate_t
- FILE's struct tag has been changed to _IO_FILE
- DIR's struct tag has been changed to __dirstream
- locale_t's struct tag has been changed to __locale_struct
- pthread_t is defined as unsigned long in C++ mode only
- fpos_t now has a struct tag, _G_fpos64_t
- fsid_t's struct tag has been changed to __fsid_t
- idtype_t has been made an enum type (also required by POSIX)
- nl_catd has been changed from long to void *
- siginfo_t's struct tag has been removed
- sigset_t has been given a struct tag, __sigset_t
- stack_t has been given a struct tag, sigaltstack
- suseconds_t has been changed to long on 32-bit archs
- [u]intptr_t have been changed from long to int rank on 32-bit archs
- dev_t has been made unsigned
summary of tests that have been performed against these changes:
- nsz's libc-test (diff -u before and after)
- C++ ABI check symbol dump (diff -u before, after, glibc)
- grepped for __NEED, made sure types needed are still in alltypes
- built gcc 3.4.6
these functions were mistakenly assumed to be needed to match glibc
ABI, but glibc has them as part of the non-shared part of libc that's
always statically linked into the main program. moreover, the only
place they are referenced from is glibc's crt1.o.
modern (4.7.x and later) gcc uses init/fini arrays, rather than the
legacy _init/_fini function pasting and crtbegin/crtend ctors/dtors
system, on most or all archs. some archs had already switched a long
time ago. without following this change, global ctors/dtors will cease
to work under musl when building with new gcc versions.
the most surprising part of this patch is that it actually reduces the
size of the init code, for both static and shared libc. this is
achieved by (1) unifying the handling of the main program and shared
libraries in the dynamic linker, and (2) eliminating the
glibc-inspired rube goldberg machine for passing around init and fini
function pointers. to clarify, some background:
the function signature for __libc_start_main was based on glibc, as
part of the original goal of being able to run some glibc-linked
binaries. it worked by having the crt1 code, which is linked into
every application, static or dynamic, obtain and pass pointers to the
init and fini functions, which __libc_start_main is then responsible
for using and recording for later use, as necessary. however, in
neither the static-linked nor dynamic-linked case do we actually need
crt1.o's help. with dynamic linking, all the pointers are available in
the _DYNAMIC block. with static linking, it's safe to simply access
the _init/_fini and __init_array_start, etc. symbols directly.
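for the static-linked case, the gist is a loop over the linker-provided
boundary symbols, roughly like this (weak-symbol dummies, the fini side, and
the dynamic-linked case are omitted):

    #include <stdint.h>

    extern void (*const __init_array_start)(void), (*const __init_array_end)(void);
    extern void _init(void) __attribute__((weak));

    static void run_init(void)
    {
        if (_init) _init();   /* legacy _init, if the program has one */
        for (uintptr_t a = (uintptr_t)&__init_array_start;
             a < (uintptr_t)&__init_array_end;
             a += sizeof(void (*)(void)))
            (*(void (**)(void))a)();
    }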
obviously changing the __libc_start_main function signature in an
incompatible way would break both old musl-linked programs and
glibc-linked programs, so let's not do that. instead, the function can
just ignore the information it doesn't need. new archs need not even
provide the useless args in their versions of crt1.o. existing archs
should continue to provide them as long as there is an interest in
having newly-linked applications be able to run on old versions of
musl; at some point in the future, this support can be removed.
for conversion specifiers, alloc is always set when the specifier is
parsed. however, if scanf stops due to mismatching literal text,
either an uninitialized (if no conversions have been performed yet) or
stale (from the previous conversion) value of the flag will be used,
possibly causing an invalid pointer to be passed to free when the
function returns.
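abstracted down to the bug shape (this is not the actual vfscanf code): the
cleanup path consults a per-conversion flag, so the flag has to have a
defined value on every path that can reach that cleanup, including an early
stop on mismatching literal text.

    #include <stdlib.h>

    int demo(const char *fmt, const char *input)
    {
        char *dest = 0;
        int alloc = 0;                         /* the fix: defined before the loop */
        while (*fmt) {
            if (*fmt != '%') {                 /* literal text in the format */
                if (*fmt++ != *input++) break; /* mismatch: stop scanning */
                continue;
            }
            fmt++;                             /* skip '%' */
            alloc = (*fmt == 'm');             /* set while parsing the conversion */
            if (alloc) fmt++;
            if (*fmt) fmt++;                   /* skip the conversion specifier */
            /* ...the conversion itself would run here and may set dest... */
        }
        if (alloc) free(dest);                 /* uninitialized alloc => possible bad free */
        return 0;
    }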
the sizes in the header and footer for a chunk should always match. if
they don't, the program has definitely invoked undefined behavior, and
the most likely cause is a simple overflow, either of a buffer in the
block being freed or the one just below it.
crashing here should not only improve security of buggy programs, but
also aid in debugging, since the crash happens in a context where you
have a pointer to the likely-overflowed buffer.
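the check itself is tiny; a sketch with an illustrative chunk layout (size
recorded in both a header and a footer, low bit used as the in-use flag; not
necessarily musl's exact structures):

    #include <stddef.h>

    struct chunk { size_t psize, csize; };   /* prev block's footer, this block's header */
    #define CHUNK_SIZE(c) ((c)->csize & ~(size_t)1)
    #define NEXT_CHUNK(c) ((struct chunk *)((char *)(c) + CHUNK_SIZE(c)))

    static void check_footer(struct chunk *self)
    {
        /* this chunk's footer is the psize field of the next header; any
           mismatch means something scribbled past the end of a buffer */
        if (NEXT_CHUNK(self)->psize != self->csize)
            __builtin_trap();   /* crash hard rather than continue corrupted */
    }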
while there's no POSIX namespace provision for UIO_* in uio.h, this
exact macro name is reserved in XBD 2.2.2. apparently some
glibc-centric software expects it to exist, so let's provide it.
the main aim of this patch is to ensure that if not all fields are
filled in, they contain zeros, so as not to confuse applications.
reportedly some older kernels, including commonly used openvz kernels,
lack the f_flags field, resulting in applications reading random junk
as the mount flags; the common symptom seems to be wrongly considering
the filesystem to be mounted read-only and refusing to operate. glibc
has some amazingly ugly fallback code to get the mount flags for old
kernels, but having them really is not that important anyway; what
matters most is not presenting incorrect flags to the application.
I have also aimed to fill in some fields of statvfs that were
previously missing, and added code to explicitly zero the reserved
space at the end of the structure, which will make things easier in
the future if this space someday needs to be used.
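the zero-filling part amounts to something like this (a sketch, not the
actual conversion code):

    #include <string.h>
    #include <sys/statvfs.h>

    static void to_statvfs(struct statvfs *out /*, plus the kernel's statfs data */)
    {
        memset(out, 0, sizeof *out);   /* unfilled fields and the reserved
                                          tail now read back as zero */
        /* ...then copy over f_bsize, f_blocks, f_bfree, etc., and set
           f_flag only if the running kernel actually supplied it... */
    }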
this change is both to fix one of the remaining type (and thus C++
ABI) mismatches with glibc/LSB and to allow use of the full range of
uid and gid values, if so desired.
passwd/group access functions were not prepared to deal with unsigned
values, so they too have been fixed with this commit.
prior to this change, using a non-default syslibdir was impractical on
systems where the ordinary library paths contain musl-incompatible
library files. the file containing search paths was always taken from
/etc, which would either correspond to a system-wide musl
installation, or fail to exist at all, resulting in searching of the
default library path.
the new search strategy is safe even for suid programs because the
pathname used comes from the PT_INTERP header of the program being
run, rather than any external input.
as part of this change, I have also begun differentiating the names of
arch variants that differ by endianness or floating point calling
convention. the corresponding changes in the build system and gcc
wrapper script (to use an alternate dynamic linker name) for these
configurations have not yet been made.
POSIX is not clear on whether it includes the terminating null, but ISO C
requires that it does. the whole concept of this macro is rather
useless, but it's better to be correct anyway.
this is both a minor scheduling optimization and a workaround for a
difficult-to-fix bug in qemu app-level emulation.
from the scheduling standpoint, it makes no sense to schedule the
parent thread again until the child has exec'd or exited, since the
parent will immediately block again waiting for it.
on the qemu side, as regular application code running on an underlying
libc, qemu cannot make arbitrary clone syscalls itself without
confusing the underlying implementation. instead, it breaks them down
into either fork-like or pthread_create-like cases. it was treating
the code in posix_spawn as pthread_create-like, due to CLONE_VM, which
caused horribly wrong behavior: CLONE_FILES broke the synchronization
mechanism, CLONE_SIGHAND broke the parent's signals, and CLONE_THREAD
caused the child's exec to end the parent -- if it hadn't already
crashed. however, qemu special-cases CLONE_VFORK and emulates that
with fork, even when CLONE_VM is also specified. this also gives
incorrect semantics for code that really needs the memory sharing, but
posix_spawn does not make use of the vm sharing except to avoid
momentary double commit charge.
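concretely, the child can be created along these lines; this is a sketch,
not musl's posix_spawn (which also handles file actions, attributes, signal
state, and error reporting), and the use of SIGCHLD as the exit signal plus
the fixed stack size are assumptions:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <unistd.h>

    static char stack[16384] __attribute__((aligned(16)));

    static int child(void *arg)
    {
        execvp(((char **)arg)[0], arg);  /* on success the borrowed vm ends here */
        _exit(127);
    }

    int spawn_demo(char **argv)
    {
        /* CLONE_VM avoids doubling commit charge; CLONE_VFORK keeps the
           parent blocked until the child execs or exits, and lets qemu
           emulate the whole thing with fork */
        return clone(child, stack + sizeof stack,
                     CLONE_VM | CLONE_VFORK | SIGCHLD, argv);
    }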
programs using posix_spawn (including via popen) should now work
correctly under qemu app-level emulation.