checking for EINVAL should be sufficient, but qemu user emulation
returns EPROTONOSUPPORT in some of the failure cases, and it seems
conceivable that other kernels doing linux-emulation could make the
same mistake. since DNS lookups and other important code might break
if the fallback does not get invoked, be extra careful and check for
either error.
note that it's important NOT to perform the fallback code on other
errors such as resource-exhaustion cases, since the fallback is not
atomic and will lead to file-descriptor leaks in multi-threaded
programs that use exec. the fallback code is only "safe" to run when
the initial failure is caused by the application's choice of
arguments, not the system state.
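roughly, such a fallback looks like the following sketch, written against
public interfaces and assuming the fallback in question is the
SOCK_CLOEXEC/SOCK_NONBLOCK emulation in socket(); the real code uses
internal syscall wrappers and differs in detail:

    #include <sys/socket.h>
    #include <fcntl.h>
    #include <errno.h>

    static int socket_with_fallback(int domain, int type, int protocol)
    {
        int s = socket(domain, type, protocol);
        /* only retry when the error clearly came from the flag bits;
         * resource-exhaustion errors must be reported, not retried */
        if (s >= 0 || (errno != EINVAL && errno != EPROTONOSUPPORT))
            return s;
        s = socket(domain, type & ~(SOCK_CLOEXEC|SOCK_NONBLOCK), protocol);
        if (s < 0) return s;
        /* non-atomic: a brief window exists where the fd lacks these
         * flags, which is why this path must not run gratuitously */
        if (type & SOCK_CLOEXEC) fcntl(s, F_SETFD, FD_CLOEXEC);
        if (type & SOCK_NONBLOCK) fcntl(s, F_SETFL, O_NONBLOCK);
        return s;
    }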
some of these warnings were coming from stdio functions that lock a
file without ever unlocking it. I believe it's useful for this pattern
to produce a warning, so I added a new macro that self-documents that
the file will never be unlocked, avoiding the warning in the few places
where it would be wrong.
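one way the warning and the macro fit together, sketched with
hypothetical definitions (the real macro names, lock fields, and exact
warning mechanism may differ):

    #include <stdio.h>

    int __lockfile(FILE *);      /* internal helpers assumed for the sketch */
    void __unlockfile(FILE *);

    /* the normal pair: the lock macro sets a local flag that the unlock
     * macro consumes, so a lock with no matching unlock is visible to
     * analysis (e.g. as a set-but-unused flag) */
    #define FLOCK(f)   int __need_unlock = __lockfile(f)
    #define FUNLOCK(f) do { if (__need_unlock) __unlockfile(f); } while (0)

    /* the self-documenting variant for paths like fclose where the file
     * is intentionally never unlocked; no flag, so no spurious warning */
    #define FFINALLOCK(f) ((void)__lockfile(f))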
patches by Alex Caudill (npx). the dynamic-linked version is almost
identical to the final submitted patch; I just added a couple missing
lines for saving the phdr address when the dynamic linker is invoked
directly to run a program, and removed a couple to avoid introducing
another unnecessary type. the static-linked version is based on npx's
draft. it could use some improvements which are contingent on the
startup code saving some additional information for later use.
ideally, system would also be cancellable while running the external
command, but I cannot find any way to make that work without either
leaking zombie processes or introducing behavior that is far outside
what the standard specifies. glibc handles cancellation by killing the
child process with SIGKILL, but this could be unsafe in that it could
leave the data being manipulated by the command in an inconsistent
state.
for conformance, two functions should not have the same address. a
conforming program could use the addresses of getc and fgetc in ways
that assume they are distinct. normally I would just use a wrapper,
but these functions are so small and performance-critical that an
extra layer of function call could make the one that's a wrapper
nearly twice as slow, so I'm just duplicating the code instead.
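a contrived example of the kind of conforming usage this protects:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* a function-like macro only expands when followed by '(', so
         * these expressions designate the actual library functions */
        int (*g)(FILE *) = getc;
        int (*fg)(FILE *) = fgetc;
        /* a program may compare the addresses and assume two distinct
         * functions compare unequal */
        assert(g != fg);
        return 0;
    }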
-lpcc only works if -nostdlib is not passed, so it's useless. instead,
use -print-file-name to look up the full pathname for libpcc.a, and
check whether that succeeds before trying to link with the result.
also, silence pcc's junk printed on stdout during tests.
in old versions of pcc, the directory containing libpcc.a was not in
the library path, and other options like -print-file-name may have
been needed to locate it. however, -print-file-name itself seems to
have been added around the same time that the directory was added to
the search path, and moreover, I see no evidence that older versions
of pcc are capable of building a working musl shared library. thus, it
seems reasonable to just test whether -lpcc is accepted.
on x86 and some other archs, functions that make calls which might go
through a PLT incur significant overhead loading the GOT register prior
to making the call. this load is utterly useless in
musl, since all calls are bound at library-creation time using
-Bsymbolic-functions, but the compiler has no way of knowing this, and
attempts to set the default visibility to protected have failed due to
bugs in GCC and binutils.
this commit simply manually assigns hidden/protected visibility, as
appropriate, to a few internal-use-only functions which have many
callers, or which have callers that are hot paths like getc/putc. it
shaves about 5k off the i386 libc.so with -Os. many of the
improvements are in syscall wrappers, where the benefit is size alone;
any performance improvement is unmeasurable noise amid the syscall
overhead. however, stdio may be measurably faster.
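the technique itself, sketched with an illustrative function rather
than any of the functions actually touched by this commit:

    /* hidden: the symbol is not exported from the shared library at all */
    __attribute__((__visibility__("hidden")))
    int __internal_helper(int x)
    {
        return x + 1;
    }

    /* inside the same shared library this becomes a direct call: no PLT
     * entry, and no GOT register load before the call on archs like i386 */
    int public_entry(int x)
    {
        return __internal_helper(x);
    }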
if in the future there are toolchains that can do the same thing
globally without introducing linking bugs, it might be worth
considering removing these workarounds.
pcc wrongly passes any option beginning with -m on to the linker, and
the link will break if these options are added to CFLAGS. testing
linking lets us catch this at configure time and skip them.
these functions must behave as if they obtain the lock via flockfile
to satisfy POSIX requirements. since another thread can provably hold
the lock when they are called, they must wait to obtain the lock
before they can return, even if the correct return value could be
obtained without locking. in the case of fclose and freopen, failure
to do so could cause correct (albeit obscure) programs to crash or
otherwise misbehave; in the case of feof, ferror, and fwide, failure
to obtain the lock could sometimes lead to incorrect results being
returned. in any case, having these functions proceed and return while
another thread held the lock was wrong.
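expressed in terms of the public locking interface, the requirement
for, say, feof amounts to the following sketch; the real implementation
uses libc-internal lock macros and flag bits instead:

    #define _GNU_SOURCE   /* expose feof_unlocked on some systems */
    #include <stdio.h>

    int feof_as_specified(FILE *f)
    {
        flockfile(f);    /* must wait for any thread holding the lock */
        int r = feof_unlocked(f);
        funlockfile(f);
        return r;
    }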
1. don't open /dev/null just as a basis to copy flags; use the shared
__fmodeflags function to get the right file flags for the mode.
2. handle the (probably invalid, but whatever) case where the original
stream's file descriptor was closed; previously, the logic re-closed
it.
3. accept the "e" mode flag for close-on-exec; update dup3 to fall back
to using dup2 so we can simply call __dup3 instead of putting fallback
logic in freopen itself (a sketch of such a fallback follows below).
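the shape of such a dup3 fallback, as a sketch rather than musl's
actual __dup3 (note the dup2 path cannot set close-on-exec atomically):

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <fcntl.h>
    #include <errno.h>
    #include <sys/syscall.h>

    static int dup3_or_emulate(int old, int new, int flags)
    {
        int r = syscall(SYS_dup3, old, new, flags);
        if (r >= 0 || errno != ENOSYS) return r;
        /* old kernel: emulate with dup2 and set the flag afterwards */
        r = dup2(old, new);
        if (r >= 0 && (flags & O_CLOEXEC))
            fcntl(r, F_SETFD, FD_CLOEXEC);
        return r;
    }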
the behavior of putenv is left undefined if the argument does not
contain an equal sign, but traditional implementations remove the named
variable from the environment in that case, and gnulib replaces putenv
if it does not do this.
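assuming the traditional behavior referred to here is removal of the
variable, the check amounts to something like this sketch (layered over
the standard API purely for illustration):

    #include <stdlib.h>
    #include <string.h>

    int putenv_traditional(char *s)
    {
        if (!strchr(s, '='))
            return unsetenv(s);   /* no '=': remove the named variable */
        return putenv(s);
    }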
__release_ptc() is only valid in the parent; if it's performed in the
child, the lock will be unlocked early then double-unlocked later,
corrupting the lock state.
since we target systems without overcommit, special care should be
taken that system() and popen(), like posix_spawn(), do not fail in
processes whose commit charges are too high to allow ordinary forking.
this in turn requires special precautions to ensure that the parent
process's signal handlers do not end up running in the shared-memory
child, where they could corrupt the state of the parent process.
popen has also been updated to use pipe2, so it does not have an
fd-leak race in multi-threaded programs. since pipe2 is missing on
older kernels, (non-atomic) emulation has been added.
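the emulation amounts to something like the following sketch (not
musl's exact code); the window between pipe() and fcntl() is precisely
the fd-leak race that pipe2 exists to close:

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <fcntl.h>
    #include <errno.h>

    static int pipe2_or_emulate(int fd[2], int flags)
    {
        int r = pipe2(fd, flags);
        if (r == 0 || errno != ENOSYS) return r;
        /* old kernel: fall back to pipe() and set the flags afterwards */
        if (pipe(fd)) return -1;
        if (flags & O_CLOEXEC) {
            fcntl(fd[0], F_SETFD, FD_CLOEXEC);
            fcntl(fd[1], F_SETFD, FD_CLOEXEC);
        }
        if (flags & O_NONBLOCK) {
            fcntl(fd[0], F_SETFL, O_NONBLOCK);
            fcntl(fd[1], F_SETFL, O_NONBLOCK);
        }
        return 0;
    }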
some silly bugs in the old code should be gone too.
despite documentation that makes it sound a lot different, the only
ABI-constraint difference between TLS variants II and I seems to be
that variant II stores the initial TLS segment immediately below the
thread pointer (i.e. the thread pointer points to the end of it) and
variant I stores the initial TLS segment above the thread pointer,
requiring the thread descriptor to be stored below. the actual value
stored in the thread pointer register also tends to have per-arch
random offsets applied to it for silly micro-optimization purposes.
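a rough sketch of the two layouts in terms of the thread pointer value
tp; sizes, alignment, and the per-arch offsets mentioned above are all
glossed over here:

    #include <stddef.h>

    /* variant II: the initial TLS segment ends at the thread pointer */
    static void *initial_tls_variant_ii(unsigned char *tp, size_t tls_size)
    {
        return tp - tls_size;
    }

    /* variant I: the initial TLS segment starts above the thread pointer
     * (after any arch-defined reserved area), so the thread descriptor
     * has to live below tp instead */
    static void *initial_tls_variant_i(unsigned char *tp, size_t reserved)
    {
        return tp + reserved;
    }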
with these changes applied, TLS should be basically working on all
supported archs except microblaze. I'm still working on getting the
necessary information and a working toolchain that can build TLS
binaries for microblaze, but in theory, static-linked programs with
TLS and dynamic-linked programs where only the main executable uses
TLS should already work on microblaze.
alignment constraints have not yet been heavily tested, so it's
possible that this code does not always align TLS segments correctly
on archs that need TLS variant I.
usage of vfork creates a situation where a process of lower privilege
may momentarily have write access to the memory of a process of higher
privilege.
consider the case of a multi-threaded suid program which is calling
posix_spawn in one thread while another thread drops the elevated
privileges then runs untrusted (relative to the elevated privilege)
code as the original invoking user. this untrusted code can then
potentially modify the data the child process will use before calling
exec, for example changing the pathname or arguments that will be
passed to exec.
note that if vfork is implemented as fork, the lock will not remain
held until the child execs, but since memory is not shared, it does not
matter.