the concept of both versions is the same; they differ only in details.
for long runs, they use "rep stosl" or "rep stosq", and for small
runs, they use a trick of writing from both ends towards the middle,
which reduces the number of branches needed. in addition, if memset is
called multiple times with the same length, all branches will be
correctly predicted; there are no loops.
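as a rough illustration (plain C, not the actual asm, and not tuned),
the small-run trick looks something like this:

    #include <stddef.h>
    #include <string.h>

    /* each pair of stores writes one byte from the front and one from
     * the back, so a single length check covers a whole range of
     * sizes, and calls repeated with the same length take the same
     * branches every time. */
    static void small_memset(unsigned char *d, int c, size_t n)
    {
        if (!n) return;
        d[0] = d[n-1] = c;           /* handles n = 1..2 */
        if (n <= 2) return;
        d[1] = d[n-2] = c;
        d[2] = d[n-3] = c;           /* handles n = 3..6 */
        if (n <= 6) return;
        d[3] = d[n-4] = c;           /* handles n = 7..8 */
        if (n <= 8) return;
        memset(d+4, c, n-8);         /* longer runs go to the bulk path */
    }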
for larger runs, there are likely faster approaches than "rep", at
least on some cpu models. for 32-bit, it's unlikely that there is any
faster approach that does not require non-baseline instructions; doing
anything fancier would require inspecting cpu capabilities. for
64-bit, there may very well be faster versions that work on all
models; further optimization could be explored in the future.
with these changes, memset is anywhere between 50% faster and 6 times
faster, depending on the cpu model and the length and alignment of the
destination buffer.
the original motivation for this patch was that qemu (and possibly
other syscall emulators) turns madvise into a nop, resulting in an
infinite loop. however, there is another benefit to this change: the
old madvise call could actually undo an explicit madvise the
application intended for its stack, whereas the mremap operation is a
true nop. the logic here is
that mremap must fail if it cannot resize the mapping in-place, and
the caller knows that it cannot resize in-place because it knows the
next page of virtual memory is already occupied.
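a sketch of the mremap-based probe (hypothetical names, not the actual
code):

    #define _GNU_SOURCE
    #include <errno.h>
    #include <stddef.h>
    #include <sys/mman.h>

    /* walk down from the top of the stack one page at a time. growing
     * the probed page in place (no MREMAP_MAYMOVE) would need the page
     * above it, which is known to be occupied, so the call fails with
     * ENOMEM and leaves the mapping untouched -- a true nop. once the
     * probe walks off the bottom of the stack, mremap fails with a
     * different error and the loop stops. */
    static size_t probe_stack_size(char *stack_top, size_t pagesize)
    {
        size_t l = pagesize;
        while (mremap(stack_top - l - pagesize, pagesize, 2*pagesize, 0)
               == MAP_FAILED && errno == ENOMEM)
            l += pagesize;
        return l;
    }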
one of the arguments to memcmp may be shorter than the length l-3, and
memcmp is under no obligation not to access past the first byte that
differs. instead use strncmp, which conveys the correct semantics. the
performance difference is negligible here, and since the code is only
used for shared libc, both functions are already linked anyway.
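a small illustration of the difference (made-up strings, not the
actual call site):

    #include <string.h>

    /* memcmp(a, b, n) may access all n bytes of both buffers even when
     * they differ early, so passing a buffer shorter than n is
     * undefined behavior. strncmp never reads past a terminating null
     * byte in either string. */
    static int name_matches(const char *recorded, const char *wanted)
    {
        size_t n = strlen(wanted);
        /* unsafe if recorded is shorter than n bytes:
         *     memcmp(recorded, wanted, n) == 0
         * safe replacement: */
        return !strncmp(recorded, wanted, n);
    }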
the dev/inode for the main app and the dynamic linker ("interpreter")
are not available, so the subsequent checks don't work. in general we
don't want an exact string match against an already-loaded library to
prevent loading a new one, since this breaks loading upgraded modules
in module-loading systems. so instead, special-case it.
the motivation for this fix is that calling dlopen on the names
returned by dl_iterate_phdr and walking the link map (obtained by
dlinfo) seem to be the only methods available to an application for
actually getting a list of open dso handles.
reject elf files which are not ET_EXEC/ET_DYN type as bad exec format,
and reject ET_EXEC files when they cannot be loaded at the correct
address, since they are not relocatable at runtime. the main practical
benefit of this is to make dlopen of the main program fail rather than
producing an unsafe-to-use handle.
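roughly, the kind of check this implies (sketch only, not the loader's
actual code):

    #include <elf.h>
    #include <errno.h>

    /* anything other than ET_EXEC or ET_DYN is rejected; the caller
     * turns the failure into a bad-exec-format style dlopen error. */
    static int check_ehdr_type(const Elf64_Ehdr *eh)
    {
        if (eh->e_type != ET_EXEC && eh->e_type != ET_DYN) {
            errno = ENOEXEC;
            return -1;
        }
        return 0;
    }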
it's not clear to me why the linker even outputs these headers if they
are null, but apparently it does so. with the default startfiles, they
will never be null anyway, but this patch allows eliminating crti,
crtn, crtbegin, and crtend (leaving only crt1) if the toolchain is
using init_array/fini_array (or for a C-only, no-ctor environment).
in signal() it is needed since __sigaction uses restrict-qualified
parameters and sharing one buffer for the new and old action is
technically an aliasing error. do the same for the syscall, as at
least qemu-user does not handle it properly.
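the pattern being adopted looks roughly like this (sketch, not the
actual libc code):

    #include <signal.h>

    /* sigaction's new- and old-action parameters are restrict-
     * qualified, so passing the same object for both is an aliasing
     * violation (and trips up at least qemu-user). use two distinct
     * objects instead. */
    void (*set_handler(int sig, void (*handler)(int)))(int)
    {
        struct sigaction sa_new = { .sa_handler = handler }, sa_old;
        sigemptyset(&sa_new.sa_mask);
        if (sigaction(sig, &sa_new, &sa_old) < 0)
            return SIG_ERR;
        return sa_old.sa_handler;
    }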
LC_GLOBAL_LOCALE refers to the global locale, controlled by setlocale,
not the thread-local locale in effect which these functions should be
using. neither LC_GLOBAL_LOCALE nor 0 as an argument to the *_l
functions has behavior defined by the standard, but 0 is a more
logical choice for requesting that the callee look up the current
locale. in the future I may move the current locale lookup to the
caller (the non-_l-suffixed wrapper).
at this point, all of the locale logic is dummied out, so no harm was
done, but it should at least avoid misleading usage.
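a rough sketch of the convention (hypothetical function names; the
real locale logic is dummied out as noted above):

    #define _POSIX_C_SOURCE 200809L
    #include <ctype.h>
    #include <locale.h>

    /* the _l-suffixed function treats a 0 locale argument as a request
     * to look up the current thread-local locale itself; the plain
     * wrapper just passes 0. */
    static int my_isalpha_l(int c, locale_t loc)
    {
        if (!loc) loc = uselocale((locale_t)0); /* query current locale */
        (void)loc;           /* locale handling itself still dummied out */
        return isalpha(c);
    }

    static int my_isalpha(int c)
    {
        return my_isalpha_l(c, 0);
    }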
also add a warning to the existing sys/poll.h. the warning is absent
from sys/dir.h because it actually provides a slightly different API
to the program, and thus just replacing the #include directive is not
a valid fix for programs using it.
entry point was wrong for PIE. e_entry was being treated as an
absolute value, whereas it's actually relative to the load address
(which is zero for non-PIE).
phdr pointer was wrong for non-PIE. e_phoff was being treated as
load-address-relative, whereas it's actually a file offset in the ELF
file. in any case, map_library was already computing it correctly, and
the incorrect code in __dynlink was overwriting it with junk.
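in code terms, the corrected computations look roughly like this
(sketch; variable names are made up, and the 64-bit types are just for
illustration):

    #include <elf.h>

    /* e_entry is relative to the load address; for a non-PIE ET_EXEC
     * the load base is treated as zero, so the addition still yields
     * the right absolute entry point. */
    static void *entry_point(unsigned char *base, const Elf64_Ehdr *eh)
    {
        return base + eh->e_entry;
    }

    /* e_phoff is a file offset, not a load-address-relative value, so
     * it is applied to wherever the ELF header itself is mapped
     * (assuming the program headers live in the same mapped segment). */
    static Elf64_Phdr *phdr_table(unsigned char *mapped_ehdr, const Elf64_Ehdr *eh)
    {
        return (Elf64_Phdr *)(mapped_ehdr + eh->e_phoff);
    }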
the only immediate effect of this commit is enabling PIE support on
some archs that did not previously have any Scrt1.s, since the
existing asm files for crt1 override this C code. so some of the
crt_arch.h files committed are only there for the sake of documenting
what their archs "would do" if they used the new C-based crt1.
the expectation is that new archs should use this new system rather
than using heavy asm for crt1. aside from being easier and less
error-prone, it also ensures that PIE support is available immediately
(since Scrt1.o is generated from the same C source, using -fPIC)
rather than having to be added as an afterthought in the porting
process.
based on a patch by orc. POSIX actually fails to specify the format of
the ntop conversion; presumably, any output that will correctly
round-trip back via the (well-specified) pton operation is acceptable.
the new behavior is much more convenient than the old, however.
this patch also affects getnameinfo, which is implemented in terms of
inet_ntop and which is the preferred interface for performing this
conversion.
I've also removed some inexplicable cruft (filling the buffer with 'x'
before doing anything) whose origin I was unable to track down.
it's not clear that -O3 helps them, and gcc seems to have floating
point optimization bugs that introduce additional failures when -O3 is
used on some of these files.
apparently the original kernel commit's i386 version of siginfo.h
defined this field as unsigned int, but the asm-generic file always
had void *. unsigned int is obviously not a suitable type for an
address, in a non-arch-specific file, and glibc also has void * here,
so I think void * is the right type for it.
also fix redundant type specifiers.
linux commit 8d36eb01da5d371feffa280e501377b5c450f5a5 (2013-05-29)
added PF_IB for InfiniBand
linux commit d021c344051af91f42c5ba9fdedc176740cbd238 (2013-02-06)
added PF_VSOCK for VMware sockets
linux commit a0727e8ce513fe6890416da960181ceb10fbfae6 (2012-04-12)
added siginfo fields for SIGSYS (seccomp uses it)
linux commit ad5fa913991e9e0f122b021e882b0d50051fbdbc (2009-09-16)
added siginfo field and si_code values for SIGBUS (hwpoison signal)
this is a cheat since the _l versions take an extra argument, but
since these functions are only here for ABI purposes, it doesn't
really matter as long as the ABI matches. if the non-__-prefixed
versions are eventually made public, they should probably be real
functions rather than hacks like this.
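the kind of cheat meant here, sketched with made-up names (assuming
the usual GCC-style weak_alias macro):

    #include <string.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* plain function; it never looks at a locale at all */
    size_t fake_strxfrm(char *dest, const char *src, size_t n)
    {
        return strxfrm(dest, src, n);
    }

    /* ABI-only alias: callers declare and call this with an extra
     * locale_t argument, which the aliased implementation simply never
     * reads. this only works as long as the calling convention lets an
     * extra trailing argument be ignored. */
    weak_alias(fake_strxfrm, fake_strxfrm_l);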
unlike the strftime commit, this one is purely an ABI compatibility
issue. the previous version of the code would have worked just as well
with LC_TIME once LC_TIME support is added.
based on a patch by orc, with indexing and flow control cleaned up a
little bit. this code is all going to be replaced at some point in the
near future.
these are needed for some C++ library binaries including most builds
of libstdc++. I'm not entirely clear on the rationale. this patch does
not implement any special semantics for them, but as far as I can
tell, no special treatment is needed in correctly-linked programs;
this binding seems to exist only for catching incorrectly-linked
programs.
this is necessary to meet the C++ ABI target. alternatives were
considered to avoid the size increase for non-sig jmp_buf objects, but
they seemed to have worse properties. moreover, the relative size
increase is only extreme on x86[_64]; one way of interpreting this is
that, if the size increase from this patch makes jmp_buf use too much
memory, then the program was already using too much memory when built
for non-x86 archs.
this bug was caught by the new footer-corruption check in realloc and
free.
if the block returned by malloc was already aligned to the desired
alignment, memalign's logic to split off the misaligned head was
incorrect; rather than writing to a point inside the allocated block,
it was overwriting the footer of the previous block on the heap with
the value 1 (length 0 plus an in-use flag).
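the shape of the fix, as a sketch with made-up names (not the actual
allocator code):

    #include <stddef.h>
    #include <stdint.h>

    /* a power-of-two alignment is already satisfied when the low bits
     * of the address are zero; in that case there is no misaligned
     * head to split off, and the splitting code (which is what
     * clobbered the previous block's footer) must not run at all. */
    static int already_aligned(void *mem, size_t align)
    {
        return ((uintptr_t)mem & (align - 1)) == 0;
    }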
fortunately, the impact of this bug was fairly low. (this is probably
why it was not caught sooner.) due to the way the heap works, malloc
will never return a block whose previous block is free. (doing so would
be harmful because it would increase fragmentation with no benefit.)
the footer is actually not needed for in-use blocks, except that its
in-use bit needs to remain set so that it does not get merged with
free blocks, so there was no harm in it being set to 1 instead of the
correct value.
however, there is one case where this bug could have had an impact: in
multi-threaded programs, if another thread freed the previous block
after memalign's call to malloc returned, but before memalign
overwrote the previous block's footer, the resulting block in the free
list could be left in a corrupt state. I have not analyzed the impact
of this bad state and whether it could lead to more serious
malfunction.