The previous implementation wasn't entirely safe w.r.t. systems with a
32-bit off_t, specifically around the mmap replacement hook. Also, the
API was a lot more general and broad than we actually need.
Sadly, the old mmap hooks API was shipped with our public headers. But
thankfully it appears to be unused externally (checked via github
search). So we keep the old API and ABI for the sake of formal API and
ABI compatibility. But the old API is now empty and always fails (some
OS/hardware combinations didn't have functional implementations of
those hooks anyway).
The new API is 64-bit clean and only provides us with what we need:
being able to react to virtual address space mapping changes for
logging, heap profiling and the heap leak checker. I.e. no pre hooks or
mmap-replacement hooks. We also explicitly do not ship this API
externally, to give us freedom to change it.
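For illustration, the new internal API has roughly this shape (all
names below are invented, since the real header is deliberately not
shipped):

  // Describes one virtual address space change.
  struct MappingEvent {
    const void* before;   // region disappearing (munmap/remap), if any
    size_t before_size;
    const void* after;    // region appearing (mmap/remap), if any
    size_t after_size;
    int prot, flags;
    int64_t file_offset;  // always 64-bit, even where off_t is 32-bit
  };

  // Called after a mapping change; consumed by logging, the heap
  // profiler and the heap leak checker. No pre hooks, no replacement.
  void HookMMapEvents(void (*hook)(const MappingEvent&));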
The new code is also hopefully tidier and slightly more portable; at
least there are fewer arch-specific ifdefs.
Another somewhat notable change: since the mmap hook isn't needed in
the "minimal" configuration, we no longer override the system's
mmap/munmap/etc functions in that configuration. No big deal, but it
reduces the risk of damage if we somehow mess those up. E.g. musl's
mmap does a few things that our mmap replacement doesn't, such as a
very fancy vm_lock thingy. That doesn't look critical, but it is a good
thing for us not to interfere with when not necessary.
Fixes issue #1406 and issue #1407. Let's also mention issue #1010,
which is somewhat relevant.
It was originally separated into its own function due to some (now
obsolete) compiler optimization bug. We stopped working around that bug
a few commits ago. But as github user plopresti rightfully pointed
out (much, much thanks!), we lost `static' on that function. Since it
is a trivial function used exactly once, it is best to simply inline
its 2 lines of code back instead of restoring `static'.
Previous commit was: 68b442714a
This facility allowed us to build tcmalloc without linking in the
actual -lpthread. Via weak symbols we checked at runtime whether
pthread functions are available, and if not, special single-threaded
stubs were used instead. Not always bringing in the pthread dependency
helped performance of some programs or libraries which depended at
runtime on whether threads are linked or not. Most notable of those is
libstdc++, which uses non-atomic refcounting in single-threaded
programs.
But such an optional dependency on pthreads caused complications for
nearly no benefit. One trouble was reported in github issue #1110.
These days the glibc/libstdc++ combo actually depends on the
sys/single_threaded.h facility, so bringing in pthread at runtime is
fine. Also, modern glibc ships pthread symbols inside libc anyway and
libpthread is empty. I also found that, for whatever reason, on the
BSDs and OS X we already pulled in proper pthreads too.
So we lose nothing, we get issue #1110 fixed, and we simplify
everything.
Our perftools_pthread_once implementation had some ifdefs and whatnot:
some OSes have complications around early usage, and windows is
windows. But in fact we can provide a trivial spinlock-backed
implementation and depend on nothing, as sketched below.
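A minimal sketch of the idea, assuming a std::atomic-based state (the
real code uses perftools' own SpinLock and a pthread_once_t-compatible
signature):

  #include <atomic>

  enum { NOT_STARTED = 0, RUNNING = 1, DONE = 2 };

  void perftools_pthread_once(std::atomic<int>* once, void (*init)()) {
    int expected = NOT_STARTED;
    if (once->compare_exchange_strong(expected, RUNNING,
                                      std::memory_order_acquire)) {
      init();
      once->store(DONE, std::memory_order_release);
    } else {
      while (once->load(std::memory_order_acquire) != DONE) {
        // spin; a real implementation would pause/yield here
      }
    }
  }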
Update issue #1110
Original CL: https://chromiumcodereview.appspot.com/10391178
1. Enable the large object pointer offset check in release builds.
The following code will now cause a check error:

  char* p = reinterpret_cast<char*>(malloc(kMaxSize + 1));
  free(p + 1);
2. Remove the duplicated error reporting function
"DieFromBadFreePointer"; "InvalidGetAllocatedSize" can be used instead.
Reviewed-on: https://chromium-review.googlesource.com/1184335
[alkondratenko@gmail.com] removed some unrelated formatting changes
Signed-off-by: Aliaksey Kandratsenka <alkondratenko@gmail.com>
It barks because we access the field in a special way through address
computation (see magic2_addr), so it is indeed not accessed
directly. To emphasize that it comes directly after size2_, we simply
converted size2_ and magic2_ into the explicit array size_and_magic2_.
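Roughly, the change looks like this (the field types and surrounding
struct are assumptions; only the names come from this commit):

  // Before: two adjacent fields, magic2_ only ever reached via magic2_addr.
  size_t size2_;
  size_t magic2_;

  // After: a single array makes the adjacency explicit to the compiler.
  size_t size_and_magic2_[2];  // [0] acts as size2_, [1] as magic2_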
Our tests do explicitly trigger certain "bad" cases. And compilers are
getting increasingly smarter, so they have started giving us warnings.
People dislike warnings, so let's try to disable them specifically for
the unit tests.
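For instance, a warning around an intentionally bad free could be
silenced like this (the specific flag is illustrative, not taken from
this commit):

  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wfree-nonheap-object"
  free(p + 1);  // intentionally invalid; the test expects tcmalloc to catch it
  #pragma GCC diagnostic pop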
Refers to issue #1401
In the heap checker we actually use the test_str address after
deletion, to verify at runtime that the heap checker is indeed tracking
deletions. But gcc barks. So let's hide the allocation from it by
calling tc_newarray/tc_deletearray directly.
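A minimal sketch of that trick (tc_newarray/tc_deletearray are real
tcmalloc entry points; the test shape here is simplified):

  char* test_str = static_cast<char*>(tc_newarray(8));
  tc_deletearray(test_str);
  // test_str now dangles on purpose; gcc cannot see through the tc_*
  // calls, so it no longer warns about the intentional use-after-free.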
Refers to #1401
g++ 13 somehow considers reinterpret_cast<uintptr_t>(InitialNewHook)
as non-constexpr. It then generates a runtime constructor for the
malloc hooks, which runs too late. That broke the heap checker and heap
profiler. We fix this by giving the array of hooks inside malloc the
proper function pointer type, which lets us avoid the reinterpret_cast
and makes construction compile-time again.
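Schematically (array name and size invented for illustration):

  typedef void (*NewHook)(const void* ptr, size_t size);
  static void InitialNewHook(const void* ptr, size_t size);

  // Before: storing integers forces a reinterpret_cast, which g++ 13
  // refuses in constant evaluation, so a runtime constructor is emitted:
  //   static uintptr_t hooks_[4] = {
  //       reinterpret_cast<uintptr_t>(&InitialNewHook)};

  // After: real function pointers keep initialization compile-time:
  static NewHook hooks_[4] = {&InitialNewHook};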
If we don't avoid that (freeing memory under signal_lock_), there is a
possibility of deadlock. Like this:
*) thread A runs malloc and happens to hold some lock(s) there. It
gets the profiling signal.
*) thread B runs ProfileHandler::UnregisterCallback or similar and
holds both control_lock_ and signal_lock_.
*) thread B calls into the memory allocator (it used to delete the
token under lock), hits the malloc lock, and waits.
*) thread A, which holds the malloc lock, wants to get signal_lock_ and
waits too.
The correct behavior is: since we grab signal_lock_ from pretty much
anywhere as part of the signal handler, it transitively implies that
anything that grabs that lock cannot call into anything that grabs any
other lock (including e.g. malloc). I.e. signal_lock_ "nests" under
every other lock in the program.
So we refactor the code some. When removing callbacks we simply copy
the entire list with the right token removed, then grab signal_lock_
and only swap the copy with the old callback list. Then signal_lock_ is
released and all the memory freeing operations are performed.
When adding a callback we utilize splice to likewise avoid anything
"heavy" under signal_lock_.
And as part of that change, the UpdateTimer method now needs only
control_lock_ instead of signal_lock_.
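In sketch form (the list type and lock holder names here are
assumptions, not the exact code):

  // Unregister: all allocation/deallocation happens outside signal_lock_.
  std::list<Callback*> copy = callbacks_;  // copying may allocate; no lock yet
  copy.remove(token);
  {
    SpinLockHolder sl(&signal_lock_);
    callbacks_.swap(copy);                 // O(1); nothing heavy under the lock
  }
  // the old list ('copy') and the token are freed here, lock already dropped

  // Register: pre-build a one-element list, then splice it in O(1).
  std::list<Callback*> single;
  single.push_back(token);                 // allocation happens before locking
  {
    SpinLockHolder sl(&signal_lock_);
    callbacks_.splice(callbacks_.end(), single);
  }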
We add a somewhat elaborate unit test that was able to catch the issue
before the fix and now passes fine.
Closes github issue #412
We had this logic to ensure that TesterThread succeeds at least 50% of
the time at grabbing the passed object's lock, apparently to ensure
that multiple threads actually participate. But this check occasionally
flakes. It is totally possible for the lock owner to get preempted
while holding the lock, while other threads accumulate a gazillion
failed trylocks. So while it would be nice to test this somehow, flakes
are bad enough that we drop this check.
- Some small automake changes. Add libc++ for AIX instead of libstdc++
- Add the interface changes for AIX: user-defined malloc replacement
- Add code to avoid use of pthreads library prior to its initialization
- Some small changes to the unittest case
- Update INSTALL for AIX
[alkondratenko@gmail.com]: lower-case/de-trailing-dot for commit subject line
[alkondratenko@gmail.com]: rebase
[alkondratenko@gmail.com]: amputate unused AM_CONDITIONAL for AIX
[alkondratenko@gmail.com]: explicitly mention libc_override_aix.h in Makefile.am
We used to explicitly link to libstdc++, libm and even libpthread, but
this should be handled by libtool, since those are dependencies of
libtcmalloc_minimal. What also helps is that we now build everything
with the C++ compiler, not C. So the libstdc++ (or libc++) dependency
doesn't need to be added at all, even if libtool for some reason fails
to handle it.
This fixes github issue #1323.
When populating a span with a linked list of objects, the condition
`ptr + size <= limit` could overflow before comparing to limit. So the
fixed code tests the limit carefully. We also produce a version using
__builtin_add_overflow, since this is a semi-hot loop and we want it to
be fast.
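Roughly (variable names follow the commit text; the real loop also
links the objects into the freelist):

  // Careful portable check: rearrange so nothing wraps (assumes size <= limit).
  while (ptr <= limit - size) {
    // ... link object at ptr ...
    ptr += size;
  }

  // Builtin-assisted variant: detect wraparound explicitly.
  uintptr_t next;
  while (!__builtin_add_overflow(ptr, size, &next) && next <= limit) {
    // ... link object at ptr ...
    ptr = next;
  }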
This fixes issue #1371
From time to time things fail inside tcmalloc's guts where calling
malloc is not safe. Regular strerror does locale bits, so it will
occasionally open files/malloc/etc. We avoid this by using our own
"safe" variant that hardcodes the names of all POSIX errno constants.
This patch adds an arm64 implementation of the SpinlockPause()
function, allowing the core to adaptively spin to acquire the lock and
improving performance in multi-threaded environments where the locks
can be contended. From experience with other projects, we've found a
single isb is the closest corollary to the `rep; nop` timing of x86.
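In sketch form (the real patch lives in perftools' spinlock internals;
this only shows the instruction):

  static inline void SpinlockPause() {
  #if defined(__aarch64__)
    __asm__ __volatile__("isb" ::: "memory");  // arm64 analog of x86 rep; nop
  #endif
  }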
Overall, across thirty runs, the binary_trees_shared benchmark improves 6% in
average, median, and P90 runtime with this change.
We had plenty of old and mostly no-longer-correct i386 cruft. Now that
the generic_fp backtracer covers i386 just fine, we can drop the
explicit x86 backtracer.
With that, we refactored and simplified stacktrace.cc, mostly around
picking the default implementation, but also adding a few more minor
cleanups.
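For reference, generic_fp conceptually just walks saved frame pointers,
roughly like this (heavily simplified; the real code validates frame
addresses before dereferencing):

  struct frame {
    frame* parent;       // saved frame pointer of the caller
    void* return_addr;   // saved return address
  };

  int capture(void** result, int max_depth) {
    frame* f = static_cast<frame*>(__builtin_frame_address(0));
    int n = 0;
    while (f != nullptr && n < max_depth) {
      result[n++] = f->return_addr;
      f = f->parent;
    }
    return n;
  }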
The idea is that -momit-leaf-frame-pointer gives us performance pretty
much the same as fully omitting frame pointers, while having frame
pointers elsewhere allows us to support cases where the user's code is
built with frame pointers. We also pass all tests with
TCMALLOC_STACKTRACE_METHOD=generic_fp (not just libunwind or libgcc).