Commit Graph

22 Commits

Author SHA1 Message Date
Aliaksei Kandratsenka be755a8d3c mass-replace NULL -> nullptr 2024-09-25 18:33:56 -04:00
Aliaksei Kandratsenka 5273567470 [trivialre] fix past-end-of-string access in MatchSubstring
Thanks go to MSVC's string_view asserts, which check for this.
2024-09-16 19:33:14 -04:00
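
The bug here is reading past the end of a std::string_view during substring matching; MSVC's debug-mode string_view asserts flag exactly this kind of access. A minimal bounds-checked sketch follows, with a hypothetical MatchSubstringAt that is not the actual trivialre code:

    #include <cstddef>
    #include <string_view>

    // Illustrative only: check the remaining length before comparing, so the
    // match never reads past the end of `text`.
    bool MatchSubstringAt(std::string_view text, std::size_t pos,
                          std::string_view pattern) {
      if (pos > text.size() || text.size() - pos < pattern.size()) {
        return false;  // not enough characters left
      }
      return text.compare(pos, pattern.size(), pattern) == 0;
    }
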
Aliaksey Kandratsenka a5d86777ce [malloc_bench] add rnd_dependent_8cores benchmark
This benchmark exercises multi-threaded central free list operations,
which is where we're losing to a bunch of competing mallocs (i.e. those
that shard the heap).
2024-09-13 18:02:37 -04:00
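
A hedged sketch of this kind of workload (the names and sizes below are made up, not the actual benchmark): blocks are allocated on worker threads and freed elsewhere, so requests cannot be satisfied purely from thread-local caches and the allocator's central free lists see traffic.

    #include <cstdlib>
    #include <functional>
    #include <thread>
    #include <vector>

    // Hypothetical sketch: 8 worker threads allocate; the main thread frees,
    // so frees land on a different thread than the matching mallocs.
    static void fill(std::vector<void*>& slots) {
      for (auto& p : slots) p = malloc(64);
    }

    int main() {
      constexpr int kThreads = 8;
      std::vector<std::vector<void*>> buffers(kThreads,
                                              std::vector<void*>(1 << 16));
      std::vector<std::thread> workers;
      for (int i = 0; i < kThreads; i++)
        workers.emplace_back(fill, std::ref(buffers[i]));
      for (auto& t : workers) t.join();
      for (auto& b : buffers)
        for (void* p : b) free(p);
      return 0;
    }
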
Aliaksey Kandratsenka ea81e46ff1 improve benchmarks facility
We now support a set of command line flags similar to the Abseil
benchmark framework's, i.e. to let people select a subset of benchmarks
or run them for longer/shorter as needed.

This commit also includes a small, portable, and very simple regexp
facility. It isn't good enough for production use, but it is plenty
good for testing or benchmark selection.
2024-09-13 18:01:25 -04:00
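
A hedged sketch of pattern-based benchmark selection in the spirit described above; std::regex stands in for the commit's own tiny regexp facility, and the benchmark names and command line are made up for illustration:

    #include <cstdio>
    #include <regex>
    #include <string>
    #include <vector>

    // Illustrative only: run just the benchmarks whose names match the
    // pattern given as the first command line argument (default: run all).
    int main(int argc, char** argv) {
      const std::vector<std::string> benchmarks = {
          "bench_fastpath", "bench_sized_free", "bench_memalign"};
      const std::regex filter(argc > 1 ? argv[1] : ".*");
      for (const auto& name : benchmarks) {
        if (std::regex_search(name, filter)) {
          std::printf("would run: %s\n", name.c_str());
        }
      }
      return 0;
    }
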
Aliaksey Kandratsenka 7fa0c2da53 modernize malloc_bench
Instead of relying on gperftools-specific tc_XYZ functions for sized
deallocation and memalign, we use standard C++ facilities. There are
also other minor improvements, such as mallocing larger buffers rather
than allocating them statically.
2024-09-05 22:56:23 -04:00
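
For reference, the standard facilities in question are sized operator delete (C++14) and aligned operator new/delete (C++17); a minimal sketch, not the benchmark's actual code:

    #include <new>

    int main() {
      // Sized deallocation (C++14): the size is passed back to the allocator.
      void* p = ::operator new(256);
      ::operator delete(p, 256);

      // Aligned allocation (C++17): the standard replacement for memalign.
      void* q = ::operator new(256, std::align_val_t{64});
      ::operator delete(q, 256, std::align_val_t{64});
      return 0;
    }
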
Xiang.Lin 717bf724a5 Add heap profiler support for QNX 2023-10-30 19:30:37 -04:00
Aliaksey Kandratsenka e5ac219780 restore unwind-bench
We previously deleted it because it wasn't portable enough. But the
unportable bits are now ifdef-ed out, so we can bring it back.
2023-07-02 22:30:00 -04:00
Aliaksey Kandratsenka 7dd1b82378 simplify project by making it C++-only
That is, there is no need for any AC_LANG_PUSH stuff in configure. Most
usefully, only CXXFLAGS now needs to be set when you need to tweak
compile flags.
2023-07-02 22:30:00 -04:00
Aliaksey Kandratsenka 90eff0637b upgrade malloc bench away from std::random_shuffle
C++17 dropped it, so we use std::shuffle (available since C++11)
instead. Fixes GitHub issue #1378.
2023-07-02 21:28:30 -04:00
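
The replacement is mechanical: std::random_shuffle (gone in C++17) becomes std::shuffle with an explicit random engine. A minimal sketch:

    #include <algorithm>
    #include <random>
    #include <vector>

    int main() {
      std::vector<int> v = {1, 2, 3, 4, 5};
      // std::random_shuffle(v.begin(), v.end());  // removed in C++17
      std::mt19937 rng(std::random_device{}());    // explicit engine (C++11)
      std::shuffle(v.begin(), v.end(), rng);
      return 0;
    }
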
Aliaksey Kandratsenka b501806f2a fix malloc_bench on clang
We got bitten again by the compiler optimizing out the free(malloc(sz))
combination. We now replace calls to malloc/free with calls to the
global operator new/operator delete functions, which the standard
apparently forbids from being optimized out.
2023-07-02 21:28:30 -04:00
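
The idea is that compilers happily elide a malloc/free pair, whereas the benchmark relies on direct calls to the replaceable global operator new/operator delete surviving optimization. A minimal sketch of the pattern:

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // Before: compilers can recognize the pair and delete it entirely.
    void churn_malloc(std::size_t sz) {
      free(malloc(sz));
    }

    // After: direct calls to the global allocation functions, which the
    // benchmark relies on not being removed.
    void churn_operator_new(std::size_t sz) {
      ::operator delete(::operator new(sz));
    }
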
Aliaksey Kandratsenka ff68bcab60 [benchmark] detect iterations overflow
If for some reason the benchmark function is too fast (e.g. when it got
optimized out to nothing), we'd previously hang in an infinite loop.
Now we catch this condition because the iteration count overflows.
2023-07-02 21:28:30 -04:00
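
A hedged sketch of the guard being described (names are illustrative, not the actual harness code): the harness scales the iteration count up until a run is long enough to time, and now bails out if that count overflows instead of spinning forever.

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Illustrative harness loop: grow `iterations` until the run takes long
    // enough to time reliably, but abort if the count overflows (e.g. the
    // benchmark body was optimized out and takes ~0 time per call).
    void run_benchmark(void (*fn)(uint64_t), double min_seconds) {
      uint64_t iterations = 1;
      for (;;) {
        auto start = std::chrono::steady_clock::now();
        fn(iterations);
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        if (elapsed.count() >= min_seconds) break;
        uint64_t next = iterations * 2;
        if (next < iterations) {  // overflow: function is suspiciously fast
          std::fprintf(stderr, "iteration count overflow; benchmark too fast?\n");
          std::abort();
        }
        iterations = next;
      }
    }
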
Aliaksey Kandratsenka 5eec9d0ae3 Drop not very portable and not very useful unwind benchmark. 2018-10-07 08:17:04 -07:00
Gabriel Marin 3af509d4f9 benchmark: use angle brackets to include ucontext.h
Using quotes for a system header file fails a presubmit check in Chromium.
2018-08-05 16:15:12 -07:00
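
The change itself is just the include form; angle brackets mark ucontext.h as a system header, which is what the Chromium presubmit check expects:

    // #include "ucontext.h"   // quotes: looked up as a project-local header first
    #include <ucontext.h>      // angle brackets: looked up as a system header
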
Aliaksey Kandratsenka 8b9728b023 add memalign benchmark to malloc_bench 2017-11-30 18:14:11 +00:00
Aliaksey Kandratsenka 5ac82ec5b9 added stacktrace capturing benchmark 2017-05-29 14:57:13 -07:00
Aliaksey Kandratsenka a2550b6309 turn bench_fastpath_throughput into actual throughput benchmark
Previously we bumped the size by 16 between iterations, but for many
size classes that put subsequent iterations into the same size class.
Multiplying by a prime number randomizes sizes more, which speeds up
this benchmark, at least on modern x86.
2017-05-14 19:04:55 -07:00
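
A hedged sketch of the two size progressions being compared (the constants are illustrative): stepping by 16 keeps many consecutive requests in the same tcmalloc size class, while multiplying by a prime (modulo a cap) scatters requests across size classes.

    #include <cstddef>

    // Illustrative only: two ways of advancing the allocation size between
    // benchmark iterations.
    std::size_t step_by_16(std::size_t sz) {
      return sz + 16;  // neighboring sizes often land in the same size class
    }

    std::size_t step_by_prime(std::size_t sz, std::size_t max_size) {
      return (sz * 19 + 1) % max_size;  // prime multiplier spreads sizes out
    }
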
Aliaksey Kandratsenka b762b1a492 added sized free benchmarks to malloc_bench 2017-05-14 19:04:55 -07:00
Aliaksey Kandratsenka 71ffc1cd6b added free lists randomization step to malloc_bench 2017-05-14 19:04:55 -07:00
Eugene 86ce69d77f Update binary_trees.cc 2017-04-16 13:26:20 -07:00
Aliaksey Kandratsenka 0fb6dd8aa3 added binary_trees benchmark 2015-11-21 18:17:21 -08:00
Aliaksey Kandratsenka 962aa53c55 added more fastpath microbenchmarks
This also makes them output nicer results: every benchmark is run 3
times, and the iteration duration is printed for every run.

While this is still very synthetic and unrepresentative of malloc
performance as a whole, it exercises more situations in the tcmalloc
fast path. So it is a step forward.
2015-10-17 20:34:19 -07:00
Aliaksey Kandratsenka 64e0133901 added trivial malloc fast-path benchmark
While this is not a good representation of real-world production malloc
behavior, it is representative of the length (instruction-wise as well
as cycle-wise) of the fast path. So this is better than nothing.
2015-08-02 16:53:19 -07:00
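
A minimal sketch of such a fast-path microbenchmark (illustrative, not the actual benchmark code): allocate and immediately free the same small size in a tight loop, so every iteration stays on the allocator's fast path.

    #include <cstdint>
    #include <cstdlib>

    // Illustrative fast-path loop: the same small size every time, freed right
    // away, so the thread-local cache can serve every request. The empty asm
    // (GCC/Clang extended asm) keeps the compiler from eliding the pair
    // (see the clang fix above).
    void bench_malloc_fastpath(uint64_t iterations) {
      for (uint64_t i = 0; i < iterations; i++) {
        void* p = malloc(32);
        asm volatile("" : : "r"(p) : "memory");
        free(p);
      }
    }
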