Commit Graph

17 Commits

Author SHA1 Message Date
Xiang.Lin 717bf724a5 Add heap profiler support for QNX 2023-10-30 19:30:37 -04:00
Aliaksey Kandratsenka e5ac219780 restore unwind-bench
We previously deleted it, since it wasn't portable enough. But the
unportable bits are now ifdef-ed out, so we can bring it back.
2023-07-02 22:30:00 -04:00
Aliaksey Kandratsenka 7dd1b82378 simplify project by making it C++-only
I.e. no need for any AC_LANG_PUSH stuff in configure. Most usefully,
only CXXFLAGS needs to be set now when you need to tweak compile
flags.
2023-07-02 22:30:00 -04:00
Aliaksey Kandratsenka 90eff0637b upgrade malloc bench away from std::random_shuffle
C++17 dropped it, so we use std::shuffle (available since C++11)
instead. Fixes github issue #1378
2023-07-02 21:28:30 -04:00
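The replacement described above looks roughly like this sketch (the helper name and seed handling are hypothetical, not taken from the benchmark code):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// std::random_shuffle was removed in C++17. std::shuffle (available since
// C++11) takes an explicit uniform random bit generator instead of relying
// on rand()-style global state.
std::vector<int> shuffled(std::vector<int> v, unsigned seed) {
  std::mt19937 rng(seed);
  std::shuffle(v.begin(), v.end(), rng);
  return v;
}
```

Passing the generator explicitly also makes runs reproducible for a fixed seed, which matters for benchmarks.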
Aliaksey Kandratsenka b501806f2a fix malloc_bench on clang
We got bitten again by the compiler optimizing out the free(malloc(sz))
combination. We replace calls to malloc/free with calls to the global
operator new/delete functions. The standard apparently forbids those
from being optimized out.
2023-07-02 21:28:30 -04:00
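The pattern in question can be sketched as below (function names are hypothetical): compilers may legally elide a paired free(malloc(sz)), but direct calls to the replaceable global allocation functions are not removed.

```cpp
#include <cstddef>
#include <new>

// Calling ::operator new / ::operator delete directly acts as an
// optimization barrier for the allocation: unlike malloc/free pairs
// (and new-expressions, which C++14 allocation elision may merge),
// direct calls to the replaceable global functions are kept.
void* bench_alloc(std::size_t sz) { return ::operator new(sz); }
void bench_free(void* p) { ::operator delete(p); }
```

This keeps the allocator's fast path actually executing in each benchmark iteration instead of measuring an empty loop.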
Aliaksey Kandratsenka ff68bcab60 [benchmark] detect iterations overflow
If for some reason the benchmark function is too fast (e.g. when it got
optimized out to nothing), we'd previously hang in an infinite loop. Now
we catch this condition when the iteration count overflows.
2023-07-02 21:28:30 -04:00
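A minimal sketch of such a guard (names hypothetical, not the benchmark's actual code): benchmark loops typically double the iteration count until a run takes long enough, so if the measured function costs nothing the count doubles forever unless the would-be overflow is detected.

```cpp
#include <cstddef>
#include <cstdint>

// Grow the iteration count for the next timing run. Returns false when
// doubling would overflow size_t, i.e. the benchmarked function is
// suspiciously fast (likely optimized out entirely).
bool grow_iterations(std::size_t* iters) {
  if (*iters > SIZE_MAX / 2) {
    return false;  // next doubling would overflow: abort instead of hanging
  }
  *iters *= 2;
  return true;
}
```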
Aliaksey Kandratsenka 5eec9d0ae3 Drop not very portable and not very useful unwind benchmark. 2018-10-07 08:17:04 -07:00
Gabriel Marin 3af509d4f9 benchmark: use angle brackets to include ucontext.h
Using quotes for a system header file fails a presubmit check in Chromium.
2018-08-05 16:15:12 -07:00
Aliaksey Kandratsenka 8b9728b023 add memalign benchmark to malloc_bench 2017-11-30 18:14:11 +00:00
Aliaksey Kandratsenka 5ac82ec5b9 added stacktrace capturing benchmark 2017-05-29 14:57:13 -07:00
Aliaksey Kandratsenka a2550b6309 turn bench_fastpath_throughput into actual throughput benchmark
Previously we bumped the size by 16 between iterations, but for many
size classes that put subsequent iterations into the same size
class. Multiplying by a prime number randomizes sizes more, which speeds
up this benchmark on at least modern x86.
2017-05-14 19:04:55 -07:00
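The idea can be sketched as follows (the constants and function name here are illustrative assumptions, not the benchmark's actual values): a fixed +16 step often lands successive requests in the same tcmalloc size class, while multiplying by a prime and wrapping spreads requests across many classes.

```cpp
#include <cstddef>

// Produce the next request size from the current one. Multiplying by a
// prime and reducing modulo the range scatters sizes across size classes;
// the +8 keeps the result nonzero.
std::size_t next_size(std::size_t size, std::size_t max_size) {
  return (size * 977) % max_size + 8;  // 977 is prime
}
```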
Aliaksey Kandratsenka b762b1a492 added sized free benchmarks to malloc_bench 2017-05-14 19:04:55 -07:00
Aliaksey Kandratsenka 71ffc1cd6b added free lists randomization step to malloc_bench 2017-05-14 19:04:55 -07:00
Eugene 86ce69d77f Update binary_trees.cc 2017-04-16 13:26:20 -07:00
Aliaksey Kandratsenka 0fb6dd8aa3 added binary_trees benchmark 2015-11-21 18:17:21 -08:00
Aliaksey Kandratsenka 962aa53c55 added more fastpath microbenchmarks
This also makes them output nicer results: every benchmark is run 3
times and the iteration duration is printed for every run.

While this is still very synthetic and unrepresentative of malloc
performance as a whole, it exercises more situations in the tcmalloc
fast path. So it is a step forward.
2015-10-17 20:34:19 -07:00
Aliaksey Kandratsenka 64e0133901 added trivial malloc fast-path benchmark
While this is not a good representation of real-world production malloc
behavior, it is representative of the length (instruction-wise as well
as cycle-wise) of the fast path. So this is better than nothing.
2015-08-02 16:53:19 -07:00