diff --git a/docs/html/cpu_profiler.html b/docs/html/cpu_profiler.html new file mode 100644 index 0000000..f05d5ec --- /dev/null +++ b/docs/html/cpu_profiler.html @@ -0,0 +1,409 @@ +
To install the CPU profiler into your executable, add -lprofiler to +the link-time step for your executable. (It's also probably possible +to add in the profiler at run-time using LD_PRELOAD, but this isn't +necessarily recommended.)
+ +This does not turn on CPU profiling; it just inserts the code. +For that reason, it's practical to just always link -lprofiler into a +binary while developing; that's what we do at Google. (However, since +any user can turn on the profiler by setting an environment variable, +it's not necessarily recommended to install profiler-linked binaries +into a production, running system.)
+ + +There are two alternatives to actually turn on CPU profiling for a +given run of an executable:
+ +$ CPUPROFILE=/tmp/profile /usr/local/netscape # sh + % setenv CPUPROFILE /tmp/profile; /usr/local/netscape # csh ++ OR + +
In Linux 2.6 and above, profiling works correctly with threads, +automatically profiling all threads. In Linux 2.4, profiling only +profiles the main thread (due to a kernel bug involving itimers and +threads). Profiling works correctly with sub-processes: each child +process gets its own profile with its own name (generated by combining +CPUPROFILE with the child's process id).
+ +For security reasons, CPU profiling will not write to a file -- and +is thus not usable -- for setuid programs.
+ +In addition to the environment variable CPUPROFILE
,
+which determines where profiles are written, there are several
+environment variables which control the performance of the CPU
+profile.
PROFILEFREQUENCY=x |
+ How many interrupts/second the cpu-profiler samples. + | +
pprof is the script used to analyze a profile. It has many output +modes, both textual and graphical. Some give just raw numbers, much +like the -pg output of gcc, and others show the data in the form of a +dependency graph.
+ +pprof requires perl5 to be installed to run. It also +requires dot to be installed for any of the graphical output routines, +and gv to be installed for --gv mode (described below).
+ +Here are some ways to call pprof. These are described in more +detail below.
+ +% pprof "program" "profile" + Generates one line per procedure + +% pprof --gv "program" "profile" + Generates annotated call-graph and displays via "gv" + +% pprof --gv --focus=Mutex "program" "profile" + Restrict to code paths that involve an entry that matches "Mutex" + +% pprof --gv --focus=Mutex --ignore=string "program" "profile" + Restrict to code paths that involve an entry that matches "Mutex" + and does not match "string" + +% pprof --list=IBF_CheckDocid "program" "profile" + Generates source listing of all routines with at least one + sample that match the --list= pattern. The listing is + annotated with the flat and cumulative sample counts at each line. + +% pprof --disasm=IBF_CheckDocid "program" "profile" + Generates disassembly listing of all routines with at least one + sample that match the --disasm= pattern. The listing is + annotated with the flat and cumulative sample counts at each PC value. +
In the various graphical modes of pprof, the output is a call graph +annotated with timing information, like so:
[annotated call-graph image]
Each node represents a procedure. +The directed edges indicate caller to callee relations. Each node is +formatted as follows:
+ +Class Name +Method Name +local (percentage) +of cumulative (percentage) +
The last one or two lines contain the timing information. (The +profiling is done via a sampling method, where by default we take 100 +samples a second. Therefore one unit of time in the output corresponds +to about 10 milliseconds of execution time.) The "local" time is the +time spent executing the instructions directly contained in the +procedure (and in any other procedures that were inlined into the +procedure). The "cumulative" time is the sum of the "local" time and +the time spent in any callees. If the cumulative time is the same as +the local time, it is not printed. + +
For instance, the timing information for test_main_thread() +indicates that 155 units (about 1.55 seconds) were spent executing the +code in test_main_thread() and 200 units were spent while executing +test_main_thread() and its callees such as snprintf().
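The unit arithmetic can be sketched as follows; SamplesToMillis is a hypothetical helper (not part of the profiler), assuming the default rate of 100 samples/second:

```cpp
#include <cassert>

// Hypothetical helper (not part of the profiler) showing the unit
// arithmetic: at the default 100 samples/second, each sample stands
// for roughly 10 ms of execution time.
int SamplesToMillis(int samples) {
  const int kSamplesPerSecond = 100;
  return samples * (1000 / kSamplesPerSecond);
}
```

So the 155 local units above correspond to about 1550 ms, and the 200 cumulative units to about 2000 ms.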
+ +The size of the node is proportional to the local count. The +percentage displayed in the node corresponds to the count divided by +the total run time of the program (that is, the cumulative count for +main()).
+ +An edge from one node to another indicates a caller to callee +relationship. Each edge is labelled with the time spent by the callee +on behalf of the caller. E.g., the edge from test_main_thread() to +snprintf() indicates that of the 200 samples in +test_main_thread(), 37 are because of calls to snprintf().
+ +Note that test_main_thread() has an edge to vsnprintf(), even +though test_main_thread() doesn't call that function directly. This +is because the code was compiled with -O2; the profile reflects the +optimized control flow.
+ +/tmp/profiler2_unittest + Total samples: 202 + Focusing on: 202 + Dropped nodes with <= 1 abs(samples) + Dropped edges with <= 0 samples ++ +This section contains the name of the program, and the total samples +collected during the profiling run. If the --focus option is on (see +the Focus section below), the legend also +contains the number of samples being shown in the focused display. +Furthermore, some unimportant nodes and edges are dropped to reduce +clutter. The characteristics of the dropped nodes and edges are also +displayed in the legend. + +
You can ask pprof to generate a display focused on a particular +piece of the program. You specify a regular expression. Any portion +of the call-graph that is on a path which contains at least one node +matching the regular expression is preserved. The rest of the +call-graph is dropped on the floor. For example, you can focus on the +vsnprintf() libc call in profiler2_unittest as follows:
+ +% pprof --gv --focus=vsnprintf /tmp/profiler2_unittest test.prof ++ +
[call-graph focused on vsnprintf (image)]
+Similarly, you can supply the --ignore option to ignore +samples that match a specified regular expression. E.g., +if you are interested in everything except calls to snprintf(), +you can say: +
% pprof --gv --ignore=snprintf /tmp/profiler2_unittest test.prof ++ +
+
--text |
+ + Produces a textual listing. This is currently the default + since it does not need access to an X display, or + dot or gv. However, if you + have these programs installed, you will probably be + happier with the --gv output. + | +
--gv |
+ + Generates annotated call-graph, converts to postscript, and + displays via gv. + | +
--dot |
+ + Generates the annotated call-graph in dot format and + emits to stdout. + | +
--ps |
+ + Generates the annotated call-graph in Postscript format and + emits to stdout. + | +
--gif |
+ + Generates the annotated call-graph in GIF format and + emits to stdout. + | +
--list=<regexp> |
+
+ Outputs source-code listing of routines whose + name matches <regexp>. Each line + in the listing is annotated with flat and cumulative + sample counts. + +In the presence of inlined calls, the samples + associated with inlined code tend to get assigned + to a line that follows the location of the + inlined call. A more precise accounting can be + obtained by disassembling the routine using the + --disasm flag. + |
+
--disasm=<regexp> |
+ + Generates disassembly of routines that match + <regexp>, annotated with flat and + cumulative sample counts and emits to stdout. + | +
By default, pprof produces one entry per procedure. However you can +use one of the following options to change the granularity of the +output. The --files option seems to be particularly useless, and may +be removed eventually.
+ +--addresses |
+ + Produce one node per program address. + | +
--lines |
+ + Produce one node per source line. + | +
--functions |
+ + Produce one node per function (this is the default). + | +
--files |
+ + Produce one node per source file. + | +
Some nodes and edges are dropped to reduce clutter in the output +display. The following options control this effect:
+ +--nodecount=<n> |
+ + This option controls the number of displayed nodes. The nodes + are first sorted by decreasing cumulative count, and then only + the top N nodes are kept. The default value is 80. + | +
--nodefraction=<f> |
+ + This option provides another mechanism for discarding nodes + from the display. If the cumulative count for a node is + less than this option's value multiplied by the total count + for the profile, the node is dropped. The default value + is 0.005; i.e. nodes that account for less than + half a percent of the total time are dropped. A node + is dropped if either this condition is satisfied, or the + --nodecount condition is satisfied. + | +
--edgefraction=<f> |
+ + This option controls the number of displayed edges. First of all, + an edge is dropped if either its source or destination node is + dropped. Otherwise, the edge is dropped if the sample + count along the edge is less than this option's value multiplied + by the total count for the profile. The default value is + 0.001; i.e., edges that account for less than + 0.1% of the total time are dropped. + | +
--focus=<re> |
+ + This option controls what region of the graph is displayed + based on the regular expression supplied with the option. + For any path in the callgraph, we check all nodes in the path + against the supplied regular expression. If none of the nodes + match, the path is dropped from the output. + | +
--ignore=<re> |
+ + This option controls what region of the graph is displayed + based on the regular expression supplied with the option. + For any path in the callgraph, we check all nodes in the path + against the supplied regular expression. If any of the nodes + match, the path is dropped from the output. + | +
The dropped edges and nodes account for some count mismatches in +the display. For example, the cumulative count for +snprintf() in the first diagram above was 41. However the local +count (1) and the count along the outgoing edges (12+1+20+6) add up to +only 40.
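The path semantics of --focus and --ignore can be sketched as follows; MatchesAny and KeepPath are hypothetical helpers for illustration, not part of pprof:

```cpp
#include <cassert>
#include <regex>
#include <string>
#include <vector>

// Sketch of pprof's --focus/--ignore path semantics (hypothetical
// helpers, not pprof's actual code).
bool MatchesAny(const std::vector<std::string>& path, const std::string& re) {
  std::regex pattern(re);
  for (const std::string& node : path) {
    if (std::regex_search(node, pattern)) return true;
  }
  return false;
}

// A path is kept if some node matches the focus expression (when one
// is given) and no node matches the ignore expression.
bool KeepPath(const std::vector<std::string>& path,
              const std::string& focus, const std::string& ignore) {
  if (!focus.empty() && !MatchesAny(path, focus)) return false;
  if (!ignore.empty() && MatchesAny(path, ignore)) return false;
  return true;
}
```
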
Due to a hack we make to work around a possible gcc bug, your profiles may end up named strangely if the first character of your CPUPROFILE variable has ascii value greater than 127. This should be exceedingly rare, but if you need to use such a name, just prepend ./ to your filename: CPUPROFILE=./Ägypten.
++You can heap-check any program that has the tcmalloc library linked +in. No recompilation is necessary to use the heap checker. +
+ ++In order to catch all heap leaks, tcmalloc must be linked last into +your executable. The heap checker may mischaracterize some memory +accesses in libraries listed after it on the link line. For instance, +it may report these libraries as leaking memory when they're not. +(See the source code for more details.) +
+ ++It's safe to link in tcmalloc even if you don't expect to +heap-check your program. Your programs will not run any slower +as long as you don't use any of the heap-checker features. +
+ ++You can run the heap checker on applications you didn't compile +yourself, by using LD_PRELOAD: +
+$ LD_PRELOAD="/usr/lib/libtcmalloc.so" HEAPCHECK=normal++
+We don't necessarily recommend this mode of usage. +
+ +There are two alternatives to actually turn on heap checking for a +given run of an executable.
Define the environment variable HEAPCHECK to the type of heap-checking you want. For instance, to heap-check /bin/ls:
+ $ HEAPCHECK=normal /bin/ls + % setenv HEAPCHECK normal; /bin/ls # csh ++ OR + +
In your code, bracket the code you want checked by declaring a HeapLeakChecker
object
+ (which takes a descriptive label as an argument), and calling
+ check.NoLeaks()
at the end of the code you want
+ checked. This will verify no more memory is allocated at the
+ end of the code segment than was allocated in the beginning. To
+ actually turn on the heap-checking, set the environment variable
+ HEAPCHECK to local
.
+
+
+
+Here is an example of the second usage. The following code will
+die if Foo()
leaks any memory
+(i.e. it allocates memory that is not freed by the time it returns):
+
HeapLeakChecker checker("foo"); + Foo(); + assert(checker.NoLeaks()); ++ +
+When the checker
object is allocated, it creates
+one heap profile. When checker.NoLeaks()
is invoked,
+it creates another heap profile and compares it to the previously
+created profile. If the new profile indicates memory growth
+(or any memory allocation change if one
+uses checker.SameHeap()
instead), NoLeaks()
+will return false and the program will abort. An error message will
+also be printed out saying how the pprof
command can be run
+to get a detailed analysis of the actual leaks.
+
+See the comments for the HeapLeakChecker
class in
+heap-checker.h
and the code in
+heap-checker_unittest.cc
for more information and
+examples. (TODO: document it all here instead!)
+
+IMPORTANT NOTE: pthreads handling is currently incomplete. +Heap leak checks will fail with bogus leaks if there are pthreads live +at construction or leak checking time. One solution, for global +heap-checking, is to make sure all threads but the main thread have +exited at program-end time. We hope (as of March 2005) to have a fix +soon. +
+ ++Sometimes your code has leaks that you know about and are willing to +accept. You would like the heap checker to ignore them when checking +your program. You can do this by bracketing the code in question with +an appropriate heap-checking object: +
+#include "heap-checker.h" + + ... + void *mark = HeapLeakChecker::GetDisableChecksStart(); + <leaky code> + HeapLeakChecker::DisableChecksToHereFrom(mark); +
+Some libc routines allocate memory, and may need to be 'disabled' in +this way. As time goes on, we hope to encode proper handling of +these routines into the heap-checker library code, so applications +needn't worry about them, but that process is not yet complete. +
+ ++You can profile any program that has the tcmalloc library linked +in. No recompilation is necessary to use the heap profiler. +
+ +It's safe to link in tcmalloc even if you don't expect to +heap-profile your program. Your programs will not run any slower +as long as you don't use any of the heap-profiler features. +
+ ++You can run the heap profiler on applications you didn't compile +yourself, by using LD_PRELOAD: +
+$ LD_PRELOAD="/usr/lib/libtcmalloc.so" HEAPPROFILE=...++
+We don't necessarily recommend this mode of usage. +
+ + ++Define the environment variable HEAPPROFILE to the filename to dump the +profile to. For instance, to profile /usr/local/netscape: +
+$ HEAPPROFILE=/tmp/profile /usr/local/netscape # sh + % setenv HEAPPROFILE /tmp/profile; /usr/local/netscape # csh ++ +
Profiling also works correctly with sub-processes: each child +process gets its own profile with its own name (generated by combining +HEAPPROFILE with the child's process id).
For security reasons, heap profiling will not write to a file -- +and is thus not usable -- for setuid programs.
+ + + ++If heap-profiling is turned on in a program, the program will periodically +write profiles to the filesystem. The sequence of profiles will be named: +
+<prefix>.0000.heap + <prefix>.0001.heap + <prefix>.0002.heap + ... ++
+where <prefix>
is the value supplied in
+HEAPPROFILE
. Note that if the supplied prefix
+does not start with a /
, the profile files will be
+written to the program's working directory.
+
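The naming scheme can be sketched as follows; ProfileName is a hypothetical helper, not part of the profiler library:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical helper (not part of the profiler library) showing the
// <prefix>.NNNN.heap naming scheme with zero-padded sequence numbers.
std::string ProfileName(const std::string& prefix, int seq) {
  char suffix[32];
  std::snprintf(suffix, sizeof(suffix), ".%04d.heap", seq);
  return prefix + suffix;
}
```
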
+By default, a new profile file is written after every 1GB of +allocation. The profile-writing interval can be adjusted by calling +HeapProfilerSetAllocationInterval() from your program. This takes one +argument: a numeric value that indicates the number of bytes of allocation +between each profile dump. +
+ +
+You can also generate profiles from specific points in the program
+by inserting a call to HeapProfile()
. Example:
+
extern const char* HeapProfile(); + const char* profile = HeapProfile(); + fputs(profile, stdout); + free(const_cast<char*>(profile)); ++ +
malloc
, calloc
, realloc
, or
+new
.
+
+pprof
tool. The pprof
tool can print both
+CPU usage and heap usage information. It is documented in detail
+on the CPU Profiling page.
+Heap-profile-specific flags and usage are explained below.
+
+
+Here are some examples. These examples assume the binary is named
+gfs_master
, and a sequence of heap profile files can be
+found in files named:
+
profile.0001.heap + profile.0002.heap + ... + profile.0100.heap ++ +
% pprof --gv gfs_master profile.0100.heap ++ +This command will pop up a
gv
window that displays
+the profile information as a directed graph. Here is a portion
+of the resulting output:
+
[heap profile call-graph image]
GFS_MasterChunk::AddServer
accounts for 255.6 MB
+ of the live memory, which is 25% of the total live memory.
+GFS_MasterChunkTable::UpdateState
is directly
+ accountable for 176.2 MB of the live memory (i.e., it directly
+ allocated 176.2 MB that has not been freed yet). Furthermore,
+ it and its callees are responsible for 729.9 MB. The
+ labels on the outgoing edges give a good indication of the
+ amount allocated by each callee.
+
+You often want to skip allocations during the initialization phase of
+a program so you can find gradual memory leaks. One simple way to do
+this is to compare two profiles -- both collected after the program
+has been running for a while. Specify the name of the first profile
+using the --base
option. Example:
+
% pprof --base=profile.0004.heap gfs_master profile.0100.heap ++ +
+The memory-usage in profile.0004.heap
will be subtracted from
+the memory-usage in profile.0100.heap
and the result will
+be displayed.
+
% pprof gfs_master profile.0100.heap + 255.6 24.7% 24.7% 255.6 24.7% GFS_MasterChunk::AddServer + 184.6 17.8% 42.5% 298.8 28.8% GFS_MasterChunkTable::Create + 176.2 17.0% 59.5% 729.9 70.5% GFS_MasterChunkTable::UpdateState + 169.8 16.4% 75.9% 169.8 16.4% PendingClone::PendingClone + 76.3 7.4% 83.3% 76.3 7.4% __default_alloc_template::_S_chunk_alloc + 49.5 4.8% 88.0% 49.5 4.8% hashtable::resize + ... ++ +
(The kth entry in the third column is the sum of the first k entries in the second column.)

As with CPU profiles, you can use the --focus and --ignore options to restrict the display. For example, the following command shows only those paths in the call-graph that pass through an entry matching the regular expression DataBuffer:

% pprof --gv --focus=DataBuffer gfs_master profile.0100.heap

Similarly, the following command discards all paths in the call-graph that pass through an entry matching the regular expression DataBuffer:

% pprof --gv --ignore=DataBuffer gfs_master profile.0100.heap
+All of the previous examples have displayed the amount of in-use
+space. I.e., the number of bytes that have been allocated but not
+freed. You can also get other types of information by supplying
+a flag to pprof
:
+
--inuse_space |
+ + Display the number of in-use megabytes (i.e. space that has + been allocated but not freed). This is the default. + | +
--inuse_objects |
+ + Display the number of in-use objects (i.e. number of + objects that have been allocated but not freed). + | +
--alloc_space |
+ + Display the number of allocated megabytes. This includes + the space that has since been de-allocated. Use this + if you want to find the main allocation sites in the + program. + | +
--alloc_objects |
+ + Display the number of allocated objects. This includes + the objects that have since been de-allocated. Use this + if you want to find the main allocation sites in the + program. + | + +
+ Heap profiling requires the use of libtcmalloc. This requirement + may be removed in a future version of the heap profiler, and the + heap profiler separated out into its own library. +
+ + If the program linked in a library that was not compiled + with enough symbolic information, all samples associated + with the library may be charged to the last symbol found + in the program before the library. This will artificially + inflate the count for that symbol. +
+ + If you run the program on one machine, and profile it on another, + and the shared libraries are different on the two machines, the + profiling output may be confusing: samples that fall within + the shared libraries may be assigned to arbitrary procedures. +
+ ++ Several libraries, such as some STL implementations, do their own + memory management. This may cause strange profiling results. We + have code in libtcmalloc to cause STL to use tcmalloc for memory + management (which in our tests is better than STL's internal + management), though it only works for some STL implementations. +
+ ++ If your program forks, the children will also be profiled (since + they inherit the same HEAPPROFILE setting). Each process is + profiled separately; to distinguish the child profiles from the + parent profile and from each other, all children will have their + process-id attached to the HEAPPROFILE name. +
+ +
+ Due to a hack we make to work around a possible gcc bug, your
+ profiles may end up named strangely if the first character of
+ your HEAPPROFILE variable has ascii value greater than 127. This
+ should be exceedingly rare, but if you need to use such a name,
+ just prepend ./
to your filename:
+ HEAPPROFILE=./Ägypten
.
+
+TCMalloc also reduces lock contention for multi-threaded programs. For small objects, there is virtually zero contention. For large objects, TCMalloc tries to use fine grained and efficient spinlocks. ptmalloc2 also reduces lock contention by using per-thread arenas, but ptmalloc2 has a big problem: memory can never move from one arena to another. This can lead to huge amounts of wasted space. For example, in one Google application, the first phase would allocate approximately 300MB of memory for its data structures. When the first phase finished, a second phase would be started in the same address space. If this second phase was assigned a different arena than the one used by the first phase, this phase would not reuse any of the memory left after the first phase and would add another 300MB to the address space. Similar memory blowup problems were also noticed in other applications. + +
+Another benefit of TCMalloc is space-efficient representation of small
+objects. For example, N 8-byte objects can be allocated while using
+space approximately 8N * 1.01
bytes. I.e., a one-percent
+space overhead. ptmalloc2 uses a four-byte header for each object and
+(I think) rounds up the size to a multiple of 8 bytes and ends up
+using 16N
bytes.
+
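A back-of-the-envelope sketch of that arithmetic; TcmallocBytes and PtmallocBytes are hypothetical helpers re-deriving the estimates from the paragraph above:

```cpp
#include <cassert>

// Hypothetical helpers re-deriving the space estimates above for N
// 8-byte objects: roughly 8N * 1.01 bytes under TCMalloc, vs 16N
// bytes for a 4-byte-header allocator that rounds sizes up to a
// multiple of 8 (8 + 4 = 12 bytes, rounded up to 16).
long TcmallocBytes(long n) { return n * 8 * 101 / 100; }
long PtmallocBytes(long n) { return n * 16; }
```
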
+
+
To use TCMalloc, just link tcmalloc into your application via the +"-ltcmalloc" linker flag.
+ ++You can use tcmalloc in applications you didn't compile yourself, by +using LD_PRELOAD: +
+$ LD_PRELOAD="/usr/lib/libtcmalloc.so"++
+LD_PRELOAD is tricky, and we don't necessarily recommend this mode of +usage. +
+ +TCMalloc includes a heap checker +and heap profiler as well.
+ +If you'd rather link in a version of TCMalloc that does not include
+the heap profiler and checker (perhaps to reduce binary size for a
+static binary), you can link in libtcmalloc_minimal
+instead.
+TCMalloc treats objects with size <= 32K ("small" objects) +differently from larger objects. Large objects are allocated +directly from the central heap using a page-level allocator +(a page is a 4K aligned region of memory). I.e., a large object +is always page-aligned and occupies an integral number of pages. + +
+A run of pages can be carved up into a sequence of small objects, each +equally sized. For example a run of one page (4K) can be carved up +into 32 objects of size 128 bytes each. + +
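The carving arithmetic can be sketched as follows; ObjectsPerRun is a hypothetical helper, assuming the 4K page size stated above:

```cpp
#include <cassert>

// Hypothetical helper showing the carving arithmetic, assuming the
// 4K page size stated above.
int ObjectsPerRun(int num_pages, int object_size) {
  const int kPageSize = 4096;
  return (num_pages * kPageSize) / object_size;
}
```
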
+A thread cache contains a singly linked list of free objects per size-class. +
+If the free list is empty: (1) We fetch a bunch of objects from a +central free list for this size-class (the central free list is shared +by all threads). (2) Place them in the thread-local free list. (3) +Return one of the newly fetched objects to the application. + +
+If the central free list is also empty: (1) We allocate a run of pages +from the central page allocator. (2) Split the run into a set of +objects of this size-class. (3) Place the new objects on the central +free list. (4) As before, move some of these objects to the +thread-local free list. + +
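The refill chain described above (thread-local list, then central list) can be sketched with a toy model; ToyCache is purely illustrative and bears no relation to TCMalloc's actual data structures:

```cpp
#include <cassert>
#include <vector>

// Toy model of the refill chain described above.  ToyCache is purely
// illustrative (objects are stand-in integers, and the model assumes
// the central list can always satisfy a refill).
struct ToyCache {
  std::vector<int> local;     // thread-local free list for one size-class
  std::vector<int>* central;  // central free list, shared by all threads
  int batch;                  // number of objects fetched per refill

  int Allocate() {
    if (local.empty()) {
      // Fetch a bunch of objects from the central free list and
      // place them in the thread-local free list.
      for (int i = 0; i < batch && !central->empty(); ++i) {
        local.push_back(central->back());
        central->pop_back();
      }
    }
    // Return one of the (possibly newly fetched) objects.
    int obj = local.back();
    local.pop_back();
    return obj;
  }
};
```
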
The central page heap is an array of free lists. For k < 256
, the
+k
th entry is a free list of runs that consist of
+k
pages. The 256
th entry is a free list of
+runs that have length >= 256
pages:
+
+An allocation for k
pages is satisfied by looking in the
+k
th free list. If that free list is empty, we look in
+the next free list, and so forth. Eventually, we look in the last
+free list if necessary. If that fails, we fetch memory from the
+system (using sbrk, mmap, or by mapping in portions of /dev/mem).
+
+
+If an allocation for k
pages is satisfied by a run
+of pages of length > k
, the remainder of the
+run is re-inserted back into the appropriate free list in the
+page heap.
+
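The free-list search and run-splitting steps can be sketched with a toy model; ToyPageHeap is illustrative only — it tracks run lengths, not actual memory, and omits the final fetch from the system:

```cpp
#include <cassert>
#include <map>

// Toy model of the page heap's free-list search: free_lists[k] counts
// free runs of exactly k pages.  An allocation for k pages scans list
// k, then the next non-empty list, and so on; a longer run is split
// and the remainder reinserted.  (Illustrative only.)
struct ToyPageHeap {
  std::map<int, int> free_lists;  // run length (pages) -> number of free runs

  // Returns true if the request could be satisfied from the free lists;
  // false means memory would be fetched from the system instead.
  bool Allocate(int k) {
    for (auto it = free_lists.lower_bound(k); it != free_lists.end(); ++it) {
      if (it->second == 0) continue;
      it->second--;
      if (it->first > k) free_lists[it->first - k]++;  // reinsert remainder
      return true;
    }
    return false;
  }
};
```
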
+
The heap managed by TCMalloc consists of a set of pages. A run of contiguous pages is represented by a Span object. A span
object. A span
+can either be allocated, or free. If free, the span
+is one of the entries in a page heap linked-list. If allocated, it is
+either a large object that has been handed off to the application, or
+a run of pages that have been split up into a sequence of small
+objects. If split into small objects, the size-class of the objects
+is recorded in the span.
+
++A central array indexed by page number can be used to find the span to +which a page belongs. For example, span a below occupies 2 +pages, span b occupies 1 page, span c occupies 5 +pages and span d occupies 3 pages. +
+If the object is large, the span tells us the range of pages covered
+by the object. Suppose this range is [p,q]
. We also
+look up the spans for pages p-1
and q+1
. If
+either of these neighboring spans are free, we coalesce them with the
+[p,q]
span. The resulting span is inserted into the
+appropriate free list in the page heap.
+
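The coalescing step can be sketched with a toy model; ToySpan and Coalesce are illustrative only, and the real implementation also updates the central page map, which is omitted here:

```cpp
#include <cassert>
#include <map>

// Toy model of deallocation-time coalescing (illustrative only).
struct ToySpan {
  int start;    // first page covered by the span
  int len;      // number of pages
  bool is_free;
};

using PageMap = std::map<int, ToySpan*>;  // page number -> covering span

// Free `span` covering pages [p,q], merging it with any free
// neighboring spans found via the page map.
ToySpan* Coalesce(ToySpan* span, const PageMap& pagemap) {
  auto left = pagemap.find(span->start - 1);  // span covering page p-1
  if (left != pagemap.end() && left->second->is_free) {
    span->start = left->second->start;
    span->len += left->second->len;
  }
  auto right = pagemap.find(span->start + span->len);  // page q+1
  if (right != pagemap.end() && right->second->is_free) {
    span->len += right->second->len;
  }
  span->is_free = true;
  return span;
}
```
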
+
+An object is allocated from a central free list by removing the +first entry from the linked list of some span. (If all spans +have empty linked lists, a suitably sized span is first allocated +from the central page heap.) + +
+An object is returned to a central free list by adding it to the +linked list of its containing span. If the linked list length now +equals the total number of small objects in the span, this span is now +completely free and is returned to the page heap. + +
+We walk over all free lists in the cache and move some number of +objects from the free list to the corresponding central list. + +
+The number of objects to be moved from a free list is determined using
+a per-list low-water-mark L
. L
records the
+minimum length of the list since the last garbage collection. Note
+that we could have shortened the list by L
objects at the
+last garbage collection without requiring any extra accesses to the
+central list. We use this past history as a predictor of future
+accesses and move L/2
objects from the thread cache free
+list to the corresponding central free list. This algorithm has the
+nice property that if a thread stops using a particular size, all
+objects of that size will quickly move from the thread cache to the
+central free list where they can be used by other threads.
+
+
+t-test1 (included in google-perftools/tests/tcmalloc, and compiled +as ptmalloc_unittest1) was run with varying numbers of threads +(1-20) and maximum allocation sizes (64 bytes - 32Kbytes). These tests +were run on a 2.4GHz dual Xeon system with hyper-threading enabled, +using Linux glibc-2.3.2 from RedHat 9, with one million operations per +thread in each test. In each case, the test was run once normally, and +once with LD_PRELOAD=libtcmalloc.so. + +
The graphs below show the performance of TCMalloc vs PTMalloc2 for +several different metrics. Firstly, total operations (millions) per elapsed +second vs max allocation size, for varying numbers of threads. The raw +data used to generate these graphs (the output of the "time" utility) +is available in t-test1.times.txt. + +
+
[graphs: total operations (millions) per elapsed second vs max allocation size, for varying numbers of threads]
Next, operations (millions) per second of CPU time vs number of threads, for +max allocation size 64 bytes - 128 Kbytes. + +
+
[graphs: operations (millions) per second of CPU time vs number of threads, for varying max allocation sizes]
Here we see again that TCMalloc is both more consistent and more +efficient than PTMalloc2. For max allocation sizes <32K, TCMalloc +typically achieves ~2-2.5 million ops per second of CPU time with a +large number of threads, whereas PTMalloc achieves generally 0.5-1 +million ops per second of CPU time, with a lot of cases achieving much +less than this figure. Above 32K max allocation size, TCMalloc drops +to 1-1.5 million ops per second of CPU time, and PTMalloc drops almost +to zero for large numbers of threads (i.e. with PTMalloc, lots of CPU +time is being burned spinning waiting for locks in the heavily +multi-threaded case). + +
For some systems, TCMalloc may not work correctly with +applications that aren't linked against libpthread.so (or the +equivalent on your OS). It should work on Linux using glibc 2.3, but +other OS/libc combinations have not been tested. + +
TCMalloc may be somewhat more memory hungry than other mallocs, +though it tends not to have the huge blowups that can happen with +other mallocs. In particular, at startup TCMalloc allocates +approximately 6 MB of memory. It would be easy to roll a specialized +version that trades a little bit of speed for more space efficiency. + +
+TCMalloc currently does not return any memory to the system. + +
+Don't try to load TCMalloc into a running binary (e.g., using +JNI in Java programs). The binary will have allocated some +objects using the system malloc, and may try to pass them +to TCMalloc for deallocation. TCMalloc will not be able +to handle such objects. + + + +
diff --git a/docs/images/heap-example1.png b/docs/images/heap-example1.png new file mode 100644 index 0000000..9a14b6f Binary files /dev/null and b/docs/images/heap-example1.png differ diff --git a/docs/images/overview.gif b/docs/images/overview.gif new file mode 100644 index 0000000..43828da Binary files /dev/null and b/docs/images/overview.gif differ diff --git a/docs/images/pageheap.gif b/docs/images/pageheap.gif new file mode 100644 index 0000000..6632981 Binary files /dev/null and b/docs/images/pageheap.gif differ diff --git a/docs/images/pprof-test.gif b/docs/images/pprof-test.gif new file mode 100644 index 0000000..9eeab8a Binary files /dev/null and b/docs/images/pprof-test.gif differ diff --git a/docs/images/pprof-vsnprintf.gif b/docs/images/pprof-vsnprintf.gif new file mode 100644 index 0000000..42a8547 Binary files /dev/null and b/docs/images/pprof-vsnprintf.gif differ diff --git a/docs/images/spanmap.gif b/docs/images/spanmap.gif new file mode 100644 index 0000000..a0627f6 Binary files /dev/null and b/docs/images/spanmap.gif differ diff --git a/docs/images/tcmalloc-opspercpusec.png b/docs/images/tcmalloc-opspercpusec.png new file mode 100644 index 0000000..18715e3 Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec.png differ diff --git a/docs/images/tcmalloc-opspercpusec_002.png b/docs/images/tcmalloc-opspercpusec_002.png new file mode 100644 index 0000000..3a99cbc Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_002.png differ diff --git a/docs/images/tcmalloc-opspercpusec_003.png b/docs/images/tcmalloc-opspercpusec_003.png new file mode 100644 index 0000000..642e245 Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_003.png differ diff --git a/docs/images/tcmalloc-opspercpusec_004.png b/docs/images/tcmalloc-opspercpusec_004.png new file mode 100644 index 0000000..183a77b Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_004.png differ diff --git a/docs/images/tcmalloc-opspercpusec_005.png 
b/docs/images/tcmalloc-opspercpusec_005.png new file mode 100644 index 0000000..3a080de Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_005.png differ diff --git a/docs/images/tcmalloc-opspercpusec_006.png b/docs/images/tcmalloc-opspercpusec_006.png new file mode 100644 index 0000000..6213021 Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_006.png differ diff --git a/docs/images/tcmalloc-opspercpusec_007.png b/docs/images/tcmalloc-opspercpusec_007.png new file mode 100644 index 0000000..48ebdb6 Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_007.png differ diff --git a/docs/images/tcmalloc-opspercpusec_008.png b/docs/images/tcmalloc-opspercpusec_008.png new file mode 100644 index 0000000..db59d61 Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_008.png differ diff --git a/docs/images/tcmalloc-opspercpusec_009.png b/docs/images/tcmalloc-opspercpusec_009.png new file mode 100644 index 0000000..8c0ae6b Binary files /dev/null and b/docs/images/tcmalloc-opspercpusec_009.png differ diff --git a/docs/images/tcmalloc-opspersec.png b/docs/images/tcmalloc-opspersec.png new file mode 100644 index 0000000..d7c79ef Binary files /dev/null and b/docs/images/tcmalloc-opspersec.png differ diff --git a/docs/images/tcmalloc-opspersec_002.png b/docs/images/tcmalloc-opspersec_002.png new file mode 100644 index 0000000..e8a3c9f Binary files /dev/null and b/docs/images/tcmalloc-opspersec_002.png differ diff --git a/docs/images/tcmalloc-opspersec_003.png b/docs/images/tcmalloc-opspersec_003.png new file mode 100644 index 0000000..d45458a Binary files /dev/null and b/docs/images/tcmalloc-opspersec_003.png differ diff --git a/docs/images/tcmalloc-opspersec_004.png b/docs/images/tcmalloc-opspersec_004.png new file mode 100644 index 0000000..37d406d Binary files /dev/null and b/docs/images/tcmalloc-opspersec_004.png differ diff --git a/docs/images/tcmalloc-opspersec_005.png b/docs/images/tcmalloc-opspersec_005.png new file mode 100644 
index 0000000..1093e81 Binary files /dev/null and b/docs/images/tcmalloc-opspersec_005.png differ diff --git a/docs/images/tcmalloc-opspersec_006.png b/docs/images/tcmalloc-opspersec_006.png new file mode 100644 index 0000000..779eec6 Binary files /dev/null and b/docs/images/tcmalloc-opspersec_006.png differ diff --git a/docs/images/tcmalloc-opspersec_007.png b/docs/images/tcmalloc-opspersec_007.png new file mode 100644 index 0000000..da0328a Binary files /dev/null and b/docs/images/tcmalloc-opspersec_007.png differ diff --git a/docs/images/tcmalloc-opspersec_008.png b/docs/images/tcmalloc-opspersec_008.png new file mode 100644 index 0000000..76c125a Binary files /dev/null and b/docs/images/tcmalloc-opspersec_008.png differ diff --git a/docs/images/tcmalloc-opspersec_009.png b/docs/images/tcmalloc-opspersec_009.png new file mode 100644 index 0000000..52d7aee Binary files /dev/null and b/docs/images/tcmalloc-opspersec_009.png differ diff --git a/docs/images/threadheap.gif b/docs/images/threadheap.gif new file mode 100644 index 0000000..c43d0a3 Binary files /dev/null and b/docs/images/threadheap.gif differ