Mirror of https://github.com/prometheus/prometheus (synced 2025-01-13 18:33:34 +00:00)
b02d900e61
Also, clean up some things in the code (especially introduction of the chunkLenWithHeader constant to avoid repeating the same expression all over the place).

Benchmark results:

BEFORE
```
BenchmarkLoadChunksSequentially     5000    283580 ns/op    152143 B/op    312 allocs/op
BenchmarkLoadChunksRandomly        20000     82936 ns/op     39310 B/op     99 allocs/op
BenchmarkLoadChunkDescs            10000    110833 ns/op     15092 B/op    345 allocs/op
```

AFTER
```
BenchmarkLoadChunksSequentially    10000    146785 ns/op    152285 B/op    315 allocs/op
BenchmarkLoadChunksRandomly        20000     67598 ns/op     39438 B/op    103 allocs/op
BenchmarkLoadChunkDescs            20000     99631 ns/op     12636 B/op    192 allocs/op
```

Note that everything is obviously loaded from the page cache (as the benchmark runs thousands of times with very small series files). In a real-world scenario, I expect a larger impact, as the disk operations will more often actually hit the disk. To load ~50 sequential chunks, this reduces the iops from 100 seeks and 100 reads to 1 seek and 1 read.