* Add unit tests for PostingsForMatcher.
* Selector methods are all stateless, so they don't need a reference.
* Be smarter in how we look at matchers.
Look at all matchers to see if a label can be empty.
Optimise Not handling, so i!="2" is a simple lookup
rather than an inverse postings list.
Combine all the Withouts together, rather than
having to subtract each one from all postings.
Change the postings pre-expansion logic to always run before doing a
Without, and only then. Skip it if the postings are already a list.
The initial goal here was that the oft-seen pattern
i=~"something.+",i!="foo",i!="bar" becomes more efficient.
```
benchmark old ns/op new ns/op delta
BenchmarkHeadPostingForMatchers/n="1"-4 5888 6160 +4.62%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-4 7190 6640 -7.65%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-4 6038 5923 -1.90%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-4 6030884 4850525 -19.57%
BenchmarkHeadPostingForMatchers/i=~".*"-4 887377940 230329137 -74.04%
BenchmarkHeadPostingForMatchers/i=~".+"-4 490316101 319931758 -34.75%
BenchmarkHeadPostingForMatchers/i=~""-4 594961991 130279313 -78.10%
BenchmarkHeadPostingForMatchers/i!=""-4 537542388 318751015 -40.70%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-4 10460243 8565195 -18.12%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-4 44964267 8561546 -80.96%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-4 42244885 29137737 -31.03%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-4 35285834 32774584 -7.12%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-4 8951047 8379024 -6.39%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-4 63813335 30672688 -51.93%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-4 45381112 44924397 -1.01%
```
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Use a heap for Next when merging postings, and
pre-compute the result if there are many postings on the
unset path.
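For illustration, a minimal stand-alone sketch of a heap-driven k-way merge over sorted postings lists (the slice-backed representation and function names are hypothetical, not the tsdb Merge types): each Next-style step pops the smallest head in O(log k) rather than scanning every list.
```go
package main

import (
	"container/heap"
	"fmt"
)

// cursor tracks the current position within one sorted postings list.
type cursor struct {
	ids []uint64
	pos int
}

// postingsHeap orders cursors by the ID they currently point at.
type postingsHeap []*cursor

func (h postingsHeap) Len() int            { return len(h) }
func (h postingsHeap) Less(i, j int) bool  { return h[i].ids[h[i].pos] < h[j].ids[h[j].pos] }
func (h postingsHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *postingsHeap) Push(x interface{}) { *h = append(*h, x.(*cursor)) }
func (h *postingsHeap) Pop() interface{} {
	old := *h
	n := len(old)
	c := old[n-1]
	*h = old[:n-1]
	return c
}

// mergeNext repeatedly takes the smallest head, deduplicating as it goes.
func mergeNext(lists ...[]uint64) []uint64 {
	h := &postingsHeap{}
	for _, l := range lists {
		if len(l) > 0 {
			*h = append(*h, &cursor{ids: l})
		}
	}
	heap.Init(h)

	var out []uint64
	var last uint64
	first := true
	for h.Len() > 0 {
		c := (*h)[0]
		id := c.ids[c.pos]
		if first || id != last {
			out = append(out, id)
			last, first = id, false
		}
		c.pos++
		if c.pos == len(c.ids) {
			heap.Pop(h) // this list is exhausted
		} else {
			heap.Fix(h, 0) // re-sift the advanced cursor
		}
	}
	return out
}

func main() {
	fmt.Println(mergeNext([]uint64{1, 4, 9}, []uint64{2, 4, 8}, []uint64{3, 9}))
	// Output: [1 2 3 4 8 9]
}
```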
Add posting lookup benchmarks
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
With 1M series:
```
Before:
BenchmarkHeadPostingForMatchers-8 1 3501996117 ns/op 61311520 B/op 78 allocs/op

After:
BenchmarkHeadPostingForMatchers-8 1 1403072952 ns/op 69261568 B/op 72 allocs/op
```
This works out at roughly 3X faster for the postings lookup itself, since the time above also includes other work.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
Add some benchmarks for HEAD, and allocate the correct slice size in LabelValues, since we already know what it will be.
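A small sketch of the allocation change, with a hypothetical value set standing in for the index's internal structure: since the number of label values is known up front, the result slice can be sized once instead of being grown by repeated append.
```go
package main

import "fmt"

// labelValues copies the values of one label out of a set whose size is
// already known, so the slice is allocated exactly once.
func labelValues(valueSet map[string]struct{}) []string {
	vals := make([]string, 0, len(valueSet)) // single allocation, known capacity
	for v := range valueSet {
		vals = append(vals, v) // never re-grows the backing array
	}
	return vals
}

func main() {
	set := map[string]struct{}{"foo": {}, "bar": {}, "baz": {}}
	fmt.Println(len(labelValues(set))) // 3
}
```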
This is a ~15% time improvement, with ~35% fewer allocations and ~42% fewer bytes allocated:
```
benchmark old ns/op new ns/op delta
BenchmarkHeadPostingForMatchers-4 74452 63514 -14.69%
benchmark old allocs new allocs delta
BenchmarkHeadPostingForMatchers-4 20 13 -35.00%
benchmark old bytes new bytes delta
BenchmarkHeadPostingForMatchers-4 5425 3137 -42.18%
```
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>