Commit Graph

121 Commits

Ganesh Vernekar b1cd829030
Reuse Chunk Iterator (#642)
* Reset method for chunkenc.Iterator

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Reset method only for XORIterator

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Use Reset(...) in querier.go

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Reuse deletedIterator

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Another way of reusing chunk iterators

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Unexport xorIterator

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix memSeries.iterator(...)

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add some comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-07-09 15:19:34 +05:30
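The change above adds a Reset method so one chunk iterator can be reused across chunks instead of allocating a fresh iterator per chunk. Below is a minimal sketch of that reuse pattern, assuming a simplified, hypothetical `sampleIterator` rather than the real `chunkenc.Iterator` API:

```go
package main

import "fmt"

// sampleIterator is a hypothetical iterator over a slice of samples.
// Reset lets callers point an existing iterator at new data instead of
// allocating a new iterator for every chunk.
type sampleIterator struct {
	samples []float64
	idx     int
}

// Reset re-targets the iterator at a new sample slice and rewinds it.
func (it *sampleIterator) Reset(samples []float64) {
	it.samples = samples
	it.idx = -1
}

// Next advances the iterator; it returns false when the data is exhausted.
func (it *sampleIterator) Next() bool {
	it.idx++
	return it.idx < len(it.samples)
}

// At returns the current sample.
func (it *sampleIterator) At() float64 { return it.samples[it.idx] }

func main() {
	it := &sampleIterator{}
	for _, chunk := range [][]float64{{1, 2}, {3, 4, 5}} {
		it.Reset(chunk) // reuse the same iterator across chunks
		for it.Next() {
			fmt.Println(it.At())
		}
	}
}
```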
naivewong 13c80a5979 Optimize queries using regex matchers for set lookups (#602)
* Original version of the set optimization

Signed-off-by: naivewong <867245430@qq.com>

* simple set matcher

Signed-off-by: naivewong <867245430@qq.com>

* simple set matcher

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* add benchmark

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update benchmark

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update benchmark

Signed-off-by: naivewong <867245430@qq.com>

* update benchmark

Signed-off-by: naivewong <867245430@qq.com>

* update benchmark

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>

* use genSeries from #467

Signed-off-by: naivewong <867245430@qq.com>

* update

Signed-off-by: naivewong <867245430@qq.com>
2019-05-27 16:54:46 +05:30
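The optimization in #602 recognizes regex matchers that are plain alternations like `a|b|c` and answers them with set lookups instead of running the regex engine against every label value. A rough sketch of the idea, with hypothetical helper names and no claim to mirror the actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// findSetMatches splits a plain alternation pattern like "a|b|c" into its
// literal alternatives. It gives up (returns nil) if the pattern contains any
// other regex metacharacters, so the caller can fall back to a real regex matcher.
func findSetMatches(pattern string) []string {
	if strings.ContainsAny(pattern, ".+*?()[]{}^$\\") {
		return nil
	}
	return strings.Split(pattern, "|")
}

func main() {
	// Label values that a set matcher can check with map lookups.
	if vals := findSetMatches("200|404|500"); vals != nil {
		set := make(map[string]struct{}, len(vals))
		for _, v := range vals {
			set[v] = struct{}{}
		}
		_, ok := set["404"]
		fmt.Println("matched via set lookup:", ok)
	}
}
```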
Brian Brazil 89a90fe96c
Simplify mergedPostings.Seek (#595)
The current implementation leads to very slow behaviour when there are
many lists; this is no worse than n log k, where k is the number of posting
lists.

Adjust benchmark to catch this.

Remove flattening of without lists, not needed anymore.

Benchmark versus 0.4.0 (used in Prometheus 2.7):
```
benchmark                                                            old ns/op      new ns/op      delta
BenchmarkHeadPostingForMatchers/n="1"-8                              189907976      188863880      -0.55%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      113950106      110791414      -2.77%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      104965646      102388760      -2.45%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     138743592      104424250      -24.74%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            5279594954     5206096267     -1.39%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            8004610589     6184527719     -22.74%
BenchmarkHeadPostingForMatchers/i=~""-8                              2476042646     1003920432     -59.45%
BenchmarkHeadPostingForMatchers/i!=""-8                              7178244655     6059725323     -15.58%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              199342649      166642946      -16.40%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       215774683      167515095      -22.37%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        2214714769     392943663      -82.26%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                2148727410     322289262      -85.00%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              2170658009     338458171      -84.41%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             235720135      70597905       -70.05%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       2190570590     343034307      -84.34%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     2373784439     387297908      -83.68%

benchmark                                                            old allocs     new allocs     delta
BenchmarkHeadPostingForMatchers/n="1"-8                              33             33             +0.00%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      33             33             +0.00%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      33             33             +0.00%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     41             39             -4.88%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            56             56             +0.00%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            251577         100115         -60.21%
BenchmarkHeadPostingForMatchers/i=~""-8                              251123         100077         -60.15%
BenchmarkHeadPostingForMatchers/i!=""-8                              251525         100112         -60.20%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              42             39             -7.14%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       52             42             -19.23%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        251069         100101         -60.13%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                251473         100101         -60.19%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              250914         100102         -60.11%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             30038          11181          -62.78%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       250813         100105         -60.09%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     281503         111260         -60.48%

benchmark                                                            old bytes     new bytes     delta
BenchmarkHeadPostingForMatchers/n="1"-8                              10887600      10887600      +0.00%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-8                      5456416       5456416       +0.00%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-8                      5456416       5456416       +0.00%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-8                     5456640       5456544       -0.00%
BenchmarkHeadPostingForMatchers/i=~".*"-8                            258254504     258254472     -0.00%
BenchmarkHeadPostingForMatchers/i=~".+"-8                            520126192     281554792     -45.87%
BenchmarkHeadPostingForMatchers/i=~""-8                              263446640     24908456      -90.55%
BenchmarkHeadPostingForMatchers/i!=""-8                              520121144     281553664     -45.87%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-8              7062448       7062272       -0.00%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-8       7063473       7062384       -0.02%
BenchmarkHeadPostingForMatchers/n="1",i!=""-8                        274325656     35793776      -86.95%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-8                268926824     30362624      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-8              268882992     30363000      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-8             33193401      4269304       -87.14%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-8       268875024     30363096      -88.71%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-8     300589656     33099784      -88.99%
```

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-05-13 10:51:07 +01:00
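The simplification above forwards Seek to every underlying postings list and takes the minimum of their positions, which stays within the stated n log k bound for k lists. A minimal sketch of the forwarding idea over plain sorted ID slices (hypothetical types, not the real `index.Postings` interface):

```go
package main

import "fmt"

// postings is a hypothetical sorted list of series IDs with a Seek that
// advances to the first ID >= v.
type postings struct {
	ids []uint64
	cur int
}

func (p *postings) Seek(v uint64) bool {
	for p.cur < len(p.ids) && p.ids[p.cur] < v {
		p.cur++
	}
	return p.cur < len(p.ids)
}

func (p *postings) At() uint64 { return p.ids[p.cur] }

// mergedSeek forwards Seek to every underlying list; the merged position is
// the minimum of the per-list positions. With k lists this stays within
// O(n log k) overall when the per-list Seek is amortised cheap.
func mergedSeek(lists []*postings, v uint64) (uint64, bool) {
	min, ok := uint64(0), false
	for _, p := range lists {
		if !p.Seek(v) {
			continue
		}
		if !ok || p.At() < min {
			min, ok = p.At(), true
		}
	}
	return min, ok
}

func main() {
	a := &postings{ids: []uint64{1, 5, 9}}
	b := &postings{ids: []uint64{2, 5, 7}}
	fmt.Println(mergedSeek([]*postings{a, b}, 6)) // 7 true
}
```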
Krasi Georgiev 5512826f13
make Close methods for the querier safe to call more than once. (#581)
* make Close methods for the querier safe to call more than once.

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2019-04-30 10:17:07 +03:00
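A common way to make Close idempotent is to guard the teardown with `sync.Once` (or an internal closed flag). A minimal sketch of that pattern, not the actual querier code:

```go
package main

import (
	"fmt"
	"sync"
)

// querier wraps resources whose teardown must run exactly once, even if
// callers invoke Close multiple times.
type querier struct {
	closeOnce sync.Once
	closeErr  error
}

func (q *querier) Close() error {
	q.closeOnce.Do(func() {
		// Release underlying readers here; record the first error.
		q.closeErr = nil
	})
	return q.closeErr
}

func main() {
	q := &querier{}
	fmt.Println(q.Close(), q.Close()) // both calls are safe
}
```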
Brian Brazil 259847a6b1
Be smarter in how we look at matchers. (#572)
* Add unittests for PostingsForMatcher.

* Selector methods are all stateless, don't need a reference.

* Be smarter in how we look at matchers.

Look at all matchers to see if a label can be empty.

Optimise Not handling, so i!="2" is a simple lookup
rather than an inverse postings list.

Add all the Withouts together, rather than
having to subtract each from all postings.

Change the pre-expand-the-postings logic to always expand, but only before
doing a Without. Don't do that if it's already a list.

The initial goal here was that the oft-seen pattern
i=~"something.+",i!="foo",i!="bar" becomes more efficient.

benchmark                                                            old ns/op     new ns/op     delta
BenchmarkHeadPostingForMatchers/n="1"-4                              5888          6160          +4.62%
BenchmarkHeadPostingForMatchers/n="1",j="foo"-4                      7190          6640          -7.65%
BenchmarkHeadPostingForMatchers/j="foo",n="1"-4                      6038          5923          -1.90%
BenchmarkHeadPostingForMatchers/n="1",j!="foo"-4                     6030884       4850525       -19.57%
BenchmarkHeadPostingForMatchers/i=~".*"-4                            887377940     230329137     -74.04%
BenchmarkHeadPostingForMatchers/i=~".+"-4                            490316101     319931758     -34.75%
BenchmarkHeadPostingForMatchers/i=~""-4                              594961991     130279313     -78.10%
BenchmarkHeadPostingForMatchers/i!=""-4                              537542388     318751015     -40.70%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",j="foo"-4              10460243      8565195       -18.12%
BenchmarkHeadPostingForMatchers/n="1",i=~".*",i!="2",j="foo"-4       44964267      8561546       -80.96%
BenchmarkHeadPostingForMatchers/n="1",i!="",j="foo"-4                42244885      29137737      -31.03%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",j="foo"-4              35285834      32774584      -7.12%
BenchmarkHeadPostingForMatchers/n="1",i=~"1.+",j="foo"-4             8951047       8379024       -6.39%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!="2",j="foo"-4       63813335      30672688      -51.93%
BenchmarkHeadPostingForMatchers/n="1",i=~".+",i!~"2.*",j="foo"-4     45381112      44924397      -1.01%

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-04-09 11:59:45 +01:00
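The central trick above is to combine all the negative ("without") postings first and subtract them from the selected postings once, instead of subtracting each negative list from the full postings set. A rough sketch over plain ID slices, with hypothetical `union`/`without` helpers:

```go
package main

import (
	"fmt"
	"sort"
)

// union merges several sorted ID lists into one sorted, de-duplicated list.
func union(lists ...[]uint64) []uint64 {
	seen := map[uint64]struct{}{}
	for _, l := range lists {
		for _, id := range l {
			seen[id] = struct{}{}
		}
	}
	out := make([]uint64, 0, len(seen))
	for id := range seen {
		out = append(out, id)
	}
	sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
	return out
}

// without returns the IDs in base that are not in drop.
func without(base, drop []uint64) []uint64 {
	dropSet := map[uint64]struct{}{}
	for _, id := range drop {
		dropSet[id] = struct{}{}
	}
	var out []uint64
	for _, id := range base {
		if _, ok := dropSet[id]; !ok {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	selected := []uint64{1, 2, 3, 4, 5}
	notFoo := []uint64{2}
	notBar := []uint64{4}
	// Merge the negatives once, then subtract once.
	fmt.Println(without(selected, union(notFoo, notBar))) // [1 3 5]
}
```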
Bartek Płotka 3ab5f4e579 index: reduce empty postings trees (#509)
Improved Merge when all postings lists are empty.

Signed-off-by: Bartek Plotka <bwplotka@gmail.com>
2019-03-21 18:23:00 +02:00
Krasi Georgiev c3ffdf1a99
Test createBlock and check all os.RemoveAll calls in the tests for errors. (#549)
Testing that createBlock creates blocks that can be opened.

Checking os.RemoveAll for errors catches errors from unclosed files under Windows.

Many missing .Close() calls were added to fix the failing os.RemoveAll calls.

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2019-03-19 15:31:57 +02:00
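Checking the error from os.RemoveAll in test cleanup is what surfaces unclosed files on Windows, where deleting an open file fails. A minimal, hypothetical test sketch of the pattern using only the standard library:

```go
package example

import (
	"io/ioutil"
	"os"
	"path/filepath"
	"testing"
)

func TestCreateAndCleanup(t *testing.T) {
	dir, err := ioutil.TempDir("", "tsdb-test")
	if err != nil {
		t.Fatal(err)
	}
	defer func() {
		// On Windows this fails if any file in dir is still open,
		// so checking the error catches missing Close() calls.
		if err := os.RemoveAll(dir); err != nil {
			t.Errorf("removing test dir: %v", err)
		}
	}()

	f, err := os.Create(filepath.Join(dir, "block"))
	if err != nil {
		t.Fatal(err)
	}
	if err := f.Close(); err != nil { // forgetting this would trip RemoveAll on Windows
		t.Fatal(err)
	}
}
```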
Brian Brazil 62b652fbd0
Improve Merge performance (#531)
Use a heap for Next in merges, and
pre-compute whether there are many postings on the
unset path.

Add posting lookup benchmarks

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-02-28 17:23:55 +00:00
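Using container/heap for the merge's Next keeps each step at O(log k) for k postings lists. A minimal sketch of a heap-driven k-way merge over sorted ID slices (hypothetical types, not the real mergedPostings):

```go
package main

import (
	"container/heap"
	"fmt"
)

// list is one sorted postings list plus a cursor.
type list struct {
	ids []uint64
	pos int
}

// postingsHeap orders lists by their current ID so the smallest is on top.
type postingsHeap []*list

func (h postingsHeap) Len() int            { return len(h) }
func (h postingsHeap) Less(i, j int) bool  { return h[i].ids[h[i].pos] < h[j].ids[h[j].pos] }
func (h postingsHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *postingsHeap) Push(x interface{}) { *h = append(*h, x.(*list)) }
func (h *postingsHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// mergeNext repeatedly takes the smallest current ID and advances that list.
func mergeNext(lists []*list) []uint64 {
	h := postingsHeap{}
	for _, l := range lists {
		if len(l.ids) > 0 {
			h = append(h, l)
		}
	}
	heap.Init(&h)
	var out, last = []uint64{}, uint64(0)
	for h.Len() > 0 {
		l := h[0]
		if id := l.ids[l.pos]; len(out) == 0 || id != last {
			out, last = append(out, id), id
		}
		if l.pos++; l.pos < len(l.ids) {
			heap.Fix(&h, 0) // the key of the top element changed
		} else {
			heap.Pop(&h)
		}
	}
	return out
}

func main() {
	a := &list{ids: []uint64{1, 4, 9}}
	b := &list{ids: []uint64{2, 4, 7}}
	fmt.Println(mergeNext([]*list{a, b})) // [1 2 4 7 9]
}
```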
Ganesh Vernekar c59ed492b2 Vertical query merging and compaction (#370)
* Vertical series iterator

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Select overlapped blocks first in compactor Plan()

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Added vertical compaction

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Code cleanup and comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix tests

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Add benchmark for compaction

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Perform vertical compaction only when blocks are overlapping.

Actions for vertical compaction:
* Sorting chunk metas
* Calling chunks.MergeOverlappingChunks on the chunks

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Benchmark for vertical compaction

* BenchmarkNormalCompaction => BenchmarkCompaction
* Moved the benchmark from db_test.go to compact_test.go

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Benchmark for query iterator and seek for non overlapping blocks

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Vertical query merge only for overlapping blocks

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Simplify logging in Compact(...)

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Updated CHANGELOG.md

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Calculate overlapping inside populateBlock

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* MinTime and MaxTime for BlockReader.

Using this to find overlapping blocks in populateBlock()

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Sort blocks w.r.t. MinTime in reload()

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Log about overlapping in LeveledCompactor.write() instead of returning bool

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Log about overlapping inside LeveledCompactor.populateBlock()

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Refactor createBlock to take optional []Series

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* review1

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>

* Updated CHANGELOG and minor nits

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* nits

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Updated CHANGELOG

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Refactor iterator and seek benchmarks for Querier.

Also includes overlapping blocks.

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Additional test case

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* genSeries takes optional labels. Updated BenchmarkQueryIterator and BenchmarkQuerySeek.

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Split genSeries into genSeries and populateSeries

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Check error in benchmark

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Fix review comments

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* Warn about overlapping blocks in reload()

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2019-02-14 14:29:41 +01:00
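Vertical compaction and vertical query merging only apply when block time ranges actually overlap; sorting blocks by MinTime makes the overlap check a comparison of neighbouring ranges. A rough sketch of that check with a hypothetical block type (not the actual BlockReader):

```go
package main

import (
	"fmt"
	"sort"
)

// block carries the half-open time range [MinTime, MaxTime) it covers.
type block struct {
	MinTime, MaxTime int64
}

// overlappingBlocks sorts blocks by MinTime and reports whether any block
// starts before the previous one ends, i.e. the ranges overlap.
func overlappingBlocks(blocks []block) bool {
	sort.Slice(blocks, func(i, j int) bool { return blocks[i].MinTime < blocks[j].MinTime })
	for i := 1; i < len(blocks); i++ {
		if blocks[i].MinTime < blocks[i-1].MaxTime {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(overlappingBlocks([]block{{0, 100}, {100, 200}})) // false: ranges only touch
	fmt.Println(overlappingBlocks([]block{{0, 100}, {50, 150}}))  // true: [50,100) is covered twice
}
```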
Krasi Georgiev 48c439d26d
fix static check errors (#475)
Fix the tests for `check_license` and `staticcheck`.

The static check also found some actual bugs.

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2019-01-02 19:48:42 +03:00
Krasi Georgiev 090b6852e1
remove unused `PrefixMatcher` (#474)
* remove unused `PrefixMatcher`

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2018-12-28 21:13:02 +03:00
Ganesh Vernekar 7f30395115 LabelNames() for Querier (#455)
* LabelNames() for Querier

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>

* nits

Signed-off-by: Ganesh Vernekar <cs15btech11018@iith.ac.in>
2018-11-16 19:02:24 +01:00
Krasi Georgiev 5a9ddeecef
fix lint errors (#439)
Unexported NewMemTombstones as it returns the unexported memTombstones
type, which will not be shown in godoc.
Added missing comments for exported methods.
Removed the unused RecordLogger and RecordReader interfaces.

Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2018-11-14 18:40:01 +02:00
Alin Sinpalean 171fc4ab5d Limit the returned db.Querier to the requested time range (#351)
Limit the returned `db.Querier` to the requested time range. Preallocate the `baseChunkSeries.lset` and `baseChunkSeries.chks` slices to the previous series' slice sizes to avoid unnecessary reallocations when growing the slices.
2018-11-09 15:54:56 +02:00
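The preallocation mentioned above sizes each series' slices using the previous series as a capacity hint, which avoids most grow-and-copy reallocations while iterating. A tiny, hypothetical sketch of that pattern:

```go
package main

import "fmt"

// chunkMeta stands in for the per-series chunk metadata.
type chunkMeta struct{ Ref uint64 }

func main() {
	var lastLabelCount, lastChunkCount int
	for _, n := range []int{3, 5, 2} { // pretend consecutive series carry 3, 5, 2 chunks
		// Size each series' slices using the previous series as a capacity hint,
		// which avoids most grow-and-copy reallocations while iterating.
		lset := make([]string, 0, lastLabelCount)
		chks := make([]chunkMeta, 0, lastChunkCount)

		lset = append(lset, "__name__", "job")
		for i := 0; i < n; i++ {
			chks = append(chks, chunkMeta{Ref: uint64(i)})
		}

		lastLabelCount, lastChunkCount = len(lset), len(chks)
		fmt.Println("labels:", len(lset), "chunks:", len(chks))
	}
}
```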
Ye Ben 23a5f09085 fix a typo dont -> don't (#438)
Signed-off-by: yeya24 <ben.ye@daocloud.io>
2018-10-31 13:49:57 +02:00
Krasi Georgiev d05611c027
removed some unused code and moved mockSeriesSet into querier_test (#394)
Signed-off-by: Krasi Georgiev <kgeorgie@redhat.com>
2018-09-21 11:07:35 +03:00
codwu 84a45cb79a add rwmutex to prevent concurrent map read when deleting series
Signed-off-by: codwu <wuhan9087@163.com>
2018-06-08 19:52:01 +08:00
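Guarding the series map with a sync.RWMutex lets concurrent readers proceed while deletions take the exclusive lock, avoiding Go's "concurrent map read and map write" panic. A minimal sketch of the pattern with a hypothetical seriesMap type:

```go
package main

import (
	"fmt"
	"sync"
)

// seriesMap guards a map of series IDs to labels with an RWMutex so reads can
// run concurrently while deletes take the exclusive lock.
type seriesMap struct {
	mtx sync.RWMutex
	m   map[uint64]string
}

func (s *seriesMap) get(id uint64) (string, bool) {
	s.mtx.RLock()
	defer s.mtx.RUnlock()
	v, ok := s.m[id]
	return v, ok
}

func (s *seriesMap) delete(id uint64) {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	delete(s.m, id)
}

func main() {
	s := &seriesMap{m: map[uint64]string{1: `{job="a"}`}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); s.get(1) }()    // concurrent read
	go func() { defer wg.Done(); s.delete(1) }() // concurrent delete
	wg.Wait()
	fmt.Println(s.get(1))
}
```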
Simon Pasquier 7206a8456f Fix minor typos in comments 2018-01-15 14:27:49 +01:00
Fabian Reinartz 1e55b7987f Improve comments, handle allPostingsKey properly 2017-12-22 09:43:34 +01:00
Fabian Reinartz 67f0ca8f0e Move index and chunk encoders to own packages 2017-12-21 11:27:54 +01:00
Goutham Veeramachaneni 239cbae154
Merge pull request #228 from Gouthamve/not-matchers
Select series with label unset for != and !~
2017-12-21 12:10:51 +05:30
Goutham Veeramachaneni 3158b03e6c Select series with label unset for != and !~
Fixes https://github.com/prometheus/prometheus/issues/3575

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2017-12-19 02:47:31 +00:00
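The point of the fix above is that negative matchers must also select series where the label is not set at all. A tiny sketch of that semantics for `!=` (hypothetical helper; the real code works on postings lists rather than label maps):

```go
package main

import "fmt"

// matchesNotEqual reports whether a series' label set satisfies name != value.
// Crucially, a series where the label is entirely unset must also match.
func matchesNotEqual(lset map[string]string, name, value string) bool {
	v, ok := lset[name]
	return !ok || v != value
}

func main() {
	fmt.Println(matchesNotEqual(map[string]string{"i": "foo"}, "i", "foo")) // false
	fmt.Println(matchesNotEqual(map[string]string{"i": "bar"}, "i", "foo")) // true
	fmt.Println(matchesNotEqual(map[string]string{}, "i", "foo"))           // true: label unset
}
```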
Goutham Veeramachaneni 05d62ca842 Make sure gc'ed chunks are handled properly
Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
2017-12-14 19:48:39 +00:00
Fabian Reinartz f1512a368a Expose ChunkSeriesSet and lookups methods. 2017-11-13 14:02:32 +01:00
Fabian Reinartz 3ef4326114 Refactor tombstone reader types 2017-11-13 13:38:07 +01:00
Fabian Reinartz e5ce2bef43 Add explicit error to Querier.Select
This has been a frequent source of debugging pain since errors are
potentially delayed to a much later point. They bubble up in an
unrelated execution path.
2017-11-13 12:16:58 +01:00
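Returning the error from Select directly surfaces failures at the call site instead of deferring them to a later, unrelated iteration path. A minimal sketch of an interface shaped that way (hypothetical, heavily simplified signatures):

```go
package main

import (
	"errors"
	"fmt"
)

// SeriesSet is a placeholder for the iterator over selected series.
type SeriesSet interface {
	Next() bool
	Err() error
}

// Querier returns the error from Select explicitly, so callers see failures
// immediately rather than via a much later SeriesSet.Err().
type Querier interface {
	Select(matchers ...string) (SeriesSet, error)
	Close() error
}

type failingQuerier struct{}

func (failingQuerier) Select(matchers ...string) (SeriesSet, error) {
	return nil, errors.New("block closed")
}
func (failingQuerier) Close() error { return nil }

func main() {
	var q Querier = failingQuerier{}
	if _, err := q.Select(`n="1"`); err != nil {
		fmt.Println("select failed early:", err)
	}
}
```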
Fabian Reinartz 82796db37b Ensure near-empty chunks end at correct boundary
We were determining a chunk's end time once it was one quarter full, so
that all chunks would have a uniform number of samples.
This accidentally skipped the case where a series started near the end of
a chunk range/block and never reached that threshold. As a result such
chunks got persisted but were continued across the range.

This resulted in corrupted persisted data.
2017-10-25 09:51:55 +02:00
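The boundary issue above stems from estimating a chunk's cut time once it is roughly a quarter full and clamping to the range maximum; series that appear very late never hit the estimate and must be cut at the boundary explicitly. A hypothetical helper roughly in the spirit of that estimate:

```go
package main

import "fmt"

// estimateChunkEnd guesses where the current chunk should end so that, at the
// observed fill rate, roughly four chunks of equal sample count fit into the
// range [start, max). If the estimate would overshoot the range, the chunk is
// simply cut at max — the case the fix above has to handle explicitly for
// series that appear very late in the range.
func estimateChunkEnd(start, cur, max int64) int64 {
	n := (max - start) / ((cur - start + 1) * 4)
	if n <= 0 {
		return max
	}
	return start + (max-start)/n
}

func main() {
	// A series starting early in a [0, 1000) range gets a shorter chunk...
	fmt.Println(estimateChunkEnd(0, 10, 1000))
	// ...while one starting very near the end is cut at the range boundary.
	fmt.Println(estimateChunkEnd(980, 990, 1000))
}
```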
Fabian Reinartz f8e88bfdb7 Close previous block queriers on error
This ensures we close all previously opened queriers if one of the block
queriers fails to open.
Also swap in new blocks before closing old ones to avoid the situation
in general. Make read locking of blocks more conservative to avoid
unnecessary retries by clients, e.g. when blocks are getting closed
before we can successfully instantiate a querier against them.
2017-10-23 21:56:12 +02:00
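When opening one querier per block, a failure partway through has to close the queriers already opened, or their readers leak. A minimal sketch of that cleanup path with hypothetical types:

```go
package main

import (
	"errors"
	"fmt"
)

type blockQuerier struct{ name string }

func (b *blockQuerier) Close() error { fmt.Println("closed", b.name); return nil }

// openBlockQuerier fails for one block to demonstrate the cleanup path.
func openBlockQuerier(name string) (*blockQuerier, error) {
	if name == "broken" {
		return nil, errors.New("cannot open block " + name)
	}
	return &blockQuerier{name: name}, nil
}

// openAll opens a querier per block; on any error it closes every querier
// opened so far before returning.
func openAll(blocks []string) ([]*blockQuerier, error) {
	var opened []*blockQuerier
	for _, b := range blocks {
		q, err := openBlockQuerier(b)
		if err != nil {
			for _, o := range opened {
				o.Close()
			}
			return nil, err
		}
		opened = append(opened, q)
	}
	return opened, nil
}

func main() {
	_, err := openAll([]string{"01", "02", "broken"})
	fmt.Println("error:", err)
}
```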
Fabian Reinartz 665955da48 Clarify postings index semantics, handle staleness
The postings list index may point to series that no longer
exist during garbage collection. This clarifies that this is valid
behavior.
It would be possible, though more complex, to always keep them in sync.
However, series existence means nothing in itself, as the queried time
range defines whether there's actual data. Thus our definition is sane
overall as long as drift is kept small.
2017-10-11 09:37:19 +02:00
Fabian Reinartz fb9da52b11 Add more verbose error handling for closing, reduce locking
This commit introduces error returns in various places and is explicit
about closing persisted blocks.
{Index,Chunk,Tombstone}Readers are more consistent about their Close()
method. Whenever a reader is retrieved, the corresponding close method
must eventually be called. We use this to track pending readers against
persisted blocks.

Queriers against the DB no longer hold a read lock for their entire
lifecycle. This prevents long-running queriers from starving new ones when we
have to acquire a write lock while reloading blocks.
2017-10-10 12:13:37 +02:00
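Tracking pending readers against a persisted block usually comes down to counting handed-out readers and decrementing on Close, so the block can be torn down only once the count drops to zero. A minimal, hypothetical sketch of that bookkeeping:

```go
package main

import (
	"fmt"
	"sync"
)

// pendingReaders counts readers handed out against a persisted block, so the
// block can delay its own teardown until every reader has been closed.
type pendingReaders struct {
	mtx     sync.Mutex
	pending int
}

type reader struct{ p *pendingReaders }

// Close marks the reader as done, releasing its hold on the block.
func (r *reader) Close() error {
	r.p.mtx.Lock()
	defer r.p.mtx.Unlock()
	r.p.pending--
	return nil
}

// Index hands out a reader and records it as pending until Close is called.
func (p *pendingReaders) Index() *reader {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	p.pending++
	return &reader{p: p}
}

func (p *pendingReaders) count() int {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	return p.pending
}

func main() {
	p := &pendingReaders{}
	r := p.Index()
	fmt.Println("pending:", p.count()) // 1
	r.Close()
	fmt.Println("pending:", p.count()) // 0
}
```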
Fabian Reinartz 3901b6e70b Remove multiple heads
This changes the structure to a single WAL backed by a single head
block.
Parts of the head block can be compacted. This relieves us from any head
management and greatly simplifies consistency and isolation concerns
by just having a single head.
2017-09-01 11:50:58 +02:00
Goutham Veeramachaneni c463b0c8c8
Expose NewMergedSeriesSet for merging SeriesSets
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-08-25 19:36:24 +05:30
Goutham Veeramachaneni 5b242f35ba Expose a Querier with manually passed in readers.
Allows people to not copy the querying code.

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-08-25 16:06:45 +05:30
Goutham Veeramachaneni 7438ed7035 Expose Intervals type for use by TombstoneReader.
TombstoneReader is exposed but Intervals is not.

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-08-25 16:06:36 +05:30
Fabian Reinartz 2644c8665c Don't allocate ChunkMetas, reuse postings slices 2017-08-06 20:41:24 +02:00
Fabian Reinartz 96d7f540d4 Persist series without allocating the full set
Change index persistence for series so they are not accumulated in memory
before being written as one large batch. `Labels` and `ChunkMeta`
objects are reused.
This cuts down memory spikes during compaction of multiple blocks
significantly.

As part of this, the Index{Reader,Writer} now have an explicit notion of
symbols, and series must be inserted in order.
2017-08-06 12:06:41 +02:00
Dmitry Ilyevsky 37194b7a30 Add prefix label matcher.
Implement labels.PrefixMatcher and use interface conversion in querier
to optimize label tuple search.

[unit-tests]: Fix bug and populate label index for mock index.

Signed-off-by: Dmitry Ilyevsky <ilyevsky@gmail.com>
2017-07-22 01:06:30 -07:00
Fabian Reinartz 3065be97d8 Fix and document locking order for DB 2017-07-14 09:00:22 +02:00
Fabian Reinartz bda3ed20ac Fix omitting of chunk on Seek() 2017-06-30 15:06:27 +02:00
Fabian Reinartz 3410559c1b Compact head block early
Let older head blocks be compacted once the newest one has samples at
50% of its total range. This allows the memory of the compacted blocks
to be released and garbage collected before a new head block gets
created. Thereby the number of head blocks stays at 1 or 2 instead of 2 or 3
and memory spikes are reduced.
2017-06-26 08:52:59 +02:00
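The 50% rule above can be read as a simple threshold check on the newest head block's highest ingested timestamp relative to its time range. A tiny, hypothetical sketch of that condition:

```go
package main

import "fmt"

// shouldCompactOlderHeads reports whether older head blocks can be compacted:
// once the newest head has samples covering half of its total range, the
// older heads are no longer being appended to and can be persisted.
func shouldCompactOlderHeads(newestMin, newestMax, newestHighTS int64) bool {
	return newestHighTS >= newestMin+(newestMax-newestMin)/2
}

func main() {
	// Head covers [0, 7200); compaction of older heads starts at t >= 3600.
	fmt.Println(shouldCompactOlderHeads(0, 7200, 3000)) // false
	fmt.Println(shouldCompactOlderHeads(0, 7200, 3600)) // true
}
```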
Fabian Reinartz a2948f3c5f Merge pull request #97 from Gouthamve/nit
Fix bug with Seek() and optimise bounding params.
2017-06-13 10:11:10 +02:00
Fabian Reinartz eb8c9759fc Properly balance k-way operations 2017-06-13 08:25:13 +02:00
Goutham Veeramachaneni 9e1f34dae3
Fix bug with Seek() and optimise bounding params.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-06-13 11:24:04 +05:30
Goutham Veeramachaneni 44e9ae38b5
Incorporate PR feedback.
* Expose Stone as it is used in an exported method.
* Move from tombstoneReader to []Stone for the same reason as above.
* Make WAL reading a little cleaner.

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-26 21:26:31 +05:30
Goutham Veeramachaneni f29fb62fba
Make TombstoneReader a Getter.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-24 11:24:24 +05:30
Goutham Veeramachaneni 244b73fce1
Rename for clarity and consistency.
Misc. changes for code cleanliness.

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-22 16:42:36 +05:30
Goutham Veeramachaneni d6bd64357b
Fix Delete on HeadBlock
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-19 22:54:29 +05:30
Goutham Veeramachaneni 22c1b5b492
Make SeriesSets use tombstones.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-17 14:49:42 +05:30
Goutham Veeramachaneni cea3c88f17
Add Tombstones() method to Block.
Also add Seek() to TombstoneReader

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-16 19:48:28 +05:30
Goutham Veeramachaneni d57f269eb4
Make Select() reusable.
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-05-13 21:13:25 +05:30