* Add basic initial developer docs for TSDB
There's a decent amount of content already out there (blog posts,
conference talks, etc), but:
* when they get stale, they don't tend to get updated
* they still leave me with questions that I'd like to answer
for developers (like me) who want to use, or work with, TSDB
What I propose is developer docs inside the prometheus
repository: easy to find, and they harness the power of the community
to expand them and keep them up to date.
* perfect is the enemy of good. Let's have a base and incrementally improve
* Markdown docs should be broad but not too deep. Source code comments
can complement them, and are the ideal place for implementation details.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* use example code that works out of the box
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Apply suggestions from code review
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* PR feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* more docs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* PR feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Apply suggestions from code review
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Apply suggestions from code review
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Update tsdb/docs/usage.md
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* final tweaks
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* workaround docs versioning issue
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Move example code to real executable, testable example.
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
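For orientation, here is a minimal sketch of what such an out-of-the-box example looks like, loosely following the example this PR adds (exact `Open`/`Appender`/`Querier` signatures have shifted between Prometheus versions, so treat this as a sketch rather than the canonical example; the real one referenced above in tsdb/docs/usage.md is authoritative):

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"os"
	"time"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	// Create a throwaway directory for the storage.
	dir, err := ioutil.TempDir("", "tsdb-example")
	noErr(err)
	defer os.RemoveAll(dir)

	// Open a TSDB for reading and/or writing.
	db, err := tsdb.Open(dir, nil, nil, tsdb.DefaultOptions())
	noErr(err)

	// Append a single sample for the series {foo="bar"}.
	app := db.Appender(context.Background())
	series := labels.FromStrings("foo", "bar")
	ts := time.Now().Unix() * 1000
	_, err = app.Append(0, series, ts, 123.4)
	noErr(err)
	noErr(app.Commit())

	// Query everything back.
	querier, err := db.Querier(context.Background(), 0, ts)
	noErr(err)
	ss := querier.Select(false, nil, labels.MustNewMatcher(labels.MatchEqual, "foo", "bar"))
	for ss.Next() {
		s := ss.At()
		fmt.Println("series:", s.Labels().String())
		it := s.Iterator()
		for it.Next() {
			t, v := it.At()
			fmt.Println("  sample:", t, v)
		}
		noErr(it.Err())
	}
	noErr(ss.Err())

	noErr(querier.Close())
	noErr(db.Close())
}

func noErr(err error) {
	if err != nil {
		panic(err)
	}
}
```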
* cleanup example test and make sure it always reproduces
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* obtain temp dir in a way that works with older Go versions
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Fix Ganesh's comments
Signed-off-by: Ganesh Vernekar <ganeshvern@gmail.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
Co-authored-by: Ganesh Vernekar <ganeshvern@gmail.com>
This creates a new `model` directory and moves all data-model related
packages over there:
exemplar labels relabel rulefmt textparse timestamp value
All the others are more or less utilities and have been moved to `util`:
gate logging modtimevfs pool runtime
Signed-off-by: beorn7 <beorn@grafana.com>
* TSDB: demystify seriesRefs and ChunkRefs
The TSDB package contains many types of series and chunk references,
all shrouded in uint types. Often the same uint value may
actually mean one of several different types, in non-obvious ways.
This PR aims to clarify the code and help navigate to relevant docs,
usage, etc. much quicker.
Concretely:
* Use appropriately named types and document their semantics and
relations.
* Make multiplexing and demuxing of types explicit
(on the boundaries between concrete implementations and generic
interfaces).
* Casting between different types should be free. None of the changes
should have any impact on how the code runs.
TODO: Implement BlockSeriesRef where appropriate (for a future PR)
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
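To illustrate the direction, here is a simplified stand-in for the idea (type names and the bit layout below are invented for illustration, not the exact types introduced in tsdb/chunks and storage):

```go
package refsketch

// SeriesRef is the opaque series reference exposed on generic interfaces.
type SeriesRef uint64

// HeadSeriesRef specifically references a series held in the Head.
type HeadSeriesRef uint64

// HeadChunkRef multiplexes a HeadSeriesRef and a chunk ID into one word so
// it can still travel through interfaces that expect a single uint64.
type HeadChunkRef uint64

// newHeadChunkRef makes the multiplexing explicit. The 40/24 bit split here
// is purely illustrative.
func newHeadChunkRef(series HeadSeriesRef, chunkID uint64) HeadChunkRef {
	return HeadChunkRef(uint64(series)<<24 | chunkID)
}

// unpack makes the demuxing explicit at the boundary back to concrete types.
func (r HeadChunkRef) unpack() (HeadSeriesRef, uint64) {
	return HeadSeriesRef(uint64(r) >> 24), uint64(r) & (1<<24 - 1)
}
```

Casting between such defined types and plain uint64 compiles to nothing, which is why the change has no runtime impact.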
* feedback
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* agent: demystify seriesRefs and ChunkRefs
Signed-off-by: Dieter Plaetinck <dieter@grafana.com>
* Allow to disable trimming when querying TSDB
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Addressed review comments
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Added unit test
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Renamed TrimDisabled to DisableTrimming
Signed-off-by: Marco Pracucci <marco@pracucci.com>
* Push the matchers for LabelNames all the way into the index.
NB This doesn't actually implement it in the index, just plumbs it through for now...
Signed-off-by: Tom Wilkie <tom@grafana.com>
* Hack it up. Does not work.
Signed-off-by: Tom Wilkie <tom@grafana.com>
* Revert changes I don't understand
Can't see why we need to hold a mutex on symbols, nor the purpose of
the LabelNamesFor method.
Maybe I'll need to re-add this later.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Implement LabelNamesFor
This method provides the label names that appear in the postings
provided. We do this deeper than the label values because we know
beforehand that most of the label names will be the same across
different postings, and we don't want to go down and up looking up the
same symbols for all the different series.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
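A simplified sketch of that idea (the interfaces and helpers below are hypothetical stand-ins for what lives in tsdb/index and the Head): collect the symbol IDs of the label names per series, and only resolve each distinct symbol once at the end.

```go
package namesketch

import "sort"

// Postings is a stand-in for index.Postings: an iterator over series refs.
type Postings interface {
	Next() bool
	At() uint64
	Err() error
}

// indexReader is a hypothetical subset of the index reader used here.
type indexReader interface {
	// labelNameIDsFor returns the symbol-table IDs of a series' label names.
	labelNameIDsFor(seriesRef uint64) ([]uint32, error)
	// lookupSymbol resolves a symbol-table ID to its string.
	lookupSymbol(id uint32) (string, error)
}

func labelNamesFor(ix indexReader, p Postings) ([]string, error) {
	ids := map[uint32]struct{}{}
	for p.Next() {
		nameIDs, err := ix.labelNameIDsFor(p.At())
		if err != nil {
			return nil, err
		}
		for _, id := range nameIDs {
			ids[id] = struct{}{}
		}
	}
	if err := p.Err(); err != nil {
		return nil, err
	}
	// Most names repeat across series, so this set stays small and each
	// symbol is looked up exactly once instead of once per series.
	names := make([]string, 0, len(ids))
	for id := range ids {
		name, err := ix.lookupSymbol(id)
		if err != nil {
			return nil, err
		}
		names = append(names, name)
	}
	sort.Strings(names)
	return names, nil
}
```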
* Mutex on symbols should be unlocked
However, I still don't understand why we need a mutex here.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix head.LabelNamesFor
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Implement mockIndex LabelNames with matchers
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Nitpick on slice initialisation
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add tests for LabelNamesWithMatchers
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Fix the mutex mess on head.LabelValues/LabelNames
I still don't see why we need to grab that unrelated mutex, but at least
now we're grabbing it consistently.
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Check error after iterating postings
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use the error from posting when there was an error in postings
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Update storage/interface.go comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Update tsdb/index/index.go wrapped error msg
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Remove unneeded comment
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Add testcases for LabelNames w/matchers in api.go
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
* Use t.Cleanup() instead of defer in tests
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Co-authored-by: Tom Wilkie <tom@grafana.com>
Co-authored-by: Ganesh Vernekar <15064823+codesome@users.noreply.github.com>
* Create experimental circular buffer resize method, benchmarks
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Optimize exemplar resize to only replay as many exemplars as needed
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* More comments, benchmark AddExemplar
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* optimizations
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* comment
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Slight refactor of resize benchmark + make use of resize via runtime
reloadable storage config.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Some more config related changes.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address some review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address more review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Refactor to remove usage of noopExemplarStorage and avoid race condition
when resizing from Head code.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix or add comments to clarify some of the new behaviour.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* fix potential panics related to negative exemplar buffer lengths
Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Callum Styan <callumstyan@gmail.com>
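The resize idea in isolation, as a minimal sketch detached from the real exemplar storage types (the entry type and field names here are invented for illustration): keep only the newest entries that fit the new size and replay them, oldest first, into the fresh buffer.

```go
package exemplarsketch

// ring is a tiny circular buffer; ints stand in for exemplar entries.
type ring struct {
	buf   []int
	next  int // index the next write goes to
	count int // number of valid entries
}

func (r *ring) add(v int) {
	if len(r.buf) == 0 {
		return
	}
	r.buf[r.next] = v
	r.next = (r.next + 1) % len(r.buf)
	if r.count < len(r.buf) {
		r.count++
	}
}

// resize allocates a new buffer and replays only as many of the newest
// entries as fit, instead of copying and re-indexing everything.
func (r *ring) resize(size int) {
	old := *r
	r.buf = make([]int, size)
	r.next, r.count = 0, 0
	if size == 0 || old.count == 0 {
		return
	}
	keep := old.count
	if keep > size {
		keep = size // on a shrink, only the newest `size` entries survive
	}
	// Start of the kept entries, counted back from the old write position.
	start := (old.next - keep + len(old.buf)) % len(old.buf)
	for i := 0; i < keep; i++ {
		r.add(old.buf[(start+i)%len(old.buf)])
	}
}
```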
* Write exemplars to the WAL and send them over remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Update example for exemplars, print data in a more obvious format.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add metrics for remote write of exemplars.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Fix incorrect slices passed to send in remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* We need to unregister the new metrics.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Order of exemplar append vs write exemplar to WAL needs to change.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Several fixes to prevent sending uninitialized or incorrect samples with an exemplar. Fix dropping exemplar for missing series. Add tests for queue_manager sending exemplars
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Store both samples and exemplars in the same timeseries buffer to remove the alloc when building final request, keep sub-slices in separate buffers for re-use
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Condense sample/exemplar delivery tests to parameterized sub-tests
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Rename test methods for clarity now that they also handle exemplars
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Rename counter variable. Fix instances where metrics were not updated correctly
Signed-off-by: Martin Disibio <mdisibio@gmail.com>
* Add exemplars to LoadWAL benchmark
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* last exemplars timestamp metric needs to convert value to seconds with
ms precision
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Process exemplar records in a separate goroutine when loading the WAL.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address review comments related to clarifying comments and variable
names. Also refactor sample/exemplar to enqueue prompb types.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Regenerate types proto with comments, update protoc version again.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Put remote write of exemplars behind a feature flag.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address some of Ganesh's review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Move exemplar remote write feature flag to a config file field.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address Bartek's review comments.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Don't allocate exemplar buffers in queue_manager if we're not going to
send exemplars over remote write.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add ValidateExemplar function, validate exemplars when appending to head
and log them all to WAL before adding them to exemplar storage.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address more review comments from Ganesh.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Add exemplar total label length check.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Address a few last review comments
Signed-off-by: Callum Styan <callumstyan@gmail.com>
Co-authored-by: Martin Disibio <mdisibio@gmail.com>
The purpose of GetRef() is to allow Append() to be called without
the caller needing to copy the labels. To avoid a race where a series
is removed from TSDB between the calls to GetRef() and Append(), we
return TSDB's copy of the labels.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
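A rough usage sketch of that contract (the exact signatures, and whether the ref is a plain uint64 or a named type, have changed across versions):

```go
package getrefsketch

import (
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// appendWithoutCopy appends a sample without copying lset when the series is
// already known: GetRef hands back a non-zero ref together with TSDB's own
// copy of the labels, so there is no window in which the series could vanish
// between GetRef() and Append().
func appendWithoutCopy(app storage.Appender, lset labels.Labels, t int64, v float64) error {
	var ref uint64
	if g, ok := app.(storage.GetRef); ok {
		if r, lbls := g.GetRef(lset); r != 0 {
			ref, lset = r, lbls
		}
	}
	_, err := app.Append(ref, lset, t, v)
	return err
}
```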
* Add method to get reference number for TSDB Appender
In situations where we need to copy labels before calling Add(),
GetRef() allows us to check first, then call AddFast() in case the series
is already known.
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Add explicit interface for GetRef() method
Suggested in code review by @bwplotka
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Rename OptionalGetRef to GetRef
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
* Simplify return value of GetRef()
0 can be relied on to mean 'no reference'
Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it as it needs access to label values.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
* tsdb: Added ChunkQueryable implementations to db; unified compactor, querier and fanout block iterating.
Chained to https://github.com/prometheus/prometheus/pull/7059
* NewMerge(Chunk)Querier now takes multiple primaries, allowing tsdb DB code to use it.
* Added single SeriesEntry / ChunkEntry for all series implementations.
* Unified all vertical and non-vertical compaction and querying into a single
merge of series / chunk sets by reusing VerticalSeriesMergeFunc for the overlapping algorithm (same logic as before)
* Added block (Base/Chunk/)Querier for block querying. We then use populateAndTomb(Base/Chunk/) to iterate over chunks or samples.
* Refactored endpoint tests and querier tests to include subtests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed comments from Brian and Beorn.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed snapshot test and added chunk iterator support for DBReadOnly.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed race when iterating over Ats first.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed populate block tests.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed endpoints test.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed test.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added test & fixed case of head open chunk.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed DBReadOnly tests and bug producing 1 sample chunks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added cases for partial block overlap for multiple full chunks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Added extra tests for chunk meta after compaction.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed small vertical merge bug and added more tests for that.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fixed nits introduced by https://github.com/prometheus/prometheus/pull/7334
* Added ChunkQueryable implementation to fanout and readyStorage.
* Added more comments.
* Changed NewVerticalChunkSeriesMerger to CompactingChunkSeriesMerger; removed a tiny interface by reusing VerticalSeriesMergeFunc as the overlapping algorithm for
both chunks and series, for both querying and compacting (!), and made sure duplicates are merged.
* Added ErrChunkSeriesSet
* Added Samples interface for seamless []prompb.Sample to []tsdbutil.Sample conversion.
* Deprecating the non-chunk seriesSet based StreamChunkedReadResponses, added a chunk one.
* Improved tests.
* Split remote client into Write (old storage) and read.
* Queryable client is now SampleAndChunkQueryable. Since we cannot use the nice QueryableFunc, I moved
all config based options to sampleAndChunkQueryableClient to avoid boilerplate.
In next commit: Changes for TSDB.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Add errors and Warnings to SeriesSet
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
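In practice this means a caller drains the set first and inspects errors and warnings afterwards, roughly like the sketch below (names from the storage package; the exact warning type has evolved since):

```go
package warnsketch

import "github.com/prometheus/prometheus/storage"

// expandSeries drains a SeriesSet. Err and Warnings are only meaningful once
// Next has returned false; this is where failures now travel instead of
// being returned eagerly by Select.
func expandSeries(ss storage.SeriesSet) ([]storage.Series, storage.Warnings, error) {
	var out []storage.Series
	for ss.Next() {
		out = append(out, ss.At())
	}
	return out, ss.Warnings(), ss.Err()
}
```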
* Change Querier interface and refactor accordingly
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor promql/engine to propagate warnings at eval stage
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Make sure all the series from all Selects are pre-advanced
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Separate merge series sets
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Clean
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor merge querier failure handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.
* Secondary querier logic is now hidden, instead of the odd failed-series-set logic we had.
* Fanout is well commented
* Fanout closing records all errors
* MergeQuerier has an improved (clearer) API
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (next = false).
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix formatting
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix CI issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Added final tests for error handling.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
* Moved hints in populate to be allocated only when needed.
* Used sync.Once in secondary Querier to achieve all-or-nothing partial response logic.
* Select after the first Next is done will panic.
NOTE: in lazySeriesSet we could in theory just panic; however, I think we can
simply return an error, as it will panic in expand anyway.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Utilize errWithWarnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix recently introduced expansion issue
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add tests for secondary querier error handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Implement lazy merge
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add name to test cases
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Reorganize
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Remove redundant warnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix rebase mistake
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* storage: Added Chunks{Queryable/Querier/SeriesSet/Series/Iteratable. Added generic Merge{SeriesSet/Querier} implementation.
## Rationales:
In many places (e.g. chunk Remote read, Thanos Receive fetching chunks from TSDB), we operate on encoded chunks, not samples.
This means that we unnecessarily decode/encode, wasting CPU, time and memory.
This PR adds chunk iterator interfaces and makes the merge code reusable between both series sets.
I will make use of it in a following PR inside tsdb itself. For now, fanout and the mergers implement it.
All merges now also allow passing series mergers. This opens the door for custom deduplication other than the TSDB vertical one (e.g. the offline one we have in Thanos).
## Changes
* Added Chunk versions of all iterating methods. It all starts in Querier/ChunkQuerier. The plan is that
Storage will implement both chunked and sample-based queries.
* Added Seek to chunks.Iterator interface for iterating over chunks.
* NewMergeChunkQuerier was added; both this and NewMergeQuerier now use genericMergeQuerier to share the code. Generic code was added.
* Improved tests.
* Added some TODO for further simplifications in next PRs.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
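A small sketch of what consuming data at the chunk level looks like with these interfaces (method signatures here follow this era; e.g. Iterator() later grew a reuse parameter):

```go
package chunksketch

import "github.com/prometheus/prometheus/storage"

// countChunks walks a ChunkSeriesSet chunk by chunk: the payload stays as
// encoded chunks (chunks.Meta), so nothing is decoded into samples and
// re-encoded along the way.
func countChunks(css storage.ChunkSeriesSet) (int, error) {
	n := 0
	for css.Next() {
		it := css.At().Iterator()
		for it.Next() {
			meta := it.At()
			_ = meta.Chunk // the still-encoded chunk data
			n++
		}
		if err := it.Err(); err != nil {
			return n, err
		}
	}
	return n, css.Err()
}
```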
* Addressed Brian's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Moved s/Labeled/SeriesLabels as per Krasi suggestion.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Krasi's comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Second iteration of Krasi comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Another round of comments.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that in
Prometheus we rely on remote read being sorted. This is because we use selected data from remote reads in MergeSeriesSet,
which relies on sorting.
I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.
Also, I don't know what TestSelectSorted was testing before, but now it's testing sorting.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
I think the previous behavior is problematic as it will leave
`memSeries` around that still have `pendingCommit` set to `true`.
The only case where this can happen in this code path is a failure to
write to the WAL, in which case we are probably in trouble anyway. I
believe, however, we should still try to do the right thing and do the
full rollback. This will implicitly try to write to the WAL again, but
this time without samples, which may even succeed. (But we propagate
the previous error in any case.)
This also adds `a.head.putSeriesBuffer(a.sampleSeries)` to Rollback,
which was previously missing.
Signed-off-by: beorn7 <beorn@grafana.com>
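For context, this is the Appender contract the change protects, as a minimal caller-side sketch (the storage.Appendable/Appender names follow this era; older versions spell some of this differently): every appender must end in exactly one Commit or Rollback, so no memSeries is left with pendingCommit set.

```go
package rollbacksketch

import (
	"context"

	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// writeSamples commits on success and rolls back on any failure; the change
// above makes Commit itself fall back to a full rollback when the WAL write
// fails, so series state stays consistent either way.
func writeSamples(ctx context.Context, a storage.Appendable, lset labels.Labels, ts []int64, vs []float64) (err error) {
	app := a.Appender(ctx)
	defer func() {
		if err != nil {
			_ = app.Rollback()
		}
	}()
	for i, t := range ts {
		if _, err = app.Append(0, lset, t, vs[i]); err != nil {
			return err
		}
	}
	return app.Commit()
}
```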
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow up PRs.
* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports updated accordingly.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added chunkenc.Iterator.Seek method and tests for xor implementation (?)
* Since we properly handle SelectParams for Select methods, I adjusted min/max
based on that. This should help performance for queries with functions like offset.
* added Seek to deletedIterator and test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.
No logic was changed, only the source of abstractions, so there is no need for benchmarks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
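A tiny sketch of the Seek usage this enables (using the pre-histogram iterator shape of this era, where Seek returns a bool and At returns (t, v)):

```go
package seeksketch

import "github.com/prometheus/prometheus/tsdb/chunkenc"

// valueAt returns the first sample value at or after t, if any.
func valueAt(it chunkenc.Iterator, t int64) (float64, bool) {
	if !it.Seek(t) {
		return 0, false
	}
	_, v := it.At()
	return v, true
}
```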
* Add Start/End to SelectParams
* Make remote read use the new selectParams for start/end
This commit continues sending the start/end time of the remote read
query as the overarching PromQL time, while the specific range of data that
the query is interested in receiving a response for is now part of the
ReadHints (upstream discussion in #4226).
* Remove unused vendored code
The genproto.sh script was updated, but the code wasn't regenerated.
This simply removes the vendored deps that are no longer part of the
codegen output.
Signed-off-by: Thomas Jackson <jacksontj.89@gmail.com>
This adds a parameter to the storage selection interface which allows
query engine(s) to pass information about the operations surrounding a
data selection.
This can for example be used by remote storage backends to infer the
correct downsampling aggregates that need to be provided.
Federation makes use of dedupedSeriesSet to merge SeriesSets for every
query into one output stream. If many match[] arguments are provided,
many dedupedSeriesSet objects will get chained. This has the downside of
causing a potential O(n*k) running time, where n is the number of series
and k the number of match[] arguments.
Meanwhile, the storage package provides a mergeSeriesSet that
accomplishes the same with an O(n*log(k)) running time by making use of
a binary heap. Let's just get rid of dedupedSeriesSet and change all
existing callers to use mergeSeriesSet.
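The complexity argument in isolation: with a binary heap over the current head of each of the k sets, every one of the n output elements costs one O(log k) heap operation instead of up to k comparisons. A generic, self-contained sketch of such a k-way merge (not the storage package code):

```go
package mergesketch

import "container/heap"

// stream is one sorted input; ints stand in for series/samples.
type stream struct {
	vals []int
	pos  int
}

type byHead []*stream

func (h byHead) Len() int            { return len(h) }
func (h byHead) Less(i, j int) bool  { return h[i].vals[h[i].pos] < h[j].vals[h[j].pos] }
func (h byHead) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *byHead) Push(x interface{}) { *h = append(*h, x.(*stream)) }
func (h *byHead) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// mergeSorted merges k sorted streams in O(n log k).
func mergeSorted(streams []*stream) []int {
	h := byHead{}
	for _, s := range streams {
		if len(s.vals) > 0 {
			h = append(h, s)
		}
	}
	heap.Init(&h)
	var out []int
	for h.Len() > 0 {
		s := h[0]
		out = append(out, s.vals[s.pos])
		s.pos++
		if s.pos == len(s.vals) {
			heap.Pop(&h) // stream exhausted, drop it
		} else {
			heap.Fix(&h, 0) // re-sift the new head value: O(log k)
		}
	}
	return out
}
```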
In order to compose different querier implementations more easily, this
change introduces a separate storage.Queryable interface grouping the
query (Querier) function of the storage.
Furthermore, it adds a QueryableFunc type to ease writing very simple
queryable implementations.
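A minimal sketch of QueryableFunc in use (the signature shown includes the context parameter of this era; it has moved around in later versions):

```go
package queryablesketch

import (
	"context"

	"github.com/prometheus/prometheus/storage"
)

// queryable shows QueryableFunc turning an ordinary function into a
// Queryable. A real implementation would pick or construct a Querier here,
// e.g. per tenant; NoopQuerier just keeps the sketch compilable.
var queryable storage.Queryable = storage.QueryableFunc(
	func(ctx context.Context, mint, maxt int64) (storage.Querier, error) {
		return storage.NoopQuerier(), nil
	},
)
```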
Currently all read queries are simply pushed to remote read clients.
This is fine, except for remote storage, for which it is inefficient and
makes queries slower even if the remote read is unnecessary.
So instead we need to compare the oldest timestamp in primary/local
storage with the query range's lower boundary. If the oldest timestamp
is older than the mint parameter, then there is no need for a remote read.
This is an optional behavior per remote read client.
Signed-off-by: Thibault Chataigner <t.chataigner@criteo.com>
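The decision itself is tiny; as a sketch with hypothetical names (the real check lives in the remote storage querier and is gated by the per-client option):

```go
package readrecentsketch

// needsRemoteRead reports whether a remote read can add anything: if local
// storage's oldest timestamp already reaches at or before the query's lower
// bound (mint), the remote read is skipped.
func needsRemoteRead(localOldestTimestamp, queryMint int64) bool {
	return localOldestTimestamp > queryMint
}
```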
* Re-add contexts to storage.Storage.Querier()
These are needed when replacing the storage by a multi-tenant
implementation where the tenant is stored in the context.
The 1.x query interfaces already had contexts, but they got lost in 2.x.
* Convert promql.Engine to use native contexts
With this change, scraping caches series references and only
allocates label sets if it has to retrieve a new reference.
pkg/textparse is used to do the conditional parsing and reduce
allocations from 900B/sample to 0 in the general case.
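The caching idea, reduced to a sketch (the cache type, the parse helper, and the Append-based appender shape below are illustrative; the scrape code of this era used Add/AddFast on the appender):

```go
package scrapesketch

import (
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/storage"
)

// refCache maps the raw series string as seen by the parser to a series ref.
type refCache map[string]uint64

// appendScraped takes the fast path for known series: no label set is
// allocated at all, the cached ref is enough. Only unknown series pay for
// parsing labels, and their new ref is cached for next time.
func appendScraped(app storage.Appender, cache refCache, rawSeries string, t int64, v float64,
	parseLabels func(string) labels.Labels) error {

	if ref, ok := cache[rawSeries]; ok {
		_, err := app.Append(ref, nil, t, v) // labels are unused when ref is known
		return err
	}
	lset := parseLabels(rawSeries) // slow path: allocate the label set once
	ref, err := app.Append(0, lset, t, v)
	if err != nil {
		return err
	}
	cache[rawSeries] = ref
	return nil
}
```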