* scrape: add label limits per scrape
Add three new limits to the scrape configuration to provide a
mechanism to defend against an unbounded number of labels and excessive
label lengths. If any of these limits is exceeded by a sample from a
scrape, the whole scrape will fail. For all of these configuration
options, a zero value means no limit.
The `label_limit` configuration provides a mechanism to bound the
number of labels per sample in a scrape to a user-defined limit.
This limit is tested against the sample labels plus the discovery
labels, but it excludes the __name__ label from the count since it is a
mandatory Prometheus label to which applying constraints isn't
meaningful.
The `label_name_length_limit` and `label_value_length_limit` options
prevent labels of excessive length. These limits also skip the
__name__ label for the same reasons as the `label_limit` option, and they
also make the scrape fail if any sample has a label name or value whose
length exceeds the predefined limits.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
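A minimal sketch of the kind of per-sample check these options imply, with illustrative names and a stand-in label type rather than the actual scrape-package implementation (the __name__ handling described above is omitted):

```go
package scrape

import "fmt"

// label stands in for one parsed sample label; the real code works on labels.Labels.
type label struct {
	Name, Value string
}

// labelLimits mirrors the three scrape_config options; a zero value means "no limit".
type labelLimits struct {
	labelLimit            int
	labelNameLengthLimit  int
	labelValueLengthLimit int
}

// verifyLabelLimits returns an error, failing the whole scrape, as soon as any
// sample breaks one of the configured limits.
func verifyLabelLimits(lset []label, limits labelLimits) error {
	if limits.labelLimit > 0 && len(lset) > limits.labelLimit {
		return fmt.Errorf("label_limit exceeded: %d labels, limit %d", len(lset), limits.labelLimit)
	}
	for _, l := range lset {
		if limits.labelNameLengthLimit > 0 && len(l.Name) > limits.labelNameLengthLimit {
			return fmt.Errorf("label name length limit exceeded: %q", l.Name)
		}
		if limits.labelValueLengthLimit > 0 && len(l.Value) > limits.labelValueLengthLimit {
			return fmt.Errorf("label value length limit exceeded for label %q", l.Name)
		}
	}
	return nil
}
```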
* scrape: add metrics and alert to label limits
Add three gauges, one for each label limit, to easily access the
limits set for a given scrape target.
Also add a counter to count the number of targets that exceeded the
label limits and thus were dropped. This is useful for the
`PrometheusLabelLimitHit` alert that will notify the users that scraping
some targets failed because they had samples exceeding the label limits
defined in the scrape configuration.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
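A hedged sketch of how such a counter can be defined and registered with client_golang; the metric name and help text here are illustrative, not necessarily the exact upstream ones:

```go
package scrape

import "github.com/prometheus/client_golang/prometheus"

// Counter incremented whenever a scrape is rejected because a sample exceeded
// the configured label limits (name is illustrative).
var targetScrapesExceededLabelLimits = prometheus.NewCounter(
	prometheus.CounterOpts{
		Name: "prometheus_target_scrapes_exceeded_label_limits_total",
		Help: "Total number of scrapes rejected due to label limits.",
	},
)

func init() {
	// Expose the counter on /metrics so it can back an alert such as
	// PrometheusLabelLimitHit.
	prometheus.MustRegister(targetScrapesExceededLabelLimits)
}
```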
* scrape: apply label limits to __name__ label
Apply limits to the __name__ label, which was previously skipped, and
truncate the label names and values in the error messages, as they can
be very long.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
* scrape: remove label limits gauges and refactor
Remove `prometheus_target_scrape_pool_label_limit`,
`prometheus_target_scrape_pool_label_name_length_limit`, and
`prometheus_target_scrape_pool_label_value_length_limit` as they are not
really useful: they don't carry any information about the labels themselves.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
This moves the label lookup into TSDB, whilst still keeping the cached-ref optimisation for repeated Appends.
This makes the API easier to consume and implement. In particular this change is motivated by the scrape-time-aggregation work, which I don't think is possible to implement without it as it needs access to label values.
Signed-off-by: Tom Wilkie <tom.wilkie@gmail.com>
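A rough sketch of the cached-ref pattern this enables, using a trimmed-down stand-in interface rather than the real storage.Appender (the signature and the pkg/labels import path are as they were around that time and are illustrative):

```go
package scrape

import "github.com/prometheus/prometheus/pkg/labels"

// unifiedAppender stands in for an Appender with a single Append call that
// takes an optional cached reference and returns the reference to reuse.
type unifiedAppender interface {
	Append(ref uint64, l labels.Labels, t int64, v float64) (uint64, error)
}

// cachedAppend keeps the cached-ref optimisation: pass a zero ref the first
// time a series is seen and remember the ref handed back by TSDB, so repeated
// appends for the same series skip the label lookup.
func cachedAppend(app unifiedAppender, cache map[string]uint64, lset labels.Labels, t int64, v float64) error {
	key := lset.String()
	ref, err := app.Append(cache[key], lset, t, v)
	if err != nil {
		return err
	}
	cache[key] = ref
	return nil
}
```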
* Testify: move to require
Move from testify's assert to require so that tests fail early in case of errors.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* More moves
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Refactor test assertions
This pull request gets rid of assert.True where possible, in favour of
fine-grained assertions (see the sketch after this commit).
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
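An illustrative test showing both points: require aborts the test on the first failing assertion, and fine-grained helpers replace opaque assert.True checks (doSomething is a hypothetical function under test):

```go
package scrape

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// doSomething is a hypothetical function under test, used only for illustration.
func doSomething() (string, error) { return "expected", nil }

func TestExample(t *testing.T) {
	got, err := doSomething()
	require.NoError(t, err)           // rather than assert.True(t, err == nil)
	require.Equal(t, "expected", got) // rather than assert.True(t, got == "expected")
}
```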
This PR tests that de-duplicated targets are actually started.
It is a unit test for this line of code:
072b9649a3/scrape/scrape.go (L457)
which works and is necessary but was not covered by tests yet.
It also tests, in the target limit test, that scrapes are started in the
normal way.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
With this, the storage tests inside the scrape package are more
realistic.
Discovered with #7593, but fixed independently as #7593 will probably
take some time.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
* Add errors and Warnings to SeriesSet
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
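A sketch of the shape this gives the SeriesSet interface; this is illustrative, not the exact upstream definition:

```go
package storage

// Warnings collects non-fatal problems hit while building or iterating a SeriesSet.
type Warnings []error

// Series is elided here; it exposes the series labels and a sample iterator.
type Series interface{}

// SeriesSet: plain iteration plus an error and warnings that callers inspect
// once Next returns false.
type SeriesSet interface {
	Next() bool
	At() Series
	Err() error
	Warnings() Warnings
}
```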
* Change Querier interface and refactor accordingly
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor promql/engine to propagate warnings at eval stage
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Make sure all the series from all Selects are pre-advanced
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Separate merge series sets
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Clean
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactor merge querier failure handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Refactored and simplified fanout with improvements from incoming chunk iterator PRs.
* Secondary-querier logic is hidden, replacing the awkward failed-series-set logic we had.
* Fanout is well commented.
* Closing the fanout records all errors.
* MergeQuerier has an improved, clearer API.
* deferredGenericMergeSeriesSet is not needed as we return no samples anyway for failed series sets (Next returns false).
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Fix formatting
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix CI issues
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Added final tests for error handling.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Addressed Brian's comments.
* Moved hints in populate to be allocated only when needed.
* Used sync.Once in the secondary Querier to achieve all-or-nothing partial-response logic (sketched below).
* Calling Select after the first Next has been issued will panic.
NOTE: in lazySeriesSet we could in theory just panic; however, I think we can
simply return an error, as it will panic in expand anyway.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
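A rough sketch of the all-or-nothing behaviour mentioned above; the names are illustrative and this is not the upstream implementation:

```go
package storage

import "sync"

// secondaryDecision records, exactly once, whether the secondary querier's
// results may be used at all, so a partially failed secondary querier
// contributes either all of its series sets or none of them.
type secondaryDecision struct {
	once sync.Once
	ok   bool
}

// use reports whether the secondary results may be used; check runs only on
// the first call, and every later caller observes the same outcome.
func (d *secondaryDecision) use(check func() bool) bool {
	d.once.Do(func() { d.ok = check() })
	return d.ok
}
```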
* Utilize errWithWarnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix recently introduced expansion issue
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add tests for secondary querier error handling
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Implement lazy merge
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Add name to test cases
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Reorganize
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Address review comments
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Remove redundant warnings
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
* Fix rebase mistake
Signed-off-by: Kemal Akkoyun <kakkoyun@gmail.com>
Co-authored-by: Bartlomiej Plotka <bwplotka@gmail.com>
* Separate scrape add error checking out into its own function.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* pass sampleLimitError to checkAddError instead of returning an error
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Return bool, error from checkAddError so we can properly handle
ErrNotFound for AddFast. This should in theory never happen, but the
previous code path handled this case. Adds a test for this, which passes
on master and fails on the previous commit.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
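A hedged sketch of the shape such a helper could take; this is not the upstream code, and errSampleLimit stands in for the scrape package's sample-limit sentinel error:

```go
package scrape

import (
	"errors"

	"github.com/prometheus/prometheus/storage"
)

// errSampleLimit stands in for the scrape package's sample-limit sentinel.
var errSampleLimit = errors.New("sample limit exceeded")

// checkAddError reports whether the sample counts as added and returns a fatal
// error only when the scrape should stop. storage.ErrNotFound is surfaced so
// the caller can fall back from AddFast to a full Add.
func checkAddError(err error, sampleLimitErr *error) (bool, error) {
	switch err {
	case nil:
		return true, nil
	case storage.ErrNotFound:
		return false, storage.ErrNotFound
	case storage.ErrOutOfOrderSample, storage.ErrDuplicateSampleForTimestamp, storage.ErrOutOfBounds:
		return false, nil // drop the sample, keep scraping
	case errSampleLimit:
		*sampleLimitErr = err // remember it, keep parsing to count samples
		return false, nil
	default:
		return false, err
	}
}
```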
* Address comment changes.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
* Move sampleAdded inside the loop iteration within append, since that's
the only block the variable is used in.
Signed-off-by: Callum Styan <callumstyan@gmail.com>
This is technically a BREAKING CHANGE, but it was like this from the beginning: I just noticed that we rely in
Prometheus on remote read being sorted. This is because we use the selected data from remote reads in MergeSeriesSet,
which relies on sorting.
I found during work on https://github.com/prometheus/prometheus/pull/5882 that
we do so many repetitions because of this, for no good reason. I think
I found a good balance between convenience and readability with just one method.
The smaller the interface, the better.
Also I don't know what TestSelectSorted was testing, but now it's testing sorting.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
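A sketch of what a single Select method can look like, with stub types so the snippet stands alone; the signature is illustrative rather than a statement of the final API:

```go
package storage

import "github.com/prometheus/prometheus/pkg/labels"

// Stub types so the sketch stands alone; the real ones live alongside the querier.
type (
	SelectHints struct{ Start, End int64 }
	SeriesSet   interface{ Next() bool }
)

// Querier with one Select method: callers that need sorted series, e.g.
// anything feeding MergeSeriesSet or remote read, ask for it explicitly via
// sortSeries instead of calling a separate SelectSorted.
type Querier interface {
	Select(sortSeries bool, hints *SelectHints, matchers ...*labels.Matcher) SeriesSet
}
```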
This fixes #6992, which was introduced by #6777. There was an
intermediate component which translated TSDB errors into storage errors,
but that component was deleted and this bug went unnoticed until we
were looking at the Prombench results. Without this fix, the scrape fails
instead of dropping samples or falling back to "Add" when the series have
been garbage collected.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
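A sketch of the behaviour the fix restores, using a trimmed stand-in for the Appender of that era (Add/AddFast) and a local sentinel instead of the storage package's error; illustrative only:

```go
package scrape

import (
	"errors"

	"github.com/prometheus/prometheus/pkg/labels"
)

// legacyAppender stands in for an Appender with the Add / AddFast pair.
type legacyAppender interface {
	Add(l labels.Labels, t int64, v float64) (uint64, error)
	AddFast(ref uint64, t int64, v float64) error
}

// errNotFound stands in for the storage package's "not found" sentinel.
var errNotFound = errors.New("not found")

// addSample: a stale cached ref makes AddFast fail with "not found" because the
// series was garbage collected, in which case we fall back to a full Add
// instead of failing the whole scrape.
func addSample(app legacyAppender, ref uint64, lset labels.Labels, t int64, v float64) (uint64, error) {
	if ref != 0 {
		err := app.AddFast(ref, t, v)
		if err == nil {
			return ref, nil
		}
		if !errors.Is(err, errNotFound) {
			return 0, err
		}
	}
	return app.Add(lset, t, v)
}
```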
* tsdb: don't allow ingesting empty labelsets
When we ingest an empty labelset into the head, further blocks cannot be
compacted, with the error:
```
level=error ts=2020-02-27T21:26:58.379Z caller=db.go:659 component=tsdb
msg="compaction failed" err="persist head block: write compaction:
add series: out-of-order series added with label set \"{}\" / prev:
\"{}\""
```
We should therefore reject those invalid empty labelsets upfront.
This can be reproduced with the following:
```
cat << END > prometheus.yml
scrape_configs:
- job_name: 'prometheus'
  scrape_interval: 1s
  basic_auth:
    username: test
    password: test
  metric_relabel_configs:
  - regex: ".*"
    action: labeldrop
  static_configs:
  - targets:
    - 127.0.1.1:9090
END
./prometheus --storage.tsdb.min-block-duration=1m
```
And wait a few minutes.
Signed-off-by: Julien Pivotto <roidelapluie@inuits.eu>
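A minimal sketch of the upfront rejection described above, with an illustrative sentinel error rather than the one used upstream (labels.Labels is the slice type from that era's pkg/labels):

```go
package tsdb

import (
	"errors"

	"github.com/prometheus/prometheus/pkg/labels"
)

// errEmptyLabelSet is illustrative; the real code wraps an existing sentinel.
var errEmptyLabelSet = errors.New("empty labelset")

// rejectEmptyLabelSet validates a sample's labelset before it reaches the head
// appender, e.g. when labeldrop relabelling has removed every label.
func rejectEmptyLabelSet(lset labels.Labels) error {
	if len(lset) == 0 {
		return errEmptyLabelSet
	}
	return nil
}
```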
This is part of https://github.com/prometheus/prometheus/pull/5882 that can be done to simplify things.
All todos I added will be fixed in follow up PRs.
* querier.Querier, querier.Appender, querier.SeriesSet, and querier.Series interfaces merged
with storage interface.go. All imports were updated accordingly.
* querier.SeriesIterator replaced by chunkenc.Iterator
* Added the chunkenc.Iterator.Seek method and tests for the xor implementation (see the sketch below).
* Since we properly handle SelectParams for Select methods, I adjusted the min and max timestamps
based on that. This should help performance for queries with functions like offset.
* Added Seek to deletedIterator and a test.
* storage/tsdb was removed as it was only unnecessary glue with incompatible structs.
No logic was changed, only different source of abstractions, so no need for benchmarks.
Signed-off-by: Bartlomiej Plotka <bwplotka@gmail.com>
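A trimmed sketch of the iterator shape after this change, as a reminder of what Seek adds; signatures reflect that era of the codebase and are illustrative:

```go
package chunkenc

// Iterator iterates over the samples of a chunk.
type Iterator interface {
	// Next advances to the next sample and reports whether one exists.
	Next() bool
	// Seek advances to the first sample with timestamp >= t and reports
	// whether such a sample exists.
	Seek(t int64) bool
	// At returns the current timestamp/value pair.
	At() (int64, float64)
	// Err returns any error hit during iteration.
	Err() error
}
```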
To test the implementation of our metric metadata API, we need to represent various states of metadata in the scrape metadata store. That is currently not possible as the interface and method to set the store are private.
This changes the interface, the list and get methods, and the SetMetadataStore function to be public.
Consequently, the scrapeCache implementation needs to be renamed to match the new signature.
Signed-off-by: gotjosh <josue@grafana.com>
When using both a label and the suffix+label in the
relabel config, it is possible that Prometheus removes
the suffix+label for no obvious reason. This is due to a
collision when merging labels from the target and from
the sample.
Signed-off-by: Geoffrey Beausire <g.beausire@criteo.com>
Fixes https://github.com/prometheus/common/issues/36
Move logic handling this into the labels package,
so all the cases are handled in one place and we're
less likely to have this come up again.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
This is an estimate of churn, with series being added to the cache being
considered churn. This will have both false positives (e.g. series
appearing and disappearing) and false negatives (e.g. series that hit
sample_limit but are still created in the head block), but should be generally
useful as-is.
Relevant docs live in another repo.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
* Reload certificates from disk automatically
This change bumps github.com/prometheus/common to include
https://github.com/prometheus/common/pull/173
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* scrape: close idle connections on reload/stop
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* use v0.3.0 tag
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
Now that we're not losing the scrape cache across failed
scrapes, a scrape that continually fails but has varying
series or metadata (e.g. timestamps in metric names,
plus hitting sample_limit) would grow the cache indefinitely.
Add some code to catch that, and flush the cache anyway.
Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
i) Uses the more idiomatic Wrap and Wrapf methods for creating nested errors.
ii) Fixes some incorrect usages of fmt.Errorf where the error messages don't have any formatting directives.
iii) Does away with the use of fmt package for errors in favour of pkg/errors
Signed-off-by: tariqibrahim <tariq181290@gmail.com>
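A small illustration of the three points, using pkg/errors; loadConfig and the path are made up for the example:

```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

// loadConfig is a made-up example function.
func loadConfig(path string) error {
	f, err := os.Open(path)
	if err != nil {
		// Wrapf adds context with formatting, instead of
		// fmt.Errorf("open config %s: %s", path, err).
		return errors.Wrapf(err, "open config %s", path)
	}
	defer f.Close()
	return nil
}

func main() {
	if err := loadConfig("/nonexistent"); err != nil {
		fmt.Println(err)
	}
}
```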
* scrape: Add global jitter for HA server
Covers issue in https://github.com/prometheus/prometheus/pull/4926#issuecomment-449039848
where an HA setup becomes a problem for targets that cannot be scraped simultaneously.
The new per-server jitter relies on the hostname and external labels, which necessarily have to be unique.
As before, the scrape offset is calculated with regard to absolute time, so even a
restart/reload doesn't change the scrape time per scrape target + Prometheus instance.
The FQDN is used if possible, otherwise we fall back to the hostname. An extra seed is added
to the server hash so that machines with the same hostname, but in a
different DC, can be distinguished.
Signed-off-by: Aleksei Semiglazov <xjewer@gmail.com>
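A rough sketch of deriving such a per-server seed, hashing the hostname together with the external labels; names and details are illustrative, not the upstream implementation:

```go
package scrape

import (
	"hash/fnv"
	"os"

	"github.com/prometheus/prometheus/pkg/labels"
)

// jitterSeed derives a stable, per-server seed from the hostname and the
// external labels, so each Prometheus server in an HA pair computes a
// different but restart-stable scrape offset.
func jitterSeed(externalLabels labels.Labels) (uint64, error) {
	hostname, err := os.Hostname() // the real code prefers the FQDN when available
	if err != nil {
		return 0, err
	}
	h := fnv.New64a()
	if _, err := h.Write([]byte(hostname + externalLabels.String())); err != nil {
		return 0, err
	}
	return h.Sum64(), nil
}
```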
* scrape: catch errors when creating HTTP clients
This change makes sure that no scrape pool is created with a nil HTTP
client.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Address Tariq's comment
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
* Address Brian's comment
Signed-off-by: Simon Pasquier <spasquie@redhat.com>