Rule warnings are logged with numDropped=N while every other component uses num_dropped=N:
```
notifier/notifier.go: level.Warn(n.logger).Log("msg", "Alert batch larger than queue capacity, dropping alerts", "num_dropped", d)
notifier/notifier.go: level.Warn(n.logger).Log("msg", "Alert notification queue full, dropping alerts", "num_dropped", d)
storage/remote/write_handler.go: _ = level.Warn(h.logger).Log("msg", "Error on ingesting out-of-order exemplars", "num_dropped", outOfOrderExemplarErrs)
rules/group.go: level.Warn(logger).Log("msg", "Error on ingesting out-of-order result from rule evaluation", "num_dropped", numOutOfOrder)
rules/group.go: level.Warn(logger).Log("msg", "Error on ingesting too old result from rule evaluation", "num_dropped", numTooOld)
rules/group.go: level.Warn(logger).Log("msg", "Error on ingesting results from rule evaluation with different value but same timestamp", "num_dropped", numDuplicates)
scrape/scrape.go: level.Warn(sl.l).Log("msg", "Error on ingesting out-of-order samples", "num_dropped", appErrs.numOutOfOrder)
scrape/scrape.go: level.Warn(sl.l).Log("msg", "Error on ingesting samples with different value but same timestamp", "num_dropped", appErrs.numDuplicates)
scrape/scrape.go: level.Warn(sl.l).Log("msg", "Error on ingesting samples that are too old or are too far into the future", "num_dropped", appErrs.numOutOfBounds)
scrape/scrape.go: level.Warn(sl.l).Log("msg", "Error on ingesting out-of-order exemplars", "num_dropped", appErrs.numExemplarOutOfOrder)
```
Rename numDropped to num_dropped for consistency.
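As a minimal, hypothetical illustration of the rename (the message text below is made up, not the actual rule warning), only the log key changes:
```go
package main

import (
	"os"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stderr)
	numDropped := 3

	// Before: rule warnings used a camelCase key.
	level.Warn(logger).Log("msg", "some rule warning", "numDropped", numDropped)
	// After: snake_case, matching the notifier, remote-write and scrape warnings above.
	level.Warn(logger).Log("msg", "some rule warning", "num_dropped", numDropped)
}
```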
Signed-off-by: Łukasz Mierzwa <l.mierzwa@gmail.com>
Using testify outside of unit tests results in panics rather than a
useful error for the user.
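The direction of the fix, sketched with a hypothetical helper (the real change touches different code): return an error the caller can print instead of calling testify's require, which needs a *testing.T and otherwise panics:
```go
package main

import (
	"fmt"
	"os"
)

// checkEqual stands in for a testify assertion such as require.Equal: instead
// of panicking outside of a test, it returns an error for the user.
func checkEqual(want, got string) error {
	if want != got {
		return fmt.Errorf("expected %q, got %q", want, got)
	}
	return nil
}

func main() {
	if err := checkEqual("up", "down"); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```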
Fixes #13703
Signed-off-by: David Leadbeater <dgl@dgl.cx>
* add a context cancellation check to the get series iteration (see the sketch after this list)
* add warnings and a closer on error
* add a test
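A minimal sketch, assuming the iteration walks a storage.SeriesSet (the real change may hook in elsewhere); the idea is simply to check ctx.Err() on every step and return early, leaving the caller to close the iterator and surface any warnings gathered so far:
```go
package sketch

import (
	"context"

	"github.com/prometheus/prometheus/storage"
)

// drainSeries is a hypothetical helper illustrating the pattern: stop walking
// the set as soon as the request context is cancelled.
func drainSeries(ctx context.Context, set storage.SeriesSet) error {
	for set.Next() {
		if err := ctx.Err(); err != nil {
			return err
		}
		_ = set.At() // process the series here
	}
	return set.Err()
}
```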
---------
Signed-off-by: Erlan Zholdubai uulu <erlanz@amazon.com>
After #13771, the list for specific parts of the codebase looks like
it is part of the "general maintainers" list. This commit makes things
clearer.
Signed-off-by: beorn7 <beorn@grafana.com>
Use `docker-repo-name` as the make target so that a repo Makefile can
override the common target implementation.
Signed-off-by: SuperQ <superq@gmail.com>
Since we skip repos that don't have a Dockerfile, we can force sync the
`.github/workflows/container_description.yml` config.
Signed-off-by: SuperQ <superq@gmail.com>
I was bored on a train and I spent some amount of time trying to scratch
some nanoseconds off Labels.Compare when running with stringlabels.
I would be ashamed to admit the real amount of time I spent on it.
The worst thing is, I can't really explain why this is performing so
much better, and someone should re-run the benchmarks on their machine
to confirm that it's not something related to general relativity because
the train is moving. I also added some extra real-life benchmark cases
with longer labelsets (these aren't the longest we have in production,
but Kubernetes labelsets are fairly common in Prometheus so I thought it
would be nice to have them).
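For a sense of what such a case might look like, here is a rough, hypothetical benchmark with made-up Kubernetes-style label values (not the ones actually added):
```go
package labels_test

import (
	"testing"

	"github.com/prometheus/prometheus/model/labels"
)

// BenchmarkCompareKubernetesStyle compares two longer label sets that only
// differ near the end, roughly mimicking real pod metrics.
func BenchmarkCompareKubernetesStyle(b *testing.B) {
	x := labels.FromStrings(
		"__name__", "kube_pod_container_status_running",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kube-state-metrics",
		"namespace", "default",
		"pod", "app-7d4b9c7b6d-abcde",
	)
	y := labels.FromStrings(
		"__name__", "kube_pod_container_status_running",
		"container", "app",
		"instance", "10.0.0.1:8080",
		"job", "kube-state-metrics",
		"namespace", "default",
		"pod", "app-7d4b9c7b6d-zzzzz", // differs only at the very end
	)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		_ = labels.Compare(x, y)
	}
}
```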
My benchmarks show this diff:
```
goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/model/labels
                                        │     old     │                new                 │
                                        │   sec/op    │   sec/op     vs base               │
Labels_Compare/equal                      5.898n ± 0%   5.875n ± 1%   -0.40% (p=0.037 n=10)
Labels_Compare/not_equal                  11.78n ± 2%   11.01n ± 1%   -6.54% (p=0.000 n=10)
Labels_Compare/different_sizes            4.959n ± 1%   4.906n ± 2%   -1.05% (p=0.050 n=10)
Labels_Compare/lots                       21.32n ± 0%   17.54n ± 5%  -17.75% (p=0.000 n=10)
Labels_Compare/real_long_equal            15.06n ± 1%   14.92n ± 0%   -0.93% (p=0.000 n=10)
Labels_Compare/real_long_different_end    25.20n ± 0%   24.43n ± 0%   -3.04% (p=0.000 n=10)
geomean                                   11.86n        11.25n        -5.16%
```
Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Also remove myself :)
Arve has been doing a lot of maintenance on the upstream component
and also reviewing PRs on Prometheus for this OTLP ingest. I continue
to have less and less time for this, so I'd like to make Arve a maintainer
for OTLP Ingestion.
Signed-off-by: Goutham <gouthamve@gmail.com>
When Prometheus scrapes a target and sees the same time series repeated multiple times, it currently silently ignores that. This change adds a test for that and fixes the scrape loop so that:
* Only the first sample for each unique time series is appended
* Duplicated samples increment the prometheus_target_scrapes_sample_duplicate_timestamp_total metric
This allows one to identify such scrape jobs and targets.
Also fix some tests and benchmarks.
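A rough sketch of the intended behaviour, using simplified stand-in types rather than the real scrape loop structures:
```go
package main

import "fmt"

// sample is a simplified stand-in for a scraped sample; the real scrape loop
// works on parsed exposition data and label sets, not this struct.
type sample struct {
	series string // canonical label set, e.g. `up{job="x"}`
	ts     int64
	value  float64
}

// appendScrape keeps only the first sample per unique series and counts the
// rest as duplicates; in Prometheus the duplicate count feeds the
// prometheus_target_scrapes_sample_duplicate_timestamp_total metric.
func appendScrape(samples []sample) (appended []sample, duplicates int) {
	seen := make(map[string]struct{}, len(samples))
	for _, s := range samples {
		if _, dup := seen[s.series]; dup {
			duplicates++
			continue
		}
		seen[s.series] = struct{}{}
		appended = append(appended, s)
	}
	return appended, duplicates
}

func main() {
	out, dups := appendScrape([]sample{
		{series: `up{job="x"}`, ts: 1, value: 1},
		{series: `up{job="x"}`, ts: 1, value: 0}, // repeated series: dropped and counted
	})
	fmt.Println(len(out), dups) // 1 1
}
```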