When calculating dependencies between rules, we sometimes run into `{__name__...}` matchers.
These can be used the same way as the actual rule names.
This will enable even more rules to run concurrently.
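For illustration, a minimal sketch of folding such matchers into the name collection (the selectorNames helper and surrounding code are hypothetical, not the actual rules package logic):
```
// Minimal sketch, assuming the promql/parser and model/labels packages; the
// selectorNames helper is hypothetical, not the actual rules package code.
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/model/labels"
	"github.com/prometheus/prometheus/promql/parser"
)

// selectorNames collects the metric names an expression reads, treating an
// explicit {__name__="foo"} equality matcher like the selector name foo.
func selectorNames(expr parser.Expr) []string {
	var names []string
	parser.Inspect(expr, func(node parser.Node, _ []parser.Node) error {
		if vs, ok := node.(*parser.VectorSelector); ok {
			if vs.Name != "" {
				names = append(names, vs.Name) // ordinary selector: foo{...}
			} else {
				for _, m := range vs.LabelMatchers {
					if m.Name == labels.MetricName && m.Type == labels.MatchEqual {
						names = append(names, m.Value) // {__name__="foo", ...}
					}
				}
			}
		}
		return nil
	})
	return names
}

func main() {
	expr, err := parser.ParseExpr(`sum(rate({__name__="http_requests_total"}[5m]))`)
	if err != nil {
		panic(err)
	}
	fmt.Println(selectorNames(expr)) // [http_requests_total]
}
```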
The new logic is also not slower:
```
julienduchesne@triceratops prometheus % benchstat test-old.txt test.txt
goos: darwin
goarch: arm64
pkg: github.com/prometheus/prometheus/rules
cpu: Apple M3 Pro
                 │ test-old.txt │              test.txt               │
                 │    sec/op    │    sec/op     vs base                │
DependencyMap-11   1.206µ ± 7%    1.024µ ± 7%  -15.10% (p=0.000 n=10)
                 │ test-old.txt │              test.txt               │
                 │     B/op     │     B/op      vs base                │
DependencyMap-11   1.720Ki ± 0%   1.438Ki ± 0%  -16.35% (p=0.000 n=10)
                 │ test-old.txt │              test.txt               │
                 │  allocs/op   │   allocs/op   vs base                │
DependencyMap-11     39.00 ± 0%     34.00 ± 0%  -12.82% (p=0.000 n=10)
```
Signed-off-by: Julien Duchesne <julien.duchesne@grafana.com>
Change switch case order in scrapeLoop
This commit reorders the error identification switch cases for better production performance and documents the reasoning behind the case order in comments.
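A minimal sketch of the pattern (illustrative only, not the actual scrapeLoop code; the frequency ordering shown is an assumption):
```
// Illustrative sketch, not the actual scrapeLoop code: in a switch over
// errors.Is checks the cases are evaluated top to bottom, so listing the
// outcomes that dominate in production first avoids evaluating comparisons
// for rare errors on every appended sample.
package scrapesketch

import (
	"errors"

	"github.com/prometheus/prometheus/storage"
)

func classifyAppendError(err error) string {
	switch {
	case err == nil:
		// By far the most common outcome: the sample was appended.
		return "ok"
	case errors.Is(err, storage.ErrOutOfOrderSample):
		return "out-of-order"
	case errors.Is(err, storage.ErrDuplicateSampleForTimestamp):
		return "duplicate"
	case errors.Is(err, storage.ErrOutOfBounds):
		// Rare: timestamp outside the accepted window.
		return "out-of-bounds"
	default:
		return "other"
	}
}
```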
---------
Signed-off-by: Laimis Juzeliūnas <asnelaimis@gmail.com>
Rationales:
* metadata-wal-records might be deprecated and replaced going forward: https://github.com/prometheus/prometheus/issues/15911
* PRW 2.0 works just fine without metadata (although it then sends untyped metrics, as expected).
Signed-off-by: bwplotka <bwplotka@gmail.com>
* rulefmt: add tests with YAML aliases for Alert/Record/Expr
Although somewhat discouraged in favour of using proper configuration
management tools to generate full YAML, it can still be useful in some
situations to use YAML anchors/aliases in rules.
The current implementation is however confusing: aliases work
everywhere except on the alert/record name and expr.
This first commit adds (failing) tests to illustrate the issue; the next
one fixes it. The YAML test file is intentionally filled with anchors
and aliases. Although this is probably not representative of a real-world
use case (which would have fewer of them), it errs on the safer side.
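For illustration, a rule group of the kind exercised here might reuse anchors like this (a hypothetical minimal example decoded with gopkg.in/yaml.v3, not the actual test file):
```
// Hypothetical minimal example, not the actual test file: anchors on the
// record name and expression are reused via aliases elsewhere in the group.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const ruleFile = `
groups:
  - name: example
    rules:
      - record: &rec job:up:avg
        expr: &expr avg by (job) (up)
      - alert: ExampleAlert
        expr: *expr
        labels:
          source_record: *rec
`

func main() {
	// Plain yaml.v3 decoding resolves the aliases; the issue described above
	// was in how rulefmt handled the name and expr fields specifically.
	var doc map[string]interface{}
	if err := yaml.Unmarshal([]byte(ruleFile), &doc); err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```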
Signed-off-by: François HORTA <fhorta@scaleway.com>
* rulefmt: support YAML aliases for Alert/Record/Expr
This fixes the use of YAML aliases in alert/recording rule names and
expressions. A side effect of this change is that the RuleNode YAML type is
no longer propagated deeper in the codebase; instead, the generic Rule type
can now be used.
Signed-off-by: François HORTA <fhorta@scaleway.com>
* rulefmt: Add test for YAML merge combined with aliases
Currently this does work, but adding a test for the related
functionality here makes sense.
Signed-off-by: David Leadbeater <dgl@dgl.cx>
* rulefmt: Rebase to latest changes
Signed-off-by: David Leadbeater <dgl@dgl.cx>
---------
Signed-off-by: François HORTA <fhorta@scaleway.com>
Signed-off-by: David Leadbeater <dgl@dgl.cx>
Co-authored-by: David Leadbeater <dgl@dgl.cx>
This is very useful when piping the input file to stdin and then using
/dev/stdin as the input file, e.g.
xzcat dump.xz |
promtool tsdb create-blocks-from openmetrics /dev/stdin /tmp/data
Signed-off-by: Nicolas Peugnet <nicolas.peugnet@lip6.fr>
* model/textparse: Change parser interface Metric(...) string to Labels(...)
Simplified the interface, given that no one was using the return argument.
Renamed for clarity too.
Found and discussed https://github.com/prometheus/prometheus/pull/15731#discussion_r1950916842
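Roughly, the shape of the change (signatures paraphrased from this description; the rest of the interface is omitted):
```
// Paraphrased from the description above, not copied from the codebase; all
// other methods of the textparse Parser interface are omitted.
package textparsesketch

import "github.com/prometheus/prometheus/model/labels"

type parserBefore interface {
	// Old shape: also returned the metric's string form, which no caller used.
	Metric(l *labels.Labels) string
}

type parserAfter interface {
	// New shape: only fills in the labels; the unused return value is gone.
	Labels(l *labels.Labels)
}
```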
Signed-off-by: bwplotka <bwplotka@gmail.com>
* Fixed comments; optimized away an unneeded copy for om and text.
Signed-off-by: bwplotka <bwplotka@gmail.com>
---------
Signed-off-by: bwplotka <bwplotka@gmail.com>
This should help a bit with the header icon overflow on narrow screens and
also overall make things look less cluttered.
Signed-off-by: Julius Volz <julius.volz@gmail.com>
* util/httputil: Benchmark newCompressedResponseWriter
This benchmark illustrates that newCompressedResponseWriter incurs a
prohibitive number of heap allocations when handling a request containing a
malicious Accept-Encoding header.
Signed-off-by: jub0bs <jcretel-infosec+github@protonmail.com>
* util/httputil: Improve newCompressedResponseWriter
This change dramatically reduces the heap allocations (in bytes)
incurred when handling a request containing a malicious Accept-Encoding header.
Below are some benchmark results; for conciseness, I've omitted the name of the
benchmark function (BenchmarkNewCompressionHandler_MaliciousAcceptEncoding):
```
goos: darwin
goarch: amd64
pkg: github.com/prometheus/prometheus/util/httputil
cpu: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
│ old │ new │
│ sec/op │ sec/op vs base │
18.60m ± 2% 13.54m ± 3% -27.17% (p=0.000 n=10)
│ old │ new │
│ B/op │ B/op vs base │
16785442.50 ± 0% 32.00 ± 0% -100.00% (p=0.000 n=10)
│ old │ new │
│ allocs/op │ allocs/op vs base │
2.000 ± 0% 1.000 ± 0% -50.00% (p=0.000 n=10)
```
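For context, a sketch of the general allocation-avoidance technique (the acceptsEncoding helper is hypothetical, not the actual util/httputil code):
```
// Sketch of the general technique, not the actual util/httputil code: avoid
// strings.Split, which allocates one slice entry per comma in an
// attacker-controlled Accept-Encoding value, and instead walk the value one
// token at a time with strings.Cut, which performs no allocations.
package main

import (
	"fmt"
	"strings"
)

// acceptsEncoding reports whether the Accept-Encoding header value lists the
// given encoding. The helper name and the simplified token matching
// (ignoring q-values) are illustrative only.
func acceptsEncoding(header, encoding string) bool {
	for header != "" {
		var token string
		token, header, _ = strings.Cut(header, ",")
		if strings.TrimSpace(token) == encoding {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(acceptsEncoding("br;q=0.8, gzip, deflate", "gzip")) // true
}
```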
Signed-off-by: jub0bs <jcretel-infosec+github@protonmail.com>
---------
Signed-off-by: jub0bs <jcretel-infosec+github@protonmail.com>
Also:
* split benchmark functions to make sure no one compares across parsers.
* testdata files have meaningful names reflecting the type they represent
* promtestdata.txt now has all types, taken directly from a long-running Prometheus (https://demo.do.prometheus.io/)
Needed for https://github.com/prometheus/prometheus/pull/15731
Signed-off-by: bwplotka <bwplotka@gmail.com>
Move to 24h-based time formatting and unambiguous date formats. Also add
more details to the default formatting of each tick instead of only showing
e.g. minutes/seconds at rollover ticks for the shorter breakpoints.
Fixes https://github.com/prometheus/prometheus/issues/15913
Signed-off-by: Julius Volz <julius.volz@gmail.com>
This is also meant to document the actual implementation, but
see #13934 for the current state.
This also improves and streamlines some parts of the documentation
that are not strictly native histogram related, but are colocated with
them. In particular, the section about aggregation operators got
restructured quite a bit, including the removal of a quite verbose
example for `limit_ratio` (which was just too long at this location
and also a bit questionable in its usefulness).
Signed-off-by: beorn7 <beorn@grafana.com>
Around Mimir compactions we see logging in ShardedPostings doing massive allocations and driving GC up to 50% of CPU.
Signed-off-by: Dimitar Dimitrov <dimitar.dimitrov@grafana.com>