prometheus/scrape
Brian Brazil f7184978f4 Protect against memory exhaustion when scraping.
Now that we're not losing the scrape cache across failed
scrapes, a scrape that continually failed but had varying
series or metadata (e.g. timestamps in metric names,
plus hitting sample_limit) would grow the cache indefinitely.

Add some code to catch that, and flush the cache anyway.

Signed-off-by: Brian Brazil <brian.brazil@robustperception.io>
2019-04-04 19:09:11 +01:00
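
A minimal sketch of the idea described in the commit message, not the actual Prometheus implementation: the cache records its size after the last successful scrape, and when a failed scrape leaves the cache far larger than that baseline, it is flushed. The type name cache, the method iterDone, the field lastSuccessSize, and the growth threshold are all hypothetical and for illustration only.

    package main

    import "fmt"

    // cache is a simplified stand-in for the scrape cache: it maps series
    // strings seen during scrapes to cached entries.
    type cache struct {
    	series map[string]struct{}
    	// lastSuccessSize is the cache size recorded after the last
    	// successful scrape; it bounds how much growth failed scrapes
    	// may accumulate before a forced flush.
    	lastSuccessSize int
    }

    // iterDone is called at the end of every scrape iteration. On success it
    // records the current size; on failure it checks whether the cache has
    // grown far past the last successful size (e.g. varying series names plus
    // a sample_limit failure) and, if so, flushes it to keep memory bounded.
    func (c *cache) iterDone(scrapeOK bool) {
    	size := len(c.series)
    	switch {
    	case scrapeOK:
    		c.lastSuccessSize = size
    	case size > 2*c.lastSuccessSize+1000: // illustrative threshold, not Prometheus's
    		c.series = map[string]struct{}{}
    		fmt.Println("flushed cache after repeated failed scrapes, size was", size)
    	}
    }

    func main() {
    	c := &cache{series: map[string]struct{}{}}
    	// Simulate failing scrapes that each introduce new series names.
    	for i := 1; i <= 5000; i++ {
    		c.series[fmt.Sprintf("metric_at_ts_%d", i)] = struct{}{}
    		if i%100 == 0 {
    			c.iterDone(false)
    		}
    	}
    }
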
testdata
helpers_test.go
manager.go refine error handling in prometheus (#5388) 2019-03-26 00:01:12 +01:00
manager_test.go refine error handling in prometheus (#5388) 2019-03-26 00:01:12 +01:00
scrape.go Protect against memory exhaustion when scraping. 2019-04-04 19:09:11 +01:00
scrape_test.go Protect against memory exhaustion when scraping. 2019-04-04 19:09:11 +01:00
target.go refine error handling in prometheus (#5388) 2019-03-26 00:01:12 +01:00
target_test.go scrape: Add global jitter for HA server (#5181) 2019-03-12 10:46:15 +00:00