Commit Graph

512 Commits

Author SHA1 Message Date
Fabian Reinartz d289dc55c3 storage: update TSDB 2017-05-22 11:53:08 +02:00
Brian Brazil 0920972f79 Initialise scraped sample map, and rename to series map. 2017-05-16 18:33:51 +01:00
Brian Brazil bf38963118 Plumb through logger with target field to scrape loop. 2017-05-16 18:33:51 +01:00
Brian Brazil d657d722dc Log count of duplicates/out of order samples as warnings.
Log each individual sample at debug level.
2017-05-16 18:33:51 +01:00
Brian Brazil 8b9d3e7547 Put end of run staleness handler in separate function.
Improve log message.
2017-05-16 18:33:51 +01:00
Brian Brazil d532272520 Add stalemarkers to synthetic series too when target stops. 2017-05-16 18:33:51 +01:00
Brian Brazil b87d3ca9ea Create stale markers when a target is stopped.
When a target is no longer returned from SD, stop()
is called. However, it may be recreated before the
next scrape interval happens, so we wait to set stale
markers until the scrape of the new target would have
happened and been ingested, which is 2 scrape intervals.

If we're shutting down, the context will be cancelled,
so return immediately rather than holding things up for
potentially minutes waiting to safely set stale markers
no newer than now. If the server starts back up again
immediately, all is well. If not, we're missing some
stale markers.
2017-05-16 18:33:51 +01:00
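A minimal sketch of the waiting behaviour described in this commit message, assuming a plain context/timer and a hypothetical writeStaleMarkers callback (this is not the actual scrape-loop code):

```
package main

import (
	"context"
	"fmt"
	"time"
)

// reportStaleOnStop waits two scrape intervals so that a recreated target
// would already have been scraped and ingested, then writes stale markers.
// On shutdown the context is cancelled and we return immediately, accepting
// that some stale markers may be missing if the server never comes back.
func reportStaleOnStop(ctx context.Context, interval time.Duration, writeStaleMarkers func(t time.Time)) {
	select {
	case <-time.After(2 * interval):
		writeStaleMarkers(time.Now())
	case <-ctx.Done():
		return
	}
}

func main() {
	reportStaleOnStop(context.Background(), 50*time.Millisecond, func(t time.Time) {
		fmt.Println("writing stale markers at", t)
	})
}
```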
Brian Brazil 95162ebc16 Add log messages for out of order samples 2017-05-16 18:33:51 +01:00
Brian Brazil 3c45400130 Don't fail scrape if one sample violates ordering.
In Prometheus 1.x, a sample that is out of order
or that has a duplicate timestamp is discarded, and
the rest of the scrape's ingestion continues.
This will now also be true for 2.0.
2017-05-16 18:33:51 +01:00
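Roughly, the resulting control flow looks like the sketch below; the error values and the add callback are illustrative stand-ins, not the storage API:

```
package ingestsketch

import (
	"errors"
	"log"
)

var (
	errOutOfOrder = errors.New("out of order sample")
	errDuplicate  = errors.New("duplicate sample for timestamp")
)

type sample struct {
	t int64
	v float64
}

// ingest skips individual samples rejected as out of order or duplicate and
// keeps appending the rest, so one bad sample no longer fails the scrape.
func ingest(add func(sample) error, samples []sample) error {
	for _, s := range samples {
		err := add(s)
		switch {
		case err == nil:
		case errors.Is(err, errOutOfOrder), errors.Is(err, errDuplicate):
			log.Printf("skipping sample: %v", err) // counted and logged, not fatal
		default:
			return err // any other error still aborts ingestion
		}
	}
	return nil
}
```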
Brian Brazil fd5c5a50a3 Add stale markers on parse error.
If we fail to parse the result of a scrape,
we should treat that as a failed scrape and
add stale markers.
2017-05-16 18:33:51 +01:00
Brian Brazil c0c7e32e61 Treat a failed scrape as an empty scrape for staleness.
If a target has died but is still in SD, we want the previously
scraped values to go stale. This would also apply to brief blips.
2017-05-16 18:33:51 +01:00
Brian Brazil 850ea412ad If an explicit timestamp is provided, bypass staleness. 2017-05-16 18:33:51 +01:00
Brian Brazil a5cf25743c Move staleness check into a function 2017-05-16 18:33:51 +01:00
Brian Brazil 5060a0fc51 Add unit tests for ingestion of stale NaNs 2017-05-16 18:33:51 +01:00
Brian Brazil 4f35952cf3 Inject a stale NaN when sample disappears between scrapes. 2017-05-16 18:33:51 +01:00
Brian Brazil beaa7d5a43 Move consistent NaN logic into the parser. 2017-05-16 18:33:51 +01:00
Brian Brazil 76acf7b9b1 Ensure all the NaNs we ingest have the same bit pattern. 2017-05-16 18:33:51 +01:00
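Since NaN never compares equal to itself, the only reliable way to distinguish a special marker NaN (such as a stale marker) from an ordinary one is by its bit pattern, so every ordinary NaN is normalized to a single pattern. A small sketch of the idea; the constant below is illustrative, not necessarily the exact value used:

```
package main

import (
	"fmt"
	"math"
)

// normalNaN is one particular quiet-NaN bit pattern used for every ordinary
// NaN value, so marker NaNs with other payloads remain distinguishable.
const normalNaN uint64 = 0x7ff8000000000001

func normalize(v float64) float64 {
	if math.IsNaN(v) {
		return math.Float64frombits(normalNaN)
	}
	return v
}

func main() {
	a := normalize(math.NaN())
	b := normalize(math.Float64frombits(0x7ff8000000000123)) // a NaN with a different payload
	// a == b is still false (NaN never compares equal), but the bits match now.
	fmt.Println(math.Float64bits(a) == math.Float64bits(b)) // true
}
```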
Brian Brazil 0eabed8048 Remove unused metric 2017-05-15 15:06:54 +01:00
Fabian Reinartz 76b3378190 retrieval: add missing scrape context cancelation 2017-05-11 17:20:03 +02:00
Fabian Reinartz e829dbe2be retrieval: comment out accept header again 2017-04-27 11:46:08 +02:00
Fabian Reinartz 73b8ff0ddc Merge branch 'master' into dev-2.0 2017-04-27 10:19:55 +02:00
Matt Layher 5e4f5fb5ad retrieval: make scrape timeout header consistent with others 2017-04-05 14:56:22 -04:00
Alexey Palazhchenko 17f15d024a Small fixes. (#2578)
Fix typos. Simplify with gofmt -s
2017-04-05 14:24:22 +01:00
Matt Layher fe4b6693f7 retrieval: add Scrape-Timeout-Seconds header to each scrape request (#2565)
Fixes #2508.
2017-04-04 18:26:28 +01:00
Fabian Reinartz 8ffc851147 Merge branch 'master' into dev-2.0 2017-04-04 15:17:56 +02:00
Julius Volz 947c83be3b Sort targets by instance within a job
Fixes https://github.com/prometheus/prometheus/issues/2536
2017-03-31 13:14:20 +02:00
Julius Volz 815762a4ad Move retrieval.NewHTTPClient -> httputil.NewClientFromConfig 2017-03-20 14:17:04 +01:00
Fabian Reinartz c389193b37 Merge branch 'master' into dev-2.0 2017-03-17 16:27:07 +01:00
Fabian Reinartz 5ec1efe622 retrieval: fix test 2017-03-08 15:37:12 +01:00
Fabian Reinartz d9fb57cde4 *: Simplify []byte to string unsafe conversion 2017-03-07 11:41:11 +01:00
Fabian Reinartz 9304179ef7 Merge branch 'master' into dev-2.0 2017-03-02 08:16:58 +01:00
Erdem Agaoglu 8809735d7f Setting User-Agent header (#2447) 2017-02-28 09:59:33 -04:00
Fabian Reinartz cc0ff26f1f retrieval: handle GZIP compression ourselves
The automatic GZIP handling of net/http does not preserve
buffers across requests and thus generates a lot of garbage.
We handle GZIP ourselves to circumvent this.
2017-02-22 13:25:25 +01:00
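The idea can be sketched with the standard library alone (this is not the actual scrape client): setting Accept-Encoding ourselves disables net/http's transparent decompression, and a single gzip.Reader is then Reset and reused across scrapes instead of being allocated per request.

```
package gzipsketch

import (
	"compress/gzip"
	"io"
	"net/http"
)

type scraper struct {
	client *http.Client
	gzipr  *gzip.Reader // reused across scrapes to avoid per-request garbage
}

func (s *scraper) scrape(url string, w io.Writer) error {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return err
	}
	// Setting the header explicitly turns off net/http's automatic gzip
	// handling, so we control the reader and its buffers ourselves.
	req.Header.Set("Accept-Encoding", "gzip")

	resp, err := s.client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.Header.Get("Content-Encoding") != "gzip" {
		_, err = io.Copy(w, resp.Body)
		return err
	}
	if s.gzipr == nil {
		s.gzipr, err = gzip.NewReader(resp.Body)
	} else {
		err = s.gzipr.Reset(resp.Body)
	}
	if err != nil {
		return err
	}
	defer s.gzipr.Close()
	_, err = io.Copy(w, s.gzipr)
	return err
}
```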
Fabian Reinartz 311e7b5069 storage/vendor: update to latest fabxc/tsdb 2017-02-20 11:11:44 +01:00
Fabian Reinartz 5772f1a7ba retrieval/storage: adapt to new interface
This simplifies the interface to two add methods for
appends with labels or faster reference numbers.
2017-02-02 13:05:46 +01:00
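The shape of the interface this describes is roughly as follows; the names and signatures here are an illustrative sketch, not the exact storage API:

```
package appendersketch

// Label is a single name/value pair; a series is identified by its label set.
type Label struct{ Name, Value string }

// Appender offers a label-set path and a faster reference-based path.
type Appender interface {
	// Add appends a sample identified by its full label set and returns a
	// reference for later, cheaper appends to the same series.
	Add(lset []Label, t int64, v float64) (ref uint64, err error)
	// AddFast appends a sample via a previously returned reference,
	// skipping the label-set lookup.
	AddFast(ref uint64, t int64, v float64) error
}
```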
Brian Brazil 34767c2221 Clone lset before relabelling. (#2386)
We need to not change the lset passed into populateLabels, as that
is kept around by the SDs.

Fixes 2377
2017-02-01 19:49:50 +00:00
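The underlying issue is aliasing: relabelling mutates the label set it is given, while the SD code keeps a reference to that same data, so a copy has to be made first. A tiny sketch of the pattern, with hypothetical names (not the actual populateLabels code):

```
package relabelsketch

type Label struct{ Name, Value string }

// populateLabels works on a copy because the caller (service discovery)
// keeps the original lset around and must not see it mutated.
func populateLabels(lset []Label, relabel func([]Label) []Label) []Label {
	lcopy := make([]Label, len(lset))
	copy(lcopy, lset)
	return relabel(lcopy)
}
```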
Fabian Reinartz 1d3cdd0d67 Merge branch 'master' into dev-2.0-rebase 2017-01-30 17:43:01 +01:00
Fabian Reinartz 035976b275 retrieval: handle not found error correctly 2017-01-20 11:27:01 +01:00
Fabian Reinartz 598e2f01c0 retrieval: don't erroneously break appending 2017-01-17 08:39:18 +01:00
Fabian Reinartz c691895a0f retrieval: cache series references, use pkg/textparse
With this change the scraping caches series references and only
allocates label sets if it has to retrieve a new reference.
pkg/textparse is used to do the conditional parsing and reduce
allocations from 900B/sample to 0 in the general case.
2017-01-16 12:03:57 +01:00
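A rough sketch of the caching scheme: the cache is keyed on the raw series text from the exposition format, so in the steady state the fast-path append needs neither parsing nor label allocations. Types and names below are illustrative stand-ins, not the real implementation:

```
package refcachesketch

type labelSet map[string]string

// appender is a stand-in for the storage appender with a fast-path method.
type appender interface {
	Add(lset labelSet, t int64, v float64) (uint64, error)
	AddFast(ref uint64, t int64, v float64) error
}

type scrapeCache struct {
	refs map[string]uint64 // raw scraped series text -> storage reference
}

// add ingests one sample, parsing and allocating a label set only when the
// series has not been seen before.
func (c *scrapeCache) add(app appender, series []byte, parse func([]byte) labelSet, t int64, v float64) error {
	if ref, ok := c.refs[string(series)]; ok {
		return app.AddFast(ref, t, v) // hot path: no parsing, no allocations
	}
	ref, err := app.Add(parse(series), t, v)
	if err != nil {
		return err
	}
	c.refs[string(series)] = ref
	return nil
}
```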
Fabian Reinartz ad9bc62e4c storage: extend appender and adapt it 2017-01-13 14:48:01 +01:00
Fabian Reinartz 3302bb1eb1 Merge pull request #2323 from prometheus/beorn7/retrieval
Retrieval: Avoid copying Target
2017-01-08 06:49:47 +01:00
Björn Rabenstein ad40d0abbc Merge pull request #2288 from prometheus/limit-scrape
Add ability to limit scrape samples, and related metrics
2017-01-08 01:34:06 +01:00
beorn7 5dc01202d7 Retrieval: Remove some test lines that fail on Travis only
These lines exercise an append in
TestScrapeLoopWrapSampleAppender. Arguably, append shouldn't be tested
there in the first place.

Still, it's unclear why this fails on Travis:

```
--- FAIL: TestScrapeLoopWrapSampleAppender (0.00s)
    scrape_test.go:259: Expected count of 1, got 0
    scrape_test.go:290: Expected count of 1, got 0
2017/01/07 22:48:26 http: TLS handshake error from 127.0.0.1:50716: read tcp 127.0.0.1:40265->127.0.0.1:50716: read: connection reset by peer
FAIL
FAIL	github.com/prometheus/prometheus/retrieval	3.603s
```

Should anybody ever find out why, please revert this commit accordingly.
2017-01-08 00:01:46 +01:00
beorn7 3610331eeb Retrieval: Do not buffer the samples if no sample limit configured
Also, simplify and streamline the code a bit.
2017-01-07 18:18:54 +01:00
beorn7 767c0709b1 Retrieval: Avoid copying Target
retrieval.Target contains a mutex. It was copied in the Targets()
call. This can potentially wreak a lot of havoc.

It might even have caused the issues reported as #2266 and #2262.
2017-01-06 18:43:41 +01:00
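The underlying hazard is that copying a struct value containing a sync.Mutex copies the lock state as well, so the copies are no longer guarded by the original's lock (go vet's copylocks check flags exactly this). A minimal illustration of the safer shape, returning pointers rather than values (not the actual retrieval code):

```
package targetsketch

import "sync"

type Target struct {
	mtx    sync.RWMutex
	labels map[string]string
}

type pool struct {
	mtx     sync.RWMutex
	targets []*Target
}

// Targets returns a fresh slice, but its elements are shared pointers: no
// Target value (and therefore no mutex) is ever copied.
func (p *pool) Targets() []*Target {
	p.mtx.RLock()
	defer p.mtx.RUnlock()
	ts := make([]*Target, len(p.targets))
	copy(ts, p.targets)
	return ts
}
```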
Fabian Reinartz e631a1260d retrieval: use separate appender per target 2016-12-30 21:35:35 +01:00
Fabian Reinartz f8fc1f5bb2 *: migrate ingestion to new batch Appender 2016-12-29 11:03:56 +01:00
Brian Brazil 6c07453ec1 Only clone the metric in the one place relabelling needs it. (#2292)
This cuts ~17% off memory allocations related to ingesting data
in a basic setup.
2016-12-21 10:00:33 +00:00
Brian Brazil f421ce0636 Remove label from prometheus_target_skipped_scrapes_total (#2289)
This avoids it not being initialised, and breaking it out by
interval wasn't particularly useful.

Fixes #2269
2016-12-16 18:00:52 +00:00