Commit Graph

107 Commits

Julius Volz
099df0c5f0 Migrate "golang.org/x/net/context" -> "context" (#3333)
In some places, where ctxhttp or gRPC are concerned, we still need to use the
old contexts.
2017-10-24 21:21:42 -07:00
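For reference, the change is essentially an import swap to the standard library package; a minimal sketch (the helper name is hypothetical, not from the commit):

```go
package example

import (
	"context" // standard library package, Go 1.7+
	"time"
)

// withScrapeTimeout derives a per-scrape context from its parent. Call sites
// that go through ctxhttp or gRPC may still need golang.org/x/net/context,
// as the commit message notes.
func withScrapeTimeout(parent context.Context, timeout time.Duration) (context.Context, context.CancelFunc) {
	return context.WithTimeout(parent, timeout)
}
```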
Marc Sluiter
6a633eece1 Added go-conntrack for monitoring http connections (#3241)
Added metrics for incoming and outgoing traffic with go-conntrack.
2017-10-06 11:22:19 +01:00
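A rough sketch of how go-conntrack can instrument an outgoing HTTP client's dialer; the function name and option choices are illustrative assumptions, not the commit's exact wiring:

```go
package example

import (
	"net/http"

	conntrack "github.com/mwitkow/go-conntrack"
)

// newInstrumentedClient wraps an HTTP transport's dialer with go-conntrack so
// outgoing scrape connections are counted and traced as Prometheus metrics.
func newInstrumentedClient(name string) *http.Client {
	transport := &http.Transport{
		// conntrack registers dialer metrics labeled with the given name.
		DialContext: conntrack.NewDialContextFunc(
			conntrack.DialWithTracing(),
			conntrack.DialWithName(name),
		),
	}
	return &http.Client{Transport: transport}
}
```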
Tobias Schmidt
40c278ee2d Send an HTTP Accept header when scraping 2017-09-25 14:51:29 +02:00
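A minimal sketch of setting an Accept header on a scrape request; the media type and q-values shown are illustrative and may not match the header the commit actually sends:

```go
package example

import "net/http"

// newScrapeRequest builds a GET request with an Accept header asking for the
// Prometheus text exposition format.
func newScrapeRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "text/plain;version=0.0.4;q=0.5,*/*;q=0.1")
	return req, nil
}
```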
Fabian Reinartz
249d69b513 Merge pull request #3186 from prometheus/startweb
web: start web handler while TSDB is starting up
2017-09-21 09:53:03 +02:00
Fabian Reinartz
7b02bfee0a web: start web handler while TSDB is starting up 2017-09-20 15:03:19 +02:00
Fabian Reinartz
437f51a85f Fix cache maintenance on changing metric representations
We were not properly maintaining the scrape cache when the same metric
was exposed with a different string representation.
This change reduces the scrape cache's overall complexity, which fixes
the issue and saves about 10% of memory in a scraping-only Prometheus
instance.
2017-09-19 15:03:27 +02:00
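A simplified sketch of the shape of such a scrape cache, keyed by the exposed metric string and evicted per scrape iteration; names and fields are illustrative, not the actual implementation:

```go
package example

// cacheEntry is an illustrative stand-in for what the scrape cache keeps per
// exposed series: the storage reference and the last iteration it was seen in.
type cacheEntry struct {
	ref      uint64
	lastIter uint64
}

// scrapeCache maps the raw exposed metric string (e.g.
// `http_requests_total{code="200"}`) to its cache entry. If the same series
// is exposed with a different string representation, it must be re-added,
// which is the situation the fix above addresses.
type scrapeCache struct {
	iter    uint64
	entries map[string]*cacheEntry
}

func newScrapeCache() *scrapeCache {
	return &scrapeCache{entries: map[string]*cacheEntry{}}
}

// add remembers the storage reference for the raw metric string.
func (c *scrapeCache) add(met string, ref uint64) {
	c.entries[met] = &cacheEntry{ref: ref, lastIter: c.iter}
}

// get returns the cached entry and marks it as seen in this iteration.
func (c *scrapeCache) get(met string) (*cacheEntry, bool) {
	e, ok := c.entries[met]
	if ok {
		e.lastIter = c.iter
	}
	return e, ok
}

// iterDone evicts entries not seen in the just-finished scrape iteration.
func (c *scrapeCache) iterDone() {
	for met, e := range c.entries {
		if e.lastIter < c.iter {
			delete(c.entries, met)
		}
	}
	c.iter++
}
```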
Goutham Veeramachaneni
3f0267c548 Merge branch 'dev-2.0' into go-kit/log
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-09-15 23:15:27 +05:30
Fabian Reinartz
1121b9f7d4 retrieval: cache dropped series, mutate labels in place 2017-09-14 08:36:19 +02:00
Fabian Reinartz
d21f149745 *: migrate to go-kit/log 2017-09-08 22:01:51 +05:30
Fabian Reinartz
5bed8af4cb retrieval: pool scrape buffers
This adds a bucketed buffer pool to the scrapers so we don't have to
allocate a new buffer on each scrape or hold it fixed to the scrape
loop.

The latter can consume significant amounts of unused memory, e.g. 4GB
when scraping 2MB /metrics from 2000 targets.
2017-09-07 14:43:21 +02:00
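A minimal sketch of a bucketed buffer pool built on sync.Pool; bucket sizes and names are illustrative, not the pool added by the commit:

```go
package example

import "sync"

// bucketedPool groups buffers by power-of-two capacity so one huge scrape
// doesn't pin an equally huge buffer to every scrape loop.
type bucketedPool struct {
	buckets []sync.Pool // bucket i holds buffers of capacity minSize << i
	minSize int
	maxSize int
}

func newBucketedPool(minSize, maxSize int) *bucketedPool {
	p := &bucketedPool{minSize: minSize, maxSize: maxSize}
	for s := minSize; s <= maxSize; s <<= 1 {
		size := s
		p.buckets = append(p.buckets, sync.Pool{
			New: func() interface{} { return make([]byte, 0, size) },
		})
	}
	return p
}

// get returns a buffer with at least the requested capacity.
func (p *bucketedPool) get(size int) []byte {
	for i, s := 0, p.minSize; s <= p.maxSize; i, s = i+1, s<<1 {
		if size <= s {
			return p.buckets[i].Get().([]byte)
		}
	}
	return make([]byte, 0, size) // larger than the biggest bucket
}

// put returns a buffer to the bucket matching its capacity.
func (p *bucketedPool) put(b []byte) {
	c := cap(b)
	for i, s := 0, p.minSize; s <= p.maxSize; i, s = i+1, s<<1 {
		if c == s {
			p.buckets[i].Put(b[:0])
			return
		}
	}
}
```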
Fabian Reinartz
a8887f46dc Merge branch 'dev-2.0' of github.com:prometheus/prometheus into dev-2.0 2017-09-07 14:15:12 +02:00
Fabian Reinartz
0efecea6d4 Adapt storage APIs to uint64 references 2017-09-07 14:14:41 +02:00
Krasi Georgiev
153cb0cbe3 Scraping errors will show in the log when debug mode is enabled (#3135)
Signed-off-by: Krasi Georgiev <krasi.root@gmail.com>
2017-09-05 11:55:14 +01:00
Fabian Reinartz
9516d04472 util: Add idle timeout for scrape connections 2017-08-10 14:47:51 +02:00
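A sketch of the general idea, assuming the scrape client uses a plain http.Transport; the field choice and 5-minute value are assumptions, not taken from the commit:

```go
package example

import (
	"net/http"
	"time"
)

// newScrapeTransport gives scrape connections an idle timeout so keep-alive
// connections to targets are closed after a period of inactivity.
func newScrapeTransport() *http.Transport {
	return &http.Transport{
		MaxIdleConns:    100,
		IdleConnTimeout: 5 * time.Minute, // close idle keep-alive connections
	}
}
```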
Fabian Reinartz
4d3d8ee229 Merge pull request #2850 from tomwilkie/dev-2.0-remote
Remote APIs for v2
2017-08-03 13:39:09 +02:00
Edward Marshall
c490725ac9 Additional targetScrapeSample metrics (#3018) 2017-08-02 13:10:18 +01:00
Tom Wilkie
1f3b59ccf5 s/met/lset/ 2017-07-18 11:42:29 +01:00
Tom Wilkie
014bd31a86 Remove unnecessary whitespace changes, add comment. 2017-07-13 11:26:46 +01:00
Tom Wilkie
2ac1809a5b Get label set from cache in addReportSample. 2017-07-12 22:09:16 +01:00
Tom Wilkie
240feb313b Don't regenerate label set for cached values. 2017-07-12 15:54:38 +01:00
Tom Wilkie
db8128ceeb Add label set as first parameter to AddFast, ignored by TSDB adapter. 2017-07-12 15:20:12 +01:00
Goutham Veeramachaneni
243419c007 Return tsdb.ErrOutOfBounds as storage.ErrOutOfBounds
Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-07-06 14:18:31 +02:00
Goutham Veeramachaneni
643c5837a0 Stop metrics that are 10 minutes ahead of now
Fixes #2893

Signed-off-by: Goutham Veeramachaneni <goutham@boomerangcommerce.com>
2017-07-04 15:34:08 +02:00
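A minimal sketch of the 10-minute bound described above; the helper and error names are hypothetical:

```go
package example

import (
	"errors"
	"time"
)

// errTooFarInFuture is an illustrative error; the real code surfaces such
// samples as the storage's out-of-bounds error.
var errTooFarInFuture = errors.New("sample timestamp too far in the future")

// checkTimestamp rejects samples whose timestamp (in milliseconds) is more
// than 10 minutes ahead of the local clock.
func checkTimestamp(tMillis int64) error {
	limit := time.Now().Add(10*time.Minute).UnixNano() / int64(time.Millisecond)
	if tMillis > limit {
		return errTooFarInFuture
	}
	return nil
}
```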
Goutham Veeramachaneni
3069bd3996 Handle scrapes with OutOfBounds metrics better
Fixes #2894

Signed-off-by: Goutham Veeramachaneni <goutham@boomerangcommerce.com>
2017-07-04 11:24:13 +02:00
Fabian Reinartz
9ea748e745 Don't reallocate label set if still known
If the storage deprecates a ref, we have to re-insert with the full
label set. Typically that doesn't correlate with a new series being
created.
We can still use the allocated label set from before.
2017-06-26 14:38:57 +02:00
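A sketch of the pattern: try the fast path with the cached reference, and on an unknown-reference error re-add the sample using the label set that is already cached. The interface and names are illustrative, not the actual scrape-loop code:

```go
package example

import "errors"

// errNotFound stands in for the storage's "unknown series reference" error.
var errNotFound = errors.New("unknown series reference")

type label struct{ Name, Value string }

// appender is a trimmed-down sketch of the appender shape described in the
// commits above: AddFast takes the cached label set alongside the reference.
type appender interface {
	Add(lset []label, t int64, v float64) (uint64, error)
	AddFast(lset []label, ref uint64, t int64, v float64) error
}

// addCached tries the fast path first; if the storage no longer knows the
// reference, it re-adds the sample with the cached label set instead of
// rebuilding it from the scraped text.
func addCached(app appender, lset []label, ref uint64, t int64, v float64) (uint64, error) {
	err := app.AddFast(lset, ref, t, v)
	if err == nil {
		return ref, nil
	}
	if err != errNotFound {
		return 0, err
	}
	return app.Add(lset, t, v)
}
```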
Fabian Reinartz
2368d2c45b retrieval: fix memory leak in scrape cache 2017-06-26 00:24:54 +02:00
Fabian Reinartz
98c2d8477a Merge pull request #2844 from Gouthamve/cobra
Move CLI commander to cobra
2017-06-19 11:59:52 +02:00
Goutham Veeramachaneni
507790a357 Rework logging to use explicitly passed logger
Mostly cleaned up the global logger use. Still some uses in discovery
package.

Signed-off-by: Goutham Veeramachaneni <cs14btech11014@iith.ac.in>
2017-06-16 15:52:44 +05:30
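A minimal sketch of the explicitly-passed go-kit logger pattern; the types and field names are illustrative:

```go
package example

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

// scrapeLoop is a stub showing the pattern: the logger is a field passed in
// by the caller instead of a package-level global.
type scrapeLoop struct {
	l log.Logger
}

func newScrapeLoop(l log.Logger) *scrapeLoop {
	if l == nil {
		l = log.NewNopLogger()
	}
	return &scrapeLoop{l: l}
}

func (sl *scrapeLoop) run() {
	level.Debug(sl.l).Log("msg", "scrape loop started")
}

func main() {
	base := log.NewLogfmtLogger(os.Stderr)
	// Callers attach context (e.g. the target) and hand the logger down.
	newScrapeLoop(log.With(base, "component", "scrape", "target", "example")).run()
}
```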
Julius Volz
6f66125809 retrieval: Fix "up" reporting for failed scrapes 2017-06-14 22:22:12 -04:00
Fabian Reinartz
669075c6b9 Merge branch 'master' into dev-2.0 2017-06-06 09:36:51 +02:00
Fabian Reinartz
eb651233ac Merge pull request #2787 from prometheus/limit2
Rework sample limit to work for 2.0
2017-06-06 08:21:12 +02:00
Brian Brazil
37bc607e96 Rework sample limit to work for 2.0
Correctly update reported series.
Increment prometheus_target_scrapes_exceeded_sample_limit_total.
Add back unittests.
Ignore stale markers when calculating sample limit.

Fixes #2770
2017-05-31 15:41:51 +01:00
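A sketch of the sample-limit idea as a wrapping appender; names, signatures, and the wrapping approach are illustrative assumptions, not the commit's exact code:

```go
package example

import "errors"

var errSampleLimit = errors.New("sample limit exceeded")

// appender is the minimal surface needed for this sketch.
type appender interface {
	Add(t int64, v float64) error
}

// limitAppender rejects samples once the per-scrape limit is crossed, so the
// scrape loop can increment
// prometheus_target_scrapes_exceeded_sample_limit_total and mark the scrape
// failed. Stale markers would be appended through the unwrapped appender and
// therefore not count toward the limit.
type limitAppender struct {
	appender
	limit int
	i     int
}

func (a *limitAppender) Add(t int64, v float64) error {
	a.i++
	if a.limit > 0 && a.i > a.limit {
		return errSampleLimit
	}
	return a.appender.Add(t, v)
}
```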
Fabian Reinartz
bc7aff8cef retrieval: extract scrape cache 2017-05-30 09:37:23 -07:00
Fabian Reinartz
a83014f53c retrieval: fix memory leak and consumption for caches 2017-05-26 08:44:24 +02:00
Fabian Reinartz
3d8661b8d5 Add comment 2017-05-24 17:05:42 +02:00
Fabian Reinartz
43ca652217 retrieval: Don't allocate map on every scrape 2017-05-24 16:23:48 +02:00
Fabian Reinartz
d3f662f15e Merge branch 'dev-2.0' into grobie/reduce-noisy-append-errors 2017-05-24 15:29:30 +02:00
Fabian Reinartz
d289dc55c3 storage: update TSDB 2017-05-22 11:53:08 +02:00
Brian Brazil
0920972f79 Initilise scraped sample map, and rename to series map. 2017-05-16 18:33:51 +01:00
Brian Brazil
bf38963118 Plumb through logger with target field to scrape loop. 2017-05-16 18:33:51 +01:00
Brian Brazil
d657d722dc Log count of duplicate/out-of-order samples as warnings.
Keep the log line for each individual sample at debug level.
2017-05-16 18:33:51 +01:00
Brian Brazil
8b9d3e7547 Put end-of-run staleness handler in a separate function.
Improve log message.
2017-05-16 18:33:51 +01:00
Brian Brazil
d532272520 Add stale markers to synthetic series too when a target stops. 2017-05-16 18:33:51 +01:00
Brian Brazil
b87d3ca9ea Create stale markers when a target is stopped.
When a target is no longer returned from SD, stop() is called. However,
it may be recreated before the next scrape interval happens, so we wait
to set stale markers until the scrape of the new target would have
happened and been ingested, which is two scrape intervals.

If we're shutting down, the context will be cancelled, so we return
immediately rather than holding things up for potentially minutes while
waiting to safely set stale markers no newer than now.
If the server starts right back up again, all is well.
If not, we're missing some stale markers.
2017-05-16 18:33:51 +01:00
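A sketch of the delay-then-mark behaviour described above, with shutdown handled via context cancellation; the callback and function names are hypothetical:

```go
package example

import (
	"context"
	"time"
)

// stopAndMarkStale waits roughly two scrape intervals after a target is
// removed before writing stale markers, so an immediately re-created target
// can take over the series first. If the context is cancelled (shutdown),
// it returns without writing markers.
func stopAndMarkStale(ctx context.Context, interval time.Duration, writeStaleMarkers func(ts time.Time)) {
	select {
	case <-time.After(2 * interval):
		// The replacement target (if any) has had a chance to scrape and
		// ingest, so the markers we write are no newer than "now".
		writeStaleMarkers(time.Now())
	case <-ctx.Done():
		// Shutting down: don't block for up to two intervals; accept that
		// stale markers may be missing if the server doesn't come back.
	}
}
```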
Brian Brazil
95162ebc16 Add log messages for out of order samples 2017-05-16 18:33:51 +01:00
Brian Brazil
3c45400130 Don't fail scrape if one sample violates ordering.
In Prometheus 1.x, a sample that is out of order or has a duplicate
timestamp is discarded, and the rest of the scrape continues to be
ingested. This is now also true for 2.0.
2017-05-16 18:33:51 +01:00
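A sketch of this tolerance in the append path; the error values and callbacks are stand-ins, not the real storage errors:

```go
package example

import "errors"

// Stand-ins for the storage errors the scrape loop inspects.
var (
	errOutOfOrderSample     = errors.New("out of order sample")
	errDuplicateSampleForTS = errors.New("duplicate sample for timestamp")
)

// appendSample counts and skips an out-of-order or duplicate-timestamp
// sample, while any other error aborts the scrape.
func appendSample(add func() error, countDropped func(reason string)) error {
	switch err := add(); err {
	case nil:
		return nil
	case errOutOfOrderSample:
		countDropped("out_of_order")
		return nil // skip this sample, keep ingesting the rest
	case errDuplicateSampleForTS:
		countDropped("duplicate_timestamp")
		return nil
	default:
		return err // a real failure aborts the scrape
	}
}
```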
Brian Brazil
fd5c5a50a3 Add stale markers on parse error.
If we fail to parse the result of a scrape,
we should treat that as a failed scrape and
add stale markers.
2017-05-16 18:33:51 +01:00
Brian Brazil
c0c7e32e61 Treat a failed scrape as an empty scrape for staleness.
If a target has died but is still in SD, we want the previously
scraped values to go stale. This would also apply to brief blips.
2017-05-16 18:33:51 +01:00
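A sketch of marking previously scraped series stale by appending a staleness NaN for each of them; the constant and callback here are illustrative, the real marker lives in Prometheus's value package:

```go
package example

import "math"

// staleNaN mirrors the idea of Prometheus's staleness marker: a NaN with a
// fixed bit pattern that storage and query engine recognize. The exact bit
// pattern here is for illustration only.
var staleNaN = math.Float64frombits(0x7ff0000000000002)

// markSeriesStale appends a stale marker for every series seen in the last
// successful scrape, which is how a failed (or empty) scrape makes the
// previously scraped values go stale.
func markSeriesStale(seen map[string]uint64, ts int64, appendSample func(ref uint64, t int64, v float64) error) {
	for _, ref := range seen {
		// Best effort; errors such as out-of-order are ignored here.
		_ = appendSample(ref, ts, staleNaN)
	}
}
```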
Brian Brazil
850ea412ad If an explicit timestamp is provided, bypass staleness. 2017-05-16 18:33:51 +01:00
Brian Brazil
a5cf25743c Move staleness check into a function 2017-05-16 18:33:51 +01:00