Commit Graph

2799 Commits

Author SHA1 Message Date
Tobias Schmidt
d7889e61bb Detect code style violations in deeply nested files
So far the style check did not recognize issues in files in deeply
nested directories, e.g. retrieval/discovery/kubernetes/discovery.go.
2016-03-03 02:21:16 -05:00
Patrick Bogen
2062fbae0f rewrite operator balancing to be recursive 2016-03-02 15:56:40 -08:00
beorn7
0ea5801e47 Handle errors caused by data corruption more gracefully
This requires all the panic calls upon unexpected data to be converted
into errors returned. This pollutes the function signatures quite a
lot. Well, this is Go...

The ideas behind this are the following:

- panic only if it's a programming error. Data corruptions happen, and
  they are not programming errors.

- If we detect a data corruption, we "quarantine" the series,
  essentially removing it from the database and putting its data into
  a separate directory for forensics.

- Failure during writing to a series file is not considered corruption
  automatically. It will call setDirty, though, so that a
  crash recovery upon the next restart will commence and check for
  that.

- Series quarantining and setDirty calls are logged and counted in
  metrics, but are hidden from the user of the interfaces in
  interface.go, with the notable exception of Append(). The reasoning
  is that we treat corruption by removing the corrupted series, i.e. a
  query for it will return no results on its next call anyway, so
  return no results right now. In the case of Append(), we want to
  tell the user that no data has been appended, though.

Minor side effects:

- Now consistently using filepath.* instead of path.*.

- Introduced structured logging where I touched it. This makes things
  less consistent, but a complete change to structured logging would
  be out of scope for this PR.
2016-03-02 23:02:34 +01:00
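
A minimal sketch of the quarantine and setDirty behavior described above. The storage type and the quarantineSeries method are illustrative placeholders, not the actual local storage package; only the general idea (remove the series, move its data aside for forensics, mark dirty on write failures) is taken from the commit message.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // storage is an illustrative stand-in for the local storage engine.
    type storage struct {
        basePath string
        series   map[string]string // fingerprint -> series file path
        dirty    bool
    }

    // quarantineSeries removes a corrupted series from the database and moves
    // its data into a separate directory for later forensics.
    func (s *storage) quarantineSeries(fp string, reason error) error {
        src, ok := s.series[fp]
        if !ok {
            return fmt.Errorf("unknown series %q", fp)
        }
        delete(s.series, fp) // queries will simply return no results from now on
        dst := filepath.Join(s.basePath, "orphaned", filepath.Base(src))
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            return err
        }
        fmt.Printf("quarantining series %s: %v\n", fp, reason)
        return os.Rename(src, dst)
    }

    // setDirty marks the storage so that crash recovery runs on the next restart.
    // A failed write to a series file is not treated as corruption by itself.
    func (s *storage) setDirty(reason error) {
        s.dirty = true
        fmt.Printf("storage marked dirty: %v\n", reason)
    }

    func main() {
        s := &storage{basePath: "/tmp/metrics", series: map[string]string{}}
        s.setDirty(fmt.Errorf("write to series file failed"))
    }
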
beorn7
8766f99085 Merge branch 'beorn7/storage2' into beorn7/storage3 2016-03-02 23:02:06 +01:00
beorn7
162f6fa6f6 Merge branch 'beorn7/storage' into beorn7/storage2 2016-03-02 23:01:26 +01:00
beorn7
79a2ae2d2e Add missing test file 2016-03-02 23:00:23 +01:00
Fabian Reinartz
7a0c0c3ca2 Remove noise from CHANGELOG 2016-03-02 17:59:23 +01:00
Fabian Reinartz
1e7ce3ffdb Bump version to 0.17.0 2016-03-02 17:59:10 +01:00
Fabian Reinartz
2bfb86d77c Update changelog for 0.17.0 release 2016-03-02 17:58:55 +01:00
beorn7
b6840997a7 Merge branch 'beorn7/storage2' into beorn7/storage3 2016-03-02 16:11:25 +01:00
beorn7
ce58fd357b Merge branch 'beorn7/storage' into beorn7/storage2
Conflicts:
	storage/local/chunk.go
	storage/local/interface.go
2016-03-02 16:09:32 +01:00
beorn7
2581648f70 Separate iterators by offset
Add test that exposes the problem.
2016-03-02 16:01:03 +01:00
Fabian Reinartz
6adf77e411 Merge pull request #1447 from prometheus/fabxc/alertfix
Make copying alerting state safer.
2016-03-02 12:25:19 +01:00
Fabian Reinartz
d89c254849 Make copying alerting state safer.
This considers static labels in the equality of alerts to
avoid falsely copying state from a different alert definition with
the same name across reloads.

To be safe, it also copies the state map rather than just its pointer
so that remaining collisions disappear after one evaluation interval.
2016-03-02 12:21:54 +01:00
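
An illustrative sketch of the safer state copy: alert state is keyed by both the rule name and its static labels, and the state map itself is copied rather than shared. The types below are simplified placeholders, not the actual rules package.

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    type alertState struct{ active bool }

    type alertingRule struct {
        name   string
        labels map[string]string
        state  map[string]*alertState // keyed by element fingerprint
    }

    // identity includes the static labels, so two rules with the same name but
    // different labels are not considered equal across a configuration reload.
    func (r *alertingRule) identity() string {
        keys := make([]string, 0, len(r.labels))
        for k := range r.labels {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := []string{r.name}
        for _, k := range keys {
            parts = append(parts, k+"="+r.labels[k])
        }
        return strings.Join(parts, ",")
    }

    // copyState transfers state from an old rule only if the identities match,
    // and copies the map rather than sharing its pointer.
    func (r *alertingRule) copyState(old *alertingRule) {
        if r.identity() != old.identity() {
            return
        }
        r.state = make(map[string]*alertState, len(old.state))
        for fp, st := range old.state {
            copied := *st
            r.state[fp] = &copied
        }
    }

    func main() {
        old := &alertingRule{name: "HighLatency", labels: map[string]string{"severity": "page"},
            state: map[string]*alertState{"fp1": {active: true}}}
        cur := &alertingRule{name: "HighLatency", labels: map[string]string{"severity": "page"}}
        cur.copyState(old)
        fmt.Println(cur.state["fp1"].active) // true
    }
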
Fabian Reinartz
95c9706d2d Fix missing comment period. 2016-03-02 09:16:56 +01:00
Fabian Reinartz
ddc74f712b Add sortable target list 2016-03-02 09:10:20 +01:00
Julius Volz
9ea2465b99 Fix typo in lexer test. 2016-03-02 01:13:27 +01:00
Brian Brazil
ca31d36382 Merge pull request #1444 from prometheus/add-tests-for-string-parsing
Add tests to specify the string escaping behavior
2016-03-01 22:31:08 +00:00
Tobias Schmidt
907b1380a7 Add tests to specify the string escaping behavior 2016-03-01 17:23:18 -05:00
Fabian Reinartz
5b78fdd6b7 Merge pull request #1439 from prometheus/fabxc/notifier
Rename notification to notifier
2016-03-01 20:14:52 +01:00
Fabian Reinartz
2c931950c6 Merge pull request #1441 from prometheus/scraperef9
Next iteration of retrieval refactoring
2016-03-01 17:16:43 +01:00
Fabian Reinartz
499f4af4aa Test target URL 2016-03-01 14:49:57 +01:00
Fabian Reinartz
50c2f20756 Add targetScraper tests 2016-03-01 14:33:28 +01:00
Fabian Reinartz
1ede7b9d72 Consolidate TargetStatus into Target.
This commit simplifies the TargetHealth type and moves the target
status into the target itself. This also removes a race where error
and last scrape time could have been out of sync.
2016-03-01 14:33:21 +01:00
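
A rough sketch of the consolidation: health, last error, and last scrape time live on the target and are updated under a single lock, so they can no longer drift apart. Field and type names are illustrative, not the exact retrieval package.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type TargetHealth int

    const (
        HealthUnknown TargetHealth = iota
        HealthGood
        HealthBad
    )

    type Target struct {
        mtx        sync.RWMutex
        health     TargetHealth
        lastError  error
        lastScrape time.Time
    }

    // report updates all status fields atomically under one lock, so the error
    // and the last scrape time cannot get out of sync.
    func (t *Target) report(start time.Time, err error) {
        t.mtx.Lock()
        defer t.mtx.Unlock()
        t.lastScrape = start
        t.lastError = err
        if err == nil {
            t.health = HealthGood
        } else {
            t.health = HealthBad
        }
    }

    func main() {
        t := &Target{}
        t.report(time.Now(), nil)
        fmt.Println(t.health == HealthGood)
    }
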
Fabian Reinartz
2060a0a15b Turn target group members into plain lists.
As the scrape pool deduplicates targets now, it is no longer necessary
to store a hash map for members of each group.
2016-03-01 14:33:12 +01:00
Fabian Reinartz
0d7105abee Remove scrape config from Target.
This commit removes the scrapeConfig entirely from Target.
All identity-defining parameters are thus immutable now, and the mutex
can be removed.

Target identity is now correctly defined by the labels and the full URL.
This in particular includes URL parameters that are not specified in the
label set.

The fingerprint is also removed from the hash to avoid an unnecessarily
tight coupling to the common/model package.
2016-03-01 14:32:57 +01:00
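
A sketch of how such an identity hash might look: the labels and the full scrape URL (including parameters not in the label set) are fed into a plain FNV hash, without any dependency on the common/model fingerprint. The names are illustrative.

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
    )

    type Target struct {
        labels map[string]string
        url    string // full URL, including parameters not present in the label set
    }

    // hash derives the target's identity from its labels and full URL using a
    // plain FNV-1a hash instead of the common/model fingerprint.
    func (t *Target) hash() uint64 {
        h := fnv.New64a()
        keys := make([]string, 0, len(t.labels))
        for k := range t.labels {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        for _, k := range keys {
            h.Write([]byte(k))
            h.Write([]byte(t.labels[k]))
        }
        h.Write([]byte(t.url))
        return h.Sum64()
    }

    func main() {
        t := &Target{labels: map[string]string{"job": "node"}, url: "http://10.0.0.1:9100/metrics?module=cpu"}
        fmt.Println(t.hash())
    }
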
Fabian Reinartz
75681b691a Extract HTTP client from Target.
The HTTP client is the same across all targets with the same
scrape configuration. Thus, this commit moves it into the scrape
pool.
2016-03-01 14:31:57 +01:00
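
Illustrative only: since the HTTP client depends solely on the scrape configuration, one client per pool suffices. The struct below is a simplified stand-in for the real scrapePool.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    type scrapeConfig struct{ timeout time.Duration }

    type Target struct{ url string }

    type scrapePool struct {
        client  *http.Client // shared by all targets with the same scrape config
        targets []*Target
    }

    func newScrapePool(cfg scrapeConfig) *scrapePool {
        return &scrapePool{client: &http.Client{Timeout: cfg.timeout}}
    }

    func main() {
        sp := newScrapePool(scrapeConfig{timeout: 10 * time.Second})
        fmt.Println(sp.client.Timeout)
    }
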
Fabian Reinartz
cf56e33030 Merge pull request #1429 from prometheus/scraperef7
Next iteration of retrieval refactoring
2016-03-01 14:04:27 +01:00
Fabian Reinartz
9bea27ae8a Add scraping tests 2016-03-01 14:00:48 +01:00
Fabian Reinartz
76a8c6160d Deduplicate targets in scrape pool.
With this commit the scrape pool deduplicates incoming
targets before scraping them. This way multiple target providers
can produce the same target but it will be scraped only once.
2016-03-01 13:50:51 +01:00
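
A sketch of the deduplication, assuming a hash-based target identity like the one above: incoming targets from all providers are merged into a map keyed by that hash, so a target appearing in several providers is scraped only once. Names are simplified placeholders.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    type Target struct{ url string }

    func (t *Target) hash() uint64 {
        h := fnv.New64a()
        h.Write([]byte(t.url))
        return h.Sum64()
    }

    type scrapePool struct {
        targets map[uint64]*Target
    }

    // sync merges targets from all providers; duplicates collapse onto the same
    // hash key and are therefore scraped only once.
    func (sp *scrapePool) sync(incoming []*Target) {
        for _, t := range incoming {
            sp.targets[t.hash()] = t
        }
    }

    func main() {
        sp := &scrapePool{targets: map[uint64]*Target{}}
        a := &Target{url: "http://10.0.0.1:9100/metrics"}
        b := &Target{url: "http://10.0.0.1:9100/metrics"} // same target from another provider
        sp.sync([]*Target{a, b})
        fmt.Println(len(sp.targets)) // 1
    }
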
Fabian Reinartz
84f74b9a84 Apply new scrape config on reload.
This commit updates a target set's scrape configuration
on reload. This will cause all running scrape loops to be
stopped and started again with new parameters.
2016-03-01 13:50:51 +01:00
Fabian Reinartz
02f635dc24 Remove interval/timeout from Target internals 2016-03-01 13:50:51 +01:00
Fabian Reinartz
775316f8d2 Move appender construction from Target to scrapePool 2016-03-01 13:50:51 +01:00
Fabian Reinartz
fbe251c2df Fix scrape interval length calculation 2016-03-01 13:48:36 +01:00
Fabian Reinartz
1a3253e8ed Make scrape time unambiguous.
This commit changes the scraper interface to accept a timestamp
so that the timestamp reported by the caller and the timestamp
attached to samples do not differ.
2016-03-01 13:48:36 +01:00
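
A minimal sketch of that interface change: the caller picks one timestamp and passes it into the scrape, so the time attached to samples is exactly the one the caller reports. The interface and type names below are illustrative.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    type sample struct {
        value float64
        ts    time.Time
    }

    // scraper receives the timestamp from the caller instead of reading the
    // clock itself, so reported and attached timestamps cannot diverge.
    type scraper interface {
        scrape(ctx context.Context, ts time.Time) ([]sample, error)
    }

    type fakeScraper struct{}

    func (fakeScraper) scrape(_ context.Context, ts time.Time) ([]sample, error) {
        return []sample{{value: 1, ts: ts}}, nil
    }

    func main() {
        start := time.Now()
        samples, _ := fakeScraper{}.scrape(context.Background(), start)
        fmt.Println(samples[0].ts.Equal(start)) // true
    }
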
Fabian Reinartz
2bb8ef99d1 Test scrape loop behavior. 2016-03-01 13:48:36 +01:00
Fabian Reinartz
c7bbe95597 Remove outdated target tests 2016-03-01 13:48:36 +01:00
Fabian Reinartz
05de8b7f8d Extract target scraping into scrape loop.
This commit factors out the scrape loop handling into
its own data structure.
For the transition it will be directly attached to the
target.
2016-03-01 13:48:36 +01:00
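
A rough sketch of a scrape loop factored into its own type: it owns its ticker and stop channel, independent of the target it scrapes. Everything here is a simplified placeholder for the real scrapeLoop.

    package main

    import (
        "fmt"
        "time"
    )

    type scrapeLoop struct {
        interval time.Duration
        scrape   func(time.Time) // the actual scrape, injected by the owner
        stop     chan struct{}
        done     chan struct{}
    }

    func newScrapeLoop(interval time.Duration, scrape func(time.Time)) *scrapeLoop {
        return &scrapeLoop{interval: interval, scrape: scrape,
            stop: make(chan struct{}), done: make(chan struct{})}
    }

    // run drives scraping on a ticker until the loop is stopped.
    func (sl *scrapeLoop) run() {
        defer close(sl.done)
        ticker := time.NewTicker(sl.interval)
        defer ticker.Stop()
        for {
            select {
            case t := <-ticker.C:
                sl.scrape(t)
            case <-sl.stop:
                return
            }
        }
    }

    // stopAndWait signals the loop to stop and waits until it has finished.
    func (sl *scrapeLoop) stopAndWait() {
        close(sl.stop)
        <-sl.done
    }

    func main() {
        sl := newScrapeLoop(10*time.Millisecond, func(t time.Time) { fmt.Println("scrape at", t) })
        go sl.run()
        time.Sleep(25 * time.Millisecond)
        sl.stopAndWait()
    }
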
Fabian Reinartz
cebba3efbb Simplify and fix TargetManager reloading 2016-03-01 13:48:36 +01:00
Fabian Reinartz
da99366f85 Consolidate Target.Update into constructor.
The Target.Update method is no longer needed.
2016-03-01 13:48:36 +01:00
Fabian Reinartz
d15adfc917 Preserve target state across reloads.
This commit moves Scraper handling into a separate scrapePool type.
TargetSets only manage TargetProvider lifecycles and sync the
retrieved updates to the scrapePool.

TargetProviders are now expected to send a full initial target set
within 5 seconds. The scrapePools preserve target state across reloads
and only drop targets after the initial set was synced.
2016-03-01 13:48:36 +01:00
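
A sketch of the preserve-then-drop idea: on reload the pool keeps existing targets until the full initial set from the new providers has arrived, then removes anything no longer present. The names and the initial-set signalling are simplified assumptions.

    package main

    import "fmt"

    type Target struct{ url string }

    type scrapePool struct {
        targets map[string]*Target
        synced  bool // set once the providers have delivered their full initial set
    }

    // sync applies a full target list; stale targets are only dropped after the
    // initial set has been received, so state survives a configuration reload.
    func (sp *scrapePool) sync(full []*Target, initialSetComplete bool) {
        seen := map[string]bool{}
        for _, t := range full {
            sp.targets[t.url] = t // existing entries (and their state) are kept
            seen[t.url] = true
        }
        if initialSetComplete {
            sp.synced = true
        }
        if !sp.synced {
            return
        }
        for url := range sp.targets {
            if !seen[url] {
                delete(sp.targets, url)
            }
        }
    }

    func main() {
        sp := &scrapePool{targets: map[string]*Target{"http://old:9100/metrics": {}}}
        sp.sync([]*Target{{url: "http://new:9100/metrics"}}, false)
        fmt.Println(len(sp.targets)) // 2: old target kept until the initial set is synced
        sp.sync([]*Target{{url: "http://new:9100/metrics"}}, true)
        fmt.Println(len(sp.targets)) // 1
    }
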
Fabian Reinartz
5b30bdb610 Change TargetProvider interface.
This commit changes the TargetProvider interface to use a
context.Context and send lists of TargetGroups, rather than
single ones.
2016-03-01 13:48:36 +01:00
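
An illustrative rendering of the new interface shape: providers run until the context is cancelled and always send complete lists of target groups rather than single ones. The types below are simplified, not the actual retrieval/config definitions.

    package main

    import (
        "context"
        "fmt"
    )

    type TargetGroup struct {
        Source  string
        Targets []string
    }

    // TargetProvider sends full lists of target groups and stops when the
    // passed context is cancelled.
    type TargetProvider interface {
        Run(ctx context.Context, up chan<- []*TargetGroup)
    }

    type staticProvider struct{ groups []*TargetGroup }

    func (p staticProvider) Run(ctx context.Context, up chan<- []*TargetGroup) {
        select {
        case up <- p.groups:
        case <-ctx.Done():
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        up := make(chan []*TargetGroup, 1)
        staticProvider{groups: []*TargetGroup{{Source: "static/0", Targets: []string{"10.0.0.1:9100"}}}}.Run(ctx, up)
        fmt.Println(len(<-up)) // 1
    }
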
Fabian Reinartz
bb6dc3ff78 Remove old tests 2016-03-01 13:48:36 +01:00
Fabian Reinartz
5bfa4cdd46 Simplify target update handling.
We group providers by their scrape configuration. Each provider produces
target groups with a unique identifier.

On stopping a set of target providers we cancel the target providers,
stop scraping the targets and wait for the scrapers to finish.

On configuration reload all provider sets are stopped and new ones
are created. This will make targets disappear briefly on configuration
reload. Scrapes are potentially missed, but due to the consistent
scrape intervals implemented recently, the impact is minor.
2016-03-01 13:48:36 +01:00
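
A sketch of the stop semantics described above: stopping a provider set cancels the providers, then waits for the running scrapers to finish before new sets are created. Everything here is a simplified placeholder.

    package main

    import (
        "context"
        "fmt"
        "sync"
        "time"
    )

    type providerSet struct {
        cancel context.CancelFunc
        wg     sync.WaitGroup // tracks running scrapers
    }

    // stop cancels the target providers and waits for all scrapers to finish.
    func (ps *providerSet) stop() {
        ps.cancel()
        ps.wg.Wait()
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        ps := &providerSet{cancel: cancel}
        ps.wg.Add(1)
        go func() { // a stand-in for a running scraper
            defer ps.wg.Done()
            <-ctx.Done()
            time.Sleep(10 * time.Millisecond) // finish the in-flight scrape
        }()
        ps.stop() // on reload: stop the old set before creating new ones
        fmt.Println("all scrapers stopped")
    }
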
Brian Brazil
671cc59de7 Merge pull request #1440 from fabric8io/kubernetes-discovery
Kubernetes SD: Fix node IP discovery
2016-03-01 12:27:48 +00:00
Jimmi Dyson
e59b7c15a3 Kubernetes SD: Fix node IP discovery 2016-03-01 12:24:52 +00:00
Fabian Reinartz
bfa8aaa017 Rename notification to notifier 2016-03-01 12:39:08 +01:00
Fabian Reinartz
42a64a7d0b Merge pull request #1434 from igncp/master
Fix function names in comments
2016-03-01 12:32:15 +01:00
Ignacio Carbajo
1b3ea0ea1b Fix function names in comments 2016-02-29 21:58:32 +00:00
Björn Rabenstein
e4d0ae9b4e Merge pull request #1432 from prometheus/beorn7/fix-deadlock
Fix a deadlock
2016-02-29 16:46:33 +01:00