Commit Graph

744 Commits

Author SHA1 Message Date
Julius Volz
02395a224d [WIP] Remote Read 2017-03-20 13:13:44 +01:00
Julius Volz
40e41a4776 Merge pull request #2494 from tomwilkie/remote-write-sharding
Dynamically reshard the QueueManager based on observed load.
2017-03-20 12:45:17 +01:00
beorn7
48d221c11e storage: Fix typo in comment 2017-03-16 11:49:41 +01:00
Tom Wilkie
75bb0f3253 Review feedback 2017-03-13 21:24:49 +00:00
Tom Wilkie
77cce900b8 Fix tests 2017-03-13 15:21:59 +00:00
Tom Wilkie
b48799a01e Add license stanza 2017-03-13 14:50:15 +00:00
Tom Wilkie
9d22f030cf Dynamically reshard the QueueManager based on observed load. 2017-03-13 14:41:16 +00:00
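
The core of the resharding idea, as a minimal sketch: size the shard pool so that per-shard send throughput keeps up with the observed ingest rate. The names and the exact formula here are illustrative, not the actual QueueManager code.

```go
// Minimal sketch of load-based resharding; all names are illustrative.
package main

import (
	"fmt"
	"math"
)

// desiredShards estimates how many parallel send shards are needed so
// that outgoing throughput keeps up with the observed ingest rate.
// samplesInRate: samples/s arriving; samplesOutRate: samples/s the
// whole pool currently sends; currentShards: shards running now.
func desiredShards(samplesInRate, samplesOutRate float64, currentShards int) int {
	if samplesOutRate <= 0 {
		return currentShards // no throughput signal yet; keep the current count
	}
	// Throughput per shard = total out rate / shards; size the pool so
	// aggregate per-shard throughput covers the incoming rate.
	perShard := samplesOutRate / float64(currentShards)
	n := int(math.Ceil(samplesInRate / perShard))
	if n < 1 {
		n = 1
	}
	return n
}

func main() {
	// 50k samples/s coming in, 4 shards pushing 40k/s in total:
	// each shard does 10k/s, so we need 5 shards to keep up.
	fmt.Println(desiredShards(50000, 40000, 4)) // 5
}
```
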
Tom Wilkie
1ab893c6ec Limit 'discarding sample' logs to 1 every 10s (#2446)
* Limit 'discarding sample' logs to 1 every 10s

* Include the vendored library

* Review feedback
2017-02-23 19:20:39 +01:00
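
A minimal sketch of this kind of log throttling, assuming a simple mutex-guarded timestamp; the types and names are illustrative, not the vendored library the commit pulls in.

```go
// Minimal sketch: emit a noisy log line at most once per interval.
package main

import (
	"log"
	"sync"
	"time"
)

type throttledLogger struct {
	mu       sync.Mutex
	last     time.Time
	interval time.Duration
}

// Logf prints at most once per interval; extra calls are dropped.
func (t *throttledLogger) Logf(format string, args ...interface{}) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if time.Since(t.last) < t.interval {
		return
	}
	t.last = time.Now()
	log.Printf(format, args...)
}

func main() {
	tl := &throttledLogger{interval: 10 * time.Second}
	for i := 0; i < 1000; i++ {
		tl.Logf("discarding sample: queue full") // logged once per 10s
	}
}
```
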
Julius Volz
2f39dbc8b3 Rename StorageQueueManager -> QueueManager 2017-02-21 21:45:43 +01:00
Julius Volz
e9476b35d5 Re-add multiple remote writers
Each remote write endpoint gets its own set of relabeling rules.

This is based on the (yet-to-be-merged)
https://github.com/prometheus/prometheus/pull/2419, which removes legacy
remote write implementations.
2017-02-20 13:23:12 +01:00
Björn Rabenstein
089dc1076b Merge pull request #2435 from jmeulemans/open-chunks-gauge
Adding gauge for number of open head chunks.
2017-02-17 16:02:06 +01:00
Jeremy Meulemans
025c828976 Changed to open_head_chunks to address review.
Now incrementing numHeadChunks directly.
2017-02-17 07:10:13 -06:00
Jeremy Meulemans
074050b8c0 Updating for failed codeclimate check. 2017-02-16 18:04:28 -06:00
Jeremy Meulemans
f70b52d0b6 Adding gauge for number of open head chunks.
Fixes #1710
2017-02-16 17:56:45 -06:00
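
A minimal sketch of such a gauge using the public client_golang API; the metric name follows the review discussion above, but the surrounding code is illustrative.

```go
// Minimal sketch of a gauge tracking open head chunks.
package main

import "github.com/prometheus/client_golang/prometheus"

var openHeadChunks = prometheus.NewGauge(prometheus.GaugeOpts{
	Namespace: "prometheus",
	Subsystem: "local_storage",
	Name:      "open_head_chunks",
	Help:      "The current number of open head chunks.",
})

func init() {
	prometheus.MustRegister(openHeadChunks)
}

// Incremented when a head chunk is opened, decremented when it closes.
func onHeadChunkOpened() { openHeadChunks.Inc() }
func onHeadChunkClosed() { openHeadChunks.Dec() }

func main() {
	onHeadChunkOpened()
	defer onHeadChunkClosed()
}
```
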
Julius Volz
beb3c4b389 Remove legacy remote storage implementations
This removes legacy support for specific remote storage systems in favor
of only offering the generic remote write protocol. An example bridge
application that translates from the generic protocol to each of those
legacy backends is still provided at:

documentation/examples/remote_storage/remote_storage_bridge

See also https://github.com/prometheus/prometheus/issues/10

The next step in the plan is to re-add support for multiple remote
storages.
2017-02-14 17:52:05 +01:00
beorn7
d771185a43 storage: Fix chunkIndexToStartSeek calculation
With a high enough shrink ratio and enough chunks to persist, the
cutoff point could be _outside_ of the file, which wreaks havoc in the
storage.
2017-02-10 11:42:59 +01:00
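
A minimal sketch of the class of fix described here: clamp a shrink-ratio-derived seek index so it can never point past the end of the series file. Function and variable names are illustrative, not the storage package's actual code.

```go
// Minimal sketch of keeping a computed seek index inside the file.
package main

import "fmt"

// startSeekIndex picks where to start scanning chunk headers for the
// purge cutoff. Without the clamp, a large shrink ratio combined with
// many chunks to persist could place the index past the last chunk.
func startSeekIndex(chunksInFile, chunksToPersist int, shrinkRatio float64) int {
	idx := int(shrinkRatio * float64(chunksInFile+chunksToPersist))
	if idx >= chunksInFile {
		idx = chunksInFile - 1 // clamp inside the file
	}
	if idx < 0 {
		idx = 0
	}
	return idx
}

func main() {
	// 10 chunks on disk, 8 waiting to persist, shrink ratio 0.8:
	// the naive index 14 lies outside the file and must be clamped.
	fmt.Println(startSeekIndex(10, 8, 0.8)) // 9
}
```
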
beorn7
73bd5e4dff Merge branch 'beorn7/storage' into beorn7/storage3 2017-02-09 14:44:10 +01:00
beorn7
46a0837816 storage: Fix offset returned by dropAndPersistChunks
This is another corner-case that was previously never exercised
because the rewriting of a series file was never prevented by the
shrink ratio.

Scenario: There is an existing series on disk, which is archived. If a
new sample comes in for that file, a new chunk in memory is created,
and the chunkDescsOffset is set to -1. If series maintenance happens
before the series has at least one chunk to persist _and_ an
insufficient number of chunks on disk is old enough for purging (so that the
shrink ratio kicks in), dropAndPersistChunks would return 0, but it
should return the chunk length of the series file.
2017-02-09 14:35:07 +01:00
beorn7
9d12204da5 Merge branch 'release-1.5' 2017-02-09 13:11:53 +01:00
beorn7
bed4934224 storage: One more persist error code path discovered
Also, in that code path, set chunkDescsOffset to 0 rather than -1 in
case of "dropped more chunks from persistence than from memory" so
that no other weird things happen before the series is quarantined for
good.
2017-02-09 11:51:40 +01:00
beorn7
242d8edcb5 Merge branch 'release-1.5' 2017-02-08 17:28:09 +01:00
beorn7
8c8baaa558 storage: writeMemorySeries needs to return true for quarantined series
This is another fallout of my bug hunt.
2017-02-08 16:28:56 +01:00
Mitsuhiro Tanda
be8b1eb656 storage: optimize dropping chunks by using minShrinkRatio (#2397)
storage: prevent unnecessary chunk header reading if minShrinkRatio > 0
2017-02-07 17:33:54 +01:00
beorn7
2363a90adc storage: Do not throw away fully persisted memory series in checkpointing 2017-02-06 17:39:59 +01:00
beorn7
244a65fb29 storage: Increase persist watermark before calling append
The append call may reuse cds, and thus change its len.
(In practice, this wouldn't happen as cds should have len==cap.
Still, the previous order of lines was problematic.)
2017-02-05 02:25:09 +01:00
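
A minimal sketch of the ordering point: capture the slice length for the watermark before handing the slice to append, since the callee may reuse and grow it. All types here are illustrative stand-ins.

```go
// Minimal sketch: take len(cds) before append can mutate the slice.
package main

import (
	"fmt"
	"sync/atomic"
)

type series struct {
	persistWatermark int64
}

func (s *series) persist(cds []int, appendFn func([]int)) {
	// Record the length first: after appendFn runs, len(cds) may have
	// changed if the callee reused and grew the backing array.
	n := int64(len(cds))
	atomic.AddInt64(&s.persistWatermark, n)
	appendFn(cds)
}

func main() {
	s := &series{}
	s.persist([]int{1, 2, 3}, func(cds []int) {
		_ = append(cds, 4) // callee may grow/reuse the slice
	})
	fmt.Println(atomic.LoadInt64(&s.persistWatermark)) // 3
}
```
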
beorn7
75282b27ba storage: Added checks for invariants 2017-02-04 23:40:22 +01:00
beorn7
31e9db7f0c storage: Simplify evictChunkDesc method 2017-02-04 22:29:37 +01:00
beorn7
65dc8f44d3 storage: Test for errors returned by MaybePopulateLastTime 2017-02-01 23:43:58 +01:00
beorn7
752fac60ae storage: Remove race condition from TestLoop 2017-02-01 23:43:58 +01:00
beorn7
4ccfc93dcf storage: Set shrink ratio in the constructor. 2017-02-01 15:37:16 +01:00
beorn7
b2f086c6c4 storage: Expose bug of not setting the shrink ratio in the constructor 2017-02-01 15:37:10 +01:00
Brian Brazil
c1b547a90e Only checkpoint chunkdescs and series that need persisting. (#2340)
This decreases checkpoint size by not checkpointing things
that don't actually need checkpointing.

This is fully compatible with the v2 checkpoint format,
as it makes series appear as though the only chunkDescs
in memory are those that need persisting.
2017-01-17 00:59:38 +00:00
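
A minimal sketch of the filtering idea, with illustrative stand-in types: write only the chunk descs that still need persisting, so the checkpoint shrinks while readers still see exactly the descs that matter.

```go
// Minimal sketch: drop fully persisted descs from the checkpoint.
package main

import "fmt"

type chunkDesc struct {
	id        int
	persisted bool
}

// descsToCheckpoint keeps only descs that are not yet safely on disk,
// shrinking the checkpoint without losing anything unrecoverable.
func descsToCheckpoint(descs []chunkDesc) []chunkDesc {
	out := descs[:0:0] // fresh slice, forces a new backing array
	for _, d := range descs {
		if !d.persisted {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	descs := []chunkDesc{{1, true}, {2, true}, {3, false}}
	fmt.Println(descsToCheckpoint(descs)) // [{3 false}]
}
```
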
Brian Brazil
f64c231dad Allow checkpoints and maintenance to happen concurrently. (#2321)
This is essential on larger Prometheus servers, as otherwise
checkpoints prevent sufficient persisting of chunks to disk.
2017-01-13 17:24:19 +00:00
Brian Brazil
1dcb7637f5 Add various persistence related metrics (#2333)
Add metrics around checkpointing and persistence

* Add a metric to say if checkpointing is happening,
and another to track total checkpoint time and count.

This breaks the existing prometheus_local_storage_checkpoint_duration_seconds
by renaming it to prometheus_local_storage_checkpoint_last_duration_seconds
as the former name is more appropriate for a summary.

* Add metric for last checkpoint size.

* Add metric for series/chunks processed by checkpoints.

For long checkpoints it'd be useful to see how they're progressing.

* Add metric for dirty series

* Add metric for number of chunks persisted per series.

You can get the number of chunks from chunk_ops,
but not the matching number of series. This helps determine
the size of the writes being made.

* Add metric for chunks queued for persistence

Chunks created includes both chunks that'll need persistence
and chunks read in for queries. This only includes chunks created
for persistence.

* Code review comments on new persistence metrics.
2017-01-11 15:11:19 +00:00
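
A minimal sketch of metrics in this spirit, using the public client_golang API; the exact metric names the commit adds live in the Prometheus source, and the abbreviated ones below are illustrative.

```go
// Minimal sketch: gauge + counter instrumentation around checkpoints.
package main

import "github.com/prometheus/client_golang/prometheus"

var (
	checkpointing = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "local_storage_checkpointing",
		Help: "1 while a checkpoint is in progress, 0 otherwise.",
	})
	checkpointDuration = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "local_storage_checkpoint_duration_seconds_total",
		Help: "Total time spent checkpointing.",
	})
	dirtySeries = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "local_storage_dirty_series",
		Help: "Number of series that need checkpointing.",
	})
)

func init() {
	prometheus.MustRegister(checkpointing, checkpointDuration, dirtySeries)
}

func main() {
	checkpointing.Set(1)
	defer checkpointing.Set(0)
	// ... checkpoint work would go here ...
	checkpointDuration.Add(1.5)
	dirtySeries.Set(42)
}
```
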
Brian Brazil
f9e581907a Make index queue bigger. (#2322)
When a large Prometheus starts up fresh it can take many minutes
to warmup and clear out the index queue. A larger queue means less
blocking, bigger batches and cuts down startup time by ~50%.
2017-01-05 17:57:42 +00:00
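
The trade-off in miniature: modelling the index queue as a buffered channel, where a larger buffer means fewer blocked producers and bigger drain batches. The size below is illustrative.

```go
// Minimal sketch: a bigger buffered channel absorbs startup bursts.
package main

import "fmt"

type indexOp struct{ fingerprint uint64 }

func main() {
	// A small queue makes ingesters block as soon as indexing lags;
	// a larger one absorbs the startup burst and allows bigger batches.
	queue := make(chan indexOp, 16384)

	// Producer side: enqueue without blocking until the buffer fills.
	for i := 0; i < 100; i++ {
		queue <- indexOp{fingerprint: uint64(i)}
	}

	// Consumer side: drain whatever is queued into one batch.
	batch := make([]indexOp, 0, len(queue))
	for len(queue) > 0 {
		batch = append(batch, <-queue)
	}
	fmt.Println("batch size:", len(batch)) // 100
}
```
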
Mitsuhiro Tanda
7e369b9318 expose max memory chunks metrics (#2303)
* expose max memory chunks metrics
2016-12-27 18:34:07 +00:00
Brian Brazil
93b70ee4ea Evict chunk descs of all unloaded chunks during maintenance. (#2297)
Keeping these around has two problems:
1) Each desc takes 64 bytes, so 10 of them are 640B. This is a lot of
overhead on a 1024-byte chunk.
2) It can take well over a week to reach a point where this and thus
Prometheus memory usage as a whole enters steady state. This makes RAM
estimation very hard for users, and makes it difficult to investigate
things like memory fragmentation.

Instead we'll wipe them during each memory series maintenance cycle, and
if a query pulls them in they'll hang around as cache until the next
cycle.
2016-12-22 13:49:03 +00:00
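
A minimal sketch of the eviction policy with illustrative types: during maintenance, drop descs whose chunks are persisted and not currently loaded; a query that needs them pulls them back in until the next cycle.

```go
// Minimal sketch: wipe descs of unloaded, persisted chunks.
package main

import "fmt"

type chunkDesc struct {
	loaded    bool // chunk data currently in memory (e.g. for a query)
	persisted bool // chunk safely on disk
}

func evictUnloaded(descs []*chunkDesc) []*chunkDesc {
	kept := descs[:0]
	for _, d := range descs {
		// Keep descs whose chunks are in memory or not yet on disk;
		// anything else can be re-read from disk on demand.
		if d.loaded || !d.persisted {
			kept = append(kept, d)
		}
	}
	return kept
}

func main() {
	descs := []*chunkDesc{
		{loaded: false, persisted: true},  // evicted
		{loaded: true, persisted: true},   // kept: pulled in by a query
		{loaded: false, persisted: false}, // kept: still needs persisting
	}
	fmt.Println(len(evictUnloaded(descs))) // 2
}
```
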
Brian Brazil
1b8a474612 Don't clone the metric if there are no remote writes.
The metric clone can't be further optimised, and it is a
non-trivial memory allocation cost, so fast-path it
if there are no remote writes configured.
2016-12-21 11:34:48 +00:00
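
A minimal sketch of the fast path, with illustrative types: skip the clone (and its allocation) entirely when no remote write queues are configured.

```go
// Minimal sketch: allocate a metric copy only if someone consumes it.
package main

import "fmt"

type metric map[string]string

func (m metric) clone() metric {
	c := make(metric, len(m))
	for k, v := range m {
		c[k] = v
	}
	return c
}

func send(queues []chan metric, m metric) {
	if len(queues) == 0 {
		return // fast path: no remote writes, no allocation
	}
	for _, q := range queues {
		q <- m.clone() // each queue gets its own copy
	}
}

func main() {
	send(nil, metric{"__name__": "up"}) // allocates nothing
	fmt.Println("done")
}
```
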
Tristan Colgate
30be8e0b8a ignore dotfiles in data directory 2016-12-15 11:48:23 +00:00
Björn Rabenstein
45570e5972 Merge pull request #2277 from prometheus/beorn7/storage2
storage: Sanity-check number of loaded chunk descs
2016-12-14 02:59:10 +01:00
beorn7
253be23c00 storage: Sanity-check number of loaded chunk descs
Two cases:

- An unarchived metric must have at least one chunk desc loaded upon
  unarchival. Otherwise, the file is gone or has size 0, which is an
  inconsistency (because the series is still indexed in the archive
  index). Hence, quarantining is triggered.

- If loading the chunk descs of a series with a known chunkDescsOffset
  (i.e. != -1), the number of chunks loaded must be equal to
  chunkDescsOffset. If not, there is a data corruption. An error is
returned, which leads to quarantining.

In any case, there is a guard added to not access the 1st element of
an empty chunkDescs slice. (That's what triggered the crashes in issue
2249.)  A time series with unknown chunkDescsOffset and no chunks in
memory and no chunks on disk either could trigger that case. I would
assume such a "null series" doesn't exist, but it's not entirely
unthinkable that one could appear (perhaps in future uses of the
storage). (Create a series, and then something tries to preload chunks
before the first sample is added.)
2016-12-13 23:19:39 +01:00
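
A minimal sketch of the two sanity checks plus the empty-slice guard; error texts and signatures are illustrative, not the storage package's actual API.

```go
// Minimal sketch: validate loaded chunk descs, guard the empty slice.
package main

import (
	"errors"
	"fmt"
)

func checkLoadedChunkDescs(loaded, chunkDescsOffset int, unarchiving bool) error {
	if unarchiving && loaded == 0 {
		// Series file gone or empty although the series is still in
		// the archive index: inconsistency, quarantine the series.
		return errors.New("unarchived series has no chunks on disk")
	}
	if chunkDescsOffset != -1 && loaded != chunkDescsOffset {
		// Known offset but mismatching count: data corruption.
		return errors.New("chunk desc count does not match chunkDescsOffset")
	}
	return nil
}

// firstTime guards against indexing into an empty chunkDescs slice.
func firstTime(chunkDescs []int64) (int64, bool) {
	if len(chunkDescs) == 0 {
		return 0, false // the "null series" case: nothing to access
	}
	return chunkDescs[0], true
}

func main() {
	fmt.Println(checkLoadedChunkDescs(0, -1, true)) // inconsistency
	fmt.Println(firstTime(nil))                     // 0 false
}
```
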
Björn Rabenstein
5f0c0e43cf Merge pull request #2276 from prometheus/beorn7/storage
storage: Catch data corruption that leads to division by zero
2016-12-13 23:13:39 +01:00
beorn7
837c029b16 storage: Fix linter issue
Go style tries to avoid indented `else` blocks.
2016-12-13 19:05:30 +01:00
beorn7
4719482f5f storage: Make tests go-vet and golint clean 2016-12-13 17:07:27 +01:00
beorn7
485ac8dff7 storage: Verify validity of byte length when unmarshalling (double)delta chunks
This makes sure a division-by-zero crash cannot happen in the Len()
method.

Fixes #2773
2016-12-13 17:07:27 +01:00
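
A minimal sketch of the defensive check, assuming a fixed on-disk chunk size: reject a buffer of the wrong length at unmarshal time, so later length arithmetic can never divide by zero. Names and the constant are illustrative.

```go
// Minimal sketch: validate chunk byte length before it is ever used.
package main

import (
	"errors"
	"fmt"
)

const chunkLen = 1024 // fixed on-disk chunk size (illustrative)

type deltaChunk struct{ buf []byte }

// unmarshal rejects a truncated or corrupted chunk up front, so a
// later Len()-style computation that divides by a width derived from
// the buffer never sees a zero divisor.
func (c *deltaChunk) unmarshal(data []byte) error {
	if len(data) != chunkLen {
		return errors.New("corrupted chunk: unexpected byte length")
	}
	c.buf = data
	return nil
}

func main() {
	var c deltaChunk
	fmt.Println(c.unmarshal(make([]byte, 16)))       // corrupted chunk
	fmt.Println(c.unmarshal(make([]byte, chunkLen))) // <nil>
}
```
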
tattsun
e714079cf2 storage: fix error message (#2270)
* storage: add error message
2016-12-09 22:36:27 +00:00
Christopher M. Luciano
148b006e25 Clarify the error message shown when unexpected files are found in the Prometheus data dir 2016-12-05 10:51:57 -05:00
Julius Volz
127332c56f Merge pull request #2168 from tomwilkie/chunk-len
Add call to estimate number of samples in a chunk to the API
2016-11-17 23:13:50 -08:00
Tom Wilkie
585878cdb2 Add call to estimate number of samples in a chunk to the API 2016-11-17 19:09:59 +00:00
Björn Rabenstein
036715370f Merge pull request #2184 from huydx/master
Fix possible memory leak caused by defer inside a loop
2016-11-14 15:26:39 +01:00
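
The Go pattern behind this fix, as a minimal sketch: a defer inside a loop only runs when the surrounding function returns, so resources accumulate across iterations; wrapping the loop body in its own function scopes each defer to one iteration.

```go
// Minimal sketch: defer-in-loop leak and the usual fix.
package main

import (
	"fmt"
	"os"
)

// leaky keeps every file open until the whole function returns.
func leaky(paths []string) {
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			continue
		}
		defer f.Close() // runs at function exit, not loop end
		fmt.Println(f.Name())
	}
}

// fixed closes each file as soon as its iteration finishes.
func fixed(paths []string) {
	for _, p := range paths {
		func() {
			f, err := os.Open(p)
			if err != nil {
				return
			}
			defer f.Close() // runs when this closure returns
			fmt.Println(f.Name())
		}()
	}
}

func main() {
	fixed([]string{"/etc/hostname"})
	leaky(nil)
}
```
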