Default to bigger remote_write sends (#5267)

* Default to bigger remote_write sends

Raise the default MaxSamplesPerSend (100 -> 500) to amortise the cost of
remote calls across more samples. Lower MaxShards (1000 -> 200) to keep the
expected maximum memory usage within reason: the worst-case number of
pending samples stays at roughly 600k either way.

Signed-off-by: Bryan Boreham <bryan@weave.works>

* Change default Capacity to 2500

To maintain the existing 5:1 ratio with MaxSamplesPerSend (previously
500:100, now 2500:500).

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
commit 90fc6be70f (parent f0f8e50567)
Author: Bryan Boreham
Date:   2020-09-09 21:00:23 +01:00 (committed by GitHub)
1 changed file with 8 additions and 8 deletions

@@ -104,16 +104,16 @@ var (
 
 	// DefaultQueueConfig is the default remote queue configuration.
 	DefaultQueueConfig = QueueConfig{
-		// With a maximum of 1000 shards, assuming an average of 100ms remote write
-		// time and 100 samples per batch, we will be able to push 1M samples/s.
-		MaxShards:         1000,
+		// With a maximum of 200 shards, assuming an average of 100ms remote write
+		// time and 500 samples per batch, we will be able to push 1M samples/s.
+		MaxShards:         200,
 		MinShards:         1,
-		MaxSamplesPerSend: 100,
+		MaxSamplesPerSend: 500,
 
-		// Each shard will have a max of 500 samples pending in it's channel, plus the pending
-		// samples that have been enqueued. Theoretically we should only ever have about 600 samples
-		// per shard pending. At 1000 shards that's 600k.
-		Capacity:          500,
+		// Each shard will have a max of 2500 samples pending in its channel, plus the pending
+		// samples that have been enqueued. Theoretically we should only ever have about 3000 samples
+		// per shard pending. At 200 shards that's 600k.
+		Capacity:          2500,
 		BatchSendDeadline: model.Duration(5 * time.Second),
 
 		// Backoff times for retrying a batch of samples on recoverable errors.
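
As a back-of-the-envelope check on the figures quoted in the comments above, the short standalone Go sketch below (not part of the change) recomputes the advertised throughput and the worst-case pending samples for the old and new defaults. It assumes the 100ms average remote write time stated in those comments, and reads "plus the pending samples that have been enqueued" as roughly one extra in-flight batch per shard.

package main

import "fmt"

func main() {
	// Assumption taken from the code comments: an average remote write takes
	// 100ms, so each shard can complete roughly 10 sends per second.
	const sendsPerSecondPerShard = 10

	// Throughput ceiling = max shards * samples per send * sends per second per shard.
	fmt.Println(1000 * 100 * sendsPerSecondPerShard) // old defaults: 1000000 samples/s
	fmt.Println(200 * 500 * sendsPerSecondPerShard)  // new defaults: 1000000 samples/s

	// Worst-case backlog = max shards * (Capacity + one in-flight batch).
	fmt.Println(1000 * (500 + 100)) // old defaults: 600000 pending samples
	fmt.Println(200 * (2500 + 500)) // new defaults: 600000 pending samples
}

Both the throughput ceiling and the worst-case backlog come out the same as before; only the number of samples carried per remote call, and hence the number of shards needed to reach that ceiling, changes.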