Default to bigger remote_write sends (#5267)
* Default to bigger remote_write sends

Raise the default MaxSamplesPerSend to amortise the cost of remote calls across more samples. Lower MaxShards to keep the expected max memory usage within reason.

Signed-off-by: Bryan Boreham <bryan@weave.works>

* Change default Capacity to 2500

To maintain ratio with MaxSamplesPerSend

Signed-off-by: Bryan Boreham <bjboreham@gmail.com>
This commit is contained in:
parent f0f8e50567
commit 90fc6be70f
@@ -104,16 +104,16 @@ var (
 	// DefaultQueueConfig is the default remote queue configuration.
 	DefaultQueueConfig = QueueConfig{
-		// With a maximum of 1000 shards, assuming an average of 100ms remote write
-		// time and 100 samples per batch, we will be able to push 1M samples/s.
-		MaxShards:         1000,
+		// With a maximum of 200 shards, assuming an average of 100ms remote write
+		// time and 500 samples per batch, we will be able to push 1M samples/s.
+		MaxShards:         200,
 		MinShards:         1,
-		MaxSamplesPerSend: 100,
+		MaxSamplesPerSend: 500,
 
-		// Each shard will have a max of 500 samples pending in it's channel, plus the pending
-		// samples that have been enqueued. Theoretically we should only ever have about 600 samples
-		// per shard pending. At 1000 shards that's 600k.
-		Capacity:          500,
+		// Each shard will have a max of 2500 samples pending in its channel, plus the pending
+		// samples that have been enqueued. Theoretically we should only ever have about 3000 samples
+		// per shard pending. At 200 shards that's 600k.
+		Capacity:          2500,
 		BatchSendDeadline: model.Duration(5 * time.Second),
 
 		// Backoff times for retrying a batch of samples on recoverable errors.