Remove a deadlock during shutdown.
If queries are still running when shutdown is initiated, they finish _during_ the shutdown. In that case, they may request chunk eviction when unpinning their pinned chunks. Those requests can completely fill the evict request queue _after_ it has been drained during storage shutdown. If that happens (which it will if _many_ queries are still running at shutdown), the affected queries get stuck on the full queue while holding a fingerprint lock. Checkpointing can then no longer process that fingerprint (or any other that shares the same lock), and we are deadlocked.
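To make the failure mode concrete, here is a minimal, self-contained Go sketch of the same lock-plus-full-queue cycle. The channel, mutex, and goroutine names are illustrative stand-ins, not the actual storage types:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Illustrative stand-ins: a bounded evict request queue whose
	// consumer has already shut down, and a per-fingerprint lock.
	evictRequests := make(chan int, 1)
	var fpLock sync.Mutex

	locked := make(chan struct{})
	done := make(chan struct{})

	// "Query" goroutine: holds the fingerprint lock while it requests
	// chunk eviction. With nobody draining the queue, the second send
	// blocks forever and the lock is never released.
	go func() {
		fpLock.Lock()
		defer fpLock.Unlock()
		close(locked)
		evictRequests <- 1 // fills the queue
		evictRequests <- 2 // blocks: queue is full, no consumer left
	}()

	// "Checkpointing" goroutine: needs the same lock, so it is stuck
	// behind the blocked query.
	go func() {
		<-locked
		fpLock.Lock()
		defer fpLock.Unlock()
		fmt.Println("checkpointed") // never reached
		close(done)
	}()

	// Everything is now blocked; the Go runtime aborts with
	// "fatal error: all goroutines are asleep - deadlock!".
	<-done
}
```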
commit 2c8fdcbc23
parent edc91cbabb
```diff
@@ -467,18 +467,17 @@ func (s *memorySeriesStorage) handleEvictList() {
 				s.maybeEvict()
 			}
 		case <-s.evictStopping:
-			// Drain evictRequests to not let requesters hang.
-			for {
-				select {
-				case <-s.evictRequests:
-					// Do nothing.
-				default:
-					ticker.Stop()
-					glog.Info("Chunk eviction stopped.")
-					close(s.evictStopped)
-					return
-				}
-			}
+			// Drain evictRequests forever in a goroutine to not let
+			// requesters hang.
+			go func() {
+				for {
+					<-s.evictRequests
+				}
+			}()
+			ticker.Stop()
+			glog.Info("Chunk eviction stopped.")
+			close(s.evictStopped)
+			return
 		}
 	}
 }
```
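The fix hands `evictRequests` to a goroutine that drains it forever once shutdown starts, so late requesters can never block on a full queue. A minimal, runnable sketch of that drain-forever pattern, with illustrative channel names rather than the real storage fields:

```go
package main

import "fmt"

// handleRequests stands in for handleEvictList: it serves requests until
// stopping is closed, then leaves a goroutine behind that drains the
// request channel forever, so no late sender can ever block on it.
func handleRequests(requests chan int, stopping, stopped chan struct{}) {
	for {
		select {
		case r := <-requests:
			fmt.Println("handled request", r)
		case <-stopping:
			// Drain requests forever in a goroutine to not let
			// requesters hang (the pattern this commit introduces).
			go func() {
				for {
					<-requests
				}
			}()
			close(stopped)
			return
		}
	}
}

func main() {
	requests := make(chan int, 1) // small buffer, like the evict request queue
	stopping := make(chan struct{})
	stopped := make(chan struct{})

	go handleRequests(requests, stopping, stopped)

	requests <- 1
	close(stopping) // initiate shutdown
	<-stopped

	// Even after the handler has returned, senders do not block:
	// the drain goroutine keeps receiving.
	for i := 2; i <= 5; i++ {
		requests <- i
	}
	fmt.Println("no requester blocked after shutdown")
}
```

The request channel is intentionally never closed: queries may still send to it, and sending on a closed channel would panic. Leaking one trivial goroutine is presumably the cheaper trade-off during a shutdown that ends the process anyway.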