Remove a deadlock during shutdown.

If queries are still running when the shutdown is initiated, they will
finish _during_ the shutdown. In that case, they might request chunk
eviction upon unpinning their pinned chunks. Those requests can
completely fill the evict request queue _after_ it has already been
drained during storage shutdown. If that ever happens (which it will
if _many_ queries are still running during shutdown), the affected
queries get stuck while holding a fingerprint lock. Checkpointing can
then no longer process that fingerprint (or any fingerprint sharing
the same lock), and we are deadlocked.
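
The fix (see the diff below) stops trying to drain the queue just once
and instead keeps draining it forever in a dedicated goroutine, so late
requesters can never block. Here is a minimal, self-contained sketch of
that pattern; the channel names mirror the memorySeriesStorage fields
visible in the diff, while the request type, buffer size, and the
simulated "query" goroutines are assumptions made for illustration only:

// Standalone sketch of the drain-forever shutdown pattern (not the
// actual Prometheus storage code). The channel names mirror the fields
// of memorySeriesStorage; the request type, buffer size, and "query"
// goroutines are invented for illustration.
package main

import (
	"fmt"
	"sync"
)

type evictRequest struct{ chunkID int }

func main() {
	evictRequests := make(chan evictRequest, 2) // small buffer stands in for the real queue
	evictStopping := make(chan struct{})
	evictStopped := make(chan struct{})

	// Eviction loop: serves requests until shutdown is signalled.
	go func() {
		for {
			select {
			case req := <-evictRequests:
				fmt.Println("evicting chunk", req.chunkID)
			case <-evictStopping:
				// The fix: keep draining forever in a goroutine so that
				// queries unpinning chunks during shutdown never block
				// on a full evict request queue.
				go func() {
					for {
						<-evictRequests
					}
				}()
				close(evictStopped)
				return
			}
		}
	}()

	close(evictStopping) // initiate shutdown

	// Queries that finish during shutdown still send evict requests.
	// With the drain goroutine in place, none of them gets stuck.
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			evictRequests <- evictRequest{chunkID: id}
		}(i)
	}
	wg.Wait()

	<-evictStopped
	fmt.Println("shutdown complete, no requester blocked")
}

Note that the drain goroutine intentionally never returns; since it is
only started while the whole process is shutting down, leaking it is
harmless.
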
Bjoern Rabenstein 2015-01-22 14:42:15 +01:00
parent edc91cbabb
commit 2c8fdcbc23
1 changed file with 10 additions and 11 deletions


@@ -467,18 +467,17 @@ func (s *memorySeriesStorage) handleEvictList() {
 				s.maybeEvict()
 			}
 		case <-s.evictStopping:
-			// Drain evictRequests to not let requesters hang.
-			for {
-				select {
-				case <-s.evictRequests:
-					// Do nothing.
-				default:
-					ticker.Stop()
-					glog.Info("Chunk eviction stopped.")
-					close(s.evictStopped)
-					return
+			// Drain evictRequests forever in a goroutine to not let
+			// requesters hang.
+			go func() {
+				for {
+					<-s.evictRequests
 				}
-			}
+			}()
+			ticker.Stop()
+			glog.Info("Chunk eviction stopped.")
+			close(s.evictStopped)
+			return
 		}
 	}
 }