After commit 3afb889 ("qa: add supported distros for ceph-ansible"),
git submodule update commands fail with:
No submodule mapping found in .gitmodules for path 'ceph-qa-suite'
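A stale gitlink like this can usually be cleared by removing the
cached entry for the path (illustrative commands, not necessarily
exactly what this commit does):

    git rm --cached ceph-qa-suite
    git commit -m 'remove stale ceph-qa-suite gitlink'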
Signed-off-by: Casey Bodley <cbodley@redhat.com>
In normal operation we generate flushes from
_consume when we read from the journaler. However,
we should also have a fallback flush mechanism for
situations where can_consume() is false for a long time.
This comes up in testing when we set the throttle to zero to
prevent progress, but would also come up in real life if
we were busy purging a few very large files, or if purging
was stuck due to bad PGs in the data pool -- we don't want
that to stop us completing appends to the PQ.
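A minimal sketch of the fallback, with simplified stand-ins for the
real Journaler/PurgeQueue members (names here are illustrative):

    #include <cstdint>

    struct Journaler {
      uint64_t write_pos = 0;  // bytes appended so far
      uint64_t flush_pos = 0;  // bytes already submitted for write
      void flush() { flush_pos = write_pos; }
    };

    struct PurgeQueue {
      Journaler journaler;
      bool can_consume() const { return false; }  // e.g. throttle at zero

      // Called periodically from the MDS tick: if the consumer is
      // stalled but appends are sitting unflushed, flush anyway so
      // that appends to the queue still complete.
      void tick() {
        if (!can_consume() && journaler.write_pos > journaler.flush_pos)
          journaler.flush();
      }
    };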
Signed-off-by: John Spray <john.spray@redhat.com>
Previously write_head calls were only generated
on the write side, so if you had a big queue
and were just working through consuming it, you
wouldn't record your progress, and on a daemon
restart would end up repeating a load of work.
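A hedged sketch of the consume-side head update (field names and the
spacing rule are simplified stand-ins for the real Journaler state):

    #include <cstdint>

    struct Journaler {
      uint64_t expire_pos = 0;            // oldest byte still needed
      uint64_t last_written_expire = 0;   // expire_pos recorded in head
      uint64_t layout_period = 4 << 20;   // e.g. one object's worth

      void write_head() { last_written_expire = expire_pos; }

      // Record progress once the consumer has moved a full layout
      // period past the last persisted position, so a restarted
      // daemon resumes near where it left off.
      void maybe_write_head_on_consume() {
        if (expire_pos >= last_written_expire + layout_period)
          write_head();
      }
    };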
Signed-off-by: John Spray <john.spray@redhat.com>
So that callers on the read side can optionally
do their own write_head calls according to
the same condition that Journaler uses
internally for its write_head calls during _flush().
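Sketch of the factoring, assuming a predicate named something like
write_head_needed() (the name and fields are illustrative):

    #include <cstdint>

    class Journaler {
      uint64_t expire_pos = 0;
      uint64_t last_committed_expire = 0;
      uint64_t layout_period = 4 << 20;
    public:
      // The test that _flush() previously applied inline.
      bool write_head_needed() const {
        return expire_pos >= last_committed_expire + layout_period;
      }
      void write_head() { last_committed_expire = expire_pos; }
    };

A read-side caller can then apply the same policy itself:

    // if (journaler.write_head_needed()) journaler.write_head();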
Signed-off-by: John Spray <john.spray@redhat.com>
Previously, if doing a write/is_readable/write/is_readable sequence,
you'd end up doing a flush after every write, even though there
was already a flush in flight that would advance the
readability of the journal.
Because this flush-during-read path is only active when using
a read/write journal such as in PurgeQueue, tweak the behaviour
to suit this case.
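Illustrative sketch: only trigger a flush from the read path when
nothing is already in flight (the positions are simplified stand-ins):

    #include <cstdint>

    struct Journaler {
      uint64_t write_pos = 0;  // bytes appended
      uint64_t flush_pos = 0;  // bytes submitted to the OSDs
      uint64_t safe_pos  = 0;  // bytes acknowledged as durable
      uint64_t read_pos  = 0;

      void flush() { flush_pos = write_pos; }
      bool is_readable() const { return read_pos < safe_pos; }

      // Read path: a flush already in flight (flush_pos > safe_pos)
      // will advance readability on its own, so don't issue another.
      void maybe_flush_for_read() {
        if (!is_readable() && flush_pos == safe_pos &&
            write_pos > flush_pos)
          flush();
      }
    };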
Signed-off-by: John Spray <john.spray@redhat.com>
This was an unused code path. If anyone set a nonzero
value here, the MDS would crash because the Timer implementation
has changed since this code was written, and now requires
add_event_after callers to hold the right lock.
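The locking rule in question, in a simplified sketch (SafeTimer here
is a stub, not the real implementation):

    #include <functional>
    #include <mutex>

    struct SafeTimer {
      // The real implementation asserts that the caller holds the
      // lock it was constructed with.
      void add_event_after(double /*sec*/, std::function<void()> /*cb*/) {}
    };

    std::mutex timer_lock;
    SafeTimer timer;

    void schedule_event() {
      std::lock_guard<std::mutex> l(timer_lock);  // now mandatory
      timer.add_event_after(1.0, []{ /* periodic work */ });
    }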
Signed-off-by: John Spray <john.spray@redhat.com>
For decode errors, and for Journaler errors: both are
considered damage to the MDS rank, as with other
per-rank data structures.
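A hedged sketch of the error handling (damaged() and the error codes
are illustrative, not the exact MDS interfaces):

    #include <cerrno>
    #include <cstdlib>
    #include <iostream>

    [[noreturn]] void damaged() {
      // In the MDS this marks the rank damaged in the MDSMap and
      // respawns, rather than aborting directly.
      std::cerr << "marking rank damaged" << std::endl;
      std::abort();
    }

    void on_journal_read(int r) {
      if (r == -EINVAL)
        damaged();   // decode error: the on-disk queue is corrupt
      else if (r < 0)
        damaged();   // other Journaler error: also per-rank damage
      // r == 0: proceed with the decoded entry
    }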
Signed-off-by: John Spray <john.spray@redhat.com>
We don't track an item count, but we do have
the number of bytes left in the Journaler, so we
can use that to give an indication of progress
while the MDS rank shutdown is waiting for
the PurgeQueue to do its thing.
Also lift the ops limit on the PurgeQueue
when it goes into the drain phase.
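Illustrative sketch of both pieces, with hypothetical member names:

    #include <cstdint>
    #include <limits>

    struct PurgeQueue {
      uint64_t write_pos = 0, read_pos = 0;  // Journaler positions
      uint64_t drain_initial = 0;            // bytes left at drain start
      bool draining = false;
      uint64_t ops_limit = 64;

      void begin_drain() {
        draining = true;
        drain_initial = write_pos - read_pos;
        ops_limit = std::numeric_limits<uint64_t>::max();  // lifted
      }

      // No item count is tracked, but bytes remaining still give a
      // usable progress indication during rank shutdown.
      double drain_progress() const {
        if (drain_initial == 0)
          return 1.0;
        return 1.0 - double(write_pos - read_pos) / double(drain_initial);
      }
    };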
Signed-off-by: John Spray <john.spray@redhat.com>
Also, move the shutdown_pass call from dispatch
to tick, so that it doesn't rely on incoming
messages to make progress.
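Sketch of the shape of the change (member names are illustrative):

    struct MDSRank {
      bool stopping = false;
      bool shutdown_pass() { /* advance shutdown */ return true; }

      void tick() {            // runs on a timer, traffic or not
        if (stopping)
          shutdown_pass();
      }

      void dispatch(/* Message *m */) {
        // previously: if (stopping) shutdown_pass();  -- now in tick()
      }
    };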
Signed-off-by: John Spray <john.spray@redhat.com>
This will belong in PurgeQueue from now on. We assume
that there is no need to throttle the rate of insertions
into the purge queue, as it is an efficient sequentially-written
journal.
Signed-off-by: John Spray <john.spray@redhat.com>
To better reflect its lifecycle: it has a part to play
in create/open and has an init/shutdown, unlike StrayManager.
Signed-off-by: John Spray <john.spray@redhat.com>
To avoid creating stray directories of unbounded size
and all the associated pain, use a more appropriate
data structure to store a FIFO of inodes that need
purging.
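A hedged sketch of the idea: a FIFO of purge items backed by a
sequentially written journal instead of unbounded stray directories
(PurgeItem's fields and the API shape are illustrative):

    #include <cstdint>
    #include <deque>

    struct PurgeItem {
      uint64_t ino;    // inode to purge
      uint64_t size;   // data extent to delete
    };

    class PurgeQueue {
      std::deque<PurgeItem> fifo;  // stands in for the Journaler backing
    public:
      // O(1) append; in the real queue this is a journal write.
      void push(const PurgeItem &item) { fifo.push_back(item); }

      // Purges are executed strictly in FIFO order.
      bool consume(PurgeItem *out) {
        if (fifo.empty())
          return false;
        *out = fifo.front();
        fifo.pop_front();
        return true;
      }
    };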
Fixes: http://tracker.ceph.com/issues/11950
Signed-off-by: John Spray <john.spray@redhat.com>
Otherwise, the callback will deadlock if it in turn
calls into any Journaler functions. We don't care
about performance here, because we do this once at startup.
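Illustrative sketch of the hazard and the deferral (types simplified;
a Finisher-style completion queue would be the idiomatic choice, a
detached thread is used here only for brevity):

    #include <functional>
    #include <mutex>
    #include <thread>

    struct Journaler {
      std::mutex lock;

      void recover(std::function<void()> onfinish) {
        std::lock_guard<std::mutex> l(lock);
        // ... read the head, probe the objects, etc. ...
        // Wrong: calling onfinish() here deadlocks if it re-enters
        // the Journaler. Right: hand it off to another thread; being
        // slow is fine, since this runs once at startup.
        std::thread(std::move(onfinish)).detach();
      }
    };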
Signed-off-by: John Spray <john.spray@redhat.com>