I'm sure people will still find them, but let's at least
force people to click through one more time to get to the
commands that can damage your cluster.
Also, the ".. danger" directive at the top of the page
wasn't actually getting special formatting, so I changed
it to a ".. warning" which is red.
Signed-off-by: John Spray <john.spray@redhat.com>
Since kraken, Ceph has enforced a 1:1 correspondence between CRUSH rulesets
and CRUSH rules, so a ruleset and a rule are effectively the same thing,
although the term "ruleset" still survives - notably in the CRUSH rule
itself, where it simply denotes the number of the rule.
This commit updates the documentation to more faithfully reflect the current
state of the code.
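To illustrate, a rule in a decompiled CRUSH map from this era still
carries a "ruleset" field, but it now just matches the rule's own number.
A sketch (the rule name and steps are only examples):

    rule replicated_rule {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }

There is no longer a separate ruleset grouping several rules; "ruleset 0"
above is effectively just rule number 0.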
Fixes: http://tracker.ceph.com/issues/20559
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Add a procedure that permits reconstructing metadata in a potentially
damaged cephfs metadata pool and writing the results into a
freshly-initialized pool that refers to the same data pool. Add option
flags to override the checks that would ordinarily prevent this, and add
options to the recovery tools to write their output to a separate pool
instead of the one selected for recovery. See
docs/cephfs/disaster-recovery.rst for details.
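As a rough sketch of how the new options fit together (pool and
filesystem names are placeholders, and the exact flag spellings should
be checked against the updated disaster-recovery documentation):

    # Create a fresh metadata pool and a recovery filesystem that reuses
    # the original data pool; the overlay flag overrides the usual check
    # against reusing a pool that already belongs to another filesystem.
    ceph osd pool create recovery 64
    ceph fs new recovery-fs recovery <original data pool> \
        --allow-dangerous-metadata-overlay

    # Point the recovery tools at the freshly-initialized pool instead
    # of the damaged one.
    cephfs-data-scan init --force-init --filesystem recovery-fs \
        --alternate-pool recovery
    cephfs-data-scan scan_extents --alternate-pool recovery \
        --filesystem <original filesystem> <original data pool>
    cephfs-data-scan scan_inodes --alternate-pool recovery \
        --filesystem <original filesystem> --force-corrupt --force-init \
        <original data pool>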
Fixes: http://tracker.ceph.com/issues/15068
Fixes: http://tracker.ceph.com/issues/15069
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Sometimes users know that particular data pool PGs
have been damaged, and they would like to scan
their files to work out which ones might have
been affected.
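A hedged usage sketch of the kind of scan described here, using the
cephfs-data-scan pg_files subcommand (the path and PG IDs below are only
examples):

    # List files under /home/bob that may include objects stored in
    # data pool PGs 0.4 and 0.9.
    cephfs-data-scan pg_files /home/bob 0.4 0.9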
Fixes: http://tracker.ceph.com/issues/17249
Signed-off-by: John Spray <john.spray@redhat.com>
These are deliberately fairly sparse, because:
* These tools are for experts
* These tools may well be wrapped in a higher-level recovery tool
that orchestrates parallel workers at some stage (a rough sketch of
a manual parallel invocation follows below).
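For example, splitting a scan across several workers by hand - assuming
this refers to the cephfs-data-scan commands documented above, and noting
that the worker flags shown here are an assumption to be checked against
the tool's help output - might look like:

    # Run the extent scan split across 4 workers (e.g. on different
    # hosts); --worker_m is assumed to be the worker count and
    # --worker_n the zero-based worker index.
    cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 <data pool> &
    cephfs-data-scan scan_extents --worker_n 1 --worker_m 4 <data pool> &
    cephfs-data-scan scan_extents --worker_n 2 --worker_m 4 <data pool> &
    cephfs-data-scan scan_extents --worker_n 3 --worker_m 4 <data pool> &
    wait

A higher-level wrapper would presumably generate and supervise
invocations like these automatically.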
Signed-off-by: John Spray <john.spray@redhat.com>