qa/standalone
=============

These scripts run standalone clusters, but not in a normal way. They make use
of the functions in ceph-helpers.sh to quickly start/stop daemons against toy
clusters in a single directory.

They are normally run via teuthology based on qa/suites/rados/standalone/*.yaml.

You can run them in a git checkout + build directory as well:

* qa/run-standalone.sh will run all of them in sequence. This is slow since
  there is no parallelism.

* You can run individual script(s) by specifying the basename or path below
  qa/standalone as arguments to qa/run-standalone.sh:

    ../qa/run-standalone.sh misc.sh osd/osd-dup.sh

* You can specify arguments for selected tests by simply appending the list
  of tests to each argument:

    ../qa/run-standalone.sh "test-ceph-helpers.sh test_get_last_scrub_stamp"