We no longer have a snaps field with real values, so dumping this as a
"snap_context" is silly. Instead, just dump the seq.
Adjust qa/standalone/scrub/osd-scrub-repair.sh accordingly.
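A minimal sketch of the kind of check the adjusted test might make; the JSON shape and the jq path are illustrative assumptions, not the exact test code:

    # Hypothetical object-info dump: just a "seq" value instead of a "snap_context" object.
    dump='{"oid":"obj1","seq":4}'
    test "$(echo "$dump" | jq -r '.seq')" = "4" || echo "unexpected seq"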
Signed-off-by: Sage Weil <sage@redhat.com>
Case 1: A more recent update exists
Case 2: The first entry in the divergent sequence is a create
Case 3 NOT TESTED - Object currently missing
Case 4: We can rollback all of the entries
Case 5: We cannot rollback at least 1 of the entries
Support starting OSDs even when "noup" is set (don't wait for them to be marked up); see the sketch below.
Move create_ec_pool() to ceph-helpers.sh
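A rough sketch of how a test might exercise the new "noup" behavior, using helper names from ceph-helpers.sh; the exact flow in the test may differ:

    ceph osd set noup
    run_osd $dir 0 || return 1      # with this change, returns without waiting for the OSD to be up
    ceph osd unset noup
    wait_for_osd up 0 || return 1   # only now do we expect the OSD to be marked up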
Fixes: https://tracker.ceph.com/issues/39162
Signed-off-by: David Zafman <dzafman@redhat.com>
Change run_osd() to default to the bluestore objectstore
Use run_osd_filestore() to run with the non-default objectstore
Fix inject_eio to handle any objectstore when the config option is prefixed with the objectstore type (see the sketch after the list below)
Remaining tests using filestore:
  osd-pool-create.sh TEST_pool_create_rep_expected_num_objects
    Tests filestore directory creation
  qa/standalone/osd/osd-dup.sh TEST_filestore_to_bluestore
    Obviously requires filestore
  qa/standalone/osd/osd-rep-recov-eio.sh TEST_rep_read_unfound
    Requires data digest in object info
  qa/standalone/scrub/osd-scrub-repair.sh multiple tests
    Tests erasure-code pool append mode for filestore
  qa/standalone/special/ceph_objectstore_tool.py
    Test code verifies COT by directly examining filestore contents
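A sketch of the idea behind the inject_eio change; this is not the helper's exact code, and the option-name pattern <objectstore>_debug_inject_read_err is an assumption:

    # Pick the debug option matching the OSD's objectstore instead of hard-coding filestore.
    objectstore=$(ceph osd metadata $osd | jq -r '.osd_objectstore')
    ceph tell osd.$osd injectargs -- --${objectstore}_debug_inject_read_err=true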
Fixes: https://tracker.ceph.com/issues/39162
Signed-off-by: David Zafman <dzafman@redhat.com>
Leave repair pg state on until recovery finishes or a new scrub starts
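A minimal sketch of how a test could observe this; $pgid is assumed to be set, and the jq path is the usual pg query state field:

    ceph pg repair $pgid
    ceph pg $pgid query | jq -r '.state'   # expect "repair" to remain in the state string until recovery finishes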
Fixes: http://tracker.ceph.com/issues/38616
Signed-off-by: David Zafman <dzafman@redhat.com>
Allow auto_repair for replicated bluestore pools
A regular scrub within the auto-repair parameters will trigger a deep scrub
New state failed_repair if a PG repair attempt could not fix everything
Set failed_repair if it was not possible to repair anything
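A sketch of the sort of configuration a test might use to exercise auto repair; the option names exist in Ceph, but the values and injectargs invocation are illustrative:

    ceph tell 'osd.*' injectargs -- --osd_scrub_auto_repair=true --osd_scrub_auto_repair_num_errors=2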
Fixes: http://tracker.ceph.com/issues/38616
Signed-off-by: David Zafman <dzafman@redhat.com>
Fix for argument handling of create_ec_pool()
Always pass a value for allow_overwrites for consistency
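A simplified sketch (not the helper's exact code) of how a create_ec_pool() like the one in ceph-helpers.sh might handle the allow_overwrites argument; profile settings and pg counts are illustrative:

    create_ec_pool() {
        local pool=$1
        local allow_overwrites=$2
        ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=osd || return 1
        ceph osd pool create "$pool" 8 8 erasure myprofile || return 1
        if [ "$allow_overwrites" = "true" ]; then
            ceph osd pool set "$pool" allow_ec_overwrites true || return 1
        fi
    }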
Caused by: 3ca750d41d
Signed-off-by: David Zafman <dzafman@redhat.com>
It's now "1/2 unfound":

  1/2 objects unfound (50.000%)

...presumably due to the rbd pool init creating the rbd_directory object.
Signed-off-by: Sage Weil <sage@redhat.com>
Add trigger_deep_scrub osd command for testing
Publish stats when trigger_scrub/trigger_deep_scrub is used for testing
Add an optional argument to trigger_scrub/trigger_deep_scrub that specifies
the amount of extra time by which to change the last scrub stamps
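A hedged sketch of how a test might use this; the command names come from this change, but the exact target (pgid vs. primary OSD) and argument order are assumptions:

    # Ask for a deep scrub of the PG, optionally shifting the last scrub stamps
    # back by an extra number of seconds so the scrub appears overdue.
    ceph tell $pgid trigger_deep_scrub $extra_seconds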
Signed-off-by: David Zafman <dzafman@redhat.com>
Scrub testing requires orderly control of scrubbing. Most, but not all, of
the time a duplicate scrub request is ignored because the first request
hasn't finished. Teuthology enables this environment variable in its
workunit handling.
Fixes: https://tracker.ceph.com/issues/36525
Signed-off-by: David Zafman <dzafman@redhat.com>
Use --rmtype snapmap with a new obj16 to remove only the snapmap, and check for the repair message
Use --rmtype nosnapmap to remove obj5 while leaving the snapmap behind
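A sketch of the corresponding ceph-objectstore-tool invocations; the --rmtype values come from this change, while the data paths and exact argument layout are illustrative:

    ceph-objectstore-tool --data-path $dir/$osd --pgid $pgid obj16 remove --rmtype snapmap
    ceph-objectstore-tool --data-path $dir/$osd --pgid $pgid obj5 remove --rmtype nosnapmap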
Signed-off-by: David Zafman <dzafman@redhat.com>
Also:
- Do not print **offset** until it is specified
- Count missing objects correctly (previously this used the primary's local missing count)
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
If there is a stray clone (one that does not appear in the SnapSet) and we
do any sort of recovery on it, the OSD will crash. Instead, log an error and
continue.
This addresses a problem where a cluster has both (1) an unexpected clone and
(2) that clone missing from some replicas. Repairing that PG will not fix the
unexpected clone and will also cause the remaining OSDs to crash while trying
to recover it.
Include a test.
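One way a test might manufacture this situation, sketched with ceph-objectstore-tool; the object name, variables, and overall flow are illustrative assumptions:

    # Drop the clone from the head object's SnapSet on every OSD so it becomes an
    # "unexpected" clone, then remove the clone object itself on only one replica.
    ceph-objectstore-tool --data-path $dir/$osd --pgid $pgid obj1 remove-clone-metadata $cloneid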
Fixes: https://tracker.ceph.com/issues/24396
Signed-off-by: Sage Weil <sage@redhat.com>
This reverts commit 886606bfd7.
Signed-off-by: David Zafman <dzafman@redhat.com>
Conflicts:
qa/standalone/scrub/osd-scrub-repair.sh (manually made equivalent changes)
Check list-inconsistent-obj output
Check how many _scan_snap groupings there are
Use a more general check for crashed OSD(s)
Signed-off-by: David Zafman <dzafman@redhat.com>