For EC pools we have a lot of shards, and a 30% probability on each one
means we are very likely to repeatedly fail backfill reservations, long
enough that teuthology gives up waiting.
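As a rough illustration of the arithmetic: if each of n shards independently
rejects its backfill reservation with probability p, the whole attempt fails
with probability 1 - (1 - p)^n. A minimal Python sketch (the shard counts are
illustrative, not from any particular EC profile):

    # Probability that at least one shard rejects its backfill reservation,
    # forcing the whole reservation attempt to be retried.
    p = 0.30
    for n in (3, 6, 9):
        print(n, round(1 - (1 - p) ** n, 3))   # 0.657, 0.882, 0.96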
Signed-off-by: Sage Weil <sage@redhat.com>
Having lots of deletes will mean deletes on objects that don't exist,
which will in turn mean error log entries and more coverage of the
append_log_entries_update_missing code. Hopefully this will trigger
http://tracker.ceph.com/issues/24597
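As a concrete illustration of the error path this exercises, here is a
minimal python-rados sketch (the conffile path and pool name are assumptions,
not part of this change): a delete of an object that was never written
returns ENOENT, which is what produces the error log entries mentioned above.

    import rados

    # Assumed conffile and pool name; adjust for the local cluster.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Deleting a nonexistent object fails with ENOENT; the weighted-up
        # deletes in the workload hit this path constantly.
        ioctx.remove_object('no-such-object')
    except rados.ObjectNotFound:
        print('delete of missing object -> ENOENT, as expected')
    finally:
        ioctx.close()
        cluster.shutdown()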
Signed-off-by: Sage Weil <sage@redhat.com>
This utilizes the recent feature in teuthology [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory
in a position-independent way, so that copies of a suite to another location
do not break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
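A minimal sketch of the idea, assuming the hidden file is a relative symlink
from each suite directory back to the top-level qa directory (the names and
layout below are illustrative only):

    import os, tempfile

    root = tempfile.mkdtemp()
    os.makedirs(os.path.join(root, "qa/suites/rados/thrash"))
    os.makedirs(os.path.join(root, "qa/distros/supported"))
    open(os.path.join(root, "qa/distros/supported/centos_latest.yaml"), "w").close()

    # Hidden link from the suite dir back up to the top-level qa directory;
    # teuthology now skips hidden files when building the job matrix.
    os.symlink("../../..", os.path.join(root, "qa/suites/rados/thrash/.qa"))

    # The facet symlink goes through the hidden link instead of a long chain
    # of ../.., so copying the suite elsewhere only needs that one link fixed.
    os.symlink(".qa/distros/supported/centos_latest.yaml",
               os.path.join(root, "qa/suites/rados/thrash/centos_latest.yaml"))

    print(os.path.realpath(os.path.join(root, "qa/suites/rados/thrash/centos_latest.yaml")))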
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
- snapdir conversion (at-end) stuff
- merge luminous-specific collections that avoided the above back
into their normal locations
Signed-off-by: Sage Weil <sage@redhat.com>
We test EC profiles with m=1 here, and mapgap can lead to incomplete pgs
because it takes an osd down and waits for the cluster to become healthy.
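For reference, the redundancy arithmetic as a hedged sketch (the k value is
illustrative; only m=1 comes from the suite):

    # An EC PG can recover as long as at most m shards are unavailable.
    k, m = 2, 1                  # data and coding chunks; k is illustrative
    shards = k + m               # shards per PG
    tolerated = m                # unavailable shards the PG can survive
    down_during_mapgap = 1       # mapgap takes one osd down at a time
    margin = tolerated - down_during_mapgap
    print("remaining failure margin:", margin)  # 0 -> further disruption risks incomplete pgs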
Fixes: http://tracker.ceph.com/issues/20844
Signed-off-by: Sage Weil <sage@redhat.com>
This lets us run multiple cleanup steps right before ceph
teardown.
Note that we drop the facet from multimon/ because that suite doesn't
properly factor cluster creation out before this step. That's fine,
because the require_luminous cleanup shouldn't be related to the
multimon tests.
Signed-off-by: Sage Weil <sage@redhat.com>
Keep the pool flag around so we can distinguish between a pool that
should maintain hashes for each chunk, where a missing hash is a bug,
and an overwrites pool where we rely on bluestore checksums for
detecting corruption.
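For context, the user-visible side of this is the allow_ec_overwrites pool
option; a minimal python-rados sketch for checking it on a pool (the conffile
path and pool name are assumptions):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Ask the mons whether the EC pool has overwrites enabled.
        cmd = json.dumps({"prefix": "osd pool get", "pool": "ecpool",
                          "var": "allow_ec_overwrites", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(ret, out.decode() if out else errs)
    finally:
        cluster.shutdown()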
Signed-off-by: Josh Durgin <jdurgin@redhat.com>