otherwise the bluestore tests will fail with errors like:
qa/workunits/cephtool/test.sh:1343: test_mon_osd_pool: ceph osd pool set ec_test allow_ec_overwrites true
Error EINVAL: pool must only be stored on bluestore for scrubbing to work: osd.1 uses filestore
qa/workunits/cephtool/test.sh:1343: test_mon_osd_pool: return 1
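For reference, the failing check can be triggered by hand with something like the following (pool name, PG counts, and the use of a default EC profile are illustrative; this is a sketch, not the exact test code):

    # create an erasure-coded pool, then try to enable overwrites on it;
    # this is expected to fail with EINVAL unless every OSD that can
    # store the pool runs bluestore
    ceph osd pool create ec_test 12 12 erasure
    ceph osd pool set ec_test allow_ec_overwrites true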
Signed-off-by: Kefu Chai <kchai@redhat.com>
cephtool.yaml is bluestore-only, yet it was in singleton/, which runs against a
generalized objectstore matrix.
Fixes: http://tracker.ceph.com/issues/19797
Signed-off-by: Nathan Cutler <ncutler@suse.com>
This change happened a while back, but it got rolled back
when the generic objectstore/ dir had its filestore
entry split out into xfs and btrfs in 208675af.
Signed-off-by: John Spray <john.spray@redhat.com>
The "recovery" sub suite was originally tests for
client/mds recovery in certain failure cases, it has
since grown to include lots of unit testing of
various features using CephFSTestCase.
The "basic" suite is now specifically just running workloads
now that I've moved out the smaller functional tests.
Signed-off-by: John Spray <john.spray@redhat.com>
Most of what's in basic/ is "workload" type testing
(i.e. a simple cluster configuration and then
running a script inside the client), which gets
permuted in various ways. Move the simpler
functional tests out to sit alongside the other tests like them.
Signed-off-by: John Spray <john.spray@redhat.com>
These are unit tests for specific CephFS functionality;
it is gratuitous to repeat them with different underlying
RADOS object stores.
We retain coverage of XFS vs. bluestore in the workload tests.
Signed-off-by: John Spray <john.spray@redhat.com>
Fix full testing in cephtool/test.sh when used by the rados suite
Replace sleep-based waiting with a new wait_for_health() bash function
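A rough sketch of the idea (not the exact implementation that landed):

    # poll 'ceph health' until it reports the expected status, rather
    # than sleeping for a fixed interval and hoping the cluster settled
    function wait_for_health() {
        local grepstr=$1
        local -i tries=30
        while ! ceph health | grep -q "$grepstr"; do
            if ((--tries == 0)); then
                echo "wait_for_health: timed out waiting for '$grepstr'" >&2
                return 1
            fi
            sleep 1
        done
    }

    # e.g. block until the cluster is healthy again:
    wait_for_health HEALTH_OK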
Reviewed-by: Loic Dachary <ldachary@redhat.com>
test: rbd master/slave notify test should test active features
Reviewed-by: Mykola Golub <mgolub@mirantis.com>
Reviewed-by: Nathan Cutler <ncutler@suse.com>
Keep the pool flag around so we can distinguish between a pool that
should maintain hashes for each chunk (where a missing hash is a bug)
and an overwrites pool, where we rely on bluestore checksums to detect
corruption.
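For example, whether a pool carries the overwrites flag can be checked from the CLI (the exact flag name printed in the listing is from memory and may vary by release):

    # list pools with their flags; an EC pool with overwrites enabled
    # should show an 'ec_overwrites' flag alongside e.g. 'hashpspool'
    ceph osd pool ls detail | grep overwrites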
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
'remap' is too non-specific a name. In particular, it
sounds like it is related to the 'remapped' PG state,
but in reality it is not.
'upmap' or 'pg-upmap' is more specific: it maps a pgid
to the 'up' set value (or item).
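With the rename, the commands look roughly like this (the pgid and OSD ids are illustrative; this is a sketch of the renamed interface, not a verbatim reference):

    # explicitly map a pg's up set, or remap individual items in it
    ceph osd pg-upmap 1.7 0 1 2       # set the up set for pg 1.7
    ceph osd pg-upmap-items 1.7 0 3   # replace osd.0 with osd.3 in pg 1.7's up set
    ceph osd rm-pg-upmap-items 1.7    # drop the item mapping again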
Signed-off-by: Sage Weil <sage@redhat.com>
Now that we send these health warnings to the cluster log, we must
whitelist them in the tests that exercise those
unhealthy states.
Fixes: http://tracker.ceph.com/issues/19551
Signed-off-by: John Spray <john.spray@redhat.com>
bluestore options don't work yet, as the tests use
ceph-deploy to set up the cluster and still assume
either xfs or btrfs
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
qa/suites: drop 'fs' facet, and add 'objectstore' facet where missing
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Jason Dillaman <dillaman@redhat.com>