ceph/qa/suites/rados/singleton-nomsgr/all/export-after-evict.yaml
Patrick Donnelly d6c66f3fa6
qa,pybind/mgr: allow disabling .mgr pool
This is mostly for testing: a lot of tests assume that there are no
existing pools. These tests relied on a config option to turn off creation
of the "device_health_metrics" pool, which generally exists on any new
Ceph cluster. It would be better to make these tests tolerant of the new
.mgr pool, but clearly there are a lot of them, so for now just convert
the config option so the tests keep working (see the pre-mgr command below).

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2021-06-11 19:35:17 -07:00
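
For reference, a minimal sketch of the option in use: the fragment below
applies it through pre-mgr-commands, i.e. before the mgr daemon starts, so
the .mgr (formerly "device_health_metrics") pool is never created.

    sudo ceph config set mgr mgr_pool false --force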

openstack:
  - volumes: # attached to each instance
      count: 3
      size: 10 # GB
roles:
- - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - osd.2
  - client.0
tasks:
- install:
- ceph:
    pre-mgr-commands:
      - sudo ceph config set mgr mgr_pool false --force
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NO_HIT_SET\)
    conf:
      global:
        osd max object name len: 460
        osd max object namespace len: 64
- exec:
    client.0:
      - ceph osd pool create base-pool 4
      - ceph osd pool application enable base-pool rados
      - ceph osd pool create cache-pool 4
      - ceph osd tier add base-pool cache-pool
      - ceph osd tier cache-mode cache-pool writeback
      - ceph osd tier set-overlay base-pool cache-pool
      - dd if=/dev/urandom of=$TESTDIR/foo bs=1M count=1
      - rbd import --image-format 2 $TESTDIR/foo base-pool/bar
      - rbd snap create base-pool/bar@snap
      - rados -p base-pool cache-flush-evict-all
      - rbd export base-pool/bar $TESTDIR/bar
      - rbd export base-pool/bar@snap $TESTDIR/snap
      - cmp $TESTDIR/foo $TESTDIR/bar
      - cmp $TESTDIR/foo $TESTDIR/snap
      - rm $TESTDIR/foo $TESTDIR/bar $TESTDIR/snap