ceph/qa/suites/rados/singleton/all/recovery-preemption.yaml
Sage Weil 695d0be225 qa/suites/rados/singleton/all/recovery-preemption: fix pg log length
This test was broken by the variable PG log lengths introduced in
9c69c2f7cc585b5e13e4d1b0432016d38135a3de.

Disable the new option to get (roughly) the old behavior, or at least the
short logs that we want to trigger some backfill.

Fixes: https://tracker.ceph.com/issues/43810
Signed-off-by: Sage Weil <sage@redhat.com>
2020-01-27 07:42:50 -06:00
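
The fix is the osd_target_pg_log_entries_per_osd override in the conf
section of the file below: setting it to 0 disables the variable per-OSD
log length target, so the fixed min/max bounds apply again. The relevant
fragment:

    conf:
      osd:
        osd min pg log entries: 10
        osd max pg log entries: 1000
        osd_target_pg_log_entries_per_osd: 0  # 0 disables the variable target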

roles:
- - mon.a
  - mon.b
  - mon.c
  - mgr.x
  - osd.0
  - osd.1
  - osd.2
  - osd.3
openstack:
  - volumes: # attached to each instance
      count: 3
      size: 20 # GB
tasks:
- install:
- ceph:
    conf:
      osd:
        # throttle recovery so there is time for it to be preempted
        osd recovery sleep: .1
        # keep PG logs short so the writes below push PGs into backfill
        osd min pg log entries: 10
        osd max pg log entries: 1000
        # 0 disables the variable per-OSD log length target (see commit message)
        osd_target_pg_log_entries_per_osd: 0
        osd pg log trim min: 10
    log-whitelist:
      - \(POOL_APP_NOT_ENABLED\)
      - \(OSDMAP_FLAGS\)
      - \(OSD_
      - \(OBJECT_
      - \(PG_
      - \(SLOW_OPS\)
      - overall HEALTH
- exec:
    osd.0:
      - ceph osd pool create foo 128
      - ceph osd pool application enable foo foo
      - sleep 5
- ceph.healthy:
- exec:
    osd.0:
      # write objects, then mark osd.0 out to start recovery onto the others
      - rados -p foo bench 30 write -b 4096 --no-cleanup
      - ceph osd out 0
      - sleep 5
      - ceph osd set noup
- ceph.restart:
    daemons: [osd.1]
    wait-for-up: false
    wait-for-healthy: false
- exec:
    osd.0:
      # write while osd.1 is held down so its PG logs fall behind, then let it rejoin
      - rados -p foo bench 3 write -b 4096 --no-cleanup
      - ceph osd unset noup
      - sleep 10
      # remove the recovery throttle and allow more concurrent recovery ops
      - for f in 0 1 2 3 ; do sudo ceph daemon osd.$f config set osd_recovery_sleep 0 ; sudo ceph daemon osd.$f config set osd_recovery_max_active 20 ; done
- ceph.healthy:
- exec:
    osd.0:
      # the test passes only if some recovery or backfill was preempted (deferred)
      - egrep '(defer backfill|defer recovery)' /var/log/ceph/ceph-osd.*.log
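
When debugging a run of this test, one way to confirm the log length
overrides actually took effect is to query a running OSD over its admin
socket. A minimal sketch, not part of the suite, assuming the admin
socket is reachable via ceph daemon:

    - exec:
        osd.0:
          # hypothetical sanity check, not in the original test
          - sudo ceph daemon osd.0 config get osd_target_pg_log_entries_per_osd
          - sudo ceph daemon osd.0 config get osd_min_pg_log_entries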