Many of the files in qa/qa_scripts/openstack had incorrect shebang
lines: the bang was missing, so the first line was treated as an
ordinary comment. This means that those scripts would execute using
the calling user's login shell, which is doubtless not what the
author intended. Now they'll always use bash.
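For illustration, the difference is a single character (hypothetical script header):

#/bin/bash     <- before: an ordinary comment, no interpreter selected
#!/bin/bash    <- after: the kernel always hands the script to bash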
Two scripts do not need shebangs, because they contain only library
functions and don't execute anything. I removed their shebangs.
Signed-off-by: Alan Somers <asomers@gmail.com>
Shorten the pathname of the unix domain socket created for the admin socket,
so it does not exceed the 107-byte limit of sun_path on GNU/Linux:
* ceph-helper.sh: the temp directory is named ${TMPDIR:-/tmp}/ceph-asok.$$
* vstart.sh: the temp directory is named `mktemp -u -d "${TMPDIR:-/tmp}/ceph-asok.XXXXXX"`; see the sketch below
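A minimal sketch of the vstart.sh variant, assuming the resulting directory is
what the per-daemon socket path is built from:

asok_dir=$(mktemp -u -d "${TMPDIR:-/tmp}/ceph-asok.XXXXXX")  # print a short unique name
mkdir -p "$asok_dir"                                         # then create it
# "$asok_dir/$name.asok" now stays well under the 107-byte limit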
Fixes: http://tracker.ceph.com/issues/16895
Signed-off-by: Kefu Chai <kchai@redhat.com>
We have a few open tickets regarding the mgr being down during suites
involving messenger failure injection. There are suspicions that this
may be related to the monclient, but we'll need more logs to validate
those suspicions and, beyond that, to validate that we're actually
fixing the issue.
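Assuming the extra verbosity is the usual monclient/messenger knobs, this is
the kind of setting being raised (the daemon name mgr.x is hypothetical):

ceph daemon mgr.x config set debug_monc 20
ceph daemon mgr.x config set debug_ms 1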
Signed-off-by: Joao Eduardo Luis <joao@suse.de>
The CRUSH rule creation is busted (rules and buckets out of order), but
after I fix that it doesn't seem to run right anyway. Remove it.
We get the mon thrasher coverage from rados/monthrash already; I don't
think this is adding meaningful coverage for the amount of effort it takes
to maintain.
Signed-off-by: Sage Weil <sage@redhat.com>
cephtool/test.sh: Only delete a test pool when no longer needed.
Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
Reviewed-by: xie xingguo <xie.xingguo@zte.com.cn>
The pool_getset pool is deleted before all tests on it are complete:
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1990: test_mon_osd_pool_set: ceph osd pool delete pool_getset pool_getset --yes-i-really-really-mean-it
4: pool 'pool_getset' removed
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: ceph osd pool get rbd crush_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: grep 'crush_rule: '
4: crush_rule: replicated_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1994: test_mon_osd_pool_set: ceph -f json osd pool get pool_getset compression_mode
4: Error ENOENT: unrecognized pool 'pool_getset'
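A minimal sketch of the reordering, with the delete moved after the last test
that touches the pool (exact test order assumed):

ceph -f json osd pool get pool_getset compression_mode                       # last user of the pool
ceph osd pool delete pool_getset pool_getset --yes-i-really-really-mean-it   # now safe to drop it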
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
This randomly issues pg force-recovery/force-backfill and
pg cancel-force-recovery/cancel-force-backfill during QA
testing. Disabled for upgrades from hammer, jewel and kraken.
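The injected commands boil down to this shape (the pgid 2.3 is hypothetical):

ceph pg force-recovery 2.3
ceph pg cancel-force-recovery 2.3
ceph pg force-backfill 2.3
ceph pg cancel-force-backfill 2.3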
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
The output of ceph osd stat has changed.
It used to print:
cluster b370a29d-9287-4ca3-ab57-3d824f65e339
health HEALTH_OK
monmap e1: 1 mons at {ceph1=10.0.0.8:6789/0}, election epoch 2, quorum 0 ceph1
osdmap e63: 2 osds: 2 up, 2 in
pgmap v41338: 952 pgs, 20 pools, 17130 MB data, 2199 objects
115 GB used, 167 GB / 297 GB avail
952 active+clean
but now the osdmap line is gone, so this no longer works:
qa/workunits/cephtool/test.sh:1944:
old_pgs=$(ceph osd pool get $TEST_POOL_GETSET pg_num | sed -e 's/pg_num: //')
new_pgs=$(($old_pgs+$(ceph osd stat | grep osdmap | awk '{print $3}')*32))
4: qa/workunits/cephtool/test.sh: line 1945: 10+*32: syntax error in expression (error token is "*32")
Since the grep now matches nothing, the substitution is empty and the
arithmetic collapses to the invalid expression 10+*32.
- And parse the output as JSON, with jq, for better reliability (see the sketch below)
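A sketch of the jq-based replacement, assuming the JSON output exposes a
num_osds field:

old_pgs=$(ceph osd pool get $TEST_POOL_GETSET pg_num | sed -e 's/pg_num: //')
num_osds=$(ceph -f json osd stat | jq '.num_osds')   # no fragile grep/awk on plain text
new_pgs=$((old_pgs + num_osds * 32))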
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
- factor out install and ceph into ceph/ceph.yaml
- pg_num thrashing + 20 minute health timeout for thrashosds
- common thrashosds-health.yaml whitelist
- drop iozone workload
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>