ceph/qa/workunits
commit b436930779 by Sage Weil, 2014-03-06 13:46:10 -08:00

qa/workunits/rest/test.py: do not test 'osd thrash'

This wreaks havoc on our QA: it marks OSDs up and down, and immediately
afterward we try to scrub while some OSDs are still down.

Adjust the CLI test to wait for all OSDs to come back up after thrashing.

Signed-off-by: Sage Weil <sage@inktank.com>
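The fix described above, waiting for all OSDs to come back up after thrashing, could be sketched as a small shell helper. Note this is an illustrative sketch, not the actual workunit code: the function name `all_osds_up` and the exact `ceph osd stat` output format ("e42: 3 osds: 3 up, 3 in", the style used by Ceph releases of this era) are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: decide whether every OSD is up, given one line of
# `ceph osd stat` output such as "e42: 3 osds: 3 up, 3 in".
# The function name and the assumed output format are illustrative only.
all_osds_up() {
    stat_line="$1"
    # Extract the total OSD count (the number before "osds:").
    total=$(printf '%s\n' "$stat_line" | sed -n 's/.*[: ]\([0-9][0-9]*\) osds:.*/\1/p')
    # Extract the number of OSDs currently up (the number before "up").
    up=$(printf '%s\n' "$stat_line" | sed -n 's/.* \([0-9][0-9]*\) up.*/\1/p')
    # Succeed only when both counts were parsed and they match.
    [ -n "$total" ] && [ "$total" = "$up" ]
}

# A test script would then poll until the cluster settles, e.g.:
#   while ! all_osds_up "$(ceph osd stat 2>/dev/null)"; do sleep 1; done
```

Polling a parsed status line like this avoids racing the scrub against OSDs that the thrash left temporarily down.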
Name                   Last commit message                                                  Last commit date
caps
cephtool               qa/workunits/rest/test.py: do not test 'osd thrash'                  2014-03-06 13:46:10 -08:00
cls                    cls/hello: hello, world rados class                                  2013-08-15 17:21:29 -07:00
direct_io
erasure-code           osd: erasure code benchmark workunit                                 2013-12-20 12:15:44 +01:00
filestore              Rename test/filestore to test/objectstore                            2014-02-08 15:41:52 +08:00
fs                     qa/workunits/fs/multiclient_sync_read_eof.py                         2013-08-13 21:28:35 -07:00
hadoop-internal-tests
hadoop-wordcount
kclient
libcephfs
libcephfs-java
misc                   Merge pull request #691 from ceph/wip-dirfrag                        2013-10-17 08:35:38 -07:00
mon                    mon/OSDMonitor: disallow crush buckets of type 0                     2014-03-05 13:15:58 -08:00
osdc
rados                  qa: add librados c object operations tests to librados test script   2014-02-18 12:34:33 -08:00
rbd                    qa: fix rbd cli tests checking size                                  2013-10-03 15:16:59 -07:00
rename
rest                   qa/workunits/rest/test.py: do not test 'osd thrash'                  2014-03-06 13:46:10 -08:00
restart
rgw                    script to test rgw multi part uploads using s3 interface             2014-02-07 22:27:05 -08:00
snaps                  qa/workunits/snaps: New allow_new_snaps syntax                       2014-02-05 21:00:12 +00:00
suites                 qa/workunits/suites/pjd: use test suite with acl tweak               2014-02-16 22:25:49 -08:00
false.sh
kernel_untar_build.sh
Makefile