ceph/qa/workunits
Kefu Chai a2335091d6 qa/workunits/ceph-helpers: test wait_for_health_ok differently
0 OSDs is no longer an error under the new health checking implemented by
OSDMap::check_health(); this case was treated as an error before, see
OSDMonitor::get_health(). An osdmap without any OSDs is fine, I think, but
an osdmap with 3 OSDs that are all down and out is an error, and we do
report it as one. So let's update the test accordingly.

Signed-off-by: Kefu Chai <kchai@redhat.com>
2017-07-13 17:49:44 +08:00
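
For context, a rough shell sketch of the case the commit describes (not the actual test): it assumes the run_mon, run_osd, kill_daemons and wait_for_health_ok helpers provided by ceph-helpers.sh, and the TEST_health_ok_sketch name is a hypothetical placeholder following the TEST_* convention of tests built on those helpers.

    # Sketch only; the real wait_for_health_ok test in ceph-helpers.sh may differ.
    source $(dirname $0)/ceph-helpers.sh

    function TEST_health_ok_sketch() {
        local dir=$1
        run_mon $dir a || return 1

        # Three OSDs up and in: the cluster is expected to reach HEALTH_OK.
        for id in 0 1 2 ; do
            run_osd $dir $id || return 1
        done
        wait_for_health_ok || return 1

        # Stop all three OSDs and mark them down and out: the new
        # OSDMap::check_health() reports this as an error, so the cluster
        # must no longer be HEALTH_OK.
        for id in 0 1 2 ; do
            kill_daemons $dir TERM osd.$id || return 1
            ceph osd down $id
            ceph osd out $id
        done
        ! ceph health | grep -q HEALTH_OK || return 1
    }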
caps
ceph-disk ceph-disk: add --filestore argument, default to --bluestore 2017-06-06 19:45:24 +02:00
ceph-tests create rbd pool since its not created by default anymore 2017-07-07 09:23:43 -07:00
cephtool qa/workunits/cephtool/test.sh: adjust full tests to avoid races 2017-07-12 12:52:03 -04:00
cls objclass-sdk: create SDK for Ceph object classes 2017-04-27 13:05:53 -07:00
direct_io
erasure-code
fs
hadoop
libcephfs
libcephfs-java
mon crush: fix potential weight overflow 2017-07-10 11:23:36 +08:00
objectstore qa/workunits/objectstore/test_fuse.sh: enable experimental features 2017-02-17 11:23:41 +08:00
osdc
rados Merge pull request #15858 from liewegas/wip-mgr-servicemap 2017-07-10 15:03:07 +01:00
rbd erasure-code: ruleset-* -> crush-* 2017-07-06 15:01:03 -04:00
rename
rest test,qa/workunits: fix a zillion tests 2017-06-28 10:52:49 -04:00
restart
rgw qa: run-s3tests: use python2 for s3tests & set PATH correctly 2017-05-19 17:39:50 +02:00
suites qa: reset journal before cephfs_journal_tool_smoke.sh exits 2017-06-29 17:44:19 +08:00
ceph-helpers-root.sh
ceph-helpers.sh qa/workunits/ceph-helpers: test wait_for_health_ok differently 2017-07-13 17:49:44 +08:00
false.sh
kernel_untar_build.sh
Makefile
post-file.sh
true.sh