ceph/qa/workunits
Sage Weil 015df934af mon/OSDMonitor: require force argument to split a cache pool
There are several perils when splitting a cache pool:

 - split invalidates pg stats, which disables the agent
 - a scrub must be manually triggered post-split to rebuild stats
 - the pool may fill the OSDs during that period.
 - or, the pool may end up beyond the 'full' mark, and once the scrub
   completes and the agent activates we may block IO for a long time
   while we catch up with flush/evict

Make it a bit harder for users to shoot themselves in the foot.
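
For illustration, a minimal sketch of the guarded behavior (the flag
name --yes-i-really-mean-it matches other force flags in the ceph CLI
but is an assumption here, and 'mycache' is a hypothetical pool name):

    # without the force flag, splitting a cache pool should be refused
    ceph osd pool set mycache pg_num 128

    # with the flag the split proceeds; a scrub must then be triggered
    # manually (e.g. per PG: ceph pg scrub <pgid>) to rebuild pg stats
    ceph osd pool set mycache pg_num 128 --yes-i-really-mean-it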

Fixes: #8043
Signed-off-by: Sage Weil <sage@inktank.com>
2014-04-15 13:57:21 -07:00
caps
cephtool mon/OSDMonitor: require force argument to split a cache pool 2014-04-15 13:57:21 -07:00
cls
direct_io
erasure-code
filestore Rename test/filestore to test/objectstore 2014-02-08 15:41:52 +08:00
fs qa/workunits/fs/misc/layout_vxattrs: ceph.file.layout is not listed 2014-03-29 14:23:21 -07:00
hadoop-internal-tests
hadoop-wordcount
kclient
libcephfs
libcephfs-java
mon qa: workunits: mon: auth_caps.sh: test 'auth' caps requirements 2014-04-07 18:30:56 +01:00
osdc
rados qa: test_alloc_hint: set ec ruleset-failure-domain to osd 2014-04-03 21:16:14 +04:00
rbd
rename
rest qa/workunits/rest/test.py: do not test 'osd thrash' 2014-03-06 13:46:10 -08:00
restart
rgw Make sure s3_utilities are found. 2014-03-25 16:30:03 -07:00
snaps qa/workunits/snaps: New allow_new_snaps syntax 2014-02-05 21:00:12 +00:00
suites qa/workunits/suites/pjd: use test suite with acl tweak 2014-02-16 22:25:49 -08:00
false.sh
kernel_untar_build.sh
Makefile