ceph osd pool create test 100
Error ERANGE: pg_num 100 size 3 would mean 648 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
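For illustration, the check can be satisfied either by asking for fewer PGs, e.g.
ceph osd pool create test 64
or by raising the limit on the monitors (the value is only an example), e.g. in ceph.conf:
[mon]
    mon_max_pg_per_osd = 300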
Signed-off-by: Mykola Golub <to.my.trociny@gmail.com>
We changed ruleset -> crush back in dc7a2aaf7a.
If someone tries to use the old property, error out early, instead of
silently not doing the thing they thought they told us to do.
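For example (pool and rule names are only illustrative), the old spelling now fails fast while the new one keeps working:
ceph osd pool set rbd crush_ruleset 1            # rejected with an error
ceph osd pool set rbd crush_rule replicated_rule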
Signed-off-by: Sage Weil <sage@redhat.com>
We now set full flag if a pool is currently running out of space and
set both full and full_no_quota flags if it is running out of quota.
Therefore the full_no_quota flag should instead be used to uniquely
identify whether we are running out of quota.
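A rough sketch of how to observe this (pool name and quota are only illustrative):
ceph osd pool set-quota test max_bytes 1024
# ... write past the quota, then:
ceph osd dump | grep test
# the pool's flags should now list both full and full_no_quota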
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
https://github.com/ceph/ceph/pull/17371 introduces support for a
per-pool space-full flag, which now sets both the
full and full_no_quota flags if a pool is currently running out
of quota.
Actually this test is fragile as long as we keep appending new flags
at pool granularity, but let's not bother with that complexity now.
Fixes: http://tracker.ceph.com/issues/21409
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The rbd pool should exist for many rbd tests to work properly; create
the pool right after the install is successful.
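The manual equivalent would be something along the lines of (the pg count is only illustrative):
ceph osd pool create rbd 8
ceph osd pool application enable rbd rbd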
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
So we can combine the "crush add-bucket" and "crush move" commands,
and hence avoid making two separate changes to the osdmap,
which would otherwise slow down map-epoch generation.
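Roughly, instead of two separate map changes:
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
the bucket can now be created directly in its intended location (assuming the new optional location arguments), e.g.:
ceph osd crush add-bucket rack1 rack root=default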
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This command returns all crush rules that are currently
referencing the device class specified by the user.
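For example (assuming the command is spelled as below and "ssd" is an existing class):
ceph osd crush rule ls-by-class ssd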
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
It would be a pain if we had to call 'ceph osd dump --format=json-pretty'
to find these out each time...
Demo output:
(1) ceph osd pool application get
{
    "cephfs_data_b": {
        "cephfs": {}
    },
    "cephfs_metadata_a": {
        "cephfs": {}
    },
    "test_pool": {
        "rbd": {
            "test": "me"
        }
    }
}
(2) ceph osd pool application get test_pool
{
    "rbd": {
        "test": "me"
    }
}
(3) ceph osd pool application get test_pool rbd
{
    "test": "me"
}
(4) ceph osd pool application get test_pool rbd test
me
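For reference, the sample metadata above can be produced with something like:
ceph osd pool application enable test_pool rbd
ceph osd pool application set test_pool rbd test me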
Fixes: http://tracker.ceph.com/issues/20976
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
A user may specify a rule with the same name as the pool it serves.
Since a pool can be renamed, the rule needs to be renamable too.
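A rough usage sketch (names are only illustrative, assuming the rename command introduced here):
ceph osd pool rename mypool mypool_new
ceph osd crush rule rename mypool mypool_new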
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The previous method to get the watcher admin socket was fragile
and had started to fail after the recent changes to vstart ceph.conf.
Fixes: http://tracker.ceph.com/issues/20954
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
rbd-ggate spawns a process responsible for creating the ggate
device and forwarding I/O requests between the GEOM Gate kernel
subsystem and RADOS.
On FreeBSD it provides functionality similar to rbd-nbd on Linux.
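Typical usage is analogous to rbd-nbd (image and device names are only illustrative):
rbd-ggate map mypool/myimage
rbd-ggate list
rbd-ggate unmap /dev/ggate0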
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
This will prevent the OSDMonitor from crashing when purging a very large
non-existent osd id, as below:
osd e11 prepare_command_osd_purge purging osd.8
-1> 2017-08-05 18:59:44.994319 7f6076968700 10 mon.a@0(leader).osd e11 prepare_command_osd_destroy osd.8 does not exist.
0> 2017-08-05 18:59:45.002309 7f6076968700 -1 /home/xxg/build/ceph-dev/src/osd/OSDMap.h: In function 'int OSDMap::get_state(int) const'
thread 7f6076968700 time 2017-08-05 18:59:44.994336
/home/xxg/build/ceph-dev/src/osd/OSDMap.h: 690: FAILED assert(o < max_osd)
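The crash could be reproduced with something like (the osd id is only illustrative):
ceph osd purge osd.123456 --yes-i-really-mean-it
which should now return an error instead of hitting the assert.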
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This introduces a new "rbd/singleton-bluestore" suite because creating an rbd
on an EC-backed datapool will fail on filestore.
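For context, the failing scenario is roughly (pool names and sizes are only illustrative):
ceph osd pool create ecpool 8 8 erasure
ceph osd pool set ecpool allow_ec_overwrites true   # requires bluestore OSDs
rbd create --size 1G --data-pool ecpool rbd/test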
References: http://tracker.ceph.com/issues/20295
Signed-off-by: Nathan Cutler <ncutler@suse.com>
We cannot overwrite an existing device class, and "osd_class_update_on_start"
is true by default (see 0c885d6), so we should remove all device classes before
setting them.
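A minimal sketch of the reset-then-set sequence (the osd id and class are only illustrative):
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0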
Signed-off-by: Kefu Chai <kchai@redhat.com>
/bin/bash is a Linuxism. Other operating systems install bash to
different paths. Use /usr/bin/env in shebangs to find bash.
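For example:
#!/bin/bash           <- Linux-specific path
#!/usr/bin/env bash   <- portable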
Signed-off-by: Alan Somers <asomers@gmail.com>