100MB will be allocated for the journal, and the remaining 100MB for the
data device. Taking the inodes into consideration, there will be
approximately 87988 kB available for the activated OSD, and it will
complain with a "nearfull" state.
Fixes: http://tracker.ceph.com/issues/22136
Signed-off-by: Kefu Chai <kchai@redhat.com>
Normally, if we care about the output of ceph-disk, we expect a JSON
string. ceph-disk sends its output to stdout and its errors/warnings to
stderr, so everything works as expected, and the tests should follow
this convention as well: for example, if deprecation warnings are
printed, the warning messages should not be collected along with the
JSON string.
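To illustrate the convention (a sketch only; the exact invocation varies
per test, and "warnings.log" is a made-up name):

  # keep the two streams separate: stdout carries the JSON payload,
  # stderr carries warnings, and only stdout gets parsed
  json="$(ceph-disk list --format json 2>warnings.log)"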
see also: d44334f3
Signed-off-by: Kefu Chai <kchai@redhat.com>
ceph-disk now prints a "deprecated" warning message when it starts, but
the tests parse its stdout and stderr for a JSON string, so we need to
silence the warnings for the tests.
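For instance, a shell-based test could silence them like this (a
sketch; the actual tests may use a different invocation):

  # route stderr (the deprecation warning) to /dev/null so that only
  # the JSON payload on stdout is collected
  json="$(ceph-disk list --format json 2>/dev/null)"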
Fixes: http://tracker.ceph.com/issues/22154
Signed-off-by: Kefu Chai <kchai@redhat.com>
"ceph osd create" is not idempotent, and is considered deprecated.
Fixes: http://tracker.ceph.com/issues/21993
Signed-off-by: Kefu Chai <kchai@redhat.com>
This test is incomplete and has been obsoleted by krbd_blkroset.t.
It's also not wired up, so it's not actually being run.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
For the new read-based bench tests, flushing prior to the start of the test
will result in the exclusive lock being acquired and the object map being
utilized.
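For context, such a read bench might look like this (illustrative only;
the image name, sizes, and feature list are made up):

  rbd create --size 128M \
      --image-feature layering,exclusive-lock,object-map img1
  rbd bench --io-type read --io-size 4K --io-total 32M img1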
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
  ceph osd pool create test 100
  Error ERANGE: pg_num 100 size 3 would mean 648 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
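The arithmetic behind the error (presumably counting PG replicas across
all pools): the new pool would add pg_num 100 * size 3 = 300 placements
on top of an existing 348, giving 648, which exceeds the limit of
mon_max_pg_per_osd 200 * num_in_osds 3 = 600.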
Signed-off-by: Mykola Golub <to.my.trociny@gmail.com>
When exporting or importing an image, we should export or import the
image-meta of that image at the same time.
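A sketch of the expected round trip (image names and the meta key are
made up):

  rbd create --size 16M src
  rbd image-meta set src foo bar
  rbd export src img.dat
  rbd import img.dat dst
  # with this change, "foo: bar" should show up on the destination too
  rbd image-meta list dst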
Signed-off-by: PCzhangPC <pengcheng.zhang@easystack.cn>
We changed ruleset -> crush back in dc7a2aaf7a.
If someone tries to use the old property, error out early, instead of
silently not doing the thing they thought they told us to do.
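Concretely (a sketch; dc7a2aaf7a renamed the pool property
crush_ruleset to crush_rule):

  # old spelling: now rejected with an error instead of being ignored
  ceph osd pool set rbd crush_ruleset 0
  # new spelling
  ceph osd pool set rbd crush_rule replicated_rule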
Signed-off-by: Sage Weil <sage@redhat.com>
We now set the full flag if a pool is currently running out of space,
and set both the full and full_no_quota flags if it is running out of
quota. Therefore the full_no_quota flag should instead be used to
uniquely identify whether we are running out of quota or not.
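In other words, the mapping is now:

  out of space -> full
  out of quota -> full + full_no_quota

so a consumer that needs to detect quota exhaustion specifically should
test full_no_quota rather than full.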
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
https://github.com/ceph/ceph/pull/17371 introduces support for a
per-pool space-full flag, which turns out to now set both the full and
full_no_quota flags if a pool is currently running out of quota.
This test will stay fragile as long as we keep appending new flags at
pool granularity, but let's not bother with that complexity now.
Fixes: http://tracker.ceph.com/issues/21409
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The rbd pool must exist for many rbd tests to work properly, so create
the pool right after the install succeeds.
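A sketch of the added step (the pg count is illustrative, and "rbd pool
init" only exists on Luminous and later):

  # run once the cluster is up, before any rbd tests
  ceph osd pool create rbd 128
  rbd pool init rbd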
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>