100MB will be allocated for the journal, and the remaining 100MB for the
data device. Taking the inodes into consideration, there will be
approximately 87988 kB available for the activated OSD, and it will
complain with a "nearfull" state.
Fixes: http://tracker.ceph.com/issues/22136
Signed-off-by: Kefu Chai <kchai@redhat.com>
Normally, if we care about the output of ceph-disk, we expect a JSON
string: ceph-disk sends its output to stdout, and errors/warnings to
stderr, so everything works as expected. The test should follow this
convention as well; for example, if deprecation warnings are printed,
the warning messages should not be collected along with the JSON string.
see also: d44334f3
Signed-off-by: Kefu Chai <kchai@redhat.com>
ceph-disk now prints a "deprecated" warning message when it starts, but
the tests parse its stdout and stderr for a JSON string, so we need to
silence the warnings for the tests.
Fixes: http://tracker.ceph.com/issues/22154
Signed-off-by: Kefu Chai <kchai@redhat.com>
"ceph osd create" is not idempotent, and is considered deprecated.
Fixes: http://tracker.ceph.com/issues/21993
Signed-off-by: Kefu Chai <kchai@redhat.com>
This test is incomplete and has been obsoleted by krbd_blkroset.t.
It's also not wired up, so it's not actually being run.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
For the new read-based bench tests, flushing prior to the start of the test
will result in the exclusive lock being acquired and the object map being
utilized.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
ceph osd pool create test 100
Error ERANGE: pg_num 100 size 3 would mean 648 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)
Signed-off-by: Mykola Golub <to.my.trociny@gmail.com>
When exporting/importing an image, we should export or import that image's image-meta at the same time.
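A sketch of the expected behavior, assuming the v2 export stream carries the image-meta (image names and key hypothetical):
rbd image-meta set src-image mykey myvalue
rbd export --export-format 2 src-image - | rbd import --export-format 2 - dst-image
rbd image-meta list dst-image   # should now show mykey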
Signed-off-by: PCzhangPC <pengcheng.zhang@easystack.cn>
We changed ruleset -> crush back in dc7a2aaf7a.
If someone tries to use the old property, error out early, instead of
silently not doing the thing they thought they told us to do.
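A hedged sketch of the kind of early failure intended (property name and exact error text may differ):
# old-style property now fails fast instead of being silently ignored
ceph osd pool set rbd crush_ruleset 0
# use the new property instead
ceph osd pool set rbd crush_rule replicated_rule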
Signed-off-by: Sage Weil <sage@redhat.com>
We now set the full flag if a pool is currently running out of space,
and set both the full and full_no_quota flags if it is running out of
quota. Therefore the full_no_quota flag should instead be used to
uniquely identify whether we are running out of quota or not.
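A minimal way to observe this (quota value and bench arguments illustrative):
ceph osd pool set-quota rbd max_objects 10
rados -p rbd bench 5 write   # write past the quota
ceph osd dump | grep full    # expect both full and full_no_quota on the pool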
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
https://github.com/ceph/ceph/pull/17371 introduces support for a
per-pool space-full flag, which turns out to now set both the
full and full_no_quota flags if a pool is currently running out
of quota.
Actually this test is fragile as long as we keep appending new flags
at pool granularity, but let's not bother with that complexity now.
Fixes: http://tracker.ceph.com/issues/21409
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The rbd pool must exist for many rbd tests to work properly; create
the pool right after the install succeeds.
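Roughly what the task now does after install (pg count illustrative; rbd pool init assumes a luminous-era rbd CLI):
ceph osd pool create rbd 64
rbd pool init rbd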
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
So we can combine the "crush add-bucket" and "crush move" commands,
and hence avoid making two separate changes to the osdmap,
which would otherwise slow down map-epoch generation.
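Illustration (bucket names hypothetical):
# before: two osdmap changes
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
# after: one combined change
ceph osd crush add-bucket rack1 rack root=default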
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This command returns all crush rules that are currently
referencing the device class specified by user.
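Example usage (assuming some rules reference the ssd class):
ceph osd crush rule ls-by-class ssd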
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
It would be a pain if we had to call 'ceph osd dump --format=json-pretty'
to find these out each time...
Demo output:
(1) ceph osd pool application get
{
    "cephfs_data_b": {
        "cephfs": {}
    },
    "cephfs_metadata_a": {
        "cephfs": {}
    },
    "test_pool": {
        "rbd": {
            "test": "me"
        }
    }
}
(2) ceph osd pool application get test_pool
{
    "rbd": {
        "test": "me"
    }
}
(3) ceph osd pool application get test_pool rbd
{
    "test": "me"
}
(4) ceph osd pool application get test_pool rbd test
me
Fixes: http://tracker.ceph.com/issues/20976
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
A user may specify a rule with the same name as the pool it serves.
Since a pool can be renamed, the rule should be renamable as well.
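Sketch of the intended workflow (names hypothetical):
ceph osd pool rename mypool mypool-new
ceph osd crush rule rename mypool mypool-new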
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The previous method to get the watcher admin socket was fragile
and had started to fail after the recent changes to vstart ceph.conf.
Fixes: http://tracker.ceph.com/issues/20954
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
rbd-ggate spawns a process responsible for creating the ggate
device and forwarding I/O requests between the GEOM Gate kernel
subsystem and RADOS.
On FreeBSD it provides functionality similar to rbd-nbd on Linux.
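Typical usage is expected to mirror rbd-nbd (names and device path illustrative):
rbd-ggate map mypool/myimage   # prints e.g. /dev/ggate0
rbd-ggate list
rbd-ggate unmap /dev/ggate0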
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
This will prevent OSDMonitor from crashing on purging a very large
non-existent osd id, as below:
osd e11 prepare_command_osd_purge purging osd.8
-1> 2017-08-05 18:59:44.994319 7f6076968700 10 mon.a@0(leader).osd e11 prepare_command_osd_destroy osd.8 does not exist.
0> 2017-08-05 18:59:45.002309 7f6076968700 -1 /home/xxg/build/ceph-dev/src/osd/OSDMap.h: In function 'int OSDMap::get_state(int) const'
thread 7f6076968700 time 2017-08-05 18:59:44.994336
/home/xxg/build/ceph-dev/src/osd/OSDMap.h: 690: FAILED assert(o < max_osd)
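With the fix, a bogus id should fail cleanly rather than assert (id hypothetical):
ceph osd purge 2147483647 --yes-i-really-mean-it
# expect an ENOENT-style error, not a monitor crash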
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This introduces a new "rbd/singleton-bluestore" suite because creating an rbd
on an EC-backed datapool will fail on filestore.
References: http://tracker.ceph.com/issues/20295
Signed-off-by: Nathan Cutler <ncutler@suse.com>
We cannot overwrite an existing device class, and "osd_class_update_on_start"
is true by default (see 0c885d6), so we should remove all device classes before
setting them.
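The sequence this implies (osd id and class illustrative):
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class ssd osd.0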
Signed-off-by: Kefu Chai <kchai@redhat.com>
/bin/bash is a Linuxism. Other operating systems install bash to
different paths. Use /usr/bin/env in shebangs to find bash.
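That is, shebangs become:
#!/usr/bin/env bash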
Signed-off-by: Alan Somers <asomers@gmail.com>
- stop running via make check
- add teuthology yamls to run them
- disable ceph_objectstore_tool.py for now (too slow for make check, and
we can't use vstart in teuthology via a package install)
- drop cephtool tests since those are already covered by other teuthology
tests
- leave a handful of (fast!) ceph-helpers tests for make check for minimal
integration tests.
Signed-off-by: Sage Weil <sage@redhat.com>
to shorten the pathname of the unix domain socket created for the admin
socket, so it does not exceed the 107-character limit on GNU/Linux:
* ceph-helpers.sh: the temp directory is named ${TMPDIR:-/tmp}/ceph-asok.$$
* vstart.sh: the temp directory is named `mktemp -u -d "${TMPDIR:-/tmp}/ceph-asok.XXXXXX"`
Fixes: http://tracker.ceph.com/issues/16895
Signed-off-by: Kefu Chai <kchai@redhat.com>
The CRUSH rule creation is busted (rules and buckets out of order), but
after I fix that it doesn't seem to run right anyway. Remove it.
We get the mon thrasher coverage from rados/monthrash already; I don't
think this is adding meaningful coverage for the amount of effort it takes
to maintain.
Signed-off-by: Sage Weil <sage@redhat.com>
The pool_getset pool is deleted before all tests on it are complete:
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1990: test_mon_osd_pool_set: ceph osd pool delete pool_getset pool_getset --yes-i-really-really-mean-it
4: pool 'pool_getset' removed
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: ceph osd pool get rbd crush_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: grep 'crush_rule: '
4: crush_rule: replicated_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1994: test_mon_osd_pool_set: ceph -f json osd pool get pool_getset compression_mode
4: Error ENOENT: unrecognized pool 'pool_getset'
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
The output of ceph osd stat has changed. It used to print:
cluster b370a29d-9287-4ca3-ab57-3d824f65e339
health HEALTH_OK
monmap e1: 1 mons at {ceph1=10.0.0.8:6789/0}, election epoch 2, quorum 0 ceph1
osdmap e63: 2 osds: 2 up, 2 in
pgmap v41338: 952 pgs, 20 pools, 17130 MB data, 2199 objects
115 GB used, 167 GB / 297 GB avail
952 active+clean
but now the osdmap line has gone and thus this no longer works:
qa/workunits/cephtool/test.sh:1944:
old_pgs=$(ceph osd pool get $TEST_POOL_GETSET pg_num | sed -e 's/pg_num: //')
new_pgs=$(($old_pgs+$(ceph osd stat | grep osdmap | awk '{print $3}')*32))
4: qa/workunits/cephtool/test.sh: line 1945: 10+*32: syntax error: operand expected (error token is "*32")
- So parse the output as JSON, with jq, for better reliability.
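A sketch of the jq-based replacement (assuming the json field is named num_osds):
old_pgs=$(ceph osd pool get $TEST_POOL_GETSET pg_num | sed -e 's/pg_num: //')
new_pgs=$(($old_pgs + $(ceph osd stat --format=json | jq '.num_osds') * 32))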
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
To avoid a possible deadlock; quoting the doc of Popen.wait():
> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data. Use communicate() to
> avoid that.
Also, print out the stdout and stderr using LOG.warn() if the command
fails.
Signed-off-by: Kefu Chai <kchai@redhat.com>
The former semantic of ceph-disk destroy is now implemented with the
--purge flag. Use that for the ceph-disk suite.
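That is (device path hypothetical):
# old: ceph-disk destroy /dev/vdb1
ceph-disk destroy --purge /dev/vdb1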
Signed-off-by: Loic Dachary <loic@dachary.org>
Add a set of new tests for the case when public_addr and public_bind_addr
are different for a mon. In order to test this properly I had to employ
port forwarding with socat. This helps simulate what would happen in an
environment like Kubernetes. socat is now a build dependency.
Also, moved jq_success to ceph-helpers.sh and refactored run_mon to enable
creating the mons without creating the rbd pool immediately.
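The forwarding looks roughly like this (ports illustrative):
# expose the mon's bind address on a different public port
socat TCP-LISTEN:6790,fork,reuseaddr TCP:127.0.0.1:6789 &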
Signed-off-by: Bassam Tabbara <bassam.tabbara@quantum.com>
It matches the settings in vstart.sh; also it would be handy for those
who are still developing on btrfs, which is now marked as an experimental
feature.
Signed-off-by: Kefu Chai <kchai@redhat.com>
Zero OSDs is not an error anymore in the new health checking implemented
by OSDMap::check_health(); this case was treated as an error before, see
OSDMonitor::get_health(). An osdmap without any OSDs is fine, I think,
but an osdmap with 3 OSDs that are all down and out is an error, and we
do report it as such. So let's update the test instead.
Signed-off-by: Kefu Chai <kchai@redhat.com>
1) ruleset is an obsolete term, and
2) crush-{rule,failure-domain,...} is more descriptive.
Note that we are changing the names of the erasure code profile keys
from ruleset-* to crush-*. We will update this on upgrade when the
luminous flag is set, but that means that during mon upgrade you cannot
create EC pools that use these fields.
When the upgrade completes (user sets require_osd_release = luminous),
existing EC profiles are updated automatically.
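For example (profile name illustrative):
# pre-luminous profile keys
ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=host
# luminous onward
ceph osd erasure-code-profile set myprofile k=2 m=1 crush-failure-domain=host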
Signed-off-by: Sage Weil <sage@redhat.com>
New command to create a crush rule that specifies a device class, plus
all of the fallout in other callers of the CrushWrapper helpers, the
crushtool CLI change, and a CLI test.
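Example invocation (rule name and class illustrative):
# replicated rule rooted at "default", host failure domain, ssd devices only
ceph osd crush rule create-replicated fast default host ssd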
Signed-off-by: Sage Weil <sage@redhat.com>
Kill old mgr_modules option.
Add new mgr_initial_modules option, on the mon, for the initial cluster
mgrmap.
Add ls, enable, disable commands.
Respawn mgr if the module list changes. In the future we could enable
new modules without a full restart, but disabling probably requires (and
is best handled by) a respawn.
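The new commands look like (module name illustrative):
ceph mgr module ls
ceph mgr module enable dashboard   # triggers a mgr respawn
ceph mgr module disable dashboard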
Signed-off-by: Sage Weil <sage@redhat.com>
A string-typed pool option requires the user to pass in an empty string to do
a valid cancellation, but the "osd pool set" command won't allow it. E.g.:
./bin/ceph osd pool set rbd compression_mode
Invalid command: missing required parameter val(<string>)
Since we already use the "unset" keyword to cancel the csum_type setting,
we can simply extend the above mechanism to compression_mode
and compression_algorithm too.
E.g.:
./bin/ceph osd pool set rbd compression_algorithm zlib
set pool 0 compression_algorithm to zlib
./bin/ceph osd pool set rbd compression_algorithm unset
unset pool 0 compression_algorithm
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
rbd-nbd: display pool/image/snap information in list output
Reviewed-by: Jason Dillaman <dillaman@redhat.com>
Reviewed-by: Mykola Golub <mgolub@mirantis.com>
In 55edd81, a test for `--export-format` was added to exercise this
option. But this option is only supported on luminous, so we need to
check that it's available before using it.
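A sketch of such a guard (probing the help output; image name and grep pattern illustrative):
if rbd help export | grep -q export-format; then
    rbd export --export-format 2 testimg /tmp/testimg.v2
fi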
Signed-off-by: Kefu Chai <kchai@redhat.com>
No need to clone the whole history of rocksdb; we just need the HEAD of
master, so "--depth 1" is better and faster in this case.
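That is:
git clone --depth 1 https://github.com/facebook/rocksdb.git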
Signed-off-by: Kefu Chai <kchai@redhat.com>
/home/jenkins-build/build/workspace/ceph-pull-requests-arm64/qa/workunits/cephtool/test.sh:1606: test_mon_crush: epoch='2017-06-20 04:44:52.862459 ffffad4d0200 -1 asok(0xffffa8000b10) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: The UNIX domain socket path /home/jenkins-build/build/workspace/ceph-pull-requests-arm64/build/src/test/td/t-7202/out/client.admin.48876.asok is too long! The maximum length on this system is 107
12'
/home/jenkins-build/build/workspace/ceph-pull-requests-arm64/qa/workunits/cephtool/test.sh:1607: test_mon_crush: '[' '2017-06-20 04:44:52.862459 ffffad4d0200 -1 asok(0xffffa8000b10) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: The UNIX domain socket path /home/jenkins-build/build/workspace/ceph-pull-requests-arm64/build/src/test/td/t-7202/out/client.admin.48876.asok is too long! The maximum length on this system is 107
12' -gt 1 ']'
/home/jenkins-build/build/workspace/ceph-pull-requests-arm64/qa/workunits/cephtool/test.sh: line 1607: [: 2017-06-20 04:44:52.862459 ffffad4d0200 -1 asok(0xffffa8000b10) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: The UNIX domain socket path /home/jenkins-build/build/workspace/ceph-pull-requests-arm64/build/src/test/td/t-7202/out/client.admin.48876.asok is too long! The maximum length on this system is 107
12: integer expression expected
Signed-off-by: Kefu Chai <kchai@redhat.com>
* remove tests for blank caps: this feature is not supported/implemented
by AuthMonitor.
* remove the cap for client.baz after we are done with it, so we don't
get an error like "entity client.baz exists but caps do not match" when
trying to re-set its caps.
Signed-off-by: Kefu Chai <kchai@redhat.com>
This reverts commit f0653c0401.
--force is not implemented by AuthMonitor, so revert this change to the
test.
Signed-off-by: Kefu Chai <kchai@redhat.com>
This is a follow-up change of https://github.com/ceph/ceph/pull/15381.
This patch also simplifies the original code logic a bit.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
If the prior_version doesn't match, reject the update.
Note that we also allow crush_version-1 iff the proposed map is
identical to the current map, in order to make the command idempotent.
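Sketch of the guarded update (how the current crush_version is obtained, and the value 7, are illustrative):
ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt    # edit, then recompile:
crushtool -c /tmp/crush.txt -o /tmp/crush.new
ceph osd setcrushmap -i /tmp/crush.new 7     # 7 = prior_version; rejected if stale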
Signed-off-by: Sage Weil <sage@redhat.com>