mon,osd: do not use crush_device_class file to initialize class for new osds
Reviewed-by: Alfredo Deza <adeza@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Andrew Schoen <aschoen@redhat.com>
If provided, set the OSD device_class at OSD creation time. This is
simpler than writing a file that the OSD has to read in and use to
set its initial device class, and also avoids a bit of sticky state
at the OSD that will make it keep trying to reset its device class on
startup if it ever gets cleared.
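For illustration, a minimal sketch of the creation-time path; the
"crush_device_class" key is what this change consumes from the json
input, while the surrounding invocation (uuidgen, the params file) is
just an assumed example:
$ echo '{"crush_device_class": "ssd"}' > params.json
$ ceph osd new $(uuidgen) -i params.json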
Note that we now ignore json input fields we don't understand, so remove
a test case.
Signed-off-by: Sage Weil <sage@redhat.com>
rbd: unified way to map images using different drivers
Reviewed-by: Jason Dillaman <dillaman@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
It is difficult to make it work reliably in different environments.
Fixes: http://tracker.ceph.com/issues/22803
Signed-off-by: Mykola Golub <mgolub@suse.com>
* tweak: create a cloned image when the source image is itself
a clone (or at least one of its snapshots is a clone).
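As an illustration, a clone of a clone can be produced with the usual
snapshot/protect/clone sequence (pool and image names here are
hypothetical):
$ rbd snap create rbd/child@snap
$ rbd snap protect rbd/child@snap
$ rbd clone rbd/child@snap rbd/grandchild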
Signed-off-by: songweibin <song.weibin@zte.com.cn>
Replaced the delay argument for the trash move
command with a string acceptable to /bin/date, e.g.:
$ rbd trash move --pool foo --image bar --expires-in "2 weeks"
Added an "rbd trash purge" command that deletes any expired
image from the trash. It also accepts an "--older-than" argument
to override the current expiration cutoff, which again takes any
string valid for /bin/date, e.g.:
$ rbd trash purge mypool --older-than "2017-08-20"
There is also a "--threshold" argument which tries to remove the
oldest trashed images (by deferment end time) until the pool usage
drops to the given fraction of its current value, e.g.:
$ rbd trash purge mypool --threshold 0.9
If mypool uses 1GB, it will try to remove trashed images until the
pool usage becomes equal to or lower than 900MB.
Signed-off-by: Theofilos Mouratidis <t.mour@cern.ch>
In test_mon_osd_misc(), there is a good chance that the cluster chooses
an unbalanced weight because of the data distribution at that moment.
But this setting could prevent CRUSH from choosing enough OSDs for
test_mon_cephdf_commands(), where 32 PGs are to be created, so it is
more likely that CRUSH fails to pick enough OSDs for all PGs. That is
why we have curr_object_copies_rate = 0.5.
So, in this change, pg=pgp=1 is specified for the new pool.
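For reference, that amounts to creating the pool with explicit pg and
pgp counts (the pool name here is arbitrary):
$ ceph osd pool create mypool 1 1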
Fixes: http://tracker.ceph.com/issues/22711
Signed-off-by: Kefu Chai <kchai@redhat.com>
mon/OSDMonitor.cc: set erasure-code-profile to "" when creating replicated pools.
Reviewed-by: Joao Eduardo Luis <joao@suse.de>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Defines asynchronous librados operations that satisfy all of the
"Requirements on asynchronous operations" imposed by the C++ Networking
TS [1] in section 13.2.7. These operations are implemented in terms of
boost::asio, but the interfaces themselves are free of boost types -
this makes the transition to std::net trivial when it's available.
These interfaces conform to the Extensible Asynchronous Model [2] that
originated in boost::asio. This model allows the last 'handler' argument
to either be a callback that gets the result, a coroutine yield_context
that will suspend until completion, or a 'use_future' tag to request the
result in a std::future (see the unit tests for examples of each). The
'Extensible' part also enables further integration with new frameworks.
For now, only async_read(), async_write(), and the read/write variants
of async_operate() are provided.
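To illustrate the model (using a plain boost::asio timer rather than
the new librados calls, whose exact signatures are best taken from
librados_asio.h), one initiating function accepts all three kinds of
completion token:

#include <chrono>
#include <future>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <boost/asio/use_future.hpp>

int main() {
  boost::asio::io_context ioc;
  boost::asio::steady_timer timer(ioc, std::chrono::milliseconds(1));

  // 1. plain callback: the handler receives the operation's result
  timer.async_wait([](boost::system::error_code ec) {
    std::cout << "callback: " << ec.message() << std::endl;
  });
  ioc.run();

  // 2. stackful coroutine: yield_context suspends until completion
  boost::asio::spawn(ioc, [&](boost::asio::yield_context yield) {
    timer.expires_after(std::chrono::milliseconds(1));
    timer.async_wait(yield);  // throws system_error on failure
    std::cout << "coroutine resumed" << std::endl;
  });
  ioc.restart();
  ioc.run();

  // 3. use_future: the initiating function returns a std::future
  timer.expires_after(std::chrono::milliseconds(1));
  std::future<void> f = timer.async_wait(boost::asio::use_future);
  ioc.restart();
  ioc.run();
  f.get();  // rethrows any error as an exception
  std::cout << "future satisfied" << std::endl;
}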
[1] Working Draft, C++ Extensions for Networking
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/n4711.pdf
[2] "Library Foundations for Asynchronous Operations"
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3896.pdf
Signed-off-by: Casey Bodley <cbodley@redhat.com>
When we create a pool specifying a rule, for example "ceph osd pool create foo replicated 10 rule_foo",
we set pool foo's erasure-code-profile to rule_foo.
If an erasure-code-profile named rule_foo also exists, "ceph osd erasure-code-profile rm rule_foo" will then fail with
"Error EBUSY: foo pool(s) are using the erasure code profile 'rule_foo'". This is wrong.
We should:
1. set erasure-code-profile to "" when creating replicated pools
2. when judging whether an erasure-code-profile is used by a pool, check not only the pool's erasure_code_profile property but also whether the pool is_erasure
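A hedged sketch of the corrected in-use check; the field names mirror
pg_pool_t, but this is an assumed illustration, not the actual patch:

#include <string>

struct pool_info {              // stand-in for the relevant pg_pool_t fields
  bool is_erasure;
  std::string erasure_code_profile;
};

// a pool pins a profile only if it is an erasure pool AND names it;
// a replicated pool (whose profile is now "") never blocks removal
bool profile_in_use(const pool_info& pool, const std::string& profile) {
  return pool.is_erasure && pool.erasure_code_profile == profile;
}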
Signed-off-by: zouaiguo <zou.aiguo@zte.com.cn>
100MB will be allocated for the journal, and the remaining 100MB is for
the data device. Taking the inodes into consideration, there will be
approximately 87988 kB available for the activated OSD (in round
numbers, filesystem metadata and inodes consume roughly 102400 - 87988
= 14412 kB, about 14% of the partition, before any data is stored), and
it will complain with a "nearfull" state.
Fixes: http://tracker.ceph.com/issues/22136
Signed-off-by: Kefu Chai <kchai@redhat.com>
Normally, if we care about the output of ceph-disk, we expect a JSON
string: ceph-disk sends its output to stdout and errors/warnings to
stderr, so everything works as expected. The test should follow this
convention as well; for example, if deprecation warnings are printed,
the warning message should not be collected along with the JSON string.
see also: d44334f3
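A hedged illustration of that convention (the exact subcommand and
flags are an assumed example, not the test's actual invocation):
$ json=$(ceph-disk list --format json 2>warnings.log)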
Signed-off-by: Kefu Chai <kchai@redhat.com>
ceph-disk now prints a "deprecated" warning message when it starts, but
the tests parse its stdout and stderr for a JSON string, so we need to
silence the warnings for the tests.
Fixes: http://tracker.ceph.com/issues/22154
Signed-off-by: Kefu Chai <kchai@redhat.com>