Selecting force peering on a single PG. In reality this probably induces
*2* interval changes.
Note that in the case of a single-OSD cluster we can't actually force a
repeer on a single PG, because the pg_temp code is pretty robust about
filtering out redundant or meaningless changes: we can't pg_temp our
way into a new interval if there are no other OSDs to switch to, and the
code also prevents an empty pg_temp.
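For example, on a cluster with more than one OSD, a new interval can be
forced by hand with the pg_temp developer command (pgid and osd id below
are placeholders):
  ceph osd pg-temp 1.0 1    # remap pg 1.0 to osd.1, forcing a new interval
On a single-OSD cluster there is no other OSD to name, so such a request
is filtered out as redundant.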
Signed-off-by: Sage Weil <sage@redhat.com>
What we actually want is a purge, not a destroy. Destroy leaves the OSD
ID in use and allows it to be recreated. What ceph-volume wants is to
purge all trace of the failed OSD setup.
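For example (osd id is a placeholder):
  ceph osd destroy 3 --yes-i-really-mean-it   # id stays allocated, osd can be recreated
  ceph osd purge 3 --yes-i-really-mean-it     # removes all trace of osd.3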
Signed-off-by: Sage Weil <sage@redhat.com>
ceph-volume may run into a problem and want to clean up, but we do not
want to give it blanket access to the 'osd destroy' command. Instead,
make an 'osd destroy-new' that can only destroy new OSDs (ones that are
in the process of being created but have never booted yet).
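This lets a failed ceph-volume run clean up after itself with something like
(id is a placeholder; exact flags may vary):
  ceph osd destroy-new 3 --yes-i-really-mean-it   # only works while osd.3 is still 'new'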
Signed-off-by: Sage Weil <sage@redhat.com>
"ceph fs set cephfs allow_multimds false" is deprecated, and multimds is
enabled by default, so "ceph fs set cephfs max_mds 4" won't fail with
the default settings.
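For example, out of the box:
  ceph fs set cephfs max_mds 4    # succeeds; no allow_multimds step is needed first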
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/16608/head:
qa: whitelist mds down wrn during cephfs testing
mds: add config to disable fragmentation
qa: add max_mds thrash test
qa: mds_thrash updates for new max_mds behavior
doc: update upgrade procedure and release notes
qa: add test for cluster resizing
qa: remove use of mds deactivate
cephfs: add new down/joinable fs flags
mds: evict all clients if last mds shutting down
cephfs: deprecate ceph mds deactivate
cephfs: kill allow_dirfrags
cephfs: Kill allow_multimds
cephfs: Change behavior of cluster_down flag
mon/FSCommands: Set extra MDS to standby
cephfs: Health check changes
mon/MDSMonitor: Remove command support for legacy syntax
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
With multi-mds now declared stable, allow_multimds defaults to 1.
Given the max_mds parameter, it is redundant. Remove it, leaving a
comment placeholder in the features bitmap.
ceph fs set <fs> allow_multimds is now deprecated and prints a warning
message.
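For example (the warning text is illustrative):
  ceph fs set cephfs allow_multimds true
  # warning: allow_multimds is deprecated and has no effect
  ceph fs set cephfs max_mds 2    # the supported way to size the mds cluster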
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Until now, bytes and objects were formatted using si_t, which used 1024 as
the factor to pretty-print large numbers. For object counts a factor of
1000 is preferred. This commit retires the si_t formatting (as well as
prettybyte_t, kb_t and pretty_si_t) completely and adds structs and
formatting for binary and decimal units, bin_u_t and dec_u_t respectively.
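As an illustration of the intended difference (output formatting is approximate):
  1536 bytes   -> 1.5KiB  (binary, factor 1024)
  1500 objects -> 1.5k    (decimal, factor 1000)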
Fixes: http://tracker.ceph.com/issues/22095
Signed-off-by: Jan Fajerski <jfajerski@suse.com>
mon,osd: do not use crush_device_class file to initialize class for new osds
Reviewed-by: Alfredo Deza <adeza@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Andrew Schoen <aschoen@redhat.com>
If provided, set the OSD device_class at OSD creation time. This is
simpler than writing a file that the OSD has to read in and use to
set its initial device class, and also avoids a bit of sticky state
at the OSD that will make it keep trying to reset its device class on
startup if it ever gets cleared.
Note that we now ignore json input fields we don't understand, so remove
a test case.
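For example, the class can now be supplied in the json passed to
'ceph osd new' (uuid is a placeholder):
  echo '{"crush_device_class": "ssd"}' > params.json
  ceph osd new 11111111-2222-3333-4444-555555555555 -i params.json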
Signed-off-by: Sage Weil <sage@redhat.com>
In test_mon_osd_misc(), there is a good chance that the cluster chooses
to use an unbalanced weight because of the data distribution at that moment.
But this setting could prevent CRUSH from choosing enough
OSDs for test_mon_cephdf_commands(), where 32 PGs are to be created, so
it becomes more likely that CRUSH fails to pick enough OSDs for all PGs;
that's why we have curr_object_copies_rate = 0.5.
So, in this change, pg=pgp=1 is specified for the new pool.
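For example (pool name is a placeholder):
  ceph osd pool create foo 1 1    # pg_num = pgp_num = 1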
Fixes: http://tracker.ceph.com/issues/22711
Signed-off-by: Kefu Chai <kchai@redhat.com>
mon/OSDMonitor.cc: set erasure-code-profile to "" when creating replicated pools.
Reviewed-by: Joao Eduardo Luis <joao@suse.de>
Reviewed-by: Kefu Chai <kchai@redhat.com>
When we create a pool specifying a rule, for example
"ceph osd pool create foo replicated 10 rule_foo",
we set pool foo's erasure-code-profile to rule_foo. If an erasure-code
profile named rule_foo also exists, "ceph osd erasure-code-profile rm rule_foo"
will then fail with
"Error EBUSY: foo pool(s) are using the erasure code profile 'rule_foo'". This is wrong.
We should:
1. set erasure-code-profile to "" when creating replicated pools
2. when judging whether an erasure-code profile is used by a pool, check not
only the pool's erasure_code_profile property but also whether the pool is_erasure
Signed-off-by: zouaiguo <zou.aiguo@zte.com.cn>
"ceph osd create" is not idempotent, and is considered deprecated.
Fixes: http://tracker.ceph.com/issues/21993
Signed-off-by: Kefu Chai <kchai@redhat.com>
We changed ruleset -> crush back in dc7a2aaf7a.
If someone tries to use the old property, error out early, instead of
silently not doing the thing they thought they told us to do.
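A plausible example, assuming the erasure-code-profile keys affected by that
rename (illustrative):
  ceph osd erasure-code-profile set myprofile ruleset-failure-domain=host
  # now errors out up front instead of being silently ignored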
Signed-off-by: Sage Weil <sage@redhat.com>
It would be a pain if we had to call 'ceph osd dump --format=json-pretty'
to find these out each time...
Demo output:
(1) ceph osd pool application get
{
    "cephfs_data_b": {
        "cephfs": {}
    },
    "cephfs_metadata_a": {
        "cephfs": {}
    },
    "test_pool": {
        "rbd": {
            "test": "me"
        }
    }
}
(2) ceph osd pool application get test_pool
{
    "rbd": {
        "test": "me"
    }
}
(3) ceph osd pool application get test_pool rbd
{
    "test": "me"
}
(4) ceph osd pool application get test_pool rbd test
me
Fixes: http://tracker.ceph.com/issues/20976
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This will prevent OSDMonitor from crashing when purging a very large
non-existent osd id, as below:
osd e11 prepare_command_osd_purge purging osd.8
-1> 2017-08-05 18:59:44.994319 7f6076968700 10 mon.a@0(leader).osd e11 prepare_command_osd_destroy osd.8 does not exist.
0> 2017-08-05 18:59:45.002309 7f6076968700 -1 /home/xxg/build/ceph-dev/src/osd/OSDMap.h: In function 'int OSDMap::get_state(int) const'
thread 7f6076968700 time 2017-08-05 18:59:44.994336
/home/xxg/build/ceph-dev/src/osd/OSDMap.h: 690: FAILED assert(o < max_osd)
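The crash could be reproduced with something like (id chosen to be >= max_osd):
  ceph osd purge 1000 --yes-i-really-mean-it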
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>