This case introduces multiple quotes in the caps line,
which will trigger the bug described in http://tracker.ceph.com/issues/22227.
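For illustration only, a hedged sketch of the kind of caps line this exercises; the entity and pool names below are placeholders:
```
$ ceph auth caps client.test mon 'allow r' \
      osd 'allow rwx pool="pool-a", allow rwx pool="pool-b"'
```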
Signed-off-by: Gu Zhongyan <guzhongyan@360.cn>
Our test admin has been asking for this for the past few years :-)
Besides, this is also useful for operating on large Ceph clusters with
multiple storage pools possibly spanning all OSDs.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
It does not make much sense to add this kind of restriction
as long as the user is aware of what is going on.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Use the OSD_POOL_PRIORITY_MAX and OSD_POOL_PRIORITY_MIN constants.
Scale legacy priorities if they exceed the maximum.
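A hedged example of setting a priority within the new bounds; the pool name is a placeholder and the exact MIN/MAX values are an assumption:
```
# legacy priorities above the maximum are scaled down rather than taken
# literally; the bounds are assumed here to be roughly -10..10
$ ceph osd pool set mypool recovery_priority 5
```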
Signed-off-by: David Zafman <dzafman@redhat.com>
For large clusters, we use device classes to isolate storage pools.
The existing 'osd df' output turns out to be too noisy, say, if
you care about only a single storage pool with OSDs possibly spanning
all hosts.
With this change you can now do 'osd df' by class (or by pool,
if you simply use classes to separate different pools), or by a specified
crush bucket name you are currently interested in, which is much more
convenient.
Some examples:
```
$ bin/ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.05878 - 60 GiB 6.4 GiB 23 MiB 0 B 6 GiB 54 GiB 10.60 1.00 - root default
-3 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
4 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 58 up osd.4
5 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 60 up osd.5
-5 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph12
0 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 50 up osd.0
1 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 61 up osd.1
2 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 51 up osd.2
TOTAL 60 GiB 6.4 GiB 23 MiB 0 B 6 GiB 54 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
$ bin/ceph osd df tree class aaa
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.05878 - 20 GiB 2.1 GiB 7.8 MiB 0 B 2 GiB 18 GiB 10.60 1.00 - root default
-3 0.02939 - 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
-5 0.02939 - 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 - host ceph12
0 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 50 up osd.0
TOTAL 20 GiB 2.1 GiB 7.8 MiB 0 B 2 GiB 18 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
$ bin/ceph osd df tree name ceph11
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-3 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
4 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 58 up osd.4
5 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 60 up osd.5
TOTAL 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
```
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
For those with multiple storage pools sharing the same devices,
I think it would make much more sense to offer per-pool
commands, so that pools with high priority, e.g., because they
are hosting more important data than others, can be brought back to
normal quickly.
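A hedged sketch of the per-pool variants this adds; 'images' is a placeholder pool name:
```
$ ceph osd pool force-recovery images
$ ceph osd pool force-backfill images
# and to undo them again:
$ ceph osd pool cancel-force-recovery images
$ ceph osd pool cancel-force-backfill images
```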
Fixes: http://tracker.ceph.com/issues/38456
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This isn't really relevant or useful now that the mgr is throttling the
actual pg_num adjustment based on pg_num_target, % misplaced, etc.
Signed-off-by: Sage Weil <sage@redhat.com>
Select force peering on a single PG. In reality this probably induces
*2* interval changes.
Note that in the case of a single-OSD cluster we can't actually force a
repeer on a single PG, because the pg_temp code is pretty robust about
filtering out redundant or meaningless changes: we can't pg_temp our
way into a new interval if there are no other OSDs to switch to, and the
code also prevents an empty pg_temp.
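A hedged example, assuming the CLI entry point for this is the 'pg repeer' command; the pgid is a placeholder:
```
$ ceph pg repeer 2.7
```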
Signed-off-by: Sage Weil <sage@redhat.com>
What we actually want is a purge, not a destroy. Destroy leaves the OSD
ID in use and allows it to be recreated. What ceph-volume wants is to
purge all trace of the failed OSD setup.
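For contrast, the two existing commands (placeholder OSD id 7):
```
# destroy removes the OSD's cephx/lockbox keys but keeps its id reusable
$ ceph osd destroy 7 --yes-i-really-mean-it
# purge additionally removes the OSD from the CRUSH map and the OSD map entirely
$ ceph osd purge 7 --yes-i-really-mean-it
```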
Signed-off-by: Sage Weil <sage@redhat.com>
ceph-volume may run into a problem and want to clean up, but we do not
want to give it blanket access to the 'osd destroy' command. Instead,
add an 'osd destroy-new' command that can only destroy new OSDs (ones that
are in the process of being created but have never booted yet).
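A hedged sketch of the intended usage; the id is a placeholder and the exact flags are an assumption (likely mirroring 'osd destroy'):
```
$ ceph osd destroy-new 7 --yes-i-really-mean-it
```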
Signed-off-by: Sage Weil <sage@redhat.com>
"ceph fs set cephfs allow_multimds false" is deprecated, and multimds is
enabled by default, so "ceph fs set cephfs max_mds 4" won't fail with
the default settings.
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/16608/head:
qa: whitelist mds down wrn during cephfs testing
mds: add config to disable fragmentation
qa: add max_mds thrash test
qa: mds_thrash updates for new max_mds behavior
doc: update upgrade procedure and release notes
qa: add test for cluster resizing
qa: remove use of mds deactivate
cephfs: add new down/joinable fs flags
mds: evict all clients if last mds shutting down
cephfs: deprecate ceph mds deactivate
cephfs: kill allow_dirfrags
cephfs: Kill allow_multimds
cephfs: Change behavior of cluster_down flag
mon/FSCommands: Set extra MDS to standby
cephfs: Health check changes
mon/MDSMonitor: Remove command support for legacy syntax
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
With multi-mds now declared stable, allow_multimds defaults to 1.
Given the max_mds parameter, it is now redundant. Remove it, leaving a
comment placeholder in the features bitmap.
"ceph fs set <fs> allow_multimds" is now deprecated and prints a warning
message.
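For example, a cluster is now resized with max_mds alone ('cephfs' is a placeholder fs name):
```
$ ceph fs set cephfs max_mds 2
```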
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Until now bytes and objects were formatted using si_t, which used 1024 as
the factor to pretty-print large numbers. For object counts a factor of
1000 is preferred. This commit retires the si_t formatting (as well as
prettybyte_t, kb_t and pretty_si_t) completely and adds structs and
formatting for binary and decimal units, bin_u_t and dec_u_t respectively.
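To illustrate the 1024 vs. 1000 distinction (using GNU coreutils numfmt, not Ceph code):
```
$ numfmt --to=iec 1048576   # 1.0M, 1024-based, appropriate for byte sizes
$ numfmt --to=si  1000000   # 1.0M, 1000-based, appropriate for object counts
```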
Fixes: http://tracker.ceph.com/issues/22095
Signed-off-by: Jan Fajerski <jfajerski@suse.com>