doc: few notes on manipulating the crush map
Signed-off-by: Sage Weil <sage.weil@dreamhost.com>
parent 6db7715897
commit 3bd1f18e59
doc/ops/manage/crush.rst (new file, 75 lines)
@@ -0,0 +1,75 @@
.. _adjusting-crush:

=========================
 Adjusting the CRUSH map
=========================

There are a few ways to adjust the CRUSH map:

* online, by issuing commands to the monitor
* offline, by extracting the current map to a file, modifying it, and
  then injecting the new map back into the cluster

Some offline changes can be made directly with ``crushtool``; others
require you to decompile the map to its text form, edit it by hand,
and then recompile it.
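
For reference, the decompile/edit/recompile round trip can look like
the following sketch (the temporary file names here are arbitrary,
not a convention)::

  $ ceph osd getcrushmap -o /tmp/crush
  $ crushtool -d /tmp/crush -o /tmp/crush.txt
  $ vi /tmp/crush.txt
  $ crushtool -c /tmp/crush.txt -o /tmp/crush.new
  $ ceph osd setcrushmap -i /tmp/crush.new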

Adding a new device (OSD) to the map
====================================

Adding a new device can be done via the monitor. The general form is::

  $ ceph osd crush add <id> <name> <weight> [<loc> [<loc2> ...]]

where

* ``id`` is the numeric device id (the OSD id)
* ``name`` is an alphanumeric name. By convention Ceph uses
  ``osd.$id``.
* ``weight`` is a floating point weight value controlling how much
  data the device will be allocated. A decent convention is to make
  this the number of TB the device will store.
* ``loc`` is a list of ``what=where`` pairs indicating where in the
  CRUSH hierarchy the device will be stored. By default, the
  hierarchy (the ``what`` types) includes ``pool`` (the ``default``
  pool is normally the root of the hierarchy), ``rack``, and
  ``host``. At least one of these location specifiers has to refer to
  an existing point in the hierarchy, and only the lowest (most
  specific) match counts. Beneath that point, any intervening
  branches will be created as needed. Specifying the complete
  location is always sufficient, and also safe in that existing
  branches (and devices) won't be moved around.

For example, if the new OSD id is ``123``, we want a weight of
``1.0``, and the new device is on host ``hostfoo`` in rack
``rackbar``, then::

  $ ceph osd crush add 123 osd.123 1.0 pool=default rack=rackbar host=hostfoo

will add it to the hierarchy. The rack ``rackbar`` and host
``hostfoo`` will be added as needed, as long as the pool ``default``
exists (as it does in the default Ceph CRUSH map generated during
cluster creation).

Note that if you later add another device on the same host but
specify a different pool or rack::

  $ ceph osd crush add 124 osd.124 1.0 pool=nondefault rack=weirdrack host=hostfoo

the device will still be placed in host ``hostfoo`` at its current
location (rack ``rackbar`` and pool ``default``).
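
To double-check where things ended up, ``ceph osd tree`` prints the
current hierarchy along with the weight and position of each device;
this assumes you can reach a monitor::

  $ ceph osd tree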

Adjusting the CRUSH weight
==========================

You can adjust the CRUSH weight for a device with::

  $ ceph osd crush reweight osd.123 2.0
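
Reweighting causes data to migrate. One way to keep an eye on the
rebalance (a suggestion on our part, not part of the reweight
procedure itself) is to watch the cluster status and log::

  $ ceph -w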

Removing a device
=================

You can remove a device from the CRUSH map with::

  $ ceph osd crush remove osd.123

@@ -2,11 +2,42 @@

Resizing the RADOS cluster
==========================

Adding a new OSD to the cluster
===============================

Briefly...

#. Allocate a new OSD id::

     $ ceph osd create
     123

#. Make sure ``ceph.conf`` is valid for the new OSD (see the sketch
   after this list).

#. Initialize the osd data directory::

     $ ceph-osd -i 123 --mkfs --mkkey

#. Register the OSD authentication key::

     $ ceph auth add osd.123 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd-data/123/keyring

#. Adjust the CRUSH map to allocate data to the new device (see
   :ref:`adjusting-crush`).
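
For step 2, a minimal ``ceph.conf`` stanza for the new OSD might look
like the following. This is a sketch only: the host name is an
assumption, and the data path simply mirrors the keyring path used
above; adjust both for your deployment. ::

  [osd.123]
          host = hostfoo                        ; assumed host name
          osd data = /var/lib/ceph/osd-data/123
          keyring = /var/lib/ceph/osd-data/123/keyring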

Removing OSDs
=============

Briefly...

#. Stop the daemon.

#. Remove it from the CRUSH map::

     $ ceph osd crush remove osd.123

#. Remove it from the osd map::

     $ ceph osd rm 123
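
If the OSD still holds data you care about, a cautious variant (our
suggestion, not part of the original steps) is to mark the OSD out
first and let the cluster migrate its data away before stopping the
daemon::

  $ ceph osd out 123
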
See also :ref:`failures-osd`.