=================
Add/Remove OSDs
=================

Adding and removing Ceph OSD Daemons to your cluster may involve a few more
steps than adding and removing other Ceph daemons. Ceph OSD Daemons write
data to the disk and to journals, so you need to provide a disk for the OSD
and a path to the journal partition (this is the most common configuration,
but you may configure your system to suit your own needs).

In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on-disk
encryption. You may specify the ``--dmcrypt`` argument when preparing an OSD
to tell ``ceph-deploy`` that you want to use encryption, and the
``--dmcrypt-key-dir`` argument to set the location of the ``dm-crypt``
encryption keys.
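
For illustration only (the node name, device name, and key directory below
are placeholder assumptions, not required values), preparing an encrypted
OSD might look like this::

    ceph-deploy osd create --data /dev/sdb --dmcrypt \
        --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osd-server1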

You should test various drive configurations to gauge their throughput before
building out a large cluster. See `Data Storage`_ for additional details.
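
As a quick, rough sanity check before reaching for a fuller benchmarking tool
such as ``fio``, a sequential-write pass with ``dd`` gives a first throughput
figure (the target path below is an assumption; point it at a filesystem
mounted on the drive under test)::

    # Write 256 MiB and flush it to disk before dd reports throughput;
    # conv=fdatasync keeps the page cache from inflating the number.
    dd if=/dev/zero of=/mnt/test-drive/ddtest bs=4M count=64 conv=fdatasync
    rm /mnt/test-drive/ddtest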


List Disks
==========

To list the disks on a node, execute the following command::

    ceph-deploy disk list {node-name [node-name]...}


Zap Disks
=========

To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::

    ceph-deploy disk zap {osd-server-name}:{disk-name}
    ceph-deploy disk zap osdserver1:sdb

.. important:: This will delete all data on the disk.


Create OSDs
===========

Once you create a cluster, install Ceph packages, and gather keys, you
may create the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

    ceph-deploy osd create --data {data-disk} {node-name}

For example::

    ceph-deploy osd create --data /dev/ssd osd-server1

For BlueStore (the default), the example assumes a disk dedicated to one Ceph
OSD Daemon. FileStore is also supported, in which case a ``--journal`` flag
must be supplied in addition to ``--filestore`` to define the journal device
on the remote host.
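
As a sketch using the same placeholder node and device names as above, a
FileStore OSD with its journal on a separate device might be created like
this::

    ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc osd-server1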

.. note:: When running multiple Ceph OSD daemons on a single node and
   sharing a partitioned journal with each OSD daemon, you should consider
   the entire node the minimum failure domain for CRUSH purposes, because
   if the SSD drive fails, all of the Ceph OSD daemons that journal to it
   will fail too.
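
Separating replicas across hosts is Ceph's default behaviour; as an
illustration, it corresponds to the following setting (shown with its
default value) in ``ceph.conf``::

    [global]
    # 1 = host: CRUSH places each replica under a different node, so a
    # failed shared journal SSD costs at most one replica of any object.
    osd crush chooseleaf type = 1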


List OSDs
=========

To list the OSDs deployed on a node, execute the following command::

    ceph-deploy osd list {node-name}


Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::

..     ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]

.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.


.. _Data Storage: ../../../start/hardware-recommendations#data-storage
.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual