doc: Updated usage syntax. Added links to hardware and manual OSD remove.
Signed-off-by: John Wilkins <john.wilkins@inktank.com>
commit 723062bbdd (parent b353da6f68)

Add/Remove OSDs
=================

Adding and removing Ceph OSD Daemons to your cluster may involve a few more
steps when compared to adding and removing other Ceph daemons. Ceph OSD Daemons
write data to the disk and to journals. So you need to provide a disk for the
OSD and a path to the journal partition (i.e., this is the most common
configuration, but you may configure your system to your own needs).

By default, ``ceph-deploy`` will create an OSD with the XFS filesystem. You may
override the filesystem type by providing a ``--fs-type FS_TYPE`` argument,
where ``FS_TYPE`` is an alternate filesystem such as ``ext4`` or ``btrfs``.
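
For example, to create an OSD with ``ext4`` instead of XFS (the node name
``osdserver1`` and disk ``sdb`` below are placeholders, and exact argument
placement may vary by ``ceph-deploy`` version)::

	ceph-deploy osd create --fs-type ext4 osdserver1:sdb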

In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on disk encryption.
You may specify the ``--dm-crypt`` argument when preparing an OSD to tell
``ceph-deploy`` that you want to use encryption. You may also specify the
``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
encryption keys.
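
For example, a hypothetical ``prepare`` invocation with encryption enabled
(the key directory shown is an illustration, not a documented default)::

	ceph-deploy osd prepare --dm-crypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osdserver1:sdb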

You should test various drive configurations to gauge their throughput before
building out a large cluster. See `Data Storage`_ for additional details.


List Disks
==========

To list the disks on a node, execute the following command::

	ceph-deploy disk list {node-name [node-name]...}
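
For example, with two hypothetical node names::

	ceph-deploy disk list osdserver1 osdserver2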


Zap Disks
=========

To zap a disk (delete its partition table) in preparation for use with Ceph,
execute the following::

	ceph-deploy disk zap {osd-server-name}:{disk-name}
	ceph-deploy disk zap osdserver1:sdb

.. important:: This will delete all data.


Prepare OSDs
============

Once you create a cluster, install Ceph packages, and gather keys, you
may prepare the OSDs and deploy them to the OSD node(s). If you need to
identify a disk or zap it prior to preparing it for use as an OSD,
see `List Disks`_ and `Zap Disks`_. ::

	ceph-deploy osd prepare {node-name}:{disk}[:{path/to/journal}]
	ceph-deploy osd prepare osdserver1:sdb:/dev/ssd1

The ``prepare`` command only prepares the OSD. It does not activate it. To
activate a prepared OSD, use the ``activate`` command. See `Activate OSDs`_
for details.

The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and
a path to an SSD journal partition. We recommend storing the journal on
a separate drive to maximize throughput. You may dedicate a single drive
to the journal too (which may be expensive) or place the journal on the
same disk as the OSD (not recommended, as it impairs performance). In the
foregoing example we store the journal on a partitioned solid state drive.

.. note:: When running multiple Ceph OSD daemons on a single node, and
   sharing a partitioned journal with each OSD daemon, you should consider
   the entire node the minimum failure domain for CRUSH purposes, because
   if the SSD drive fails, all of the Ceph OSD daemons that journal to it
   will fail too.
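
For example, a sketch of preparing three OSDs on one node that journal to
separate partitions of a single SSD (all device names are hypothetical)::

	ceph-deploy osd prepare osdserver1:sdb:/dev/ssd1
	ceph-deploy osd prepare osdserver1:sdc:/dev/ssd2
	ceph-deploy osd prepare osdserver1:sdd:/dev/ssd3

If that one SSD fails, all three OSDs fail with it, which is why the node is
the minimum failure domain in this layout.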


Activate OSDs
=============

Once you prepare an OSD you may activate it with the following command. ::

	ceph-deploy osd activate {node-name}:{path/to/disk}[:{path/to/journal}]
	ceph-deploy osd activate osdserver1:/dev/sdb1:/dev/ssd1

The ``activate`` command will cause your OSD to come ``up`` and be placed
``in`` the cluster. The ``activate`` command uses the path to the partition
created when running the ``prepare`` command.
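
After activation, you can confirm that the OSD is ``up`` and ``in`` with the
standard Ceph CLI::

	ceph osd tree

The output lists each OSD with its weight and its ``up``/``down`` status.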


Create OSDs
===========

You may prepare OSDs, deploy them to the OSD node(s) and activate them in one
step with the ``create`` command. The ``create`` command is a convenience method
for executing the ``prepare`` and ``activate`` commands sequentially. ::

	ceph-deploy osd create {node-name}:{disk}[:{path/to/journal}]
	ceph-deploy osd create osdserver1:sdb:/dev/ssd1
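
If your ``ceph-deploy`` version accepts multiple ``{node-name}:{disk}`` tuples
in a single invocation (an assumption; check ``ceph-deploy osd create --help``),
you could create two OSDs at once::

	ceph-deploy osd create osdserver1:sdb:/dev/ssd1 osdserver1:sdc:/dev/ssd2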


.. List OSDs
.. =========

.. To list the OSDs deployed on a node, execute the following command::

.. 	ceph-deploy osd list {node-name}


Destroy OSDs
============

.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.

.. To destroy an OSD, execute the following command::

.. 	ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]

.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.

.. _Data Storage: ../../../install/hardware-recommendations#data-storage
.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual