doc/rados: rewrite storage device front matter

This PR updates the text in the RADOS Guide
(the Ceph Storage Cluster Guide) that appears
at the beginning of the "Storage Devices"
chapter. I did the following:

- rewrote some of the sentences so that
  they read more like written text than like
  spoken language
- added "Ceph Manager" to the list of daemons
  that a Ceph cluster comprises
- made no other changes.

Signed-off-by: Zac Dover <zac.dover@gmail.com>
Zac Dover 2021-07-29 00:37:46 +10:00
parent 547332a629
commit 64ac87bf2b

@@ -2,22 +2,26 @@
 Storage Devices
 =================

-There are two Ceph daemons that store data on devices:
+There are several Ceph daemons in a storage cluster:

-* **Ceph OSDs** (or Object Storage Daemons) are where most of the
-  data is stored in Ceph. Generally speaking, each OSD is backed by
-  a single storage device, like a traditional hard disk (HDD) or
-  solid state disk (SSD). OSDs can also be backed by a combination
-  of devices, like a HDD for most data and an SSD (or partition of an
-  SSD) for some metadata. The number of OSDs in a cluster is
-  generally a function of how much data will be stored, how big each
-  storage device will be, and the level and type of redundancy
-  (replication or erasure coding).
-* **Ceph Monitor** daemons manage critical cluster state like cluster
-  membership and authentication information. For smaller clusters a
-  few gigabytes is all that is needed, although for larger clusters
-  the monitor database can reach tens or possibly hundreds of
-  gigabytes.
+* **Ceph OSDs** (Object Storage Daemons) store most of the data
+  in Ceph. Usually each OSD is backed by a single storage device.
+  This can be a traditional hard disk (HDD) or a solid state disk
+  (SSD). OSDs can also be backed by a combination of devices: for
+  example, a HDD for most data and an SSD (or partition of an
+  SSD) for some metadata. The number of OSDs in a cluster is
+  usually a function of the amount of data to be stored, the size
+  of each storage device, and the level and type of redundancy
+  specified (replication or erasure coding).
+* **Ceph Monitor** daemons manage critical cluster state. This
+  includes cluster membership and authentication information.
+  Small clusters require only a few gigabytes of storage to hold
+  the monitor database. In large clusters, however, the monitor
+  database can reach sizes of tens of gigabytes to hundreds of
+  gigabytes.
+* **Ceph Manager** daemons run alongside monitor daemons, providing
+  additional monitoring and providing interfaces to external
+  monitoring and management systems.

 OSD Backends
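
The rewritten OSD bullet keeps the point that the OSD count follows from the amount of data to be stored, the size of each device, and the redundancy scheme. A minimal back-of-the-envelope sketch of that relationship, using hypothetical figures (100 TiB of user data, 8 TiB devices, 3-way replication, the usual default for replicated pools) rather than anything taken from this patch:

    import math

    # Hypothetical inputs -- none of these numbers come from the patch above.
    user_data_tib = 100        # amount of data to be stored
    replication_factor = 3     # 3-way replication (erasure coding would change this)
    device_size_tib = 8        # size of each storage device (one device per OSD)
    target_utilization = 0.75  # leave headroom rather than filling devices completely

    raw_capacity_tib = user_data_tib * replication_factor
    osds_needed = math.ceil(raw_capacity_tib / (device_size_tib * target_utilization))
    print(f"{raw_capacity_tib} TiB raw capacity -> at least {osds_needed} OSDs")

This prints "300 TiB raw capacity -> at least 50 OSDs". Erasure coding trades the 3x replication multiplier for a smaller overhead (for example, k=4, m=2 stores 1.5x the data), which is why the bullet names both the level and the type of redundancy as inputs.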