===============
 Intro to Ceph
===============

Ceph can be used to provide :term:`Ceph Object Storage` and :term:`Ceph Block
Device` services to :term:`Cloud Platforms`, and it can be used to deploy a
:term:`Ceph File System`. All :term:`Ceph Storage Cluster` deployments begin
with setting up each :term:`Ceph Node` and then setting up the network.

A Ceph Storage Cluster requires at least one Ceph Monitor, at least one Ceph
Manager, and at least as many Ceph OSDs as there are copies of an object
stored on the Ceph cluster (for example, if three copies of a given object
are stored on the Ceph cluster, then at least three OSDs must exist in that
Ceph cluster).

The Ceph Metadata Server is necessary to run Ceph File System clients.

.. note::

   It is a best practice to have a Ceph Manager for each Monitor, but it is
   not necessary.

.. ditaa::

            +---------------+ +------------+ +------------+ +---------------+
            |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
            +---------------+ +------------+ +------------+ +---------------+

- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
  of the cluster state, including the monitor map, the manager map, the
  OSD map, the MDS map, and the CRUSH map. These maps are critical
  cluster state required for Ceph daemons to coordinate with each other.
  Monitors are also responsible for managing authentication between
  daemons and clients. At least three monitors are normally required
  for redundancy and high availability.

- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
  responsible for keeping track of runtime metrics and the current
  state of the Ceph cluster, including storage utilization, current
  performance metrics, and system load. The Ceph Manager daemons also
  host Python-based modules to manage and expose Ceph cluster
  information, including a web-based :ref:`mgr-dashboard` and
  `REST API`_. At least two managers are normally required for high
  availability.

- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
  ``ceph-osd``) stores data, handles data replication, recovery, and
  rebalancing, and provides some monitoring information to Ceph
  Monitors and Managers by checking other Ceph OSD Daemons for a
  heartbeat. At least three Ceph OSDs are normally required for
  redundancy and high availability.

- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
  metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
  Servers allow POSIX file system users to execute basic commands (like
  ``ls``, ``find``, etc.) without placing an enormous burden on the
  Ceph Storage Cluster.
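
To make these roles concrete, here is a minimal sketch that uses the Python
``rados`` bindings to connect as a client: the monitors listed in the
configuration file supply the cluster maps and handle authentication, after
which the client can query basic cluster information. The configuration path
below is an assumption; adjust it for your deployment.

.. code-block:: python

   import rados

   # Connect as a client: the monitors named in ceph.conf provide the
   # cluster maps and authenticate this client against the cluster.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
   cluster.connect()
   try:
       print(cluster.get_fsid())           # cluster ID from the monitor map
       print(cluster.get_cluster_stats())  # overall usage and object counts
   finally:
       cluster.shutdown()
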
Ceph stores data as objects within logical storage pools. Using the
:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
contain the object, and which OSD should store the placement group. The
CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
recover dynamically.
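
To illustrate this placement flow, the sketch below (again using the Python
``rados`` bindings, and assuming a pool named ``mypool`` already exists)
writes an object into a pool and reads it back. The client never names a
placement group or an OSD: Ceph computes the placement group from the object
name and the pool, and CRUSH maps that placement group to OSDs.

.. code-block:: python

   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
   cluster.connect()
   try:
       ioctx = cluster.open_ioctx('mypool')  # hypothetical, pre-created pool
       try:
           # Placement is implicit: the object maps to a placement group,
           # and CRUSH maps that placement group to OSDs.
           ioctx.write_full('hello-object', b'Hello, Ceph!')
           print(ioctx.read('hello-object'))
       finally:
           ioctx.close()
   finally:
       cluster.shutdown()
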
.. _REST API: ../../mgr/restful

.. container:: columns-2

   .. container:: column

      .. raw:: html

          <h3>Recommendations</h3>

      To begin using Ceph in production, you should review our hardware
      recommendations and operating system recommendations.

      .. toctree::
         :maxdepth: 2

         Hardware Recommendations <hardware-recommendations>
         OS Recommendations <os-recommendations>

   .. container:: column

      .. raw:: html

          <h3>Get Involved</h3>

      You can get help, or contribute documentation, source code, and bug
      reports, by getting involved in the Ceph community.

      .. toctree::
         :maxdepth: 2

         get-involved
         documenting-ceph