======================
 Ceph Storage Cluster
======================

The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
(OSD) stores data as objects on a storage node; and a :term:`Ceph Monitor` (MON)
maintains a master copy of the cluster map. A Ceph Storage Cluster may contain
thousands of storage nodes. A minimal system will have at least one
Ceph Monitor and two Ceph OSD Daemons for data replication.

The Ceph File System, Ceph Object Storage and Ceph Block Devices read data from
and write data to the Ceph Storage Cluster.

.. raw:: html

   <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
   <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Config and Deploy</h3>

Ceph Storage Clusters have a few required settings, but most configuration
settings have default values. A typical deployment uses a deployment tool
to define a cluster and bootstrap a monitor. See `Deployment`_ for details
on ``cephadm``.

.. toctree::
   :maxdepth: 2

   Configuration <configuration/index>
   Deployment <../cephadm/index>

.. raw:: html

   </td><td><h3>Operations</h3>

Once you have deployed a Ceph Storage Cluster, you may begin operating
your cluster.

.. toctree::
   :maxdepth: 2

   Operations <operations/index>

.. toctree::
   :maxdepth: 1

   Man Pages <man/index>

.. toctree::
   :hidden:

   troubleshooting/index

.. raw:: html

   </td><td><h3>APIs</h3>

Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
`Ceph File System`_. You may also develop applications that talk directly to
the Ceph Storage Cluster.

.. toctree::
   :maxdepth: 2

   APIs <api/index>

.. raw:: html

   </td></tr></tbody></table>

.. _Ceph Block Devices: ../rbd/
.. _Ceph File System: ../cephfs/
.. _Ceph Object Storage: ../radosgw/
.. _Deployment: ../cephadm/