From 74cc624d002e51769da37c04b3bdc32e0077d370 Mon Sep 17 00:00:00 2001
From: Zac Dover
Date: Mon, 10 Jun 2024 04:55:13 +1000
Subject: [PATCH] doc/start: remove "intro.rst"

Remove "start/intro.rst", which has been renamed "start/index.rst" to
follow the conventions used elsewhere in the documentation.

Follows https://github.com/ceph/ceph/pull/57900.

Signed-off-by: Zac Dover
---
 doc/start/index.rst |  4 --
 doc/start/intro.rst | 98 ---------------------------------------------
 2 files changed, 102 deletions(-)
 delete mode 100644 doc/start/intro.rst

diff --git a/doc/start/index.rst b/doc/start/index.rst
index 640fb5d84a8..0aec895ab73 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -97,7 +97,3 @@ recover dynamically.
    get-involved
    documenting-ceph
 
-.. toctree::
-   :maxdepth: 2
-
-   intro
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
deleted file mode 100644
index 1cbead4a3df..00000000000
--- a/doc/start/intro.rst
+++ /dev/null
@@ -1,98 +0,0 @@
-===============
- Intro to Ceph
-===============
-
-Ceph can be used to provide :term:`Ceph Object Storage` to :term:`Cloud
-Platforms` and Ceph can be used to provide :term:`Ceph Block Device` services
-to :term:`Cloud Platforms`. Ceph can be used to deploy a :term:`Ceph File
-System`. All :term:`Ceph Storage Cluster` deployments begin with setting up
-each :term:`Ceph Node` and then setting up the network.
-
-A Ceph Storage Cluster requires the following: at least one Ceph Monitor and at
-least one Ceph Manager, and at least as many :term:`Ceph Object Storage
-Daemon`\s (OSDs) as there are copies of a given object stored in the
-Ceph cluster (for example, if three copies of a given object are stored in the
-Ceph cluster, then at least three OSDs must exist in that Ceph cluster).
-
-The Ceph Metadata Server is necessary to run Ceph File System clients.
-
-.. note::
-
-   It is a best practice to have a Ceph Manager for each Monitor, but it is not
-   necessary.
-
-.. ditaa::
-
-   +---------------+ +------------+ +------------+ +---------------+
-   |      OSDs     | |  Monitors  | |  Managers  | |      MDSs     |
-   +---------------+ +------------+ +------------+ +---------------+
-
-- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
-  cluster state, including the :ref:`monitor map`, manager
-  map, the OSD map, the MDS map, and the CRUSH map. These maps are critical
-  cluster state required for Ceph daemons to coordinate with each other.
-  Monitors are also responsible for managing authentication between daemons and
-  clients. At least three monitors are normally required for redundancy and
-  high availability.
-
-- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
-  responsible for keeping track of runtime metrics and the current
-  state of the Ceph cluster, including storage utilization, current
-  performance metrics, and system load. The Ceph Manager daemons also
-  host python-based modules to manage and expose Ceph cluster
-  information, including a web-based :ref:`mgr-dashboard` and
-  `REST API`_. At least two managers are normally required for high
-  availability.
-
-- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
-  ``ceph-osd``) stores data, handles data replication, recovery,
-  rebalancing, and provides some monitoring information to Ceph
-  Monitors and Managers by checking other Ceph OSD Daemons for a
-  heartbeat. At least three Ceph OSDs are normally required for
-  redundancy and high availability.
-
-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
-  for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
-  run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
-  the Ceph Storage Cluster.
-
-Ceph stores data as objects within logical storage pools. Using the
-:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
-contain the object, and which OSD should store the placement group. The
-CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
-recover dynamically.
-
-.. _REST API: ../../mgr/restful
-
-.. container:: columns-2
-
-   .. container:: column
-
-      .. raw:: html
-
-         <h3>Recommendations</h3>
-
-      To begin using Ceph in production, you should review our hardware
-      recommendations and operating system recommendations.
-
-      .. toctree::
-         :maxdepth: 2
-
-         Beginner's Guide <beginners-guide>
-         Hardware Recommendations <hardware-recommendations>
-         OS Recommendations <os-recommendations>
-
-   .. container:: column
-
-      .. raw:: html
-
-         <h3>Get Involved</h3>
-
-      You can avail yourself of help or contribute documentation, source
-      code or bugs by getting involved in the Ceph community.
-
-      .. toctree::
-         :maxdepth: 2
-
-         get-involved
-         documenting-ceph
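
Note: the removed page describes the daemons that make up a Ceph Storage
Cluster (Monitors, Managers, OSDs, and MDSs). On a running cluster the same
inventory can be inspected with the standard ceph CLI; a minimal sketch,
assuming a node that has the ceph client installed and an admin keyring (the
comments are illustrative, not part of the removed text):

    # Overall health, monitor quorum, active manager, OSD and PG counts
    ceph status

    # Per-daemon views
    ceph mon stat        # monitor quorum membership
    ceph mgr stat        # the active manager daemon
    ceph osd tree        # OSDs and their position in the CRUSH hierarchy
    ceph fs status       # MDS daemons, if a CephFS file system is configured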
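
The removed page also explains that CRUSH maps each object to a placement
group (PG) and each PG to a set of OSDs. That calculated placement can be
queried for any named object; a small sketch, assuming a hypothetical pool
"mypool" holding an object "myobject":

    # Show the PG and the OSD set that CRUSH selects for one object
    ceph osd map mypool myobject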