Merge pull request #1786 from nereocystis/quick-common

doc: Common graph used in 2 quick start files
John Wilkins 2014-05-09 10:27:53 -07:00
commit 0d0c209263
4 changed files with 34 additions and 48 deletions

View File

@@ -9,7 +9,7 @@ release = 'dev'
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
exclude_patterns = ['**/.#*', '**/*~']
exclude_patterns = ['**/.#*', '**/*~', 'start/quick-common.rst']
pygments_style = 'sphinx'
html_theme = 'ceph'
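Listing ``start/quick-common.rst`` in ``exclude_patterns`` keeps Sphinx from building the shared fragment as a standalone page (and from warning that it is not in any toctree). The fragment is only ever pulled into other pages, as the two quick start files below do with the standard ``include`` directive::

   .. include:: quick-common.rst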

View File

@@ -7,34 +7,7 @@ If you haven't completed your `Preflight Checklist`_, do that first. This
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.
.. ditaa::
   /------------------\         /----------------\
   |    Admin Node    |         |     node1      |
   |                  +-------->+ cCCC           |
   |    ceph-deploy   |         |    mon.node1   |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |     node2      |
             +----------------->+ cCCC           |
             |                  |     osd.0      |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |     node3      |
             +----------------->| cCCC           |
                                |     osd.1      |
                                \----------------/
For best results, create a directory on your admin node for maintaining the
configuration that ``ceph-deploy`` generates for your cluster. ::

   mkdir my-cluster
   cd my-cluster

.. tip:: The ``ceph-deploy`` utility will output files to the
   current directory. Ensure you are in this directory when executing
   ``ceph-deploy``.
.. include:: quick-common.rst
As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
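A rough sketch of that first exercise, assuming the node names from the diagram above and illustrative OSD data directories; the exact ``ceph-deploy`` arguments for your release may differ::

   ceph-deploy new node1                    # declare node1 as the initial monitor
   ceph-deploy install node1 node2 node3    # install Ceph packages on all three nodes
   ceph-deploy mon create-initial           # deploy the monitor and gather keys
   ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
   ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1
   ceph-deploy admin node1 node2 node3      # push ceph.conf and the admin keyring
   ceph health                              # run where the admin keyring is available; expect HEALTH_OK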
@@ -382,4 +355,4 @@ the migration manually.
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref

View File

@@ -0,0 +1,28 @@
.. ditaa::
   /------------------\         /----------------\
   |    Admin Node    |         |     node1      |
   |                  +-------->+ cCCC           |
   |    ceph-deploy   |         |    mon.node1   |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |     node2      |
             +----------------->+ cCCC           |
             |                  |     osd.0      |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |     node3      |
             +----------------->| cCCC           |
                                |     osd.1      |
                                \----------------/
For best results, create a directory on your admin node for maintaining the
configuration that ``ceph-deploy`` generates for your cluster. ::

   mkdir my-cluster
   cd my-cluster

.. tip:: The ``ceph-deploy`` utility will output files to the
   current directory. Ensure you are in this directory when executing
   ``ceph-deploy``.
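For example, after the first ``ceph-deploy`` command is run from this directory, the generated configuration, monitor keyring and log file land here rather than in your home directory (the file names below are typical for this era of ``ceph-deploy``, not guaranteed)::

   mkdir my-cluster
   cd my-cluster
   ceph-deploy new node1
   ls
   # typically: ceph.conf  ceph.log  ceph.mon.keyring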

View File

@@ -11,25 +11,10 @@ three Ceph Nodes (or virtual machines) that will host your Ceph Storage Cluster.
Before proceeding any further, see `OS Recommendations`_ to verify that you have
a supported distribution and version of Linux.
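A quick way to check what each node is actually running (generic Linux commands; ``lsb_release`` may require an extra package on some distributions)::

   lsb_release -a    # distribution name and release
   uname -r          # kernel version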
In the descriptions below, :term:`Node` refers to a single machine.
.. include:: quick-common.rst
.. ditaa::
   /------------------\         /----------------\
   |    Admin Node    |         |     node1      |
   |                  +-------->+                |
   |    ceph-deploy   |         | cCCC           |
   \---------+--------/         \----------------/
             |
             |                  /----------------\
             |                  |     node2      |
             +----------------->+                |
             |                  | cCCC           |
             |                  \----------------/
             |
             |                  /----------------\
             |                  |     node3      |
             +----------------->|                |
                                | cCCC           |
                                \----------------/
Ceph Deploy Setup