doc: remove whitespace

Signed-off-by: Alfredo Deza <alfredo.deza@inktank.com>
Alfredo Deza 2014-10-13 11:09:38 -04:00
parent 3f8fb85b34
commit 264f0fced5

@@ -4,13 +4,13 @@
If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.
.. include:: quick-common.rst
As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a third Ceph OSD Daemon, a Metadata Server and two more Ceph Monitors.
For best results, create a directory on your admin node for maintaining the
configuration files and keys that ``ceph-deploy`` generates for your cluster. ::
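# A sketch only; the directory name my-cluster is an example, not a requirement.
mkdir my-cluster
cd my-cluster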
@@ -21,31 +21,31 @@ configuration files and keys that ``ceph-deploy`` generates for your cluster. ::
The ``ceph-deploy`` utility will output files to the current directory. Ensure you
are in this directory when executing ``ceph-deploy``.
.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
if you are logged in as a different user, because it will not issue ``sudo``
commands needed on the remote host.
.. topic:: Disable ``requiretty``
On some distributions (e.g., CentOS), you may receive an error while trying
to execute ``ceph-deploy`` commands. If ``requiretty`` is set
by default, disable it by executing ``sudo visudo`` and locate the
``Defaults requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` to
ensure that ``ceph-deploy`` can connect using the ``ceph`` user and execute
commands with ``sudo``.
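As a sketch, the resulting ``sudoers`` entry (assuming the deploy user is named ``ceph``) would look like this::
# was: Defaults requiretty
Defaults:ceph !requiretty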
Create a Cluster
================
If at any point you run into trouble and you want to start over, execute
the following to purge the configuration::
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
To purge the Ceph packages too, you may also execute::
ceph-deploy purge {ceph-node} [{ceph-node}]
If you execute ``purge``, you must re-install Ceph.
@@ -61,23 +61,23 @@ configuration details, perform the following steps using ``ceph-deploy``.
ceph-deploy new node1
Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
directory. You should see a Ceph configuration file, a monitor secret
keyring, and a log file for the new cluster. See `ceph-deploy new -h`_
for additional details.
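For instance, a quick way to inspect those files (exact file names can vary between ``ceph-deploy`` releases)::
ls
cat ceph.conf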
#. Change the default number of replicas in the Ceph configuration file from
``3`` to ``2`` so that Ceph can achieve an ``active + clean`` state with
just two Ceph OSDs. Add the following line under the ``[global]`` section::
osd pool default size = 2
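In context, the edited ``[global]`` section would then look roughly like this (the settings written by ``ceph-deploy new`` are elided)::
[global]
# ... settings generated by ceph-deploy new ...
osd pool default size = 2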
#. If you have more than one network interface, add the ``public network``
setting under the ``[global]`` section of your Ceph configuration file.
See the `Network Configuration Reference`_ for details. ::
public network = {ip-address}/{netmask}
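For example, assuming the (hypothetical) 10.1.1.0/24 subnet for your nodes::
public network = 10.1.1.0/24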
#. Install Ceph. ::
ceph-deploy install {ceph-node} [{ceph-node} ...]
@@ -86,26 +86,26 @@ configuration details, perform the following steps using ``ceph-deploy``.
ceph-deploy install admin-node node1 node2 node3
The ``ceph-deploy`` utility will install Ceph on each node.
**NOTE**: If you use ``ceph-deploy purge``, you must re-execute this step
to re-install Ceph.
#. Add the initial monitor(s) and gather the keys (new in
``ceph-deploy`` v1.1.3). ::
ceph-deploy mon create-initial {ceph-node}
**Note:** In earlier versions of ``ceph-deploy``, you must create the
initial monitor(s) and gather keys in two discrete steps. First, create
the monitor. ::
ceph-deploy mon create {ceph-node}
For example::
ceph-deploy mon create node1
Then, gather the keys. ::
ceph-deploy gatherkeys {ceph-node}
@@ -113,27 +113,27 @@ configuration details, perform the following steps using ``ceph-deploy``.
ceph-deploy gatherkeys node1
Once you complete the process, your local directory should have the following
keyrings:
- ``{cluster-name}.client.admin.keyring``
- ``{cluster-name}.bootstrap-osd.keyring``
- ``{cluster-name}.bootstrap-mds.keyring``
#. Add two OSDs. For fast setup, this quick start uses a directory rather
than an entire disk per Ceph OSD Daemon. See `ceph-deploy osd`_ for
details on using separate disks/partitions for OSDs and journals.
Log in to the Ceph Nodes and create a directory for
the Ceph OSD Daemon. ::
ssh node2
sudo mkdir /var/local/osd0
exit
ssh node3
sudo mkdir /var/local/osd1
exit
Then, from your admin node, use ``ceph-deploy`` to prepare the OSDs. ::
@@ -143,7 +143,7 @@ configuration details, perform the following steps using ``ceph-deploy``.
ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1
Finally, activate the OSDs. ::
ceph-deploy osd activate {ceph-node}:/path/to/directory
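For example, continuing with the directories prepared above::
ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1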
@@ -153,22 +153,22 @@ configuration details, perform the following steps using ``ceph-deploy``.
#. Use ``ceph-deploy`` to copy the configuration file and admin key to
your admin node and your Ceph Nodes so that you can use the ``ceph``
CLI without having to specify the monitor address and
``ceph.client.admin.keyring`` each time you execute a command. ::
ceph-deploy admin {admin-node} {ceph-node}
For example::
ceph-deploy admin admin-node node1 node2 node3
When ``ceph-deploy`` is talking to the local admin host (``admin-node``),
it must be reachable by its hostname. If necessary, modify ``/etc/hosts``
to add the name of the admin host.
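A sketch of such an ``/etc/hosts`` entry, with the address left as a placeholder::
{admin-node-ip-address}   admin-node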
#. Ensure that you have the correct permissions for the
``ceph.client.admin.keyring``. ::
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
@@ -177,23 +177,23 @@ configuration details, perform the following steps using ``ceph-deploy``.
ceph health
Your cluster should return an ``active + clean`` state when it
has finished peering.
Operating Your Cluster
======================
Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
To operate the cluster daemons with Debian/Ubuntu distributions, see
`Running Ceph with Upstart`_. To operate the cluster daemons with CentOS,
Red Hat, Fedora, and SLES distributions, see `Running Ceph with sysvinit`_.
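As a brief illustration only (see the linked guides for the authoritative commands), starting or stopping all daemons on a node typically looks like::
sudo start ceph-all            # Upstart (Ubuntu)
sudo stop ceph-all
sudo /etc/init.d/ceph start    # sysvinit (CentOS, Red Hat, Fedora, SLES)
sudo /etc/init.d/ceph stop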
To learn more about peering and cluster health, see `Monitoring a Cluster`_.
To learn more about Ceph OSD Daemon and placement group health, see
`Monitoring OSDs and PGs`_. To learn more about managing users, see
`User Management`_.
Once you deploy a Ceph cluster, you can try out some of the administration
functionality, the ``rados`` object store command line, and then proceed to
Quick Start guides for Ceph Block Device, Ceph Filesystem, and the Ceph Object
@@ -208,7 +208,7 @@ cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to ``node1``.
Then add a Ceph Monitor to ``node2`` and ``node3`` to establish a
quorum of Ceph Monitors.
.. ditaa::
/------------------\ /----------------\
| cephdeploy | | node1 |
| Admin Node | | cCCC |
@@ -279,13 +279,13 @@ create a metadata server::
ceph-deploy mds create {ceph-node}
For example::
ceph-deploy mds create node1
.. note:: Currently Ceph runs in production with one metadata server only. You
may use more, but there is currently no commercial support for a cluster
with multiple metadata servers.
@@ -308,12 +308,12 @@ For example::
Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::
ceph quorum_status --format json-pretty
.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
configure NTP on each monitor host. Ensure that the
monitors are NTP peers.
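For instance, installing NTP on a monitor host is typically just::
sudo yum install ntp ntpdate      # CentOS/RHEL/Fedora
sudo apt-get install ntp          # Debian/Ubuntu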
@@ -321,7 +321,7 @@ the following::
Storing/Retrieving Object Data
==============================
To store object data in the Ceph Storage Cluster, a Ceph client must:
#. Set an object name
#. Specify a `pool`_
@@ -330,39 +330,39 @@ The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
calculates how to map the object to a `placement group`_, and then calculates
how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::
ceph osd map {poolname} {object-name}
.. topic:: Exercise: Locate an Object
As an exercise, let's create an object. Specify an object name, a path to
a test file containing some object data and a pool name using the
``rados put`` command on the command line. For example::
echo {Test-data} > testfile.txt
rados put {object-name} {file-path} --pool=data
rados put test-object-1 testfile.txt --pool=data
To verify that the Ceph Storage Cluster stored the object, execute
the following::
rados -p data ls
Now, identify the object location::
ceph osd map {pool-name} {object-name}
ceph osd map data test-object-1
Ceph should output the object's location. For example::
osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]
To remove the test object, simply delete it using the ``rados rm``
command. For example::
rados rm test-object-1 --pool=data
As the cluster evolves, the object location may change dynamically. One benefit
of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
the migration manually.