=============================
 Storage Cluster Quick Start
=============================

If you haven't completed your `Preflight Checklist`_, do that first. This
**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
on your admin node. Create a three Ceph Node cluster so you can
explore Ceph functionality.

.. ditaa::
           /------------------\         /----------------\
           |    Admin Node    |         |   ceph–node1   |
           |                  +-------->+   cCCC         |
           |    ceph–deploy   |         | mon.ceph–node1 |
           \---------+--------/         \----------------/
                     |
                     |                  /----------------\
                     |                  |   ceph–node2   |
                     +----------------->+   cCCC         |
                     |                  |     osd.0      |
                     |                  \----------------/
                     |
                     |                  /----------------\
                     |                  |   ceph–node3   |
                     +----------------->|   cCCC         |
                                        |     osd.1      |
                                        \----------------/

For best results, create a directory on your admin node for maintaining the
configuration that ``ceph-deploy`` generates for your cluster. ::

    mkdir my-cluster
    cd my-cluster

.. tip:: The ``ceph-deploy`` utility will output files to the
   current directory. Ensure you are in this directory when executing
   ``ceph-deploy``.

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and two
Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
by adding a third Ceph OSD Daemon, a Metadata Server, and two more Ceph Monitors.

.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
   if you are logged in as a different user, because it will not issue the
   ``sudo`` commands needed on the remote host.

Create a Cluster
================

If at any point you run into trouble and you want to start over, execute
the following::

    ceph-deploy purgedata {ceph-node} [{ceph-node}]
    ceph-deploy forgetkeys

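If you also want to remove the Ceph packages from your nodes so that you can
test the install step again, recent ``ceph-deploy`` releases provide a
``purge`` subcommand (mentioned here as a convenience; verify that it exists in
your version with ``ceph-deploy --help``). After purging you must reinstall
Ceph. ::

    ceph-deploy purge {ceph-node} [{ceph-node}]
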
On your admin node, perform the following steps using ``ceph-deploy``.

#. Create the cluster. ::

      ceph-deploy new {ceph-node}
      ceph-deploy new ceph-node1

   Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the current
   directory. You should see a Ceph configuration file, a keyring, and a log
   file for the new cluster. See `ceph-deploy new -h`_ for additional details.

#. Install Ceph. ::

      ceph-deploy install {ceph-node} [{ceph-node} ...]
      ceph-deploy install ceph-node1 ceph-node2 ceph-node3

#. Add a Ceph Monitor. ::

      ceph-deploy mon create {ceph-node}
      ceph-deploy mon create ceph-node1

#. Gather keys. ::

      ceph-deploy gatherkeys {ceph-node}
      ceph-deploy gatherkeys ceph-node1

   Once you have gathered keys, your local directory should have the following
   keyrings:

   - ``{cluster-name}.client.admin.keyring``
   - ``{cluster-name}.bootstrap-osd.keyring``
   - ``{cluster-name}.bootstrap-mds.keyring``

#. Add two OSDs. For fast setup, this quick start uses a directory rather
   than an entire disk per Ceph OSD Daemon. See `ceph-deploy osd`_ for
   details on using separate disks/partitions for OSDs and journals.
   Log in to the Ceph Nodes and create a directory for
   the Ceph OSD Daemon. ::

      ssh ceph-node2
      sudo mkdir /tmp/osd0
      exit

      ssh ceph-node3
      sudo mkdir /tmp/osd1
      exit

   Then, from your admin node, use ``ceph-deploy`` to prepare the OSDs. ::

      ceph-deploy osd prepare {ceph-node}:/path/to/directory
      ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1

   Finally, activate the OSDs. ::

      ceph-deploy osd activate {ceph-node}:/path/to/directory
      ceph-deploy osd activate ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1

#. Use ``ceph-deploy`` to copy the configuration file and admin key to
   your admin node and your Ceph Nodes so that you can use the ``ceph``
   CLI without having to specify the monitor address and
   ``ceph.client.admin.keyring`` each time you execute a command. ::

      ceph-deploy admin {ceph-node}
      ceph-deploy admin admin-node ceph-node1 ceph-node2 ceph-node3

   **Note:** Since you are using ``ceph-deploy`` to talk to the
   local host, your host must be reachable by its hostname
   (e.g., you can modify ``/etc/hosts`` if necessary). Ensure that
   you have the correct permissions for the ``ceph.client.admin.keyring``.

#. Check your cluster's health. ::

      ceph health

   Your cluster should return an ``active + clean`` state when it
   has finished peering. For a few more ways to confirm that the cluster came
   up as expected, see the status commands shown just after this procedure.

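The following read-only commands are a quick way to confirm that the monitor is
running and that both OSDs are ``up`` and ``in``. This is a minimal sketch; the
exact output format varies by Ceph release. ::

    ceph -s         # one-shot cluster summary: monitors, OSDs, placement groups
    ceph osd tree   # OSDs grouped by host, with their up/down and in/out status
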
Operating Your Cluster
======================

Deploying a Ceph cluster with ``ceph-deploy`` automatically starts the cluster.
To operate the cluster daemons with Debian/Ubuntu distributions, see
`Running Ceph with Upstart`_. To operate the cluster daemons with CentOS,
Red Hat, Fedora, and SLES distributions, see `Running Ceph with sysvinit`_.

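As a rough sketch (the pages linked above are authoritative for the exact
service names on your release), Upstart-based systems start and stop all Ceph
daemons on a node with ``start``/``stop`` jobs, while sysvinit-based systems
use the ``/etc/init.d/ceph`` script::

    sudo start ceph-all            # Upstart: start all Ceph daemons on this node
    sudo stop ceph-all             # Upstart: stop all Ceph daemons on this node
    sudo /etc/init.d/ceph start    # sysvinit: start all Ceph daemons on this node
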
To learn more about peering and cluster health, see `Monitoring a Cluster`_.
To learn more about Ceph OSD Daemon and placement group health, see
`Monitoring OSDs and PGs`_.

Once you deploy a Ceph cluster, you can try out some of the administration
functionality, the ``rados`` object store command line, and then proceed to
Quick Start guides for Ceph Block Device, Ceph Filesystem, and the Ceph Object
Gateway.

Expanding Your Cluster
======================

Once you have a basic cluster up and running, the next step is to expand the
cluster. Add a Ceph OSD Daemon and a Ceph Metadata Server to ``ceph-node1``.
Then add a Ceph Monitor to ``ceph-node2`` and ``ceph-node3`` to establish a
quorum of Ceph Monitors.

.. ditaa::
           /------------------\         /----------------\
           |    ceph–deploy   |         |   ceph–node1   |
           |    Admin Node    |         |   cCCC         |
           |                  +-------->+ mon.ceph–node1 |
           |                  |         |     osd.2      |
           |                  |         | mds.ceph–node1 |
           \---------+--------/         \----------------/
                     |
                     |                  /----------------\
                     |                  |   ceph–node2   |
                     |                  |   cCCC         |
                     +----------------->+                |
                     |                  |     osd.0      |
                     |                  | mon.ceph–node2 |
                     |                  \----------------/
                     |
                     |                  /----------------\
                     |                  |   ceph–node3   |
                     |                  |   cCCC         |
                     +----------------->+                |
                                        |     osd.1      |
                                        | mon.ceph–node3 |
                                        \----------------/

Adding an OSD
-------------

Since you are running a 3-node cluster for demonstration purposes, add the OSD
to the monitor node. ::

    ssh ceph-node1
    sudo mkdir /tmp/osd2
    exit

Then, from your ``ceph-deploy`` node, prepare the OSD. ::

    ceph-deploy osd prepare {ceph-node}:/path/to/directory
    ceph-deploy osd prepare ceph-node1:/tmp/osd2

Finally, activate the OSD. ::

    ceph-deploy osd activate {ceph-node}:/path/to/directory
    ceph-deploy osd activate ceph-node1:/tmp/osd2

Once you have added your new OSD, Ceph will begin rebalancing the cluster by
migrating placement groups to your new OSD. You can observe this process with
the ``ceph`` CLI. ::

    ceph -w

You should see the placement group states change from ``active+clean`` to
``active`` with some degraded objects, and finally back to ``active+clean``
when migration completes. (Press Control-C to exit.)

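To confirm that the new daemon joined the cluster, ``ceph osd stat`` prints a
one-line summary of how many OSDs exist and how many are ``up`` and ``in``;
after this step it should report three. ::

    ceph osd stat
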
Add a Metadata Server
---------------------

To use CephFS, you need at least one metadata server. Execute the following to
create a metadata server::

    ceph-deploy mds create {ceph-node}
    ceph-deploy mds create ceph-node1

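To confirm that the metadata server is running, ``ceph mds stat`` prints a
brief summary of metadata server state (the exact format varies by release). ::

    ceph mds stat
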
.. note:: Currently Ceph runs in production with one metadata server only. You
   may use more, but there is currently no commercial support for a cluster
   with multiple metadata servers.

Adding Monitors
---------------

A Ceph Storage Cluster requires at least one Ceph Monitor to run. For high
availability, Ceph Storage Clusters typically run multiple Ceph
Monitors so that the failure of a single Ceph Monitor will not bring down the
Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority
of monitors (i.e., 1 of 1, 2 of 3, 3 of 4, 3 of 5, 4 of 6, etc.) to form a
quorum. For example, with three monitors the cluster keeps its quorum when one
monitor fails, but loses it when two fail.

Add two Ceph Monitors to your cluster. ::

    ceph-deploy mon create {ceph-node}
    ceph-deploy mon create ceph-node2 ceph-node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing
the monitors and form a quorum. You can check the quorum status by executing
the following::

    ceph quorum_status

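The quorum status is reported as JSON. For a one-line summary of the monitors
and the current quorum, you can also run::

    ceph mon stat
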
Storing/Retrieving Object Data
==============================

To store object data in the Ceph Storage Cluster, a Ceph client must:

#. Set an object name
#. Specify a `pool`_

The Ceph Client retrieves the latest cluster map, and the CRUSH algorithm
calculates how to map the object to a `placement group`_ and then how to
assign the placement group to a Ceph OSD Daemon dynamically. To find the
object location, all you need is the object name and the pool name. For
example::

    ceph osd map {poolname} {object-name}

.. topic:: Exercise: Locate an Object

   As an exercise, let's create an object. Specify an object name, a path to
   a test file containing some object data, and a pool name using the
   ``rados put`` command on the command line. For example::

      rados put {object-name} {file-path} --pool=data
      rados put test-object-1 testfile.txt --pool=data

   To verify that the Ceph Storage Cluster stored the object, execute
   the following::

      rados -p data ls

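   You can also read the object back to check the round trip; ``rados get``
   writes the object's contents to a local file (the output file name here is
   just an example)::

      rados get test-object-1 testfile.txt.copy --pool=data
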
   Now, identify the object location::

      ceph osd map {pool-name} {object-name}
      ceph osd map data test-object-1

   Ceph should output the object's location. For example::

      osdmap e537 pool 'data' (0) object 'test-object-1' -> pg 0.d1743484 (0.4) -> up [1,0] acting [1,0]

   To remove the test object, simply delete it using the ``rados rm``
   command. For example::

      rados rm test-object-1 --pool=data

   As the cluster evolves, the object location may change dynamically. One
   benefit of Ceph's dynamic rebalancing is that Ceph relieves you from having
   to perform the migration manually.

.. _Preflight Checklist: ../quick-start-preflight
.. _Ceph Deploy: ../../rados/deployment
.. _ceph-deploy install -h: ../../rados/deployment/ceph-deploy-install
.. _ceph-deploy new -h: ../../rados/deployment/ceph-deploy-new
.. _ceph-deploy osd: ../../rados/deployment/ceph-deploy-osd
.. _Running Ceph with Upstart: ../../rados/operations/operating#running-ceph-with-upstart
.. _Running Ceph with sysvinit: ../../rados/operations/operating#running-ceph-with-sysvinit
.. _CRUSH Map: ../../rados/operations/crush-map
.. _pool: ../../rados/operations/pools
.. _placement group: ../../rados/operations/placement-groups
.. _Monitoring a Cluster: ../../rados/operations/monitoring
.. _Monitoring OSDs and PGs: ../../rados/operations/monitoring-osd-pg