===========================
Installing a Ceph cluster
===========================

For development and really early stage testing, see :doc:`/dev/index`.
For installing the latest development builds, see
:doc:`/ops/autobuilt`.

Installing any complex distributed software can be a lot of work. We
support two automated ways of installing Ceph: using Chef_, or with
the ``mkcephfs`` shell script.

.. _Chef: http://wiki.opscode.com/display/chef

.. topic:: Status as of 2011-09

   This section hides a lot of the tedious underlying details. If you
   need to, or wish to, roll your own deployment automation, or are
   doing it manually, you'll have to dig into a lot more intricate
   details. We are working on simplifying the installation, as that
   also simplifies our Chef cookbooks.

.. _install-chef:

Installing Ceph using Chef
==========================

(Try saying that fast 10 times.)

.. topic:: Status as of 2011-09

   While we have Chef cookbooks in use internally, they are not yet
   ready to handle unsupervised installation of a full cluster. Stay
   tuned for updates.

.. todo:: write me

Installing Ceph using ``mkcephfs``
==================================

.. note:: ``mkcephfs`` is meant as a quick bootstrapping tool. It does
   not handle more complex operations, such as upgrades. For
   production clusters, you will want to use the :ref:`Chef cookbooks
   <install-chef>`.

Pick a host that has the Ceph software installed -- it does not have
to be a part of your cluster, but it does need to have *matching
versions* of the ``mkcephfs`` command and other Ceph tools
installed. This will be your `admin host`.

Installing the packages
-----------------------

.. _install-debs:

Debian/Ubuntu
~~~~~~~~~~~~~

We regularly build Debian and Ubuntu packages for the `amd64` and
`i386` architectures, for the following distributions:

- ``sid`` (Debian unstable)
- ``squeeze`` (Debian 6.0)
- ``lenny`` (Debian 5.0)
- ``oneiric`` (Ubuntu 11.10)
- ``natty`` (Ubuntu 11.04)
- ``maverick`` (Ubuntu 10.10)

.. todo:: http://ceph.newdream.net/debian/dists/ also has ``lucid``
   (Ubuntu 10.04), should that be removed?

Whenever we say *DISTRO* below, replace that with the codename of your
operating system.
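
If you are not sure which codename your system uses, ``lsb_release``
will print it (assuming the ``lsb-release`` package is installed)::

    lsb_release -sc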

Run these commands on all nodes::

    wget -q -O- https://raw.github.com/NewDreamNetwork/ceph/master/keys/release.asc \
    | sudo apt-key add -

    sudo tee /etc/apt/sources.list.d/ceph.list <<EOF
    deb http://ceph.newdream.net/debian/ DISTRO main
    deb-src http://ceph.newdream.net/debian/ DISTRO main
    EOF

    sudo apt-get update
    sudo apt-get install ceph
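
Afterwards, it is worth confirming that every node ended up with the
same Ceph version, since ``mkcephfs`` later requires matching
versions. A quick check to run on each node and compare::

    ceph -v                      # version reported by the Ceph tools
    dpkg -s ceph | grep Version  # version of the installed package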

.. todo:: For older distributions, you may need to make sure your
   apt-get can read .bz2 compressed files. This works for Debian
   Lenny 5.0.3: ``apt-get install bzip2``

.. todo:: Ponder packages; ceph.deb currently pulls in gceph (ceph.deb
   Recommends: ceph-client-tools ceph-fuse libceph1 librados2 librbd1
   btrfs-tools gceph) (other interesting: ceph-client-tools ceph-fuse
   libceph-dev librados-dev librbd-dev obsync python-ceph radosgw)

Red Hat / CentOS / Fedora
~~~~~~~~~~~~~~~~~~~~~~~~~

.. topic:: Status as of 2011-09

   We do not currently provide prebuilt RPMs, but we do provide a spec
   file that should work. The following will guide you through
   compiling it yourself.

To ensure you have the right build dependencies, run::

    yum install rpm-build rpmdevtools git fuse-devel libtool \
    libtool-ltdl-devel boost-devel libedit-devel openssl-devel \
    gcc-c++ nss-devel libatomic_ops-devel make

To set up an RPM compilation environment, run::

    rpmdev-setuptree

To fetch the Ceph source tarball, run::

    wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-0.34.tar.gz

.. topic:: Status as of 2011-09

   Release v0.34 does not contain a ceph.spec yet. Until v0.35 is
   released, you can fetch a usable spec file and then start the
   compilation::

      wget https://raw.github.com/gist/1214596/5b6b5b0e978221e36fa2f7c795544ed50b6e9593/ceph.spec
      rpmbuild -bb ceph.spec

   Once v0.35 is released, this should suffice::

      rpmbuild -tb ~/rpmbuild/SOURCES/ceph-0.35.tar.gz

Finally, install the RPMs::

    rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
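
As on Debian, it is worth confirming the result. A quick check (the
exact package set may differ depending on the spec file)::

    rpm -q ceph  # installed package version
    ceph -v      # version reported by the Ceph tools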

.. todo:: Other operating system support.

Creating a ``ceph.conf`` file
-----------------------------

On the `admin host`, create a file with a name like
``mycluster.conf``.

Here's a template for a 3-node cluster, where all three machines run a
:ref:`monitor <monitor>` and an :ref:`object store <rados>`, and the
first one runs the :ref:`Ceph filesystem daemon <cephfs>`. Replace the
hostnames and IP addresses with your own, and add/remove hosts as
appropriate. All hostnames *must* be in short form (no domain).

.. literalinclude:: mycluster.conf
   :language: ini

Note how the ``host`` variables dictate which node runs which
services. See :doc:`/ops/config` for more information.
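
For instance, a fragment like the following (a sketch with a
hypothetical hostname, not the full template) is what tells
``mkcephfs`` and the init scripts that ``osd.0`` should run on
``myserver01``::

    [osd.0]
        host = myserver01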

.. todo:: More specific link for host= convention.

.. todo:: Point to cluster design docs, once they are ready.

.. todo:: At this point, either use 1 or 3 mons, point to :doc:`grow/mon`

Running ``mkcephfs``
--------------------

Verify that you can manage the nodes from the host you intend to run
``mkcephfs`` on (a quick check is sketched after the list):

- Make sure you can SSH_ from the `admin host` into all the nodes
  using the short hostnames (``myserver`` not
  ``myserver.mydept.example.com``), with no user specified
  [#ssh_config]_.

- Make sure you can SSH_ from the `admin host` into all the nodes
  as ``root`` using the short hostnames.

- Make sure you can run ``sudo`` without passphrase prompts on all
  nodes [#sudo]_.

.. _SSH: http://openssh.org/
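
A minimal way to exercise all three checks at once, assuming the
hypothetical short hostnames ``myserver01`` through ``myserver03``::

    for host in myserver01 myserver02 myserver03; do
        ssh $host true          # SSH with no user specified
        ssh root@$host true     # SSH as root
        ssh $host sudo -n true  # sudo without a passphrase prompt
    done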

If you are not using :ref:`Btrfs <btrfs>`, enable :ref:`extended
attributes <xattr>`.

On each node, make sure the directory ``/srv/osd.N`` (with the
appropriate ``N``) exists, and the right filesystem is mounted. If you
are not using a separate filesystem for the file store, just run
``sudo mkdir /srv/osd.N`` (with the right ``N``).
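
For example, if ``osd.0`` keeps its file store on a dedicated
partition, the preparation on that node might look like this (a
sketch only: ``/dev/sdb1`` and ext4 are assumptions, and the
``user_xattr`` option covers the extended-attributes requirement
above)::

    sudo mkdir -p /srv/osd.0
    sudo mount -o user_xattr /dev/sdb1 /srv/osd.0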

Then, using the right path to the ``mycluster.conf`` file you prepared
earlier, run::

    mkcephfs -a -c mycluster.conf -k mycluster.keyring

This will place an `admin key` into ``mycluster.keyring``; the key is
used to manage the cluster. Treat it like a ``root`` password to your
filesystem.
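
Because anyone who can read this keyring can administer the cluster,
it is sensible to keep its permissions tight on the `admin host`, for
example::

    chmod 600 mycluster.keyring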

.. todo:: Link to explanation of `admin key`.

That should SSH into all the nodes and set up Ceph for you.

It does **not** copy the configuration, or start the services. Let's
do that::

    ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
    ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
    ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
    ...

    ssh myserver01 sudo /etc/init.d/ceph start
    ssh myserver02 sudo /etc/init.d/ceph start
    ssh myserver03 sudo /etc/init.d/ceph start
    ...

After a little while, the cluster should come up and reach a healthy
state. We can check that::

    ceph -k mycluster.keyring -c mycluster.conf health
    2011-09-06 12:33:51.561012 mon <- [health]
    2011-09-06 12:33:51.562164 mon2 -> 'HEALTH_OK' (0)
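
If you want more detail than ``health`` gives, the same keyring and
configuration file can be used to ask for the overall cluster status;
the exact output will differ for your cluster::

    ceph -k mycluster.keyring -c mycluster.conf -s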

.. todo:: Document "healthy"

.. todo:: Improve output.

.. rubric:: Footnotes

.. [#ssh_config] Something like this in your ``~/.ssh/config`` may
   help -- unfortunately you need an entry per node::

      Host myserverNN
           Hostname myserverNN.dept.example.com
           User ubuntu

.. [#sudo] The relevant ``sudoers`` syntax looks like this::

      %admin ALL=(ALL) NOPASSWD:ALL