doc: misc updates
doc/architecture.rst - removed broken reference.
doc/config-cluster - cleanup and added chef.
doc/install - Made generic to add Chef, OpenStack and libvirt installs.
doc/init - Created light start | stop and health section.
doc/source - Removed $ from code examples. Trimmed paras to 80 chars.
doc/images - Added preliminary diagram for Chef.
doc/rec - Added reference to hardware. Added filesystem info.

Signed-off-by: John Wilkins <john.wilkins@dreamhost.com>
This commit is contained in:
parent
3a2dc969ff
commit
812989bf35
doc/architecture.rst
@@ -80,7 +80,7 @@ metadata to store file owner etc.
 Underneath, ``ceph-osd`` stores the data on a local filesystem. We
 recommend using Btrfs_, but any POSIX filesystem that has extended
-attributes should work (see :ref:`xattr`).
+attributes should work.

 .. _Btrfs: http://en.wikipedia.org/wiki/Btrfs
doc/config-cluster/ceph-conf.rst
@@ -13,12 +13,12 @@ Each process or daemon looks for a ``ceph.conf`` file that provides their
 configuration settings. The default ``ceph.conf`` locations in sequential
 order include:

-1. ``$CEPH_CONF`` (*i.e.,* the path following
-   the ``$CEPH_CONF`` environment variable)
-2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
-3. ``/etc/ceph/ceph.conf``
-4. ``~/.ceph/config``
-5. ``./ceph.conf`` (*i.e.,* in the current working directory)
+#. ``$CEPH_CONF`` (*i.e.,* the path following
+   the ``$CEPH_CONF`` environment variable)
+#. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
+#. ``/etc/ceph/ceph.conf``
+#. ``~/.ceph/config``
+#. ``./ceph.conf`` (*i.e.,* in the current working directory)

 The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
 have installed the Ceph packages on the OSD Cluster hosts, you need to create
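To make the lookup order concrete, here is a hedged illustration (the paths are
hypothetical; ``ceph`` honors ``$CEPH_CONF`` and ``-c`` as the list above describes)::

    CEPH_CONF=/srv/mycluster.conf ceph health   # 1. $CEPH_CONF environment variable
    ceph -c /srv/mycluster.conf health          # 2. -c command line argument
    ceph health                                 # 3. falls back to /etc/ceph/ceph.conf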
@@ -124,26 +124,24 @@ alphanumeric for monitors and metadata servers. ::

 ``host`` and ``addr`` Settings
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The :doc:`/install/hardware-recommendations` section provides some hardware guidelines for
-configuring the cluster. It is possible for a single host to run
-multiple daemons. For example, a single host with multiple disks or
-RAIDs may run one ``ceph-osd`` for each disk or RAID. Additionally, a
-host may run both a ``ceph-mon`` and an ``ceph-osd`` daemon on the
-same host. Ideally, you will have a host for a particular type of
-process. For example, one host may run ``ceph-osd`` daemons, another
-host may run a ``ceph-mds`` daemon, and other hosts may run
-``ceph-mon`` daemons.
+The `Hardware Recommendations <../hardware-recommendations>`_ section
+provides some hardware guidelines for configuring the cluster. It is possible
+for a single host to run multiple daemons. For example, a single host with
+multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
+Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon
+on the same host. Ideally, you will have a host for a particular type of
+process. For example, one host may run ``ceph-osd`` daemons, another host
+may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.

-Each host has a name identified by the ``host`` setting, and a network
-location (i.e., domain name or IP address) identified by the ``addr``
-setting. For example::
+Each host has a name identified by the ``host`` setting, and a network location
+(i.e., domain name or IP address) identified by the ``addr`` setting. For example::

     [osd.1]
         host = hostNumber1
-        addr = 150.140.130.120:1100
+        addr = 150.140.130.120
     [osd.2]
         host = hostNumber1
-        addr = 150.140.130.120:1102
+        addr = 150.140.130.120


 Monitor Configuration
@@ -155,7 +153,12 @@ algorithm can determine which version of the cluster map is the most accurate.
 .. note:: You may deploy Ceph with a single monitor, but if the instance fails,
    the lack of a monitor may interrupt data service availability.

-Ceph monitors typically listen on port ``6789``.
+Ceph monitors typically listen on port ``6789``. For example::
+
+    [mon.a]
+        host = hostNumber1
+        addr = 150.140.130.120:6789


 Example Configuration File
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -168,13 +171,11 @@ Configuration File Deployment Options
 The most common way to deploy the ``ceph.conf`` file in a cluster is to have
 all hosts share the same configuration file.

-You may create a ``ceph.conf`` file for each host if you wish, or
-specify a particular ``ceph.conf`` file for a subset of hosts within
-the cluster. However, using per-host ``ceph.conf`` configuration files
-imposes a maintenance burden as the cluster grows. In a typical
-deployment, an administrator creates a ``ceph.conf`` file on the
-Administration host and then copies that file to each OSD Cluster
-host.
+You may create a ``ceph.conf`` file for each host if you wish, or specify a
+particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
+using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
+cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
+on the Administration host and then copies that file to each OSD Cluster host.

 The current cluster deployment script, ``mkcephfs``, does not make copies of the
 ``ceph.conf``. You must copy the file manually.
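Because ``mkcephfs`` does not copy ``ceph.conf`` for you, a small loop built on
the same ``ssh``/``tee`` pattern used later in this commit can push the shared
file out (hostnames are hypothetical)::

    for host in myserver01 myserver02 myserver03; do
        ssh $host sudo tee /etc/ceph/ceph.conf <ceph.conf
    done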
doc/config-cluster/chef.rst (new file, 89 lines)
=====================
 Deploying with Chef
=====================

We use Chef cookbooks to deploy Ceph. See `Managing Cookbooks with Knife`_ for
details on using ``knife``.

Add a Cookbook Path
-------------------
Add the ``cookbook_path`` to your ``~/.chef/knife.rb`` configuration file. For
example::

    cookbook_path '/home/userId/.chef/ceph-cookbooks'
Install Ceph Cookbooks
----------------------
To get the cookbooks for Ceph, clone them from git and upload them to the Chef
server. ::

    cd ~/.chef
    git clone https://github.com/ceph/ceph-cookbooks.git
    knife cookbook upload btrfs parted
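To confirm the upload, a quick check (not part of the original text) is to list
the cookbooks the Chef server knows about::

    knife cookbook list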
Install Apache Cookbooks
------------------------
RADOS Gateway uses Apache 2, so you must install the Apache 2 cookbooks.
To retrieve the Apache 2 cookbooks, execute the following::

    cd ~/.chef/ceph-cookbooks
    knife cookbook site download apache2

The ``apache2-{version}.tar.gz`` archive will appear in your working directory
(``~/.chef/ceph-cookbooks``). In the following example, replace ``{version}``
with the version of the Apache 2 cookbook archive that ``knife`` retrieved.
Then, expand the archive and upload it to the Chef server. ::

    tar xvf apache2-{version}.tar.gz
    knife cookbook upload apache2
Configure Chef
--------------
To configure Chef, you must specify an environment and a series of roles. You
may use the Web UI or ``knife`` to perform these tasks.

The following instructions demonstrate how to perform these tasks with ``knife``.

Create a role file for the Ceph monitor. ::

    cat >ceph-mon.rb <<EOF
    name "ceph-mon"
    description "Ceph monitor server"
    run_list(
        'recipe[ceph::single_mon]'
    )
    EOF

Create a role file for the OSDs. ::

    cat >ceph-osd.rb <<EOF
    name "ceph-osd"
    description "Ceph object store"
    run_list(
        'recipe[ceph::bootstrap_osd]'
    )
    EOF

Add the roles to Chef using ``knife``. ::

    knife role from file ceph-mon.rb ceph-osd.rb
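As a sanity check beyond the original walkthrough, you can verify that both
roles registered with the Chef server::

    knife role list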
You may also perform the same tasks with the command line and a ``vim`` editor.
Set an ``EDITOR`` environment variable. ::

    export EDITOR=vi

Then execute::

    knife role create {rolename}

The ``vim`` editor opens with a JSON object, and you may edit the settings and
save the JSON file.

Finally, configure the nodes. ::

    knife node edit {nodename}


.. _Managing Cookbooks with Knife: http://wiki.opscode.com/display/chef/Managing+Cookbooks+With+Knife
@@ -1,4 +1,5 @@
 [global]
+    ; use cephx or none
     auth supported = cephx
     keyring = /etc/ceph/$name.keyring

@@ -11,6 +12,8 @@
     osd data = /srv/osd.$id
     osd journal = /srv/osd.$id.journal
     osd journal size = 1000
+    ; uncomment the following line if you are mounting with ext4
+    ; filestore xattr use omap = true

 [mon.a]
     host = myserver01
doc/config-cluster/deploying-ceph-conf.rst
@@ -1,10 +1,11 @@
 ==============================
  Deploying Ceph Configuration
 ==============================
-Ceph's current deployment script does not copy the configuration file you
+Ceph's ``mkcephfs`` deployment script does not copy the configuration file you
 created from the Administration host to the OSD Cluster hosts. Copy the
 configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
-from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
+from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host
+if you are using ``mkcephfs`` to deploy Ceph.

 ::

@@ -12,18 +13,9 @@ from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
     ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
     ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf

-The current deployment script doesn't copy the start services. Copy the ``start``
-services from the Administration host to each OSD Cluster host. ::
-
-    ssh myserver01 sudo /etc/init.d/ceph start
-    ssh myserver02 sudo /etc/init.d/ceph start
-    ssh myserver03 sudo /etc/init.d/ceph start
-
-The current deployment script may not create the default server directories. Create
-server directories for each instance of a Ceph daemon.
-
-Using the exemplary ``ceph.conf`` file, you would perform the following:
+The current deployment script does not create the default server directories. Create
+server directories for each instance of a Ceph daemon. Using the exemplary
+``ceph.conf`` file, you would perform the following:

 On ``myserver01``::
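Based on the exemplary ``ceph.conf`` shown earlier (``osd data = /srv/osd.$id``),
the per-host directory creation plausibly looks like the following sketch (the
daemon instance IDs are hypothetical)::

    ssh myserver01 sudo mkdir /srv/osd.0 /srv/mon.a
    ssh myserver02 sudo mkdir /srv/osd.1
    ssh myserver03 sudo mkdir /srv/osd.2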
doc/config-cluster/deploying-ceph-with-mkcephfs.rst
@@ -1,31 +1,17 @@
-================================
-Deploying Ceph with ``mkcephfs``
-================================
+==================================
+ Deploying Ceph with ``mkcephfs``
+==================================

 Once you have copied your Ceph Configuration to the OSD Cluster hosts,
 you may deploy Ceph with the ``mkcephfs`` script.

-.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
+.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more
+   complex operations, such as upgrades.

-For production environments, you will also be able to deploy Ceph using Chef cookbooks (coming soon!).
+For production environments, you deploy Ceph using Chef cookbooks. To run
+``mkcephfs``, execute the following::

-To run ``mkcephfs``, execute the following::
-
-    $ mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
-
-The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password.
-
-To start the cluster, execute the following::
-
-    /etc/init.d/ceph -a start
-
-Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
-
-    ceph health
-
-If you specified non-default locations for your configuration or keyring::
-
-    ceph -c /path/to/conf -k /path/to/keyring health
+    sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
+
+The script adds an admin key to the ``ceph.keyring``, which is analogous to a
+root password.
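Putting this page together with the start and health-check steps that now live
under ``doc/init``, a full first bring-up reads::

    sudo mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring
    sudo /etc/init.d/ceph -a start
    ceph health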
doc/config-cluster/file-system-recommendations.rst
@@ -1,6 +1,6 @@
-=========================================
-Hard Disk and File System Recommendations
-=========================================
+===========================================
+ Hard Disk and File System Recommendations
+===========================================

 Ceph aims for data safety, which means that when the application receives notice
 that data was written to the disk, that data was actually written to the disk.
@@ -9,7 +9,7 @@ disk. Newer kernels should work fine.

 Use ``hdparm`` to disable write caching on the hard disk::

-    $ hdparm -W 0 /dev/hda 0
+    hdparm -W 0 /dev/hda 0


 Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
@@ -26,7 +26,8 @@ File system candidates for Ceph include B tree and B+ tree file systems such as:
 - ``btrfs``
 - ``XFS``

-If you are using ``ext4``, enable XATTRs. ::
+If you are using ``ext4``, mount your file system to enable XATTRs. You must also
+add the following line to the ``[osd]`` section of your ``ceph.conf`` file. ::

     filestore xattr use omap = true
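End to end, the ``ext4`` case plausibly looks like this sketch (the device and
mount point are hypothetical; ``user_xattr`` is the usual mount option for
enabling XATTRs on ``ext4``)::

    sudo mount -o user_xattr /dev/sdb1 /srv/osd.0

with the matching ``ceph.conf`` entry::

    [osd]
        filestore xattr use omap = true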
doc/config-cluster/index.rst
@@ -27,3 +27,4 @@ instance (a single context).
    Configuration <ceph-conf>
    Deploy Config <deploying-ceph-conf>
    deploying-ceph-with-mkcephfs
+   Deploy with Chef <chef>
doc/images/chef.png (new binary file, 43 KiB; not shown)
doc/images/chef.svg (new file, 17074 lines, 977 KiB; diff suppressed because it is too large)
doc/index.rst
@@ -20,6 +20,7 @@ cluster to ensure that the storage hosts are running smoothly.
    start/index
    install/index
    config-cluster/index
+   init/index
    ops/index
    rec/index
    config
doc/init/check-cluster-health.rst (new file, 16 lines)
=========================
 Checking Cluster Health
=========================
When you start the Ceph cluster, it may take some time to reach a healthy
state. You can check on the health of your Ceph cluster with the following::

    ceph health

If you specified non-default locations for your configuration or keyring::

    ceph -c /path/to/conf -k /path/to/keyring health

Upon starting the Ceph cluster, you will likely encounter a health
warning such as ``HEALTH_WARN XXX num pgs stale``. Wait a few moments and check
it again. When your cluster is ready, ``ceph health`` should return a message
such as ``HEALTH_OK``. At that point, it is okay to begin using the cluster.
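If you prefer to poll until the cluster settles rather than re-running the
command by hand, a simple loop (an illustrative sketch, not from the original
page) works::

    while ! ceph health | grep -q HEALTH_OK; do
        sleep 5
    done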
doc/init/index.rst (new file, 77 lines)
==========================
 Start | Stop the Cluster
==========================
The ``ceph`` process provides functionality to **start**, **restart**, and
**stop** your Ceph cluster. Each time you execute ``ceph``, you must specify at
least one option and one command. You may also specify a daemon type or a daemon
instance. For most newer Debian/Ubuntu distributions, you may use the following
syntax::

    sudo service ceph [options] [commands] [daemons]

For older distributions, you may wish to use the ``/etc/init.d/ceph`` path::

    sudo /etc/init.d/ceph [options] [commands] [daemons]

The ``ceph`` options include:

+-----------------+----------+-------------------------------------------------+
| Option          | Shortcut | Description                                     |
+=================+==========+=================================================+
| ``--verbose``   | ``-v``   | Use verbose logging.                            |
+-----------------+----------+-------------------------------------------------+
| ``--valgrind``  | ``N/A``  | (Developers only) Use `Valgrind`_ debugging.    |
+-----------------+----------+-------------------------------------------------+
| ``--allhosts``  | ``-a``   | Execute on all hosts in ``ceph.conf``.          |
|                 |          | Otherwise, it only executes on ``localhost``.   |
+-----------------+----------+-------------------------------------------------+
| ``--restart``   | ``N/A``  | Automatically restart daemon if it core dumps.  |
+-----------------+----------+-------------------------------------------------+
| ``--norestart`` | ``N/A``  | Don't restart a daemon if it core dumps.        |
+-----------------+----------+-------------------------------------------------+
| ``--conf``      | ``-c``   | Use an alternate configuration file.            |
+-----------------+----------+-------------------------------------------------+

The ``ceph`` commands include:

+------------------+------------------------------------------------------------+
| Command          | Description                                                |
+==================+============================================================+
| ``start``        | Start the daemon(s).                                       |
+------------------+------------------------------------------------------------+
| ``stop``         | Stop the daemon(s).                                        |
+------------------+------------------------------------------------------------+
| ``forcestop``    | Force the daemon(s) to stop. Same as ``kill -9``.          |
+------------------+------------------------------------------------------------+
| ``killall``      | Kill all daemons of a particular type.                     |
+------------------+------------------------------------------------------------+
| ``cleanlogs``    | Cleans out the log directory.                              |
+------------------+------------------------------------------------------------+
| ``cleanalllogs`` | Cleans out **everything** in the log directory.            |
+------------------+------------------------------------------------------------+

The ``ceph`` daemons include the daemon types:

- ``mon``
- ``osd``
- ``mds``

The ``ceph`` daemons may also specify a specific instance::

    sudo /etc/init.d/ceph -a start osd.0

Where ``osd.0`` is the first OSD in the cluster.

.. _Valgrind: http://www.valgrind.org/


.. toctree::
   :hidden:

   start-cluster
   Check Cluster Health <check-cluster-health>
   stop-cluster

See `Operations`_ for more detailed information.

.. _Operations: ../ops/index.html
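Combining an option, a command, and a daemon type from the lists above, the
following usage sketch stops every OSD daemon on every host in ``ceph.conf``::

    sudo /etc/init.d/ceph -a stop osd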
doc/init/start-cluster.rst (new file, 23 lines)
====================
 Starting a Cluster
====================
To start your Ceph cluster, execute ``ceph`` with the ``start`` command.
The usage may differ based upon your Linux distribution. For example, for most
newer Debian/Ubuntu distributions, you may use the following syntax::

    sudo service ceph [options] [start|restart] [daemonType|daemonID]

For older distributions, you may wish to use the ``/etc/init.d/ceph`` path::

    sudo /etc/init.d/ceph [options] [start|restart] [daemonType|daemonID]

The following examples illustrate a typical use case::

    sudo service ceph -a start
    sudo /etc/init.d/ceph -a start

Once you execute with ``-a``, Ceph should begin operating. You may also specify
a particular daemon instance to constrain the command to a single instance. For
example::

    sudo /etc/init.d/ceph start osd.0
doc/init/stop-cluster.rst (new file, 9 lines)
====================
 Stopping a Cluster
====================
To stop a cluster, execute one of the following::

    sudo service ceph stop
    sudo /etc/init.d/ceph -a stop

Ceph should shut down the operating processes.
doc/install/chef.rst (new file, 201 lines)
=================
 Installing Chef
=================
Chef defines three types of entities:

#. **Chef Server:** Manages Chef "nodes."
#. **Chef Nodes:** Managed by the Chef Server.
#. **Chef Workstation:** Manages Chef.

.. image:: ../images/chef.png

See `Chef Architecture Introduction`_ for details.

Identify a host (or hosts) for your Chef server and Chef workstation. You may
install them on the same host. To configure Chef, do the following on
the host designated to operate as the Chef server:

#. Install Ruby
#. Install Chef
#. Install the Chef Server
#. Install Knife
#. Install the Chef Client

Once you have completed the foregoing steps, you may bootstrap the
Chef nodes with ``knife``.
Installing Ruby
---------------
Chef requires you to install Ruby. Use the version applicable to your current
Linux distribution. ::

    sudo apt-get update
    sudo apt-get install ruby
Installing Chef
---------------
.. important:: Before you install Chef, identify the host for your Chef
   server, and its fully qualified URI.

First, add Opscode packages to your APT configuration.
Replace ``{dist.name}`` with the name of your Linux distribution.
For example::

    sudo tee /etc/apt/sources.list.d/chef.list << EOF
    deb http://apt.opscode.com/ {dist.name}-0.10 main
    deb-src http://apt.opscode.com/ {dist.name}-0.10 main
    EOF
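For instance, on an Ubuntu ``oneiric`` host the two entries would plausibly
expand to (the release name is illustrative)::

    deb http://apt.opscode.com/ oneiric-0.10 main
    deb-src http://apt.opscode.com/ oneiric-0.10 main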
Next, you must request keys so that APT can verify the packages. ::

    gpg --keyserver keys.gnupg.net --recv-keys 83EF826A
    gpg --export packages@opscode.com | sudo apt-key add -

To install Chef, execute ``update`` and ``install``. For example::

    sudo apt-get update
    sudo apt-get install chef

Enter the fully qualified URI for your Chef server. For example::

    http://127.0.0.1:4000
Installing Chef Server
----------------------
Once you have installed Chef, you must install the Chef server.
See `Installing Chef Server on Debian or Ubuntu using Packages`_ for details.
For example::

    sudo apt-get install chef-server

The Chef server installer will prompt you to enter a temporary password. Enter
a temporary password (e.g., ``foo``) and proceed with the installation.

.. tip:: As of this writing, we found a bug in the Chef installer.
   When you press **Enter** to get to the password entry field, nothing happens.
   We were able to get to the password entry field by pressing **ESC**.

Once the installer finishes and activates the Chef server, you may enter the fully
qualified URI in a browser to launch the Chef web UI. For example::

    http://127.0.0.1:4000

The Chef web UI will prompt you to enter the username and password.

- **login:** ``admin``
- **password:** ``foo``

Once you have entered the temporary password, the Chef web UI will prompt you
to enter a new password.
Configuring Knife
-----------------
Once you complete the Chef server installation, install ``knife`` on the
Chef server. If the Chef server is a remote host, use ``ssh`` to connect. ::

    ssh username@my-chef-server

In the ``/home/username`` directory, create a hidden Chef directory. ::

    mkdir -p ~/.chef

The server generates validation and web UI certificates with read/write
permissions for the user that installed the Chef server. Copy them from the
``/etc/chef`` directory to the ``~/.chef`` directory. Then, change their
ownership to the current user. ::

    sudo cp /etc/chef/validation.pem /etc/chef/webui.pem ~/.chef
    sudo chown -R $USER ~/.chef

From the current user's home directory, configure ``knife`` with an initial
API client. ::

    knife configure -i

The configuration will prompt you for inputs. Answer accordingly:

*Where should I put the config file? [~/.chef/knife.rb]* Press **Enter**
to accept the default value.

*Please enter the chef server URL:* If you are installing the
client on the same host as the server, enter ``http://localhost:4000``.
Otherwise, enter an appropriate URL for the server.

*Please enter a clientname for the new client:* Press **Enter**
to accept the default value.

*Please enter the existing admin clientname:* Press **Enter**
to accept the default value.

*Please enter the location of the existing admin client's private key:*
Override the default value so that it points to the ``.chef`` directory.
(*e.g.,* ``.chef/webui.pem``)

*Please enter the validation clientname:* Press **Enter** to accept
the default value.

*Please enter the location of the validation key:* Override the
default value so that it points to the ``.chef`` directory.
(*e.g.,* ``.chef/validation.pem``)

*Please enter the path to a chef repository (or leave blank):*
Leave the entry field blank and press **Enter**.
Installing Chef Client
----------------------
Install the Chef client on the Chef Workstation. If you use the same host for
the workstation and server, you may have performed a number of these steps.
See `Installing Chef Client on Ubuntu or Debian`_ for details.

Create a directory for the GPG key. ::

    sudo mkdir -p /etc/apt/trusted.gpg.d

Add the GPG keys and update the index. ::

    gpg --keyserver keys.gnupg.net --recv-keys 83EF826A
    gpg --export packages@opscode.com | sudo tee /etc/apt/trusted.gpg.d/opscode-keyring.gpg > /dev/null

Update APT. ::

    sudo apt-get update

Install the Opscode keyring to ensure the keyring stays up to date. ::

    sudo apt-get install opscode-keyring

The ``chef-client`` requires a ``client.rb`` and a copy of the
``validation.pem`` file. Create a directory for them. ::

    sudo mkdir -p /etc/chef

Create the ``client.rb`` and ``validation.pem`` for ``chef-client``. ::

    sudo knife configure client /etc/chef
Bootstrapping Nodes
-------------------
The fastest way to deploy Chef on nodes is to use ``knife``
to bootstrap each node. Chef must have network access to each host
you intend to configure as a node (e.g., ``NAT``, ``ssh``). Replace
``{dist.vernum}`` with your distribution and version number.
For example::

    knife bootstrap IP_ADDR -d {dist.vernum}-apt --sudo

See `Knife Bootstrap`_ for details.
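For example, bootstrapping a node at ``192.168.0.11`` running Ubuntu 12.04
(both values hypothetical) would follow the pattern above::

    knife bootstrap 192.168.0.11 -d ubuntu12.04-apt --sudo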
Verify Nodes
------------
Verify that you have set up all the hosts you want to use as
Chef nodes. ::

    knife node list

A list of the nodes you've bootstrapped should appear.

.. _Chef Architecture Introduction: http://wiki.opscode.com/display/chef/Architecture+Introduction
.. _Installing Chef Client on Ubuntu or Debian: http://wiki.opscode.com/display/chef/Installing+Chef+Client+on+Ubuntu+or+Debian
.. _Installing Chef Server on Debian or Ubuntu using Packages: http://wiki.opscode.com/display/chef/Installing+Chef+Server+on+Debian+or+Ubuntu+using+Packages
.. _Knife Bootstrap: http://wiki.opscode.com/display/chef/Knife+Bootstrap
doc/install/index.rst
@@ -1,17 +1,25 @@
-=================
- Installing Ceph
-=================
+==============
+ Installation
+==============
 Storage clusters are the foundation of the Ceph system. Ceph storage hosts
 provide object storage. Clients access the Ceph storage cluster directly from
 an application (using ``librados``), over an object storage protocol such as
 Amazon S3 or OpenStack Swift (using ``radosgw``), or with a block device
 (using ``rbd``). To begin using Ceph, you must first set up a storage cluster.

-The following sections provide guidance for configuring a storage cluster and
-installing Ceph components:
+You may deploy Ceph with our ``mkcephfs`` bootstrap utility for development
+and test environments. For production environments, we recommend deploying
+Ceph with the Chef cloud management tool.
+
+If your deployment uses OpenStack, you will also need to install OpenStack.
+
+The following sections provide guidance for installing components used with
+Ceph:

 .. toctree::

+   Hardware Recommendations <hardware-recommendations>
    Installing Debian/Ubuntu Packages <debian>
    Installing RPM Packages <rpm>
+   Installing Chef <chef>
+   Installing OpenStack <openstack>
doc/install/openstack.rst (new file, 3 lines)
======================
 Installing OpenStack
======================
@@ -1,24 +1,45 @@
-=======================================
- Underlying filesystem recommendations
-=======================================
+============
+ Filesystem
+============
+For details on file systems when configuring a cluster, see
+`Hard Disk and File System Recommendations`_.

-.. todo:: Benefits of each, limits on non-btrfs ones, performance data when we have them, etc
+.. tip:: We recommend configuring Ceph to use the ``XFS`` file system in
+   the near term, and ``btrfs`` in the long term once it is stable
+   enough for production.

-.. _btrfs:
+Before ``ext3``, ``ReiserFS`` was the only journaling file system available for
+Linux. However, ``ext3`` doesn't provide Extended Attribute (XATTR) support.
+While ``ext4`` provides XATTR support, it only allows XATTRs up to 4kb. The
+4kb limit is not enough for RADOS GW ACLs, snapshots, and other features. As of
+version 0.45, Ceph provides a ``leveldb`` feature for ``ext4`` file systems
+that stores XATTRs in excess of 4kb in a ``leveldb`` database.
+
+The ``XFS`` and ``btrfs`` file systems provide numerous advantages in highly
+scaled data storage environments when `compared`_ to ``ext3`` and ``ext4``.
+Both ``XFS`` and ``btrfs`` are `journaling file systems`_, which means that
+they are more robust when recovering from crashes, power outages, etc. These
+filesystems journal all of the changes they will make before performing writes.
+
+``XFS`` was developed for Silicon Graphics, and is a mature and stable
+filesystem. By contrast, ``btrfs`` is a relatively new file system that aims
+to address the long-standing wishes of system administrators working with
+large scale data storage environments. ``btrfs`` has some unique features
+and advantages compared to other Linux filesystems.

 Btrfs
 -----
-
-.. todo:: what does btrfs give you (the journaling thing)
-
-ext4/ext3
----------
-
-.. _xattr:
-
-Enabling extended attributes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. todo:: how to enable xattr on ext4/3
+``btrfs`` is a `copy-on-write`_ filesystem. It supports file creation
+timestamps and checksums that verify metadata integrity, so it can detect
+bad copies of data and fix them with the good copies. The copy-on-write
+capability means that ``btrfs`` can support snapshots that are writable.
+``btrfs`` supports transparent compression and other features.
+
+``btrfs`` also incorporates multi-device management into the file system,
+which enables you to support heterogeneous disk storage infrastructure and
+data allocation policies. The community also aims to provide ``fsck``,
+deduplication, and data encryption support in the future. This compelling
+list of features makes ``btrfs`` the ideal choice for Ceph clusters.
+
+.. _copy-on-write: http://en.wikipedia.org/wiki/Copy-on-write
+.. _Hard Disk and File System Recommendations: ../../config-cluster/file-system-recommendations
+.. _compared: http://en.wikipedia.org/wiki/Comparison_of_file_systems
+.. _journaling file systems: http://en.wikipedia.org/wiki/Journaling_file_system
@@ -1,7 +1,7 @@
-==========================
- Hardware recommendations
-==========================
+==========
+ Hardware
+==========

-Discussing the hardware requirements for each daemon, the tradeoffs of
-doing one ceph-osd per machine versus one per disk, and hardware-related
-configuration options like journaling locations.
+See `Hardware Recommendations`_ for details.
+
+.. _Hardware Recommendations: ../../install/hardware-recommendations
@@ -1,51 +1,49 @@
-===================
-Build Ceph Packages
-===================
-
-To build packages, you must clone the `Ceph`_ repository.
-You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
-or ``rpmbuild`` for the RPM Package Manager.
-
-.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of cores * 2.
-   For example, use ``-j4`` for a dual-core processor to accelerate the build.
+=====================
+ Build Ceph Packages
+=====================
+To build packages, you must clone the `Ceph`_ repository. You can create
+installation packages from the latest code using ``dpkg-buildpackage`` for
+Debian/Ubuntu or ``rpmbuild`` for the RPM Package Manager.
+
+.. tip:: When building on a multi-core CPU, use the ``-j`` and the number of
+   cores * 2. For example, use ``-j4`` for a dual-core processor to accelerate
+   the build.

 Advanced Package Tool (APT)
 ---------------------------
-To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``debhelper``::
+To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the
+`Ceph`_ repository, installed the `build prerequisites`_ and installed
+``debhelper``::

-    $ sudo apt-get install debhelper
+    sudo apt-get install debhelper

 Once you have installed debhelper, you can build the packages:

-    $ sudo dpkg-buildpackage
+    sudo dpkg-buildpackage

 For multi-processor CPUs use the ``-j`` option to accelerate the build.

 RPM Package Manager
 -------------------
-To create ``.prm`` packages, ensure that you have cloned the `Ceph`_ repository,
-installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
+To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
+installed the `build prerequisites`_ and installed ``rpm-build`` and
+``rpmdevtools``::

-    $ yum install rpm-build rpmdevtools
+    yum install rpm-build rpmdevtools

 Once you have installed the tools, setup an RPM compilation environment::

-    $ rpmdev-setuptree
+    rpmdev-setuptree

 Fetch the source tarball for the RPM compilation environment::

-    $ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
+    wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz

 Build the RPM packages::

-    $ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
+    rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz

 For multi-processor CPUs use the ``-j`` option to accelerate the build.


 .. _build prerequisites: ../build-prerequisites
 .. _Ceph: ../cloning-the-ceph-source-code-repository
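Following the tip above, a parallel Debian package build on a dual-core machine
would be::

    sudo dpkg-buildpackage -j4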
@@ -1,15 +1,16 @@
-===================
-Build Prerequisites
-===================
-
-Before you can build Ceph source code or Ceph documentation, you need to install several libraries and tools.
-
-.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution.
+=====================
+ Build Prerequisites
+=====================
+Before you can build Ceph source code or Ceph documentation, you need to install
+several libraries and tools.
+
+.. tip:: Check this section to see if there are specific prerequisites for your
+   Linux/Unix distribution.

 Prerequisites for Building Ceph Source Code
 ===========================================
-Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
-depend on the following:
+Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly.
+Ceph build scripts depend on the following:

 - ``autotools-dev``
 - ``autoconf``
@@ -32,13 +33,15 @@ depend on the following:
 - ``pkg-config``
 - ``libcurl4-gnutls-dev``

-On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't
+installed on your host. ::

-    $ sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev
+    sudo apt-get install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev

-On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
+On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't
+installed on your host. ::

-    $ aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev
+    aptitude install autotools-dev autoconf automake cdbs gcc g++ git libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev libcrypto++ libexpat1-dev

 Ubuntu Requirements
@@ -52,16 +55,17 @@ Ubuntu Requirements
 - ``libgdata-common``
 - ``libgdata13``

-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+Execute ``sudo apt-get install`` for each dependency that isn't installed on
+your host. ::

-    $ sudo apt-get install uuid-dev libkeytutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13
+    sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev libatomic-ops-dev libaio-dev libgdata-common libgdata13

 Debian
 ------
 Alternatively, you may also install::

-    $ aptitude install fakeroot dpkg-dev
-    $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
+    aptitude install fakeroot dpkg-dev
+    aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev

 openSUSE 11.2 (and later)
 -------------------------
@@ -72,16 +76,18 @@ openSUSE 11.2 (and later)
 - ``libopenssl-devel``
 - ``fuse-devel`` (optional)

-Execute ``zypper install`` for each dependency that isn't installed on your host. ::
+Execute ``zypper install`` for each dependency that isn't installed on your
+host. ::

-    $zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
+    zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel

 Prerequisites for Building Ceph Documentation
 =============================================
-Ceph utilizes Python's Sphinx documentation tool. For details on
-the Sphinx documentation tool, refer to: `Sphinx <http://sphinx.pocoo.org>`_
-Follow the directions at `Sphinx 1.1.3 <http://pypi.python.org/pypi/Sphinx>`_
-to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the following are required:
+Ceph utilizes Python's Sphinx documentation tool. For details on
+the Sphinx documentation tool, refer to: `Sphinx`_.
+Follow the directions at `Sphinx 1.1.3`_
+to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the
+following are required:

 - ``python-dev``
 - ``python-pip``
@@ -92,6 +98,10 @@ to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the followi
 - ``ditaa``
 - ``graphviz``

-Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+Execute ``sudo apt-get install`` for each dependency that isn't installed on
+your host. ::

-    $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+    sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+
+.. _Sphinx: http://sphinx.pocoo.org
+.. _Sphinx 1.1.3: http://pypi.python.org/pypi/Sphinx
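If you take the ``pip`` route to Sphinx (one plausible path; this page simply
defers to the Sphinx 1.1.3 instructions), the install command would be::

    sudo pip install Sphinx==1.1.3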
@@ -1,38 +1,46 @@
-=============
-Building Ceph
-=============
-
+===============
+ Building Ceph
+===============
 Ceph provides build scripts for source code and for documentation.

 Building Ceph
-=============
-Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
+-------------
+Ceph provides ``automake`` and ``configure`` scripts to streamline the build
+process. To build Ceph, navigate to your cloned Ceph repository and execute the
+following::

-    $ cd ceph
-    $ ./autogen.sh
-    $ ./configure
-    $ make
+    cd ceph
+    ./autogen.sh
+    ./configure
+    make

-You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
+You can use ``make -j`` to execute multiple jobs depending upon your system. For
+example::

-    $ make -j4
+    make -j4

 To install Ceph locally, you may also use::

-    $ make install
+    sudo make install

-If you install Ceph locally, ``make`` will place the executables in ``usr/local/bin``.
-You may add the ``ceph.conf`` file to the ``usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory.
+If you install Ceph locally, ``make`` will place the executables in
+``usr/local/bin``. You may add the ``ceph.conf`` file to the ``usr/local/bin``
+directory to run an evaluation environment of Ceph from a single directory.

 Building Ceph Documentation
-===========================
-Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentaiton, navigate to the Ceph repository and execute the build script::
+---------------------------
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx
+documentation tool, refer to: `Sphinx`_. To build the Ceph documentation,
+navigate to the Ceph repository and execute the build script::

-    $ cd ceph
-    $ admin/build-doc
+    cd ceph
+    admin/build-doc

 Once you build the documentation set, you may navigate to the source directory to view it::

-    $ cd build-doc/output
+    cd build-doc/output

-There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively.
+There should be an ``/html`` directory and a ``/man`` directory containing
+documentation in HTML and manpage formats respectively.
+
+.. _Sphinx: http://sphinx.pocoo.org
@@ -10,12 +10,12 @@ Generate SSH Keys
 You must generate SSH keys for github to clone the Ceph
 repository. If you do not have SSH keys for ``github``, execute::

-    $ ssh-keygen -d
+    ssh-keygen -d

 Get the key to add to your ``github`` account (the following example
 assumes you used the default file path)::

-    $ cat .ssh/id_dsa.pub
+    cat .ssh/id_dsa.pub

 Copy the public key.
@@ -5,4 +5,7 @@
 As Ceph development progresses, the Ceph team releases new versions of the
 source code. You may download source code tarballs for Ceph releases here:

-`Ceph Release Tarballs <http://ceph.com/download/>`_
+`Ceph Release Tarballs`_
+
+.. _Ceph Release Tarballs: http://ceph.com/download/