doc: Whitespace cleanup.

Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
This commit is contained in:
Tommi Virtanen 2012-05-03 10:15:21 -07:00
parent 93dcc9886f
commit 5465e81097
30 changed files with 274 additions and 286 deletions

README
View File

@ -27,8 +27,8 @@ Building Ceph
=============
To prepare the source tree after it has been git cloned,
$ git submodule update --init
To build the server daemons, and FUSE client, execute the following:
@ -72,19 +72,19 @@ Building the Documentation
Prerequisites
-------------
To build the documentation, you must install the following:
- python-dev
- python-pip
- python-virtualenv
- doxygen
- ditaa
- libxml2-dev
- libxslt-dev
- dot
- graphviz
For example:
sudo apt-get install python-dev python-pip python-virtualenv doxygen ditaa libxml2-dev libxslt-dev dot graphviz
@ -98,25 +98,23 @@ To build the documentation, ensure that you are in the top-level `/ceph director
Build Prerequisites
-------------------
To build the source code, you must install the following:
- automake
- autoconf
- gcc
- g++
- libboost-dev
- libedit-dev
- libssl-dev
- libtool
- libfcgi
- libfcgi-dev
- libfuse-dev
- linux-kernel-headers
- libcrypto++-dev
For example:
$ apt-get install automake autoconf gcc g++ libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev

View File

@ -11,7 +11,7 @@ API
- :doc:`Authentication and ACLs <s3/authentication>`
- :doc:`Service Operations <s3/serviceops>`
- :doc:`Bucket Operations <s3/bucketops>`
- :doc:`Object Operations <s3/objectops>`
.. toctree::
@ -23,7 +23,7 @@ API
Service Ops <s3/serviceops>
Bucket Ops <s3/bucketops>
Object Ops <s3/objectops>
Features Support
----------------

View File

@ -1,8 +1,8 @@
Authentication and ACLs
=======================
Requests to the RADOS Gateway (RGW) can be either authenticated or unauthenticated.
RGW assumes unauthenticated requests are sent by an anonymous user. RGW supports
canned ACLs.
Authentication
--------------
@ -22,7 +22,7 @@ approach. The HTTP header signing is similar to OAuth 1.0, but avoids the comple
Authorization: AWS {access-key}:{hash-of-header-and-secret}
In the foregoing example, replace ``{access-key}`` with the value for your access key ID followed by
a colon (``:``). Replace ``{hash-of-header-and-secret}`` with a hash of the header string and the secret
corresponding to the access key ID.
@ -32,13 +32,13 @@ To generate the hash of the header string and secret, you must:
str = "HTTP/1.1\nPUT /buckets/bucket/object.mpeg\nHost: cname.domain.com\n
Date: Mon, 2 Jan 2012 00:01:01 +0000\nContent-Length: 9999999\nContent-Encoding: mpeg";
secret = "valueOfSecret";
2. Generate an HMAC using a SHA-1 hashing algorithm. ::
hmac = object.hmac-sha1(str, secret);
3. Encode the ``hmac`` result using base-64. ::
encodedHmac = someBase64Encoder.encode(hmac);
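The three steps above can be reproduced from a shell. This is only a sketch, not part of the original examples: it assumes ``openssl`` and ``base64`` are installed, and uses the illustrative values from step 1. ::

# Step 1: build the header string and secret (illustrative values).
str="HTTP/1.1
PUT /buckets/bucket/object.mpeg
Host: cname.domain.com
Date: Mon, 2 Jan 2012 00:01:01 +0000
Content-Length: 9999999
Content-Encoding: mpeg"
secret="valueOfSecret"
# Steps 2 and 3: HMAC-SHA1 the string with the secret, then base64-encode the raw digest.
encodedHmac=$(printf '%s' "$str" | openssl dgst -sha1 -hmac "$secret" -binary | base64)
echo "$encodedHmac"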
@ -64,4 +64,4 @@ Each grant has a different meaning when applied to a bucket versus applied to an
| ``WRITE_ACP`` | Grantee can write bucket ACL. | Grantee can write to the object ACL. |
+------------------+--------------------------------------------------------+----------------------------------------------+
| ``FULL_CONTROL`` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
+------------------+--------------------------------------------------------+----------------------------------------------+

View File

@ -10,11 +10,11 @@ create buckets as an anonymous user.
Constraints
~~~~~~~~~~~
In general, bucket names should follow domain name constraints.
- Bucket names must be unique.
- Bucket names must begin and end with a lowercase letter.
- Bucket names may contain a dash (-).
Syntax
~~~~~~
@ -24,7 +24,7 @@ Syntax
PUT /{bucket} HTTP/1.1
Host: cname.domain.com
x-amz-acl: public-read-write
Authorization: AWS {access-key}:{hash-of-header-and-secret}
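As an illustrative sketch only, the same request can be issued with ``curl`` by computing the signature by hand. The access key and secret key below are placeholders, the host is the ``cname.domain.com`` placeholder from the syntax above, and the optional ``x-amz-acl`` header is omitted so that it does not have to be folded into the string being signed. ::

access_key="{access-key}"
secret_key="{secret-key}"
bucket="mybucket"
date=$(date -u -R)
# String to sign for a bare PUT: verb, empty Content-MD5, empty Content-Type, date, resource.
string_to_sign=$(printf 'PUT\n\n\n%s\n/%s' "$date" "$bucket")
signature=$(printf '%s' "$string_to_sign" | openssl dgst -sha1 -hmac "$secret_key" -binary | base64)
curl -X PUT "http://cname.domain.com/${bucket}" \
  -H "Date: ${date}" \
  -H "Authorization: AWS ${access_key}:${signature}"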
Parameters
@ -41,8 +41,8 @@ Parameters
HTTP Response
~~~~~~~~~~~~~
If the bucket name is unique, within constraints and unused, the operation will succeed.
If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed.
If the bucket name is already in use, the operation will fail.
+---------------+-----------------------+----------------------------------------------------------+
@ -63,7 +63,7 @@ Syntax
DELETE /{bucket} HTTP/1.1
Host: cname.domain.com
Authorization: AWS {access-key}:{hash-of-header-and-secret}
HTTP Response
@ -115,7 +115,7 @@ HTTP Response
Bucket Response Entities
~~~~~~~~~~~~~~~~~~~~~~~~
``GET /{bucket}`` returns a container for buckets with the following fields.
+------------------------+-----------+----------------------------------------------------------------------------------+
| Name | Type | Description |
@ -238,7 +238,7 @@ List Bucket Multipart Uploads
-----------------------------
``GET /?uploads`` returns a list of the current in-progress multipart uploads--i.e., the application initiates a multipart upload, but
the service hasn't completed all the uploads yet.
Syntax
~~~~~~
@ -266,7 +266,7 @@ You may specify parameters for ``GET /{bucket}?uploads``, but none of them are r
| ``max-uploads`` | Integer | The maximum number of multipart uploads. The range from 1-1000. The default is 1000. |
+------------------------+-----------+--------------------------------------------------------------------------------------+
| ``upload-id-marker`` | String | Ignored if ``key-marker`` isn't specified. Specifies the ``ID`` of first |
| | | upload to list in lexicographical order at or following the ``ID``. |
+------------------------+-----------+--------------------------------------------------------------------------------------+

View File

@ -7,7 +7,7 @@ Common Entities
Bucket and Host Name
--------------------
There are two different modes of accessing the buckets. The first (preferred) method
identifies the bucket as the top-level directory in the URI. ::
GET /mybucket HTTP/1.1

View File

@ -334,7 +334,7 @@ Syntax
::
POST /{bucket}/{object}?uploadId= HTTP/1.1
Request Entities
~~~~~~~~~~~~~~~~
@ -377,5 +377,3 @@ Syntax
::
DELETE /{bucket}/{object}?uploadId= HTTP/1.1

View File

@ -3,7 +3,7 @@ Service Operations
List Buckets
------------
``GET /`` returns a list of buckets created by the user making the request. ``GET /`` only
returns buckets created by an authenticated user. You cannot make an anonymous request.
Syntax
@ -12,7 +12,7 @@ Syntax
GET / HTTP/1.1
Host: cname.domain.com
Authorization: AWS {access-key}:{hash-of-header-and-secret}
Response Entities
@ -37,4 +37,3 @@ Response Entities
+----------------------------+-------------+-----------------------------------------------------------------+
| ``DisplayName`` | String | The bucket owner's display name. |
+----------------------------+-------------+-----------------------------------------------------------------+

View File

@ -1,26 +1,26 @@
==========================
Ceph Configuration Files
==========================
When you start the Ceph service, the initialization process activates a series
of daemons that run in the background. The hosts in a typical RADOS cluster run
at least one of three processes or daemons:
- RADOS (``ceph-osd``)
- Monitor (``ceph-mon``)
- Metadata Server (``ceph-mds``)
Each process or daemon looks for a ``ceph.conf`` file that provides its
configuration settings. The default ``ceph.conf`` locations in sequential
order include:
1. ``$CEPH_CONF`` (*i.e.,* the path following
the ``$CEPH_CONF`` environment variable)
2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
3. ``/etc/ceph/ceph.conf``
4. ``~/.ceph/config``
5. ``./ceph.conf`` (*i.e.,* in the current working directory)
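For example, the first two locations can be exercised from the command line as follows; this is just a sketch, and the path is a placeholder. ::

# Point a Ceph utility at an explicit configuration file via the environment variable ...
CEPH_CONF=/srv/mycluster.conf ceph health
# ... or via the -c command line argument.
ceph -c /srv/mycluster.conf health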
The ``ceph.conf`` file provides the settings for each Ceph daemon. Once you
have installed the Ceph packages on the OSD Cluster hosts, you need to create
a ``ceph.conf`` file to configure your OSD cluster.
@ -33,17 +33,17 @@ The ``ceph.conf`` file defines:
- Paths to Hosts
- Runtime Options
You can add comments to the ``ceph.conf`` file by preceding comments with
a semi-colon (;). For example::
; <--A semi-colon precedes a comment
; A comment may be anything, and always follows a semi-colon on each line.
; We recommend that you provide comments in your configuration file(s).
Configuration File Basics
~~~~~~~~~~~~~~~~~~~~~~~~~
The ``ceph.conf`` file configures each instance of the three common processes
in a RADOS cluster.
+-----------------+--------------+--------------+-----------------+-------------------------------------------------+
| Setting Scope | Process | Setting | Instance Naming | Description |
@ -59,9 +59,9 @@ in a RADOS cluster.
Metavariables
~~~~~~~~~~~~~
The configuration system supports certain 'metavariables,' which are typically
used in ``[global]`` or process/daemon settings. If metavariables occur inside
a configuration value, Ceph expands them into a concrete value--similar to how
Bash shell expansion works.
There are a few different metavariables:
@ -84,36 +84,36 @@ There are a few different metavariables:
Global Settings
~~~~~~~~~~~~~~~
The Ceph configuration file supports a hierarchy of settings, where child
settings inherit the settings of the parent. Global settings affect all
instances of all processes in the cluster. Use the ``[global]`` setting for
values that are common for all hosts in the cluster. You can override each
``[global]`` setting by:
1. Changing the setting in a particular ``[group]``.
2. Changing the setting in a particular process type (*e.g.,* ``[osd]``, ``[mon]``, ``[mds]`` ).
3. Changing the setting in a particular process (*e.g.,* ``[osd.1]`` )
Overriding a global setting affects all child processes, except those that
you specifically override. For example::
[global]
; Enable authentication between hosts within the cluster.
auth supported = cephx
Process/Daemon Settings
~~~~~~~~~~~~~~~~~~~~~~~
You can specify settings that apply to a particular type of process. When you
specify settings under ``[osd]``, ``[mon]`` or ``[mds]`` without specifying a
particular instance, the setting will apply to all OSDs, monitors or metadata
daemons respectively.
Instance Settings
~~~~~~~~~~~~~~~~~
You may specify settings for particular instances of a daemon. You may specify
an instance by entering its type, delimited by a period (.) and by the
instance ID. The instance ID for an OSD is always numeric, but it may be
alphanumeric for monitors and metadata servers. ::
[osd.1]
; settings affect osd.1 only.
@ -124,17 +124,17 @@ alphanumeric for monitors and metadata servers. ::
``host`` and ``addr`` Settings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `Hardware Recommendations <../hardware-recommendations>`_ section
provides some hardware guidelines for configuring the cluster. It is possible
for a single host to run multiple daemons. For example, a single host with
multiple disks or RAIDs may run one ``ceph-osd`` for each disk or RAID.
Additionally, a host may run both a ``ceph-mon`` and a ``ceph-osd`` daemon
on the same host. Ideally, you will have a host for a particular type of
process. For example, one host may run ``ceph-osd`` daemons, another host
may run a ``ceph-mds`` daemon, and other hosts may run ``ceph-mon`` daemons.
Each host has a name identified by the ``host`` setting, and a network location
(i.e., domain name or IP address) identified by the ``addr`` setting. For example::
[osd.1]
host = hostNumber1
@ -146,14 +146,14 @@ Each host has a name identified by the ``host`` setting, and a network location
Monitor Configuration
~~~~~~~~~~~~~~~~~~~~~
Ceph typically deploys with 3 monitors to ensure high availability should a
monitor instance crash. An odd number of monitors (3) ensures that the Paxos
algorithm can determine which version of the cluster map is the most accurate.
.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
the lack of a monitor may interrupt data service availability.
Ceph monitors typically listen on port ``6789``.
Example Configuration File
~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -163,14 +163,14 @@ Example Configuration File
Configuration File Deployment Options
-------------------------------------
The most common way to deploy the ``ceph.conf`` file in a cluster is to have
all hosts share the same configuration file.
You may create a ``ceph.conf`` file for each host if you wish, or specify a
particular ``ceph.conf`` file for a subset of hosts within the cluster. However,
using per-host ``ceph.conf`` configuration files imposes a maintenance burden as the
cluster grows. In a typical deployment, an administrator creates a ``ceph.conf`` file
on the Administration host and then copies that file to each OSD Cluster host.
The current cluster deployment script, ``mkcephfs``, does not make copies of the
``ceph.conf``. You must copy the file manually.

View File

@ -1,9 +1,9 @@
==============================
Deploying Ceph Configuration
==============================
Ceph's current deployment script does not copy the configuration file you
created from the Administration host to the OSD Cluster hosts. Copy the
configuration file you created (*i.e.,* ``mycluster.conf`` in the example below)
from the Administration host to ``/etc/ceph/ceph.conf`` on each OSD Cluster host.
::
@ -11,23 +11,23 @@ from the Administration host to ``etc/ceph/ceph.conf`` on each OSD Cluster host.
ssh myserver01 sudo tee /etc/ceph/ceph.conf <mycluster.conf
ssh myserver02 sudo tee /etc/ceph/ceph.conf <mycluster.conf
ssh myserver03 sudo tee /etc/ceph/ceph.conf <mycluster.conf
The current deployment script does not start the services for you. Start the
services on each OSD Cluster host. ::
ssh myserver01 sudo /etc/init.d/ceph start
ssh myserver02 sudo /etc/init.d/ceph start
ssh myserver03 sudo /etc/init.d/ceph start
The current deployment script may not create the default server directories. Create
server directories for each instance of a Ceph daemon.
Using the example ``ceph.conf`` file, you would perform the following:
On ``myserver01``::
mkdir srv/osd.0
mkdir srv/mon.a
On ``myserver02``::
@ -35,14 +35,13 @@ On ``myserver02``::
mkdir srv/osd.1
mkdir srv/mon.b
On ``myserver03``::
mkdir srv/osd.2
mkdir srv/mon.c
On ``myserver04``::
mkdir srv/osd.3
.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.

View File

@ -6,19 +6,18 @@ Once you have copied your Ceph Configuration to the OSD Cluster hosts, you may d
.. note:: ``mkcephfs`` is a quick bootstrapping tool. It does not handle more complex operations, such as upgrades.
For production environments, you will deploy Ceph using Chef cookbooks (coming soon!).
To run ``mkcephfs``, execute the following::
$ mkcephfs -a -c <path>/ceph.conf -k mycluster.keyring
The script adds an admin key to the ``mycluster.keyring``, which is analogous to a root password.
To start the cluster, execute the following::
/etc/init.d/ceph -a start
Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
ceph -k mycluster.keyring -c <path>/ceph.conf health

View File

@ -2,26 +2,26 @@
Hard Disk and File System Recommendations
=========================================
Ceph aims for data safety, which means that when the application receives notice
that data was written to the disk, that data was actually written to the disk.
For old kernels (<2.6.33), disable the write cache if the journal is on a raw
disk. Newer kernels should work fine.
Use ``hdparm`` to disable write caching on the hard disk::
$ hdparm -W 0 /dev/hda
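Running ``-W`` with no value reports the current write-cache setting, which is a quick way to confirm the change; the device name is illustrative. ::

$ hdparm -W /dev/hda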
Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file
system for:
- Internal object state
- Snapshot metadata
- RADOS Gateway Access Control Lists (ACLs).
Ceph OSDs rely heavily upon the stability and performance of the underlying file
system. The underlying file system must provide sufficient capacity for XATTRs.
File system candidates for Ceph include B tree and B+ tree file systems such as:
- ``btrfs``
- ``XFS``
@ -34,19 +34,18 @@ If you are using ``ext4``, enable XATTRs. ::
The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit
for XATTRs in ``ext4``, causing the ``ceph-osd`` process to crash. Version 0.45
or newer uses ``leveldb`` to bypass this limitation. ``ext4`` is a poor file
system choice if you intend to deploy the RADOS Gateway or use snapshots on
versions earlier than 0.45.
.. tip:: Use ``xfs`` initially and ``btrfs`` when it is ready for production.
The Ceph team believes that the best performance and stability will come from
``btrfs``. The ``btrfs`` file system has internal transactions that keep the
local data set in a consistent state. This makes OSDs based on ``btrfs`` simple
to deploy, while providing scalability not currently available from block-based
file systems. The 64 KB XATTR limit for ``xfs`` is enough to accommodate
RBD snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice
file system of the Ceph team in the long run, but ``xfs`` is currently more
stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without
snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.

View File

@ -1,22 +1,22 @@
===============================
Configuring a Storage Cluster
===============================
Ceph can run with a cluster containing thousands of Object Storage Devices
(OSDs). A minimal system will have at least two OSDs for data replication. To
configure OSD clusters, you must provide settings in the configuration file.
Ceph provides default values for many settings, which you can override in the
configuration file. Additionally, you can make runtime modification to the
configuration using command-line utilities.
When Ceph starts, it activates three daemons:
- ``ceph-osd`` (mandatory)
- ``ceph-mon`` (mandatory)
- ``ceph-mds`` (mandatory for cephfs only)
Each process, daemon or utility loads the host's configuration file. A process
may have information about more than one daemon instance (*i.e.,* multiple
contexts). A daemon or utility only has information about a single daemon
instance (a single context).
.. note:: Ceph can run on a single host for evaluation purposes.

View File

@ -29,7 +29,7 @@
+-----------------------------------+-------------------------+------------+------------------------------------------------+
| ``mds_session_autoclose`` | Float | 300 | // autoclose idle session |
+-----------------------------------+-------------------------+------------+------------------------------------------------+
| ``mds_reconnect_timeout`` | Float | 45 | // secs to wait for clients during mds restart |
+-----------------------------------+-------------------------+------------+------------------------------------------------+
| ``mds_tick_interval`` | Float | 5 | |
+-----------------------------------+-------------------------+------------+------------------------------------------------+

View File

@ -98,11 +98,11 @@ Inkscape
You can use Inkscape (http://inkscape.org) to generate scalable vector graphics
for reStructuredText documents.
If you generate diagrams with Inkscape, you should
commit both the Scalable Vector Graphics (SVG) file and export a
Portable Network Graphic (PNG) file. Reference the PNG file.
By committing the SVG file, others will be able to update the
SVG diagrams using Inkscape.
HTML5 will support SVG inline.

View File

@ -25,14 +25,14 @@ You must set up SSH keys with github to clone the Ceph
repository. If you do not have SSH keys for github, execute:
``$ ssh-keygen -d``
Get the key to add to your github account:
``$ cat .ssh/id_dsa.pub``
Copy the public key. Then, go to your github account,
click on **Account Settings** (*i.e.*, the tools icon); then,
click **SSH Keys** on the left side navbar.
Click **Add SSH key** in the **SSH Keys** list, enter a name for
the key, paste the key you generated, and press the **Add key**
@ -41,17 +41,17 @@ button.
To clone the Ceph repository, execute:
``$ git clone git@github.com:ceph/ceph.git``
You should have a full copy of the Ceph repository.
Install the Required Tools
--------------------------
If you do not have Sphinx and its dependencies installed,
a list of dependencies will appear in the output. Install
the dependencies on your system, and then execute the build.
To run Sphinx, at least the following are required:
- ``python-dev``
- ``python-pip``
@ -63,10 +63,10 @@ To run Sphinx, at least the following are required:
- ``graphviz``
Execute ``apt-get install`` for each dependency that isn't
installed on your host.
``$ apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz``
Build the Documents
@ -80,6 +80,6 @@ Once you have installed all the dependencies, execute the build:
Once you build the documentation set, you may navigate to the source directory to view it:
``$ cd build-doc/output``
There should be an ``html`` directory and a ``man`` directory containing documentation
in HTML and manpage formats respectively.

View File

@ -1,14 +1,14 @@
=================
Welcome to Ceph
=================
Ceph uniquely delivers **object, block, and file storage in one unified
system**. Ceph is highly reliable, easy to manage, and free. The power of Ceph
can transform your company's IT infrastructure and your ability to manage vast
amounts of data. Ceph delivers extraordinary scalability--thousands of clients
accessing petabytes to exabytes of data. Ceph leverages commodity hardware and
intelligent daemons to accommodate large numbers of storage hosts, which
communicate with each other to replicate data, and redistribute data
dynamically. Ceph's cluster of monitors oversees the hosts in the Ceph storage
cluster to ensure that the storage hosts are running smoothly.
.. image:: images/stack.png

View File

@ -1,51 +1,50 @@
====================================
Downloading Debian/Ubuntu Packages
====================================
We automatically build Debian/Ubuntu packages for any branches or tags that
appear in the ``ceph.git`` `repository <http://github.com/ceph/ceph>`_. If you
want to build your own packages (*e.g.,* for RPM), see
`Build Ceph Packages <../../source/build-packages>`_.
When you download release packages, you will receive the latest package build,
which may be several weeks behind the current release or the most recent code.
It may contain bugs that have already been fixed in the most recent versions of
the code. Until packages contain only stable code, you should carefully consider
the tradeoffs of installing from a package or retrieving the latest release
or the most current source code and building Ceph.
When you execute the following commands to install the Debian/Ubuntu Ceph
packages, replace ``{ARCH}`` with the architecture of your CPU (*e.g.,* ``amd64``
or ``i386``), ``{DISTRO}`` with the code name of your operating system
(*e.g.,* ``precise``, rather than the OS version number) and ``{BRANCH}`` with
the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``,
``v0.44``, *etc.*).
Adding Release Packages to APT
------------------------------
We provide stable release packages for Debian/Ubuntu, which are signed
with the ``release.asc`` key. Click `here <http://ceph.newdream.net/debian/dists>`_
to see the distributions and branches supported. To install a release package,
you must first add a release key. ::
$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
For Debian/Ubuntu releases, we use the Advanced Package Tool (APT). To retrieve
the release packages and updates and install them with ``apt``, you must add a
``ceph.list`` file to your ``apt`` configuration with the following path::
/etc/apt/sources.list.d/ceph.list
Open the file and add the following line::
deb http://ceph.com/debian/ {DISTRO} main
Remember to replace ``{DISTRO}`` with the Linux distribution for your host.
Then, save the file.
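For example, on an Ubuntu ``precise`` host the file can be created in one step; this is just a sketch, so adjust the distribution name to match your system. ::

$ echo deb http://ceph.com/debian/ precise main | sudo tee /etc/apt/sources.list.d/ceph.list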
Downloading Packages
--------------------
Once you add either release or autobuild packages for Debian/Ubuntu, you may
download them with ``apt`` as follows::
sudo apt-get update

View File

@ -1,20 +1,20 @@
==========================
Hardware Recommendations
==========================
Ceph runs on commodity hardware and a Linux operating system over a TCP/IP
network. The hardware recommendations for different processes/daemons differ
considerably.
OSD hosts should have ample data storage in the form of a hard drive or a RAID.
Ceph OSDs run the RADOS service, calculate data placement with CRUSH, and
maintain their own copy of the cluster map. Therefore, OSDs should have a
reasonable amount of processing power.
Ceph monitors require enough disk space for the cluster map, but usually do
not encounter heavy loads. Monitor hosts do not need to be very powerful.
Ceph metadata servers distribute their load. However, metadata servers must be
capable of serving their data quickly. Metadata servers should have strong
processing capability and plenty of RAM.
.. note:: If you are not using the Ceph File System, you do not need a meta data server.
@ -45,5 +45,4 @@ processing capability and plenty of RAM.
| | Disk Space | 1 MB per daemon |
| +----------------+------------------------------------+
| | Network | 2-1GB Ethernet NICs |
+--------------+----------------+------------------------------------+

View File

@ -1,13 +1,13 @@
=================
Installing Ceph
=================
Storage clusters are the foundation of the Ceph system. Ceph storage hosts
provide object storage. Clients access the Ceph storage cluster directly from
an application (using ``librados``), over an object storage protocol such as
Amazon S3 or OpenStack Swift (using ``radosgw``), or with a block device
(using ``rbd``). To begin using Ceph, you must first set up a storage cluster.
The following sections provide guidance for configuring a storage cluster and
installing Ceph components:
.. toctree::

View File

@ -1,18 +1,18 @@
==========================
Installing Ceph Packages
==========================
Once you have downloaded or built Ceph packages, you may install them on your
Admin host and OSD Cluster hosts.
.. important:: All hosts should be running the same package version.
To ensure that you are running the same version on each host with APT,
you may execute ``sudo apt-get update`` on each host before you install
the packages.
Installing Packages with APT
----------------------------
Once you download or build the packages and add your packages to APT
(see `Downloading Debian/Ubuntu Packages <../download-packages>`_), you may
install them as follows::
$ sudo apt-get install ceph
@ -24,9 +24,9 @@ You may install RPM packages as follows::
rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
.. note:: We do not build RPM packages at this time. You may build them
yourself by downloading the source code.
Proceed to Configuring a Cluster
--------------------------------
Once you have prepared your hosts and installed Ceph packages, proceed to
`Configuring a Storage Cluster <../../config-cluster>`_.

View File

@ -2,49 +2,49 @@
Build Ceph Packages
===================
To build packages, you must clone the `Ceph`_ repository.
You can create installation packages from the latest code using ``dpkg-buildpackage`` for Debian/Ubuntu
or ``rpmbuild`` for the RPM Package Manager.
.. tip:: When building on a multi-core CPU, use the ``-j`` option with the number of cores * 2.
For example, use ``-j4`` for a dual-core processor to accelerate the build.
Advanced Package Tool (APT)
---------------------------
To create ``.deb`` packages for Debian/Ubuntu, ensure that you have cloned the `Ceph`_ repository,
installed the `build prerequisites`_ and installed ``debhelper``::
$ sudo apt-get install debhelper
Once you have installed debhelper, you can build the packages:
$ sudo dpkg-buildpackage
For multi-processor CPUs use the ``-j`` option to accelerate the build.
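For example, on a dual-core machine (following the cores * 2 rule from the tip above)::

$ sudo dpkg-buildpackage -j4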
RPM Package Manager
-------------------
To create ``.rpm`` packages, ensure that you have cloned the `Ceph`_ repository,
installed the `build prerequisites`_ and installed ``rpm-build`` and ``rpmdevtools``::
$ yum install rpm-build rpmdevtools
Once you have installed the tools, setup an RPM compilation environment::
$ rpmdev-setuptree
Fetch the source tarball for the RPM compilation environment::
$ wget -P ~/rpmbuild/SOURCES/ http://ceph.newdream.net/download/ceph-<version>.tar.gz
Build the RPM packages::
$ rpmbuild -tb ~/rpmbuild/SOURCES/ceph-<version>.tar.gz
For multi-processor CPUs use the ``-j`` option to accelerate the build.
.. _build prerequisites: ../build-prerequisites

View File

@ -75,7 +75,7 @@ openSUSE 11.2 (and later)
Execute ``zypper install`` for each dependency that isn't installed on your host. ::
$ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
Prerequisites for Building Ceph Documentation
=============================================
@ -96,4 +96,3 @@ to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the followi
Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
$ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz

View File

@ -13,26 +13,26 @@ Ceph provides ``automake`` and ``configure`` scripts to streamline the build pro
$ ./configure
$ make
You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
$ make -j4
To install Ceph locally, you may also use::
$ make install
If you install Ceph locally, ``make`` will place the executables in ``/usr/local/bin``.
You may add the ``ceph.conf`` file to the ``/usr/local/bin`` directory to run an evaluation environment of Ceph from a single directory.
Building Ceph Documentation
===========================
Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx <http://sphinx.pocoo.org>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
$ cd ceph
$ admin/build-doc
Once you build the documentation set, you may navigate to the source directory to view it::
$ cd build-doc/output
There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats respectively.

View File

@ -7,7 +7,7 @@ on your local host. To install ``git``, execute::
$ sudo apt-get install git
You must also have a ``github`` account. If you do not have a
``github`` account, go to `github.com <http://github.com>`_ and register.
Follow the directions for setting up git at `Set Up Git <http://help.github.com/linux-set-up-git/>`_.
Clone the Source
@ -15,24 +15,24 @@ Clone the Source
To clone the Ceph source code repository, execute::
$ git clone git@github.com:ceph/ceph.git
Once ``git clone`` executes, you should have a full copy of the Ceph repository.
Clone the Submodules
--------------------
Before you can build Ceph, you must navigate to your new repository and initialize and update the submodules::
$ cd ceph
$ git submodule init
$ git submodule update
.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
$ git status
Choose a Branch
---------------
Once you clone the source code and submodules, your Ceph repository will be on the ``master`` branch by default, which is the unstable development branch. You may choose other branches too.
- ``master``: The unstable development branch.
- ``stable``: The bugfix branch.
@ -41,4 +41,3 @@ Once you clone the source code and submodules, your Ceph repository will be on t
::
git checkout master

View File

@ -1,8 +1,8 @@
==========================
Contributing Source Code
==========================
If you are making source contributions, you must be added to the Ceph
project on github. You must also generate keys and add them to your
github account.
Generate SSH Keys
@ -11,19 +11,19 @@ You must generate SSH keys for github to clone the Ceph
repository. If you do not have SSH keys for ``github``, execute::
$ ssh-keygen -d
Get the key to add to your ``github`` account (the following example
assumes you used the default file path)::
$ cat .ssh/id_dsa.pub
Copy the public key.
Add the Key
-----------
Go to your ``github`` account,
click on "Account Settings" (i.e., the 'tools' icon); then,
click "SSH Keys" on the left side navbar.
click "SSH Keys" on the left side navbar.
Click "Add SSH key" in the "SSH Keys" list, enter a name for
the key, paste the key you generated, and press the "Add key"

View File

@ -2,7 +2,7 @@
Downloading a Ceph Release Tarball
====================================
As Ceph development progresses, the Ceph team releases new versions of the
source code. You may download source code tarballs for Ceph releases here:
`Ceph Release Tarballs <http://ceph.com/download/>`_

View File

@ -2,9 +2,9 @@
Ceph Source Code
==================
You can build Ceph from source by downloading a release or cloning the ``ceph``
repository at github. If you intend to build Ceph from source, please see the
build pre-requisites first. Making sure you have all the pre-requisites
will save you time.
.. toctree::

View File

@ -34,8 +34,8 @@ These are exciting times in the Ceph community! Get involved!
| **Support** | If you have a very specific problem, an | http://inktank.com |
| | immediate need, or if your deployment requires | |
| | significant help, consider commercial support_. | |
+-----------------+-------------------------------------------------+-----------------------------------------------+
.. _Subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel

View File

@ -1,7 +1,7 @@
=================
Getting Started
=================
Welcome to Ceph! The following sections provide information that will help you
get started:
.. toctree::

View File

@ -1,17 +1,17 @@
=============
Quick Start
=============
Ceph is intended for large-scale deployments, but you may install Ceph on a
single host. Quick start is intended for Debian/Ubuntu Linux distributions.
1. Login to your host.
2. Make a directory for Ceph packages. *e.g.,* ``$ mkdir ceph``
3. `Get Ceph packages <../../install/download-packages>`_ and add them to your
APT configuration file.
4. Update and install Ceph packages.
See `Downloading Debian/Ubuntu Packages <../../install/download-packages>`_
and `Installing Packages <../../install/installing-packages>`_ for details.
5. Add a ``ceph.conf`` file.
See `Ceph Configuration Files <../../config-cluster/ceph-conf>`_ for details.
6. Run Ceph.
See `Deploying Ceph with mkcephfs <../../config-cluster/deploying-ceph-with-mkcephfs>`_
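Pulling the steps above together, a minimal end-to-end session might look like the following sketch. It only restates commands from the pages linked above; the ``precise`` distribution and the paths are assumptions to adjust for your host. ::

# Steps 3 and 4: add the release key and package list, then update and install.
$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
$ echo deb http://ceph.com/debian/ precise main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update && sudo apt-get install ceph
# Steps 5 and 6: with a ceph.conf in place, create and start the cluster, then check its health.
$ sudo mkcephfs -a -c /etc/ceph/ceph.conf -k mycluster.keyring
$ sudo /etc/init.d/ceph -a start
$ ceph -k mycluster.keyring -c /etc/ceph/ceph.conf health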