diff --git a/README b/README
index 1a39c4da475..ee25e7c6122 100644
--- a/README
+++ b/README
@@ -1,11 +1,11 @@
+============================================
Ceph - a scalable distributed storage system
============================================
Please see http://ceph.newdream.net/ for current info.
-
Contributing Code
------------------
+=================
Most of Ceph is licensed under the LGPL version 2.1. Some
miscellaneous code is under BSD-style license or is public domain.
@@ -24,21 +24,20 @@ contributed under the terms of the applicable license.
Building Ceph
--------------
+=============
-To prepare the source tree for the first time in case it has been git cloned,
+To prepare the source tree after it has been git cloned,
+
+ $ git submodule update --init
-$ git submodule update --init
+To build the server daemons and the FUSE client, execute the following:
-To build the server daemons, and FUSE client,
-
-$ ./autogen.sh
-$ ./configure
-$ make
+ $ ./autogen.sh
+ $ ./configure
+ $ make
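+
+Optionally, you can build in parallel by passing a job count to make. For
+example (the job count here is only an illustration):
+
+ $ make -j4
+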
(Note that the FUSE client will only be built if libfuse is present.)
-
Dependencies
------------
@@ -66,3 +65,58 @@ $ dpkg-buildpackage
For RPM-based systems (Redhat, Suse, etc.),
$ rpmbuild
+
+
+Building the Documentation
+==========================
+
+Prerequisites
+-------------
+To build the documentation, you must install the following:
+
+- python-dev
+- python-pip
+- python-virtualenv
+- doxygen
+- ditaa
+- libxml2-dev
+- libxslt-dev
+- graphviz (which provides dot)
+
+For example:
+
+ sudo apt-get install python-dev python-pip python-virtualenv doxygen ditaa libxml2-dev libxslt-dev graphviz
+
+Building the Documentation
+--------------------------
+
+To build the documentation, ensure that you are in the top-level ceph source directory, and execute the build script. For example:
+
+ $ admin/build-doc
+
+
+Build Prerequisites
+-------------------
+To build the source code, you must install the following:
+
+- automake
+- autoconf
+- gcc
+- g++
+- libboost-dev
+- libedit-dev
+- libssl-dev
+- libtool
+- libfcgi
+- libfcgi-dev
+- libfuse-dev
+- linux-kernel-headers
+- libcrypto++-dev
+
+For example:
+
+ $ apt-get install automake autoconf gcc g++ libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev
+
+
diff --git a/doc/conf.py b/doc/conf.py
index de0381f720d..0d6b844e1d8 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -1,5 +1,5 @@
project = u'Ceph'
-copyright = u'2011, New Dream Network'
+copyright = u'2012, New Dream Network'
version = 'dev'
release = 'dev'
diff --git a/doc/dev/documenting.rst b/doc/dev/documenting.rst
index eefceefa7cd..f66d7b40ad5 100644
--- a/doc/dev/documenting.rst
+++ b/doc/dev/documenting.rst
@@ -90,3 +90,19 @@ declarative language for drawing things, and includes:
.. _`sequence diagrams`: http://blockdiag.com/en/seqdiag/index.html
.. _`activity diagrams`: http://blockdiag.com/en/actdiag/index.html
.. _`network diagrams`: http://blockdiag.com/en/nwdiag/
+
+
+Inkscape
+--------
+
+You can use `Inkscape <http://inkscape.org>`_ to generate scalable
+vector graphics for reStructuredText documents.
+
+If you generate diagrams with Inkscape, commit the Scalable Vector
+Graphics (SVG) file and export a Portable Network Graphics (PNG) file.
+Reference the PNG file in the documentation.
+
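+For example, a minimal sketch of exporting a PNG from an SVG on the
+command line (the file name ``mydiagram`` is hypothetical)::
+
+    $ inkscape --export-png=mydiagram.png mydiagram.svg
+
+Then reference the exported PNG from the reStructuredText source::
+
+    .. image:: mydiagram.png
+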
+Committing the SVG file enables others to update the SVG
+diagrams using Inkscape.
+
+HTML5 supports inline SVG.
\ No newline at end of file
diff --git a/doc/dev/generatedocs.rst b/doc/dev/generatedocs.rst
index 3abc8158141..f795ec028c2 100644
--- a/doc/dev/generatedocs.rst
+++ b/doc/dev/generatedocs.rst
@@ -1,86 +1,85 @@
-BUILDING CEPH DOCUMENTATION
+Building Ceph Documentation
===========================
-Ceph utilizes Python's "Sphinx" documentation tool. For details on
-the Sphinx documentation tool, refer to: http://sphinx.pocoo.org/
+Ceph utilizes Python's Sphinx documentation tool. For details on
+the Sphinx documentation tool, refer to `The Sphinx Documentation Tool <http://sphinx.pocoo.org/>`_.
To build the Ceph documentation set, you must:
1. Clone the Ceph repository
2. Install the required tools
-3. Execute admin/build-doc from the ceph directory.
+3. Build the documents
-CLONE THE CEPH REPOSITORY
+Clone the Ceph Repository
-------------------------
-To clone the Ceph repository, you must have "git" installed
-on your local host. To install git, execute:
+To clone the Ceph repository, you must have ``git`` installed
+on your local host. To install ``git``, execute:
- $ sudo apt-get install git
+ ``$ sudo apt-get install git``
-You must also have a "github" account. If you do not have a
-github account, go to http://github.com and register.
+You must also have a github account. If you do not have a
+github account, go to `github <http://github.com>`_ and register.
You must set up SSH keys with github to clone the Ceph
repository. If you do not have SSH keys for github, execute:
- $ ssh-keygen -d
+ ``$ ssh-keygen -d``
Get the key to add to your github account:
- $ cat .ssh/id_dsa.pub
+ ``$ cat .ssh/id_dsa.pub``
Copy the public key. Then, go to your your github account,
-click on "Account Settings" (i.e., the 'tools' icon); then,
-click "SSH Keys" on the left side navbar.
+click on **Account Settings** (*i.e.*, the tools icon); then,
+click **SSH Keys** on the left side navbar.
-Click "Add SSH key" in the "SSH Keys" list, enter a name for
-the key, paste the key you generated, and press the "Add key"
+Click **Add SSH key** in the **SSH Keys** list, enter a name for
+the key, paste the key you generated, and press the **Add key**
button.
To clone the Ceph repository, execute:
- $ git clone git@github:ceph/ceph.git
+ ``$ git clone git@github.com:ceph/ceph.git``
You should have a full copy of the Ceph repository.
-INSTALL THE REQUIRED TOOLS
---------------------------
-If you think you have the required tools to run Sphinx,
-navigate to the Ceph repository and execute the build:
-
- $ cd ceph
- $ admin/build-doc
-
+Install the Required Tools
+--------------------------
If you do not have Sphinx and its dependencies installed,
a list of dependencies will appear in the output. Install
the dependencies on your system, and then execute the build.
To run Sphinx, at least the following are required:
-python-dev
-python-pip
-python-virtualenv
-libxml2-dev
-libxslt-dev
-doxygen
-ditaa
-graphviz
+- ``python-dev``
+- ``python-pip``
+- ``python-virtualenv``
+- ``libxml2-dev``
+- ``libxslt-dev``
+- ``doxygen``
+- ``ditaa``
+- ``graphviz``
-Execute "apt-get install" for each dependency that isn't
+Execute ``apt-get install`` for each dependency that isn't
installed on your host.
- $ apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+ ``$ apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz``
-Once you have installed all the dependencies, execute the build again:
- $ cd ceph
- $ admin/build-doc
+
+Build the Documents
+-------------------
+
+Once you have installed all the dependencies, execute the build:
+
+ ``$ cd ceph``
+ ``$ admin/build-doc``
Once you build the documentation set, you may navigate to the source directory to view it:
- $ cd build-doc/output
+ ``$ cd build-doc/output``
-There should be an 'html' directory and a 'man' directory containing documentation
+There should be an ``html`` directory and a ``man`` directory containing documentation
in HTML and manpage formats respectively.
\ No newline at end of file
diff --git a/doc/images/osdStack.svg b/doc/images/osdStack.svg
new file mode 100644
index 00000000000..0c0b236b013
--- /dev/null
+++ b/doc/images/osdStack.svg
@@ -0,0 +1,137 @@
+
+
+
+
diff --git a/doc/images/radosStack.svg b/doc/images/radosStack.svg
new file mode 100644
index 00000000000..91029f227c9
--- /dev/null
+++ b/doc/images/radosStack.svg
@@ -0,0 +1,343 @@
+
+
+
+
diff --git a/doc/images/techstack.png b/doc/images/techstack.png
new file mode 100644
index 00000000000..c7d3477b2f9
Binary files /dev/null and b/doc/images/techstack.png differ
diff --git a/doc/images/techstack.svg b/doc/images/techstack.svg
new file mode 100644
index 00000000000..8f7dcc5a991
--- /dev/null
+++ b/doc/images/techstack.svg
@@ -0,0 +1,603 @@
+
+
+
+
diff --git a/doc/index.rst b/doc/index.rst
index 7f1f674ce3f..2ed750a3849 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -1,124 +1,48 @@
-=================
- Welcome to Ceph
-=================
+===============
+Welcome to Ceph
+===============
+Ceph is an open source storage system that delivers extraordinary scalability--thousands of clients
+accessing petabytes to exabytes of data--with high performance and solid reliability.
-Ceph is a unified, distributed storage system that operates on a large
-number of hosts connected by a TCP/IP network. Ceph has been designed
-to accommodate multiple petabytes of storage with ease.
+Ceph leverages commodity hardware to accommodate large numbers of Object Storage Devices (OSDs)
+operating in clusters over a TCP/IP network. Ceph's Reliable Autonomic Distributed Object Store (RADOS)
+utilizes the CPU, memory and network interface of the OSDs to communicate with each other,
+replicate data, and redistribute data dynamically. Ceph's monitors maintain a master copy of the
+OSD cluster map. Monitors also use the Paxos algorithm to to resolve disparities among different versions
+of the OSD cluster map as maintained by a plurality of monitors.
-Ceph Distributed File System provides POSIX filesystem semantics with
-distributed metadata management.
+Client applications access RADOS OSD clusters in several ways. A C/C++ binding (``librados``) provides an
+application with direct access to RADOS OSDs. Applications can access RADOS as a block device (``rbd``) using a
+device driver (``/dev/rbd``) or the Qemu Kernel-based Virtual Machine (KVM). The RADOS RESTful gateway (``radosgw``)
+supports popular protocols like Amazon S3 and Swift so that applications that support those
+data storage interfaces can utilize RADOS OSDs. Finally, client applications can access RADOS OSDs
+using the Ceph file system.
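+
+For example, the ``rbd`` tool can create and map a block device image backed by
+RADOS (a sketch only; the image name and size are arbitrary, and authentication
+options are omitted)::
+
+    $ rbd create --size=10240 tengigs
+    $ rbd map tengigs
+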
-RADOS is a reliable object store, used by Ceph, but also directly
-accessible by client applications.
+The Ceph File System (Ceph FS) is a virtual file system (VFS) with POSIX semantics that provides
+client applications with a unified interface to petabytes or even exabytes of data. Ceph metadata servers
+provide the Ceph FS file system mapping. Client applications access Ceph FS via a Filesystem in User Space (FUSE),
+a Kernel Object (KO), or the Ceph VFS.
-``radosgw`` is an S3-compatible RESTful HTTP service for object
-storage, using RADOS storage.
+.. image:: images/techstack.png
-RBD is a Linux kernel feature that exposes RADOS storage as a block
-device. Qemu/KVM also has a direct RBD client, that avoids the kernel
-overhead.
+Ceph Development Status
+=======================
+The Ceph project is currently focused on stability. The Ceph file system is functionally complete,
+but has not been tested well enough at scale and under load to recommend it for a production environment yet.
+We recommend deploying Ceph for testing and evaluation. We do not recommend deploying Ceph into a
+production environment or storing valuable data until stress testing is complete.
+Ceph is developed on Linux. You may attempt to deploy Ceph on other platforms, but Linux is the
+target platform for the Ceph project. You can access the Ceph file system from other operating systems
+using NFS or Samba re-exports.
-.. ditaa::
-
- /---------+-----------+-----------\/----------+------\/---------\/-----------\
- | ceph.ko | ceph-fuse | libcephfs || kernel | Qemu || ||librados |
- |c9EE |c3EA |c6F6 || /dev/rbd | /KVM || ||c6F6 |
- +---------+-----------+-----------+|c9EE |c3EA || |+-----------+
- | Ceph DFS (protocol) |+----------+------+| radosgw || |
- | +-----------------+| || || |
- | | ceph-mds || RBD (protocol) || || |
- | |cFA2 || ||cFB5 || |
- +---------------+-----------------++-----------------++---------++ |
- | |
- | +=------+ +=------+ |
- | |cls_rbd| |cls_rgw| |
- | +-------+ +-------+ |
- | |
- | ceph-osd |
- |cFB3 |
- \----------------------------------------------------------------------------/
-
-
-
-Mailing lists, bug tracker, IRC channel
-=======================================
-
-- `Ceph Blog `__: news and status info
-- The development mailing list is at ceph-devel@vger.kernel.org, and
- archived at Gmane_. Send email to subscribe_ or unsubscribe_.
-- `Bug/feature tracker `__:
- for filing bugs and feature requests.
-- IRC channel ``#ceph`` on ``irc.oftc.net``: Many of the core
- developers are on IRC, especially daytime in the US/Pacific
- timezone. You are welcome to join and ask questions. You can find
- logs of the channel `here `__.
-- `Commercial support `__
-
-.. _subscribe: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
-.. _unsubscribe: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
-.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
-
-
-Status
-======
-
-The Ceph project is currently focused on stability. The object store
-(RADOS), radosgw, and RBD are considered reasonably stable. However,
-we do not yet recommend storing valuable data with it yet without
-proper precautions.
-
-The OSD component of RADOS relies heavily on the stability and
-performance of the underlying filesystem. In the long-term we believe
-that the best performance and stability will come from ``btrfs``.
-Currently, you need to run the latest ``btrfs`` kernel to get the
-latest stability fixes, and there are several performance fixes that
-have not yet hit the mainline kernel. In the short term you may wish
-to carefully consider the tradeoffs between ``xfs``, ``ext4`` and
-``btrfs``. In particular:
-
-* ``btrfs`` can efficiently clone objects, which improves performance
- and space utilization when using snapshots with RBD and the
- distributed filesystem. ``xfs`` and ``ext4`` will have to copy
- snapshotted objects the first time they are touched.
-
-* ``xfs`` has a 64 KB limit on extended attributes (xattrs).
-
-* ``ext4`` has a 4 KB limit on xattrs.
-
-Ceph uses xattrs for internal object state, snapshot metadata, and
-``radosgw`` ACLs. For most purposes, the 64 KB provided by ``xfs`` is
-plenty, making that our second choice if ``btrfs`` is not an option
-for you. The 4 KB limit in ``ext4`` is easily hit by ``radosgw``, and
-will cause ``ceph-osd`` to crash, making that a poor choice for
-``radosgw`` users. On the other hand, if you are using RADOS or RBD
-without snapshots and without ``radosgw``, ``ext4`` will be just
-fine. We will have a workaround for xattr size limitations shortly,
-making these problems largely go away.
-
-.. _cfuse-kernel-tradeoff:
-
-The Ceph filesystem is functionally fairly complete, but has not been
-tested well enough at scale and under load yet. Multi-master MDS is
-still problematic and we recommend running just one active MDS
-(standbys are ok). If you have problems with ``kclient`` or
-``ceph-fuse``, you may wish to try the other option; in general,
-``kclient`` is expected to be faster (but be sure to use the latest
-Linux kernel!) while ``ceph-fuse`` provides better stability by not
-triggering kernel crashes.
-
-Ceph is developed on Linux. Other platforms may work, but are not the
-focus of the project. Filesystem access from other operating systems
-can be done via NFS or Samba re-exports.
-
-
-Table of Contents
-=================
.. toctree::
- :maxdepth: 3
+ :maxdepth: 1
+ :hidden:
start/index
+ install/index
+ configure/index
architecture
ops/index
rec/index
@@ -129,9 +53,3 @@ Table of Contents
man/index
papers
appendix/index
-
-
-Indices and tables
-==================
-
-- :ref:`genindex`
diff --git a/doc/install/file_system_requirements.rst b/doc/install/file_system_requirements.rst
new file mode 100644
index 00000000000..62b711d6034
--- /dev/null
+++ b/doc/install/file_system_requirements.rst
@@ -0,0 +1,31 @@
+========================
+File System Requirements
+========================
+Ceph OSDs depend on the Extended Attributes (XATTRS) of the underlying file system for:
+
+- Internal object state
+- Snapshot metadata
+- RADOS Gateway Access Control Lists (ACLs)
+
+Ceph OSDs rely heavily upon the stability and performance of the underlying file system. The
+underlying file system must provide sufficient capacity for XATTRS. File system candidates for
+Ceph include B tree and B+ tree file systems such as:
+
+- ``btrfs``
+- ``XFS``
+
+.. warning::
+
+   The RADOS Gateway's ACLs and Ceph snapshots easily surpass the 4-kilobyte limit for XATTRS in ``ext4``,
+   causing the ``ceph-osd`` process to crash. So ``ext4`` is a poor file system choice if
+   you intend to deploy the RADOS Gateway or use snapshots.
+
+.. tip::
+
+   The Ceph team believes that the best performance and stability will come from ``btrfs``.
+   The ``btrfs`` file system has internal transactions that keep the local data set in a consistent state.
+   This makes OSDs based on ``btrfs`` simple to deploy, while providing scalability not
+   currently available from block-based file systems. The 64 KB XATTR limit in ``xfs``
+   is enough to accommodate RBD snapshot metadata and RADOS Gateway ACLs, so ``xfs`` is the
+   second-choice file system of the Ceph team. If you only plan to use RADOS and ``rbd`` without
+   snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.
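+
+For example, a minimal sketch of checking that a candidate file system handles
+extended attributes (the device, mount point, and attribute name below are
+hypothetical; ``setfattr`` and ``getfattr`` come from the ``attr`` package)::
+
+    $ sudo mkfs.xfs /dev/sdb1
+    $ sudo mount /dev/sdb1 /srv/osd0
+    $ sudo touch /srv/osd0/xattr-test
+    $ sudo setfattr -n user.test -v somevalue /srv/osd0/xattr-test
+    $ sudo getfattr -n user.test /srv/osd0/xattr-test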
diff --git a/doc/install/hardware_requirements.rst b/doc/install/hardware_requirements.rst
new file mode 100644
index 00000000000..d9cbea4c26f
--- /dev/null
+++ b/doc/install/hardware_requirements.rst
@@ -0,0 +1,46 @@
+=====================
+Hardware Requirements
+=====================
+Ceph OSDs run on commodity hardware and a Linux operating system over a TCP/IP network. OSD hosts
+should have ample data storage in the form of one or more hard drives or a Redundant Array of
+Independent Disks (RAID).
+
+
+
+This section discusses the hardware requirements for each daemon process,
+the tradeoffs of running one ``ceph-osd`` process per machine versus one per disk,
+and hardware-related configuration options such as journal locations.
+
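+As a sketch of one such configuration choice, an OSD journal can be placed on a
+separate, faster device by setting the journal path in ``ceph.conf`` (the host
+name, journal path, and size below are hypothetical)::
+
+    [osd.0]
+        host = osd-host-1
+        osd journal = /srv/osd.0.journal
+        osd journal size = 1000
+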
++--------------+----------------+------------------------------------+
+| Process | Criteria | Minimum Requirement |
++==============+================+====================================+
+| ``ceph-osd`` | Processor | 64-bit x86; x-cores; 2MB Ln Cache |
+| +----------------+------------------------------------+
+| | RAM | 12 GB |
+| +----------------+------------------------------------+
+| | Disk Space | 30 GB |
+| +----------------+------------------------------------+
+| | Volume Storage | 2-4TB SATA Drives |
+| +----------------+------------------------------------+
+| | Network | 2-1GB Ethernet NICs |
++--------------+----------------+------------------------------------+
+| ``ceph-mon`` | Processor | 64-bit x86; x-cores; 2MB Ln Cache |
+| +----------------+------------------------------------+
+| | RAM | 12 GB |
+| +----------------+------------------------------------+
+| | Disk Space | 30 GB |
+| +----------------+------------------------------------+
+| | Volume Storage | 2-4TB SATA Drives |
+| +----------------+------------------------------------+
+| | Network | 2-1GB Ethernet NICs |
++--------------+----------------+------------------------------------+
+| ``ceph-mds`` | Processor | 64-bit x86; x-cores; 2MB Ln Cache |
+| +----------------+------------------------------------+
+| | RAM | 12 GB |
+| +----------------+------------------------------------+
+| | Disk Space | 30 GB |
+| +----------------+------------------------------------+
+| | Volume Storage | 2-4TB SATA Drives |
+| +----------------+------------------------------------+
+| | Network | 2-1GB Ethernet NICs |
++--------------+----------------+------------------------------------+
\ No newline at end of file
diff --git a/doc/install/index.rst b/doc/install/index.rst
new file mode 100644
index 00000000000..e0cc10c37b6
--- /dev/null
+++ b/doc/install/index.rst
@@ -0,0 +1,19 @@
+======================
+RADOS OSD Provisioning
+======================
+RADOS OSD clusters are the foundation of the Ceph file system, and they can also provide
+object storage to clients via ``librados``, ``rbd`` and ``radosgw``. The following sections
+provide guidance for RADOS OSD provisioning:
+
+1. :doc:`Introduction to RADOS OSDs <introduction_to_rados_osds>`
+2. :doc:`Hardware Requirements <hardware_requirements>`
+3. :doc:`File System Requirements <file_system_requirements>`
+4. :doc:`Installing RADOS Processes and Daemons <installing_rados_processes_and_daemons>`
+
+.. toctree::
+ :hidden:
+
+   Introduction <introduction_to_rados_osds>
+   Hardware <hardware_requirements>
+   File System Reqs <file_system_requirements>
+   Installation <installing_rados_processes_and_daemons>
diff --git a/doc/install/installing_rados_processes_and_daemons.rst b/doc/install/installing_rados_processes_and_daemons.rst
new file mode 100644
index 00000000000..85dcad1aa7a
--- /dev/null
+++ b/doc/install/installing_rados_processes_and_daemons.rst
@@ -0,0 +1,41 @@
+======================================
+Installing RADOS Processes and Daemons
+======================================
+
+When you start the Ceph service, the initialization process activates a series of daemons that run in the background.
+The hosts in a typical RADOS cluster run at least one of three processes:
+
+- RADOS (``ceph-osd``)
+- Monitor (``ceph-mon``)
+- Metadata Server (``ceph-mds``)
+
+Each instance of a RADOS ``ceph-osd`` process performs a few essential tasks.
+
+1. Each ``ceph-osd`` instance provides clients with an object interface to the OSD for read/write operations.
+2. Each ``ceph-osd`` instance communicates and coordinates with other OSDs to store, replicate, redistribute and restore data.
+3. Each ``ceph-osd`` instance communicates with monitors to retrieve and/or update the master copy of the cluster map.
+
+Each instance of a monitor process performs a few essential tasks:
+
+1. Each ``ceph-mon`` instance communicates with other ``ceph-mon`` instances using PAXOS to establish consensus for distributed decision making.
+2. Each ``ceph-mon`` instance serves as the first point of contact for clients, and provides clients with the topology and status of the cluster.
+3. Each ``ceph-mon`` instance provides RADOS instances with a master copy of the cluster map and receives updates for the master copy of the cluster map.
+
+A metadata server (MDS) process performs a few essential tasks:
+
+1. Each ``ceph-mds`` instance provides clients with metadata regarding the file system.
+2. Each ``ceph-mds`` instance manages the file system namespace.
+3. Each ``ceph-mds`` instance coordinates access to the shared OSD cluster.
+
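+A minimal sketch of how these three daemon types might be laid out in a
+``ceph.conf`` configuration file (the host names and address below are
+hypothetical)::
+
+    [mon.a]
+        host = mon-host-1
+        mon addr = 192.168.0.10:6789
+
+    [osd.0]
+        host = osd-host-1
+
+    [mds.a]
+        host = mds-host-1
+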
+
+Installing ``ceph-osd``
+=======================
+
+
+Installing ``ceph-mon``
+=======================
+
+
+Installing ``ceph-mds``
+=======================
+
diff --git a/doc/install/introduction_to_rados_osds.rst b/doc/install/introduction_to_rados_osds.rst
new file mode 100644
index 00000000000..13f9a3cd11c
--- /dev/null
+++ b/doc/install/introduction_to_rados_osds.rst
@@ -0,0 +1,23 @@
+==========================
+Introduction to RADOS OSDs
+==========================
+RADOS OSD clusters are the foundation of Ceph. RADOS revolutionizes OSDs by utilizing the CPU,
+memory and network interface of the storage hosts to communicate with each other, replicate data, and
+redistribute data dynamically so that system administrators do not have to plan and coordinate
+these tasks manually. By utilizing each host's computing resources, RADOS increases scalability while
+simultaneously eliminating both a performance bottleneck and a single point of failure common
+to systems that manage clusters centrally. Each OSD maintains a copy of the cluster map.
+
+Ceph provides a light-weight monitor process to address faults in the OSD clusters as they
+arise. System administrators must expect hardware failure in petabyte-to-exabyte scale systems
+with thousands of OSD hosts. Ceph's monitors increase the reliability of the OSD clusters by
+maintaining a master copy of the cluster map, and using the Paxos algorithm to resolve disparities
+among versions of the cluster map maintained by a plurality of monitors.
+
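+For example, once a cluster is up, you can inspect the monitors' view of the
+cluster and the OSD cluster map with the ``ceph`` command-line tool (a sketch;
+it assumes a configured, running cluster)::
+
+    $ ceph -s
+    $ ceph osd dump
+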
+Ceph Metadata Servers (MDSs) are only required for Ceph FS. You can use RADOS block devices or the
+RADOS Gateway without MDSs. MDSs dynamically adapt their behavior to the current workload.
+As the size and popularity of parts of the file system hierarchy change over time,
+the MDSs dynamically redistribute the file system hierarchy among the available
+MDSs to balance the load and use server resources effectively.
+
+
\ No newline at end of file
diff --git a/doc/ops/manage/failures/radosgw.rst b/doc/ops/manage/failures/radosgw.rst
index 0de2aa48de5..0dffc36fb60 100644
--- a/doc/ops/manage/failures/radosgw.rst
+++ b/doc/ops/manage/failures/radosgw.rst
@@ -1,21 +1,21 @@
-=================================
- Recovering from radosgw failure
-=================================
+====================================
+ Recovering from ``radosgw`` failure
+====================================
-HTTP request errors
+HTTP Request Errors
===================
Examining the access and error logs for the web server itself is
probably the first step in identifying what is going on. If there is
a 500 error, that usually indicates a problem communicating with the
-radosgw daemon. Ensure the daemon is running, its socket path is
+``radosgw`` daemon. Ensure the daemon is running, its socket path is
configured, and that the web server is looking for it in the proper
location.
-Crashed radosgw process
-=======================
+Crashed ``radosgw`` process
+===========================
If the ``radosgw`` process dies, you will normally see a 500 error
from the web server (apache, nginx, etc.). In that situation, simply
@@ -25,8 +25,8 @@ To diagnose the cause of the crash, check the log in ``/var/log/ceph``
and/or the core file (if one was generated).
-Blocked radosgw requests
-========================
+Blocked ``radosgw`` Requests
+============================
If some (or all) radosgw requests appear to be blocked, you can get
some insight into the internal state of the ``radosgw`` daemon via
diff --git a/doc/overview.rst b/doc/overview.rst
deleted file mode 100644
index 6117e016d9d..00000000000
--- a/doc/overview.rst
+++ /dev/null
@@ -1,118 +0,0 @@
-=====================
-Ceph Product Overview
-=====================
-
-About this Document
-===================
-
-This document describes the features and benefits of using the Ceph
-Unified Distributed Storage System, and why it is superior to other
-systems.
-
-The audience for this document consists of sales and marketing
-personnel, new customers, and all persons who need to get a basic
-overview of the features and functionality of the system.
-
-Introduction to Ceph
-====================
-
-Ceph is a unified, distributed file system that operates on a large
-number of hosts connected by a network. Ceph has been designed to
-accommodate multiple petabytes of storage with ease. Since file sizes
-and network systems are always increasing, Ceph is perfectly
-positioned to accommodate these new technologies with its unique,
-self-healing and self-replicating architecture. Customers that need
-to move large amounts of metadata, such as media and entertainment
-companies, can greatly benefit from this product. Ceph is also
-dynamic; no need to cache data like those old-fashioned
-client-servers!
-
-Benefits of Using Ceph
-======================
-
-Ceph's flexible and scalable architecture translates into cost savings
-for users. Its powerful load balancing technology ensures the highest
-performance in terms of both speed and reliability. Nodes can be
-added "on the fly" with no impact to the system. In the case of node
-failure, the load is re-distributed with no degradation to the system.
-
-Failure detection is rapid and immediately remedied by efficiently
-re-adding nodes that were temporarily cut off from the network.
-
-Manageability
-=============
-
-Ceph is easy to manage, requiring little or no system administrative
-intervention. Its powerful placement algorithm and intelligent nodes
-manage data seamlessly across any node configuration. It also
-features multiple access methods to its object storage, block storage,
-and file systems. Figure 1 displays this configuration.
-
-.. image:: /images/CEPHConfig.jpg
-
-RADOS
-=====
-
-The Reliable Autonomic Distributed Object Store (RADOS) provides a
-scalable object storage management platform. RADOS allows the Object
-Storage Devices (OSD) to operate autonomously when recovering from
-failures or migrating data to expand clusters. RADOS employs existing
-node device intelligence to maximized scalability.
-
-The RADOS Block Device (RBD) provides a block device interface to a
-Linux machine, while striping the data across multiple RADOS objects
-for improved performance. RDB is supported for Linux kernels 2.6.37
-and higher. Each RDB device contains a directory with files and
-information
-
-RADOS GATEWAY
-=============
-
-``radosgw`` is an S3-compatible RESTful HTTP service for object
-storage, using RADOS storage.
-
-The RADOS Block Device (RBD) provides a block device interface to a
-Linux machine. To the user, RDB is transparent, which means that the
-entire Ceph system looks like a single, limitless hard drive that is
-always up and has no size limitations. .
-
-
-Hypervisor Support
-==================
-
-RBD supports the QEMU processor emulator and the Kernel-based Virtual
-Machine (KVM) virtualization infrastructure for the Linux kernel.
-Normally, these hypervisors would not be used together in a single
-configuration.
-
-KVM RBD
--------
-
-The Linux Kernel-based Virtual Machine (KVM) RBD provides the
-functionality for striping data across multiple distributed RADOS
-objects for improved performance.
-
-KVM-RDB is supported for Linux kernels 2.6.37 and higher. Each RDB
-device contains a directory with files and information.
-
-KVM employs the XEN hypervisor to manage its virtual machines.
-
-QEMU RBD
---------
-
-QEMU-RBD facilitates striping a VM block device over objects stored in
-the Ceph distributed object store. This provides shared block storage
-to facilitate VM migration between hosts.
-
-QEMU has its own hypervisor which interfaces with the librdb
-user-space library to store its virtual machines
-
-Monitors
-========
-
-Once you have determined your configuration needs, make sure you have
-access to the following documents:
-
-- Ceph Installation and Configuration Guide
-- Ceph System Administration Guide
-- Ceph Troubleshooting Manual
diff --git a/doc/start/block.rst b/doc/start/block.rst
deleted file mode 100644
index 4ed09be150a..00000000000
--- a/doc/start/block.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. index:: RBD
-
-=====================
- Starting to use RBD
-=====================
-
-Introduction
-============
-
-`RBD` is the block device component of Ceph. It provides a block
-device interface to a Linux machine, while striping the data across
-multiple `RADOS` objects for improved performance. For more
-information, see :ref:`rbd`.
-
-
-Installation
-============
-
-To use `RBD`, you need to install a Ceph cluster. Follow the
-instructions in :doc:`/ops/install/index`. Continue with these
-instructions once you have a healthy cluster running.
-
-
-Setup
-=====
-
-The default `pool` used by `RBD` is called ``rbd``. It is created for
-you as part of the installation. If you wish to use multiple pools,
-for example for access control, see :ref:`create-new-pool`.
-
-First, we need a ``client`` key that is authorized to access the right
-pool. Follow the instructions in :ref:`add-new-key`. Let's set the
-``id`` of the key to be ``bar``. You could set up one key per machine
-using `RBD`, or let them share a single key; your call. Make sure the
-keyring containing the new key is available on the machine.
-
-Then, authorize the key to access the new pool. Follow the
-instructions in :ref:`auth-pool`.
-
-
-Usage
-=====
-
-`RBD` can be accessed in two ways:
-
-- as a block device on a Linux machine
-- via the ``rbd`` network storage driver in Qemu/KVM
-
-
-.. rubric:: Example: As a block device
-
-Using the ``client.bar`` key you set up earlier, we can create an RBD
-image called ``tengigs``::
-
- rbd --name=client.bar create --size=10240 tengigs
-
-And then make that visible as a block device::
-
- touch secretfile
- chmod go= secretfile
- ceph-authtool --name=bar --print-key /etc/ceph/client.bar.keyring >secretfile
- rbd map tengigs --user bar --secret secretfile
-
-.. todo:: the secretfile part is really clumsy
-
-For more information, see :doc:`rbd `\(8).
-
-
-.. rubric:: Example: As a Qemu/KVM storage driver via Libvirt
-
-You'll need ``kvm`` v0.15, and ``libvirt`` v0.8.7 or newer.
-
-Create the RBD image as above, and then refer to it in the ``libvirt``
-virtual machine configuration::
-
-
-
-
- `_
+Follow the directions at `Sphinx 1.1.3 `_
+to install Sphinx. To run Sphinx with ``admin/build-doc``, at least the following are required:
+
+- ``python-dev``
+- ``python-pip``
+- ``python-virtualenv``
+- ``libxml2-dev``
+- ``libxslt-dev``
+- ``doxygen``
+- ``ditaa``
+- ``graphviz``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
+
+Prerequisites for Building Ceph Source Code
+===========================================
+Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly. Ceph build scripts
+depend on the following:
+
+- ``autotools-dev``
+- ``autoconf``
+- ``automake``
+- ``cdbs``
+- ``gcc``
+- ``g++``
+- ``git``
+- ``libboost-dev``
+- ``libedit-dev``
+- ``libssl-dev``
+- ``libtool``
+- ``libfcgi``
+- ``libfcgi-dev``
+- ``libfuse-dev``
+- ``linux-kernel-headers``
+- ``libcrypto++-dev``
+- ``libcrypto++``
+- ``libexpat1-dev``
+- ``libgtkmm-2.4-dev``
+- ``pkg-config``
+
+On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install autotools-dev autoconf automake cdbs
+ gcc g++ git libboost-dev libedit-dev libssl-dev libtool
+ libfcgi libfcgi-dev libfuse-dev linux-kernel-headers
+ libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev
+
+On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. ::
+
+ $ aptitude install autotools-dev autoconf automake cdbs
+ gcc g++ git libboost-dev libedit-dev libssl-dev libtool
+ libfcgi libfcgi-dev libfuse-dev linux-kernel-headers
+ libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev
+
+
+Ubuntu Requirements
+-------------------
+
+- ``uuid-dev``
+- ``libkeyutils-dev``
+- ``libgoogle-perftools-dev``
+- ``libatomic-ops-dev``
+- ``libaio-dev``
+- ``libgdata-common``
+- ``libgdata13``
+
+Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
+
+ $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev
+ libatomic-ops-dev libaio-dev libgdata-common libgdata13
+
+Debian
+------
+Alternatively, you may also install::
+
+ $ aptitude install fakeroot dpkg-dev
+ $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev
+
+openSUSE 11.2 (and later)
+-------------------------
+
+- ``boost-devel``
+- ``gcc-c++``
+- ``libedit-devel``
+- ``libopenssl-devel``
+- ``fuse-devel`` (optional)
+
+Execute ``zypper install`` for each dependency that isn't installed on your host. ::
+
+ $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel
\ No newline at end of file
diff --git a/doc/start/building_ceph.rst b/doc/start/building_ceph.rst
new file mode 100644
index 00000000000..81a2039901d
--- /dev/null
+++ b/doc/start/building_ceph.rst
@@ -0,0 +1,31 @@
+=============
+Building Ceph
+=============
+
+Ceph provides build scripts for source code and for documentation.
+
+Building Ceph
+=============
+Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following::
+
+ $ cd ceph
+ $ ./autogen.sh
+ $ ./configure
+ $ make
+
+You can use ``make -j`` to execute multiple jobs depending upon your system. For example::
+
+ $ make -j4
+
+Building Ceph Documentation
+===========================
+Ceph utilizes Python's Sphinx documentation tool. For details on the Sphinx documentation tool, refer to `Sphinx <http://sphinx.pocoo.org/>`_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script::
+
+ $ cd ceph
+ $ admin/build-doc
+
+Once you build the documentation set, you may navigate to the source directory to view it::
+
+ $ cd build-doc/output
+
+There should be an ``html`` directory and a ``man`` directory containing documentation in HTML and manpage formats respectively.
diff --git a/doc/start/cloning_the_ceph_source_code_repository.rst b/doc/start/cloning_the_ceph_source_code_repository.rst
new file mode 100644
index 00000000000..8486e2df298
--- /dev/null
+++ b/doc/start/cloning_the_ceph_source_code_repository.rst
@@ -0,0 +1,54 @@
+=======================================
+Cloning the Ceph Source Code Repository
+=======================================
+To check out the Ceph source code, you must have ``git`` installed
+on your local host. To install ``git``, execute::
+
+ $ sudo apt-get install git
+
+You must also have a ``github`` account. If you do not have a
+``github`` account, go to `github.com `_ and register.
+Follow the directions for setting up git at `Set Up Git `_.
+
+Generate SSH Keys
+-----------------
+You must generate SSH keys for github to clone the Ceph
+repository. If you do not have SSH keys for ``github``, execute::
+
+ $ ssh-keygen -d
+
+Get the key to add to your ``github`` account::
+
+ $ cat .ssh/id_dsa.pub
+
+Copy the public key.
+
+Add the Key
+-----------
+Go to your ``github`` account,
+click on "Account Settings" (i.e., the 'tools' icon); then,
+click "SSH Keys" on the left side navbar.
+
+Click "Add SSH key" in the "SSH Keys" list, enter a name for
+the key, paste the key you generated, and press the "Add key"
+button.
+
+Clone the Source
+----------------
+To clone the Ceph source code repository, execute::
+
+ $ git clone git@github.com:ceph/ceph.git
+
+Once ``git clone`` executes, you should have a full copy of the Ceph repository.
+
+Clone the Submodules
+--------------------
+Before you can build Ceph, you must initialize and update the submodules::
+
+ $ git submodule init
+ $ git submodule update
+
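+Alternatively, you can combine both steps into a single command, as the
+top-level README does::
+
+    $ git submodule update --init
+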
+.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date::
+
+ $ git status
+
diff --git a/doc/start/download_packages.rst b/doc/start/download_packages.rst
new file mode 100644
index 00000000000..9bf6d091311
--- /dev/null
+++ b/doc/start/download_packages.rst
@@ -0,0 +1,41 @@
+====================
+Downloading Packages
+====================
+
+We automatically build Debian and Ubuntu packages for any branches or tags that appear in
+the ``ceph.git`` `repository `_. We build packages for the following
+architectures:
+
+- ``amd64``
+- ``i386``
+
+For each architecture, we build packages for the following distributions:
+
+- Debian 7.0 (``wheezy``)
+- Debian 6.0 (``squeeze``)
+- Debian unstable (``sid``)
+- Ubuntu 12.04 (``precise``)
+- Ubuntu 11.10 (``oneiric``)
+- Ubuntu 11.04 (``natty``)
+- Ubuntu 10.10 (``maverick``)
+
+When you execute the following commands to install the Ceph packages, replace ``{ARCH}`` with the architecture of your CPU,
+``{DISTRO}`` with the code name of your operating system (e.g., ``wheezy``, rather than the version number) and
+``{BRANCH}`` with the version of Ceph you want to run (e.g., ``master``, ``stable``, ``unstable``, ``v0.44``, etc.). ::
+
+ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/autobuild.asc \
+ | sudo apt-key add -
+
+ sudo tee /etc/apt/sources.list.d/ceph.list <`_
\ No newline at end of file
diff --git a/doc/start/filesystem.rst b/doc/start/filesystem.rst
deleted file mode 100644
index 5a10f79ec82..00000000000
--- a/doc/start/filesystem.rst
+++ /dev/null
@@ -1,73 +0,0 @@
-========================
- Starting to use CephFS
-========================
-
-Introduction
-============
-
-The Ceph Distributed File System is a scalable network file system
-aiming for high performance, large data storage, and POSIX
-compliance. For more information, see :ref:`cephfs`.
-
-
-Installation
-============
-
-To use `Ceph DFS`, you need to install a Ceph cluster. Follow the
-instructions in :doc:`/ops/install/index`. Continue with these
-instructions once you have a healthy cluster running.
-
-
-Setup
-=====
-
-First, we need a ``client`` key that is authorized to access the
-filesystem. Follow the instructions in :ref:`add-new-key`. Let's set
-the ``id`` of the key to be ``foo``. You could set up one key per
-machine mounting the filesystem, or let them share a single key; your
-call. Make sure the keyring containing the new key is available on the
-machine doing the mounting.
-
-
-Usage
-=====
-
-There are two main ways of using the filesystem. You can use the Ceph
-client implementation that is included in the Linux kernel, or you can
-use the FUSE userspace filesystem. For an explanation of the
-tradeoffs, see :ref:`Status `. Follow the
-instructions in :ref:`mounting`.
-
-Once you have the filesystem mounted, you can use it like any other
-filesystem. The changes you make on one client will be visible to
-other clients that have mounted the same filesystem.
-
-You can now use snapshots, automatic disk usage tracking, and all
-other features `Ceph DFS` has. All read and write operations will be
-automatically distributed across your whole storage cluster, giving
-you the best performance available.
-
-.. todo:: links for snapshots, disk usage
-
-You can use :doc:`cephfs `\(8) to interact with
-``cephfs`` internals.
-
-
-.. rubric:: Example: Home directories
-
-If you locate UNIX user account home directories under a Ceph
-filesystem mountpoint, the same files will be available from all
-machines set up this way.
-
-Users can move between hosts, or even use them simultaneously, and
-always access the same files.
-
-
-.. rubric:: Example: HPC
-
-In a HPC (High Performance Computing) scenario, hundreds or thousands
-of machines could all mount the Ceph filesystem, and worker processes
-on all of the machines could then access the same files for
-input/output.
-
-.. todo:: point to the lazy io optimization
diff --git a/doc/start/get_involved_in_the_ceph_community.rst b/doc/start/get_involved_in_the_ceph_community.rst
new file mode 100644
index 00000000000..241be479443
--- /dev/null
+++ b/doc/start/get_involved_in_the_ceph_community.rst
@@ -0,0 +1,21 @@
+===================================
+Get Involved in the Ceph Community!
+===================================
+These are exciting times in the Ceph community!
+Follow the `Ceph Blog `__ to keep track of Ceph progress.
+
+As you delve into Ceph, you may have questions or feedback for the Ceph development team.
+Ceph developers are often available on the ``#ceph`` IRC channel at ``irc.oftc.net``,
+particularly during daytime hours in the US Pacific Standard Time zone.
+Keep in touch with developer activity by subscribing_ to the email list at ceph-devel@vger.kernel.org.
+You can opt out of the email list at any time by unsubscribing_. A simple email is
+all it takes! If you would like to view the archives, go to Gmane_.
+You can help prepare Ceph for production by filing
+and tracking bugs, and providing feature requests using
+the `bug/feature tracker `__.
+
+.. _subscribing: mailto:majordomo@vger.kernel.org?body=subscribe+ceph-devel
+.. _unsubscribing: mailto:majordomo@vger.kernel.org?body=unsubscribe+ceph-devel
+.. _Gmane: http://news.gmane.org/gmane.comp.file-systems.ceph.devel
+
+If you need hands-on help, `commercial support `__ is available too!
\ No newline at end of file
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 9f305c5a910..2922d62552e 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -1,43 +1,28 @@
-=================
- Getting Started
-=================
+===============
+Getting Started
+===============
+Welcome to Ceph! The following sections provide information
+that will help you get started before you install Ceph:
-.. todo:: write about vstart, somewhere
-
-The Ceph Storage System consists of multiple components, and can be
-used in multiple ways. To guide you through it, please pick an aspect
-of Ceph that is most interesting to you:
-
-- :doc:`Object storage