
doc: Whitespace cleanup.

Signed-off-by: Tommi Virtanen <tommi.virtanen@dreamhost.com>
Tommi Virtanen 2012-05-03 10:15:21 -07:00
parent 93dcc9886f
commit 5465e81097
30 changed files with 274 additions and 286 deletions

README

@@ -28,7 +28,7 @@ Building Ceph
To prepare the source tree after it has been git cloned,
    $ git submodule update --init
To build the server daemons, and FUSE client, execute the following:
@@ -118,5 +118,3 @@ To build the source code, you must install the following:
For example:
    $ apt-get install automake autoconf automake gcc g++ libboost-dev libedit-dev libssl-dev libtool libfcgi libfcgi-dev libfuse-dev linux-kernel-headers libcrypto++-dev
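For context beyond this hunk, a minimal sketch of the full prepare-and-build sequence, assuming the autotools layout (``autogen.sh`` and ``configure``) the tree used in this era; illustrative rather than the README's literal text. ::

    # prepare the cloned tree (submodule step quoted above), then build
    $ git submodule update --init
    $ ./autogen.sh
    $ ./configure
    $ make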


@@ -377,5 +377,3 @@ Syntax
::
    DELETE /{bucket}/{object}?uploadId= HTTP/1.1
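For illustration, the same abort-multipart request with placeholder values filled in; the bucket, object, upload ID, host, and authorization header below are hypothetical, following the usual layout of these API pages rather than anything quoted in this hunk. ::

    DELETE /mybucket/myobject?uploadId=0123456789abcdef HTTP/1.1
    Host: cname.domain.com
    Authorization: AWS {access-key}:{hash-of-header-and-secret}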


@@ -37,4 +37,3 @@ Response Entities
+----------------------------+-------------+-----------------------------------------------------------------+
| ``DisplayName``            | String      | The bucket owner's display name.                                |
+----------------------------+-------------+-----------------------------------------------------------------+


@@ -14,7 +14,7 @@ configuration settings. The default ``ceph.conf`` locations in sequential
order include:
1. ``$CEPH_CONF`` (*i.e.,* the path following
   the ``$CEPH_CONF`` environment variable)
2. ``-c path/path`` (*i.e.,* the ``-c`` command line argument)
3. ``/etc/ceph/ceph.conf``
4. ``~/.ceph/config``
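Because ``$CEPH_CONF`` and ``-c`` sit at the top of that search order, either one can point the tools at a non-default configuration; a small illustration with a placeholder path. ::

    # highest precedence: the CEPH_CONF environment variable
    $ CEPH_CONF=/path/to/ceph.conf ceph health
    # equivalent: pass the path explicitly with -c
    $ ceph -c /path/to/ceph.conf health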
@@ -151,7 +151,7 @@ monitor instance crash. An odd number of monitors (3) ensures that the Paxos
algorithm can determine which version of the cluster map is the most accurate.
.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
   the lack of a monitor may interrupt data service availability.
Ceph monitors typically listen on port ``6789``.
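To make the monitor guidance concrete, a hypothetical ``ceph.conf`` fragment for three monitors on the default port ``6789``; the host names and addresses are placeholders, not values from this commit. ::

    ; hypothetical monitor sections; adjust names and addresses to your hosts
    [mon.a]
            host = mon-host-1
            mon addr = 10.0.0.1:6789
    [mon.b]
            host = mon-host-2
            mon addr = 10.0.0.2:6789
    [mon.c]
            host = mon-host-3
            mon addr = 10.0.0.3:6789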


@@ -45,4 +45,3 @@ On ``myserver04``::
    mkdir srv/osd.3
.. important:: The ``host`` variable determines which host runs each instance of a Ceph daemon.
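As an illustration of the ``host`` variable mentioned above, a hypothetical ``ceph.conf`` fragment placing ``osd.3`` on ``myserver04`` alongside the directory created in this hunk; only the pairing of daemon and host is implied by the text. ::

    ; hypothetical placement of osd.3 on the host that got its data directory
    [osd.3]
            host = myserver04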


@@ -21,4 +21,3 @@ To start the cluster, execute the following::
Ceph should begin operating. You can check on the health of your Ceph cluster with the following::
    ceph -k mycluster.keyring -c <path>/ceph.conf health


@@ -49,4 +49,3 @@ If you are using ``ext4``, enable XATTRs. ::
file system of the Ceph team in the long run, but ``xfs`` is currently more
stable than ``btrfs``. If you only plan to use RADOS and ``rbd`` without
snapshots and without ``radosgw``, the ``ext4`` file system should work just fine.


@@ -48,4 +48,3 @@ Once you add either release or autobuild packages for Debian/Ubuntu, you may
download them with ``apt`` as follows::
    sudo apt-get update
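Installation itself would typically follow the update; a hedged sketch assuming the package is simply named ``ceph``, which is not quoted in this hunk. ::

    $ sudo apt-get update
    $ sudo apt-get install ceph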


@@ -46,4 +46,3 @@ processing capability and plenty of RAM.
|              +----------------+------------------------------------+
|              | Network        | 2-1GB Ethernet NICs                |
+--------------+----------------+------------------------------------+


@@ -24,7 +24,7 @@ You may install RPM packages as follows::
    rpm -i rpmbuild/RPMS/x86_64/ceph-*.rpm
.. note: We do not build RPM packages at this time. You may build them
   yourself by downloading the source code.
Proceed to Configuring a Cluster
--------------------------------
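The note above points at building the RPMs yourself; one hedged way to do that, assuming the source tarball carries a spec file (the command and version placeholder are illustrative, not taken from this commit). ::

    # build binary RPMs directly from a source tarball that includes a spec file;
    # the results land under rpmbuild/RPMS/, matching the install command above
    $ rpmbuild -tb ceph-<version>.tar.gz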


@@ -96,4 +96,3 @@ to install Sphinx. To run Sphinx, with ``admin/build-doc``, at least the followi
Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. ::
    $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz
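With those dependencies installed, the documentation build is driven by the ``admin/build-doc`` script named above; a minimal usage sketch. ::

    # run from the top of the source tree
    $ ./admin/build-doc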


@@ -41,4 +41,3 @@ Once you clone the source code and submodules, your Ceph repository will be on t
::
    git checkout master
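Pulling the quoted steps together, a hedged end-to-end sketch of getting a fresh tree onto ``master`` with submodules initialized; the clone URL is assumed from the project's GitHub home rather than quoted from this hunk. ::

    $ git clone https://github.com/ceph/ceph.git
    $ cd ceph
    $ git submodule update --init
    $ git checkout master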