doc: Fixed many hyperlinks, a few typos, and some minor clarifications.

fixes: #3564

Signed-off-by: John Wilkins <john.wilkins@inktank.com>
parent a7a3cbf8fc
commit f2c7a60c90
@@ -247,8 +247,8 @@ bad sectors on a disk that weren't apparent in a light scrub (weekly).
 
 .. todo:: explain "classes"
 
-.. _Placement Group States: ../cluster-ops/pg-states
-.. _Placement Group Concepts: ../cluster-ops/pg-concepts
+.. _Placement Group States: ../rados/operations/pg-states
+.. _Placement Group Concepts: ../rados/operations/pg-concepts
 
 Monitor Quorums
 ===============
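As a hedged aside to the hunk's context line about deep scrubs catching bad sectors that a light scrub misses: a deep scrub can also be triggered on a single placement group by hand (the PG id here is a placeholder, not from this commit)::

	ceph pg deep-scrub 0.5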
@@ -301,7 +301,7 @@ commands. The Cephx authentication system is similar to Kerberos, but avoids a
 single point of failure to ensure scalability and high availability. For
 details on Cephx, see `Ceph Authentication and Authorization`_.
 
-.. _Ceph Authentication and Authorization: ../cluster-ops/auth-intro/
+.. _Ceph Authentication and Authorization: ../rados/operations/auth-intro/
 
 librados
 --------
@@ -16,4 +16,4 @@ For example::
 mandatory when you have Ceph authentication running. See `Authentication`_
 for details.
 
-.. _Authentication: ../../cluster-ops/authentication/
+.. _Authentication: ../../rados/operations/authentication/
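For context, this hunk concerns mounting CephFS when cephx is enabled. A minimal sketch of such a mount (the monitor address, mount point, and secret are placeholders, not values from this commit)::

	sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secret={client-admin-key}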
@@ -31,4 +31,4 @@ To unmount the Ceph file system, you may use the ``umount`` command. For example
 See `mount.ceph`_ for details.
 
 .. _mount.ceph: ../../man/8/mount.ceph/
-.. _Authentication: ../../cluster-ops/authentication/
+.. _Authentication: ../../rados/operations/authentication/
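For reference, the ``umount`` usage this hunk's context line mentions is the standard one (the mount point is a placeholder)::

	sudo umount /mnt/mycephfs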
@@ -53,4 +53,4 @@ Build the RPM packages::
 For multi-processor CPUs use the ``-j`` option to accelerate the build.
 
 .. _build prerequisites: ../build-prerequisites
-.. _Ceph: ../cloning-the-ceph-source-code-repository
+.. _Ceph: ../clone-source
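The ``-j`` remark refers to parallel build jobs. A minimal sketch, assuming a make-based build step and a 4-core machine::

	make -j4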
@@ -81,7 +81,7 @@ Monitor commands
 A more complete summary of commands understood by the monitor cluster can be found in the
 wiki, at
 
-	http://ceph.com/docs/master/cluster-ops/control
+	http://ceph.com/docs/master/rados/operations/control
 
 
 Availability
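As a hedged illustration of the monitor-cluster commands the relinked control page documents (both exist in the ``ceph`` CLI; output formats vary by release)::

	ceph mon stat
	ceph quorum_status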
@@ -203,7 +203,7 @@ minimal settings for each instance of a daemon. For example:
 **ONLY** for ``mkcephfs`` and manual deployment. It **MUST NOT**
 be used with ``chef`` or ``ceph-deploy``.
 
-.. _Hardware Recommendations: ../../install/hardware-recommendations
+.. _Hardware Recommendations: ../../../install/hardware-recommendations
 
 
 .. _ceph-network-config:
@@ -260,7 +260,7 @@ in the daemon instance sections of your ``ceph.conf`` file.
 	public addr {host-public-ip-address}
 	cluster addr {host-cluster-ip-address}
 
-.. _hardware recommendations: ../../install/hardware-recommendations
+.. _hardware recommendations: ../../../install/hardware-recommendations
 
 
 .. _ceph-monitor-config:
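For context, the ``public addr`` and ``cluster addr`` settings shown above live in per-daemon sections of ``ceph.conf``. A minimal sketch, assuming a single OSD and hypothetical addresses::

	[osd.0]
		public addr = 192.168.0.10
		cluster addr = 10.0.0.10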
@@ -49,13 +49,13 @@ by using a method of storing XATTRs that is extrinsic to the underlying filesyst
 Synchronization Intervals
 =========================
 
-Periodically, the filestore needs to quiesce writes and synchronize the filesystem,
-which creates a consistent commit point. It can then free journal entries up to
-the commit point. Synchronizing more frequently tends to reduce the time required
-perform synchronization, and reduces the amount of data that needs to remain in the
-journal. Less frequent synchronization allows the backing filesystem to coalesce
-small writes and metadata updates more optimally--potentially resulting in more
-efficient synchronization.
+Periodically, the filestore needs to quiesce writes and synchronize the
+filesystem, which creates a consistent commit point. It can then free journal
+entries up to the commit point. Synchronizing more frequently tends to reduce
+the time required to perform synchronization, and reduces the amount of data
+that needs to remain in the journal. Less frequent synchronization allows the
+backing filesystem to coalesce small writes and metadata updates more
+optimally--potentially resulting in more efficient synchronization.
 
 
 ``filestore max sync interval``
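The synchronization interval this rewrapped paragraph describes is bounded by two filestore settings. A minimal sketch in ``ceph.conf`` (the values are illustrative, not recommendations)::

	[osd]
		filestore min sync interval = .01
		filestore max sync interval = 5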
@@ -11,7 +11,7 @@ Ceph OSDs use a journal for two reasons: speed and consistency.
   with short spurts of high-speed writes followed by periods without any
   write progress as the filesystem catches up to the journal.
 
-- **Consistency:** Ceph OSDs requires a filesystem interface that guarantees
+- **Consistency:** Ceph OSDs require a filesystem interface that guarantees
   atomic compound operations. Ceph OSDs write a description of the operation
   to the journal and apply the operation to the filesystem. This enables
   atomic updates to an object (for example, placement group metadata). Every
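The journal this hunk describes is configured per OSD. A hedged sketch, assuming a journal co-located with the OSD data (the path is the conventional default and the size, in megabytes, is a placeholder)::

	[osd]
		osd journal = /var/lib/ceph/osd/$cluster-$id/journal
		osd journal size = 1024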
@@ -347,18 +347,21 @@
 :Type: Float
 :Default: Once per day. ``60*60*24``
 
 
 ``osd deep scrub interval``
 
-:Description: The interval for "deep" scrubbing (fully reading all data)
+:Description: The interval for "deep" scrubbing (fully reading all data).
 :Type: Float
 :Default: Once per week. ``60*60*24*7``
 
 
 ``osd deep scrub stride``
 
-:Description: Read siez when doing a deep scrub
+:Description: Read size when doing a deep scrub.
 :Type: 32-bit Int
 :Default: 512 KB. ``524288``
 
 
 ``osd class dir``
 
 :Description: The class path for RADOS class plug-ins.
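Taken together, the settings this hunk touches would appear in ``ceph.conf`` like the following sketch (the values simply restate the documented defaults)::

	[osd]
		osd deep scrub interval = 604800    ; 60*60*24*7, once per week
		osd deep scrub stride = 524288      ; 512 KB read size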
@@ -414,9 +417,3 @@
 :Type: Boolean
 :Default: ``false``
 
-
-``osd kill backfill at``
-
-:Description: For debugging only.
-:Type: 32-bit Integer
-:Default: ``0``
@@ -247,5 +247,5 @@ See `Operating a Cluster`_ for details.
 
 
 .. _Managing Cookbooks with Knife: http://wiki.opscode.com/display/chef/Managing+Cookbooks+With+Knife
-.. _Installing Chef: ../../install/chef
-.. _Operating a Cluster: ../../init/
+.. _Installing Chef: ../../deployment/chef
+.. _Operating a Cluster: ../../operations/
@@ -2,7 +2,7 @@
 Ceph Deployment
 =================
 
-You can deploy Chef using many different deployment systems including Chef, Juju,
+You can deploy Ceph using many different deployment systems including Chef, Juju,
 Puppet, and Crowbar. If you are just experimenting, Ceph provides some minimal
 deployment tools that rely only on SSH and DNS to deploy Ceph. You need to set
 up the SSH and DNS settings manually.
@@ -276,7 +276,7 @@ Chef nodes. ::
 
 A list of the nodes you've configured should appear.
 
-See the `Deploy With Chef <../../config-cluster/chef>`_ section for information
+See the `Deploy With Chef <../../deployment/chef>`_ section for information
 on using Chef to deploy your Ceph cluster.
 
 .. _Chef Architecture Introduction: http://wiki.opscode.com/display/chef/Architecture+Introduction
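The command elided before this hunk, whose output "should appear", is presumably Chef's node listing (an inference from the surrounding text, not from this diff)::

	knife node list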
@@ -35,8 +35,8 @@ See `Filesystem Recommendations`_ for details.
 Add your OSD host to a rack in your cluster, connect it to the network
 and ensure that it has network connectivity.
 
-.. _Hardware Recommendations: ../../install/hardware-recommendations
-.. _Filesystem Recommendations: ../../config-cluster/file-system-recommendations
+.. _Hardware Recommendations: ../../../install/hardware-recommendations
+.. _Filesystem Recommendations: ../../configuration/filesystem-recommendations
 
 Install the Required Software
 -----------------------------
@@ -46,17 +46,17 @@ manually. See `Installing Debian/Ubuntu Packages`_ for details.
 You should configure SSH to a user with password-less authentication
 and root permissions.
 
-.. _Installing Debian/Ubuntu Packages: ../../install/debian
+.. _Installing Debian/Ubuntu Packages: ../../../install/debian
 
 For clusters deployed with Chef, create a `chef user`_, `configure
 SSH keys`_, `install Ruby`_ and `install the Chef client`_ on your host. See
 `Installing Chef`_ for details.
 
-.. _chef user: ../../install/chef#createuser
-.. _configure SSH keys: ../../install/chef#genkeys
-.. _install the Chef client: ../../install/chef#installchef
-.. _Installing Chef: ../../install/chef
-.. _Install Ruby: ../../install/chef#installruby
+.. _chef user: ../../deployment/install-chef#createuser
+.. _configure SSH keys: ../../deployment/install-chef#genkeys
+.. _install the Chef client: ../../deployment/install-chef#installchef
+.. _Installing Chef: ../../deployment/install-chef
+.. _Install Ruby: ../../deployment/install-chef#installruby
 
 Adding an OSD (Manual)
 ----------------------
@@ -234,8 +234,8 @@ completes. (Control-c to exit.)
 
 
 .. _Add/Move an OSD: ../crush-map#addosd
-.. _Configure Nodes: ../../config-cluster/chef#confignodes
-.. _Prepare OSD Disks: ../../config-cluster/chef#prepdisks
+.. _Configure Nodes: ../../deployment/chef#confignodes
+.. _Prepare OSD Disks: ../../deployment/chef#prepdisks
 .. _ceph: ../monitoring
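The "(Control-c to exit.)" context line suggests the docs here watch the cluster interactively until rebalancing completes, which is what the following command does (an inference from context)::

	ceph -w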
@@ -46,8 +46,8 @@ to one or more pools, or the cluster as a whole.
 
 .. toctree::
 
-	Cephx Overview <auth-intro>
-	authentication
+	Authentication Overview <auth-intro>
+	Cephx Authentication <authentication>
 
@@ -102,5 +102,5 @@ For example::
 
 
-.. _Storage Pools: ../../cluster-ops/pools
+.. _Storage Pools: ../../rados/operations/pools
 .. _RBD – Manage RADOS Block Device (RBD) Images: ../../man/8/rbd/
@@ -60,4 +60,4 @@ For example::
 	sudo rbd unmap /dev/rbd/rbd/foo
 
 
-.. _cephx: ../../cluster-ops/authentication/
+.. _cephx: ../../rados/operations/authentication/
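For symmetry with the ``rbd unmap`` call shown in this hunk, the corresponding map command looks like the following sketch (image name, pool, and user are placeholders)::

	sudo rbd map foo --pool rbd --name client.admin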
@@ -68,8 +68,8 @@ See `Create a Pool`_ for detail on specifying the number of placement groups for
 your pools, and `Placement Groups`_ for details on the number of placement
 groups you should set for your pools.
 
-.. _Create a Pool: ../../cluster-ops/pools#createpool
-.. _Placement Groups: ../../cluster-ops/placement-groups
+.. _Create a Pool: ../../rados/operations/pools#createpool
+.. _Placement Groups: ../../rados/operations/placement-groups
 
 Configure OpenStack Ceph Clients
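A hedged example of the pool creation the relinked pages document (the pool name and placement-group count are hypothetical)::

	ceph osd pool create volumes 128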
@@ -132,7 +132,7 @@ the temporary copy of the key::
 
 Save the uuid of the secret for configuring ``nova-compute`` later.
 
-.. _cephx authentication: ../../cluster-ops/authentication
+.. _cephx authentication: ../../rados/operations/authentication
 
 
 Configure OpenStack to use Ceph
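For context, the secret uuid mentioned above typically comes from defining a libvirt secret and setting its value, along these lines (the file names and key source are placeholders)::

	sudo virsh secret-define --file secret.xml
	sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.volumes.key)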
@@ -313,7 +313,7 @@ For example::
 a flattened image will take up more storage space than a layered clone.
 
 
-.. _cephx: ../../cluster-ops/authentication/
+.. _cephx: ../../rados/operations/authentication/
 .. _QEMU: ../qemu-rbd/
 .. _OpenStack: ../rbd-openstack/
 .. _CloudStack: ../rbd-cloudstack/
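The flatten operation contrasted with layered clones in this hunk can be sketched as follows (pool and image names are placeholders)::

	rbd flatten rbd/my-clone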
@@ -51,7 +51,7 @@ devices simultaneously.
 	librbd <librbdpy>
 
 
-.. _RBD Caching: ../../config-cluster/rbd-config-ref/
+.. _RBD Caching: ../rbd-config-ref/
 .. _kernel modules: ../rbd-ko/
 .. _Qemu: ../qemu-rbd/
 .. _OpenStack: ../rbd-openstack
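As a hedged illustration of the RBD caching settings the relinked reference documents (the values are illustrative; ``rbd cache size`` is in bytes)::

	[client]
		rbd cache = true
		rbd cache size = 33554432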