mirror of https://github.com/ceph/ceph (synced 2025-04-18 21:36:09 +00:00)

doc,man: typos found by codespell

Signed-off-by: Dimitri Papadopoulos <3234522+DimitriPapadopoulos@users.noreply.github.com>

parent 82a77ef058
commit 7677651618
@@ -4,7 +4,7 @@ Converting an existing cluster to cephadm
 =========================================
 
 It is possible to convert some existing clusters so that they can be managed
-with ``cephadm``. This statment applies to some clusters that were deployed
+with ``cephadm``. This statement applies to some clusters that were deployed
 with ``ceph-deploy``, ``ceph-ansible``, or ``DeepSea``.
 
 This section of the documentation explains how to determine whether your
@@ -51,7 +51,7 @@ Preparation
 
    cephadm ls
 
-Before starting the converstion process, ``cephadm ls`` shows all existing
+Before starting the conversion process, ``cephadm ls`` shows all existing
 daemons to have a style of ``legacy``. As the adoption process progresses,
 adopted daemons will appear with a style of ``cephadm:v1``.
 
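
Context (illustrative, not part of this commit): the ``legacy`` vs ``cephadm:v1`` styles mentioned above can be inspected per host; the ``jq`` filter is an assumption about the JSON that ``cephadm ls`` emits::

    # list every daemon with its style; "legacy" means not yet adopted
    cephadm ls | jq -r '.[] | "\(.name) \(.style)"'
    # adopt a remaining legacy daemon (daemon name is an example)
    cephadm adopt --style legacy --name mon.$(hostname)
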
@@ -82,7 +82,7 @@ All osds on the host will be scheduled to be removed. You can check osd removal
 
 see :ref:`cephadm-osd-removal` for more details about osd removal
 
-You can check if there are no deamons left on the host with the following:
+You can check if there are no daemons left on the host with the following:
 
 .. prompt:: bash #
 
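
Context (a sketch, not the elided command from the original page): two plausible ways to confirm a drained host, assuming a cephadm-managed cluster::

    cephadm ls             # run on the host; an empty list means nothing remains
    ceph orch ps <host>    # or ask the orchestrator what it still schedules there
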
@@ -202,7 +202,7 @@ Setting the initial CRUSH location of host
 ==========================================
 
 Hosts can contain a ``location`` identifier which will instruct cephadm to
-create a new CRUSH host located in the specified hierachy.
+create a new CRUSH host located in the specified hierarchy.
 
 .. code-block:: yaml
 
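
Context (illustrative, not part of this commit): the YAML block referenced above is elided in this hunk; a minimal host spec with a ``location`` field might look like this (hostname, address, and bucket are assumptions)::

    service_type: host
    hostname: node-00
    addr: 192.168.0.10
    location:
      rack: rack1
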
@@ -524,7 +524,7 @@ Purging a cluster
 
 .. danger:: THIS OPERATION WILL DESTROY ALL DATA STORED IN THIS CLUSTER
 
-In order to destory a cluster and delete all data stored in this cluster, pause
+In order to destroy a cluster and delete all data stored in this cluster, pause
 cephadm to avoid deploying new daemons.
 
 .. prompt:: bash #
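
Context (a sketch, not part of this commit): the pause step above is typically followed by ``cephadm rm-cluster``; the fsid is a placeholder::

    ceph orch pause
    cephadm rm-cluster --force --zap-osds --fsid <fsid>
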
@@ -435,7 +435,7 @@ Consider the following service specification:
     count: 3
     label: myfs
 
-This service specifcation instructs cephadm to deploy three daemons on hosts
+This service specification instructs cephadm to deploy three daemons on hosts
 labeled ``myfs`` across the cluster.
 
 If there are fewer than three daemons deployed on the candidate hosts, cephadm
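
Context (illustrative, not part of this commit): a full specification around the ``count``/``label`` fragment above could read as follows; the service type and ids are assumptions::

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs
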
@@ -170,8 +170,8 @@ network ``10.1.2.0/24``, run the following commands:
 
    ceph orch apply mon --placement="newhost1,newhost2,newhost3"
 
-Futher Reading
-==============
+Further Reading
+===============
 
 * :ref:`rados-operations`
 * :ref:`rados-troubleshooting-mon`
@@ -768,8 +768,8 @@ layout, it is recommended to apply different OSD specs matching only one
 set of hosts. Typically you will have a spec for multiple hosts with the
 same layout.
 
-The sevice id as the unique key: In case a new OSD spec with an already
-applied service id is applied, the existing OSD spec will be superseeded.
+The service id as the unique key: In case a new OSD spec with an already
+applied service id is applied, the existing OSD spec will be superseded.
 cephadm will now create new OSD daemons based on the new spec
 definition. Existing OSD daemons will not be affected. See :ref:`cephadm-osd-declarative`.
 
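
Context (a sketch under assumed names, not part of this commit): re-applying a spec with the same ``service_id`` supersedes the stored one::

    cat > osd-spec.yaml <<'EOF'
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
    EOF
    ceph orch apply -i osd-spec.yaml   # same service_id, so the old spec is superseded
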
@@ -912,8 +912,8 @@ activates all existing OSDs on a host.
 
 This will scan all existing disks for OSDs and deploy corresponding daemons.
 
-Futher Reading
-==============
+Further Reading
+===============
 
 * :ref:`ceph-volume`
 * :ref:`rados-index`
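
Context (illustrative, not part of this commit): the activation command described above takes the host name as its argument::

    ceph cephadm osd activate <host>
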
@@ -156,7 +156,7 @@ High availability service for RGW
 =================================
 
 The *ingress* service allows you to create a high availability endpoint
-for RGW with a minumum set of configuration options. The orchestrator will
+for RGW with a minimum set of configuration options. The orchestrator will
 deploy and manage a combination of haproxy and keepalived to provide load
 balancing on a floating virtual IP.
 
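
Context (a sketch, not part of this commit): an *ingress* spec for an existing RGW service; every name, IP, and port below is an illustrative assumption::

    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw
      virtual_ip: 203.0.113.10/24
      frontend_port: 8080
      monitor_port: 1967
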
@@ -273,7 +273,7 @@ To call miscellaneous like ``ceph-objectstore-tool`` or
     0: [v2:127.0.0.1:3300/0,v1:127.0.0.1:6789/0] mon.myhostname
 
 This command sets up the environment in a way that is suitable
-for extended daemon maintenance and running the deamon interactively.
+for extended daemon maintenance and running the daemon interactively.
 
 .. _cephadm-restore-quorum:
 

@@ -324,7 +324,7 @@ Get the container image::
 
    ceph config get "mgr.hostname.smfvfd" container_image
 
-Create a file ``config-json.json`` which contains the information neccessary to deploy
+Create a file ``config-json.json`` which contains the information necessary to deploy
 the daemon:
 
 .. code-block:: json

@@ -123,7 +123,7 @@ clients allowed, even some capabilities are not needed or wanted by the clients,
 as pre-issuing capabilities could reduce latency in some cases.
 
 If there is only one client, usually it will be the loner client for all the inodes.
-While in multiple clients case, the MDS will try to caculate a loner client out for
+While in multiple clients case, the MDS will try to calculate a loner client out for
 each inode depending on the capabilities the clients (needed | wanted), but usually
 it will fail. The loner client will always get all the capabilities.
 

@@ -115,7 +115,7 @@ To stop a mirroring directory snapshots use::
     $ ceph fs snapshot mirror remove <fs_name> <path>
 
 Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.
 
     $ mkdir -p /d0/d1/d2
     $ ceph fs snapshot mirror add cephfs /d0/d1/d2

@@ -124,7 +124,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
     Error EEXIST: directory /d0/d1/d2 is already tracked
 
 Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::
 
     $ ceph fs snapshot mirror add cephfs /d0/d1
     Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2

@@ -301,7 +301,7 @@ E.g., adding a regular file for synchronization would result in failed status::
 
 This allows a user to add a non-existent directory for synchronization. The mirror daemon
 would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
 synchronization.
 
 When mirroring is disabled, the respective `fs mirror status` command for the file system
@@ -187,7 +187,7 @@ It is **important** to ensure that all workers have completed the
 scan_extents phase before any workers enter the scan_inodes phase.
 
 After completing the metadata recovery, you may want to run cleanup
-operation to delete ancillary data geneated during recovery.
+operation to delete ancillary data generated during recovery.
 
 ::
 
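
Context (not part of this commit): the command under the ``::`` block is elided in this hunk; the cleanup described is performed with ``cephfs-data-scan`` (pool name is a placeholder)::

    cephfs-data-scan cleanup <data_pool>
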
@@ -10,7 +10,7 @@ storage administrators among others can use the common CLI provided by the
 ceph-mgr volumes module to manage the CephFS exports.
 
 The ceph-mgr volumes module implements the following file system export
-abstactions:
+abstractions:
 
 * FS volumes, an abstraction for CephFS file systems
 

@@ -359,13 +359,13 @@ To delete a partial clone use::
   $ ceph fs subvolume rm <vol_name> <clone_name> [--group_name <group_name>] --force
 
 .. note:: Cloning only synchronizes directories, regular files and symbolic links. Also, inode timestamps (access and
-          modification times) are synchronized upto seconds granularity.
+          modification times) are synchronized up to seconds granularity.
 
 An `in-progress` or a `pending` clone operation can be canceled. To cancel a clone operation use the `clone cancel` command::
 
   $ ceph fs clone cancel <vol_name> <clone_name> [--group_name <group_name>]
 
-On successful cancelation, the cloned subvolume is moved to `canceled` state::
+On successful cancellation, the cloned subvolume is moved to `canceled` state::
 
   $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
   $ ceph fs clone cancel cephfs clone1
@@ -64,7 +64,7 @@ performance issues::
     MDS_SLOW_REQUEST 1 MDSs report slow requests
         mds.fs-01(mds.0): 5 slow requests are blocked > 30 secs
 
-Where, for intance, ``MDS_SLOW_REQUEST`` is the unique code representing the
+Where, for instance, ``MDS_SLOW_REQUEST`` is the unique code representing the
 condition where requests are taking long time to complete. And the following
 description shows its severity and the MDS daemons which are serving these
 slow requests.
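
Context (illustrative, not part of this commit): health codes such as ``MDS_SLOW_REQUEST`` can be listed with their descriptions, and temporarily silenced, on a live cluster::

    ceph health detail
    ceph health mute MDS_SLOW_REQUEST 1h   # optional: mute for one hour
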
@@ -23,7 +23,7 @@ Using LazyIO
 ============
 
 LazyIO includes two methods ``lazyio_propagate()`` and ``lazyio_synchronize()``.
-With LazyIO enabled, writes may not be visble to other clients until
+With LazyIO enabled, writes may not be visible to other clients until
 ``lazyio_propagate()`` is called. Reads may come from local cache (irrespective of
 changes to the file by other clients) until ``lazyio_synchronize()`` is called.
 

@@ -59,7 +59,7 @@ particular client/file descriptor in a parallel application:
 
    /* The barrier makes sure changes associated with all file descriptors
      are propagated so that there is certainty that the backing file
-     is upto date */
+     is up to date */
    application_specific_barrier();
 
    char in_buf[40];

@@ -8,7 +8,7 @@
 
 This document does NOT define a specific proposal or some future work.
 Instead it merely lists a few thoughts that MIGHT be relevant for future
-cephadm enhacements.
+cephadm enhancements.
 
 *******
 Intro
@@ -161,7 +161,7 @@ To stop a mirroring directory snapshots use::
     $ ceph fs snapshot mirror remove <fs_name> <path>
 
 Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.
 
     $ mkdir -p /d0/d1/d2
     $ ceph fs snapshot mirror add cephfs /d0/d1/d2

@@ -170,7 +170,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
     Error EEXIST: directory /d0/d1/d2 is already tracked
 
 Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::
 
     $ ceph fs snapshot mirror add cephfs /d0/d1
     Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2

@@ -355,7 +355,7 @@ E.g., adding a regular file for synchronization would result in failed status::
 
 This allows a user to add a non-existent directory for synchronization. The mirror daemon
 would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
 synchronization.
 
 When mirroring is disabled, the respective `fs mirror status` command for the file system
@@ -92,7 +92,7 @@ Shaman
 is a server offering RESTful API allowing the clients to query the
 information of repos hosted by chacra nodes. Shaman is also known
 for its `Web UI`_. But please note, shaman does not build the
-packages, it justs offers information of the builds.
+packages, it just offers information on the builds.
 
 As the following shows, `chacra`_ manages multiple projects whose metadata
 are stored in a database. These metadata are exposed via Shaman as a web

@@ -199,7 +199,7 @@ libraries in our dist tarball. They are
 - pmdk
 
 ``make-dist`` is a script used by our CI pipeline to create dist tarball so the
-tarball can be used to build the Ceph packages in a clean room environmet. When
+tarball can be used to build the Ceph packages in a clean room environment. When
 we need to upgrade these third party libraries, we should
 
 - update the CMake script

@@ -231,8 +231,8 @@ ref
 a unique id of a given version of a set packages. This id is used to reference
 the set packages under the ``<project>/<branch>``. It is a good practice to
 version the packaging recipes, like the ``debian`` directory for building deb
-packages and the ``spec`` for building rpm packages, and use ths sha1 of the
-packaging receipe for the ``ref``. But you could also the a random string for
+packages and the ``spec`` for building rpm packages, and use the sha1 of the
+packaging receipe for the ``ref``. But you could also use a random string for
 ``ref``, like the tag name of the built source tree.
 
 distro
@@ -171,7 +171,7 @@ pg stats reported to mgr
 ------------------------
 
 Crimson collects the per-pg, per-pool, and per-osd stats in a `MPGStats`
-messsage, and send it over to mgr, so that the mgr modules can query
+message, and send it over to mgr, so that the mgr modules can query
 them using the `MgrModule.get()` method.
 
 asock command

@@ -254,7 +254,7 @@ Comparison
 * Worst case
 
   - At least three writes are required additionally on WAL, object metadata, and data blocks.
-  - If the flush from WAL to the data parition occurs frequently, radix tree onode structure needs to be update
+  - If the flush from WAL to the data partition occurs frequently, radix tree onode structure needs to be update
     in many times. To minimize such overhead, we can make use of batch processing to minimize the update on the tree
     (the data related to the object has a locality because it will have the same parent node, so updates can be minimized)
 

@@ -285,7 +285,7 @@ Detailed Design
 
 .. code-block:: c
 
-   stuct onode {
+   struct onode {
     extent_tree block_maps;
    b+_tree omaps;
     map xattrs;

@@ -380,7 +380,7 @@ Detailed Design
 
 * Omap and xattr
   In this design, omap and xattr data is tracked by b+tree in onode. The onode only has the root node of b+tree.
-  The root node contains entires which indicate where the key onode exists.
+  The root node contains entries which indicate where the key onode exists.
   So, if we know the onode, omap can be found via omap b+tree.
 
 * Fragmentation

@@ -437,7 +437,7 @@ Detailed Design
 WAL
 ---
 Each SP has a WAL.
-The datas written to the WAL are metadata updates, free space update and small data.
+The data written to the WAL are metadata updates, free space update and small data.
 Note that only data smaller than the predefined threshold needs to be written to the WAL.
 The larger data is written to the unallocated free space and its onode's extent_tree is updated accordingly
 (also on-disk extent tree). We statically allocate WAL partition aside from data partition pre-configured.
@@ -51,7 +51,7 @@ Options
 
 .. option:: -k
 
-   Keep old configuration files instead of overwritting theses.
+   Keep old configuration files instead of overwriting these.
 
 .. option:: -K, --kstore
 
@@ -135,7 +135,7 @@ Environment variables
 
 {OSD,MDS,MON,RGW}
 
-Theses environment variables will contains the number of instances of the desired ceph process you want to start.
+These environment variables will contains the number of instances of the desired ceph process you want to start.
 
 Example: ::
 
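
Context (illustrative, not part of this commit): these variables are typically set when starting a development cluster with ``vstart.sh``; the counts are arbitrary::

    MON=3 OSD=3 MDS=1 RGW=1 ../src/vstart.sh -n -d
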
@@ -137,12 +137,12 @@ Running Workunits Using vstart_enviroment.sh
 
 Code can be tested by building Ceph locally from source, starting a vstart
 cluster, and running any suite against it.
-Similar to S3-Tests, other workunits can be run against by configuring your enviroment.
+Similar to S3-Tests, other workunits can be run against by configuring your environment.
 
-Set up the enviroment
-^^^^^^^^^^^^^^^^^^^^^
+Set up the environment
+^^^^^^^^^^^^^^^^^^^^^^
 
-Configure your enviroment::
+Configure your environment::
 
   $ . ./build/vstart_enviroment.sh
 
@@ -48,7 +48,7 @@ A job failure might be caused by one or more of the following reasons:
 
 * environment setup (`testing on varied
   systems <https://github.com/ceph/ceph/tree/master/qa/distros/supported>`_):
-  testing compatibility with stable realeases for supported versions.
+  testing compatibility with stable releases for supported versions.
 
 * permutation of config values: for instance, `qa/suites/rados/thrash
   <https://github.com/ceph/ceph/tree/master/qa/suites/rados/thrash>`_ ensures

@@ -5,7 +5,7 @@
 User documentation
 ==================
 
-The documentation on docs.ceph.com is generated from the restructuredText
+The documentation on docs.ceph.com is generated from the reStructuredText
 sources in ``/doc/`` in the Ceph git repository.
 
 Please make sure that your changes are written in a way that is intended
@@ -64,7 +64,7 @@ AuthMonitor::upgrade_format() called by `PaxosService::_active()`::
 boil down
 ---------
 
-* if `format_version >= current_version` then format is uptodate, return.
+* if `format_version >= current_version` then format is up-to-date, return.
 * if `features doesn't contain LUMINOUS` then `current_version = 1`
 * else if `features doesn't contain MIMIC` then `current_version = 2`
 * else `current_version = 3`

@@ -578,7 +578,7 @@ Compression will not be possible when using secure mode, unless configured specifically
 
 Post-compression frame format
 -----------------------------
-Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to acccept/send compressed frames or process all frames as decompressed.
+Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to accept/send compressed frames or process all frames as decompressed.
 
 # msgr2.x-force mode
 

@@ -28,7 +28,7 @@ out-of-band of the live acting set, similar to backfill, but still using
 the PG log to determine what needs to be done. This is known as *asynchronous
 recovery*.
 
-The threashold for performing asynchronous recovery instead of synchronous
+The threshold for performing asynchronous recovery instead of synchronous
 recovery is not a clear-cut. There are a few criteria which
 need to be met for asynchronous recovery:
 
@@ -35,7 +35,7 @@ concept of interval changes) and an increasing per-PG version number
 ``pg_info_t::last_update``. Furthermore, we maintain a log of "recent"
 operations extending back at least far enough to include any
 *unstable* writes (writes which have been started but not committed)
-and objects which aren't uptodate locally (see recovery and
+and objects which aren't up-to-date locally (see recovery and
 backfill). In practice, the log will extend much further
 (``osd_min_pg_log_entries`` when clean and ``osd_max_pg_log_entries`` when not
 clean) because it's handy for quickly performing recovery.

@@ -31,7 +31,7 @@ RBD
 
 For RBD, the primary goal is for either an OSD-internal agent or a
 cluster-external agent to be able to transparently shift portions
-of the consituent 4MB extents between a dedup pool and a hot base
+of the constituent 4MB extents between a dedup pool and a hot base
 pool.
 
 As such, RBD operations (including class operations and snapshots)

@@ -290,7 +290,7 @@ S3 Documentation reference : http://docs.aws.amazon.com/AmazonS3/latest/API/REST
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
 | PUT     | Object copy               | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
-| PUT     | Initate multipart upload  | Yes        |                                                                                                         |             |
+| PUT     | Initiate multipart upload | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
 | PUT     | Upload Part               | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
@@ -166,7 +166,7 @@ key/value into that node at the min would involve moving a bunch of
 bytes, which would be expensive (or verbose) to express purely as a
 sequence of byte operations. As such, each delta indicates the type
 as well as the location of the corresponding extent. Each block
-type can therefore implement CachedExtent::apply_delta as appopriate.
+type can therefore implement CachedExtent::apply_delta as appropriate.
 
 See src/os/crimson/seastore/cached_extent.h.
 See src/os/crimson/seastore/cache.h.

@@ -43,7 +43,7 @@ Windows Event Log, having Event ID 1000. The entry will also include the process
 the faulting module name and path as well as the exception code.
 
 Please note that in order to analyze crash dumps, the debug symbols are required.
-We're currently buidling Ceph using ``MinGW``, so by default ``DWARF`` symbols will
+We're currently building Ceph using ``MinGW``, so by default ``DWARF`` symbols will
 be embedded in the binaries. ``windbg`` does not support such symbols but ``gdb``
 can be used.
 

@@ -46,7 +46,7 @@ HOW TO ENABLE TRACING IN CEPH
 -----------------------------
 
 tracing in Ceph is disabled by default.
-it could be enabled globally, or for each entity seperately (e.g. rgw).
+it could be enabled globally, or for each entity separately (e.g. rgw).
 
 Enable tracing globally::
 
@@ -213,14 +213,14 @@ BlueStore OSD with the *prime-osd-dir* command::
 BlueFS log rescue
 =====================
 
-Some versions of BlueStore were susceptible to BlueFS log growing extremaly large -
+Some versions of BlueStore were susceptible to BlueFS log growing extremely large -
 beyond the point of making booting OSD impossible. This state is indicated by
 booting that takes very long and fails in _replay function.
 
 This can be fixed by::
   ceph-bluestore-tool fsck --path *osd path* --bluefs_replay_recovery=true
 
-It is advised to first check if rescue process would be successfull::
+It is advised to first check if rescue process would be successful::
   ceph-bluestore-tool fsck --path *osd path* \
     --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
 
@@ -88,8 +88,8 @@ Options
 
    override the ``$pid`` when expanding options. For example, if an option is
    configured like ``/var/log/$name.$pid.log``, the ``$pid`` portion in its
-   value will be substituded using the PID of **ceph-conf** instead of the
-   PID of the process specfied using the ``--name`` option.
+   value will be substituted using the PID of **ceph-conf** instead of the
+   PID of the process specified using the ``--name`` option.
 
 .. option:: -r, --resolve-search
 
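
Context (a usage sketch, not part of this commit): the ``--name`` behaviour described above, with example daemon and option names::

    ceph-conf --name osd.0 --show-config-value log_file
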
@@ -73,7 +73,7 @@ Commands
 
 .. option:: select_test <n>
 
-   Select the given build-in test instance as a the in-memory instance
+   Select the given built-in test instance as the in-memory instance
    of the type.
 
 .. option:: get_features

@@ -14,7 +14,7 @@ Synopsis
 Description
 ===========
 
-:program:`ceph-diff-sorted` is a simplifed *diff* utility optimized
+:program:`ceph-diff-sorted` is a simplified *diff* utility optimized
 for comparing two files with lines that are lexically sorted.
 
 The output is simplified in comparison to that of the standard `diff`

@@ -23,7 +23,7 @@ the real work. To mount a Ceph file system use::
 
     mount.ceph name@07fe3187-00d9-42a3-814b-72a4d5e7d5be.fs_name=/ /mnt/mycephfs -o mon_addr=1.2.3.4
 
 Mount helper can fill in the cluster FSID by reading the ceph configuration file.
-Its recommeded to call the mount helper via mount(8) as per::
+Its recommended to call the mount helper via mount(8) as per::
 
     mount -t ceph name@.fs_name=/ /mnt/mycephfs -o mon_addr=1.2.3.4
 
@@ -50,7 +50,7 @@ If the active daemon fails to send a beacon to the monitors for
 more than :confval:`mon_mgr_beacon_grace`, then it will be replaced
 by a standby.
 
-If you want to pre-empt failover, you can explicitly mark a ceph-mgr
+If you want to preempt failover, you can explicitly mark a ceph-mgr
 daemon as failed using ``ceph mgr fail <mgr name>``.
 
 Using modules
@@ -1212,7 +1212,7 @@ The command returns the URL where the Ceph Dashboard is located: ``https://<host
 
 Many Ceph tools return results in JSON format. We suggest that
 you install the `jq <https://stedolan.github.io/jq>`_ command-line
-utility to faciliate working with JSON data.
+utility to facilitate working with JSON data.
 
 
 Accessing the Dashboard
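
Context (illustrative, not part of this commit): a small example of the suggested ``jq`` workflow; the field path is an assumption about the JSON layout::

    ceph status --format json | jq '.health.status'
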
@@ -64,7 +64,7 @@ the deployment strategy:
 **block (data) only**
 ^^^^^^^^^^^^^^^^^^^^^
 If all devices are the same type, for example all rotational drives, and
-there are no fast devices to use for metadata, it makes sense to specifiy the
+there are no fast devices to use for metadata, it makes sense to specify the
 block device only and to not separate ``block.db`` or ``block.wal``. The
 :ref:`ceph-volume-lvm` command for a single ``/dev/sda`` device looks like::
 

@@ -139,7 +139,7 @@ In older releases, internal level sizes mean that the DB can fully utilize only
 specific partition / LV sizes that correspond to sums of L0, L0+L1, L1+L2,
 etc. sizes, which with default settings means roughly 3 GB, 30 GB, 300 GB, and
 so forth. Most deployments will not substantially benefit from sizing to
-accomodate L3 and higher, though DB compaction can be facilitated by doubling
+accommodate L3 and higher, though DB compaction can be facilitated by doubling
 these figures to 6GB, 60GB, and 600GB.
 
 Improvements in releases beginning with Nautilus 14.2.12 and Octopus 15.2.6
@@ -189,7 +189,7 @@ Naming Clusters (deprecated)
 Each Ceph cluster has an internal name that is used as part of configuration
 and log file names as well as directory and mountpoint names. This name
 defaults to "ceph". Previous releases of Ceph allowed one to specify a custom
-name instead, for example "ceph2". This was intended to faciliate running
+name instead, for example "ceph2". This was intended to facilitate running
 multiple logical clusters on the same physical hardware, but in practice this
 was rarely exploited and should no longer be attempted. Prior documentation
 could also be misinterpreted as requiring unique cluster names in order to

@@ -202,7 +202,7 @@ custom names may be progressively removed by future Ceph releases, so it is
 strongly recommended to deploy all new clusters with the default name "ceph".
 
 Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option. This
-option is present purely for backward compatibility and need not be accomodated
+option is present purely for backward compatibility and need not be accommodated
 by new tools and deployments.
 
 If you do need to allow multiple clusters to exist on the same host, please use
@@ -1142,7 +1142,7 @@ _______________
 One or more PGs has not been scrubbed recently. PGs are normally scrubbed
 within every configured interval specified by
 :confval:`osd_scrub_max_interval` globally. This
-interval can be overriden on per-pool basis with
+interval can be overridden on per-pool basis with
 :confval:`scrub_max_interval`. The warning triggers when
 ``mon_warn_pg_not_scrubbed_ratio`` percentage of interval has elapsed without a
 scrub since it was due.
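
Context (illustrative, not part of this commit): the per-pool override mentioned above is a pool option; pool name and interval are placeholders::

    ceph osd pool set <pool> scrub_max_interval 604800   # seconds (one week)
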
@@ -414,7 +414,7 @@ on the number of replicas, clones and snapshots.
   the cache pool but have not been flushed yet to the base pool. This field is
   only available when cache tiering is in use.
 - **USED COMPR:** amount of space allocated for compressed data (i.e. this
-  includes comrpessed data plus all the allocation, replication and erasure
+  includes compressed data plus all the allocation, replication and erasure
   coding overhead).
 - **UNDER COMPR:** amount of data passed through compression (summed over all
   replicas) and beneficial enough to be stored in a compressed form.
@@ -447,7 +447,7 @@ Or:
 
    ceph osd dump
 
 You can also check view OSDs according to their position in the CRUSH map by
-using the folloiwng command:
+using the following command:
 
 .. prompt:: bash #
 
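
Context (a guess at the elided command, stated as an assumption): listing OSDs by their CRUSH position is usually done with::

    ceph osd tree
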
@@ -41,7 +41,7 @@ No matter what happens, Ceph will not compromise on data integrity
 and consistency. If there's a failure in your network or a loss of nodes and
 you can restore service, Ceph will return to normal functionality on its own.
 
-But there are scenarios where you lose data availibility despite having
+But there are scenarios where you lose data availability despite having
 enough servers available to satisfy Ceph's consistency and sizing constraints, or
 where you may be surprised to not satisfy Ceph's constraints.
 The first important category of these failures resolve around inconsistent

@@ -112,7 +112,7 @@ CRUSH and place ``mon.e`` there ::
   $ ceph mon set_location e datacenter=site3
   $ ceph mon enable_stretch_mode e stretch_rule datacenter
 
-When stretch mode is enabled, the OSDs wlll only take PGs active when
+When stretch mode is enabled, the OSDs will only take PGs active when
 they peer across data centers (or whatever other CRUSH bucket type
 you specified), assuming both are alive. Pools will increase in size
 from the default 3 to 4, expecting 2 copies in each site. OSDs will only
@@ -31,7 +31,7 @@ Initial Troubleshooting
 **Are you able to reach to the mon nodes?**
 
 Doesn't happen often, but sometimes there are ``iptables`` rules that
-block accesse to mon nodes or TCP ports. These may be leftovers from
+block access to mon nodes or TCP ports. These may be leftovers from
 prior stress-testing or rule development. Try SSHing into
 the server and, if that succeeds, try connecting to the monitor's ports
 (``tcp/3300`` and ``tcp/6789``) using a ``telnet``, ``nc``, or similar tools.

@@ -361,7 +361,7 @@ Can I increase the maximum tolerated clock skew?
 The maximum tolerated clock skew is configurable via the
 ``mon-clock-drift-allowed`` option, and
 although you *CAN* you almost certainly *SHOULDN'T*. The clock skew mechanism
-is in place because clock-skewed monitors are liely to misbehave. We, as
+is in place because clock-skewed monitors are likely to misbehave. We, as
 developers and QA aficionados, are comfortable with the current default
 value, as it will alert the user before the monitors get out hand. Changing
 this value may cause unforeseen effects on the
@@ -539,7 +539,7 @@ Flapping OSDs
 When OSDs peer and check heartbeats, they use the cluster (back-end)
 network when it's available. See `Monitor/OSD Interaction`_ for details.
 
-We have tradtionally recommended separate *public* (front-end) and *private*
+We have traditionally recommended separate *public* (front-end) and *private*
 (cluster / back-end / replication) networks:
 
 #. Segregation of heartbeat and replication / recovery traffic (private)

@@ -279,7 +279,7 @@ For example,
 Object modification & Limitations
 ----------------------------------
 
-The cloud storage class once configured can then be used like any other storage class in the bucket lifecyle rules. For example,
+The cloud storage class once configured can then be used like any other storage class in the bucket lifecycle rules. For example,
 
 ::
 
@@ -351,6 +351,6 @@ Future Work
 
 * Federation between RGW and Cloud services.
 
-* Support transition to other cloud provideres (like Azure).
+* Support transition to other cloud providers (like Azure).
 
 .. _`Multisite Configuration`: ../multisite

@@ -117,7 +117,7 @@ These objects are accessed when listing buckets, when updating bucket
 contents, and updating and retrieving bucket statistics (e.g. for quota).
 
 See the user-visible, encoded class 'cls_user_bucket_entry' and its
-nested class 'cls_user_bucket' for the values of these omap entires.
+nested class 'cls_user_bucket' for the values of these omap entries.
 
 These listings are kept consistent with buckets in pool ".rgw".
 
@@ -64,7 +64,7 @@ To add a specific version of a package to the allowlist:
    # radosgw-admin script-package add --package='{package name} {package version}' [--allow-compilation]
 
 
-* When adding a diffrent version of a package which already exists in the list, the newly
+* When adding a different version of a package which already exists in the list, the newly
   added version will override the existing one.
 
 * When adding a package without a version specified, the latest version of the package

@@ -324,9 +324,9 @@ Lua Code Samples
 
   function print_owner(owner)
     RGWDebugLog("Owner:")
-    RGWDebugLog("  Dispaly Name: " .. owner.DisplayName)
+    RGWDebugLog("  Display Name: " .. owner.DisplayName)
     RGWDebugLog("  Id: " .. owner.User.Id)
-    RGWDebugLog("  Tenanet: " .. owner.User.Tenant)
+    RGWDebugLog("  Tenant: " .. owner.User.Tenant)
   end
 
   function print_acl(acl_type)
@@ -202,7 +202,7 @@ Buckets are either a bucket name, or '*' (wildcard). Wildcard bucket means the c
 Prefix can be defined to filter source objects.
 Tags are passed by a comma separated list of 'key=value'.
 Destination owner can be set to force a destination owner of the objects. If user mode is selected, only the destination bucket owner can be set.
-Destinatino storage class can also be condfigured.
+Destination storage class can also be configured.
 User id can be set for user mode, and will be the user under which the sync operation will be executed (for permissions validation).
 
 

@@ -150,7 +150,7 @@ Features Support
 | predicate as a projection       |                 | where address like '%new-york%';                                      |
 +---------------------------------+-----------------+-----------------------------------------------------------------------+
 | an alias to                     |                 | select (_1 like "_3_") as *likealias*,_1 from s3object                |
-| predicate as a prjection        |                 | where *likealias* = true and cast(_1 as int) between 800 and 900;     |
+| predicate as a projection       |                 | where *likealias* = true and cast(_1 as int) between 800 and 900;     |
 +---------------------------------+-----------------+-----------------------------------------------------------------------+
 | casting operator                |                 | select cast(123 as int)%2 from s3object;                              |
 +---------------------------------+-----------------+-----------------------------------------------------------------------+
@@ -96,7 +96,7 @@ advanced variables.
 
 **Deployment:**
 
-Perform the followint steps on the Ansible installer node.
+Perform the following steps on the Ansible installer node.
 
 #. As ``root``, execute the Ansible playbook:
 
@@ -118,7 +118,7 @@ allows selecting AES-128 as well. Adding / removing passphrases is currently
 not supported by RBD, but can be applied to the raw RBD data using compatible
 tools such as cryptsetup.
 
-The LUKS header size can vary (upto 136MiB in LUKS2), but is usually upto
+The LUKS header size can vary (up to 136MiB in LUKS2), but is usually up to
 16MiB, depending on the version of `libcryptsetup` installed. For optimal
 performance, the encryption format will set the data offset to be aligned with
 the image object size. For example expect a minimum overhead of 8MiB if using
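
Context (illustrative, not part of this commit): the LUKS header overhead discussed above applies when formatting an image for encryption; image and passphrase file are placeholders::

    rbd encryption format mypool/myimage luks2 passphrase.bin
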
@@ -323,7 +323,7 @@ has been enabled by default since the Giant release. Moreover, enabling the
 client admin socket allows the collection of metrics and can be invaluable
 for troubleshooting.
 
-This socket can be accessed on the hypvervisor (Nova compute) node::
+This socket can be accessed on the hypervisor (Nova compute) node::
 
     ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
 

@@ -4,7 +4,7 @@ Cuttlefish
 
 Cuttlefish is the 3rd stable release of Ceph. It is named after a type
 of cephalopod (order Sepiida) characterized by a unique internal shell, the
-cuttlebone, which is used for control of bouyancy.
+cuttlebone, which is used for control of buoyancy.
 
 v0.61.9 "Cuttlefish"
 ====================
@@ -55,7 +55,7 @@ Past vulnerabilities
 +------------+-------------------+-------------+--------------------------------------------+
 | 2018-07-10 | `CVE-2018-1128`_  | 7.5 High    | Cephx replay vulnerability                 |
 +------------+-------------------+-------------+--------------------------------------------+
-| 2018-07-27 | `CVE-2017-7519`_  | 4.4 Medium  | libradosstriper unvaliated format string   |
+| 2018-07-27 | `CVE-2017-7519`_  | 4.4 Medium  | libradosstriper unvalidated format string  |
 +------------+-------------------+-------------+--------------------------------------------+
 | 2018-08-01 | `CVE-2016-9579`_  | 7.6 High    | potential RGW XSS attack                   |
 +------------+-------------------+-------------+--------------------------------------------+

@@ -296,7 +296,7 @@ the following packages are required:
 - python3-dev
 - python3-pip
 - python3-sphinx
-- pytnon3-venv
+- python3-venv
 - libxml2-dev
 - libxslt1-dev
 - doxygen
@@ -533,14 +533,14 @@ Pull`_ approach.
 Notify Us
 ---------
 
-In case The PR did not got a review within in a resonable timeframe, please get in touch
+In case The PR did not got a review within in a reasonable timeframe, please get in touch
 with the corresponding component lead of the :ref:`clt`.
 
 Documentation Style Guide
 =========================
 
 One objective of the Ceph documentation project is to ensure the readability of
-the documentation in both native restructuredText format and its rendered
+the documentation in both native reStructuredText format and its rendered
 formats such as HTML. Navigate to your Ceph repository and view a document in
 its native format. You may notice that it is generally as legible in a terminal
 as it is in its rendered HTML format. Additionally, you may also notice that

@@ -324,7 +324,7 @@ SELinux ceph policy is very flexible allowing users to setup their ceph processes
 .B STANDARD FILE CONTEXT
 
 SELinux defines the file context types for the ceph, if you wanted to
-store files with these types in a diffent paths, you need to execute the semanage command to sepecify alternate labeling and then use restorecon to put the labels on disk.
+store files with these types in a diffent paths, you need to execute the semanage command to specify alternate labeling and then use restorecon to put the labels on disk.
 
 .B semanage fcontext -a -t ceph_exec_t '/srv/ceph/content(/.*)?'
 .br
 