From 76776516180c017f9a028ec387044106be82ec20 Mon Sep 17 00:00:00 2001
From: Dimitri Papadopoulos <3234522+DimitriPapadopoulos@users.noreply.github.com>
Date: Wed, 8 Dec 2021 12:43:27 +0100
Subject: [PATCH] doc,man: typos found by codespell

Signed-off-by: Dimitri Papadopoulos <3234522+DimitriPapadopoulos@users.noreply.github.com>
---
 doc/cephadm/adoption.rst | 4 ++--
 doc/cephadm/host-management.rst | 4 ++--
 doc/cephadm/operations.rst | 2 +-
 doc/cephadm/services/index.rst | 2 +-
 doc/cephadm/services/mon.rst | 4 ++--
 doc/cephadm/services/osd.rst | 8 ++++----
 doc/cephadm/services/rgw.rst | 2 +-
 doc/cephadm/troubleshooting.rst | 4 ++--
 doc/cephfs/capabilities.rst | 2 +-
 doc/cephfs/cephfs-mirroring.rst | 6 +++---
 doc/cephfs/disaster-recovery-experts.rst | 2 +-
 doc/cephfs/fs-volumes.rst | 6 +++---
 doc/cephfs/health-messages.rst | 2 +-
 doc/cephfs/lazyio.rst | 4 ++--
 doc/dev/cephadm/scalability-notes.rst | 2 +-
 doc/dev/cephfs-mirroring.rst | 6 +++---
 doc/dev/continuous-integration.rst | 8 ++++----
 doc/dev/crimson/crimson.rst | 2 +-
 doc/dev/crimson/poseidonstore.rst | 8 ++++----
 doc/dev/dev_cluster_deployement.rst | 4 ++--
 doc/dev/developer_guide/running-tests-locally.rst | 8 ++++----
 ...tests-integration-testing-teuthology-debugging-tips.rst | 2 +-
 doc/dev/documenting.rst | 2 +-
 doc/dev/mon-on-disk-formats.rst | 2 +-
 doc/dev/msgr2.rst | 2 +-
 doc/dev/osd_internals/async_recovery.rst | 2 +-
 doc/dev/osd_internals/log_based_pg.rst | 2 +-
 doc/dev/osd_internals/manifest.rst | 2 +-
 doc/dev/radosgw/s3_compliance.rst | 2 +-
 doc/dev/seastore.rst | 2 +-
 doc/install/windows-troubleshooting.rst | 2 +-
 doc/jaegertracing/index.rst | 2 +-
 doc/man/8/ceph-bluestore-tool.rst | 4 ++--
 doc/man/8/ceph-conf.rst | 4 ++--
 doc/man/8/ceph-dencoder.rst | 2 +-
 doc/man/8/ceph-diff-sorted.rst | 2 +-
 doc/man/8/mount.ceph.rst | 2 +-
 doc/mgr/administrator.rst | 2 +-
 doc/mgr/dashboard.rst | 2 +-
 doc/rados/configuration/bluestore-config-ref.rst | 4 ++--
 doc/rados/configuration/common.rst | 4 ++--
 doc/rados/operations/health-checks.rst | 2 +-
 doc/rados/operations/monitoring.rst | 4 ++--
 doc/rados/operations/stretch-mode.rst | 4 ++--
 doc/rados/troubleshooting/troubleshooting-mon.rst | 4 ++--
 doc/rados/troubleshooting/troubleshooting-osd.rst | 2 +-
 doc/radosgw/cloud-transition.rst | 4 ++--
 doc/radosgw/layout.rst | 2 +-
 doc/radosgw/lua-scripting.rst | 6 +++---
 doc/radosgw/multisite-sync-policy.rst | 2 +-
 doc/radosgw/s3select.rst | 2 +-
 doc/rbd/iscsi-target-ansible.rst | 2 +-
 doc/rbd/rbd-encryption.rst | 2 +-
 doc/rbd/rbd-openstack.rst | 2 +-
 doc/releases/cuttlefish.rst | 2 +-
 doc/security/cves.rst | 2 +-
 doc/start/documenting-ceph.rst | 6 +++---
 man/ceph_selinux.8 | 2 +-
 58 files changed, 94 insertions(+), 94 deletions(-)

diff --git a/doc/cephadm/adoption.rst b/doc/cephadm/adoption.rst
index e06422a716c..1130b4e11f1 100644
--- a/doc/cephadm/adoption.rst
+++ b/doc/cephadm/adoption.rst
@@ -4,7 +4,7 @@ Converting an existing cluster to cephadm
 =========================================
 
 It is possible to convert some existing clusters so that they can be managed
-with ``cephadm``. This statment applies to some clusters that were deployed
+with ``cephadm``. This statement applies to some clusters that were deployed
 with ``ceph-deploy``, ``ceph-ansible``, or ``DeepSea``.
 
 This section of the documentation explains how to determine whether your
@@ -51,7 +51,7 @@ Preparation
 
       cephadm ls
 
-   Before starting the converstion process, ``cephadm ls`` shows all existing
+   Before starting the conversion process, ``cephadm ls`` shows all existing
    daemons to have a style of ``legacy``. As the adoption process progresses,
    adopted daemons will appear with a style of ``cephadm:v1``.
diff --git a/doc/cephadm/host-management.rst b/doc/cephadm/host-management.rst
index 745765ff685..c10e372f7be 100644
--- a/doc/cephadm/host-management.rst
+++ b/doc/cephadm/host-management.rst
@@ -82,7 +82,7 @@ All osds on the host will be scheduled to be removed.
 You can check osd removal see :ref:`cephadm-osd-removal`
 for more details about osd removal
 
-You can check if there are no deamons left on the host with the following:
+You can check if there are no daemons left on the host with the following:
 
 .. prompt:: bash #
 
@@ -202,7 +202,7 @@ Setting the initial CRUSH location of host
 ==========================================
 
 Hosts can contain a ``location`` identifier which will instruct cephadm to
-create a new CRUSH host located in the specified hierachy.
+create a new CRUSH host located in the specified hierarchy.
 
 .. code-block:: yaml
diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst
index 08e493bd73a..ec6e8887a64 100644
--- a/doc/cephadm/operations.rst
+++ b/doc/cephadm/operations.rst
@@ -524,7 +524,7 @@ Purging a cluster
 
 .. danger:: THIS OPERATION WILL DESTROY ALL DATA STORED IN THIS CLUSTER
 
-In order to destory a cluster and delete all data stored in this cluster, pause
+In order to destroy a cluster and delete all data stored in this cluster, pause
 cephadm to avoid deploying new daemons.
 
 .. prompt:: bash #
diff --git a/doc/cephadm/services/index.rst b/doc/cephadm/services/index.rst
index f34180eb24f..a14b37f838e 100644
--- a/doc/cephadm/services/index.rst
+++ b/doc/cephadm/services/index.rst
@@ -435,7 +435,7 @@ Consider the following service specification:
     count: 3
     label: myfs
 
-This service specifcation instructs cephadm to deploy three daemons on hosts
+This service specification instructs cephadm to deploy three daemons on hosts
 labeled ``myfs`` across the cluster.
 If there are fewer than three daemons deployed on the candidate hosts, cephadm
diff --git a/doc/cephadm/services/mon.rst b/doc/cephadm/services/mon.rst
index 6326b73f46d..56bb0a99a11 100644
--- a/doc/cephadm/services/mon.rst
+++ b/doc/cephadm/services/mon.rst
@@ -170,8 +170,8 @@ network ``10.1.2.0/24``, run the following commands:
 
    ceph orch apply mon --placement="newhost1,newhost2,newhost3"
 
-Futher Reading
-==============
+Further Reading
+===============
 
 * :ref:`rados-operations`
 * :ref:`rados-troubleshooting-mon`
diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst
index b2193cefa95..32f50e2d47f 100644
--- a/doc/cephadm/services/osd.rst
+++ b/doc/cephadm/services/osd.rst
@@ -768,8 +768,8 @@ layout, it is recommended to apply different OSD specs matching only one set of
 hosts. Typically you will have a spec for multiple hosts with the same
 layout.
 
-The sevice id as the unique key: In case a new OSD spec with an already
-applied service id is applied, the existing OSD spec will be superseeded.
+The service id as the unique key: In case a new OSD spec with an already
+applied service id is applied, the existing OSD spec will be superseded.
 cephadm will now create new OSD daemons based on the new spec definition.
 Existing OSD daemons will not be affected. See :ref:`cephadm-osd-declarative`.
@@ -912,8 +912,8 @@ activates all existing OSDs on a host.
 
 This will scan all existing disks for OSDs and deploy corresponding daemons.
 
-Futher Reading
-==============
+Further Reading
+===============
 
 * :ref:`ceph-volume`
 * :ref:`rados-index`
diff --git a/doc/cephadm/services/rgw.rst b/doc/cephadm/services/rgw.rst
index a0b66b216c5..ece8aeee35a 100644
--- a/doc/cephadm/services/rgw.rst
+++ b/doc/cephadm/services/rgw.rst
@@ -156,7 +156,7 @@ High availability service for RGW
 =================================
 
 The *ingress* service allows you to create a high availability endpoint
-for RGW with a minumum set of configuration options. The orchestrator will
+for RGW with a minimum set of configuration options. The orchestrator will
 deploy and manage a combination of haproxy and keepalived to provide load
 balancing on a floating virtual IP.
diff --git a/doc/cephadm/troubleshooting.rst b/doc/cephadm/troubleshooting.rst
index ee4c3877276..b7d295b1296 100644
--- a/doc/cephadm/troubleshooting.rst
+++ b/doc/cephadm/troubleshooting.rst
@@ -273,7 +273,7 @@ To call miscellaneous like ``ceph-objectstore-tool`` or
 
     0: [v2:127.0.0.1:3300/0,v1:127.0.0.1:6789/0] mon.myhostname
 
 This command sets up the environment in a way that is suitable
-for extended daemon maintenance and running the deamon interactively.
+for extended daemon maintenance and running the daemon interactively.
 
 .. _cephadm-restore-quorum:
 
@@ -324,7 +324,7 @@ Get the container image::
 
     ceph config get "mgr.hostname.smfvfd" container_image
 
-Create a file ``config-json.json`` which contains the information neccessary to deploy
+Create a file ``config-json.json`` which contains the information necessary to deploy
 the daemon:
 
 .. code-block:: json
diff --git a/doc/cephfs/capabilities.rst b/doc/cephfs/capabilities.rst
index e5e9bb08583..21231915cf8 100644
--- a/doc/cephfs/capabilities.rst
+++ b/doc/cephfs/capabilities.rst
@@ -123,7 +123,7 @@ clients allowed, even some capabilities are not needed or wanted by the clients,
 as pre-issuing capabilities could reduce latency in some cases.
 
 If there is only one client, usually it will be the loner client for all the inodes.
-While in multiple clients case, the MDS will try to caculate a loner client out for
+While in multiple clients case, the MDS will try to calculate a loner client out for
 each inode depending on the capabilities the clients (needed | wanted), but usually
 it will fail. The loner client will always get all the capabilities.
diff --git a/doc/cephfs/cephfs-mirroring.rst b/doc/cephfs/cephfs-mirroring.rst
index d602de2a3d7..8793f3e3cdc 100644
--- a/doc/cephfs/cephfs-mirroring.rst
+++ b/doc/cephfs/cephfs-mirroring.rst
@@ -115,7 +115,7 @@ To stop a mirroring directory snapshots use::
 
   $ ceph fs snapshot mirror remove
 
 Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.
 
   $ mkdir -p /d0/d1/d2
   $ ceph fs snapshot mirror add cephfs /d0/d1/d2
@@ -124,7 +124,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
   Error EEXIST: directory /d0/d1/d2 is already tracked
 
 Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::
 
   $ ceph fs snapshot mirror add cephfs /d0/d1
   Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2
@@ -301,7 +301,7 @@ E.g., adding a regular file for synchronization would result in failed status::
 
 This allows a user to add a non-existent directory for synchronization. The mirror
 daemon would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
 synchronization.
 
 When mirroring is disabled, the respective `fs mirror status` command for the file system
diff --git a/doc/cephfs/disaster-recovery-experts.rst b/doc/cephfs/disaster-recovery-experts.rst
index 11df16e3813..343ecfc0bff 100644
--- a/doc/cephfs/disaster-recovery-experts.rst
+++ b/doc/cephfs/disaster-recovery-experts.rst
@@ -187,7 +187,7 @@ It is **important** to ensure that all workers have completed the
 scan_extents phase before any workers enter the scan_inodes phase.
 After completing the metadata recovery, you may want to run cleanup
-operation to delete ancillary data geneated during recovery.
+operation to delete ancillary data generated during recovery.
 
 ::
diff --git a/doc/cephfs/fs-volumes.rst b/doc/cephfs/fs-volumes.rst
index aa367b8b29c..3cd8fde6a7c 100644
--- a/doc/cephfs/fs-volumes.rst
+++ b/doc/cephfs/fs-volumes.rst
@@ -10,7 +10,7 @@ storage administrators among others can use the common CLI provided by the
 ceph-mgr volumes module to manage the CephFS exports.
 
 The ceph-mgr volumes module implements the following file system export
-abstactions:
+abstractions:
 
 * FS volumes, an abstraction for CephFS file systems
 
@@ -359,13 +359,13 @@ To delete a partial clone use::
 
   $ ceph fs subvolume rm [--group_name ] --force
 
 .. note:: Cloning only synchronizes directories, regular files and symbolic links. Also, inode timestamps (access and
-          modification times) are synchronized upto seconds granularity.
+          modification times) are synchronized up to seconds granularity.
 
 An `in-progress` or a `pending` clone operation can be canceled. To cancel a clone operation use the
 `clone cancel` command::
 
   $ ceph fs clone cancel [--group_name ]
 
-On successful cancelation, the cloned subvolume is moved to `canceled` state::
+On successful cancellation, the cloned subvolume is moved to `canceled` state::
 
   $ ceph fs subvolume snapshot clone cephfs subvol1 snap1 clone1
   $ ceph fs clone cancel cephfs clone1
diff --git a/doc/cephfs/health-messages.rst b/doc/cephfs/health-messages.rst
index 790fc3fdedb..a46f56f85ea 100644
--- a/doc/cephfs/health-messages.rst
+++ b/doc/cephfs/health-messages.rst
@@ -64,7 +64,7 @@ performance issues::
 
   MDS_SLOW_REQUEST 1 MDSs report slow requests
       mds.fs-01(mds.0): 5 slow requests are blocked > 30 secs
 
-Where, for intance, ``MDS_SLOW_REQUEST`` is the unique code representing the
+Where, for instance, ``MDS_SLOW_REQUEST`` is the unique code representing the
 condition where requests are taking long time to complete.
 And the following description shows its severity and the MDS daemons which are
 serving these slow requests.
diff --git a/doc/cephfs/lazyio.rst b/doc/cephfs/lazyio.rst
index 9cd9ae74a59..b3005877408 100644
--- a/doc/cephfs/lazyio.rst
+++ b/doc/cephfs/lazyio.rst
@@ -23,7 +23,7 @@ Using LazyIO
 ============
 
 LazyIO includes two methods ``lazyio_propagate()`` and ``lazyio_synchronize()``.
-With LazyIO enabled, writes may not be visble to other clients until
+With LazyIO enabled, writes may not be visible to other clients until
 ``lazyio_propagate()`` is called. Reads may come from local cache (irrespective
 of changes to the file by other clients) until ``lazyio_synchronize()`` is
 called.
@@ -59,7 +59,7 @@ particular client/file descriptor in a parallel application:
 
    /* The barrier makes sure changes associated with all file descriptors
       are propagated so that there is certainty that the backing file
-      is upto date */
+      is up to date */
    application_specific_barrier();
 
    char in_buf[40];
diff --git a/doc/dev/cephadm/scalability-notes.rst b/doc/dev/cephadm/scalability-notes.rst
index 157153cb3f1..9faaee04169 100644
--- a/doc/dev/cephadm/scalability-notes.rst
+++ b/doc/dev/cephadm/scalability-notes.rst
@@ -8,7 +8,7 @@
 
 This document does NOT define a specific proposal or some future work.
 Instead it merely lists a few thoughts that MIGHT be relevant for future
-cephadm enhacements.
+cephadm enhancements.
 
 *******
  Intro
diff --git a/doc/dev/cephfs-mirroring.rst b/doc/dev/cephfs-mirroring.rst
index 3ca487c0332..64a482eee01 100644
--- a/doc/dev/cephfs-mirroring.rst
+++ b/doc/dev/cephfs-mirroring.rst
@@ -161,7 +161,7 @@ To stop a mirroring directory snapshots use::
 
   $ ceph fs snapshot mirror remove
 
 Only absolute directory paths are allowed. Also, paths are normalized by the mirroring
-module, therfore, `/a/b/../b` is equivalent to `/a/b`.
+module, therefore, `/a/b/../b` is equivalent to `/a/b`.
   $ mkdir -p /d0/d1/d2
   $ ceph fs snapshot mirror add cephfs /d0/d1/d2
@@ -170,7 +170,7 @@ module, therfore, `/a/b/../b` is equivalent to `/a/b`.
   Error EEXIST: directory /d0/d1/d2 is already tracked
 
 Once a directory is added for mirroring, its subdirectory or ancestor directories are
-disallowed to be added for mirorring::
+disallowed to be added for mirroring::
 
   $ ceph fs snapshot mirror add cephfs /d0/d1
   Error EINVAL: /d0/d1 is a ancestor of tracked path /d0/d1/d2
@@ -355,7 +355,7 @@ E.g., adding a regular file for synchronization would result in failed status::
 
 This allows a user to add a non-existent directory for synchronization. The mirror
 daemon would mark the directory as failed and retry (less frequently). When the directory comes
-to existence, the mirror daemons would unmark the failed state upon successfull snapshot
+to existence, the mirror daemons would unmark the failed state upon successful snapshot
 synchronization.
 
 When mirroring is disabled, the respective `fs mirror status` command for the file system
diff --git a/doc/dev/continuous-integration.rst b/doc/dev/continuous-integration.rst
index 139ab23bb0e..cfa44b60ea7 100644
--- a/doc/dev/continuous-integration.rst
+++ b/doc/dev/continuous-integration.rst
@@ -92,7 +92,7 @@ Shaman is a server offering RESTful API allowing the clients to query the
   information of repos hosted by chacra nodes. Shaman is also known for its
   `Web UI`_. But please note, shaman does not build the
-  packages, it justs offers information of the builds.
+  packages, it just offers information on the builds.
 
 As the following shows, `chacra`_ manages multiple projects whose metadata
 are stored in a database. These metadata are exposed via Shaman as a web
@@ -199,7 +199,7 @@ libraries in our dist tarball. They are
 - pmdk
 
 ``make-dist`` is a script used by our CI pipeline to create dist tarball so the
-tarball can be used to build the Ceph packages in a clean room environmet. When
+tarball can be used to build the Ceph packages in a clean room environment. When
 we need to upgrade these third party libraries, we should
 
 - update the CMake script
@@ -231,8 +231,8 @@ ref
   a unique id of a given version of a set packages. This id is used to
   reference the set packages under the ``/``. It is a good
   practice to version the packaging recipes, like the ``debian`` directory
   for building deb
-  packages and the ``spec`` for building rpm packages, and use ths sha1 of the
-  packaging receipe for the ``ref``. But you could also the a random string for
+  packages and the ``spec`` for building rpm packages, and use the sha1 of the
+  packaging receipe for the ``ref``. But you could also use a random string for
   ``ref``, like the tag name of the built source tree.
 
 distro
diff --git a/doc/dev/crimson/crimson.rst b/doc/dev/crimson/crimson.rst
index 2e2514f0504..185d1f76bf4 100644
--- a/doc/dev/crimson/crimson.rst
+++ b/doc/dev/crimson/crimson.rst
@@ -171,7 +171,7 @@ pg stats reported to mgr
 ------------------------
 
 Crimson collects the per-pg, per-pool, and per-osd stats in a `MPGStats`
-messsage, and send it over to mgr, so that the mgr modules can query
+message, and send it over to mgr, so that the mgr modules can query
 them using the `MgrModule.get()` method.
 
 asock command
diff --git a/doc/dev/crimson/poseidonstore.rst b/doc/dev/crimson/poseidonstore.rst
index e390bd5b773..3dc2638de22 100644
--- a/doc/dev/crimson/poseidonstore.rst
+++ b/doc/dev/crimson/poseidonstore.rst
@@ -254,7 +254,7 @@ Comparison
   * Worst case
 
     - At least three writes are required additionally on WAL, object metadata, and data blocks.
-    - If the flush from WAL to the data parition occurs frequently, radix tree onode structure needs to be update
+    - If the flush from WAL to the data partition occurs frequently, radix tree onode structure needs to be update
       in many times.
       To minimize such overhead, we can make use of batch processing to minimize the update on the tree
       (the data related to the object has a locality because it will have the same parent node, so updates can be minimized)
 
@@ -285,7 +285,7 @@ Detailed Design
 
   .. code-block:: c
 
-    stuct onode {
+    struct onode {
       extent_tree block_maps;
       b+_tree omaps;
       map xattrs;
@@ -380,7 +380,7 @@ Detailed Design
 
 * Omap and xattr
   In this design, omap and xattr data is tracked by b+tree in onode. The onode only has the root node of b+tree.
-  The root node contains entires which indicate where the key onode exists.
+  The root node contains entries which indicate where the key onode exists.
   So, if we know the onode, omap can be found via omap b+tree.
 
 * Fragmentation
@@ -437,7 +437,7 @@ Detailed Design
 WAL
 ---
 Each SP has a WAL.
-The datas written to the WAL are metadata updates, free space update and small data.
+The data written to the WAL are metadata updates, free space update and small data.
 Note that only data smaller than the predefined threshold needs to be written to the WAL.
 The larger data is written to the unallocated free space and its onode's extent_tree
 is updated accordingly (also on-disk extent tree). We statically allocate WAL partition aside from data partition pre-configured.
diff --git a/doc/dev/dev_cluster_deployement.rst b/doc/dev/dev_cluster_deployement.rst
index 69185e7f0b0..526d7b7eb19 100644
--- a/doc/dev/dev_cluster_deployement.rst
+++ b/doc/dev/dev_cluster_deployement.rst
@@ -51,7 +51,7 @@ Options
 
 .. option:: -k
 
-   Keep old configuration files instead of overwritting theses.
+   Keep old configuration files instead of overwriting these.
 
 .. option:: -K, --kstore
 
@@ -135,7 +135,7 @@ Environment variables
 
   {OSD,MDS,MON,RGW}
 
-Theses environment variables will contains the number of instances of the desired ceph process you want to start.
+These environment variables will contains the number of instances of the desired ceph process you want to start.
 Example: ::
 
diff --git a/doc/dev/developer_guide/running-tests-locally.rst b/doc/dev/developer_guide/running-tests-locally.rst
index 71cbdec7d23..010cccdf29a 100644
--- a/doc/dev/developer_guide/running-tests-locally.rst
+++ b/doc/dev/developer_guide/running-tests-locally.rst
@@ -137,12 +137,12 @@ Running Workunits Using vstart_enviroment.sh
 
 Code can be tested by building Ceph locally from source, starting a vstart
 cluster, and running any suite against it.
-Similar to S3-Tests, other workunits can be run against by configuring your enviroment.
+Similar to S3-Tests, other workunits can be run against by configuring your environment.
 
-Set up the enviroment
-^^^^^^^^^^^^^^^^^^^^^
+Set up the environment
+^^^^^^^^^^^^^^^^^^^^^^
 
-Configure your enviroment::
+Configure your environment::
 
   $ . ./build/vstart_enviroment.sh
diff --git a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst
index 94d0da1b08f..ac7c75e037c 100644
--- a/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst
+++ b/doc/dev/developer_guide/testing_integration_tests/tests-integration-testing-teuthology-debugging-tips.rst
@@ -48,7 +48,7 @@ A job failure might be caused by one or more of the following reasons:
 
 * environment setup (`testing on varied systems `_):
-  testing compatibility with stable realeases for supported versions.
+  testing compatibility with stable releases for supported versions.
 * permutation of config values: for instance, `qa/suites/rados/thrash
   `_ ensures
diff --git a/doc/dev/documenting.rst b/doc/dev/documenting.rst
index 3e4c942faed..e6c05ee2a44 100644
--- a/doc/dev/documenting.rst
+++ b/doc/dev/documenting.rst
@@ -5,7 +5,7 @@
 User documentation
 ==================
 
-The documentation on docs.ceph.com is generated from the restructuredText
+The documentation on docs.ceph.com is generated from the reStructuredText
 sources in ``/doc/`` in the Ceph git repository.
 
 Please make sure that your changes are written in a way that is intended
diff --git a/doc/dev/mon-on-disk-formats.rst b/doc/dev/mon-on-disk-formats.rst
index c9fa957a099..a64994fc0de 100644
--- a/doc/dev/mon-on-disk-formats.rst
+++ b/doc/dev/mon-on-disk-formats.rst
@@ -64,7 +64,7 @@ AuthMonitor::upgrade_format() called by `PaxosService::_active()`::
 
 boil down
 ---------
 
-* if `format_version >= current_version` then format is uptodate, return.
+* if `format_version >= current_version` then format is up-to-date, return.
 * if `features doesn't contain LUMINOUS` then `current_version = 1`
 * else if `features doesn't contain MIMIC` then `current_version = 2`
 * else `current_version = 3`
diff --git a/doc/dev/msgr2.rst b/doc/dev/msgr2.rst
index af1421a3b87..05b6d201ed6 100644
--- a/doc/dev/msgr2.rst
+++ b/doc/dev/msgr2.rst
@@ -578,7 +578,7 @@ Compression will not be possible when using secure mode, unless configured speci
 
 Post-compression frame format
 -----------------------------
 
-Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to acccept/send compressed frames or process all frames as decompressed.
+Depending on the negotiated connection mode from TAG_COMPRESSION_DONE, the connection is able to accept/send compressed frames or process all frames as decompressed.
 # msgr2.x-force mode
diff --git a/doc/dev/osd_internals/async_recovery.rst b/doc/dev/osd_internals/async_recovery.rst
index ab0a036f1ad..aea5b70db91 100644
--- a/doc/dev/osd_internals/async_recovery.rst
+++ b/doc/dev/osd_internals/async_recovery.rst
@@ -28,7 +28,7 @@ out-of-band of the live acting set, similar to backfill, but still using
 the PG log to determine what needs to be done. This is known as
 *asynchronous recovery*.
 
-The threashold for performing asynchronous recovery instead of synchronous
+The threshold for performing asynchronous recovery instead of synchronous
 recovery is not a clear-cut. There are a few criteria which need to be
 met for asynchronous recovery:
diff --git a/doc/dev/osd_internals/log_based_pg.rst b/doc/dev/osd_internals/log_based_pg.rst
index 5d1e560c0e6..99cffd3d95d 100644
--- a/doc/dev/osd_internals/log_based_pg.rst
+++ b/doc/dev/osd_internals/log_based_pg.rst
@@ -35,7 +35,7 @@ concept of interval changes) and an increasing per-PG version number
 ``pg_info_t::last_update``. Furthermore, we maintain a log of "recent"
 operations extending back at least far enough to include any
 *unstable* writes (writes which have been started but not committed)
-and objects which aren't uptodate locally (see recovery and
+and objects which aren't up-to-date locally (see recovery and
 backfill). In practice, the log will extend much further
 (``osd_min_pg_log_entries`` when clean and ``osd_max_pg_log_entries`` when not
 clean) because it's handy for quickly performing recovery.
diff --git a/doc/dev/osd_internals/manifest.rst b/doc/dev/osd_internals/manifest.rst
index 2e8f9ae4445..6689bf239c5 100644
--- a/doc/dev/osd_internals/manifest.rst
+++ b/doc/dev/osd_internals/manifest.rst
@@ -31,7 +31,7 @@ RBD
 
 For RBD, the primary goal is for either an OSD-internal agent or a
 cluster-external agent to be able to transparently shift portions
-of the consituent 4MB extents between a dedup pool and a hot base
+of the constituent 4MB extents between a dedup pool and a hot base
 pool.
 
 As such, RBD operations (including class operations and snapshots)
diff --git a/doc/dev/radosgw/s3_compliance.rst b/doc/dev/radosgw/s3_compliance.rst
index 50aeda36a31..017506422bf 100644
--- a/doc/dev/radosgw/s3_compliance.rst
+++ b/doc/dev/radosgw/s3_compliance.rst
@@ -290,7 +290,7 @@ S3 Documentation reference : http://docs.aws.amazon.com/AmazonS3/latest/API/REST
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
 | PUT     | Object copy               | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
-| PUT     | Initate multipart upload  | Yes        |                                                                                                         |             |
+| PUT     | Initiate multipart upload | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
 | PUT     | Upload Part               | Yes        |                                                                                                         |             |
 +---------+---------------------------+------------+---------------------------------------------------------------------------------------------------------+-------------+
diff --git a/doc/dev/seastore.rst b/doc/dev/seastore.rst
index eb89d82196b..dd080092c03 100644
--- a/doc/dev/seastore.rst
+++ b/doc/dev/seastore.rst
@@ -166,7 +166,7 @@ key/value into that node at the min would involve moving a bunch of
 bytes, which would be expensive (or verbose) to express purely as a
 sequence of byte operations. As such, each delta indicates the type as well
 as the location of the corresponding extent. Each block
-type can therefore implement CachedExtent::apply_delta as appopriate.
+type can therefore implement CachedExtent::apply_delta as appropriate.
 
 See src/os/crimson/seastore/cached_extent.h.
 See src/os/crimson/seastore/cache.h.
diff --git a/doc/install/windows-troubleshooting.rst b/doc/install/windows-troubleshooting.rst
index 6be37562f4e..c41d50b1d71 100644
--- a/doc/install/windows-troubleshooting.rst
+++ b/doc/install/windows-troubleshooting.rst
@@ -43,7 +43,7 @@ Windows Event Log, having Event ID 1000. The entry will also include the process
 the faulting module name and path as well as the exception code.
 
 Please note that in order to analyze crash dumps, the debug symbols are required.
-We're currently buidling Ceph using ``MinGW``, so by default ``DWARF`` symbols will
+We're currently building Ceph using ``MinGW``, so by default ``DWARF`` symbols will
 be embedded in the binaries. ``windbg`` does not support such symbols but
 ``gdb`` can be used.
diff --git a/doc/jaegertracing/index.rst b/doc/jaegertracing/index.rst
index 75458986cb8..2275c91d1d2 100644
--- a/doc/jaegertracing/index.rst
+++ b/doc/jaegertracing/index.rst
@@ -46,7 +46,7 @@ HOW TO ENABLE TRACING IN CEPH
 -----------------------------
 
 tracing in Ceph is disabled by default.
-it could be enabled globally, or for each entity seperately (e.g. rgw).
+it could be enabled globally, or for each entity separately (e.g. rgw).
 Enable tracing globally::
diff --git a/doc/man/8/ceph-bluestore-tool.rst b/doc/man/8/ceph-bluestore-tool.rst
index c6f198496db..f6c88da09b2 100644
--- a/doc/man/8/ceph-bluestore-tool.rst
+++ b/doc/man/8/ceph-bluestore-tool.rst
@@ -213,14 +213,14 @@ BlueStore OSD with the *prime-osd-dir* command::
 
 BlueFS log rescue
 =====================
 
-Some versions of BlueStore were susceptible to BlueFS log growing extremaly large -
+Some versions of BlueStore were susceptible to BlueFS log growing extremely large -
 beyond the point of making booting OSD impossible. This state is indicated by
 booting that takes very long and fails in _replay function.
 
 This can be fixed by::
 
   ceph-bluestore-tool fsck --path *osd path* --bluefs_replay_recovery=true
 
-It is advised to first check if rescue process would be successfull::
+It is advised to first check if rescue process would be successful::
 
   ceph-bluestore-tool fsck --path *osd path* \
     --bluefs_replay_recovery=true --bluefs_replay_recovery_disable_compact=true
diff --git a/doc/man/8/ceph-conf.rst b/doc/man/8/ceph-conf.rst
index 0c8252f9d5c..4fea4d43d7f 100644
--- a/doc/man/8/ceph-conf.rst
+++ b/doc/man/8/ceph-conf.rst
@@ -88,8 +88,8 @@ Options
    override the ``$pid`` when expanding options. For example, if an option is
    configured like ``/var/log/$name.$pid.log``, the ``$pid`` portion in its
-   value will be substituded using the PID of **ceph-conf** instead of the
-   PID of the process specfied using the ``--name`` option.
+   value will be substituted using the PID of **ceph-conf** instead of the
+   PID of the process specified using the ``--name`` option.
 
 .. option:: -r, --resolve-search
diff --git a/doc/man/8/ceph-dencoder.rst b/doc/man/8/ceph-dencoder.rst
index 883a90bb4a4..ef7e972b80e 100644
--- a/doc/man/8/ceph-dencoder.rst
+++ b/doc/man/8/ceph-dencoder.rst
@@ -73,7 +73,7 @@ Commands
 
 .. option:: select_test
 
-   Select the given build-in test instance as a the in-memory instance
+   Select the given built-in test instance as the in-memory instance
    of the type.
 
 .. option:: get_features
diff --git a/doc/man/8/ceph-diff-sorted.rst b/doc/man/8/ceph-diff-sorted.rst
index ee42232cd04..f5fe22ed868 100644
--- a/doc/man/8/ceph-diff-sorted.rst
+++ b/doc/man/8/ceph-diff-sorted.rst
@@ -14,7 +14,7 @@ Synopsis
 
 Description
 ===========
 
-:program:`ceph-diff-sorted` is a simplifed *diff* utility optimized
+:program:`ceph-diff-sorted` is a simplified *diff* utility optimized
 for comparing two files with lines that are lexically sorted.
 
 The output is simplified in comparison to that of the standard `diff`
diff --git a/doc/man/8/mount.ceph.rst b/doc/man/8/mount.ceph.rst
index 82294d017c2..5c9052aaa87 100644
--- a/doc/man/8/mount.ceph.rst
+++ b/doc/man/8/mount.ceph.rst
@@ -23,7 +23,7 @@ the real work. To mount a Ceph file system use::
 
   mount.ceph name@07fe3187-00d9-42a3-814b-72a4d5e7d5be.fs_name=/ /mnt/mycephfs -o mon_addr=1.2.3.4
 
 Mount helper can fill in the cluster FSID by reading the ceph configuration file.
-Its recommeded to call the mount helper via mount(8) as per::
+Its recommended to call the mount helper via mount(8) as per::
 
   mount -t ceph name@.fs_name=/ /mnt/mycephfs -o mon_addr=1.2.3.4
diff --git a/doc/mgr/administrator.rst b/doc/mgr/administrator.rst
index 411ad8c38b7..fd2df3d1c67 100644
--- a/doc/mgr/administrator.rst
+++ b/doc/mgr/administrator.rst
@@ -50,7 +50,7 @@ If the active daemon fails to send a beacon to the monitors for more than
 :confval:`mon_mgr_beacon_grace`, then it will be replaced by a standby.
 
-If you want to pre-empt failover, you can explicitly mark a ceph-mgr
+If you want to preempt failover, you can explicitly mark a ceph-mgr
 daemon as failed using ``ceph mgr fail ``.
Using modules diff --git a/doc/mgr/dashboard.rst b/doc/mgr/dashboard.rst index 9d9d1afba8c..44faefaea12 100644 --- a/doc/mgr/dashboard.rst +++ b/doc/mgr/dashboard.rst @@ -1212,7 +1212,7 @@ The command returns the URL where the Ceph Dashboard is located: ``https://`_ command-line - utility to faciliate working with JSON data. + utility to facilitate working with JSON data. Accessing the Dashboard diff --git a/doc/rados/configuration/bluestore-config-ref.rst b/doc/rados/configuration/bluestore-config-ref.rst index e4757c8bc67..cf6f63c20ae 100644 --- a/doc/rados/configuration/bluestore-config-ref.rst +++ b/doc/rados/configuration/bluestore-config-ref.rst @@ -64,7 +64,7 @@ the deployment strategy: **block (data) only** ^^^^^^^^^^^^^^^^^^^^^ If all devices are the same type, for example all rotational drives, and -there are no fast devices to use for metadata, it makes sense to specifiy the +there are no fast devices to use for metadata, it makes sense to specify the block device only and to not separate ``block.db`` or ``block.wal``. The :ref:`ceph-volume-lvm` command for a single ``/dev/sda`` device looks like:: @@ -139,7 +139,7 @@ In older releases, internal level sizes mean that the DB can fully utilize only specific partition / LV sizes that correspond to sums of L0, L0+L1, L1+L2, etc. sizes, which with default settings means roughly 3 GB, 30 GB, 300 GB, and so forth. Most deployments will not substantially benefit from sizing to -accomodate L3 and higher, though DB compaction can be facilitated by doubling +accommodate L3 and higher, though DB compaction can be facilitated by doubling these figures to 6GB, 60GB, and 600GB. 
Improvements in releases beginning with Nautilus 14.2.12 and Octopus 15.2.6 diff --git a/doc/rados/configuration/common.rst b/doc/rados/configuration/common.rst index 5b818aee5ff..8c36a53292b 100644 --- a/doc/rados/configuration/common.rst +++ b/doc/rados/configuration/common.rst @@ -189,7 +189,7 @@ Naming Clusters (deprecated) Each Ceph cluster has an internal name that is used as part of configuration and log file names as well as directory and mountpoint names. This name defaults to "ceph". Previous releases of Ceph allowed one to specify a custom -name instead, for example "ceph2". This was intended to faciliate running +name instead, for example "ceph2". This was intended to facilitate running multiple logical clusters on the same physical hardware, but in practice this was rarely exploited and should no longer be attempted. Prior documentation could also be misinterpreted as requiring unique cluster names in order to @@ -202,7 +202,7 @@ custom names may be progressively removed by future Ceph releases, so it is strongly recommended to deploy all new clusters with the default name "ceph". Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option. This -option is present purely for backward compatibility and need not be accomodated +option is present purely for backward compatibility and need not be accommodated by new tools and deployments. If you do need to allow multiple clusters to exist on the same host, please use diff --git a/doc/rados/operations/health-checks.rst b/doc/rados/operations/health-checks.rst index 08c05b55363..2e96b0151cd 100644 --- a/doc/rados/operations/health-checks.rst +++ b/doc/rados/operations/health-checks.rst @@ -1142,7 +1142,7 @@ _______________ One or more PGs has not been scrubbed recently. PGs are normally scrubbed within every configured interval specified by :confval:`osd_scrub_max_interval` globally. 
This -interval can be overriden on per-pool basis with +interval can be overridden on a per-pool basis with :confval:`scrub_max_interval`. The warning triggers when ``mon_warn_pg_not_scrubbed_ratio`` percentage of interval has elapsed without a scrub since it was due. diff --git a/doc/rados/operations/monitoring.rst b/doc/rados/operations/monitoring.rst index d154ca2b9f5..ebfe17159aa 100644 --- a/doc/rados/operations/monitoring.rst +++ b/doc/rados/operations/monitoring.rst @@ -414,7 +414,7 @@ on the number of replicas, clones and snapshots. the cache pool but have not been flushed yet to the base pool. This field is only available when cache tiering is in use. - **USED COMPR:** amount of space allocated for compressed data (i.e. this - includes comrpessed data plus all the allocation, replication and erasure + includes compressed data plus all the allocation, replication and erasure coding overhead). - **UNDER COMPR:** amount of data passed through compression (summed over all replicas) and beneficial enough to be stored in a compressed form. @@ -447,7 +447,7 @@ Or: ceph osd dump You can also check view OSDs according to their position in the CRUSH map by -using the folloiwng command: +using the following command: .. prompt:: bash # diff --git a/doc/rados/operations/stretch-mode.rst b/doc/rados/operations/stretch-mode.rst index de39e80ff85..2870623742b 100644 --- a/doc/rados/operations/stretch-mode.rst +++ b/doc/rados/operations/stretch-mode.rst @@ -41,7 +41,7 @@ No matter what happens, Ceph will not compromise on data integrity and consistency. If there's a failure in your network or a loss of nodes and you can restore service, Ceph will return to normal functionality on its own. -But there are scenarios where you lose data availibility despite having +But there are scenarios where you lose data availability despite having enough servers available to satisfy Ceph's consistency and sizing constraints, or where you may be surprised to not satisfy Ceph's constraints.
The first important category of these failures resolve around inconsistent @@ -112,7 +112,7 @@ CRUSH and place ``mon.e`` there :: $ ceph mon set_location e datacenter=site3 $ ceph mon enable_stretch_mode e stretch_rule datacenter -When stretch mode is enabled, the OSDs wlll only take PGs active when +When stretch mode is enabled, the OSDs will only take PGs active when they peer across data centers (or whatever other CRUSH bucket type you specified), assuming both are alive. Pools will increase in size from the default 3 to 4, expecting 2 copies in each site. OSDs will only diff --git a/doc/rados/troubleshooting/troubleshooting-mon.rst b/doc/rados/troubleshooting/troubleshooting-mon.rst index 549291ef02c..fef18175946 100644 --- a/doc/rados/troubleshooting/troubleshooting-mon.rst +++ b/doc/rados/troubleshooting/troubleshooting-mon.rst @@ -31,7 +31,7 @@ Initial Troubleshooting **Are you able to reach to the mon nodes?** Doesn't happen often, but sometimes there are ``iptables`` rules that - block accesse to mon nodes or TCP ports. These may be leftovers from + block access to mon nodes or TCP ports. These may be leftovers from prior stress-testing or rule development. Try SSHing into the server and, if that succeeds, try connecting to the monitor's ports (``tcp/3300`` and ``tcp/6789``) using a ``telnet``, ``nc``, or similar tools. @@ -361,7 +361,7 @@ Can I increase the maximum tolerated clock skew? The maximum tolerated clock skew is configurable via the ``mon-clock-drift-allowed`` option, and although you *CAN* you almost certainly *SHOULDN'T*. The clock skew mechanism - is in place because clock-skewed monitors are liely to misbehave. We, as + is in place because clock-skewed monitors are likely to misbehave. We, as developers and QA aficionados, are comfortable with the current default value, as it will alert the user before the monitors get out hand. 
Changing this value may cause unforeseen effects on the diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst index 1086eaaea7c..883f4f44f74 100644 --- a/doc/rados/troubleshooting/troubleshooting-osd.rst +++ b/doc/rados/troubleshooting/troubleshooting-osd.rst @@ -539,7 +539,7 @@ Flapping OSDs When OSDs peer and check heartbeats, they use the cluster (back-end) network when it's available. See `Monitor/OSD Interaction`_ for details. -We have tradtionally recommended separate *public* (front-end) and *private* +We have traditionally recommended separate *public* (front-end) and *private* (cluster / back-end / replication) networks: #. Segregation of heartbeat and replication / recovery traffic (private) diff --git a/doc/radosgw/cloud-transition.rst b/doc/radosgw/cloud-transition.rst index b144fa4820e..bc5b3ea2c69 100644 --- a/doc/radosgw/cloud-transition.rst +++ b/doc/radosgw/cloud-transition.rst @@ -279,7 +279,7 @@ For example, Object modification & Limitations ---------------------------------- -The cloud storage class once configured can then be used like any other storage class in the bucket lifecyle rules. For example, +The cloud storage class once configured can then be used like any other storage class in the bucket lifecycle rules. For example, :: @@ -351,6 +351,6 @@ Future Work * Federation between RGW and Cloud services. -* Support transition to other cloud provideres (like Azure). +* Support transition to other cloud providers (like Azure). .. _`Multisite Configuration`: ../multisite diff --git a/doc/radosgw/layout.rst b/doc/radosgw/layout.rst index 5003a96b159..e861519f6cf 100644 --- a/doc/radosgw/layout.rst +++ b/doc/radosgw/layout.rst @@ -117,7 +117,7 @@ These objects are accessed when listing buckets, when updating bucket contents, and updating and retrieving bucket statistics (e.g. for quota). 
See the user-visible, encoded class 'cls_user_bucket_entry' and its -nested class 'cls_user_bucket' for the values of these omap entires. +nested class 'cls_user_bucket' for the values of these omap entries. These listings are kept consistent with buckets in pool ".rgw". diff --git a/doc/radosgw/lua-scripting.rst b/doc/radosgw/lua-scripting.rst index 8541ed4d9cb..d970455a520 100644 --- a/doc/radosgw/lua-scripting.rst +++ b/doc/radosgw/lua-scripting.rst @@ -64,7 +64,7 @@ To add a specific version of a package to the allowlist: # radosgw-admin script-package add --package='{package name} {package version}' [--allow-compilation] -* When adding a diffrent version of a package which already exists in the list, the newly +* When adding a different version of a package which already exists in the list, the newly added version will override the existing one. * When adding a package without a version specified, the latest version of the package @@ -324,9 +324,9 @@ Lua Code Samples function print_owner(owner) RGWDebugLog("Owner:") - RGWDebugLog(" Dispaly Name: " .. owner.DisplayName) + RGWDebugLog(" Display Name: " .. owner.DisplayName) RGWDebugLog(" Id: " .. owner.User.Id) - RGWDebugLog(" Tenanet: " .. owner.User.Tenant) + RGWDebugLog(" Tenant: " .. owner.User.Tenant) end function print_acl(acl_type) diff --git a/doc/radosgw/multisite-sync-policy.rst b/doc/radosgw/multisite-sync-policy.rst index 56342473ce8..befef4279ee 100644 --- a/doc/radosgw/multisite-sync-policy.rst +++ b/doc/radosgw/multisite-sync-policy.rst @@ -202,7 +202,7 @@ Buckets are either a bucket name, or '*' (wildcard). Wildcard bucket means the c Prefix can be defined to filter source objects. Tags are passed by a comma separated list of 'key=value'. Destination owner can be set to force a destination owner of the objects. If user mode is selected, only the destination bucket owner can be set. -Destinatino storage class can also be condfigured. +Destination storage class can also be configured. 
User id can be set for user mode, and will be the user under which the sync operation will be executed (for permissions validation). diff --git a/doc/radosgw/s3select.rst b/doc/radosgw/s3select.rst index be02539d9d4..b30af4aec13 100644 --- a/doc/radosgw/s3select.rst +++ b/doc/radosgw/s3select.rst @@ -150,7 +150,7 @@ Features Support | predicate as a projection | where address like '%new-york%'; | +---------------------------------+-----------------+-----------------------------------------------------------------------+ | an alias to | select (_1 like "_3_") as *likealias*,_1 from s3object | -| predicate as a prjection | where *likealias* = true and cast(_1 as int) between 800 and 900; | +| predicate as a projection | where *likealias* = true and cast(_1 as int) between 800 and 900; | +---------------------------------+-----------------+-----------------------------------------------------------------------+ | casting operator | select cast(123 as int)%2 from s3object; | +---------------------------------+-----------------+-----------------------------------------------------------------------+ diff --git a/doc/rbd/iscsi-target-ansible.rst b/doc/rbd/iscsi-target-ansible.rst index bf8a1ec9b35..f89c4a0d2f0 100644 --- a/doc/rbd/iscsi-target-ansible.rst +++ b/doc/rbd/iscsi-target-ansible.rst @@ -96,7 +96,7 @@ advanced variables. **Deployment:** -Perform the followint steps on the Ansible installer node. +Perform the following steps on the Ansible installer node. #. As ``root``, execute the Ansible playbook: diff --git a/doc/rbd/rbd-encryption.rst b/doc/rbd/rbd-encryption.rst index 0628f3781be..4fd40fd4205 100644 --- a/doc/rbd/rbd-encryption.rst +++ b/doc/rbd/rbd-encryption.rst @@ -118,7 +118,7 @@ allows selecting AES-128 as well. Adding / removing passphrases is currently not supported by RBD, but can be applied to the raw RBD data using compatible tools such as cryptsetup. 
-The LUKS header size can vary (upto 136MiB in LUKS2), but is usually upto +The LUKS header size can vary (up to 136MiB in LUKS2), but is usually up to 16MiB, depending on the version of `libcryptsetup` installed. For optimal performance, the encryption format will set the data offset to be aligned with the image object size. For example expect a minimum overhead of 8MiB if using diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst index 3f1b85f30f6..7d64b3548b9 100644 --- a/doc/rbd/rbd-openstack.rst +++ b/doc/rbd/rbd-openstack.rst @@ -323,7 +323,7 @@ has been enabled by default since the Giant release. Moreover, enabling the client admin socket allows the collection of metrics and can be invaluable for troubleshooting. -This socket can be accessed on the hypvervisor (Nova compute) node:: +This socket can be accessed on the hypervisor (Nova compute) node:: ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help diff --git a/doc/releases/cuttlefish.rst b/doc/releases/cuttlefish.rst index 3e522fd3f0d..01758e763a8 100644 --- a/doc/releases/cuttlefish.rst +++ b/doc/releases/cuttlefish.rst @@ -4,7 +4,7 @@ Cuttlefish Cuttlefish is the 3rd stable release of Ceph. It is named after a type of cephalopod (order Sepiida) characterized by a unique internal shell, the -cuttlebone, which is used for control of bouyancy. +cuttlebone, which is used for control of buoyancy. 
v0.61.9 "Cuttlefish" ==================== diff --git a/doc/security/cves.rst b/doc/security/cves.rst index 4e8b6a23329..223b61634fd 100644 --- a/doc/security/cves.rst +++ b/doc/security/cves.rst @@ -55,7 +55,7 @@ Past vulnerabilities +------------+-------------------+-------------+--------------------------------------------+ | 2018-07-10 | `CVE-2018-1128`_ | 7.5 High | Cephx replay vulnerability | +------------+-------------------+-------------+--------------------------------------------+ -| 2018-07-27 | `CVE-2017-7519`_ | 4.4 Medium | libradosstriper unvaliated format string | +| 2018-07-27 | `CVE-2017-7519`_ | 4.4 Medium | libradosstriper unvalidated format string | +------------+-------------------+-------------+--------------------------------------------+ | 2018-08-01 | `CVE-2016-9579`_ | 7.6 High | potential RGW XSS attack | +------------+-------------------+-------------+--------------------------------------------+ diff --git a/doc/start/documenting-ceph.rst b/doc/start/documenting-ceph.rst index 91c478d9405..1ab3a87171d 100644 --- a/doc/start/documenting-ceph.rst +++ b/doc/start/documenting-ceph.rst @@ -296,7 +296,7 @@ the following packages are required: - python3-dev - python3-pip - python3-sphinx -- pytnon3-venv +- python3-venv - libxml2-dev - libxslt1-dev - doxygen @@ -533,14 +533,14 @@ Pull`_ approach. Notify Us --------- -In case The PR did not got a review within in a resonable timeframe, please get in touch +If the PR does not get a review within a reasonable timeframe, please get in touch with the corresponding component lead of the :ref:`clt`. Documentation Style Guide ========================= One objective of the Ceph documentation project is to ensure the readability of -the documentation in both native restructuredText format and its rendered +the documentation in both native reStructuredText format and its rendered formats such as HTML. Navigate to your Ceph repository and view a document in its native format.
You may notice that it is generally as legible in a terminal as it is in its rendered HTML format. Additionally, you may also notice that diff --git a/man/ceph_selinux.8 b/man/ceph_selinux.8 index e2482e8b827..0d6594800e5 100644 --- a/man/ceph_selinux.8 +++ b/man/ceph_selinux.8 @@ -324,7 +324,7 @@ SELinux ceph policy is very flexible allowing users to setup their ceph processe .B STANDARD FILE CONTEXT SELinux defines the file context types for the ceph, if you wanted to -store files with these types in a diffent paths, you need to execute the semanage command to sepecify alternate labeling and then use restorecon to put the labels on disk. +store files with these types in different paths, you need to execute the semanage command to specify alternate labeling and then use restorecon to put the labels on disk. .B semanage fcontext -a -t ceph_exec_t '/srv/ceph/content(/.*)?' .br