doc/rados/configuration: add missing options

and link to them when appropriate

Signed-off-by: Kefu Chai <kchai@redhat.com>
Kefu Chai 2021-04-22 11:05:40 +08:00
parent b61d034e77
commit be07a5b407
3 changed files with 24 additions and 12 deletions

View File

@@ -324,6 +324,15 @@ To enable sharding and apply the Pacific defaults, stop an OSD and run
.. confval:: bluestore_rocksdb_cf
.. confval:: bluestore_rocksdb_cfs
Throttling
==========
.. confval:: bluestore_throttle_bytes
.. confval:: bluestore_throttle_deferred_bytes
.. confval:: bluestore_throttle_cost_per_io
.. confval:: bluestore_throttle_cost_per_io_hdd
.. confval:: bluestore_throttle_cost_per_io_ssd
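
For illustration only, a runtime adjustment of these throttle options through the generic ``ceph config`` interface might look like the following; the OSD id and the values are placeholders, not recommendations::

    # Set the BlueStore throttle limits on a single OSD (example values).
    ceph config set osd.0 bluestore_throttle_bytes 67108864
    ceph config set osd.0 bluestore_throttle_deferred_bytes 134217728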
SPDK Usage
==================

View File

@@ -167,7 +167,7 @@ maximize the impact of the mclock scheduler.
:Bluestore Throttle Parameters:
We recommend using the default values as defined by
``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``. But
:confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`. But
these parameters may also be determined during the benchmarking phase as
described below.
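
A minimal sketch of pinning these two options in ``ceph.conf``, assuming the stock defaults of 64 MiB and 128 MiB respectively::

    [osd]
    # Keep the BlueStore throttles at their default values.
    bluestore_throttle_bytes = 67108864
    bluestore_throttle_deferred_bytes = 134217728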
@@ -183,7 +183,7 @@ correct bluestore throttle values.
2. Install cbt and all the dependencies mentioned on the cbt GitHub page.
3. Construct the Ceph configuration file and the cbt yaml file.
4. Ensure that the bluestore throttle options (i.e.
``bluestore_throttle_bytes`` and ``bluestore_throttle_deferred_bytes``) are
:confval:`bluestore_throttle_bytes` and :confval:`bluestore_throttle_deferred_bytes`) are
set to the default values (one way to verify this is sketched below).
5. Ensure that the test is performed on similar device types to get reliable
OSD capacity data.
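
One way to verify step 4, assuming ``ceph config show-with-defaults`` is available on your release (``osd.0`` is a placeholder)::

    # Show the effective values, defaults included, and filter the throttles.
    ceph config show-with-defaults osd.0 | grep bluestore_throttle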
@@ -195,8 +195,8 @@ correct bluestore throttle values.
value is the baseline throughput (IOPS) when the default bluestore
throttle options are in effect.
9. If the intent is to determine the bluestore throttle values for your
environment, then set the two options, ``bluestore_throttle_bytes`` and
``bluestore_throttle_deferred_bytes`` to 32 KiB(32768 Bytes) each to begin
environment, then set the two options, :confval:`bluestore_throttle_bytes` and
:confval:`bluestore_throttle_deferred_bytes` to 32 KiB (32768 bytes) each to begin
with (see the sketch after this list). Otherwise, you may skip to the next section.
10. Run the 4 KiB random write workload as before on the OSD(s) for 300 seconds.
11. Note the overall throughput from the cbt log files and compare the value
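
A sketch of the setting described in step 9, using a placeholder OSD id::

    # Begin the search with both throttles at 32 KiB (32768 bytes).
    ceph config set osd.0 bluestore_throttle_bytes 32768
    ceph config set osd.0 bluestore_throttle_deferred_bytes 32768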
@@ -253,7 +253,7 @@ The other values for the built-in profiles include *balanced* and
*high_recovery_ops*.
If there is a requirement to change the default profile, then the option
``osd_mclock_profile`` may be set in the **[global]** or **[osd]** section of
:confval:`osd_mclock_profile` may be set in the **[global]** or **[osd]** section of
your Ceph configuration file before bringing up your cluster.
Alternatively, to change the profile during runtime, use the following command:
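
A plausible form of such a command, assuming the generic ``ceph config set`` interface; the OSD id and the profile name are placeholders::

    # Switch one OSD to the built-in high_recovery_ops profile at runtime.
    ceph config set osd.0 osd_mclock_profile high_recovery_ops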

View File

@@ -179,6 +179,9 @@ scrubbing operations.
Operations
==========
.. confval:: osd_op_num_shards
.. confval:: osd_op_num_shards_hdd
.. confval:: osd_op_num_shards_ssd
.. confval:: osd_op_queue
.. confval:: osd_op_queue_cut_off
.. confval:: osd_client_op_priority
@@ -285,8 +288,8 @@ queues within Ceph. First, requests to an OSD are sharded by their
placement group identifier. Each shard has its own mClock queue and
these queues neither interact nor share information with one another. The
number of shards can be controlled with the configuration options
``osd_op_num_shards``, ``osd_op_num_shards_hdd``, and
``osd_op_num_shards_ssd``. A lower number of shards will increase the
:confval:`osd_op_num_shards`, :confval:`osd_op_num_shards_hdd`, and
:confval:`osd_op_num_shards_ssd`. A lower number of shards will increase the
impact of the mClock queues, but may have other deleterious effects.
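
As a hedged illustration, the shard counts can be set per device class in ``ceph.conf``; the values below are illustrative only::

    [osd]
    # Fewer shards concentrate requests into fewer mClock queues,
    # increasing mClock's influence (illustrative values only).
    osd_op_num_shards_hdd = 1
    osd_op_num_shards_ssd = 4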
Second, requests are transferred from the operation queue to the
@@ -303,11 +306,11 @@ the impact of mClock, we want to keep as few operations in the
operation sequencer as possible. So we have an inherent tension.
The configuration options that influence the number of operations in
the operation sequencer are ``bluestore_throttle_bytes``,
``bluestore_throttle_deferred_bytes``,
``bluestore_throttle_cost_per_io``,
``bluestore_throttle_cost_per_io_hdd``, and
``bluestore_throttle_cost_per_io_ssd``.
the operation sequencer are :confval:`bluestore_throttle_bytes`,
:confval:`bluestore_throttle_deferred_bytes`,
:confval:`bluestore_throttle_cost_per_io`,
:confval:`bluestore_throttle_cost_per_io_hdd`, and
:confval:`bluestore_throttle_cost_per_io_ssd`.
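
To inspect the limits an OSD is currently configured with, a quick check via ``ceph config get`` (placeholder OSD id)::

    # Query the current throttle limits for one OSD from the monitor store.
    ceph config get osd.0 bluestore_throttle_bytes
    ceph config get osd.0 bluestore_throttle_deferred_bytes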
A third factor that affects the impact of the mClock algorithm is that
we're using a distributed system, where requests are made to multiple