mirror of
https://github.com/ceph/ceph
synced 2025-02-20 01:17:47 +00:00
doc/rados/operations/pools.rst: Added docs for stretch pool set|unset|show

Fixes: https://tracker.ceph.com/issues/64802
Signed-off-by: Kamoltat <ksirivad@redhat.com>
@ -737,6 +737,117 @@ Managing pools that are flagged with ``--bulk``
===============================================

See :ref:`managing_bulk_flagged_pools`.

Setting values for a stretch pool
=================================

To set values for a stretch pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool stretch set {pool-name} {peering_crush_bucket_count} {peering_crush_bucket_target} {peering_crush_bucket_barrier} {crush_rule} {size} {min_size} [--yes-i-really-mean-it]

Here is a breakdown of the arguments:

.. describe:: {pool-name}

   The name of the pool. It must be an existing pool; this command does not
   create a new pool.

   :Type: String
   :Required: Yes.

.. describe:: {peering_crush_bucket_count}

   This value is used along with ``peering_crush_bucket_barrier`` to determine
   whether the set of OSDs in the chosen acting set can peer with each other,
   based on the number of distinct buckets in the acting set.

   :Type: Integer
   :Required: Yes.

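As a rough sketch of the check described above (the helper and data layout are hypothetical, not Ceph's actual implementation), peering is allowed only when the acting set spans at least ``peering_crush_bucket_count`` distinct buckets of the barrier type:

```python
# Illustrative sketch only; names and data layout are hypothetical,
# not Ceph's actual peering implementation.
def can_peer(acting_set_buckets, peering_crush_bucket_count):
    """acting_set_buckets: the barrier-type bucket (e.g. the datacenter
    name) hosting each OSD in the acting set."""
    distinct = len(set(acting_set_buckets))
    return distinct >= peering_crush_bucket_count

# With peering_crush_bucket_count=2, an acting set spread across two
# datacenters may peer, while one confined to a single datacenter may not.
print(can_peer(["dc1", "dc1", "dc2"], 2))  # True
print(can_peer(["dc1", "dc1", "dc1"], 2))  # False
```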
.. describe:: {peering_crush_bucket_target}

   This value is used along with ``peering_crush_bucket_barrier`` and ``size``
   to calculate the value ``bucket_max``, which limits the number of OSDs in
   the same bucket from being chosen for the acting set of a PG.

   :Type: Integer
   :Required: Yes.

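The exact ``bucket_max`` calculation is performed inside Ceph's peering code; as an illustrative sketch only, assuming the limit is roughly the pool size spread evenly across the target number of buckets, rounded up:

```python
import math

# Hypothetical sketch of the bucket_max limit described above; the exact
# formula lives in Ceph's peering code and may differ.
def bucket_max(size, peering_crush_bucket_target):
    # Assume the pool size is spread evenly across the target number of
    # buckets, rounded up.
    return math.ceil(size / peering_crush_bucket_target)

# e.g. a size-6 pool targeting 3 buckets would allow at most 2 OSDs
# from any single bucket in a PG's acting set
print(bucket_max(6, 3))  # 2
```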
.. describe:: {peering_crush_bucket_barrier}

   The type of bucket across which the pool is stretched, e.g., ``rack``,
   ``row``, or ``datacenter``.

   :Type: String
   :Required: Yes.

.. describe:: {crush_rule}

   The CRUSH rule to use for the stretch pool. The type of the pool must
   match the type of ``crush_rule`` (replicated or erasure).

   :Type: String
   :Required: Yes.

.. describe:: {size}

   The number of replicas for objects in the stretch pool.

   :Type: Integer
   :Required: Yes.

.. describe:: {min_size}

   The minimum number of replicas required for I/O in the stretch pool.

   :Type: Integer
   :Required: Yes.

.. describe:: {--yes-i-really-mean-it}

   This flag is required to confirm that you really want to bypass the
   safety checks and set the values for a stretch pool, e.g., when you are
   trying to set ``peering_crush_bucket_count`` or
   ``peering_crush_bucket_target`` to a value greater than the number of
   buckets in the CRUSH map.

   :Type: Flag
   :Required: No.

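As an illustrative example (the pool name and values are hypothetical), the following stretches a replicated pool across three datacenters, requires OSDs from at least two datacenters to peer, and keeps six replicas with a minimum of three for I/O:

.. prompt:: bash $

   ceph osd pool stretch set mypool 2 3 datacenter replicated_rule 6 3
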
.. _setting_values_for_a_stretch_pool:

Unsetting values for a stretch pool
===================================

To move a stretch pool back to a non-stretch pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool stretch unset {pool-name}

Here is a breakdown of the argument:

.. describe:: {pool-name}

   The name of the pool. It must be an existing stretch pool, i.e., one
   whose values have already been set with ``ceph osd pool stretch set``.

   :Type: String
   :Required: Yes.

Showing values of a stretch pool
================================

To show the values of a stretch pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool stretch show {pool-name}

Here is a breakdown of the argument:

.. describe:: {pool-name}

   The name of the pool. It must be an existing stretch pool, i.e., one
   whose values have already been set with ``ceph osd pool stretch set``.

   :Type: String
   :Required: Yes.

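For example, with a hypothetical stretch pool named ``mypool``:

.. prompt:: bash $

   ceph osd pool stretch show mypool

The output lists the stretch values currently set on the pool.
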
.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups

@ -81,6 +81,18 @@ Data Center B. In a situation of this kind, the loss of Data Center A means
that the data is lost and Ceph will not be able to operate on it. This
situation is surprisingly difficult to avoid using only standard CRUSH rules.

Individual Stretch Pools
========================

Setting individual ``stretch pool`` values is an option that allows specific
pools to be distributed across two or more data centers. This is achieved by
running the ``ceph osd pool stretch set`` command on each desired pool, as
opposed to applying a cluster-wide configuration with ``stretch mode``.
See :ref:`setting_values_for_a_stretch_pool`.

Use ``stretch mode`` when you have exactly two data centers and require a
uniform configuration across the entire cluster. Conversely, opt for a
``stretch pool`` when you need a particular pool to be replicated across more
than two data centers, which provides more granular control and supports a
larger cluster size.

Stretch Mode
============