Local Pool Module
=================

The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct ``rack`` in the cluster. This can be useful for
deployments where it is desirable to distribute some data locally and other data
globally across the cluster. One use case is measuring the performance and
testing the behavior of specific drive, NIC, or chassis models in isolation.
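
As a purely hypothetical illustration (the rack names here are invented, and the
exact pool names depend on your CRUSH map and on the ``prefix`` option described
below), a cluster whose CRUSH map contains two racks named ``rack1`` and
``rack2`` would, with the default settings, end up with pools that appear in
``ceph osd pool ls`` roughly as::

  # hypothetical listing -- pool names follow <prefix><bucket name>
  by-rack-rack1
  by-rack-rack2
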

Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool
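
If the module loaded correctly, it should show up in the ``enabled_modules``
section of the output of the following command (exact output format varies by
release)::

  ceph mgr module ls
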

Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): what failure domain we should
  separate data replicas across.
* **pg_num** (default: `128`): number of PGs to create for each pool
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set ``min_size`` to (unchanged from
  Ceph's default if this option is not set)
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs, ::

  ceph config set mgr mgr/localpool/num_rep 2
  ceph config set mgr mgr/localpool/pg_num 64
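
To confirm that the new values were stored, they can be read back with
``ceph config get`` (a quick sanity check; exact output formatting may vary
between releases)::

  ceph config get mgr mgr/localpool/num_rep
  ceph config get mgr mgr/localpool/pg_num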