Local Pool Module
=================

The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it
will create a pool for each distinct rack in the cluster. This can be
useful for deployments that want to distribute some data locally as well
as globally across the cluster.
Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool
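To confirm the module is active, you can list the enabled manager modules
and look for ``localpool`` (a routine check, not something the module
itself requires):

```shell
# List enabled mgr modules; "localpool" should appear once enabled.
ceph mgr module ls | grep localpool
```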
Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): what failure domain we should
  separate data replicas across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set ``min_size`` to (unchanged
  from Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.
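As an illustration of how the ``prefix`` default combines with the subtree
type, the following hypothetical sketch mimics the implied naming scheme;
the helper name and the exact substitution of ``$subtreetype`` are
assumptions for illustration, not the module's actual code:

```python
# Hypothetical sketch of the pool-naming scheme implied by the defaults:
# the prefix "by-<subtree type>-" is followed by the subtree's name.
def local_pool_name(subtree_name, subtree_type="rack", prefix=None):
    if prefix is None:
        # Default prefix: by-$subtreetype- (e.g. "by-rack-")
        prefix = "by-{}-".format(subtree_type)
    return prefix + subtree_name

# With the default subtree type of "rack", racks r1 and r2 would map to
# pools "by-rack-r1" and "by-rack-r2"; a custom prefix overrides this.
print(local_pool_name("r1"))
print(local_pool_name("r2", prefix="zone-"))
```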
These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::

  ceph config set mgr mgr/localpool/num_rep 2
  ceph config set mgr mgr/localpool/pg_num 64
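After changing these settings, you can read back the stored values and
inspect the pools the module has created (the pool names shown assume the
default ``by-rack-`` prefix):

```shell
# Read back a stored option to confirm the change took effect.
ceph config get mgr mgr/localpool/num_rep

# List pools; locally scoped pools appear with the configured prefix,
# e.g. "by-rack-r1" for a rack named r1 under the defaults.
ceph osd pool ls detail
```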