mirror of
https://github.com/ceph/ceph
synced 2025-01-18 09:02:08 +00:00
402d2eacbc
The current documentation tries hard to convince people to set both
`osd_pool_default_pg_num` and `osd_pool_default_pgp_num` in their configs,
but at least the latter has undesirable side effects on any Ceph version
that has PG autoscaling enabled by default (at least Quincy and beyond).

Assume a cluster with defaults of `64` for `pg_num` and `pgp_num`. Starting
`radosgw` will fail, because it tries to create various pools without
providing values for `pg_num` or `pgp_num`. This triggers the following in
`OSDMonitor::prepare_new_pool()`:

- `pg_num` is set to `1`, because autoscaling is enabled
- `pgp_num` is set to `osd_pool_default_pgp_num`, which we set to `64`
- This is an invalid setup, so the pool creation fails

Likewise, `ceph osd pool create mypool` (without providing values for
`pg_num` or `pgp_num`) does not work.

Following this rationale:

- Not providing a default value for `pgp_num` will always do the right
  thing, unless you use advanced features, in which case you can be
  expected to set both values on pool creation
- Setting `osd_pool_default_pgp_num` in your config breaks pool creation
  in various cases

This commit:

- Removes `osd_pool_default_pgp_num` from all example configs
- Adds mentions of autoscaling and how it interacts with the default
  values in various places

For each file that was touched, the following maintenance was also
performed:

- Change internal spaces to underscores in config value names
- Remove mentions of filestore or any of its settings
- Fix minor inconsistencies, like indentation etc.

There is also a ticket which I think is very relevant and fixed by this,
though it only captures part of the broader issue addressed here:

Fixes: https://tracker.ceph.com/issues/47176

Signed-off-by: Conrad Hoffmann <ch@bitfehler.net>
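The failure mode described in the commit message can be illustrated with a minimal config fragment (the `64` values are examples from the scenario above, not recommendations):

```ini
[global]
# Broken on versions with autoscaling on by default (Quincy and later):
# new pools start at pg_num = 1, but this setting forces pgp_num = 64,
# and pgp_num must not exceed pg_num, so pool creation fails.
#osd_pool_default_pgp_num = 64

# Safe: only consulted when autoscaling is off or warn; when pgp_num is
# left unset, it simply follows pg_num.
osd_pool_default_pg_num = 64
osd_pool_default_pg_autoscale_mode = on
```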
32 lines
1.1 KiB
Plaintext
[global]
fsid = {cluster-id}
mon_initial_members = {hostname}[, {hostname}]
mon_host = {ip-address}[, {ip-address}]

#All clusters have a front-side public network.
#If you have two network interfaces, you can configure a private / cluster
#network for RADOS object replication, heartbeats, backfill,
#recovery, etc.
public_network = {network}[, {network}]
#cluster_network = {network}[, {network}]

#Clusters require authentication by default.
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

#Choose reasonable number of replicas and placement groups.
osd_journal_size = {n}
osd_pool_default_size = {n} # Write an object n times.
osd_pool_default_min_size = {n} # Allow writing n copies in a degraded state.
osd_pool_default_pg_autoscale_mode = {mode} # on, off, or warn
# Only used if autoscaling is off or warn:
osd_pool_default_pg_num = {n}

#Choose a reasonable crush leaf type.
#0 for a 1-node cluster.
#1 for a multi node cluster in a single rack
#2 for a multi node, multi chassis cluster with multiple hosts in a chassis
#3 for a multi node cluster with hosts across racks, etc.
osd_crush_chooseleaf_type = {n}
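With a config like the one above, pool creation can rely on the autoscaler. A quick way to verify on a running cluster (assuming an admin keyring is available; `mypool` is just an example name) is:

```shell
# Create a pool without specifying pg_num or pgp_num; with autoscaling
# on, the autoscaler picks and adjusts the PG count over time.
ceph osd pool create mypool

# Inspect the current and target PG counts chosen by the autoscaler.
ceph osd pool autoscale-status
```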