common: default pg_autoscale_mode=on for new pools

Signed-off-by: Sage Weil <sage@redhat.com>
Author: Sage Weil, 2019-09-03 10:54:47 -05:00
parent 991520b869
commit 91e4fc24e7
5 changed files with 14 additions and 7 deletions


@@ -135,3 +135,10 @@
 * New OSD daemon command dump_scrub_reservations which reveals the
   scrub reservations that are held for local (primary) and remote (replica) PGs.
+* The ``pg_autoscale_mode`` is now set to ``on`` by default for newly
+  created pools, which means that Ceph will automatically manage the
+  number of PGs. To change this behavior, or to learn more about PG
+  autoscaling, see :ref:`pg-autoscaler`. Note that existing pools in
+  upgraded clusters will still be set to ``warn`` by default.
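For reference, the per-pool override mentioned in the note is done with ``ceph osd pool set``; a minimal sketch (the pool name ``foo`` is only an example):

  $ ceph osd pool set foo pg_autoscale_mode warn
  $ ceph osd pool autoscale-status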


@@ -2625,7 +2625,7 @@ std::vector<Option> get_global_options() {
     .set_description(""),
     Option("osd_pool_default_pg_autoscale_mode", Option::TYPE_STR, Option::LEVEL_ADVANCED)
-    .set_default("warn")
+    .set_default("on")
     .set_flag(Option::FLAG_RUNTIME)
     .set_enum_allowed({"off", "warn", "on"})
     .set_description("Default PG autoscaling behavior for new pools"),


@@ -25,7 +25,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 3
@@ -51,7 +51,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 1


@@ -85,7 +85,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 3
@@ -94,5 +94,5 @@
 osdmaptool: writing epoch 1 to myosdmap
 $ osdmaptool --print myosdmap | grep 'pool 1'
 osdmaptool: osdmap file 'myosdmap'
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 $ rm -f myosdmap


@@ -797,7 +797,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 239
@@ -807,5 +807,5 @@
 osdmaptool: writing epoch 1 to om
 $ osdmaptool --print om | grep 'pool 1'
 osdmaptool: osdmap file 'om'
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
 $ rm -f om
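With this change in place, one way to confirm the new default on a live cluster is to create a pool and query its mode; a minimal sketch (the pool name ``testpool`` is only an example, and the last line is the expected output):

  $ ceph osd pool create testpool 32
  $ ceph osd pool get testpool pg_autoscale_mode
  pg_autoscale_mode: on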