Merge pull request #30440 from dzafman/wip-41900

Revert "common: default pg_autoscale_mode=on for new pools"

Reviewed-by: Sage Weil <sage@redhat.com>
David Zafman, 2019-09-18 09:18:46 -07:00 (committed by GitHub)
commit 7ea365920b
5 changed files with 7 additions and 14 deletions


@@ -141,10 +141,3 @@
 * New OSD daemon command dump_scrub_reservations which reveals the
   scrub reservations that are held for local (primary) and remote (replica) PGs.
-* The ``pg_autoscale_mode`` is now set to ``on`` by default for newly
-  created pools, which means that Ceph will automatically manage the
-  number of PGs. To change this behavior, or to learn more about PG
-  autoscaling, see :ref:`pg-autoscaler`. Note that existing pools in
-  upgraded clusters will still be set to ``warn`` by default.
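In practice the revert means new pools start in ``warn`` mode, where the autoscaler only reports the pg_num it would choose instead of applying it. A minimal sketch of opting back in on a running cluster (the pool name ``rbd`` is only an example here; commands assume the Nautilus-era CLI):

$ ceph osd pool autoscale-status                                  # report-only view of suggested pg_num
$ ceph osd pool set rbd pg_autoscale_mode on                      # let the autoscaler manage this one pool
$ ceph config set global osd_pool_default_pg_autoscale_mode on    # restore the reverted default for future pools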


@@ -2625,7 +2625,7 @@ std::vector<Option> get_global_options() {
     .set_description(""),
 
     Option("osd_pool_default_pg_autoscale_mode", Option::TYPE_STR, Option::LEVEL_ADVANCED)
-    .set_default("on")
+    .set_default("warn")
     .set_flag(Option::FLAG_RUNTIME)
     .set_enum_allowed({"off", "warn", "on"})
     .set_description("Default PG autoscaling behavior for new pools"),


@@ -25,7 +25,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 3
@@ -51,7 +51,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 1


@@ -85,7 +85,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 3
@@ -94,5 +94,5 @@
 osdmaptool: writing epoch 1 to myosdmap
 $ osdmaptool --print myosdmap | grep 'pool 1'
 osdmaptool: osdmap file 'myosdmap'
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 192 pgp_num 192 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 $ rm -f myosdmap
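These expectations come from cram-style CLI tests that build a small OSDMap and grep the resulting pool line. A rough sketch of reproducing the check by hand, assuming the osdmaptool from the same build (the exact flags the test uses to create the default 'rbd' pool are not shown in this excerpt):

$ osdmaptool --createsimple 3 myosdmap           # creating a fresh map produces the "writing epoch 1" line seen above
$ osdmaptool --print myosdmap | grep 'pool 1'    # after this revert the pool line should read autoscale_mode warn
$ rm -f myosdmap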


@@ -797,7 +797,7 @@
 nearfull_ratio 0
 min_compat_client jewel
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 max_osd 239
@@ -807,5 +807,5 @@
 osdmaptool: writing epoch 1 to om
 $ osdmaptool --print om | grep 'pool 1'
 osdmaptool: osdmap file 'om'
-pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode on last_change 0 flags hashpspool stripe_width 0 application rbd
+pool 1 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 15296 pgp_num 15296 autoscale_mode warn last_change 0 flags hashpspool stripe_width 0 application rbd
 $ rm -f om
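The pool lines asserted in these tests have the same shape as what a live cluster reports, so the effect of the changed default is easy to spot-check outside the test suite (sketch; pool IDs and names will differ):

$ ceph osd pool ls detail | grep autoscale_mode    # per-pool mode as stored in the OSDMap
$ ceph osd dump | grep 'pool '                     # same information in the full map dump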