ceph/doc/rados
Sage Weil 78bf924480 mgr/pg_autoscaler: default to pg_num[_min] = 16
4 or 8 PGs don't provide much parallelism at baseline.  Start with 16
and set the floor there; that puts a more reasonable number of OSDs to
work on a single pool.

Note that there is no magic number here.  At some point someone has to
tell Ceph whether an empty pool should get lots of PGs across lots of
devices to get the full throughput of the cluster.  But this default will
be a bit less painful/surprising for users.

Fixes: https://tracker.ceph.com/issues/42509
Signed-off-by: Sage Weil <sage@redhat.com>
2019-11-14 13:37:44 -06:00
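The floor the commit describes can still be tuned per pool from the CLI.  A minimal sketch of the relevant commands; the pool name `mypool` is an assumption for illustration:

```shell
# Show what the autoscaler currently recommends for each pool.
ceph osd pool autoscale-status

# Raise the floor for one pool: the autoscaler will never shrink
# this pool below 32 PGs ("mypool" is a hypothetical pool name).
ceph osd pool set mypool pg_num_min 32

# Or pre-split an empty pool you expect to grow, so it gets the full
# parallelism of the cluster from the start.
ceph osd pool set mypool pg_num 128
```

These are cluster-wide configuration commands and require an admin keyring; the output of `autoscale-status` includes the current `PG_NUM` and the autoscaler's target for each pool.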
api              doc: fix typos  (2019-09-26 09:17:07 +02:00)
command          osd/PG: scrub error when objects are larger than osd_max_object_size  (2019-08-14 20:25:12 -05:00)
configuration    mgr/pg_autoscaler: default to pg_num[_min] = 16  (2019-11-14 13:37:44 -06:00)
deployment       doc: update with osd addition  (2019-11-01 13:55:41 +08:00)
man
operations       Merge PR #31177 into master  (2019-11-08 07:22:05 -06:00)
troubleshooting  doc: remove all pg_num arguments to 'osd pool create'  (2019-09-22 16:58:33 -05:00)
index.rst        doc: filesystem to file system  (2019-09-10 08:43:28 -07:00)