* refs/pull/32788/head:
qa/tasks/mgr/dashboard: set pg_num to 32
mgr/pg_autoscaler: default to pg_num[_min] = 32
Reviewed-by: Sage Weil <sage@redhat.com>
78bf924480 increased the default to 16.
Increasing it further to 32 will provide enough parallelism to improve
out-of-the-box performance for new users.
Fixes: https://tracker.ceph.com/issues/43757
Signed-off-by: Neha Ojha <nojha@redhat.com>
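As an illustration of the new default, a minimal Python sketch, assuming
the `ceph` CLI is on PATH and a test cluster is reachable; the pool name
"newpool" is a placeholder:

    import json
    import subprocess

    # Create a pool without specifying pg_num; with this change the pool
    # should start at (and be floored at) 32 PGs.
    subprocess.run(["ceph", "osd", "pool", "create", "newpool"], check=True)

    # Read back pg_num to confirm the starting value.
    out = subprocess.run(
        ["ceph", "osd", "pool", "get", "newpool", "pg_num",
         "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out)["pg_num"])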
This incorporates Neha's suggestion that the list of formats
be made complete everywhere it appears in the document.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This tiny commit changes "The foregoing functionality
equivalent to" to "The foregoing functionality is
equivalent to".
Signed-off-by: Zac Dover <zac.dover@gmail.com>
This commit simply adds "json-pretty", "xml", and "xml-pretty" to
the list of formats available to the --format flag in the command
"ceph pg dump".
Signed-off-by: Zac Dover <zac.dover@gmail.com>
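A small Python sketch exercising the newly listed formats, assuming the
`ceph` CLI is on PATH; output is left on stdout:

    import subprocess

    # Dump PG state in each of the formats the documentation now lists
    # for the --format flag of "ceph pg dump".
    for fmt in ("json-pretty", "xml", "xml-pretty"):
        subprocess.run(["ceph", "pg", "dump", "--format", fmt], check=True)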
This commit changes a clause of the form "When x remains in y status
and never achieve a z status" to a clause of the form "When x remains
in y status and never achieves a z status".
The change is one of "achieve" to "achieves".
Signed-off-by: Zac Dover <zac.dover@gmail.com>
Remove last bits of support for 'mds_cache_size'.
'mds_cache_memory_limit' is preferred.
Fixes: https://tracker.ceph.com/issues/41951
Signed-off-by: Ramana Raja <rraja@redhat.com>
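A minimal sketch of setting the preferred option, assuming the `ceph`
CLI is on PATH; the 4 GiB value is only an example, not a recommendation:

    import subprocess

    # 'mds_cache_size' is gone; bound the MDS cache by memory instead.
    subprocess.run(
        ["ceph", "config", "set", "mds", "mds_cache_memory_limit",
         str(4 * 1024 ** 3)],
        check=True)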
4 or 8 PGs don't provide much parallelism at baseline. Start with 16
and set the floor there; that's a more reasonable number of OSDs to
put to work on a single pool.
Note that there is no magic number here. At some point someone has to
tell Ceph whether an empty pool should get lots of PGs across lots of
devices to get the full throughput of the cluster. But this default will
be a bit less painful/surprising for users.
Fixes: https://tracker.ceph.com/issues/42509
Signed-off-by: Sage Weil <sage@redhat.com>
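The floor can also be raised by hand for a specific pool; a minimal
sketch, assuming the `ceph` CLI is on PATH (the pool name and the value
32 are placeholders):

    import subprocess

    # Raise the autoscaler floor for one pool above the shipped default.
    subprocess.run(
        ["ceph", "osd", "pool", "set", "mypool", "pg_num_min", "32"],
        check=True)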
osd/OSDMap: Show health warning if a pool is configured with size 1
Reviewed-by: Sage Weil <sweil@redhat.com>
Reviewed-by: David Zafman <dzafman@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Introduce a config option called 'mon_warn_on_pool_no_redundancy' that is
used to show a health warning if any pool in the Ceph cluster is
configured with a size of 1. The user can mute/unmute the warning using
'ceph health mute/unmute POOL_NO_REDUNDANCY'.
Add a standalone test to verify the warning when pool size=1 is set. Set
the associated option to 'false' in ceph.conf.template under qa/tasks so
that existing tests do not break.
Fixes: https://tracker.ceph.com/issues/41666
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
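A minimal sketch of triggering and then muting the warning, assuming the
`ceph` CLI is on PATH; the pool name is a placeholder, and newer releases
may require an extra confirmation flag to set size=1:

    import subprocess

    # Drop a pool to a single replica; this should surface the
    # POOL_NO_REDUNDANCY health warning.
    subprocess.run(["ceph", "osd", "pool", "set", "mypool", "size", "1"],
                   check=True)

    # Inspect health, then mute the warning as described above.
    subprocess.run(["ceph", "health", "detail"], check=True)
    subprocess.run(["ceph", "health", "mute", "POOL_NO_REDUNDANCY"],
                   check=True)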
This allows optional, arbitrary key/value constraint clauses to
be appended to "profile XYZ" and "allow module XYZ" caps. A module
can then provide additional validation against these meta-arguments.
Example:
profile rbd pool=rbd
allow module rbd_support with pool=rbd
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
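A sketch of attaching the constrained cap from the example to a client,
assuming the `ceph` CLI is on PATH; the client name is hypothetical, and
note that `ceph auth caps` replaces the entity's existing caps:

    import subprocess

    # Restrict a client's mgr access to the rbd_support module, scoped to
    # the "rbd" pool via the new constraint clause.
    subprocess.run(
        ["ceph", "auth", "caps", "client.rbd-user",
         "mgr", "allow module rbd_support with pool=rbd"],
        check=True)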
This allows specific Python add-on modules to be whitelisted instead
of manually adding each command exported by the module.
allow module {module-name} {access-spec}
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
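A sketch of granting the whitelist form above, assuming the `ceph` CLI is
on PATH; the client name, module name, and read-only access spec are
placeholders for {module-name} {access-spec}:

    import subprocess

    # Whitelist a single mgr module for a client instead of enumerating
    # every command the module exports.
    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.dashboard-ro",
         "mgr", "allow module dashboard r"],
        check=True)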
osd/PrimaryLogPG: always use strict priority ordering for kicked recovery ops
Reviewed-by: Yan Jun <yan.jun8@zte.com.cn>
Reviewed-by: Sage Weil <sage@redhat.com>
This reverts commit c0f87e0f91.
The 'osd_op_queue_cut_off' config option determines which level of
high-priority ops should use strict priority ordering and may change
from time to time. Since the main strategy of 'osd_kick_recovery_op_priority'
is simply to follow 'osd_op_queue_cut_off', we can instead use
'osd_op_queue_cut_off' directly to achieve the same thing explicitly.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
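A minimal sketch of inspecting and pinning the option explicitly,
assuming the `ceph` CLI is on PATH; changing it likely requires an OSD
restart to take effect, and osd.0 is only an illustrative target:

    import subprocess

    # Inspect the current cut-off on one OSD, then set it cluster-wide to
    # 'high', matching the new default introduced below.
    subprocess.run(["ceph", "config", "get", "osd.0", "osd_op_queue_cut_off"],
                   check=True)
    subprocess.run(["ceph", "config", "set", "osd", "osd_op_queue_cut_off",
                    "high"], check=True)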
osd: Change osd op queue cut off default to high
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: xie xingguo <xie.xingguo@zte.com.cn>