Use OSD_POOL_PRIORITY_MAX and OSD_POOL_PRIORITY_MIN constants
Scale legacy priorities if they exceed the maximum
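For illustration, a hedged sketch of setting a pool's recovery priority within the clamped range (the pool name is made up, and the exact bounds come from the constants above):
$ ceph osd pool set mypool recovery_priority 5
$ ceph osd pool get mypool recovery_priority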
Signed-off-by: David Zafman <dzafman@redhat.com>
Add the missing `max_change`, `max_osds`, and `--no-increasing` parameters to `reweight-by-utilization` and `test-reweight-by-utilization`. Minor adjustments to wording.
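For example, a hedged sketch of an invocation with the newly documented parameters (values are illustrative; the usual argument order is overload threshold, max_change, max_osds):
$ ceph osd test-reweight-by-utilization 120 0.05 10 --no-increasing
$ ceph osd reweight-by-utilization 120 0.05 10 --no-increasing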
Signed-off-by: Anthony D'Atri <anthony.datri@gmail.com>
osd_pool_default_pg_autoscale_mode is the right parameter to
set the placement-group autoscale mode.
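For example, a hedged sketch of setting it (accepted modes include off, warn, and on):
$ ceph config set global osd_pool_default_pg_autoscale_mode on
or in ceph.conf:
[global]
osd_pool_default_pg_autoscale_mode = on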
Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
The current documentation for the MANY_OBJECTS_PER_PG warning
states that "The threshold can be raised to silence the health
warning by adjusting the mon_pg_warn_max_object_skew config
option on the monitors." This has not been true since at least
Luminous: the option must be adjusted on the managers.
I encountered this problem and spent quite some time injecting
mon_pg_warn_max_object_skew into the monitors, adding the option
to ceph.conf, and restarting the monitors several times, but the
warning would not go away. I had to download the code to see what
was happening, and found this:
$ git grep -A 3 mon_pg_warn_max_object_skew src/common/options.cc
src/common/options.cc:1480: Option("mon_pg_warn_max_object_skew", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
src/common/options.cc-1481- .set_default(10.0)
src/common/options.cc-1482- .set_description("max skew few average in objects per pg")
src/common/options.cc-1483- .add_service("mgr"),
After I restarted the ceph-mgr service, the warning went away.
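For reference, a hedged example of raising the threshold where the option is actually consumed (the value is illustrative, and this assumes the centralized config database is available):
$ ceph config set mgr mon_pg_warn_max_object_skew 20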
Signed-off-by: Vangelis Tasoulas <vangelis@tasoulas.net>
Added a note to the dashboard documentation about the requirement
for the latest ceph-iscsi version 3. Added some doc references
and replaced some URLs in the iSCSI docs with reST labels.
Signed-off-by: Lenz Grimmer <lgrimmer@suse.com>
config-ref: add a note on current scheduler settings.
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
This doesn't integrate very well into network-config.rst, mostly because
that document is horribly out of date and I don't know where to start.
:(
Signed-off-by: Sage Weil <sage@redhat.com>
Add option descriptions for osd_recovery_priority and osd_recovery_op_priority
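For example, a hedged sketch of tuning these options (values are illustrative, not recommendations):
$ ceph config set osd osd_recovery_priority 5
$ ceph config set osd osd_recovery_op_priority 3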
Fixes: https://tracker.ceph.com/issues/23999
Signed-off-by: David Zafman <dzafman@redhat.com>
ruleset is no longer used after merging the patch below:
commit f9a095deb1
crush: s/ruleset/id/ in decompiled output
Moving away from the 'ruleset' terminology.
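For illustration, a hedged sketch of decompiled output after this change (file names and rule contents are made up); note 'id' where older output had 'ruleset':
$ crushtool -d crushmap.bin -o crushmap.txt
$ grep -A 2 '^rule ' crushmap.txt
rule replicated_rule {
        id 0
        type replicated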
Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
For those with multiple storage pools sharing the same devices,
I think it makes much more sense to offer per-pool commands, so
that high-priority pools, e.g. pools hosting data of more
importance than others, can be brought back to normal quickly.
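A hedged sketch of how such per-pool commands might look (the command names and the pool name are assumptions of this sketch):
$ ceph osd pool force-recovery mypool
$ ceph osd pool force-backfill mypool
$ ceph osd pool cancel-force-recovery mypool
$ ceph osd pool cancel-force-backfill mypool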
Fixes: http://tracker.ceph.com/issues/38456
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Fixed the incorrect mention of 'osd_deep_mon_scrub_interval' in health-checks.rst.
Changed it to 'osd_deep_scrub_interval'.
Fixes: https://tracker.ceph.com/issues/38310
Signed-off-by: Ashish Singh <assingh@redhat.com>
Since ceph-deploy no longer supports the --cluster option, the corresponding section in this doc can be removed
Signed-off-by: Tatsuya Naganawa <tatsuyan201101@gmail.com>