Use OSD_POOL_PRIORITY_MAX and OSD_POOL_PRIORITY_MIN constants
Scale legacy priorities if they exceed the maximum
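For illustration, pool recovery priorities are now kept within
[OSD_POOL_PRIORITY_MIN, OSD_POOL_PRIORITY_MAX] (assumed here to be
-10..10), and an out-of-range legacy value is scaled into that range
rather than used as-is:
$ ceph osd pool set mypool recovery_priority 5    # within range, used as-is
$ ceph osd pool set mypool recovery_priority 50   # legacy-style value, scaled/clamped into range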
Signed-off-by: David Zafman <dzafman@redhat.com>
osd_pool_default_pg_autoscale_mode is the right parameter to
set the default placement-group autoscale mode.
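For example, roughly (assuming a cluster with the 'ceph config'
interface; the pool name is illustrative):
$ ceph config set global osd_pool_default_pg_autoscale_mode on   # default for new pools
$ ceph osd pool set mypool pg_autoscale_mode on                  # per-pool override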
Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
The current documentation for the MANY_OBJECTS_PER_PG warning
states that "The threshold can be raised to silence the health
warning by adjusting the mon_pg_warn_max_object_skew config
option on the monitors." It seems that this has not been true
since at least Luminous, and this option should be adjusted on
the managers instead.
I encountered this problem and spent quite some time injecting
mon_pg_warn_max_object_skew into the monitors; I added the option
to ceph.conf and restarted the monitors several times, but the
warning would not go away. I had to download the code to see what
was happening, and I found this:
$ git grep -A 3 mon_pg_warn_max_object_skew src/common/options.cc
src/common/options.cc:1480: Option("mon_pg_warn_max_object_skew", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
src/common/options.cc-1481- .set_default(10.0)
src/common/options.cc-1482- .set_description("max skew few average in objects per pg")
src/common/options.cc-1483- .add_service("mgr"),
After I restarted the ceph-mgr service, the warning went away.
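In other words, the fix boils down to something like this (the
threshold value is just an example):
$ ceph config set mgr mon_pg_warn_max_object_skew 20   # consumed by ceph-mgr, not the mons
$ systemctl restart ceph-mgr.target                    # pick up the new value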
Signed-off-by: Vangelis Tasoulas <vangelis@tasoulas.net>
Added a note to the dashboard documentation about the requirement
for the latest ceph-iscsi version 3. Also added some doc references
and replaced some URLs in the iSCSI docs with reST labels.
Signed-off-by: Lenz Grimmer <lgrimmer@suse.com>
config-ref: add a note on current scheduler settings.
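Assuming the note concerns the OSD operation queue scheduler, the
settings in question look roughly like this (values are illustrative,
and changing them requires an OSD restart):
$ ceph config set osd osd_op_queue wpq            # scheduler implementation
$ ceph config set osd osd_op_queue_cut_off high   # priority cut-off for strict queueing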
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
This doesn't integrate very well into network-config.rst, mostly because
that document is horribly out of date and I don't know where to start.
:(
Signed-off-by: Sage Weil <sage@redhat.com>
Add option descriptions for osd_recovery_priority and osd_recovery_op_priority
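For context, the two options tune different things; a rough sketch
(values shown are the assumed defaults):
$ ceph config set osd osd_recovery_priority 5      # priority of recovery work as a whole
$ ceph config set osd osd_recovery_op_priority 3   # priority of individual recovery ops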
Fixes: https://tracker.ceph.com/issues/23999
Signed-off-by: David Zafman <dzafman@redhat.com>
'ruleset' is no longer used after merging the patch below:
commit f9a095deb1
crush: s/ruleset/id/ in decompiled output
Moving away from the 'ruleset' terminology.
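After this change, decompiled output reads roughly as follows (rule
name and id are illustrative):
$ crushtool -d /tmp/crushmap
...
rule replicated_rule {
        id 0                  # formerly 'ruleset 0'
        type replicated
        ...
}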
Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
For those with multiple storage pools sharing the same devices,
I think it would make much more sense to offer per-pool commands,
so that pools with high priority, e.g., because they are hosting
data of more importance than others, can be brought back to
normal quickly.
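Today this is only possible per PG, e.g.:
$ ceph pg force-recovery 2.5 2.6
The per-pool equivalents would look something like the following
(command names sketch the intent, not necessarily the final syntax):
$ ceph osd pool force-recovery mypool
$ ceph osd pool cancel-force-recovery mypool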
Fixes: http://tracker.ceph.com/issues/38456
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Fixed the incorrect mention of 'osd_deep_mon_scrub_interval' in health-checks.rst.
Changed it to 'osd_deep_scrub_interval'.
Fixes: https://tracker.ceph.com/issues/38310
Signed-off-by: Ashish Singh <assingh@redhat.com>
Since ceph-deploy no longer supports the --cluster option, the corresponding section in this doc can be removed
Signed-off-by: Tatsuya Naganawa <tatsuyan201101@gmail.com>
Make this mon_warn code clearer, since it involves two values.
The code used the mon scrub interval instead of the pg scrub interval.
Rename the config values to include _pg_ and ratio to make them clearer.
Fix scrub warning handling to use per-pool intervals when specified.
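Assuming the renamed options are mon_warn_pg_not_scrubbed_ratio and
mon_warn_pg_not_deep_scrubbed_ratio, and that the ratio is the extra
fraction of the (possibly per-pool) interval past which the warning
fires, a worked example:
$ ceph config set global mon_warn_pg_not_deep_scrubbed_ratio 0.75
# with osd_deep_scrub_interval = 7 days, a PG is then reported once it
# has gone 7 * (1 + 0.75) = 12.25 days without a deep scrub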
Fixes: http://tracker.ceph.com/issues/37264
Signed-off-by: David Zafman <dzafman@redhat.com>
These were never implemented. They can be added back if they are
implemented and shown to help performance.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
* refs/pull/25849/head:
qa/suites/rados/upgrade: one mon per node, and enable-msgr2 at end
qa/rados/thrash-old-clients: avoid msgr2
mon: make bootstrap rank check more robust
mon: clean up probe debug output a bit
msg/async: use v1 for v1 <-> [v2,v1] peers
msg/async/AsyncMessenger: drop single-use _send_to
mon/HealthMonitor: raise MON_MSGR2_NOT_ENABLED if mons not bound to msgr2
doc/rados/operations/health-checks: document MON_* health warnings
mon/MonMapMonitor: add 'mon enable-msgr2' command
mon: respawn if rank addr changes
mon/MonMap: calc_addr_mons() after setting rank addrvec
Reviewed-by: Ricardo Dias <rdias@suse.com>
If the ms_bind_msgr2 option is enabled, and all mons are nautilus,
raise a health alert if any mons aren't bound to msgr2 addresses.
Whitelist tests that set mon_bind_addrvec=false or mon_bind_msgr2=false.
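On an upgraded cluster the alert can typically be cleared by asking
the mons to bind to msgr2, using the command added in this merge:
$ ceph mon enable-msgr2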
Signed-off-by: Sage Weil <sage@redhat.com>