doc/rados/configuration: update to be in sync with ConfUtils changes
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Brad Hubbard <bhubbard@redhat.com>
osd: add hdd and ssd variants for osd_recovery_max_active
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Mark Nelson <mnelson@redhat.com>
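As a hedged illustration (values are placeholders, not recommendations), the new device-specific variants can be set at runtime alongside the generic option:

    # tune recovery concurrency separately for HDD- and SSD-backed OSDs
    ceph config set osd osd_recovery_max_active_hdd 3
    ceph config set osd osd_recovery_max_active_ssd 10
    # check what a particular OSD ends up using
    ceph config show osd.0 | grep osd_recovery_max_active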
Treat backfill_toofull as a warning condition because it can resolve itself.
Includes a test case for PG_BACKFILL_FULL
Includes a test case for recovery_toofull / PG_RECOVERY_FULL
Fixes: https://tracker.ceph.com/issues/39555
Signed-off-by: David Zafman <dzafman@redhat.com>
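As a hedged sketch of how an operator might respond to the resulting PG_BACKFILL_FULL warning (the ratio value is a placeholder):

    # see which OSDs/PGs are implicated and how full they are
    ceph health detail
    ceph osd df
    # optionally raise the backfillfull threshold while adding capacity
    ceph osd set-backfillfull-ratio 0.92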
- be specific about stopped OSDs
- add missing '--no-mon-config' option
- fix indentation of the here-document delimiting identifier
- use $host variable in for loop
Signed-off-by: Hannes von Haugwitz <hannes@vonhaugwitz.com>
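These items refer to the monitor-store rebuild loop in the troubleshooting guide; a simplified, hedged sketch of that pattern (host names and paths are placeholders, and the real procedure also rsyncs the accumulated store between hosts) looks roughly like this:

    ms=/tmp/mon-store               # placeholder path, created on every OSD host
    hosts="osd-host-1 osd-host-2"   # placeholder list of hosts whose OSDs are stopped
    for host in $hosts; do
      ssh root@$host <<EOF
        mkdir -p $ms
        for osd in /var/lib/ceph/osd/ceph-*; do
          ceph-objectstore-tool --data-path \$osd --no-mon-config \
              --op update-mon-db --mon-store-path $ms
        done
    EOF
    done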
to use strict priority ordering.
The new "mclock_opclass/mclock_client" queue basically prioritizes
operations based on the class they belong to. The priority property
of an operation, if lower than a specific value (64, by default),
will get ignored and hence all operations from the same class will
be treated fairly in a FIFO fashion (but still limited by the total
IOPS or bandwidth available for the corresponding class).
To reduce the impact of performance, a more general strategy would be
enforcing some limitations on the IOPS or bandwidth for the background
recovery (or backfill) operation class. However, this way we'll end up
blocking client operations too if they are currently blocked by some
degraded objects which need to be recovered first.
We hereby grant recovery operations of this kind a higher priority
to force them to use strict priority ordering, which should still
be of significance once we switch to the new "mclock_opclass/mclock_client"
queue.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
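As a hedged example, the queue implementation and the strict-priority cut-off are selected through existing OSD options; my understanding is that the "low" cut-off corresponds to the priority value 64 mentioned above, but defaults differ between releases, and the queue choice only takes effect after an OSD restart:

    # illustrative only: switch to the class-based mclock queue
    ceph config set osd osd_op_queue mclock_opclass
    # ops below the cut-off stay in the fair queue, ops above it use strict ordering
    ceph config set osd osd_op_queue_cut_off low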
The Luminous release notes tell users to ensure that rbd clients have
the ability to blacklist other client users; this is provided by
"profile rbd", which this change now documents explicitly in the user
management documentation.
Signed-off-by: Matthew Vernon <mv3@sanger.ac.uk>
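For example, a client created with the rbd profiles (client id and pool name are placeholders) receives the blacklist capability without any manual cap editing:

    ceph auth get-or-create client.rbd-user \
        mon 'profile rbd' \
        osd 'profile rbd pool=rbd-pool'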
Use the OSD_POOL_PRIORITY_MAX and OSD_POOL_PRIORITY_MIN constants
Scale legacy priorities if they exceed the maximum
Signed-off-by: David Zafman <dzafman@redhat.com>
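Hedged illustration: per-pool recovery priorities continue to be set with the existing command, and values outside the new bounds are scaled/clamped rather than used verbatim (pool name and value are placeholders):

    # give one pool's recovery a higher priority relative to other pools
    ceph osd pool set mypool recovery_priority 5
    ceph osd pool get mypool recovery_priority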
Add the missing `max_change`, `max_osds`, and `--no-increasing` parameters to `reweight-by-utilization` and `test-reweight-by-utilization`. Minor adjustments to wording.
Signed-off-by: Anthony D'Atri <anthony.datri@gmail.com>
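With those parameters documented, a dry run followed by the real adjustment might look like this (numbers are illustrative):

    # oload=120, max_change=0.05, max_osds=4, never increase weights
    ceph osd test-reweight-by-utilization 120 .05 4 --no-increasing
    ceph osd reweight-by-utilization 120 .05 4 --no-increasing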
osd_pool_default_pg_autoscale_mode is the right parameter for setting
the default placement-group autoscale mode.
Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
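A hedged example of using it (mode values are off, warn, or on; pool name is a placeholder):

    # default autoscale mode for pools created from now on
    ceph config set global osd_pool_default_pg_autoscale_mode on
    # existing pools keep their per-pool setting, adjustable separately
    ceph osd pool set mypool pg_autoscale_mode on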
The current documentation for the MANY_OBJECTS_PER_PG warning
states that the threshold can be raised to silence the health
warning by adjusting the mon_pg_warn_max_object_skew config
option on the monitors. This has not been true since (at least)
Luminous: the option has to be adjusted on the managers instead.
I encountered this problem and spent quite some time injecting
mon_pg_warn_max_object_skew into the monitors, adding the option
to ceph.conf, and restarting the monitors several times, but the
warning would not go away. I had to download the code to see what
was happening, and I found this:
$ git grep -A 3 mon_pg_warn_max_object_skew src/common/options.cc
src/common/options.cc:1480: Option("mon_pg_warn_max_object_skew", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
src/common/options.cc-1481- .set_default(10.0)
src/common/options.cc-1482- .set_description("max skew few average in objects per pg")
src/common/options.cc-1483- .add_service("mgr"),
After I restarted the ceph-mgr service, the warning went away.
Signed-off-by: Vangelis Tasoulas <vangelis@tasoulas.net>
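So the fix is to apply the option to the mgr, e.g. (threshold value is illustrative; the restart command depends on your deployment):

    # the option is consumed by ceph-mgr, not ceph-mon
    ceph config set mgr mon_pg_warn_max_object_skew 20
    systemctl restart ceph-mgr.target    # on the active mgr host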
Added a note to the dashboard documentation about the requirement
for the latest ceph-iscsi version 3. Added some doc references
and replaced some URLs in the iSCSI docs with reST labels.
Signed-off-by: Lenz Grimmer <lgrimmer@suse.com>
config-ref: add a note on current scheduler settings.
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
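For reference, a hedged example of the RGW scheduler settings the note likely covers (I am assuming the throttler/dmclock scheduler options; values are illustrative):

    # cap concurrent requests admitted by the radosgw request scheduler
    ceph config set global rgw_max_concurrent_requests 1024
    # scheduler implementation; "throttler" is the simple default
    ceph config set global rgw_scheduler_type throttler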