Merge PR #22433 into master

* refs/pull/22433/head:
	common/config: Add description to (near)full ratio settings

Reviewed-by: David Zafman <dzafman@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>

commit 6090545cc7
Author: Sage Weil <sage@redhat.com>
Date:   2018-08-06 08:57:03 -05:00
3 changed files with 25 additions and 5 deletions

File 1 of 3

@@ -520,6 +520,9 @@ you expect to fail to arrive at a reasonable full ratio. Repeat the foregoing
 process with a higher number of OSD failures (e.g., a rack of OSDs) to arrive at
 a reasonable number for a near full ratio.

+The following settings only apply on cluster creation and are then stored in
+the OSDMap.
+
 .. code-block:: ini

 	[global]
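
Since the values recorded in the OSDMap (rather than any later edit to ``ceph.conf``) are what the cluster enforces, a quick way to confirm them is to filter the ratios out of ``ceph osd dump`` (a minimal sketch; the exact field layout varies by release)::

    # the OSDMap carries the authoritative ratios after cluster creation
    ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'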
@@ -559,6 +562,10 @@ a reasonable number for a near full ratio.
 .. tip:: If some OSDs are nearfull, but others have plenty of capacity, you
          may have a problem with the CRUSH weight for the nearfull OSDs.

+.. tip:: These settings only apply during cluster creation. Afterwards they need
+         to be changed in the OSDMap using ``ceph osd set-nearfull-ratio`` and
+         ``ceph osd set-full-ratio``.
+
 .. index:: heartbeat

 Heartbeat
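
To illustrate the tip above, a minimal sketch of adjusting the ratios on a running cluster (the values here are just the documented defaults, not a recommendation)::

    ceph osd set-nearfull-ratio 0.85
    ceph osd set-full-ratio 0.95
    # confirm the OSDMap picked up the change
    ceph osd dump | grep ratio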

File 2 of 3

@@ -208,16 +208,29 @@ is getting near its full ratio. The ``mon osd full ratio`` defaults to
 ``0.95``, or 95% of capacity before it stops clients from writing data.
 The ``mon osd backfillfull ratio`` defaults to ``0.90``, or 90% of
 capacity when it blocks backfills from starting. The
-``mon osd nearfull ratio`` defaults to ``0.85``, or 85% of capacity
+OSD nearfull ratio defaults to ``0.85``, or 85% of capacity
 when it generates a health warning.
+
+Changing it can be done using:
+
+::
+
+    ceph osd set-nearfull-ratio <float[0.0-1.0]>

 Full cluster issues usually arise when testing how Ceph handles an OSD
 failure on a small cluster. When one node has a high percentage of the
 cluster's data, the cluster can easily eclipse its nearfull and full ratio
 immediately. If you are testing how Ceph reacts to OSD failures on a small
 cluster, you should leave ample free disk space and consider temporarily
-lowering the ``mon osd full ratio``, ``mon osd backfillfull ratio`` and
-``mon osd nearfull ratio``.
+lowering the OSD ``full ratio``, OSD ``backfillfull ratio`` and
+OSD ``nearfull ratio`` using these commands:
+
+::
+
+    ceph osd set-nearfull-ratio <float[0.0-1.0]>
+    ceph osd set-full-ratio <float[0.0-1.0]>
+    ceph osd set-backfillfull-ratio <float[0.0-1.0]>

 Full ``ceph-osds`` will be reported by ``ceph health``::
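
As a sketch of the small-cluster testing workflow described above (the lowered values are illustrative assumptions; the restore step uses the defaults documented in this section)::

    # lower the thresholds before inducing the OSD failure, lowest first
    ceph osd set-nearfull-ratio 0.60
    ceph osd set-backfillfull-ratio 0.65
    ceph osd set-full-ratio 0.70

    # ... run the failure test ...

    # restore the documented defaults, highest first
    ceph osd set-full-ratio 0.95
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-nearfull-ratio 0.85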

File 3 of 3

@@ -1449,7 +1449,7 @@ std::vector<Option> get_global_options() {
     .set_default(.95)
     .set_flag(Option::FLAG_NO_MON_UPDATE)
     .set_flag(Option::FLAG_CLUSTER_CREATE)
-    .set_description(""),
+    .set_description("full ratio of OSDs to be set during initial creation of the cluster"),

     Option("mon_osd_backfillfull_ratio", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
     .set_default(.90)
@@ -1461,7 +1461,7 @@ std::vector<Option> get_global_options() {
     .set_default(.85)
     .set_flag(Option::FLAG_NO_MON_UPDATE)
     .set_flag(Option::FLAG_CLUSTER_CREATE)
-    .set_description(""),
+    .set_description("nearfull ratio for OSDs to be set during initial creation of cluster"),

     Option("mon_osd_initial_require_min_compat_client", Option::TYPE_STR, Option::LEVEL_ADVANCED)
     .set_default("jewel")
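
Once these description strings are populated, they surface in the runtime help for each option; for example (assuming a release that ships the ``ceph config help`` command)::

    ceph config help mon_osd_full_ratio
    ceph config help mon_osd_nearfull_ratio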