mirror of
https://github.com/ceph/ceph
synced 2024-12-17 17:05:42 +00:00
documentation: explain ceph osd reweight vs crush weight
Using the wording from Gregory Farnum at
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-June/040961.html

Signed-off-by: Loic Dachary <loic-201408@dachary.org>
commit 639c9818fe
parent 9442336f92
@@ -203,9 +203,16 @@ resending pending requests. ::

 	ceph osd pause
 	ceph osd unpause

-Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the same weight will receive
-roughly the same number of I/O requests and store approximately the
-same amount of data. ::
+Set the weight of ``{osd-num}`` to ``{weight}``. Two OSDs with the
+same weight will receive roughly the same number of I/O requests and
+store approximately the same amount of data. ``ceph osd reweight``
+sets an override weight on the OSD. This value is in the range 0 to 1,
+and forces CRUSH to re-place (1-weight) of the data that would
+otherwise live on this drive. It does not change the weights assigned
+to the buckets above the OSD in the crush map, and is a corrective
+measure in case the normal CRUSH distribution isn't working out quite
+right. For instance, if one of your OSDs is at 90% and the others are
+at 50%, you could reduce this weight to try and compensate for it. ::

 	ceph osd reweight {osd-num} {weight}
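The "(1-weight)" behavior described in the added paragraph can be sketched as a small calculation. This is only an illustration of the arithmetic the doc states, not Ceph code; `fraction_replaced` is a hypothetical helper name.

```python
def fraction_replaced(override_weight: float) -> float:
    """Fraction of an OSD's would-be data that CRUSH re-places
    for a given override weight, per the doc text: (1 - weight).
    The override weight must lie in the range 0 to 1."""
    if not 0.0 <= override_weight <= 1.0:
        raise ValueError("override weight must be in [0, 1]")
    return 1.0 - override_weight

# e.g. lowering an overfull OSD's override weight to 0.8 asks CRUSH
# to re-place roughly 20% of the data that would otherwise land on it
print(f"{fraction_replaced(0.8):.0%}")  # prints "20%"
```

Note this override is separate from the CRUSH bucket weights above the OSD, which `ceph osd reweight` leaves untouched.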