========================
Placement Group States
========================

When checking a cluster's status (e.g., running ``ceph -w`` or ``ceph -s``),
Ceph will report on the status of the placement groups. A placement group has
one or more states. The optimum state for placement groups in the placement
group map is ``active + clean``.
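
In addition to ``ceph -s``, a one-line summary of the placement group states
can be obtained with ``ceph pg stat``, and ``ceph pg dump`` prints the state
of every individual placement group::

    ceph pg stat
    ceph pg dump
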
*Creating*
  Ceph is still creating the placement group.

*Active*
  Ceph will process requests to the placement group.

*Clean*
  Ceph replicated all objects in the placement group the correct number of
  times.

*Down*
  A replica with necessary data is down, so the placement group is offline.
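
  To find out why a placement group is down (for example, which OSDs it is
  waiting for), you can query it; ``2.5`` below is a placeholder placement
  group ID::

      ceph pg 2.5 query

  The peering/recovery section of the output shows which OSDs, if any, are
  blocking the placement group.
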
*Scrubbing*
  Ceph is checking the placement group metadata for inconsistencies.

*Deep*
  Ceph is checking the placement group data against stored checksums.
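
  Scrubs are normally scheduled automatically, but both kinds can also be
  requested by hand; ``2.5`` is a placeholder placement group ID::

      ceph pg scrub 2.5
      ceph pg deep-scrub 2.5
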
*Degraded*
  Ceph has not yet replicated some objects in the placement group the correct
  number of times.

*Inconsistent*
  Ceph detects inconsistencies in one or more replicas of an object in the
  placement group (e.g. objects are the wrong size, objects are missing from
  one replica *after* recovery finished, etc.).
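
  On releases that support it, the inconsistencies Ceph found in a placement
  group can be listed; ``2.5`` is a placeholder placement group ID::

      rados list-inconsistent-obj 2.5 --format=json-pretty
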
*Peering*
  The placement group is undergoing the peering process.

*Repair*
  Ceph is checking the placement group and repairing any inconsistencies it
  finds (if possible).
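
  A repair can also be triggered by hand; ``2.5`` is a placeholder placement
  group ID::

      ceph pg repair 2.5
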
*Recovering*
  Ceph is migrating/synchronizing objects and their replicas.

*Forced-Recovery*
  Recovery of this placement group is running at high priority because a user
  requested it.
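
  On releases that support it, forcing and cancelling high-priority recovery
  looks like this; ``2.5`` is a placeholder placement group ID::

      ceph pg force-recovery 2.5
      ceph pg cancel-force-recovery 2.5
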
*Backfill*
  Ceph is scanning and synchronizing the entire contents of a placement group
  instead of inferring what contents need to be synchronized from the logs of
  recent operations. *Backfill* is a special case of recovery.

*Forced-Backfill*
  Backfill of this placement group is running at high priority because a user
  requested it.
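
  The corresponding commands for backfill (again with ``2.5`` standing in for
  a placement group ID) are::

      ceph pg force-backfill 2.5
      ceph pg cancel-force-backfill 2.5
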
*Wait-backfill*
  The placement group is waiting in line to start backfill.

*Backfill-toofull*
  A backfill operation is waiting because the destination OSD is over its
  full ratio.
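
  To see how close each OSD is to its full ratio, check OSD utilization::

      ceph osd df
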
*Incomplete*
  Ceph detects that a placement group is missing information about writes
  that may have occurred, or does not have any healthy copies. If you see
  this state, try to start any failed OSDs that may contain the needed
  information. In the case of an erasure coded pool, temporarily reducing
  ``min_size`` may allow recovery.
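
  ``min_size`` is a per-pool setting, so reducing it (and restoring it once
  recovery completes) is done with ``ceph osd pool set``; ``ecpool`` and the
  value are placeholders::

      ceph osd pool set ecpool min_size 4
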
*Stale*
  The placement group is in an unknown state; the monitors have not received
  an update for it since the placement group mapping changed.
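
  Stale placement groups can be listed directly::

      ceph pg dump_stuck stale
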
*Remapped*
  The placement group is temporarily mapped to a different set of OSDs from
  what CRUSH specified.
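
  The mapping of a placement group, including its up and acting OSD sets, can
  be shown with ``ceph pg map``; ``2.5`` is a placeholder placement group
  ID::

      ceph pg map 2.5
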
*Undersized*
  The placement group has fewer copies than the configured pool replication
  level.

*Peered*
  The placement group has peered, but cannot serve client IO due to not
  having enough copies to reach the pool's configured ``min_size`` parameter.
  Recovery may occur in this state, so the placement group may eventually
  heal up to ``min_size``.
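
  A pool's ``min_size`` can be inspected and, if necessary, adjusted;
  ``mypool`` is a placeholder pool name::

      ceph osd pool get mypool min_size
      ceph osd pool set mypool min_size 2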