doc/rados/operations/pg-states: fix PG state names

Change the PG state names to match osd/osd_types.cc, since those are the names that ceph -s and the prometheus exporter present to users.

Signed-off-by: Jan Fajerski <jfajerski@suse.com>
commit 7f8b40fc46 (parent 60e8a63fdc)
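The lowercase names below are what a running cluster actually reports. As a rough check, the following sketch tallies the state names returned by ``ceph pg dump pgs_brief --format=json``. It is only a sketch: it assumes the ``ceph`` CLI is on the PATH and that the JSON output is either a list of per-PG records or a dict holding them under a ``pg_stats`` key, since the exact layout differs between releases.

.. code-block:: python

    #!/usr/bin/env python3
    # Tally PG states as the cluster reports them, so the names can be
    # compared against this document. Assumes the `ceph` CLI is available;
    # the JSON layout of `pg dump` varies between Ceph releases.
    import json
    import subprocess
    from collections import Counter

    def pg_state_counts():
        out = subprocess.check_output(
            ["ceph", "pg", "dump", "pgs_brief", "--format=json"])
        data = json.loads(out)
        # Newer releases wrap the per-PG records; older ones return a bare list.
        pgs = data.get("pg_stats", data) if isinstance(data, dict) else data
        counts = Counter()
        for pg in pgs:
            # A PG can be in several states at once, joined with '+',
            # e.g. "active+clean" or "active+recovery_wait+degraded".
            for state in pg["state"].split("+"):
                counts[state] += 1
        return counts

    if __name__ == "__main__":
        for state, n in sorted(pg_state_counts().items()):
            print(f"{n:6d} {state}")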
@@ -7,102 +7,102 @@ Ceph will report on the status of the placement groups. A placement group has
one or more states. The optimum state for placement groups in the placement group
map is ``active + clean``.

-*Creating*
+*creating*
  Ceph is still creating the placement group.

-*Activating*
+*activating*
  The placement group is peered but not yet active.

-*Active*
+*active*
  Ceph will process requests to the placement group.

-*Clean*
+*clean*
  Ceph replicated all objects in the placement group the correct number of times.

-*Down*
+*down*
  A replica with necessary data is down, so the placement group is offline.

-*Scrubbing*
+*scrubbing*
  Ceph is checking the placement group metadata for inconsistencies.

-*Deep*
+*deep*
  Ceph is checking the placement group data against stored checksums.

-*Degraded*
+*degraded*
  Ceph has not replicated some objects in the placement group the correct number of times yet.

-*Inconsistent*
+*inconsistent*
  Ceph detects inconsistencies in one or more replicas of an object in the placement group
  (e.g. objects are the wrong size, objects are missing from one replica *after* recovery finished, etc.).

-*Peering*
+*peering*
  The placement group is undergoing the peering process.

-*Repair*
+*repair*
  Ceph is checking the placement group and repairing any inconsistencies it finds (if possible).

-*Recovering*
+*recovering*
  Ceph is migrating/synchronizing objects and their replicas.

-*Forced-Recovery*
+*forced_recovery*
  High recovery priority of that PG is enforced by the user.

-*Recovery-wait*
+*recovery_wait*
  The placement group is waiting in line to start recovery.

-*Recovery-toofull*
+*recovery_toofull*
  A recovery operation is waiting because the destination OSD is over its
  full ratio.

-*Recovery-unfound*
+*recovery_unfound*
  Recovery stopped due to unfound objects.

-*Backfilling*
+*backfilling*
  Ceph is scanning and synchronizing the entire contents of a placement group
  instead of inferring what contents need to be synchronized from the logs of
  recent operations. Backfill is a special case of recovery.

-*Forced-Backfill*
+*forced_backfill*
  High backfill priority of that PG is enforced by the user.

-*Backfill-wait*
+*backfill_wait*
  The placement group is waiting in line to start backfill.

-*Backfill-toofull*
+*backfill_toofull*
  A backfill operation is waiting because the destination OSD is over its
  full ratio.

-*Backfill-unfound*
+*backfill_unfound*
  Backfill stopped due to unfound objects.

-*Incomplete*
+*incomplete*
  Ceph detects that a placement group is missing information about
  writes that may have occurred, or does not have any healthy
  copies. If you see this state, try to start any failed OSDs that may
  contain the needed information. In the case of an erasure coded pool,
  temporarily reducing min_size may allow recovery.

-*Stale*
+*stale*
  The placement group is in an unknown state - the monitors have not received
  an update for it since the placement group mapping changed.

-*Remapped*
+*remapped*
  The placement group is temporarily mapped to a different set of OSDs from what
  CRUSH specified.

-*Undersized*
-  The placement group fewer copies than the configured pool replication level.
+*undersized*
+  The placement group has fewer copies than the configured pool replication level.

-*Peered*
+*peered*
  The placement group has peered, but cannot serve client IO due to not having
  enough copies to reach the pool's configured min_size parameter. Recovery
  may occur in this state, so the PG may heal up to min_size eventually.

-*Snaptrim*
+*snaptrim*
  Trimming snaps.

-*Snaptrim-wait*
+*snaptrim_wait*
  Queued to trim snaps.

-*Snaptrim-error*
+*snaptrim_error*
  Error stopped trimming snaps.
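For reference, the canonical strings come from ``pg_state_string()`` in ``osd/osd_types.cc``, where each PG state is a bit flag and the reported string is the '+'-joined list of lowercase names. The sketch below only illustrates that idea; the flag values and the subset of names are made up for the example and are not Ceph's actual definitions.

.. code-block:: python

    # Illustrative sketch only, not the actual Ceph implementation: Ceph keeps
    # PG states as bit flags and renders the lowercase names joined with '+',
    # which is the form shown above, in `ceph -s`, and in the prometheus
    # exporter. The flag values below are invented for the example.
    PG_STATE_NAMES = {
        1 << 0: "creating",
        1 << 1: "active",
        1 << 2: "clean",
        1 << 3: "degraded",
        1 << 4: "recovery_wait",
        1 << 5: "backfill_toofull",
    }

    def pg_state_string(state_bits: int) -> str:
        """Render a bitmask of PG states as a '+'-joined lowercase string."""
        names = [name for bit, name in sorted(PG_STATE_NAMES.items())
                 if state_bits & bit]
        return "+".join(names) if names else "unknown"

    # Example: a PG that is active, degraded and waiting for recovery.
    print(pg_state_string((1 << 1) | (1 << 3) | (1 << 4)))
    # -> active+degraded+recovery_wait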