ceph/doc/rados/operations
Sage Weil f755e353e8 os/bluestore: separate omap per-pool vs per-pg alerts
Currently the health alert raised does not match the docs, and the docs
do not describe what the health alert indicates.

Octopus added per-pool omap storage.  This improves space accounting
and reporting.

Pacific added per-pg omap storage (object hash in key).  This speeds up
PG removal.

Separate everything out into two distinct alerts raised from bluestore
and surfaced as health alerts, each with a corresponding config option
to disable it, and update the docs accordingly.

Also update the fsck options for warn vs error, and raise separate
errors for the per-pg and per-pool cases.
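As a rough sketch of how the split surfaces to operators (the option
names bluestore_warn_on_no_per_pg_omap, bluestore_fsck_error_on_no_per_pool_omap
and the health code BLUESTORE_NO_PER_PG_OMAP are assumed from this
description; verify against the shipped health-checks.rst):

    # silence the per-pool omap alert (OSDs created before Octopus
    # that have not yet been converted)
    ceph config set osd bluestore_warn_on_no_per_pool_omap false

    # silence the per-pg omap alert (OSDs created before Pacific);
    # option name assumed, introduced by this change
    ceph config set osd bluestore_warn_on_no_per_pg_omap false

    # fsck/repair severity for the legacy per-pool format is controlled
    # separately (option name assumed)
    ceph config set osd bluestore_fsck_error_on_no_per_pool_omap true

Alternatively, converting an OSD in place clears the alert rather than
muting it, e.g. with the OSD stopped:

    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>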

Signed-off-by: Sage Weil <sage@newdream.net>
2021-03-24 07:34:39 -05:00
add-or-rm-mons.rst doc/cephadm: Restoring the MON quorum 2021-02-16 13:48:09 +01:00
add-or-rm-osds.rst doc/rados/operations: Remove upstart 2021-02-15 13:13:36 +01:00
balancer.rst
bluestore-migration.rst
cache-tiering.rst
change-mon-elections.rst
control.rst
crush-map-edits.rst
crush-map.rst
data-placement.rst
devices.rst
erasure-code-clay.rst
erasure-code-isa.rst
erasure-code-jerasure.rst
erasure-code-lrc.rst
erasure-code-profile.rst
erasure-code-shec.rst
erasure-code.rst
health-checks.rst os/bluestore: separate omap per-pool vs per-pg alerts 2021-03-24 07:34:39 -05:00
index.rst
monitoring-osd-pg.rst doc: fix and improve the explanations of up and acting osd sets 2020-12-27 18:33:36 +01:00
monitoring.rst mon/PGMap: align to same side when output ceph df / ceph df detail 2021-01-04 19:59:16 +08:00
operating.rst doc/rados/operations: Remove upstart 2021-02-15 13:13:36 +01:00
pg-concepts.rst
pg-repair.rst
pg-states.rst
placement-groups.rst
pools.rst
stretch-mode.rst doc/rados: s/realy/really/ 2021-02-04 01:04:24 +10:00
upmap.rst
user-management.rst mon: define simple-rados-client-with-blocklist profile 2021-03-19 08:52:55 -07:00