doc/governance: Adam King
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This PR repairs a link to a PDF. The link was broken
when the PDF assets were moved during the restructure
of the ceph.io website in 2021.
Signed-off-by: Zac Dover <zac.dover@gmail.com>
Enable or disable all telemetry channels at once with:
ceph telemetry enable channel all
ceph telemetry disable channel all
Signed-off-by: Yaarit Hatuka <yaarit@redhat.com>
The STATUS column now indicates whether a collection is being reported
and, if it is not, the reason why (either the user is not opted in to
the collection, or its channel is off).
Also, the ENROLLED and DEFAULT columns were removed because of the
confusion they may cause.
If a user is not opted in to certain collections, a message listing the
missing collections appears above the table:
New collections are available:
['basic_base', 'basic_mds_metadata', 'crash_base', 'device_base',
'ident_base', 'perf_perf']
Run `ceph telemetry on` to opt-in to these collections.
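For illustration only (the listing command name below is an assumption,
not part of the quoted message), the review-and-opt-in flow would look
roughly like:
ceph telemetry collection ls   # review collections and their STATUS
ceph telemetry on              # opt in to the newly available collections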
Signed-off-by: Yaarit Hatuka <yaarit@redhat.com>
Fix issues with PromQL expressions and vector matching with the
`ceph_disk_occupation` metric.
As it turns out, `ceph_disk_occupation` cannot simply be used as
expected: there are edge cases for users who run several OSDs on a
single disk. This leads to issues that cannot be solved with PromQL
alone (many-to-many vector matching errors). In these rare cases the
data simply differs from what the queries expect.
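To sketch the failure mode (the node_exporter metric name here is only
an illustration, not part of this change): with two OSDs on one disk,
`ceph_disk_occupation` exports two series for the same (device,
instance) pair, so a join such as
rate(node_disk_written_bytes_total[5m])
  * on(device, instance) group_left(ceph_daemon) ceph_disk_occupation
is rejected by Prometheus with a many-to-many matching error, because
the "one" side of the `group_left` match is no longer unique.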
I have not found a PromQL-only solution to this issue. What we
basically need is the following:
1. Match on the labels `host` and `instance` to get one or more OSD
names from a metadata metric (`ceph_disk_occupation`), to let a user
know which OSDs belong to which disk.
2. Match on the label `ceph_daemon` of the `ceph_disk_occupation`
metric, in which case the value of `ceph_daemon` must not refer to more
than a single OSD. The exact opposite of requirement 1.
Since both operations are currently performed on a single metric, and
there is no way to satisfy both requirements with a single metric, this
commit extends the metric by providing a similar metric that satisfies
one of the requirements. This enables queries to distinguish between a
vector match whose result is shown to the user as a string (where
`ceph_daemon` may be `osd.1` or `osd.1+osd.2`) and a vector match that
requires a single `ceph_daemon` in the matching condition.
Although the `ceph_daemon` label is used on a variety of daemons, only
OSDs seem to be affected by this issue (and only when more than one OSD
runs on a single disk). This means that only the `ceph_disk_occupation`
metadata metric seems to need to be extended and provided as two
metrics.
`ceph_disk_occupation` is supposed to be used for matching the
`ceph_daemon` label value:
foo * on(ceph_daemon) group_left ceph_disk_occupation
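For example (the per-OSD metric name below is an assumption used only
for illustration), the device label can be attached to a per-OSD series
with:
ceph_osd_commit_latency_ms
  * on(ceph_daemon) group_left(device) ceph_disk_occupation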
`ceph_disk_occupation_human` is supposed to be used for anything where
the resulting data is displayed for human consumption (graphs, alert
messages, etc.):
foo * on(device,instance)
group_left(ceph_daemon) ceph_disk_occupation_human
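As a concrete sketch (the node_exporter metric name is again only an
illustration), a per-disk panel could use:
rate(node_disk_read_bytes_total[5m])
  * on(device, instance) group_left(ceph_daemon) ceph_disk_occupation_human
so that the legend can display a human-readable value such as `osd.1`
or `osd.1+osd.2` for each device.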
Fixes: https://tracker.ceph.com/issues/52974
Signed-off-by: Patrick Seidensal <pseidensal@suse.com>
rgw: Add rgw rate limiting per user and per bucket
Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: Yuval Lifshitz <ylifshit@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
mgr/orchestrator: add filtering and count option for orch host ls
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sebastian Wagner <sewagner@redhat.com>
* refs/pull/44054/head:
doc/rados/operations: document pg_num_max
mgr: set max of 32 pgs for .mgr pool
mgr/dashboard: expect pg_num_max property for pools
mon/OSDMonitor: add option --pg-num_max arg for create pool
mon/OSDMonitor: disallow setting pg_num < min or > max
mgr/pg_autoscaler: apply pg_num_max
mon: add pg_num_max pool property
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>