- filter out mons from other clusters
- fix parsing of mon name from role
Fixes: http://tracker.ceph.com/issues/38115
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Notifications and alerts now show an application icon that gives a hint
about their origin.
Fixes: https://tracker.ceph.com/issues/37950
Signed-off-by: Stephan Müller <smueller@suse.com>
It is now possible to style values inside the KV-table based on their
content.
Fixes: https://tracker.ceph.com/issues/37951
Signed-off-by: Stephan Müller <smueller@suse.com>
The backend is now capable of receiving alert notifications from
the Prometheus alertmanager, and it can fetch all alerts, with all kinds
of filter parameters, from the alertmanager API.
In the frontend, Prometheus alerts can be found under "Cluster > Alerts".
Incoming notifications can be seen as usual in the notifications popover.
To clarify:
Prometheus alerts are fetched from the alertmanager API.
Prometheus alert notifications are sent from the alertmanager to the
backend receiver. An alert notification can contain multiple alerts, but
these alerts differ from the Prometheus alerts above.
To make that distinction clear, I've added some models and services.
If either of these two alert sources reports changes, the user will be
notified.
The documentation explains how to configure the alertmanager to use the
dashboard receiver and how to configure the dashboard to use the
alertmanager API. Further, it explains where to find the alerts and
what to expect once they are configured and an alert fires.
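For illustration, a minimal sketch of a notification POSTed to the
dashboard receiver; the payload follows the standard alertmanager
webhook format, but the receiver URL path below is an assumption, not
taken from this change:

    # Hypothetical example: send an alertmanager-style notification to
    # the dashboard's receiver endpoint (the URL path is an assumption).
    import json
    import urllib.request

    notification = {
        "status": "firing",
        "alerts": [  # one notification can carry multiple alerts
            {
                "status": "firing",
                "labels": {"alertname": "MonDown", "severity": "critical"},
                "annotations": {"summary": "a monitor is down"},
            },
        ],
    }

    req = urllib.request.Request(
        "http://dashboard.example.com/api/prometheus/notifications",
        data=json.dumps(notification).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)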
Fixes: https://tracker.ceph.com/issues/36721
Signed-off-by: Stephan Müller <smueller@suse.com>
* For consistency:
  - Set 'Clean' status color to the 'HEALTH_OK' color
    (Cluster Status card).
  - Set 'Warning' status color to the 'HEALTH_WARN' color.
  - 'Working' (blue) and 'Unknown' (red) are kept due to previous
    consensus about these complementary colors in doughnut/pie charts.
* Renamed Health Pie colors for the sake of clarity.
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
A new Ansible playbook now allows retrieving the storage device information produced by ceph-volume.
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
* When the creation of the cluster is delegated to vstart_runner.py
  (--create or --create-target-only), the number of MGRs required
  is calculated by the script, so tests are no longer skipped due to
  an insufficient number of MGRs (see the sketch after this list).
* Additionally, this issue is no longer reproducible:
Fixes: https://tracker.ceph.com/issues/37964
* Fixed typo: TEUTHOLOFY_PY_REQS
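A minimal sketch of the MGR-count calculation idea in Python; the
function and attribute names here are hypothetical illustrations, not
the actual vstart_runner.py code:

    # Hypothetical sketch: start as many MGRs as the most demanding
    # test case declares it needs, so no test is skipped for lack of
    # MGRs. MGRS_REQUIRED and the helper name are illustrative only.
    def required_mgrs(test_cases, default=1):
        return max((getattr(t, "MGRS_REQUIRED", default)
                    for t in test_cases), default=default)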
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
doc: Updated feature list and overview in dashboard.rst
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Ricardo Marques <rimarques@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Group the two buttons 'Set Cluster-wide Flags' and 'Set Cluster-wide
Recovery Priority' together into one button menu.
Fixes: http://tracker.ceph.com/issues/37380
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
doc/orchestrator: Aligned documentation with specification
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Reviewed-by: Noah Watkins <noahwatkins@gmail.com>
This is to prevent unsustainable situations where a client has so many
outstanding caps that a linear traversal/operation on the session's caps
takes an unacceptable amount of time.
Fixes: http://tracker.ceph.com/issues/38022
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
As with trimming, use DecayCounters to throttle the number of caps we recall,
both globally and per-session.
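A minimal sketch of the decay-counter throttling idea in Python; this is
a simplified model, not Ceph's actual DecayCounter API, and the
half-life and threshold values are illustrative:

    import math
    import time

    class DecayCounter:
        """Simplified exponential-decay counter (models the concept)."""
        def __init__(self, half_life):
            self.k = math.log(0.5) / half_life  # decay constant
            self.value = 0.0
            self.last = time.monotonic()

        def get(self):
            now = time.monotonic()
            self.value *= math.exp(self.k * (now - self.last))
            self.last = now
            return self.value

        def hit(self, n=1.0):
            self.value = self.get() + n

    # Throttle: recall fewer caps once the recent recall rate is high.
    recall_throttle = DecayCounter(half_life=60.0)
    RECALL_MAX = 10000.0  # illustrative threshold

    def recall_caps(wanted):
        allowed = max(0, int(RECALL_MAX - recall_throttle.get()))
        n = min(wanted, allowed)
        recall_throttle.hit(n)
        return n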
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This is necessary when the MDS cache size decreases by a significant
amount, for example when stopping a large MDS or when the operator makes
a large reduction in the cache size.
Fixes: http://tracker.ceph.com/issues/37723
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
If we try to start up the objectstore, we may make writable changes to
(say) rocksdb that are not backward compatible. This happens, for
example, if you start a Mimic OSD: even if the compatset checks fail,
rocksdb may already have written something that is not backward
compatible.
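For illustration, a sketch of the ordering this fix implies; all names
here are hypothetical and only show the "verify compatibility before any
writable open" pattern, not the actual OSD code:

    # Hypothetical sketch: probe the store's feature flags read-only
    # before opening it writable, so an incompatible store (e.g. one
    # touched by a newer release) is never modified. Illustrative only.
    SUPPORTED_FEATURES = {"base", "snapmapper2"}  # made-up feature names

    def read_features_readonly(path):
        with open(path) as f:            # read-only: nothing is written
            return set(f.read().split())

    def open_store_writable(path):
        missing = read_features_readonly(path) - SUPPORTED_FEATURES
        if missing:
            raise RuntimeError("unsupported features %s; refusing to open"
                               % sorted(missing))
        return open(path, "r+")          # only now may writes happen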
Fixes: http://tracker.ceph.com/issues/38076
Signed-off-by: Sage Weil <sage@redhat.com>
If cls_log_trim() returns 0, it may have stopped after 1000 entries
before trimming all the way to to_marker. Only update last_trim on
ENODATA, so we continue trimming until done.
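For illustration, a minimal sketch of the resulting trim loop in Python;
the call signature and return conventions are simplified stand-ins for
the actual cls_log_trim() API:

    import errno

    def trim_to(log, from_marker, to_marker):
        # cls_log_trim() may stop after ~1000 entries and return 0, so
        # keep calling until it returns ENODATA (nothing left to trim).
        last_trim = from_marker
        while True:
            ret = log.trim(from_marker, to_marker)  # stand-in call
            if ret == -errno.ENODATA:
                return to_marker   # fully trimmed; now update last_trim
            if ret < 0:
                raise OSError(-ret, "log trim failed")
            # ret == 0: partial progress; do not advance last_trim yet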
Fixes: http://tracker.ceph.com/issues/38075
Signed-off-by: Casey Bodley <cbodley@redhat.com>