From b0085bf224a1963d38a24a9f45c3d3d7c994f5b0 Mon Sep 17 00:00:00 2001
From: Ernesto Puerta
Date: Fri, 10 May 2024 14:28:20 +0200
Subject: [PATCH] mgr/{restful,zabbix}: document removal

Fixes: https://tracker.ceph.com/issues/47066
Signed-off-by: Ernesto Puerta
---
 PendingReleaseNotes                  | 7 +++++++
 debian/ceph-mgr-modules-core.install | 1 +
 doc/start/index.rst                  | 6 ++----
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index d82ed125d92..2381800e8e8 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -35,6 +35,13 @@
   users that modifying "max_mds" may not help with troubleshooting or recovery
   effort. Instead, it might further destabilize the cluster.
 
+* mgr/restful, mgr/zabbix: both modules, deprecated since 2020, have now been
+  removed. They have not been actively maintained in recent years and had begun
+  to accumulate vulnerabilities in their dependency chain (e.g. CVE-2023-46136).
+  As an alternative to the `restful` module, the `dashboard` module provides a
+  richer and better-maintained RESTful API. As an alternative to the `zabbix`
+  module, there are other monitoring solutions, such as `prometheus`, which is
+  the most widely adopted among the Ceph user community.
 
 >=19.0.0
 
diff --git a/debian/ceph-mgr-modules-core.install b/debian/ceph-mgr-modules-core.install
index 0e803d7f44a..5d1e35204fc 100644
--- a/debian/ceph-mgr-modules-core.install
+++ b/debian/ceph-mgr-modules-core.install
@@ -15,6 +15,7 @@ usr/share/ceph/mgr/pg_autoscaler
 usr/share/ceph/mgr/progress
 usr/share/ceph/mgr/prometheus
 usr/share/ceph/mgr/rbd_support
+usr/share/ceph/mgr/rgw
 usr/share/ceph/mgr/selftest
 usr/share/ceph/mgr/snap_schedule
 usr/share/ceph/mgr/stats
diff --git a/doc/start/index.rst b/doc/start/index.rst
index 0aec895ab73..439e9b24555 100644
--- a/doc/start/index.rst
+++ b/doc/start/index.rst
@@ -40,8 +40,8 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
   state of the Ceph cluster, including storage utilization, current
   performance metrics, and system load. The Ceph Manager daemons also
   host python-based modules to manage and expose Ceph cluster
-  information, including a web-based :ref:`mgr-dashboard` and
-  `REST API`_. At least two managers are normally required for high
+  information, including a web-based :ref:`mgr-dashboard`.
+  At least two managers are normally required for high
   availability.
 
 - **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
@@ -62,8 +62,6 @@ contain the object, and which OSD should store the placement group. The CRUSH
 algorithm enables the Ceph Storage Cluster to scale, rebalance, and recover
 dynamically.
 
-.. _REST API: ../../mgr/restful
-
 .. container:: columns-2
 
    .. container:: column
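
For users migrating off the removed `restful` module, the sketch below shows one way a
script might talk to the `dashboard` module's REST API instead. It is a minimal,
unofficial example, not part of this patch: the endpoint URL, port, credentials, and the
`requests` dependency are assumptions, and the exact endpoint paths and `Accept` header
should be checked against the dashboard API reference for your Ceph release.

.. code-block:: python

   # Minimal sketch: query the dashboard REST API instead of the removed
   # `restful` module. URL, port, and credentials below are placeholders;
   # a dashboard account can be created with `ceph dashboard ac-user-create`.
   import requests

   DASHBOARD_URL = "https://localhost:8443"      # assumed dashboard address
   USER, PASSWORD = "api-user", "api-password"   # assumed credentials
   ACCEPT = {"Accept": "application/vnd.ceph.api.v1.0+json"}

   # Obtain a bearer token from the dashboard's auth endpoint.
   auth = requests.post(
       f"{DASHBOARD_URL}/api/auth",
       json={"username": USER, "password": PASSWORD},
       headers=ACCEPT,
       verify=False,  # self-signed certificates are common in test clusters
   )
   auth.raise_for_status()
   token = auth.json()["token"]

   # List OSDs, roughly what the old restful module's /osd endpoint provided.
   osds = requests.get(
       f"{DASHBOARD_URL}/api/osd",
       headers={"Authorization": f"Bearer {token}", **ACCEPT},
       verify=False,
   )
   osds.raise_for_status()
   print(f"dashboard API reports {len(osds.json())} OSDs")

For `zabbix` users, no client code is needed: enabling the `prometheus` module
(`ceph mgr module enable prometheus`) exposes a metrics endpoint that a Prometheus
server can scrape directly.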