Merge pull request #61829 from rkachach/fix_issue_69971

mgr/cephadm: Add virtual_ip as a param for mgmt-gateway apply cmd

Reviewed-by: Adam King <adking@redhat.com>
Adam King 2025-03-11 11:33:11 -04:00 committed by GitHub
commit 388ef10807
3 changed files with 41 additions and 30 deletions


@@ -7,15 +7,15 @@ Management Gateway
Deploying mgmt-gateway
======================
In Ceph releases beginning with Squid, the ``mgmt-gateway`` service introduces a new design for Ceph applications
based on a modular, service-based architecture. This service, managed by cephadm and built on top of nginx
(an open-source, high-performance web server), acts as the new front-end and single entry point to the
Ceph cluster. The ``mgmt-gateway`` provides unified access to all Ceph applications, including the Ceph dashboard
and monitoring stack. Employing nginx enhances security and simplifies access management due to its robust
community support and high-security standards. The ``mgmt-gateway`` service acts as a reverse proxy that routes
requests to the appropriate Ceph application instances.

In order to deploy the ``mgmt-gateway`` service, use the following command:
.. prompt:: bash #
@@ -23,7 +23,7 @@ In order to deploy the mgmt-gateway service, use the following command:
Once applied, cephadm will reconfigure specific running daemons (such as monitoring) to run behind the
newly created service. External access to those services will no longer be possible. Access will be
consolidated behind the new service endpoint: ``https://<node-ip>:<port>``.
Benefits of the mgmt-gateway service
@@ -36,7 +36,7 @@ Benefits of the mgmt-gateway service
Security enhancements
=====================
Once the ``mgmt-gateway`` service is deployed, users cannot access the monitoring services without authenticating through the
Ceph dashboard.
@@ -44,32 +44,37 @@ High availability enhancements
==============================
nginx HA mechanisms are used to provide high availability for all the Ceph management applications, including the Ceph dashboard
and monitoring stack. In the case of the Ceph dashboard, users no longer need to know where the active manager is running:
the ``mgmt-gateway`` handles manager failover transparently and redirects the user to the active manager. For the
monitoring stack, the ``mgmt-gateway`` takes care of handling HA when several instances of Prometheus, Alertmanager or Grafana are
available. The reverse proxy will automatically detect healthy instances and use them to process user requests.
High Availability for mgmt-gateway service
==========================================
In addition to providing high availability for the underlying backend services, the ``mgmt-gateway``
service itself can be configured for high availability, ensuring that the system remains resilient
even if certain core components for the service fail, including the ``mgmt-gateway`` itself.
Multiple ``mgmt-gateway`` instances can be deployed in an active/standby configuration using keepalived
for seamless failover. The ``oauth2-proxy`` service can be deployed as multiple stateless instances,
with nginx acting as a load balancer across them using a round-robin strategy. This setup removes
single points of failure and enhances the resilience of the entire system.
In this setup, the underlying internal services follow the same high availability mechanism. Instead of
directly accessing the ``mgmt-gateway`` internal endpoint, services use the virtual IP specified in the spec.
This ensures that the high availability mechanism for ``mgmt-gateway`` is transparent to other services.
Example Configuration for High Availability
The simplest and recommended way to deploy the ``mgmt-gateway`` in high availability mode is by using labels. To
run the ``mgmt-gateway`` in HA mode, users can either use the cephadm command line as follows:

.. prompt:: bash #

   ceph orch apply mgmt-gateway --virtual_ip 192.168.100.220 --enable-auth=true --placement="label:mgmt"

Or provide specification files as follows:

``mgmt-gateway`` Configuration:

.. code-block:: yaml
@@ -80,7 +85,8 @@ To deploy the mgmt-gateway in a high availability setup, here is an example of t
enable_auth: true
virtual_ip: 192.168.100.220

In addition, the user must configure an ingress service to provide virtual IP functionality for
the ``mgmt-gateway``. For example:
.. code-block:: yaml
@@ -92,24 +98,26 @@ To deploy the mgmt-gateway in a high availability setup, here is an example of t
backend_service: mgmt-gateway
keepalive_only: true

The number of deployed instances is determined by the number of hosts with the ``mgmt`` label.
The ingress is configured in ``keepalive_only`` mode, with labels ensuring that any changes to
the ``mgmt-gateway`` daemons are replicated to the corresponding keepalived instances.

.. note::

   The ``virtual_ip`` parameter must be identical in both the ingress and ``mgmt-gateway`` specifications.
Accessing services with mgmt-gateway
====================================
Once the ``mgmt-gateway`` service is deployed, direct access to the monitoring services is no longer allowed.
Applications including Prometheus, Grafana and Alertmanager are now accessible through links
from ``Administration > Services``.
Service Specification
=====================
A ``mgmt-gateway`` service can be applied using a specification. An example in YAML follows:
.. code-block:: yaml
@@ -141,7 +149,7 @@ A mgmt-gateway service can be applied using a specification. An example in YAML
[...]
-----END PRIVATE KEY-----
Fields specific to the ``spec`` section of the ``mgmt-gateway`` service are described below.
.. py:currentmodule:: ceph.deployment.service_spec
@@ -179,13 +187,13 @@ Limitations
Default images
~~~~~~~~~~~~~~
The ``mgmt-gateway`` service internally makes use of an nginx reverse proxy. The following container image is used by default:

::

    mgr/cephadm/container_image_nginx = 'quay.io/ceph/nginx:sclorg-nginx-126'

Admins can specify the image to be used by changing the ``container_image_nginx`` cephadm module option. If there are already
running daemon(s), you must redeploy the daemon(s) in order to have them actually use the new image.
For example:
@@ -193,4 +201,4 @@ For example:
.. code-block:: bash

    ceph config set mgr mgr/cephadm/container_image_nginx <new-nginx-image>
    ceph orch redeploy mgmt-gateway
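Because the note in the HA section requires the ``virtual_ip`` value to be identical in the ``mgmt-gateway`` and ingress specifications, a small pre-flight check can catch mismatches before the specs are applied. The following is a hypothetical standalone helper, not part of cephadm; it assumes both specs are kept in a single multi-document YAML file and compares only the address part of ``spec.virtual_ip``:

.. code-block:: python

    # Hypothetical pre-flight check (not part of cephadm): verify that the
    # mgmt-gateway spec and the ingress spec agree on virtual_ip before
    # running `ceph orch apply -i specs.yaml`.
    import sys

    import yaml  # PyYAML


    def virtual_ips(path: str) -> dict:
        """Return {service_type: virtual_ip} for every spec in a multi-doc YAML file."""
        ips = {}
        with open(path) as f:
            for doc in yaml.safe_load_all(f):
                if not doc:
                    continue
                ip = (doc.get("spec") or {}).get("virtual_ip")
                if ip:
                    # The ingress spec may carry a CIDR suffix (e.g. 192.168.100.220/24);
                    # compare only the address part.
                    ips[doc["service_type"]] = str(ip).split("/")[0]
        return ips


    if __name__ == "__main__":
        ips = virtual_ips(sys.argv[1])
        if ips.get("mgmt-gateway") != ips.get("ingress"):
            sys.exit(f"virtual_ip mismatch: {ips}")
        print(f"virtual_ip consistent: {ips.get('mgmt-gateway')}")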


@@ -1899,6 +1899,7 @@ Usage:
port: Optional[int] = None,
disable_https: Optional[bool] = False,
enable_auth: Optional[bool] = False,
virtual_ip: Optional[str] = None,
placement: Optional[str] = None,
unmanaged: bool = False,
dry_run: bool = False,
@@ -1913,6 +1914,7 @@ Usage:
placement=PlacementSpec.from_string(placement),
unmanaged=unmanaged,
port=port,
virtual_ip=virtual_ip,
disable_https=disable_https,
enable_auth=enable_auth,
preview_only=dry_run
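The handler above simply forwards the new ``virtual_ip`` keyword into the spec it builds. As a rough, standalone sketch of that pattern (this is not cephadm's actual CLI wiring, and the names below are illustrative only), an optional ``--virtual_ip`` flag is parsed and passed through as a spec keyword argument:

.. code-block:: python

    # Simplified sketch of the CLI-to-spec pattern shown above; illustrative only.
    import argparse
    from typing import Any, Dict, List


    def parse_apply_mgmt_gateway(argv: List[str]) -> Dict[str, Any]:
        p = argparse.ArgumentParser(prog="ceph orch apply mgmt-gateway")
        p.add_argument("--port", type=int)
        p.add_argument("--disable-https", type=lambda s: s.lower() == "true", default=False)
        p.add_argument("--enable-auth", type=lambda s: s.lower() == "true", default=False)
        p.add_argument("--virtual_ip")  # the flag added by this commit
        p.add_argument("--placement")
        args = p.parse_args(argv)
        # These keyword arguments mirror the ones forwarded to the spec above.
        return {
            "placement": args.placement,
            "port": args.port,
            "disable_https": args.disable_https,
            "enable_auth": args.enable_auth,
            "virtual_ip": args.virtual_ip,
        }


    if __name__ == "__main__":
        print(parse_apply_mgmt_gateway(
            ["--virtual_ip", "192.168.100.220", "--enable-auth=true", "--placement", "label:mgmt"]
        ))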


@@ -1912,8 +1912,9 @@ class MgmtGatewaySpec(ServiceSpec):
self.ssl_protocols = ssl_protocols
#: List of supported secure SSL ciphers. Changing this list may reduce system security.
self.ssl_ciphers = ssl_ciphers
#: Virtual IP address used for the management gateway in a high availability setup.
self.virtual_ip = virtual_ip
self.enable_health_check_endpoint = enable_health_check_endpoint
def get_port_start(self) -> List[int]:
ports = []
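The spec class above now stores a ``virtual_ip`` attribute alongside the other gateway options. As a rough illustration of how that field ends up in the YAML consumed by ``ceph orch apply -i``, here is a minimal stand-in (not the real ``MgmtGatewaySpec``, which lives in ``ceph.deployment.service_spec`` and carries many more fields); the emitted layout mirrors the documentation example earlier in this commit:

.. code-block:: python

    # Minimal stand-in for MgmtGatewaySpec, for illustration only.
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    import yaml  # PyYAML


    @dataclass
    class MgmtGatewaySpecSketch:
        enable_auth: bool = False
        virtual_ip: Optional[str] = None  # the new high-availability virtual IP
        placement: Dict[str, str] = field(default_factory=dict)

        def to_yaml(self) -> str:
            spec: Dict[str, object] = {"enable_auth": self.enable_auth}
            if self.virtual_ip:
                spec["virtual_ip"] = self.virtual_ip
            return yaml.safe_dump(
                {"service_type": "mgmt-gateway", "placement": self.placement, "spec": spec},
                sort_keys=False,
            )


    if __name__ == "__main__":
        print(MgmtGatewaySpecSketch(
            enable_auth=True,
            virtual_ip="192.168.100.220",
            placement={"label": "mgmt"},
        ).to_yaml())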