===========
RGW Service
===========

.. _cephadm-deploy-rgw:

Deploy RGWs
===========

Cephadm deploys radosgw as a collection of daemons that manage a
single-cluster deployment or a particular *realm* and *zone* in a multisite
deployment.  (For more information about realms and zones, see
:ref:`multisite`.)

Note that with cephadm, radosgw daemons are configured via the monitor
configuration database instead of via a `ceph.conf` or the command line.  If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the radosgw daemons will start up
with default settings (e.g., binding to port 80).

To deploy a set of radosgw daemons, with an arbitrary service name *name*,
run the following command:

.. prompt:: bash #

   ceph orch apply rgw *<name>* [--realm=*<realm-name>*] [--zone=*<zone-name>*] --placement="*<num-daemons>* [*<host1>* ...]"

Trivial setup
-------------

For example, to deploy 2 RGW daemons (the default) for a single-cluster RGW
deployment under the arbitrary service id *foo*:

.. prompt:: bash #

   ceph orch apply rgw foo

Designated gateways
-------------------

A common scenario is to have a labeled set of hosts that will act as
gateways, with multiple instances of radosgw running on consecutive ports
8000 and 8001:

.. prompt:: bash #

   ceph orch host label add gwhost1 rgw  # the 'rgw' label can be anything
   ceph orch host label add gwhost2 rgw
   ceph orch apply rgw foo '--placement=label:rgw count-per-host:2' --port=8000

Multisite zones
---------------

To deploy RGWs serving the multisite *myorg* realm and the *us-east-1* zone
on *myhost1* and *myhost2*:

.. prompt:: bash #

   ceph orch apply rgw east --realm=myorg --zone=us-east-1 --placement="2 myhost1 myhost2"

Note that in a multisite situation, cephadm only deploys the daemons.  It
does not create or update the realm or zone configurations.  To create a new
realm and zone, you need to do something like:

.. prompt:: bash #

   radosgw-admin realm create --rgw-realm=<realm-name> --default

.. prompt:: bash #

   radosgw-admin zonegroup create --rgw-zonegroup=<zonegroup-name> --master --default

.. prompt:: bash #

   radosgw-admin zone create --rgw-zonegroup=<zonegroup-name> --rgw-zone=<zone-name> --master --default

.. prompt:: bash #

   radosgw-admin period update --rgw-realm=<realm-name> --commit

See :ref:`orchestrator-cli-placement-spec` for details of the placement
specification.  See :ref:`multisite` for more information about setting up
multisite RGW.

.. _orchestrator-haproxy-service-spec:

High availability service for RGW
=================================

The *ingress* service allows you to create a high availability endpoint for
RGW with a minimum set of configuration options.  The orchestrator will
deploy and manage a combination of haproxy and keepalived to provide load
balancing on a floating virtual IP.

If SSL is used, then SSL must be configured and terminated by the ingress
service and not by RGW itself.

.. image:: ../images/HAProxy_for_RGW.svg

There are N hosts where the ingress service is deployed.  Each host has a
haproxy daemon and a keepalived daemon.  A virtual IP is automatically
configured on only one of these hosts at a time.

Each keepalived daemon checks every few seconds whether the haproxy daemon on
the same host is responding.  Keepalived will also check that the master
keepalived daemon is running without problems.  If the "master" keepalived
daemon or the active haproxy is not responding, one of the remaining
keepalived daemons running in backup mode will be elected as master, and the
virtual IP will be moved to that node.
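To observe this failover behavior, the following is a minimal sketch, not
part of the official procedure: it simply lists the daemons that the ingress
service deploys and checks on each ingress host whether the virtual IP is
present (``192.168.20.1`` is the example address used in the specification
below; substitute your own).

.. code-block:: bash

   # list the haproxy and keepalived daemons deployed for the ingress service
   ceph orch ps --daemon-type haproxy
   ceph orch ps --daemon-type keepalived

   # on each ingress host, check whether the virtual IP is currently configured
   ip -brief address | grep 192.168.20.1

If the haproxy daemon on the host that currently holds the virtual IP stops
responding, the IP should reappear on one of the other ingress hosts within a
few seconds.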
The active haproxy acts like a load balancer, distributing all RGW requests
among the available RGW daemons.

.. note::

   The virtual IP will be configured on an ethernet interface on the host
   that has an existing IP in the same subnet.  (If there are multiple such
   interfaces, cephadm will choose the "first" one it sees.)

**Prerequisites:**

* An existing RGW service, without SSL.  (If you want SSL service, the
  certificate should be configured on the ingress service, not the RGW
  service.)

**Deploying the high availability service for RGW**

Use the command::

    ceph orch apply -i <ingress_spec_yaml_file>

**Service specification file:**

It is a yaml format file with the following properties:

.. code-block:: yaml

    service_type: ingress
    service_id: rgw.something             # adjust to match your existing RGW service
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.something      # adjust to match your existing RGW service
      virtual_ip: <string>/<string>       # ex: 192.168.20.1/24
      frontend_port: <integer>            # ex: 8080
      monitor_port: <integer>             # ex: 1967, used by haproxy for load balancer status
      ssl_cert: |
        ex: [ "-----BEGIN CERTIFICATE-----",
              "MIIDZTCCAk2gAwIBAgIUClb9dnseOsgJWAfhPQvrZw2MP2kwDQYJKoZIhvcNAQEL",
              ....
              "-----END CERTIFICATE-----",
              "-----BEGIN PRIVATE KEY-----",
              ....
              "sCHaZTUevxb4h6dCEk1XdPr2O2GdjV0uQ++9bKahAy357ELT3zPE8yYqw7aUCyBO",
              "aW5DSCo8DgfNOgycVL/rqcrc",
              "-----END PRIVATE KEY-----" ]

where the properties of this service specification are:

* ``service_type``
    Mandatory and set to "ingress".
* ``service_id``
    The name of the service.  We suggest naming this after the service you
    are controlling ingress for (e.g., ``rgw.foo``).
* ``placement hosts``
    The hosts where it is desired to run the HA daemons.  An haproxy and a
    keepalived container will be deployed on these hosts.  These hosts do not
    need to match the nodes where RGW is deployed.
* ``virtual_ip``
    The virtual IP (and network) in CIDR format where the ingress service
    will be available.
* ``frontend_port``
    The port used to access the ingress service.
* ``ssl_cert``
    SSL certificate, if SSL is to be enabled.  This must contain both the
    certificate and private key blocks in .pem format.

**Useful hints for ingress:**

* It is good to have at least 3 RGW daemons.
* Use at least 3 hosts for the ingress service.
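Putting the pieces together, the following is a hedged end-to-end sketch,
not a required layout: the service ids (``foo`` / ``rgw.foo``), the ``rgw``
label, the host names, ports, and the virtual IP are all illustrative
assumptions.  It deploys RGW daemons on labeled hosts and places an ingress
service in front of them, following the hints above.

.. code-block:: yaml

    # rgw-with-ingress.yaml -- illustrative only; adjust ids, hosts, ports, and IPs
    service_type: rgw
    service_id: foo
    placement:
      label: rgw                          # hosts previously labeled with 'rgw'
      count_per_host: 1
    spec:
      rgw_frontend_port: 8080             # backend port the ingress service proxies to
    ---
    service_type: ingress
    service_id: rgw.foo                   # named after the RGW service it fronts
    placement:
      hosts:
        - host1
        - host2
        - host3
    spec:
      backend_service: rgw.foo
      virtual_ip: 192.168.20.1/24
      frontend_port: 80                   # clients connect to the virtual IP on this port
      monitor_port: 1967

Both specs can be applied from the same file with
``ceph orch apply -i rgw-with-ingress.yaml``; ``ceph orch ls`` should then
list both services.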