doc: Updated text for more specific region/zone example.

Signed-off-by: John Wilkins <john.wilkins@inktank.com>

Configuring Federated Gateways
==============================

.. versionadded:: 0.71

In Ceph version 0.71 and beyond, you may configure Ceph Object Gateways in a
federated architecture, with multiple regions and with multiple zones within a
region.

- **Region**: A region represents a logical geographic area and contains one
  or more zones. A cluster with multiple regions must specify a master region.

- **Zone**: A zone is a logical grouping of one or more Ceph Object Gateway
  instances. A region has a master zone that processes client requests.

.. image:: ../images/region-zone-sync.png

When you deploy a :term:`Ceph Object Store` service that spans geographical
locales, configuring Ceph Object Gateway regions and metadata synchronization
agents enables the service to maintain a global namespace, even though Ceph
Object Gateway instances run in different geographic locales and potentially on
different Ceph Storage Clusters. When you separate one or more Ceph Object
Gateway instances within a region into separate logical containers to maintain
an extra copy (or copies) of the data, configuring Ceph Object Gateway zones and
data synchronization agents enables the service to maintain one or more copies
of the master zone's data. Extra copies of the data are important for
failover, backup and disaster recovery.

You may deploy a single Ceph Storage Cluster with a federated architecture
if you have low latency network connections (this isn't recommended).

Exemplary Cluster
=================

For the purposes of this configuration guide, we provide an exemplary procedure
for setting up two regions and two zones for each region. So the cluster will
comprise four gateway instances--one per zone. A production cluster at the
petabyte scale and beyond would likely involve deploying more instances per
zone.

Let's assume the first region represents New York and the second region
represents London. For naming purposes, we will refer to them by their standard
abbreviations:

- New York: ``ny``
- London: ``ldn``

Zones are logical containers for the gateway instances. The physical location of
the gateway instances is up to you, but disaster recovery is an important
consideration. A disaster can be as simple as a power failure or a network
failure. Yet, it can also involve a natural disaster or a significant political
or economic event. In such cases, it is prudent to maintain a secondary zone
outside of the geographic (not logical) region.

Let's assume the master zone for each region is physically located in that
region, and the secondary zone is physically located in another region. For
continuity, our naming convention will use ``{region name}-{zone name}`` format,
but you can use any naming convention you prefer.

- New York Region, Master Zone: ``ny-ny``
- New York Region, Secondary Zone: ``ny-ldn``
- London Region, Master Zone: ``ldn-ldn``
- London Region, Secondary Zone: ``ldn-ny``

.. image:: ../images/region-zone-sync.png

To configure the exemplary cluster, you must configure regions and zones.
Once you configure regions and zones, you must configure each instance of a
Ceph Object Gateway.

Create Regions
--------------

#. Create a region called ``ny``.

   Set ``is_master`` to ``true``. Copy the contents of the following example
   to a text editor. Replace ``{fqdn}`` with the fully-qualified domain name
   of the endpoint. Then, save the file to ``region.json``. It will specify
   ``ny-ny`` as the master zone and list it in the ``zones`` list.
   See `Configuration Reference - Regions`_ for details. ::

      { "name": "ny",
        "api_name": "ny",
        "is_master": "true",
        "endpoints": [
              "http:\/\/{fqdn}:80\/"],
        "master_zone": "ny-ny",
        "zones": [
              { "name": "ny-ny",
                "endpoints": [
                      "http:\/\/{fqdn}:80\/"],
                "log_meta": "false",
                "log_data": "false"}],
        "default_placement": ""}

#. To create ``ny``, execute::

      sudo radosgw-admin region set --infile region.json

   Repeat the foregoing process to create region ``ldn``, but set
   ``is_master`` to ``false`` and update the ``master_zone`` and
   ``zones`` fields, as sketched below.
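
   For example (a sketch only, not taken verbatim from this guide; it assumes
   the same ``{fqdn}`` endpoint convention as above), the ``region.json`` for
   ``ldn`` might look like this::

      { "name": "ldn",
        "api_name": "ldn",
        "is_master": "false",
        "endpoints": [
              "http:\/\/{fqdn}:80\/"],
        "master_zone": "ldn-ldn",
        "zones": [
              { "name": "ldn-ldn",
                "endpoints": [
                      "http:\/\/{fqdn}:80\/"],
                "log_meta": "false",
                "log_data": "false"}],
        "default_placement": ""}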

Create Zone Users
-----------------

Create zone users before configuring the zones. ::

   sudo radosgw-admin user create --uid="ny-ny" --display-name="Region-NY Zone-NY"
   sudo radosgw-admin user create --uid="ny-ldn" --display-name="Region-NY Zone-LDN"
   sudo radosgw-admin user create --uid="ldn-ny" --display-name="Region-LDN Zone-NY"
   sudo radosgw-admin user create --uid="ldn-ldn" --display-name="Region-LDN Zone-LDN"

Copy the ``access_key`` and ``secret_key`` fields for each user. You will need
them to configure each zone.
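
The keys appear in the JSON that ``radosgw-admin user create`` prints. A
trimmed sketch of the relevant portion of that output (the key values shown
here are illustrative, not real keys)::

   { "user_id": "ny-ny",
     "display_name": "Region-NY Zone-NY",
     "keys": [
           { "user": "ny-ny",
             "access_key": "EXAMPLEACCESSKEY",
             "secret_key": "EXAMPLESECRETKEY"}]}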

Create a Zone
-------------

#. Create a zone called ``ny-ny``.

   Paste the contents of the ``access_key`` and ``secret_key`` fields from the
   step of creating a zone user into the ``system_key`` field.
   See `Configuration Reference - Pools`_ for details on gateway pools.
   See `Configuration Reference - Zones`_ for details on zones. ::

      { "domain_root": ".ny-ny.rgw",
        "control_pool": ".ny-ny.rgw.control",
        "gc_pool": ".ny-ny.rgw.gc",
        "log_pool": ".ny-ny.log",
        "intent_log_pool": ".ny-ny.intent-log",
        "usage_log_pool": ".ny-ny.usage",
        "user_keys_pool": ".ny-ny.users",
        "user_email_pool": ".ny-ny.users.email",
        "user_swift_pool": ".ny-ny.users.swift",
        "user_uid_pool": ".ny-ny.users.uid",
        "system_key": { "access_key": "", "secret_key": ""}
      }

#. To create ``ny-ny``, execute::

      sudo radosgw-admin zone set --rgw-zone=ny-ny --infile zone.json

   Repeat the previous two steps to create zones ``ny-ldn``, ``ldn-ny``,
   and ``ldn-ldn``, replacing ``ny-ny`` in the ``zone.json`` file, as
   sketched below.
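
   For example (a sketch only; the ``sed`` substitution is illustrative, and
   you must still paste each zone user's ``access_key`` and ``secret_key``
   into the ``system_key`` field of its file)::

      for zone in ny-ldn ldn-ny ldn-ldn; do
          sed "s/ny-ny/${zone}/g" zone.json > zone-${zone}.json
          sudo radosgw-admin zone set --rgw-zone=${zone} --infile zone-${zone}.json
      done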

#. Delete the default zone. ::

Create Pools
------------

If the username(s) and key(s) that provide your Ceph Object Gateway with access
to the Ceph Storage Cluster do not have write capability to the :term:`Ceph
Monitor`, you must create the pools manually. See `Configuration Reference -
Pools`_ for details on the default pools for gateways. See `Pools`_ for
details on creating pools. The default pools for a Ceph Object Gateway are:

- ``.rgw``
- ``.rgw.control``
- ``.rgw.gc``
- ``.log``
- ``.intent-log``
- ``.usage``
- ``.users``
- ``.users.email``
- ``.users.swift``
- ``.users.uid``

The `Exemplary Cluster`_ assumes that you will have a Ceph Storage Cluster for
each region, and that you will create pools for each zone that resides
**physically** in that region (e.g., ``ny-ny`` and ``ldn-ny`` in New York and
``ldn-ldn`` and ``ny-ldn`` in London). For each pool, prepend the zone name
(e.g., ``.ny-ny``, ``.ny-ldn``, ``.ldn-ldn``, ``.ldn-ny``). For example:

- ``.ny-ny.rgw``
- ``.ny-ny.rgw.control``
- ``.ny-ny.rgw.gc``
- ``.ny-ny.log``
- ``.ny-ny.intent-log``
- ``.ny-ny.usage``
- ``.ny-ny.users``
- ``.ny-ny.users.email``
- ``.ny-ny.users.swift``
- ``.ny-ny.users.uid``

Execute one of the following::

   ceph osd pool create {poolname} {pg-num} {pgp-num}
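
For example (an illustrative sketch; the pool names follow the `Exemplary
Cluster`_ convention above, and the placement group counts are arbitrary), you
might create the ``ny-ny`` pools on the New York cluster as follows::

   for pool in rgw rgw.control rgw.gc log intent-log usage users users.email users.swift users.uid; do
       ceph osd pool create .ny-ny.${pool} 64 64
   done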

.. tip:: When adding a large number of pools, it may take some time for your
   cluster to return to an ``active + clean`` state.

Configuring Gateway Instances
=============================

The `Exemplary Cluster`_ assumes that you will configure an instance for
each zone. In larger deployments, you may need to configure multiple instances
per zone to handle higher loads.

Before you configure a gateway instance, determine an ID for the instance. You
can name a Ceph Object Gateway instance anything you like. In large clusters
with regions and zones, it may help to add region and zone names into your
instance name. For example::

   ny-ny-instance1

When referring to your instance identifier in the Ceph configuration file, it
is prepended with ``client.radosgw.``. For example, an instance named
``ny-ny-instance1`` will look like this::

   [client.radosgw.ny-ny-instance1]

Similarly, the default data path for an instance named
``ny-ny-instance1`` is prepended with ``{cluster}-radosgw.``. For
example::

   /var/lib/ceph/radosgw/ceph-radosgw.ny-ny-instance1

Create a Data Directory
-----------------------

Create a data directory on the node where you installed ``radosgw``. ::

   sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.ny-ny-instance1
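
If the same node will also host the gateway instance for the other zone that
resides physically in that region (e.g., ``ldn-ny`` in New York), create a
data directory for that instance too; the instance name below follows the
earlier naming example and is illustrative. ::

   sudo mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.ldn-ny-instance1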

Create a Storage Cluster User
-----------------------------

#. Create a keyring for the Ceph Object Gateway. For example::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.ny.keyring
      sudo chmod +r /etc/ceph/ceph.client.radosgw.ny.keyring
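
   The `Exemplary Cluster`_ uses one Ceph Storage Cluster per region, so the
   London cluster would get a keyring of its own (a sketch following the same
   naming convention)::

      sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.ldn.keyring
      sudo chmod +r /etc/ceph/ceph.client.radosgw.ldn.keyring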

#. Generate a key so that the Ceph Object Gateway can provide a user name and
   key to authenticate with the Ceph Storage Cluster. Then, add read and write
   capabilities to the key. See `Configuration Reference - Pools`_ for details
   on the effect of write permissions for the monitor and creating pools. ::

      sudo ceph-authtool /etc/ceph/ceph.client.radosgw.ny.keyring -n client.radosgw.ny-ny --gen-key
      sudo ceph-authtool /etc/ceph/ceph.client.radosgw.ny.keyring -n client.radosgw.ldn-ny --gen-key
      sudo ceph-authtool -n client.radosgw.ny-ny --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/ceph.client.radosgw.ny.keyring
      sudo ceph-authtool -n client.radosgw.ldn-ny --cap osd 'allow rwx' --cap mon 'allow rw' /etc/ceph/ceph.client.radosgw.ny.keyring

   **Note:** You will need to generate a key for each zone that will access
   the Ceph Storage Cluster (assuming one Ceph Storage Cluster per region).

#. Once you have created a keyring and key to enable the Ceph Object Gateway
   with access to the Ceph Storage Cluster, add it as an entry to your Ceph
   Storage Cluster. For example::

      sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.ny-ny -i /etc/ceph/ceph.client.radosgw.ny.keyring
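
   Repeat for the other zone user whose key resides in the same keyring (a
   sketch following the earlier key generation step)::

      sudo ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.ldn-ny -i /etc/ceph/ceph.client.radosgw.ny.keyring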

Create a Gateway Configuration
------------------------------

Copy the following into the editor. ::

   #!/bin/sh
   exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.ny-ny-instance1

#. Save the file.

Each Ceph Object Gateway instance is a client of the Ceph Storage Cluster, so
you must place each instance under an instance ID in your Ceph configuration
file. For example::

   [client.radosgw.ny-ny-instance1]

   #Region Info
   rgw region = ny
   rgw region root pool = .ny.rgw.root

   #Zone Info
   rgw zone = ny-ny
   rgw zone root pool = .ny-ny.rgw.root
   keyring = /etc/ceph/ceph.client.radosgw.ny.keyring

   #DNS Info for S3 Subdomains
   rgw dns name = {hostname}

After changing the configuration of a gateway instance, we recommend restarting
the ``apache2`` service. For example::

   sudo service apache2 restart


Start Gateways
==============

Start up the ``radosgw`` service. When starting the service with a region and
zone other than the defaults, you must specify them explicitly. ::

   sudo /etc/init.d/radosgw start --rgw-region={region} --rgw-zone={zone}
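
For example, to start the instance serving the New York master zone (the names
follow the `Exemplary Cluster`_ convention)::

   sudo /etc/init.d/radosgw start --rgw-region=ny --rgw-zone=ny-ny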

Synchronize Metadata
====================

The metadata agent maintains a global namespace for the cluster. The master
zone of the master region is the source for all other instances in the cluster.

Configure an Agent
------------------

To configure the metadata synchronization agent, retrieve the following from all
zones:

- Access Key
- Secret Key
- Hostname
- Port

You only need the hostname and port for a single instance (assuming all gateway
instances in a region/zone access the same Ceph Storage Cluster). Specify these
values in a configuration file (e.g., ``cluster-md-sync.conf``), and include a
``log_file`` name and an identifier for the ``daemon_id``. For example:

.. code-block:: ini

   src_access_key: {source-access-key}
   src_secret_key: {source-secret-key}
   src_host: {source-hostname}
   src_port: {source-port}
   src_zone: {source-zone}
   dest_access_key: {destination-access-key}
   dest_secret_key: {destination-secret-key}
   dest_host: {destination-hostname}
   dest_port: {destination-port}
   dest_zone: {destination-zone}
   log_file: {log.filename}
   daemon_id: {daemon-id}

The `Exemplary Cluster`_ assumes that ``ny-ny`` is the master region and zone,
so it is the source for ``ny-ldn``, ``ldn-ldn`` and ``ldn-ny``.
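
For example, a ``cluster-md-sync.conf`` for the ``ny-ny`` to ``ldn-ldn`` pair
might look like this (a sketch; the hostnames, log file name and daemon ID are
illustrative placeholders):

.. code-block:: ini

   src_access_key: {ny-ny-access-key}
   src_secret_key: {ny-ny-secret-key}
   src_host: ny-gateway.example.com
   src_port: 80
   src_zone: ny-ny
   dest_access_key: {ldn-ldn-access-key}
   dest_secret_key: {ldn-ldn-secret-key}
   dest_host: ldn-gateway.example.com
   dest_port: 80
   dest_zone: ldn-ldn
   log_file: /var/log/radosgw/md-sync-ny-to-ldn.log
   daemon_id: md-sync-ny-to-ldn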

Activate an Agent
-----------------

To activate the metadata synchronization agent, execute the following::

   radosgw-agent -c cluster-md-sync.conf

You must have an agent for each source-destination pair.

Replicate Data
==============

The data synchronization agent replicates the data of a master zone to a
secondary zone. The master zone of a region is the source for the secondary zone
of the region.

Configure an Agent
------------------

To configure the synchronization agent, retrieve the following from all zones:

- Access Key
- Secret Key
- Hostname
- Port

You only need the hostname and port for a single instance (assuming all gateway
instances in a region/zone access the same Ceph Storage Cluster). Specify these
values in a configuration file (e.g., ``cluster-data-sync.conf``), and include a
``log_file`` name and an identifier for the ``daemon_id``. For example:

.. code-block:: ini

   src_access_key: {source-access-key}
   src_secret_key: {source-secret-key}
   src_host: {source-hostname}
   src_port: {source-port}
   src_zone: {source-zone}
   dest_access_key: {destination-access-key}
   dest_secret_key: {destination-secret-key}
   dest_host: {destination-hostname}
   dest_port: {destination-port}
   dest_zone: {destination-zone}
   log_file: {log.filename}
   daemon_id: {daemon-id}

The `Exemplary Cluster`_ assumes that ``ny-ny`` and ``ldn-ldn`` are the master
zones, so they are the sources for ``ny-ldn`` and ``ldn-ny`` respectively.
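
For example, a ``cluster-data-sync.conf`` replicating the London master zone to
its secondary zone in New York might look like this (a sketch; the hostnames,
log file name and daemon ID are illustrative placeholders):

.. code-block:: ini

   src_access_key: {ldn-ldn-access-key}
   src_secret_key: {ldn-ldn-secret-key}
   src_host: ldn-gateway.example.com
   src_port: 80
   src_zone: ldn-ldn
   dest_access_key: {ldn-ny-access-key}
   dest_secret_key: {ldn-ny-secret-key}
   dest_host: ny-gateway.example.com
   dest_port: 80
   dest_zone: ldn-ny
   log_file: /var/log/radosgw/data-sync-ldn.log
   daemon_id: data-sync-ldn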

Activate an Agent
-----------------

To activate the data synchronization agent, execute the following::

   radosgw-agent -c cluster-data-sync.conf

You must have an agent for each source-destination pair.