doc: replace region with zonegroup in configure bucket sharding section

Fixes: http://tracker.ceph.com/issues/21610
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Author: Orit Wasserman
Date: 2017-10-01 08:40:27 +03:00
parent 534c30aca4
commit 8a1034bccd


@@ -273,23 +273,22 @@ On Ubuntu execute::
     sudo service radosgw restart id=rgw.<short-hostname>
 
 For federated configurations, each zone may have a different ``index_pool``
-setting for failover. To make the value consistent for a region's zones, you
-may set ``rgw_override_bucket_index_max_shards`` in a gateway's region
+setting for failover. To make the value consistent for a zonegroup's zones, you
+may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup
 configuration. For example::
 
-    radosgw-admin region get > region.json
+    radosgw-admin zonegroup get > zonegroup.json
 
-Open the ``region.json`` file and edit the ``bucket_index_max_shards`` setting
-for each named zone. Save the ``region.json`` file and reset the region. For
-example::
+Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting
+for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup.
+For example::
 
-    radosgw-admin region set < region.json
+    radosgw-admin zonegroup set < zonegroup.json
 
-Once you have updated your region, update the region map. For example::
+Once you have updated your zonegroup, update and commit the period.
+For example::
 
-    radosgw-admin regionmap update --name client.rgw.ceph-client
-
-Where ``client.rgw.ceph-client`` is the name of the gateway user.
+    radosgw-admin period update --commit
 
 .. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
    ruleset of SSD-based OSDs may also help with bucket index performance.
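The edit step in the workflow above (dump the zonegroup, set ``bucket_index_max_shards`` for each named zone, then feed the file back) can be sketched in Python. The ``zonegroup`` dict here is a simplified, hypothetical stand-in for real ``radosgw-admin zonegroup get`` output, and the shard count ``16`` is an arbitrary example value, not a recommendation:

```python
import json

# Hypothetical, simplified zonegroup.json content, standing in for the
# output of ``radosgw-admin zonegroup get > zonegroup.json``.
zonegroup = {
    "name": "default",
    "zones": [
        {"name": "us-east", "bucket_index_max_shards": 0},
        {"name": "us-west", "bucket_index_max_shards": 0},
    ],
}

# Set the same shard count for every named zone so the value is
# consistent across the zonegroup (16 is an arbitrary example).
for zone in zonegroup["zones"]:
    zone["bucket_index_max_shards"] = 16

# Serialize the edited document; in practice this would be written back
# to zonegroup.json and applied with ``radosgw-admin zonegroup set``.
print(json.dumps(zonegroup, indent=2))
```

After writing the edited JSON back, the ``radosgw-admin zonegroup set < zonegroup.json`` and ``radosgw-admin period update --commit`` steps from the text apply it to the cluster.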