doc/rgw: update pool names, document namespaces

Fixes: http://tracker.ceph.com/issues/19504

Signed-off-by: Casey Bodley <cbodley@redhat.com>
Casey Bodley 2017-05-18 14:45:03 -04:00
parent 9cc834e1a0
commit 3a6471a6e6
3 changed files with 61 additions and 52 deletions


@ -38,6 +38,7 @@ you may write data with one API and retrieve it with the other.
Manual Install w/Civetweb <../../install/install-ceph-gateway>
Multisite Configuration <multisite>
Configuring Pools <pools>
Config Reference <config-ref>
Admin Guide <admin>
S3 API <s3>


@ -74,58 +74,8 @@ In this guide, the ``rgw1`` host will serve as the master zone of the
master zone group; and, the ``rgw2`` host will serve as the secondary zone
of the master zone group.
Pools
=====
We recommend using the `Ceph Placement Groups per Pool
Calculator <http://ceph.com/pgcalc/>`__ to calculate a
suitable number of placement groups for the pools the ``ceph-radosgw``
daemon will create. Set the calculated values as defaults in your Ceph
configuration file. For example:
::
osd pool default pg num = 50
osd pool default pgp num = 50
.. note:: Make this change to the Ceph configuration file on your
storage cluster; then, make a runtime change to the
configuration so that it uses those defaults when the gateway
instance creates the pools.
Alternatively, create the pools manually. See
`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
for details on creating pools.
Pool names particular to a zone follow the naming convention
``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
have the following pools:
- ``.rgw.root``
- ``us-east.rgw.control``
- ``us-east.rgw.data.root``
- ``us-east.rgw.gc``
- ``us-east.rgw.log``
- ``us-east.rgw.intent-log``
- ``us-east.rgw.usage``
- ``us-east.rgw.users.keys``
- ``us-east.rgw.users.email``
- ``us-east.rgw.users.swift``
- ``us-east.rgw.users.uid``
- ``us-east.rgw.buckets.index``
- ``us-east.rgw.buckets.data``
See `Pools`_ for instructions on creating and tuning pools for Ceph
Object Storage.
Configuring a Master Zone
@ -1504,3 +1454,6 @@ instance.
| | keeping inter-zone group | | |
| | synchronization progress. | | |
+-------------------------------------+-----------------------------------+---------+-----------------------+
.. _`Pools`: ../pools

doc/radosgw/pools.rst Normal file

@ -0,0 +1,55 @@
=====
Pools
=====
The Ceph Object Gateway uses several pools for its various storage needs,
which are listed in the Zone object (see ``radosgw-admin zone get``). A
single zone named ``default`` is created automatically with pool names
starting with ``default.rgw.``, but a `Multisite Configuration`_ will have
multiple zones.
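The pool list for a zone can be inspected directly; as a sketch, assuming
a zone named ``us-east`` exists::

    radosgw-admin zone get --rgw-zone=us-east

The output is a JSON document whose pool fields (``control_pool``,
``log_pool``, the ``placement_pools`` entries, and so on) name the rados
pools that zone will use.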
Tuning
======
When ``radosgw`` first tries to operate on a zone pool that does not
exist, it will create that pool with the default values from
``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
are sufficient for some pools, but others (especially those listed in
``placement_pools`` for the bucket index and data) will require additional
tuning. We recommend using the `Ceph Placement Groups per Pool
Calculator <http://ceph.com/pgcalc/>`__ to calculate a suitable number of
placement groups for these pools. See
`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
for details on pool creation.
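As an illustrative sketch (the placement-group counts below are
placeholders taken from the calculator, not recommendations), the
calculated values can either be set as defaults in ``ceph.conf`` before
``radosgw`` creates the pools::

    [global]
    osd pool default pg num = 32
    osd pool default pgp num = 32

or the heavily used pools can be created manually with their own counts
before the gateway first touches them::

    ceph osd pool create us-east.rgw.buckets.index 64 64
    ceph osd pool create us-east.rgw.buckets.data 128 128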
Pool Namespaces
===============
.. versionadded:: Luminous
Pool names particular to a zone follow the naming convention
``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
have the following pools:
- ``.rgw.root``
- ``us-east.rgw.control``
- ``us-east.rgw.meta``
- ``us-east.rgw.log``
- ``us-east.rgw.buckets.index``
- ``us-east.rgw.buckets.data``
The zone definition lists several more pools than those shown above, but
many of them are consolidated through RADOS namespaces. For example, all
of the following pool entries use namespaces of the ``us-east.rgw.meta``
pool::
"user_keys_pool": "us-east.rgw.meta:users.keys",
"user_email_pool": "us-east.rgw.meta:users.email",
"user_swift_pool": "us-east.rgw.meta:users.swift",
"user_uid_pool": "us-east.rgw.meta:users.uid",
.. _`Multisite Configuration`: ../multisite