diff --git a/doc/rados/operations/user-management.rst b/doc/rados/operations/user-management.rst
index 6f12e7f427a..3fcb1f2e71e 100644
--- a/doc/rados/operations/user-management.rst
+++ b/doc/rados/operations/user-management.rst
@@ -98,24 +98,24 @@ capabilities when creating or updating a user.
 Capability syntax follows the form::
 
-    {daemon-type} 'allow {capability}' [{daemon-type} 'allow {capability}']
+    {daemon-type} '{capspec}[, {capspec} ...]'
 
-- **Monitor Caps:** Monitor capabilities include ``r``, ``w``, ``x`` and
-  ``allow profile {cap}``. For example::
+- **Monitor Caps:** Monitor capabilities include ``r``, ``w``, ``x`` access
+  settings or ``profile {name}``. For example::
 
     mon 'allow rwx'
-    mon 'allow profile osd'
+    mon 'profile osd'
 
-- **OSD Caps:** OSD capabilities include ``r``, ``w``, ``x``, ``class-read``,
-  ``class-write`` and ``profile osd``. Additionally, OSD capabilities also
-  allow for pool and namespace settings. ::
+- **OSD Caps:** OSD capabilities include ``r``, ``w``, ``x``, ``class-read``,
+  ``class-write`` access settings or ``profile {name}``. Additionally, OSD
+  capabilities allow for pool and namespace settings. ::
 
-    osd 'allow {capability}' [pool={poolname}] [namespace={namespace-name}]
+    osd 'allow {access} [pool={pool-name} [namespace={namespace-name}]]'
+    osd 'profile {name} [pool={pool-name} [namespace={namespace-name}]]'
 
 - **Metadata Server Caps:** Metadata server capability simply requires ``allow``,
   or blank and does not parse anything further. ::
-
+
     mds 'allow'
@@ -168,20 +168,20 @@ The following entries describe each capability.
               admin commands.
 
 
-``profile osd``
+``profile osd`` (Monitor only)
 
 :Description: Gives a user permissions to connect as an OSD to other OSDs or
               monitors. Conferred on OSDs to enable OSDs to handle replication
               heartbeat traffic and status reporting.
 
 
-``profile mds``
+``profile mds`` (Monitor only)
 
 :Description: Gives a user permissions to connect as a MDS to other MDSs or
               monitors.
 
 
-``profile bootstrap-osd``
+``profile bootstrap-osd`` (Monitor only)
 
 :Description: Gives a user permissions to bootstrap an OSD. Conferred on
               deployment tools such as ``ceph-disk``, ``ceph-deploy``, etc.
@@ -189,13 +189,23 @@ The following entries describe each capability.
               bootstrapping an OSD.
 
 
-``profile bootstrap-mds``
+``profile bootstrap-mds`` (Monitor only)
 
 :Description: Gives a user permissions to bootstrap a metadata server.
               Conferred on deployment tools such as ``ceph-deploy``, etc.
               so they have permissions to add keys, etc. when bootstrapping
               a metadata server.
 
+``profile rbd`` (Monitor and OSD)
+
+:Description: Gives a user permissions to manipulate RBD images. When used
+              as a Monitor cap, it provides the minimal privileges required
+              by an RBD client application. When used as an OSD cap, it
+              provides read-write access to an RBD client application.
+
+``profile rbd-read-only`` (OSD only)
+
+:Description: Gives a user read-only permissions to an RBD image.
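+
+              For example, assuming an existing pool named ``vms``, the
+              following would create a user (named ``client.guest`` purely
+              for illustration) that is limited to read-only RBD access::
+
+                  ceph auth get-or-create client.guest mon 'profile rbd' osd 'profile rbd-read-only pool=vms'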
 
 Pool
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index 686e19c3964..8bf4372b7ea 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -71,11 +71,11 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
 
     rbd pool init
 
-#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
-   earlier). The following example uses the Ceph user name ``client.libvirt``
+#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
+   earlier). The following example uses the Ceph user name ``client.libvirt``
    and references ``libvirt-pool``. ::
 
-    ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
+    ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'
 
    Verify the name exists. ::
diff --git a/doc/rbd/rados-rbd-cmds.rst b/doc/rbd/rados-rbd-cmds.rst
index cc113c1d069..f28641f04bf 100644
--- a/doc/rbd/rados-rbd-cmds.rst
+++ b/doc/rbd/rados-rbd-cmds.rst
@@ -25,6 +25,30 @@ Create a Block Device Pool
 .. note:: The ``rbd`` tool assumes a default pool name of 'rbd' when not
    provided.
 
+Create a Block Device User
+==========================
+
+Unless otherwise specified, the ``rbd`` command will access the Ceph cluster
+using the ID ``admin``. This ID allows full administrative access to the
+cluster. It is recommended that you use a more restricted user wherever possible.
+
+To `create a Ceph user`_, use the ``ceph auth get-or-create`` command and
+specify the user name, monitor caps, and OSD caps::
+
+    ceph auth get-or-create client.{ID} mon 'profile rbd' osd 'profile {profile name} [pool={pool-name}][, profile ...]'
+
+For example, to create a user ID named ``qemu`` with read-write access to the
+pool ``vms`` and read-only access to the pool ``images``, execute the
+following::
+
+    ceph auth get-or-create client.qemu mon 'profile rbd' osd 'profile rbd pool=vms, profile rbd-read-only pool=images'
+
+The output from the ``ceph auth get-or-create`` command will be the keyring
+for the specified user, which can be written to ``/etc/ceph/ceph.client.{ID}.keyring``.
+
+.. note:: The user ID can be specified when using the ``rbd`` command by
+   providing the ``--id {id}`` optional argument.
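+
+For example, assuming the ``qemu`` user created above and a keyring saved to
+``/etc/ceph/ceph.client.qemu.keyring``, the images in the ``vms`` pool could
+be listed with::
+
+    rbd --id qemu ls vms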
+
 Creating a Block Device Image
 =============================
 
@@ -33,7 +57,7 @@ the :term:`Ceph Storage Cluster` first. To create a block device image,
 execute the following::
 
     rbd create --size {megabytes} {pool-name}/{image-name}
-
+
 For example, to create a 1GB image named ``bar`` that stores information in a
 pool named ``swimmingpool``, execute the following::
 
@@ -126,3 +150,4 @@ For example::
 .. _create a pool: ../../rados/operations/pools/#create-a-pool
 .. _Storage Pools: ../../rados/operations/pools
 .. _RBD – Manage RADOS Block Device (RBD) Images: ../../man/8/rbd/
+.. _create a Ceph user: ../../rados/operations/user-management#add-a-user
diff --git a/doc/rbd/rbd-cloudstack.rst b/doc/rbd/rbd-cloudstack.rst
index e888f787a3f..f66d6d4bb3d 100644
--- a/doc/rbd/rbd-cloudstack.rst
+++ b/doc/rbd/rbd-cloudstack.rst
@@ -81,7 +81,7 @@ credentials to access the ``cloudstack`` pool we just created. Although we could
 use ``client.admin`` for this, it's recommended to create a user with only
 access to the ``cloudstack`` pool. ::
 
-    ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cloudstack'
+    ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=cloudstack'
 
 Use the information returned by the command in the next step when adding the
 Primary Storage.
diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst
index 907276fe15b..6f3f06f887d 100644
--- a/doc/rbd/rbd-openstack.rst
+++ b/doc/rbd/rbd-openstack.rst
@@ -132,17 +132,9 @@ Setup Ceph Client Authentication
 If you have `cephx authentication`_ enabled, create a new user for Nova/Cinder
 and Glance. Execute the following::
 
-    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
-    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
-
-If you run an OpenStack version before Mitaka, create the following ``client.cinder`` key::
-
-    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
-
-Since Mitaka introduced the support of RBD snapshots while doing a snapshot of a Nova instance,
-we need to allow the ``client.cinder`` key write access to the ``images`` pool; therefore, create the following key::
-
-    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
+    ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images'
+    ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
+    ceph auth get-or-create client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
 
 Add the keyrings for ``client.cinder``, ``client.glance``, and
 ``client.cinder-backup`` to the appropriate nodes and change their ownership::