=============================
Block Devices and CloudStack
=============================
You may use Ceph Block Device images with CloudStack 4.0 and higher through
``libvirt``, which configures the QEMU interface to ``librbd``. Ceph stripes
block device images as objects across the cluster, which means that large Ceph
Block Device images have better performance than a standalone server!

To use Ceph Block Devices with CloudStack 4.0 and higher, you must install
QEMU, ``libvirt``, and CloudStack first. We recommend using a separate physical
host for your CloudStack installation. CloudStack recommends a minimum of 4GB
of RAM and a dual-core processor, but more CPU and RAM will improve
performance. The following diagram depicts the CloudStack/Ceph technology
stack.

.. ditaa::

   +---------------------------------------------------+
   |                     CloudStack                     |
   +---------------------------------------------------+
   |                      libvirt                       |
   +------------------------+--------------------------+
                            |
                            | configures
                            v
   +---------------------------------------------------+
   |                        QEMU                        |
   +---------------------------------------------------+
   |                       librbd                       |
   +---------------------------------------------------+
   |                      librados                      |
   +------------------------+-+------------------------+
   |          OSDs          | |        Monitors        |
   +------------------------+ +------------------------+

.. important:: To use Ceph Block Devices with CloudStack, you must have
   access to a running Ceph Storage Cluster.
CloudStack integrates with Ceph's block devices to provide CloudStack with a
back end for CloudStack's Primary Storage. The instructions below detail the
setup for CloudStack Primary Storage.

.. note:: We recommend installing with Ubuntu 14.04 or later so that
   you can use package installation instead of having to compile
   libvirt from source.

Installing and configuring QEMU for use with CloudStack doesn't require any
special handling. Ensure that you have a running Ceph Storage Cluster. Install
QEMU and configure it for use with Ceph (see `Install and Configure QEMU`_);
then, install ``libvirt`` version 0.9.13 or higher (you may need to compile it
from source) and ensure that it is running with Ceph (see `Install and
Configure libvirt`_).
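
As a quick sanity check (a sketch; binary names and paths vary by
distribution), verify that your QEMU build supports the RBD format and that
``libvirt`` is recent enough::

    # 'rbd' appears in qemu-img's supported formats when QEMU is built
    # against librbd
    qemu-img --help | grep rbd

    # libvirt must be version 0.9.13 or higher
    libvirtd --version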

.. note:: Ubuntu 14.04 and CentOS 7.2 will have ``libvirt`` with RBD storage
   pool support enabled by default.

.. index:: pools; CloudStack

Create a Pool
=============
By default, Ceph block devices use the ``rbd`` pool. Create a pool for
CloudStack Primary Storage. Ensure your Ceph cluster is running, then create
the pool::

    ceph osd pool create cloudstack

See `Create a Pool`_ for details on specifying the number of placement groups
for your pools, and `Placement Groups`_ for details on the number of placement
groups you should set for your pools.
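
For example (a sketch; the placement-group counts below are hypothetical and
should be sized to your cluster's OSD count), you can specify ``pg_num`` and
``pgp_num`` explicitly when creating the pool::

    ceph osd pool create cloudstack 128 128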

A newly created pool must be initialized prior to use. Use the ``rbd`` tool
to initialize the pool::

    rbd pool init cloudstack
Create a Ceph User
==================

To access the Ceph cluster we require a Ceph user which has the correct
credentials to access the ``cloudstack`` pool we just created. Although we
could use ``client.admin`` for this, it's recommended to create a user with
access to the ``cloudstack`` pool only::

    ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=cloudstack'
Use the information returned by the command in the next step when adding the
Primary Storage.
See `User Management`_ for additional details.
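
When CloudStack asks for the user's secret it expects only the key itself, not
the full keyring entry. A convenient way (a sketch, assuming the
``client.cloudstack`` user created above) to print just the key::

    ceph auth get-key client.cloudstack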
Add Primary Storage
===================
To add a Ceph block device as Primary Storage, complete the following steps:
#. Log in to the CloudStack UI.
#. Click **Infrastructure** on the left side navigation bar.
#. Select **View All** under **Primary Storage**.
#. Click the **Add Primary Storage** button on the top right hand side.
#. Fill in the following information, according to your infrastructure setup:
2013-04-15 14:39:32 +00:00
   - Scope (i.e. Cluster or Zone-Wide).
   - Zone.
   - Pod.
   - Cluster.
   - Name of Primary Storage.
   - For **Protocol**, select ``RBD``.

   - For **Provider**, select the appropriate provider type (i.e.
     DefaultPrimary, SolidFire, SolidFireShared, or CloudByte). Depending on
     the provider chosen, fill out the information pertinent to your setup.

#. Add cluster information (``cephx`` is supported).

   - For **RADOS Monitor**, provide the IP address of a Ceph monitor node.

   - For **RADOS Pool**, provide the name of an RBD pool.

   - For **RADOS User**, provide a user that has sufficient rights to the RBD
     pool. Note: Do not include the ``client.`` part of the user.

   - For **RADOS Secret**, provide the user's secret.

   - **Storage Tags** are optional. Use tags at your own discretion. For more
     information about storage tags in CloudStack, refer to `Storage Tags`_.

#. Click **OK**.
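
CloudStack expects the RADOS user without its ``client.`` prefix (e.g.
``cloudstack``, not ``client.cloudstack``). A minimal shell sketch of
stripping that prefix from a keyring-style name::

    # Ceph names users as "client.<name>"; CloudStack wants only "<name>"
    rados_user="client.cloudstack"
    echo "${rados_user#client.}"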
Create a Disk Offering
======================

To create a new disk offering, refer to `Create a New Disk Offering (4.2.0)`_.
Create a disk offering that matches the ``rbd`` tag: the
``StoragePoolAllocator`` will then choose the ``rbd``-tagged pool when
searching for a suitable storage pool. If the disk offering doesn't match the
``rbd`` tag, the ``StoragePoolAllocator`` may select a different pool instead
of the one you created (e.g., ``cloudstack``).
Limitations
===========
- CloudStack will only bind to one monitor. (You can, however, create a
  round-robin DNS record over multiple monitors.)
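
A round-robin record is simply multiple A records for the same name (a sketch
with hypothetical IP addresses and zone-file syntax; adapt to your DNS
server)::

    ; each query for ceph-mon.example.com rotates through the monitors
    ceph-mon.example.com.  300  IN  A  10.0.0.1
    ceph-mon.example.com.  300  IN  A  10.0.0.2
    ceph-mon.example.com.  300  IN  A  10.0.0.3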
.. _Create a Pool: ../../rados/operations/pools#createpool
.. _Placement Groups: ../../rados/operations/placement-groups
.. _Install and Configure QEMU: ../qemu-rbd
.. _Install and Configure libvirt: ../libvirt
.. _KVM Hypervisor Host Installation: http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/hypervisor-kvm-install-flow.html
.. _Storage Tags: http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/4.11/storage.html#storage-tags
.. _Create a New Disk Offering (4.2.0): http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/compute-disk-service-offerings.html#creating-disk-offerings
.. _User Management: ../../rados/operations/user-management