=============================
 Block Devices and OpenStack
=============================

.. index:: Ceph Block Device; OpenStack

You may use Ceph Block Device images with OpenStack through ``libvirt``, which
configures the QEMU interface to ``librbd``. Ceph stripes block device images as
objects across the cluster, which means that large Ceph Block Device images have
better performance than a standalone server!
To use Ceph Block Devices with OpenStack, you must install QEMU, ``libvirt``,
and OpenStack first. We recommend using a separate physical node for your
OpenStack installation. OpenStack recommends a minimum of 8GB of RAM and a
quad-core processor. The following diagram depicts the OpenStack/Ceph
technology stack.
.. ditaa::  +---------------------------------------------------+
            |                    OpenStack                      |
            +---------------------------------------------------+
            |                     libvirt                       |
            +------------------------+--------------------------+
                                     |
                                     | configures
                                     v
            +---------------------------------------------------+
            |                       QEMU                        |
            +---------------------------------------------------+
            |                      librbd                       |
            +---------------------------------------------------+
            |                     librados                      |
            +------------------------+-+------------------------+
            |          OSDs          | |        Monitors        |
            +------------------------+ +------------------------+
.. important:: To use Ceph Block Devices with OpenStack, you must have
   access to a running Ceph Storage Cluster.

Three parts of OpenStack integrate with Ceph's block devices:

- **Images**: OpenStack Glance manages images for VMs. Images are immutable.
  OpenStack treats images as binary blobs and downloads them accordingly.

- **Volumes**: Volumes are block devices. OpenStack uses volumes to boot VMs,
  or to attach volumes to running VMs. OpenStack manages volumes using
  Cinder services.

- **Guest Disks**: Guest disks are guest operating system disks. By default,
  when you boot a virtual machine, its disk appears as a file on the filesystem
  of the hypervisor (usually under ``/var/lib/nova/instances/<uuid>/``). Prior
  to OpenStack Havana, the only way to boot a VM in Ceph was to use the
  boot-from-volume functionality of Cinder. However, now it is possible to boot
  every virtual machine inside Ceph directly without using Cinder, which is
  advantageous because it allows you to perform maintenance operations easily
  with the live-migration process. Additionally, if your hypervisor dies it is
  also convenient to trigger ``nova evacuate`` and run the virtual machine
  elsewhere almost seamlessly.
You can use OpenStack Glance to store images in a Ceph Block Device, and you
can use Cinder to boot a VM using a copy-on-write clone of an image.

The instructions below detail the setup for Glance, Cinder and Nova, although
they do not have to be used together. You may store images in Ceph block devices
while running VMs using a local disk, or vice versa.
.. important:: Ceph doesn't support QCOW2 for hosting a virtual machine disk.
   Thus if you want to boot virtual machines in Ceph (ephemeral backend or boot
   from volume), the Glance image format must be ``RAW``.
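
If you are not sure of an image's current format, you can inspect it with
``qemu-img`` before uploading it to Glance (the file name below is only an
example)::

    qemu-img info precise-cloudimg.raw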
.. tip:: This document describes using Ceph Block Devices with OpenStack Havana.
   For earlier versions of OpenStack see
   `Block Devices and OpenStack (Dumpling)`_.

.. index:: pools; OpenStack
Create a Pool
=============
By default, Ceph block devices use the ``rbd`` pool. You may use any available
pool. We recommend creating a pool for Cinder and a pool for Glance. Ensure
your Ceph cluster is running, then create the pools. ::

    ceph osd pool create volumes 128
    ceph osd pool create images 128
    ceph osd pool create backups 128
    ceph osd pool create vms 128
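
You can verify that the pools exist and check the number of placement groups
assigned to each one, for example::

    ceph osd lspools
    ceph osd pool get volumes pg_num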
See `Create a Pool`_ for detail on specifying the number of placement groups for
your pools, and `Placement Groups`_ for details on the number of placement
groups you should set for your pools.
.. _Create a Pool: ../../rados/operations/pools#createpool
.. _Placement Groups: ../../rados/operations/placement-groups
Configure OpenStack Ceph Clients
================================
The nodes running ``glance-api``, ``cinder-volume``, ``nova-compute`` and
``cinder-backup`` act as Ceph clients. Each requires the ``ceph.conf`` file::

    ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
Install Ceph client packages
----------------------------
On the ``glance-api`` node, you'll need the Python bindings for ``librbd``::

    sudo apt-get install python-rbd
    sudo yum install python-rbd
On the ``nova-compute``, ``cinder-backup`` and on the ``cinder-volume`` node,
use both the Python bindings and the client command line tools::

    sudo apt-get install ceph-common
    sudo yum install ceph-common
Setup Ceph Client Authentication
--------------------------------
If you have `cephx authentication`_ enabled, create a new user for Nova/Cinder
and Glance. Execute the following::

    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
Add the keyrings for ``client.cinder``, ``client.glance``, and
``client.cinder-backup`` to the appropriate nodes and change their ownership::

    ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
    ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
    ceph auth get-or-create client.cinder | ssh {your-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
    ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
    ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
    ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
Nodes running ``nova-compute`` need the keyring file for the ``nova-compute``
process::

    ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring

They also need to store the secret key of the ``client.cinder`` user in
``libvirt``. The libvirt process needs it to access the cluster while attaching
a block device from Cinder.
Create a temporary copy of the secret key on the nodes running
``nova-compute``::

    ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
Then, on the compute nodes, add the secret key to ``libvirt`` and remove the
temporary copy of the key::

    uuidgen
    457eb676-33da-42ec-9a8c-9293d545c337

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    sudo virsh secret-define --file secret.xml
    Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
    sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Save the uuid of the secret for configuring ``nova-compute`` later.
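
If you need to look it up again later, you can list the secrets known to
``libvirt``, for example::

    sudo virsh secret-list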
.. important:: You don't necessarily need the UUID on all the compute nodes.
   However from a platform consistency perspective, it's better to keep the
   same UUID.
.. _cephx authentication: ../../rados/operations/authentication
Configure OpenStack to use Ceph
===============================
Configuring Glance
------------------
Glance can use multiple back ends to store images. To use Ceph block devices by
default, configure Glance like the following.
Prior to Juno
~~~~~~~~~~~~~
Edit ``/etc/glance/glance-api.conf`` and add under the ``[DEFAULT]`` section::

    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images
    rbd_store_chunk_size = 8

Juno
~~~~

Edit ``/etc/glance/glance-api.conf`` and add under the ``[glance_store]`` section::

    [DEFAULT]
    ...
    default_store = rbd
    ...

    [glance_store]
    stores = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8
.. important:: Glance has not completely moved to the 'glance_store' section
   yet, so ``default_store`` still needs to be configured in the ``[DEFAULT]``
   section until Kilo.
Kilo
~~~~

Edit ``/etc/glance/glance-api.conf`` and add under the ``[glance_store]`` section::

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    rbd_store_chunk_size = 8

For more information about the configuration options available in Glance,
please refer to the OpenStack Configuration Reference: http://docs.openstack.org/.
Any OpenStack version
~~~~~~~~~~~~~~~~~~~~~

If you want to enable copy-on-write cloning of images, also add under the
``[DEFAULT]`` section::

    show_image_direct_url = True
Note that this exposes the back end location via Glance's API, so the endpoint
with this option enabled should not be publicly accessible.
Disable the Glance cache management to avoid images getting cached under
``/var/lib/glance/image-cache/``, assuming your configuration file has
``flavor = keystone+cachemanagement``::

    [paste_deploy]
    flavor = keystone
Image properties
~~~~~~~~~~~~~~~~

We recommend using the following properties for your images (an example of
setting them follows the list):

- ``hw_scsi_model=virtio-scsi``: add the virtio-scsi controller and get better
  performance and support for discard operations
- ``hw_disk_bus=scsi``: connect every Cinder block device to that controller
- ``hw_qemu_guest_agent=yes``: enable the QEMU guest agent
- ``os_require_quiesce=yes``: send fs-freeze/thaw calls through the QEMU guest agent
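
As a sketch, assuming the ``glance`` command line client and an existing image
ID, the properties can be set like this::

    glance image-update {image-id} \
        --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi \
        --property hw_qemu_guest_agent=yes \
        --property os_require_quiesce=yes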
Configuring Cinder
------------------
OpenStack requires a driver to interact with Ceph block devices. You must also
specify the pool name for the block device. On your OpenStack node, edit
``/etc/cinder/cinder.conf`` by adding::

    [DEFAULT]
    ...
    enabled_backends = ceph
    ...
    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot = false
    rbd_max_clone_depth = 5
    rbd_store_chunk_size = 4
    rados_connect_timeout = -1
    glance_api_version = 2
If you're using `cephx authentication`_, also configure the user and uuid of
the secret you added to ``libvirt`` as documented earlier::

    [ceph]
    ...
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
Note that if you are configuring multiple cinder back ends,
``glance_api_version = 2`` must be in the ``[DEFAULT]`` section.
Configuring Cinder Backup
-------------------------

OpenStack Cinder Backup requires a specific daemon (``cinder-backup``), so
don't forget to install it.
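
The package name may vary with your distribution; for example, on Debian based
systems the daemon is typically provided by the ``cinder-backup`` package,
while on Red Hat based systems it ships with ``openstack-cinder``::

    sudo apt-get install cinder-backup
    sudo yum install openstack-cinder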
On your Cinder Backup node, edit ``/etc/cinder/cinder.conf`` and add::

    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_conf = /etc/ceph/ceph.conf
    backup_ceph_user = cinder-backup
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool = backups
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
2012-09-18 20:00:58 +00:00
2014-06-02 15:11:47 +00:00
Configuring Nova to attach Ceph RBD block device
------------------------------------------------
In order to attach Cinder devices (either normal block or by issuing a boot
from volume), you must tell Nova (and libvirt) which user and UUID to refer to
when attaching the device. libvirt will refer to this user when connecting and
authenticating with the Ceph cluster. On every Compute node, add the following
to ``/etc/nova/nova.conf``::

    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
These two flags are also used by the Nova ephemeral backend.
Configuring Nova
----------------
In order to boot all the virtual machines directly into Ceph, you must
configure the ephemeral backend for Nova.
It is recommended to enable the RBD cache in your Ceph configuration file
(enabled by default since Giant). Moreover, enabling the admin socket
brings a lot of benefits while troubleshooting. Having one socket per
virtual machine that uses a Ceph block device helps when investigating
performance and/or misbehavior.

This socket can be accessed like this::

    ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
Now on every compute node, edit your Ceph configuration file::

    [client]
        rbd cache = true
        rbd cache writethrough until flush = true
        admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
        log file = /var/log/qemu/qemu-guest-$pid.log
        rbd concurrent management ops = 20

Configure the permissions of these paths::

    mkdir -p /var/run/ceph/guests/ /var/log/qemu/
    chown qemu:libvirtd /var/run/ceph/guests /var/log/qemu/

Note that user ``qemu`` and group ``libvirtd`` can vary depending on your system.
The provided example works for RedHat based systems.
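
On Debian based systems, for example, the equivalent ownership may be::

    chown libvirt-qemu:kvm /var/run/ceph/guests /var/log/qemu/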
.. tip:: If your virtual machine is already running you can simply restart it
   to get the socket.
Havana and Icehouse
~~~~~~~~~~~~~~~~~~~
Havana and Icehouse require patches to implement copy-on-write cloning and fix
bugs with image size and live migration of ephemeral disks on rbd. These are
available in branches based on upstream Nova `stable/havana`_ and
`stable/icehouse`_. Using them is not mandatory but **highly recommended** in
order to take advantage of the copy-on-write clone functionality.
On every Compute node, edit ``/etc/nova/nova.conf`` and add::

    libvirt_images_type = rbd
    libvirt_images_rbd_pool = vms
    libvirt_images_rbd_ceph_conf = /etc/ceph/ceph.conf
    libvirt_disk_cachemodes="network=writeback"
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
It is also a good practice to disable file injection. While booting an
instance, Nova usually attempts to open the rootfs of the virtual machine.
Then, Nova injects values such as password, ssh keys etc. directly into the
filesystem. However, it is better to rely on the metadata service and
``cloud-init``.

On every Compute node, edit ``/etc/nova/nova.conf`` and add::

    libvirt_inject_password = false
    libvirt_inject_key = false
    libvirt_inject_partition = -2
To ensure a proper live-migration, use the following flags::

    libvirt_live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Juno
~~~~
In Juno, the Ceph block device settings were moved under the ``[libvirt]``
section. On every Compute node, edit ``/etc/nova/nova.conf`` under the
``[libvirt]`` section and add::

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
    disk_cachemodes="network=writeback"
It is also a good practice to disable file injection. While booting an
instance, Nova usually attempts to open the rootfs of the virtual machine.
Then, Nova injects values such as password, ssh keys etc. directly into the
filesystem. However, it is better to rely on the metadata service and
``cloud-init``.

On every Compute node, edit ``/etc/nova/nova.conf`` and add the following
under the ``[libvirt]`` section::

    inject_password = false
    inject_key = false
    inject_partition = -2
To ensure a proper live-migration, use the following flags (under the ``[libvirt]`` section)::

    live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
Kilo
~~~~

Enable discard support for virtual machine ephemeral root disk::

    [libvirt]
    ...
    ...
    hw_disk_discard = unmap # enable discard support (be careful of performance)
Restart OpenStack
=================
To activate the Ceph block device driver and load the block device pool name
into the configuration, you must restart OpenStack. Thus, for Debian based
systems execute these commands on the appropriate nodes::

    sudo glance-control api restart
    sudo service nova-compute restart
    sudo service cinder-volume restart
    sudo service cinder-backup restart

For Red Hat based systems execute::

    sudo service openstack-glance-api restart
    sudo service openstack-nova-compute restart
    sudo service openstack-cinder-volume restart
    sudo service openstack-cinder-backup restart
Once OpenStack is up and running, you should be able to create a volume
and boot from it.
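
To confirm that volumes and images actually land in Ceph, you can list the
contents of the pools, for example::

    rbd ls volumes
    rbd ls images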
Booting from a Block Device
===========================
You can create a volume from an image using the Cinder command line tool::

    cinder create --image-id {id of image} --display-name {name of volume} {size of volume}
Note that the image must be in RAW format. You can use `qemu-img`_ to convert
from one format to another. For example::

    qemu-img convert -f {source-format} -O {output-format} {source-filename} {output-filename}
    qemu-img convert -f qcow2 -O raw precise-cloudimg.img precise-cloudimg.raw
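
If you still need to upload the converted image, a sketch using the ``glance``
command line client (image name and file name are only examples) looks like
this::

    glance image-create --name precise-cloudimg --is-public True \
        --disk-format raw --container-format bare --file precise-cloudimg.raw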
When Glance and Cinder are both using Ceph block devices, the image is a
copy-on-write clone, so it can create a new volume quickly. In the OpenStack
dashboard, you can boot from that volume by performing the following steps
(a command line alternative is sketched after the list):

#. Launch a new instance.
#. Choose the image associated to the copy-on-write clone.
#. Select 'boot from volume'.
#. Select the volume you created.
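
From the command line, a sketch of the same boot-from-volume operation with the
``nova`` client might look like this (flavor, volume id and instance name are
illustrative, and the exact flag may depend on your client version)::

    nova boot --flavor m1.small --boot-volume {id of volume} {name of instance}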
.. _qemu-img: ../qemu-rbd/#running-qemu-with-rbd
.. _Block Devices and OpenStack (Dumpling): http://ceph.com/docs/dumpling/rbd/rbd-openstack
.. _stable/havana: https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd
.. _stable/icehouse: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse