===========================
Install Ceph Object Gateway
===========================

As of `firefly` (v0.80), the Ceph Object Gateway runs on Civetweb (embedded
into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb
simplifies the Ceph Object Gateway installation and configuration.

.. note:: To run the Ceph Object Gateway service, you should have a running
          Ceph storage cluster, and the gateway host should have access to the
          public network.

.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You
          may set up a reverse proxy server with SSL to dispatch HTTPS requests
          as HTTP requests to Civetweb.

Execute the Pre-Installation Procedure
--------------------------------------

See Preflight_ and execute the pre-installation procedures on your Ceph Object
Gateway node. Specifically, you should disable ``requiretty`` on your Ceph
Deploy user, set SELinux to ``Permissive``, and set up a Ceph Deploy user with
password-less ``sudo``. For Ceph Object Gateways, you will need to open the
port that Civetweb will use in production.

.. note:: Civetweb runs on port ``7480`` by default.

Install Ceph Object Gateway
---------------------------

From the working directory of your administration server, install the Ceph
Object Gateway package on the Ceph Object Gateway node. For example::

    ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]

The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
it too. The ``ceph`` CLI tools are intended for administrators. To make your
Ceph Object Gateway node an administrator node, execute the following from the
working directory of your administration server::

    ceph-deploy admin <node-name>

Create a Gateway Instance
-------------------------

From the working directory of your administration server, create an instance of
the Ceph Object Gateway on the Ceph Object Gateway node. For example::

    ceph-deploy rgw create <gateway-node1>

Once the gateway is running, you should be able to access it on port ``7480``
with an unauthenticated request like this::

    http://client-node:7480

If the gateway instance is working properly, you should receive a response like
this::

    <?xml version="1.0" encoding="UTF-8"?>
    <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
      <Owner>
        <ID>anonymous</ID>
        <DisplayName></DisplayName>
      </Owner>
      <Buckets>
      </Buckets>
    </ListAllMyBucketsResult>
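
As a quick sanity check, the response body above can be parsed with Python's
standard library. This is an illustrative sketch using the sample document
shown (it is not part of the Ceph tooling), with the XML declaration omitted:

```python
import xml.etree.ElementTree as ET

# Element body of the sample anonymous response shown above.
response = """<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>"""

ns = {'s3': 'http://s3.amazonaws.com/doc/2006-03-01/'}
root = ET.fromstring(response)
owner = root.find('s3:Owner/s3:ID', ns).text
buckets = root.findall('s3:Buckets/s3:Bucket', ns)
print(owner)         # anonymous
print(len(buckets))  # 0 -- an unauthenticated request sees no buckets
```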

If at any point you run into trouble and you want to start over, execute the
following to purge the configuration::

    ceph-deploy purge <gateway-node1> [<gateway-node2>]
    ceph-deploy purgedata <gateway-node1> [<gateway-node2>]

If you execute ``purge``, you must re-install Ceph.

Change the Default Port
-----------------------

Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
port ``80``), modify your Ceph configuration file in the working directory of
your administration server. Add a section entitled
``[client.rgw.<gateway-node>]``, replacing ``<gateway-node>`` with the short
node name of your Ceph Object Gateway node (i.e., the output of ``hostname -s``).

.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
          See `Using SSL with Civetweb`_ for information on how to set that up.

For example, if your node name is ``gateway-node1``, add a section like this
after the ``[global]`` section::

    [client.rgw.gateway-node1]
    rgw_frontends = "civetweb port=80"
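
If you script your cluster configuration, the same section can be appended
programmatically. A minimal sketch using Python's ``configparser``, assuming a
simplified INI-style ``ceph.conf`` (the ``fsid`` value and file path here are
placeholders, not values from your cluster):

```python
import configparser
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'ceph.conf')

# Placeholder [global] section standing in for a real ceph.conf.
conf = configparser.ConfigParser()
conf['global'] = {'fsid': '00000000-0000-0000-0000-000000000000'}
conf['client.rgw.gateway-node1'] = {'rgw_frontends': '"civetweb port=80"'}

with open(path, 'w') as f:
    conf.write(f)

# Read the file back to confirm the gateway section round-trips.
check = configparser.ConfigParser()
check.read(path)
print(check['client.rgw.gateway-node1']['rgw_frontends'])
```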

.. note:: Ensure that you leave no whitespace within ``port=<port-number>`` in
          the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
          heading identifies this portion of the Ceph configuration file as
          configuring a Ceph Storage Cluster client whose client type is a Ceph
          Object Gateway (i.e., ``rgw``) and whose instance name is
          ``gateway-node1``.

Push the updated configuration file to your Ceph Object Gateway node
(and to your other Ceph nodes)::

    ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]

To make the new port setting take effect, restart the Ceph Object
Gateway::

    sudo systemctl restart ceph-radosgw.service

Finally, check that the port you selected is open in the node's
firewall (e.g., port ``80``). If it is not open, add the port and reload the
firewall configuration. If you use the ``firewalld`` daemon, execute::

    sudo firewall-cmd --list-all
    sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
    sudo firewall-cmd --reload

If you use ``iptables``, execute::

    sudo iptables --list
    sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT

Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
your Ceph Object Gateway node.

Once you have finished configuring ``iptables``, make the change persistent so
that it remains in effect when your Ceph Object Gateway node reboots. Execute::

    sudo apt-get install iptables-persistent

A terminal UI will open. Select ``yes`` at the prompts to save the current
``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and the current ``IPv6``
iptables rules to ``/etc/iptables/rules.v6``.

The ``IPv4`` iptables rule that you set in the earlier step will be saved to
``/etc/iptables/rules.v4`` and will persist across reboots.

If you add a new ``IPv4`` iptables rule after installing
``iptables-persistent``, you will have to add it to the rule file. In that case,
execute the following as the ``root`` user::

    iptables-save > /etc/iptables/rules.v4

.. _Using SSL with Civetweb:

Using SSL with Civetweb
-----------------------

Before using SSL with Civetweb, you will need a certificate that matches
the host name that will be used to access the Ceph Object Gateway.
You may wish to obtain one that has `subject alternative name` fields for
more flexibility. If you intend to use S3-style subdomains
(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.

Civetweb requires that the server key, server certificate, and any other
CA or intermediate certificates be supplied in one file. Each of these
items must be in ``pem`` form. Because the combined file contains the
secret key, it should be protected from unauthorized access.

To configure SSL operation, append ``s`` to the port number. Currently
it is not possible to configure ``radosgw`` to listen on both
HTTP and HTTPS; you must pick one. For example::

    [client.rgw.gateway-node1]
    rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
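
The combining step is just concatenation of the key and the certificate chain
into one restricted-permission file. A small illustrative sketch (the file
names and the truncated PEM bodies are placeholders; handle real keys with
care):

```python
import os
import stat
import tempfile

workdir = tempfile.mkdtemp()

# Placeholder PEM contents standing in for a real key and certificate chain.
parts = {
    'server.key': '-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n',
    'server.crt': '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n',
    'ca.crt':     '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n',
}
for name, body in parts.items():
    with open(os.path.join(workdir, name), 'w') as f:
        f.write(body)

# Concatenate key first, then the certificate chain, into one file.
combined = os.path.join(workdir, 'keyandcert.pem')
with open(combined, 'w') as out:
    for name in ('server.key', 'server.crt', 'ca.crt'):
        with open(os.path.join(workdir, name)) as f:
            out.write(f.read())

# The combined file contains the secret key, so restrict access to the owner.
os.chmod(combined, 0o600)
mode = stat.S_IMODE(os.stat(combined).st_mode)
print(oct(mode))  # 0o600
```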

Migrating from Apache to Civetweb
---------------------------------

If you're running the Ceph Object Gateway on Apache and FastCGI with Ceph
Storage v0.80 or above, you're already running Civetweb: it starts with the
``ceph-radosgw`` daemon and runs on port 7480 by default so that it
doesn't conflict with your Apache and FastCGI installation and other commonly
used web service ports. Migrating to Civetweb basically involves removing
your Apache installation. Then, remove the Apache and FastCGI settings
from your Ceph configuration file and reset ``rgw_frontends`` to Civetweb.

Referring back to the description for installing a Ceph Object Gateway with
``ceph-deploy``, notice that the configuration file only has one setting,
``rgw_frontends`` (and that assumes you elected to change the default port).
The ``ceph-deploy`` utility generates the data directory and the keyring for
you, placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
looks in default locations, whereas you may have specified different settings
in your Ceph configuration file. Since you already have keys and a data
directory, you will want to maintain those paths in your Ceph configuration
file if you used something other than the default paths.

A typical Ceph Object Gateway configuration file for an Apache-based deployment
looks something like the following:

On Red Hat Enterprise Linux::

    [client.radosgw.gateway-node1]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = ""
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
    rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
    rgw print continue = false

On Ubuntu::

    [client.radosgw.gateway-node]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log

To modify it for use with Civetweb, simply remove the Apache-specific settings
such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
front end and specify the port number you intend to use. For example::

    [client.radosgw.gateway-node1]
    host = {hostname}
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    log file = /var/log/radosgw/client.radosgw.gateway-node1.log
    rgw_frontends = civetweb port=80

Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::

    sudo systemctl restart ceph-radosgw.service

On Ubuntu execute::

    sudo service radosgw restart id=rgw.<short-hostname>

If you used a port number that is not already open, you will also need to open
that port on your firewall.

Configure Bucket Sharding
-------------------------

A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
(hundreds of thousands to millions of objects) in a single bucket. If you do
not use the gateway administration interface to set quotas for the maximum
number of objects per bucket, the bucket index can suffer significant
performance degradation when users place large numbers of objects into a
bucket.

In Ceph 0.94, you may shard bucket indices to help prevent performance
bottlenecks when you allow a high number of objects per bucket. The
``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum
number of shards per bucket. The default value is ``0``, which means bucket
index sharding is off by default.

To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards``
to a value greater than ``0``.

For simple configurations, you may add ``rgw_override_bucket_index_max_shards``
to your Ceph configuration file. Add it under ``[global]`` to create a
system-wide value. You can also set it for each instance in your Ceph
configuration file.

Once you have changed your bucket sharding configuration in your Ceph
configuration file, restart your gateway. On Red Hat Enterprise Linux execute::

    sudo systemctl restart ceph-radosgw.service

On Ubuntu execute::

    sudo service radosgw restart id=rgw.<short-hostname>

For federated configurations, each zone may have a different ``index_pool``
setting for failover. To make the value consistent for a region's zones, you
may set ``rgw_override_bucket_index_max_shards`` in a gateway's region
configuration. For example::

    radosgw-admin region get > region.json

Open the ``region.json`` file and edit the ``bucket_index_max_shards`` setting
for each named zone. Save the ``region.json`` file and reset the region. For
example::

    radosgw-admin region set < region.json
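
The edit to ``region.json`` can also be scripted. A hedged sketch using a
minimal stand-in region document (a real ``radosgw-admin region get`` document
contains many more fields, and ``8`` is just an example shard count):

```python
import json
import os
import tempfile

# Minimal stand-in for the document produced by `radosgw-admin region get`.
region = {
    "name": "default",
    "zones": [
        {"name": "default", "bucket_index_max_shards": 0},
    ],
}

# Set the same shard count for every named zone.
for zone in region["zones"]:
    zone["bucket_index_max_shards"] = 8

path = os.path.join(tempfile.mkdtemp(), 'region.json')
with open(path, 'w') as f:
    json.dump(region, f, indent=2)

print(region["zones"][0]["bucket_index_max_shards"])  # 8
```

The resulting file would then be fed back with ``radosgw-admin region set``.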

Once you've updated your region, update the region map. For example::

    radosgw-admin regionmap update --name client.rgw.ceph-client

Where ``client.rgw.ceph-client`` is the name of the gateway user.

.. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
          ruleset of SSD-based OSDs may also help with bucket index performance.

.. _Add Wildcard to DNS:

Add Wildcard to DNS
-------------------

To use Ceph with S3-style subdomains (e.g., ``bucket-name.domain-name.com``), you
need to add a wildcard to the DNS record of the DNS server you use with the
``ceph-radosgw`` daemon.

The DNS host name must also be specified in the Ceph configuration file
with the ``rgw dns name = {hostname}`` setting.
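
The wildcard is needed because the gateway maps the leading subdomain of the
``Host`` header to a bucket name. A toy illustration of that mapping (this is a
simplification for explanation, not the gateway's actual code):

```python
def bucket_from_host(host, rgw_dns_name):
    """Return the bucket implied by an S3-style Host header, or None."""
    suffix = '.' + rgw_dns_name
    if host.endswith(suffix):
        # "mybucket.gateway-node1" -> bucket "mybucket"
        return host[:-len(suffix)]
    # Bare host name: a path-style request, no subdomain bucket.
    return None

print(bucket_from_host('mybucket.gateway-node1', 'gateway-node1'))  # mybucket
print(bucket_from_host('gateway-node1', 'gateway-node1'))           # None
```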

For ``dnsmasq``, add the following address setting with a dot (.) prepended to
the host name::

    address=/.{hostname-or-fqdn}/{host-ip-address}

For example::

    address=/.gateway-node1/192.168.122.75

For ``bind``, add a wildcard to the DNS record. For example::

    $TTL    604800
    @       IN      SOA     gateway-node1. root.gateway-node1. (
                                  2         ; Serial
                             604800         ; Refresh
                              86400         ; Retry
                            2419200         ; Expire
                             604800 )       ; Negative Cache TTL
    ;
    @       IN      NS      gateway-node1.
    @       IN      A       192.168.122.113
    *       IN      CNAME   @

Restart your DNS server and ping your server with a subdomain to ensure that
your ``ceph-radosgw`` daemon can process the subdomain requests::

    ping mybucket.{hostname}

For example::

    ping mybucket.gateway-node1

Add Debugging (if needed)
-------------------------

Once you finish the setup procedure, if you encounter issues with your
configuration, you can add debugging to the ``[global]`` section of your Ceph
configuration file and restart the gateway(s) to help troubleshoot any
configuration issues. For example::

    [global]
    # append the following in the [global] section
    debug ms = 1
    debug rgw = 20

Using the Gateway
-----------------

To use the REST interfaces, first create an initial Ceph Object Gateway user
for the S3 interface. Then, create a subuser for the Swift interface. Finally,
verify that the created users are able to access the gateway.

Create a RADOSGW User for S3 Access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A ``radosgw`` user needs to be created and granted access. The command ``man
radosgw-admin`` will provide information on additional command options.

To create the user, execute the following on the ``gateway host``::

    sudo radosgw-admin user create --uid="testuser" --display-name="First User"

The output of the command will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [],
        "keys": [{
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }
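
Scripts often need to pull the credentials out of this output. An illustrative
sketch that parses a trimmed-down copy of the JSON shown above (in practice you
would feed in the real command output):

```python
import json

# Trimmed-down copy of the `radosgw-admin user create` output shown above.
output = '''
{
    "user_id": "testuser",
    "keys": [{
        "user": "testuser",
        "access_key": "I0PJDPCIYZ665MW88W9R",
        "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
    }]
}
'''

user = json.loads(output)
# Pick the key entry that belongs to the user itself (not a subuser).
key = next(k for k in user['keys'] if k['user'] == 'testuser')
print(key['access_key'])  # I0PJDPCIYZ665MW88W9R
```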

.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
          needed for access validation.

.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
               JSON escape character ``\`` in ``access_key`` or ``secret_key``,
               and some clients do not know how to handle JSON escape
               characters. Remedies include removing the JSON escape character
               ``\``, encapsulating the string in quotes, regenerating the key
               (and ensuring that it does not have a JSON escape character), or
               specifying the key and secret manually. Also, if ``radosgw-admin``
               generates a JSON escape character ``\`` and a forward slash ``/``
               together in a key, like ``\/``, only remove the JSON escape
               character ``\``. Do not remove the forward slash ``/``, as it is
               a valid character in the key.
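
Note that a standards-compliant JSON parser already decodes the ``\/`` escape;
manual removal is only needed when a key is copied verbatim out of the raw
text. A short illustration:

```python
import json

raw = r'{"secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"}'

# Parsing the JSON decodes the escape: the usable key contains a plain slash.
decoded = json.loads(raw)['secret_key']
print(decoded)  # 244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA

# If the value was instead copied verbatim from the raw output, strip only
# the backslash and keep the slash.
copied = r'244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA'
fixed = copied.replace('\\/', '/')
print(fixed == decoded)  # True
```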

Create a Swift User
^^^^^^^^^^^^^^^^^^^

A Swift subuser needs to be created if this kind of access is needed. Creating
a Swift user is a two-step process. The first step is to create the user. The
second is to create the secret key.

Execute the following steps on the ``gateway host``:

Create the Swift user::

    sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full

The output will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [{
            "id": "testuser:swift",
            "permissions": "full-control"
        }],
        "keys": [{
            "user": "testuser:swift",
            "access_key": "3Y1LNW4Q6X0Y53A52DET",
            "secret_key": ""
        }, {
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }

Create the secret key::

    sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret

The output will be something like the following::

    {
        "user_id": "testuser",
        "display_name": "First User",
        "email": "",
        "suspended": 0,
        "max_buckets": 1000,
        "auid": 0,
        "subusers": [{
            "id": "testuser:swift",
            "permissions": "full-control"
        }],
        "keys": [{
            "user": "testuser:swift",
            "access_key": "3Y1LNW4Q6X0Y53A52DET",
            "secret_key": ""
        }, {
            "user": "testuser",
            "access_key": "I0PJDPCIYZ665MW88W9R",
            "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
        }],
        "swift_keys": [{
            "user": "testuser:swift",
            "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"
        }],
        "caps": [],
        "op_mask": "read, write, delete",
        "default_placement": "",
        "placement_tags": [],
        "bucket_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "user_quota": {
            "enabled": false,
            "max_size_kb": -1,
            "max_objects": -1
        },
        "temp_url_keys": []
    }

Access Verification
^^^^^^^^^^^^^^^^^^^

Test S3 Access
""""""""""""""

You need to write and run a Python test script to verify S3 access. The S3
access test script will connect to the ``radosgw``, create a new bucket, and
list all buckets. The values for ``aws_access_key_id`` and
``aws_secret_access_key`` are taken from the values of ``access_key`` and
``secret_key`` returned by the ``radosgw-admin`` command.

Execute the following steps:

#. You will need to install the ``python-boto`` package::

      sudo yum install python-boto

#. Create the Python script::

      vi s3test.py

#. Add the following contents to the file::

      import boto.s3.connection

      access_key = 'I0PJDPCIYZ665MW88W9R'
      secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'
      conn = boto.connect_s3(
          aws_access_key_id=access_key,
          aws_secret_access_key=secret_key,
          host='{hostname}', port={port},
          is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
      )

      bucket = conn.create_bucket('my-new-bucket')
      for bucket in conn.get_all_buckets():
          print "{name} {created}".format(
              name=bucket.name,
              created=bucket.creation_date,
          )

   Replace ``{hostname}`` with the hostname of the host where you have
   configured the gateway service, i.e., the ``gateway host``. Replace
   ``{port}`` with the port number you are using with Civetweb.

#. Run the script::

      python s3test.py

   The output will be something like the following::

      my-new-bucket 2015-02-16T17:09:10.000Z

Test Swift Access
"""""""""""""""""

Swift access can be verified via the ``swift`` command line client. The command
``man swift`` will provide more information on available command line options.

To install the ``swift`` client, execute the following commands. On Red Hat
Enterprise Linux::

    sudo yum install python-setuptools
    sudo easy_install pip
    sudo pip install --upgrade setuptools
    sudo pip install --upgrade python-swiftclient

On Debian-based distributions::

    sudo apt-get install python-setuptools
    sudo easy_install pip
    sudo pip install --upgrade setuptools
    sudo pip install --upgrade python-swiftclient

To test Swift access, execute the following::

    swift -A http://{IP ADDRESS}:{port}/auth/1.0 -U testuser:swift -K '{swift_secret_key}' list

Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
``{swift_secret_key}`` with its value from the output of the ``radosgw-admin
key create`` command executed for the ``swift`` user. Replace ``{port}`` with
the port number you are using with Civetweb (e.g., ``7480`` is the default). If
you don't replace the port, it will default to port ``80``.

For example::

    swift -A http://10.19.143.116:7480/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list

The output should be::

    my-new-bucket

.. _Preflight: ../../start/quick-start-preflight