=====================
Preflight Checklist
=====================

.. versionadded:: 0.60

Thank you for trying Ceph! We recommend setting up a ``ceph-deploy`` admin
:term:`node` and a 3-node :term:`Ceph Storage Cluster` to explore the basics of
Ceph. This **Preflight Checklist** will help you prepare a ``ceph-deploy``
admin node and three Ceph Nodes (or virtual machines) that will host your Ceph
Storage Cluster. Before proceeding any further, see `OS Recommendations`_ to
verify that you have a supported distribution and version of Linux. When
you use a single Linux distribution and version across the cluster, it will
make it easier for you to troubleshoot issues that arise in production.

In the descriptions below, :term:`Node` refers to a single machine.

.. include:: quick-common.rst

Ceph Deploy Setup
=================

Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
``ceph-deploy``.


Advanced Package Tool (APT)
---------------------------

For Debian and Ubuntu distributions, perform the following steps:

#. Add the release key::

      wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#. Add the Ceph packages to your repository. Replace ``{ceph-stable-release}``
   with a stable Ceph release (e.g., ``hammer``, ``jewel``, etc.).
   For example::

      echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

#. Update your repository and install ``ceph-deploy``::

      sudo apt-get update && sudo apt-get install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``http://download.ceph.com/`` by
   ``http://eu.ceph.com/``.

Red Hat Package Manager (RPM)
-----------------------------

For CentOS 7, perform the following steps:

#. On Red Hat Enterprise Linux 7, register the target machine with
   ``subscription-manager``, verify your subscriptions, and enable the
   "Extras" repository for package dependencies. For example::

      sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
   repository. Please see the `EPEL wiki`_ page for more information.

#. On CentOS, you can execute the following command chain::

      sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

#. Add the package to your repository. Open a text editor and create a
   Yellowdog Updater, Modified (YUM) entry. Use the file path
   ``/etc/yum.repos.d/ceph.repo``. For example::

      sudo vim /etc/yum.repos.d/ceph.repo

   Paste the following example code. Replace ``{ceph-release}`` with
   the recent major release of Ceph (e.g., ``jewel``). Replace ``{distro}``
   with your Linux distribution (e.g., ``el7`` for CentOS 7). Finally, save
   the contents to the ``/etc/yum.repos.d/ceph.repo`` file. ::

      [ceph-noarch]
      name=Ceph noarch packages
      baseurl=http://download.ceph.com/rpm-{ceph-release}/{distro}/noarch
      enabled=1
      gpgcheck=1
      type=rpm-md
      gpgkey=https://download.ceph.com/keys/release.asc

#. Update your repository and install ``ceph-deploy``::

      sudo yum update && sudo yum install ceph-deploy

.. note:: You can also use the EU mirror eu.ceph.com for downloading your
   packages. Simply replace ``http://download.ceph.com/`` by
   ``http://eu.ceph.com/``.

Ceph Node Setup
===============

The admin node must have password-less SSH access to Ceph nodes.
When ``ceph-deploy`` logs in to a Ceph node as a particular user, that
user must have passwordless ``sudo`` privileges.

Install NTP
-----------

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
prevent issues arising from clock drift. See `Clock`_ for details.

On CentOS / RHEL, execute::

    sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute::

    sudo apt-get install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
same NTP time server. See `NTP`_ for details.
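
As a minimal sketch of enabling the service on systemd-based systems (the
unit names ``ntpd`` and ``ntp`` are the usual distribution defaults; verify
them on your own systems)::

    # CentOS / RHEL 7
    sudo systemctl enable ntpd && sudo systemctl start ntpd

    # Debian / Ubuntu (the package typically enables the service on install)
    sudo systemctl enable ntp && sudo systemctl start ntp
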

Install SSH Server
------------------

For **ALL** Ceph Nodes perform the following steps:

#. Install an SSH server (if necessary) on each Ceph Node::

      sudo apt-get install openssh-server

   or::

      sudo yum install openssh-server

#. Ensure the SSH server is running on **ALL** Ceph Nodes.
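
   A quick way to confirm the daemon is up on a node (the process name
   ``sshd`` is the OpenSSH default)::

      pgrep -x sshd && echo "sshd is running"
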

Create a Ceph Deploy User
-------------------------

The ``ceph-deploy`` utility must log in to a Ceph node as a user
that has passwordless ``sudo`` privileges, because it needs to install
software and configuration files without prompting for passwords.

Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
specify any user that has password-less ``sudo`` (including ``root``, although
this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
user you specify must have password-less SSH access to the Ceph node, as
``ceph-deploy`` will not prompt you for a password.

We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
name across the cluster may improve ease of use (not required), but you should
avoid obvious user names, because hackers typically use them with brute force
hacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
substituting ``{username}`` for the user name you define, describes how to
create a user with passwordless ``sudo``.

.. note:: Starting with the `Infernalis release`_, the "ceph" user name is
   reserved for the Ceph daemons. If the "ceph" user already exists on the
   Ceph nodes, removing the user must be done before attempting an upgrade.

#. Create a new user on each Ceph Node. ::

      ssh user@ceph-server
      sudo useradd -d /home/{username} -m {username}
      sudo passwd {username}

#. For the new user you added to each Ceph node, ensure that the user has
   ``sudo`` privileges. ::

      echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
      sudo chmod 0440 /etc/sudoers.d/{username}
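
   To confirm the privileges before running ``ceph-deploy``, you can list
   them for the new user (``{username}`` is the same placeholder used
   throughout this guide)::

      sudo -l -U {username}    # should report "(root) NOPASSWD: ALL"
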

Enable Password-less SSH
------------------------

Since ``ceph-deploy`` will not prompt for a password, you must generate
SSH keys on the admin node and distribute the public key to each Ceph
node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
monitors.

#. Generate the SSH keys, but do not use ``sudo`` or the
   ``root`` user. Leave the passphrase empty::

      ssh-keygen

      Generating public/private key pair.
      Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /ceph-admin/.ssh/id_rsa.
      Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
   you created with `Create a Ceph Deploy User`_. ::

      ssh-copy-id {username}@node1
      ssh-copy-id {username}@node2
      ssh-copy-id {username}@node3

#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
   admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
   created without requiring you to specify ``--username {username}`` each
   time you execute ``ceph-deploy``. This has the added benefit of streamlining
   ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
   created::

      Host node1
         Hostname node1
         User {username}
      Host node2
         Hostname node2
         User {username}
      Host node3
         Hostname node3
         User {username}

Enable Networking On Bootup
---------------------------

Ceph OSDs peer with each other and report to Ceph Monitors over the network.
If networking is ``off`` by default, the Ceph cluster cannot come online
during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the
networking interface(s) off by default. Ensure that, during boot up, your
network interface(s) turn(s) on so that your Ceph daemons can communicate over
the network. For example, on Red Hat and CentOS, navigate to
``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
has ``ONBOOT`` set to ``yes``.
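
As a sketch, a minimal ``ifcfg`` file might contain the following (``eth0``
is only an example interface name; yours may differ)::

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    BOOTPROTO=dhcp
    ONBOOT=yes
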

Ensure Connectivity
-------------------

Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
Address hostname resolution issues as necessary.

.. note:: Hostnames should resolve to a network IP address, not to the
   loopback IP address (e.g., hostnames should resolve to an IP address other
   than ``127.0.0.1``). If you use your admin node as a Ceph node, you
   should also ensure that it resolves to its hostname and IP address
   (i.e., not its loopback IP address).
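
A quick sanity check you can run on each node (this assumes ``getent`` is
available, as on most Linux systems)::

    short=$(hostname -s)
    getent hosts "$short"    # the address printed should not be 127.0.0.1
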

Open Required Ports
-------------------

Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
in a port range of ``6800:7300`` by default. See the `Network Configuration
Reference`_ for details. Ceph OSDs can use multiple network connections to
communicate with clients, monitors, other OSDs for replication, and other OSDs
for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is
fairly strict. You may need to adjust your firewall settings to allow inbound
requests so that clients in your network can communicate with daemons on your
Ceph nodes.

For ``firewalld`` on RHEL 7, add port ``6789`` for Ceph Monitor nodes and ports
``6800:7300`` for Ceph OSDs to the public zone and ensure that you make the
setting permanent so that it is enabled on reboot. For example::

    sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
    sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent

For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
for Ceph OSDs. For example::

    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
    sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT

Once you have finished configuring ``iptables``, ensure that you make the
changes persistent on each node so that they will be in effect when your nodes
reboot. For example::

    /sbin/service iptables save

TTY
---

On CentOS and RHEL, you may receive an error while trying to execute
``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
nodes, disable it by executing ``sudo visudo`` and locating the ``Defaults
requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
out to ensure that ``ceph-deploy`` can connect using the user you created with
`Create a Ceph Deploy User`_.

.. note:: If editing ``/etc/sudoers``, ensure that you use
   ``sudo visudo`` rather than a text editor.

SELinux
-------

On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline
your installation, we recommend setting SELinux to ``Permissive`` or disabling
it entirely and ensuring that your installation and cluster are working
properly before hardening your configuration. To set SELinux to ``Permissive``,
execute the following::

    sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify
the configuration file at ``/etc/selinux/config``.
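
For example, the relevant line in ``/etc/selinux/config`` would read::

    SELINUX=permissive
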

Priorities/Preferences
----------------------

Ensure that your package manager has priority/preferences packages installed
and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need
to enable optional repositories. ::

    sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install
``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
repository::

    sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

Summary
=======

This completes the Quick Start Preflight. Proceed to the `Storage Cluster
Quick Start`_.

.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _OS Recommendations: ../os-recommendations
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
.. _Clock: ../../rados/configuration/mon-config-ref#clock
.. _NTP: http://www.ntp.org/
.. _Infernalis release: ../../release-notes/#v9-1-0-infernalis-release-candidate
.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL