Merge pull request #3195 from nilamdyuti/wip-doc-ceph-deploy

doc: Changes format style in ceph-deploy to improve readability as html.

Reviewed-by: John Wilkins <jowilkin@redhat.com>
This commit is contained in:
John Wilkins 2014-12-18 15:01:50 -08:00
commit 3b46995c97
4 changed files with 797 additions and 467 deletions


@ -499,6 +499,7 @@ fi
%config %{_sysconfdir}/bash_completion.d/ceph
%config(noreplace) %{_sysconfdir}/logrotate.d/ceph
%config(noreplace) %{_sysconfdir}/logrotate.d/radosgw
%{_mandir}/man8/ceph-deploy.8*
%{_mandir}/man8/ceph-disk.8*
%{_mandir}/man8/ceph-mon.8*
%{_mandir}/man8/ceph-mds.8*

debian/ceph.install vendored

@ -24,6 +24,7 @@ usr/share/doc/ceph/sample.ceph.conf
usr/share/doc/ceph/sample.fetch_config
usr/share/man/man8/ceph-clsinfo.8
usr/share/man/man8/ceph-debugpack.8
usr/share/man/man8/ceph-deploy.8
usr/share/man/man8/ceph-disk.8
usr/share/man/man8/ceph-mon.8
usr/share/man/man8/ceph-osd.8


@ -1,5 +1,5 @@
=====================================
ceph-deploy -- Ceph quickstart tool
ceph-deploy -- Ceph deployment tool
=====================================
.. program:: ceph-deploy
@ -17,6 +17,8 @@ Synopsis
| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]
| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]
| **ceph-deploy** **admin** [*admin-node*][*ceph-node*...]
| **ceph-deploy** **purgedata** [*ceph-node*][*ceph-node*...]
@ -26,369 +28,465 @@ Synopsis
Description
===========
**ceph-deploy** is a tool which allows easy and quick deployment of a ceph
cluster without involving complex and detailed manual configuration. It uses
ssh to gain access to other ceph nodes from the admin node, sudo for
:program:`ceph-deploy` is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node, sudo for
administrator privileges on them and the underlying Python scripts automates
the manual process of ceph installation on each node from the admin node itself.
the manual process of Ceph installation on each node from the admin node itself.
It can be easily run on a workstation and doesn't require servers, databases or
any other automated tools. With **ceph-deploy**, it is really easy to set up and
take down a cluster. However, it is not a generic deployment tool. It is a
specific tool which is designed for those who want to get ceph up and running
any other automated tools. With :program:`ceph-deploy`, it is really easy to set
up and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like **Chef**, **Puppet** or **Juju**. Those
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps, should use other tools
i.e, **Chef**, **Puppet**, **Juju** or **Crowbar**.
e.g., ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.
With **ceph-deploy**, you can install ceph packages on remote nodes, create a
cluster, add monitors, gather/forget keys, add OSDs and metadata servers,
configure admin hosts or take down the cluster.
With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.
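
For illustration only, a typical sequence for bringing up a small cluster from
the admin node might look like the following, where ``node1``, ``node2`` and
``node3`` are placeholder hostnames and ``sdb`` is a placeholder data disk::

    ceph-deploy new node1
    ceph-deploy install node1 node2 node3
    ceph-deploy mon create-initial
    ceph-deploy osd create node2:sdb node3:sdb
    ceph-deploy admin node1 node2 node3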
Commands
========
**new**: Start deploying a new cluster and write a configuration file and keyring
for it. It tries to copy ssh keys from admin node to gain passwordless ssh to
monitor node(s), validates host IP, creates a cluster with a new initial monitor
node or nodes for monitor quorum, a ceph configuration file, a monitor secret
keyring and a log file for the new cluster. It populates the newly created ceph
configuration file with **fsid** of cluster, hostnames and IP addresses of initial
monitor members under [global] section.
new
---
Usage: ceph-deploy new [MON][MON...]
Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from admin node to gain passwordless ssh to monitor
node(s), validates host IP, creates a cluster with a new initial monitor node or
nodes for monitor quorum, a ceph configuration file, a monitor secret keyring and
a log file for the new cluster. It populates the newly created Ceph configuration
file with ``fsid`` of cluster, hostnames and IP addresses of initial monitor
members under ``[global]`` section.
Here, [MON] is initial monitor hostname, fqdn, or hostname:fqdn pair.
Usage::
Other options like --no-ssh-copykey, --fsid, --cluster-network and
--public-network can also be used with this command.
ceph-deploy new [MON][MON...]
If more than one network interface is used, **public network** setting has to be
added under **[global]** section of ceph configuration file. If the public subnet
is given, **new** command will choose the one IP from the remote host that exists
Here, [MON] is the initial monitor hostname (short hostname, i.e., ``hostname -s``).
Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used with
this command.
If more than one network interface is used, ``public network`` setting has to be
added under ``[global]`` section of Ceph configuration file. If the public subnet
is given, ``new`` command will choose the one IP from the remote host that exists
within the subnet range. Public network can also be added at runtime using
**--public-network** option with the command as mentioned above.
:option:`--public-network` option with the command as mentioned above.
It is also recommended to change the default number of replicas in the Ceph
configuration file from 3 to 2 so that Ceph can achieve an ``active + clean``
state with just two Ceph OSDs. To do that, add the following line under the
``[global]`` section::

    osd pool default size = 2
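
For illustration only, a ``[global]`` section as written by ``new`` and edited as
described above might look like the following; the ``fsid``, hostname and
addresses are placeholders::

    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
    mon initial members = node1
    mon host = 192.168.0.11
    public network = 192.168.0.0/24
    osd pool default size = 2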
install
-------
**install**: Install Ceph packages on remote hosts. As a first step it installs
**yum-plugin-priorities** in admin and other nodes using passwordless ssh and sudo
Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` in admin and other nodes using passwordless ssh and sudo
so that Ceph packages from upstream repository get more priority. It then detects
the platform and distribution for the hosts and installs Ceph normally by
downloading distro compatible packages if adequate repo for Ceph is already added.
It sets the **version_kind** to be the right one out of **stable**, **testing**,
and **development**. Generally the **stable** version and latest release is used
for installation. During detection of platform and distribution before installation,
if it finds the **distro.init** to be **sysvinit** (Fedora, CentOS/RHEL etc), it
doesn't allow installation with custom cluster name and uses the default name
**ceph** for the cluster.
``--release`` flag is used to get the latest release for installation. During
detection of platform and distribution before installation, if it finds the
``distro.init`` to be ``sysvinit`` (Fedora, CentOS/RHEL etc), it doesn't allow
installation with custom cluster name and uses the default name ``ceph`` for the
cluster.
If the user explicitly specifies a custom repo url with **--repo-url** for
If the user explicitly specifies a custom repo url with :option:`--repo-url` for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In case of
installation from a custom repo a boolean is used to determine the logic needed to
proceed with a custom repo installation. A custom repo install helper is used that
goes through config checks to retrieve repos (and any extra repos defined) and
installs them. **cd_conf** is the object built from argparse that holds the flags
and information needed to determine what metadata from the configuration is to be
used.
If required, valid custom repositories are also detected and installed. In case
of installation from a custom repo a boolean is used to determine the logic
needed to proceed with a custom repo installation. A custom repo install helper
is used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. ``cd_conf`` is the object built from ``argparse``
that holds the flags and information needed to determine what metadata from the
configuration is to be used.
A user can also opt to install only the repository without installing ceph and
its dependencies by using **--repo** option.
A user can also opt to install only the repository without installing Ceph and
its dependencies by using :option:`--repo` option.
Usage: ceph-deploy install [HOST][HOST...]
Usage::
ceph-deploy install [HOST][HOST...]
Here, [HOST] is/are the host node(s) where Ceph is to be installed.
Other options like --release, --testing, --dev, --adjust-repos, --no-adjust-repos,
--repo, --local-mirror, --repo-url and --gpg-url can also be used with this
command.
An option ``--release`` is used to install a release known as CODENAME
(default: firefly).
**mds**: Deploy Ceph mds on remote hosts. A metadata server is needed to use
CephFS and the **mds** command is used to create one on the desired host node.
It uses the subcommand **create** to do so. **create** first gets the hostname
and distro information of the desired mds host. It then tries to read the
bootstrap-mds key for the cluster and deploy it in the desired host. The key
generally has a format of {cluster}.bootstrap-mds.keyring. If it doesn't finds
a keyring, it runs **gatherkeys** to get the keyring. It then creates a mds on the
desired host under the path /var/lib/ceph/mds/ in /var/lib/ceph/mds/{cluster}-{name}
format and a bootstrap keyring under /var/lib/ceph/bootstrap-mds/ in
/var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs appropriate
commands based on **distro.init** to start the **mds**. To remove the mds,
subcommand **destroy** is used.
Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.
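
For example, to install on placeholder hosts with the default release, or with
an explicitly named release::

    ceph-deploy install node1 node2 node3
    ceph-deploy install --release firefly node1 node2 node3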
Usage: ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]
ceph-deploy mds destroy [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]
mds
---
Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it in the desired host. The key generally has a
format of ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring,
it runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs appropriate
commands based on ``distro.init`` to start the ``mds``. To remove the mds,
subcommand ``destroy`` is used.
Usage::
ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]
ceph-deploy mds destroy [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]
The [DAEMON-NAME] is optional.
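
For example, with a placeholder host and an optional placeholder daemon name::

    ceph-deploy mds create node1
    ceph-deploy mds create node1:mds-a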
**mon**: Deploy Ceph monitor on remote hosts. **mon** makes use of certain
subcommands to deploy Ceph monitors on other nodes.
Subcommand **create-initial** deploys for monitors defined in
**mon initial members** under **[global]** section in Ceph configuration file,
mon
---
Deploy Ceph monitor on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.
Subcommand ``create-initial`` deploys monitors for the hosts defined in
``mon initial members`` under the ``[global]`` section of the Ceph configuration
file, waits until they form quorum, then runs ``gatherkeys``, reporting the
monitor status along the way. If the monitors don't form quorum the command will
eventually time out.
Usage: ceph-deploy mon create-initial
Usage::
Subcommand **create** is used to deploy Ceph monitors by explicitly specifying the
hosts which are desired to be made monitors. If no hosts are specified it will
default to use the **mon initial members** defined under **[global]** section of
Ceph configuration file. **create** first detects platform and distro for desired
hosts and checks if hostname is compatible for deployment. It then uses the monitor
keyring initially created using **new** command and deploys the monitor in desired
host. If multiple hosts were specified during **new** command i.e, if there are
multiple hosts in **mon initial members** and multiple keyrings were created then
a concatenated keyring is used for deployment of monitors. In this process a
keyring parser is used which looks for **[entity]** sections in monitor keyrings
and returns a list of those sections. A helper is then used to collect all
keyrings into a single blob that will be used to inject it to monitors with
**--mkfs** on remote nodes. All keyring files are concatenated to be in a
directory ending with **.keyring**. During this process the helper uses list of
sections returned by keyring parser to check if an entity is already present in
a keyring and if not, adds it. The concatenated keyring is used for deployment
ceph-deploy mon create-initial
Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to use the ``mon initial members`` defined under ``[global]``
section of Ceph configuration file. ``create`` first detects platform and distro
for desired hosts and checks if hostname is compatible for deployment. It then
uses the monitor keyring initially created using ``new`` command and deploys the
monitor in desired host. If multiple hosts were specified during ``new`` command
i.e., if there are multiple hosts in ``mon initial members`` and multiple keyrings
were created then a concatenated keyring is used for deployment of monitors. In
this process a keyring parser is used which looks for ``[entity]`` sections in
monitor keyrings and returns a list of those sections. A helper is then used to
collect all keyrings into a single blob that will be used to inject it to monitors
with :option:`--mkfs` on remote nodes. All keyring files are concatenated to be
in a directory ending with ``.keyring``. During this process the helper uses list
of sections returned by keyring parser to check if an entity is already present
in a keyring and if not, adds it. The concatenated keyring is used for deployment
of monitors to desired multiple hosts.
Usage: ceph-deploy mon create [HOST] [HOST...]
Usage::
ceph-deploy mon create [HOST] [HOST...]
Here, [HOST] is hostname of desired monitor host(s).
Subcommand **add** is used to add a monitor to an existing cluster. It first
detects platform and distro for desired host and checks if hostname is
compatible for deployment. It then uses the monitor keyring, ensures
configuration for new monitor host and adds the monitor to the cluster.
If the section for the monitor exists and defines a mon addr that
will be used, otherwise it will fallback by resolving the hostname to an
IP. If --address is used it will override all other options. After
adding the monitor to the cluster, it gives it some time to start. It then
looks for any monitor errors and checks monitor status. Monitor errors
arise if the monitor is not added in **mon initial members**, if it doesn't
exist in monmap and if neither public_addr nor public_network keys were
defined for monitors. Under such conditions, monitors may not be able to form
quorum. Monitor status tells if the monitor is up and running normally. The
status is checked by running ceph daemon mon.hostname mon_status on
remote end which provides the output and returns a boolean status of what is
going on. **False** means a monitor that is not fine even if it is up and
running, while **True** means the monitor is up and running correctly.
Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects platform and distro for desired host and checks if hostname is compatible
for deployment. It then uses the monitor keyring, ensures configuration for new
monitor host and adds the monitor to the cluster. If the section for the monitor
exists and defines a ``mon addr``, that will be used; otherwise it will fall back
to resolving the hostname to an IP. If :option:`--address` is used it will override
all other options. After adding the monitor to the cluster, it gives it some time
to start. It then looks for any monitor errors and checks monitor status. Monitor
errors arise if the monitor is not added in ``mon initial members``, if it doesn't
exist in ``monmap`` and if neither ``public_addr`` nor ``public_network`` keys
were defined for monitors. Under such conditions, monitors may not be able to
form quorum. Monitor status tells if the monitor is up and running normally. The
status is checked by running ``ceph daemon mon.hostname mon_status`` on remote
end which provides the output and returns a boolean status of what is going on.
``False`` means a monitor that is not fine even if it is up and running, while
``True`` means the monitor is up and running correctly.
Usage: ceph-deploy mon add [HOST]
Usage::
ceph-deploy mon add [HOST] --address [IP]
ceph-deploy mon add [HOST]
ceph-deploy mon add [HOST] --address [IP]
Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node.
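
For example, with a placeholder hostname and IP address::

    ceph-deploy mon add node4
    ceph-deploy mon add node4 --address 192.168.0.14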
Subcommand **destroy** is used to completely remove monitors on remote hosts. It
takes hostnames as arguments. It stops the monitor, verifies if ceph-mon daemon
really stopped, creates an archive directory **mon-remove** under /var/lib/ceph/,
archives old monitor directory in {cluster}-{hostname}-{stamp} format in it and
removes the monitor from cluster by running **ceph remove...** command.
Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies if ``ceph-mon``
daemon really stopped, creates an archive directory ``mon-remove`` under
``/var/lib/ceph/``, archives old monitor directory in
``{cluster}-{hostname}-{stamp}`` format in it and removes the monitor from
cluster by running ``ceph remove...`` command.
Usage: ceph-deploy mon destroy [HOST]
Usage::
ceph-deploy mon destroy [HOST]
Here, [HOST] is hostname of monitor that is to be removed.
**gatherkeys**: Gather authentication keys for provisioning new nodes. It
takes hostnames as arguments. It checks for and fetches client.admin keyring,
monitor keyring and bootstrap-mds/bootstrap-osd keyring from monitor host.
These authentication keys are used when new monitors/OSDs/MDS are added to
the cluster.
Usage: ceph-deploy gatherkeys [HOST] [HOST...]
gatherkeys
----------
Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches ``client.admin`` keyring, monitor keyring
and ``bootstrap-mds/bootstrap-osd`` keyring from monitor host. These
authentication keys are used when new ``monitors/OSDs/MDS`` are added to the
cluster.
Usage::
ceph-deploy gatherkeys [HOST] [HOST...]
Here, [HOST] is hostname of the monitor from where keys are to be pulled.
**disk**: Manage disks on a remote host. It actually triggers the **ceph-disk**
utility and it's subcommands to manage disks.
Subcommand **list** lists disk partitions and ceph OSDs.
disk
----
Usage: ceph-deploy disk list [HOST:[DISK]]
Manage disks on a remote host. It actually triggers the ``ceph-disk`` utility
and its subcommands to manage disks.
Subcommand ``list`` lists disk partitions and Ceph OSDs.
Usage::
ceph-deploy disk list [HOST:[DISK]]
Here, [HOST] is hostname of the node and [DISK] is disk name or path.
Subcommand **prepare** prepares a directory, disk or drive for a ceph OSD. It
creates a GPT partition, marks the partition with ceph type uuid, creates a
file system, marks the file system as ready for ceph consumption, uses entire
Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses entire
partition and adds a new partition to the journal disk.
Usage: ceph-deploy disk prepare [HOST:[DISK]]
Usage::
ceph-deploy disk prepare [HOST:[DISK]]
Here, [HOST] is hostname of the node and [DISK] is disk name or path.
Subcommand **activate** activates the ceph OSD. It mounts the volume in a temporary
location, allocates an OSD id (if needed), remounts in the correct location
/var/lib/ceph/osd/$cluster-$id and starts ceph-osd. It is triggered by udev
when it sees the OSD GPT partition type or on ceph service start with
'ceph disk activate-all'.
Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts in the correct
location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
triggered by ``udev`` when it sees the OSD GPT partition type or on ceph service
start with ``ceph disk activate-all``.
Usage: ceph-deploy disk activate [HOST:[DISK]]
Usage::
ceph-deploy disk activate [HOST:[DISK]]
Here, [HOST] is hostname of the node and [DISK] is disk name or path.
Subcommand **zap** zaps/erases/destroys a device's partition table and contents.
It actually uses 'sgdisk' and it's option '--zap-all' to destroy both
GPT and MBR data structures so that the disk becomes suitable for
repartitioning. 'sgdisk' then uses '--mbrtogpt' to convert the MBR or
BSD disklabel disk to a GPT disk. The **prepare** subcommand can now be
executed which will create a new GPT partition.
Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
It actually uses ``sgdisk`` and its option ``--zap-all`` to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to a
GPT disk. The ``prepare`` subcommand can now be executed which will create a new
GPT partition.
Usage: ceph-deploy disk zap [HOST:[DISK]]
Usage::
ceph-deploy disk zap [HOST:[DISK]]
Here, [HOST] is hostname of the node and [DISK] is disk name or path.
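
For example, to wipe a placeholder disk and then prepare it for use as an OSD::

    ceph-deploy disk zap node1:sdb
    ceph-deploy disk prepare node1:sdb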
**osd**: Manage OSDs by preparing data disk on remote host. **osd** makes use
of certain subcommands for managing OSDs.
Subcommand **prepare** prepares a directory, disk or drive for a ceph OSD. It
first checks against multiple OSDs getting created and warns about the possibility
of more than the recommended which would cause issues with max allowed PIDs in a
system. It then reads the bootstrap-osd key for the cluster or writes the bootstrap
key if not found. It then uses **ceph-disk** utility's **prepare** subcommand to
prepare the disk, journal and deploy the OSD on the desired host. Once prepared,
it gives some time to the OSD to settle and checks for any possible errors and if
found, reports to the user.
osd
---
Usage: ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Manage OSDs by preparing data disk on remote host. ``osd`` makes use of certain
subcommands for managing OSDs.
Subcommand **activate** activates the OSD prepared using *prepare* subcommand.
It actually uses **ceph-disk** utility's **activate** subcommand with
appropriate init type based on distro to activate the OSD. Once activated,
it gives some time to the OSD to start and checks for any possible errors and if
Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended number, which would cause issues with
the max allowed PIDs in a system. It then reads the bootstrap-osd key for the cluster or
writes the bootstrap key if not found. It then uses :program:`ceph-disk`
utility's ``prepare`` subcommand to prepare the disk, journal and deploy the OSD
on the desired host. Once prepared, it gives some time to the OSD to settle and
checks for any possible errors and if found, reports to the user.
Usage::
ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
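
For example, with a placeholder host, data disk and journal device, where the
optional third field is the journal::

    ceph-deploy osd prepare node2:sdb:/dev/sdc1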
Subcommand ``activate`` activates the OSD prepared using ``prepare`` subcommand.
It actually uses :program:`ceph-disk` utility's ``activate`` subcommand with
appropriate init type based on distro to activate the OSD. Once activated, it
gives some time to the OSD to start and checks for any possible errors and if
found, reports to the user. It checks the status of the prepared OSD, checks the
OSD tree and makes sure the OSDs are up and in.
Usage: ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Usage::
Subcommand **create** uses **prepare** and **activate** subcommands to create an
ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Subcommand ``create`` uses ``prepare`` and ``activate`` subcommands to create an
OSD.
Usage: ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Usage::
Subcommand **list** lists disk partitions, ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the **ceph-disk-list** output
ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Subcommand ``list`` lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ``ceph-disk-list`` output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
if the OSD is in an OSD tree, and prints the OSD metadata.
Usage: ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Usage::
Subcommand **destroy** is used to completely remove OSDs from remote hosts. It
first takes the desired OSD out of the cluster and waits for the cluster to
rebalance and placement groups to reach **(active+clean)** state again. It then
stops the OSD, removes the OSD from CRUSH map, removes the OSD authentication
key, removes the OSD and updates the cluster's configuration file accordingly.
ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
Usage: ceph-deploy osd destroy HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
**admin**: Push configuration and client.admin key to a remote host. It takes
the {cluster}.client.admin.keyring from admin node and writes it under /etc/ceph
directory of desired node.
admin
-----
Usage: ceph-deploy admin [HOST] [HOST...]
Push configuration and ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from admin node and writes it under
``/etc/ceph`` directory of desired node.
Usage::
ceph-deploy admin [HOST] [HOST...]
Here, [HOST] is desired host to be configured for Ceph administration.
**config**: Push/pull configuration file to/from a remote host. It uses
**push** subcommand to takes the configuration file from admin host and
write it to remote host under /etc/ceph directory. It uses **pull** subcommand
to do the opposite i.e, pull the configuration file under /etc/ceph directory
of remote host to admin node.
Usage: ceph-deploy push [HOST] [HOST...]
config
------
Here, [HOST] is the hostname of the node where config file will be pushed.
Push/pull configuration file to/from a remote host. It uses the ``push`` subcommand
to take the configuration file from the admin host and write it to the remote host
under the ``/etc/ceph`` directory. It uses the ``pull`` subcommand to do the
opposite, i.e., pull the configuration file under the ``/etc/ceph`` directory of
the remote host to the admin node.
ceph-deploy pull [HOST] [HOST...]
Usage::
Here, [HOST] is the hostname of the node from where config file will be pulled.
ceph-deploy push [HOST] [HOST...]
**uninstall**: Remove Ceph packages from remote hosts. It detects the platform
and distro of selected host and uninstalls Ceph packages from it. However, some
dependencies like librbd1 and librados2 **will not** be removed because they can
cause issues with qemu-kvm.
ceph-deploy pull [HOST] [HOST...]
Usage: ceph-deploy uninstall [HOST] [HOST...]
Here, [HOST] is the hostname of the node where config file will be pushed to or
pulled from.
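
For example, with placeholder hostnames, following the usage above::

    ceph-deploy push node1 node2
    ceph-deploy pull node1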
uninstall
---------
Remove Ceph packages from remote hosts. It detects the platform and distro of
selected host and uninstalls Ceph packages from it. However, some dependencies
like ``librbd1`` and ``librados2`` will not be removed because they can cause
issues with ``qemu-kvm``.
Usage::
ceph-deploy uninstall [HOST] [HOST...]
Here, [HOST] is hostname of the node from where Ceph will be uninstalled.
**purge**: Remove Ceph packages from remote hosts and purge all data. It detects
the platform and distro of selected host, uninstalls Ceph packages and purges all
data. However, some dependencies like librbd1 and librados2 **will not** be removed
because they can cause issues with qemu-kvm.
Usage: ceph-deploy purge [HOST] [HOST...]
purge
-----
Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of selected host, uninstalls Ceph packages and purges all
data. However, some dependencies like ``librbd1`` and ``librados2`` will not be
removed because they can cause issues with ``qemu-kvm``.
Usage::
ceph-deploy purge [HOST] [HOST...]
Here, [HOST] is hostname of the node from where Ceph will be purged.
**purgedata**: Purge (delete, destroy, discard, shred) any Ceph data from
/var/lib/ceph. Once it detects the platform and distro of desired host, it first
checks if Ceph is still installed on the selected host and if installed, it won't
purge data from it. If Ceph is already uninstalled from the host, it tries to
remove the contents of /var/lib/ceph. If it fails then probably OSDs are still
mounted and needs to be unmounted to continue. It unmount the OSDs and tries to
remove the contents of /var/lib/ceph again and checks for errors. It also
removes contents of /etc/ceph. Once all steps are successfully completed, all
the Ceph data from the selected host are removed.
Usage: ceph-deploy purgedata [HOST] [HOST...]
purgedata
---------
Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of desired host, it first checks if Ceph
is still installed on the selected host and if installed, it won't purge data
from it. If Ceph is already uninstalled from the host, it tries to remove the
contents of ``/var/lib/ceph``. If it fails, the OSDs are probably still mounted
and need to be unmounted to continue. It unmounts the OSDs, tries to remove the
contents of ``/var/lib/ceph`` again and checks for errors. It also removes
contents of ``/etc/ceph``. Once all steps are successfully completed, all the
Ceph data from the selected host are removed.
Usage::
ceph-deploy purgedata [HOST] [HOST...]
Here, [HOST] is hostname of the node from where Ceph data will be purged.
**forgetkeys**: Remove authentication keys from the local directory. It removes
all the authentication keys i.e, monitor keyring, client.admin keyring,
bootstrap-osd and bootstrap-mds keyring from the node.
Usage: ceph-deploy forgetkeys
forgetkeys
----------
**pkg**: Manage packages on remote hosts. It is used for installing or removing
packages from remote hosts. The package names for installation or removal are to
specified after the command. Two options --install and --remove are used for this
purpose.
Remove authentication keys from the local directory. It removes all the
authentication keys, i.e., monitor keyring, client.admin keyring, bootstrap-osd
and bootstrap-mds keyring from the node.
Usage: ceph-deploy pkg --install [PKGs] [HOST] [HOST...]
Usage::
ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]
ceph-deploy forgetkeys
pkg
---
Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options :option:`--install` and
:option:`--remove` are used for this purpose.
Usage::
ceph-deploy pkg --install [PKGs] [HOST] [HOST...]
ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]
Here, [PKGs] is comma-separated package names and [HOST] is hostname of the
remote node where packages are to installed or removed from.
remote node where packages are to be installed or removed from.
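
For example, with placeholder package and host names::

    ceph-deploy pkg --install wget,vim node1 node2
    ceph-deploy pkg --remove wget node1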
**calamari**: Install and configure Calamari nodes. It first checks if distro
is supported for Calamari installation by ceph-deploy. An argument **connect**
is used for installation and configuration. It checks for ceph-deploy
configuration file (cd_conf) and Calamari release repo or **calamari-minion** repo.
It relies on default for repo installation as it doesn't install Ceph unless
specified otherwise. **options** dictionary is also defined because ceph-deploy
calamari
--------
Install and configure Calamari nodes. It first checks if distro is supported
for Calamari installation by ceph-deploy. An argument ``connect`` is used for
installation and configuration. It checks for ``ceph-deploy`` configuration
file (cd_conf) and Calamari release repo or ``calamari-minion`` repo. It relies
on default for repo installation as it doesn't install Ceph unless specified
otherwise. ``options`` dictionary is also defined because ``ceph-deploy``
pops items internally which causes issues when those items are needed to be
available for every host. If the distro is Debian/Ubuntu, it is ensured that
proxy is disabled for **calamari-minion** repo. calamari-minion package is then
installed and custom repository files are added. minion config is placed
proxy is disabled for the ``calamari-minion`` repo. The ``calamari-minion`` package
is then installed and custom repository files are added. The minion config is
placed prior to installation so that it is present when the minion first starts.
The config directory and calamari salt config are created, and the ``salt-minion``
package is installed. If the distro is RedHat/CentOS, the ``salt-minion`` service
needs to be started.
Usage: ceph-deploy calamari {connect} [HOST] [HOST...]
Usage::
ceph-deploy calamari {connect} [HOST] [HOST...]
Here, [HOST] is the hostname where Calamari is to be installed.
Other options like --release and --master can also be used this command.
An option ``--release`` can be used to use a given release from repositories
defined in :program:`ceph-deploy`'s configuration. Defaults to ``calamari-minion``.
Another option :option:`--master` can also be used with this command.
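
For example, with a placeholder minion host and a placeholder Calamari master
domain::

    ceph-deploy calamari connect --master master.example.com node1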
Options
=======
.. option:: --version
The current installed version of ceph-deploy.
The currently installed version of :program:`ceph-deploy`.
.. option:: --username
@ -404,7 +502,7 @@ Options
.. option:: --ceph-conf
Use (or reuse) a given ceph.conf file.
Use (or reuse) a given ``ceph.conf`` file.
.. option:: --no-ssh-copykey
@ -412,7 +510,7 @@ Options
.. option:: --fsid
Provide an alternate FSID for ceph.conf generation.
Provide an alternate FSID for ``ceph.conf`` generation.
.. option:: --cluster-network
@ -422,10 +520,6 @@ Options
Specify the public network for a cluster.
.. option:: --release
Install a release known as CODENAME (default: firefly).
.. option:: --testing
Install the latest development release.
@ -472,15 +566,15 @@ Options
.. option:: --fs-type
Filesystem to use to format disk (xfs, btrfs or ext4).
Filesystem to use to format disk (``xfs``, ``btrfs`` or ``ext4``).
.. option:: --dmcrypt
Encrypt [data-path] and/or journal devices with dm-crypt.
Encrypt [data-path] and/or journal devices with ``dm-crypt``.
.. option:: --dmcrypt-key-dir
Directory where dm-crypt keys are stored.
Directory where ``dm-crypt`` keys are stored.
.. option:: --install
@ -490,21 +584,18 @@ Options
Comma-separated package(s) to remove from remote hosts.
.. option:: --release
Use a given release from repositories defined in ceph-deploy's configuration.
Defaults to 'calamari-minion'.
.. option:: --master
The domain for the Calamari master server.
Availability
============
**ceph-deploy** is a part of the Ceph distributed storage system. Please refer to
:program:`ceph-deploy` is a part of the Ceph distributed storage system. Please refer to
the documentation at http://ceph.com/ceph-deploy/docs for more information.
See also
========


@ -1,8 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH "CEPH-DEPLOY" "8" "December 06, 2014" "dev" "Ceph"
.TH "CEPH-DEPLOY" "8" "December 17, 2014" "dev" "Ceph"
.SH NAME
ceph-deploy \- Ceph quickstart tool
ceph-deploy \- Ceph deployment tool
.
.nr rst2man-indent-level 0
.
@ -79,6 +79,10 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.fi
.sp
.nf
\fBceph\-deploy\fP \fBosd\fP \fIcreate\fP [\fIceph\-node\fP]:[\fIdir\-path\fP]
.fi
.sp
.nf
\fBceph\-deploy\fP \fBadmin\fP [\fIadmin\-node\fP][\fIceph\-node\fP\&...]
.fi
.sp
@ -92,109 +96,139 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.sp
.SH DESCRIPTION
.sp
\fBceph\-deploy\fP is a tool which allows easy and quick deployment of a ceph
cluster without involving complex and detailed manual configuration. It uses
ssh to gain access to other ceph nodes from the admin node, sudo for
\fBceph\-deploy\fP is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node, sudo for
administrator privileges on them and the underlying Python scripts automates
the manual process of ceph installation on each node from the admin node itself.
the manual process of Ceph installation on each node from the admin node itself.
It can be easily run on a workstation and doesn\(aqt require servers, databases or
any other automated tools. With \fBceph\-deploy\fP, it is really easy to set up and
take down a cluster. However, it is not a generic deployment tool. It is a
specific tool which is designed for those who want to get ceph up and running
any other automated tools. With \fBceph\-deploy\fP, it is really easy to set
up and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like \fBChef\fP, \fBPuppet\fP or \fBJuju\fP\&. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps, should use other tools
i.e, \fBChef\fP, \fBPuppet\fP, \fBJuju\fP or \fBCrowbar\fP\&.
.sp
With \fBceph\-deploy\fP, you can install ceph packages on remote nodes, create a
cluster, add monitors, gather/forget keys, add OSDs and metadata servers,
configure admin hosts or take down the cluster.
With \fBceph\-deploy\fP, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.
.SH COMMANDS
.SS new
.sp
\fBnew\fP: Start deploying a new cluster and write a configuration file and keyring
for it. It tries to copy ssh keys from admin node to gain passwordless ssh to
monitor node(s), validates host IP, creates a cluster with a new initial monitor
node or nodes for monitor quorum, a ceph configuration file, a monitor secret
keyring and a log file for the new cluster. It populates the newly created ceph
configuration file with \fBfsid\fP of cluster, hostnames and IP addresses of initial
monitor members under [global] section.
Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from admin node to gain passwordless ssh to monitor
node(s), validates host IP, creates a cluster with a new initial monitor node or
nodes for monitor quorum, a ceph configuration file, a monitor secret keyring and
a log file for the new cluster. It populates the newly created Ceph configuration
file with \fBfsid\fP of cluster, hostnames and IP addresses of initial monitor
members under \fB[global]\fP section.
.sp
Usage: ceph\-deploy new [MON][MON...]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
Here, [MON] is initial monitor hostname, fqdn, or hostname:fqdn pair.
.nf
.ft C
ceph\-deploy new [MON][MON...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Other options like \-\-no\-ssh\-copykey, \-\-fsid, \-\-cluster\-network and
\-\-public\-network can also be used with this command.
Here, [MON] is initial monitor hostname (short hostname i.e, \fBhostname \-s\fP).
.sp
Other options like \fI\%--no-ssh-copykey\fP, \fI\%--fsid\fP,
\fI\%--cluster-network\fP and \fI\%--public-network\fP can also be used with
this command.
.sp
If more than one network interface is used, \fBpublic network\fP setting has to be
added under \fB[global]\fP section of ceph configuration file. If the public subnet
added under \fB[global]\fP section of Ceph configuration file. If the public subnet
is given, \fBnew\fP command will choose the one IP from the remote host that exists
within the subnet range. Public network can also be added at runtime using
\fB\-\-public\-network\fP option with the command as mentioned above.
\fI\%--public-network\fP option with the command as mentioned above.
.SS install
.sp
It is also recommended to change the default number of replicas in the Ceph
configuration file from 3 to 2 so that Ceph can achieve an \fB(active + clean)\fP
state with just two Ceph OSDs. To do that, add the following line under the
\fB[global]\fP section:
.sp
osd pool default size = 2
.sp
\fBinstall\fP: Install Ceph packages on remote hosts. As a first step it installs
Install Ceph packages on remote hosts. As a first step it installs
\fByum\-plugin\-priorities\fP in admin and other nodes using passwordless ssh and sudo
so that Ceph packages from upstream repository get more priority. It then detects
the platform and distribution for the hosts and installs Ceph normally by
downloading distro compatible packages if adequate repo for Ceph is already added.
It sets the \fBversion_kind\fP to be the right one out of \fBstable\fP, \fBtesting\fP,
and \fBdevelopment\fP\&. Generally the \fBstable\fP version and latest release is used
for installation. During detection of platform and distribution before installation,
if it finds the \fBdistro.init\fP to be \fBsysvinit\fP (Fedora, CentOS/RHEL etc), it
doesn\(aqt allow installation with custom cluster name and uses the default name
\fBceph\fP for the cluster.
\fB\-\-release\fP flag is used to get the latest release for installation. During
detection of platform and distribution before installation, if it finds the
\fBdistro.init\fP to be \fBsysvinit\fP (Fedora, CentOS/RHEL etc), it doesn\(aqt allow
installation with custom cluster name and uses the default name \fBceph\fP for the
cluster.
.sp
If the user explicitly specifies a custom repo url with \fB\-\-repo\-url\fP for
If the user explicitly specifies a custom repo url with \fI\%--repo-url\fP for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In case of
installation from a custom repo a boolean is used to determine the logic needed to
proceed with a custom repo installation. A custom repo install helper is used that
goes through config checks to retrieve repos (and any extra repos defined) and
installs them. \fBcd_conf\fP is the object built from argparse that holds the flags
and information needed to determine what metadata from the configuration is to be
used.
If required, valid custom repositories are also detected and installed. In case
of installation from a custom repo a boolean is used to determine the logic
needed to proceed with a custom repo installation. A custom repo install helper
is used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. \fBcd_conf\fP is the object built from \fBargparse\fP
that holds the flags and information needed to determine what metadata from the
configuration is to be used.
.sp
A user can also opt to install only the repository without installing ceph and
its dependencies by using \fB\-\-repo\fP option.
A user can also opt to install only the repository without installing Ceph and
its dependencies by using \fI\%--repo\fP option.
.sp
Usage: ceph\-deploy install [HOST][HOST...]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy install [HOST][HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is/are the host node(s) where Ceph is to be installed.
.sp
Other options like \-\-release, \-\-testing, \-\-dev, \-\-adjust\-repos, \-\-no\-adjust\-repos,
\-\-repo, \-\-local\-mirror, \-\-repo\-url and \-\-gpg\-url can also be used with this
command.
An option \fB\-\-release\fP is used to install a release known as CODENAME
(default: firefly).
.sp
\fBmds\fP: Deploy Ceph mds on remote hosts. A metadata server is needed to use
CephFS and the \fBmds\fP command is used to create one on the desired host node.
It uses the subcommand \fBcreate\fP to do so. \fBcreate\fP first gets the hostname
and distro information of the desired mds host. It then tries to read the
bootstrap\-mds key for the cluster and deploy it in the desired host. The key
generally has a format of {cluster}.bootstrap\-mds.keyring. If it doesn\(aqt finds
a keyring, it runs \fBgatherkeys\fP to get the keyring. It then creates a mds on the
desired host under the path /var/lib/ceph/mds/ in /var/lib/ceph/mds/{cluster}\-{name}
format and a bootstrap keyring under /var/lib/ceph/bootstrap\-mds/ in
/var/lib/ceph/bootstrap\-mds/{cluster}.keyring format. It then runs appropriate
Other options like \fI\%--testing\fP, \fI\%--dev\fP, \fI\%--adjust-repos\fP,
\fI\%--no-adjust-repos\fP, \fI\%--repo\fP, \fI\%--local-mirror\fP,
\fI\%--repo-url\fP and \fI\%--gpg-url\fP can also be used with this command.
.SS mds
.sp
Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the \fBmds\fP command is used to create one on the desired host node. It uses the
subcommand \fBcreate\fP to do so. \fBcreate\fP first gets the hostname and distro
information of the desired mds host. It then tries to read the \fBbootstrap\-mds\fP
key for the cluster and deploy it in the desired host. The key generally has a
format of \fB{cluster}.bootstrap\-mds.keyring\fP\&. If it doesn\(aqt find a keyring,
it runs \fBgatherkeys\fP to get the keyring. It then creates a mds on the desired
host under the path \fB/var/lib/ceph/mds/\fP in \fB/var/lib/ceph/mds/{cluster}\-{name}\fP
format and a bootstrap keyring under \fB/var/lib/ceph/bootstrap\-mds/\fP in
\fB/var/lib/ceph/bootstrap\-mds/{cluster}.keyring\fP format. It then runs appropriate
commands based on \fBdistro.init\fP to start the \fBmds\fP\&. To remove the mds,
subcommand \fBdestroy\fP is used.
.sp
Usage: ceph\-deploy mds create [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy mds create [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...]
ceph\-deploy mds destroy [HOST[:DAEMON\-NAME]] [HOST[:DAEMON\-NAME]...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
The [DAEMON\-NAME] is optional.
.SS mon
.sp
\fBmon\fP: Deploy Ceph monitor on remote hosts. \fBmon\fP makes use of certain
subcommands to deploy Ceph monitors on other nodes.
Deploy Ceph monitor on remote hosts. \fBmon\fP makes use of certain subcommands
to deploy Ceph monitors on other nodes.
.sp
Subcommand \fBcreate\-initial\fP deploys for monitors defined in
\fBmon initial members\fP under \fB[global]\fP section in Ceph configuration file,
@ -202,255 +236,469 @@ wait until they form quorum and then gatherkeys, reporting the monitor status
along the process. If monitors don\(aqt form quorum the command will eventually
time out.
.sp
Usage: ceph\-deploy mon create\-initial
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
Subcommand \fBcreate\fP is used to deploy Ceph monitors by explicitly specifying the
hosts which are desired to be made monitors. If no hosts are specified it will
default to use the \fBmon initial members\fP defined under \fB[global]\fP section of
Ceph configuration file. \fBcreate\fP first detects platform and distro for desired
hosts and checks if hostname is compatible for deployment. It then uses the monitor
keyring initially created using \fBnew\fP command and deploys the monitor in desired
host. If multiple hosts were specified during \fBnew\fP command i.e, if there are
multiple hosts in \fBmon initial members\fP and multiple keyrings were created then
a concatenated keyring is used for deployment of monitors. In this process a
keyring parser is used which looks for \fB[entity]\fP sections in monitor keyrings
and returns a list of those sections. A helper is then used to collect all
keyrings into a single blob that will be used to inject it to monitors with
\fB\-\-mkfs\fP on remote nodes. All keyring files are concatenated to be in a
directory ending with \fB\&.keyring\fP\&. During this process the helper uses list of
sections returned by keyring parser to check if an entity is already present in
a keyring and if not, adds it. The concatenated keyring is used for deployment
.nf
.ft C
ceph\-deploy mon create\-initial
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Subcommand \fBcreate\fP is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to use the \fBmon initial members\fP defined under \fB[global]\fP
section of Ceph configuration file. \fBcreate\fP first detects platform and distro
for desired hosts and checks if hostname is compatible for deployment. It then
uses the monitor keyring initially created using \fBnew\fP command and deploys the
monitor in desired host. If multiple hosts were specified during \fBnew\fP command
i.e, if there are multiple hosts in \fBmon initial members\fP and multiple keyrings
were created then a concatenated keyring is used for deployment of monitors. In
this process a keyring parser is used which looks for \fB[entity]\fP sections in
monitor keyrings and returns a list of those sections. A helper is then used to
collect all keyrings into a single blob that will be used to inject it to monitors
with \fI\-\-mkfs\fP on remote nodes. All keyring files are concatenated to be
in a directory ending with \fB\&.keyring\fP\&. During this process the helper uses list
of sections returned by keyring parser to check if an entity is already present
in a keyring and if not, adds it. The concatenated keyring is used for deployment
of monitors to desired multiple hosts.
.sp
Usage: ceph\-deploy mon create [HOST] [HOST...]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy mon create [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is hostname of desired monitor host(s).
.sp
Subcommand \fBadd\fP is used to add a monitor to an existing cluster. It first
detects platform and distro for desired host and checks if hostname is
compatible for deployment. It then uses the monitor keyring, ensures
configuration for new monitor host and adds the monitor to the cluster.
If the section for the monitor exists and defines a mon addr that
will be used, otherwise it will fallback by resolving the hostname to an
IP. If \-\-address is used it will override all other options. After
adding the monitor to the cluster, it gives it some time to start. It then
looks for any monitor errors and checks monitor status. Monitor errors
arise if the monitor is not added in \fBmon initial members\fP, if it doesn\(aqt
exist in monmap and if neither public_addr nor public_network keys were
defined for monitors. Under such conditions, monitors may not be able to form
quorum. Monitor status tells if the monitor is up and running normally. The
status is checked by running ceph daemon mon.hostname mon_status on
remote end which provides the output and returns a boolean status of what is
going on. \fBFalse\fP means a monitor that is not fine even if it is up and
running, while \fBTrue\fP means the monitor is up and running correctly.
detects platform and distro for desired host and checks if hostname is compatible
for deployment. It then uses the monitor keyring, ensures configuration for new
monitor host and adds the monitor to the cluster. If the section for the monitor
exists and defines a mon addr, that will be used; otherwise it will fall back
to resolving the hostname to an IP. If \fI\%--address\fP is used it will override
all other options. After adding the monitor to the cluster, it gives it some time
to start. It then looks for any monitor errors and checks monitor status. Monitor
errors arise if the monitor is not added in \fBmon initial members\fP, if it doesn\(aqt
exist in \fBmonmap\fP and if neither \fBpublic_addr\fP nor \fBpublic_network\fP keys
were defined for monitors. Under such conditions, monitors may not be able to
form quorum. Monitor status tells if the monitor is up and running normally. The
status is checked by running \fBceph daemon mon.hostname mon_status\fP on remote
end which provides the output and returns a boolean status of what is going on.
\fBFalse\fP means a monitor that is not fine even if it is up and running, while
\fBTrue\fP means the monitor is up and running correctly.
.sp
Usage: ceph\-deploy mon add [HOST]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy mon add [HOST]
ceph\-deploy mon add [HOST] \-\-address [IP]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node.
.sp
Subcommand \fBdestroy\fP is used to completely remove monitors on remote hosts. It
takes hostnames as arguments. It stops the monitor, verifies if ceph\-mon daemon
really stopped, creates an archive directory \fBmon\-remove\fP under /var/lib/ceph/,
archives old monitor directory in {cluster}\-{hostname}\-{stamp} format in it and
removes the monitor from cluster by running \fBceph remove...\fP command.
Subcommand \fBdestroy\fP is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies if \fBceph\-mon\fP
daemon really stopped, creates an archive directory \fBmon\-remove\fP under
\fB/var/lib/ceph/\fP, archives old monitor directory in
\fB{cluster}\-{hostname}\-{stamp}\fP format in it and removes the monitor from
cluster by running \fBceph remove...\fP command.
.sp
Usage: ceph\-deploy mon destroy [HOST]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy mon destroy [HOST]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is hostname of monitor that is to be removed.
.SS gatherkeys
.sp
\fBgatherkeys\fP: Gather authentication keys for provisioning new nodes. It
takes hostnames as arguments. It checks for and fetches client.admin keyring,
monitor keyring and bootstrap\-mds/bootstrap\-osd keyring from monitor host.
These authentication keys are used when new monitors/OSDs/MDS are added to
the cluster.
Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches \fBclient.admin\fP keyring, monitor keyring
and \fBbootstrap\-mds/bootstrap\-osd\fP keyring from monitor host. These
authentication keys are used when new \fBmonitors/OSDs/MDS\fP are added to the
cluster.
.sp
Usage: ceph\-deploy gatherkeys [HOST] [HOST...]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy gatherkeys [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is hostname of the monitor from where keys are to be pulled.
.SS disk
.sp
\fBdisk\fP: Manage disks on a remote host. It actually triggers the \fBceph\-disk\fP
utility and it\(aqs subcommands to manage disks.
Manage disks on a remote host. It actually triggers the \fBceph\-disk\fP utility
and its subcommands to manage disks.
.sp
Subcommand \fBlist\fP lists disk partitions and ceph OSDs.
Subcommand \fBlist\fP lists disk partitions and Ceph OSDs.
.sp
Usage: ceph\-deploy disk list [HOST:[DISK]]
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy disk list [HOST:[DISK]]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is hostname of the node and [DISK] is disk name or path.
.sp
Subcommand \fBprepare\fP prepares a directory, disk or drive for a ceph OSD. It
creates a GPT partition, marks the partition with ceph type uuid, creates a
file system, marks the file system as ready for ceph consumption, uses entire
Subcommand \fBprepare\fP prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with the Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses the
entire partition and adds a new partition to the journal disk.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy disk prepare [HOST:[DISK]]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
.sp
Subcommand \fBactivate\fP activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts in the correct
location \fB/var/lib/ceph/osd/$cluster\-$id\fP and starts \fBceph\-osd\fP\&. It is
triggered by \fBudev\fP when it sees the OSD GPT partition type or on ceph service
start with \fBceph disk activate\-all\fP\&.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy disk activate [HOST:[DISK]]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
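.sp
For example, a hypothetical disk \fBsdb\fP on a hypothetical node \fBosdnode1\fP
could be prepared and the resulting data partition (assumed here to be
\fBsdb1\fP) activated as follows:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# osdnode1, sdb and sdb1 are placeholder names
ceph\-deploy disk prepare osdnode1:sdb
ceph\-deploy disk activate osdnode1:sdb1
.ft P
.fi
.UNINDENT
.UNINDENT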
.sp
Subcommand \fBzap\fP zaps/erases/destroys a device\(aqs partition table and contents.
It actually uses \fBsgdisk\fP and its option \fB\-\-zap\-all\fP to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
\fBsgdisk\fP then uses \fB\-\-mbrtogpt\fP to convert the MBR or BSD disklabel disk to a
GPT disk. The \fBprepare\fP subcommand can now be executed which will create a new
GPT partition.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy disk zap [HOST:[DISK]]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
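.sp
For example, a previously used hypothetical disk \fBsdb\fP on node
\fBosdnode1\fP could be wiped before being prepared again:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# WARNING: zap destroys all data on the disk; names are placeholders
ceph\-deploy disk zap osdnode1:sdb
.ft P
.fi
.UNINDENT
.UNINDENT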
.SS osd
.sp
Manage OSDs by preparing a data disk on a remote host. \fBosd\fP makes use of
certain subcommands for managing OSDs.
.sp
Subcommand \fBprepare\fP prepares a directory, disk or drive for a Ceph OSD. It
first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended number, which would cause issues with
the maximum allowed PIDs in a system. It then reads the bootstrap\-osd key for
the cluster or writes the bootstrap key if not found. It then uses \fBceph\-disk\fP
utility\(aqs \fBprepare\fP subcommand to prepare the disk, journal and deploy the OSD
on the desired host. Once prepared, it gives some time to the OSD to settle and
checks for any possible errors and if found, reports to the user.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
.ft P
.fi
.UNINDENT
.UNINDENT
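.sp
For example, assuming hypothetical nodes \fBosdnode1\fP and \fBosdnode2\fP, each
with a data disk \fBsdb\fP and a separate journal device \fBsdc\fP (all names are
placeholders), both OSDs could be prepared in one call:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# HOST:DISK:JOURNAL triplets; the journal part is optional
ceph\-deploy osd prepare osdnode1:sdb:sdc osdnode2:sdb:sdc
.ft P
.fi
.UNINDENT
.UNINDENT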
.sp
Subcommand \fBactivate\fP activates the OSD prepared using \fBprepare\fP subcommand.
It actually uses \fBceph\-disk\fP utility\(aqs \fBactivate\fP subcommand with
appropriate init type based on distro to activate the OSD. Once activated, it
gives some time to the OSD to start and checks for any possible errors and if
found, reports to the user. It checks the status of the prepared OSD, checks the
OSD tree and makes sure the OSDs are up and in.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Subcommand \fBcreate\fP uses \fBprepare\fP and \fBactivate\fP subcommands to create an
OSD.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
.ft P
.fi
.UNINDENT
.UNINDENT
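.sp
For example, the prepare and activate steps can be combined for a hypothetical
node \fBosdnode1\fP with data disk \fBsdb\fP and journal device \fBsdc\fP
(placeholder names):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# create = prepare + activate in a single step
ceph\-deploy osd create osdnode1:sdb:sdc
.ft P
.fi
.UNINDENT
.UNINDENT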
.sp
Subcommand \fBlist\fP lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the \fBceph\-disk\-list\fP output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
if the OSD is in an OSD tree and prints the OSD metadata.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
.ft P
.fi
.UNINDENT
.UNINDENT
.SS admin
.sp
Push configuration and \fBclient.admin\fP key to a remote host. It takes
the \fB{cluster}.client.admin.keyring\fP from the admin node and writes it under
the \fB/etc/ceph\fP directory of the desired node.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy admin [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the desired host to be configured for Ceph administration.
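.sp
For example, to push the configuration and \fBclient.admin\fP key to three
hypothetical nodes named \fBnode1\fP, \fBnode2\fP and \fBnode3\fP:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# node1, node2 and node3 are placeholder hostnames
ceph\-deploy admin node1 node2 node3
.ft P
.fi
.UNINDENT
.UNINDENT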
.SS config
.sp
Push/pull configuration file to/from a remote host. It uses the \fBpush\fP
subcommand to take the configuration file from the admin host and write it to the
remote host under the \fB/etc/ceph\fP directory. It uses the \fBpull\fP subcommand
to do the opposite, i.e., pull the configuration file under the \fB/etc/ceph\fP
directory of the remote host to the admin node.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy config push [HOST] [HOST...]
ceph\-deploy config pull [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node where the config file will be pushed to
or pulled from.
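.sp
For example, assuming a hypothetical node named \fBnode1\fP, the cluster
configuration file could be pushed to it or pulled back from it with:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# node1 is a placeholder hostname
ceph\-deploy config push node1
ceph\-deploy config pull node1
.ft P
.fi
.UNINDENT
.UNINDENT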
.SS uninstall
.sp
Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls Ceph packages from it. However, some
dependencies like \fBlibrbd1\fP and \fBlibrados2\fP will not be removed because
they can cause issues with \fBqemu\-kvm\fP\&.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy uninstall [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node from which Ceph will be uninstalled.
.SS purge
.sp
Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls Ceph packages and purges
all data. However, some dependencies like \fBlibrbd1\fP and \fBlibrados2\fP will
not be removed because they can cause issues with \fBqemu\-kvm\fP\&.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy purge [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node from which Ceph will be purged.
.SS purgedata
.sp
Purge (delete, destroy, discard, shred) any Ceph data from \fB/var/lib/ceph\fP\&.
Once it detects the platform and distro of the desired host, it first checks if
Ceph is still installed on the selected host and if installed, it won\(aqt purge
data from it. If Ceph is already uninstalled from the host, it tries to remove
the contents of \fB/var/lib/ceph\fP\&. If it fails, then probably OSDs are still
mounted and need to be unmounted to continue. It unmounts the OSDs, tries to
remove the contents of \fB/var/lib/ceph\fP again and checks for errors. It also
removes the contents of \fB/etc/ceph\fP\&. Once all steps are successfully
completed, all the Ceph data from the selected host is removed.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy purgedata [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname of the node from which the Ceph data will be purged.
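.sp
For example, one possible teardown sequence for a hypothetical node \fBnode1\fP
might be:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# node1 is a placeholder hostname; purge removes the packages
# so that purgedata can then remove the remaining data
ceph\-deploy purge node1
ceph\-deploy purgedata node1
ceph\-deploy forgetkeys
.ft P
.fi
.UNINDENT
.UNINDENT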
.SS forgetkeys
.sp
Remove authentication keys from the local directory. It removes all the
authentication keys, i.e., the monitor keyring, the client.admin keyring, and
the bootstrap\-osd and bootstrap\-mds keyrings from the node.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy forgetkeys
.ft P
.fi
.UNINDENT
.UNINDENT
.SS pkg
.sp
Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options \fI\%--install\fP and
\fI\%--remove\fP are used for this purpose.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy pkg \-\-install [PKGs] [HOST] [HOST...]
ceph\-deploy pkg \-\-remove [PKGs] [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [PKGs] is a comma\-separated list of package names and [HOST] is the
hostname of the remote node where the packages are to be installed or removed
from.
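.sp
For example, hypothetical packages \fBvim\fP and \fBwget\fP could be installed
on, or removed from, a hypothetical node \fBnode1\fP as follows:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# package and host names are placeholders
ceph\-deploy pkg \-\-install vim,wget node1
ceph\-deploy pkg \-\-remove vim,wget node1
.ft P
.fi
.UNINDENT
.UNINDENT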
.SS calamari
.sp
Install and configure Calamari nodes. It first checks if the distro is supported
for Calamari installation by ceph\-deploy. An argument \fBconnect\fP is used for
installation and configuration. It checks for the \fBceph\-deploy\fP configuration
file (cd_conf) and the Calamari release repo or \fBcalamari\-minion\fP repo. It
relies on the default for repo installation as it doesn\(aqt install Ceph unless
specified otherwise. An \fBoptions\fP dictionary is also defined because
\fBceph\-deploy\fP pops items internally which causes issues when those items are
needed to be available for every host. If the distro is Debian/Ubuntu, it is
ensured that the
proxy is disabled for the \fBcalamari\-minion\fP repo. The \fBcalamari\-minion\fP
package is then installed and custom repository files are added. The minion
config is placed prior to installation so that it is present when the minion
first starts. The config directory and Calamari salt config are created, and the
salt\-minion package is installed. If the distro is Red Hat/CentOS, the
salt\-minion service needs to be started.
.sp
Usage:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
ceph\-deploy calamari {connect} [HOST] [HOST...]
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Here, [HOST] is the hostname where Calamari is to be installed.
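.sp
For example, to connect two hypothetical nodes \fBnode1\fP and \fBnode2\fP to a
Calamari master:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# node1 and node2 are placeholder hostnames
ceph\-deploy calamari connect node1 node2
.ft P
.fi
.UNINDENT
.UNINDENT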
.sp
An option \fB\-\-release\fP can be used to specify a given release from the
repositories defined in \fBceph\-deploy\fP\(aqs configuration. Defaults to
\fBcalamari\-minion\fP\&.
.sp
Another option \fI\%--master\fP can also be used with this command.
.SH OPTIONS
.INDENT 0.0
.TP
.B \-\-version
The currently installed version of \fBceph\-deploy\fP\&.
.UNINDENT
.INDENT 0.0
.TP
@ -470,7 +718,7 @@ Name of the cluster.
.INDENT 0.0
.TP
.B \-\-ceph\-conf
Use (or reuse) a given \fBceph.conf\fP file.
.UNINDENT
.INDENT 0.0
.TP
@ -480,7 +728,7 @@ Do not attempt to copy ssh keys.
.INDENT 0.0
.TP
.B \-\-fsid
Provide an alternate FSID for \fBceph.conf\fP generation.
.UNINDENT
.INDENT 0.0
.TP
@ -494,11 +742,6 @@ Specify the public network for a cluster.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-testing
Install the latest development release.
.UNINDENT
@ -555,17 +798,17 @@ Destroy the partition table and content of a disk.
.INDENT 0.0
.TP
.B \-\-fs\-type
Filesystem to use to format disk \fB(xfs, btrfs or ext4)\fP\&.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-dmcrypt
Encrypt [data\-path] and/or journal devices with \fBdm\-crypt\fP\&.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-dmcrypt\-key\-dir
Directory where \fBdm\-crypt\fP keys are stored.
.UNINDENT
.INDENT 0.0
.TP
@ -579,12 +822,6 @@ Comma\-separated package(s) to remove from remote hosts.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-master
The domain for the Calamari master server.
.UNINDENT