:orphan:

=====================================
 ceph-deploy -- Ceph deployment tool
=====================================

.. program:: ceph-deploy

Synopsis
========

| **ceph-deploy** **new** [*initial-monitor-node(s)*]

| **ceph-deploy** **install** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **mon** *create-initial*

| **ceph-deploy** **osd** *prepare* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *activate* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **osd** *create* [*ceph-node*]:[*dir-path*]

| **ceph-deploy** **admin** [*admin-node*] [*ceph-node*...]

| **ceph-deploy** **purgedata** [*ceph-node*] [*ceph-node*...]

| **ceph-deploy** **forgetkeys**

Description
===========

:program:`ceph-deploy` is a tool which allows easy and quick deployment of a
Ceph cluster without involving complex and detailed manual configuration. It
uses ssh to gain access to other Ceph nodes from the admin node, sudo for
administrator privileges on them, and the underlying Python scripts to automate
the manual process of Ceph installation on each node from the admin node itself.
It can be easily run on a workstation and doesn't require servers, databases or
any other automated tools. With :program:`ceph-deploy`, it is really easy to set
up and take down a cluster. However, it is not a generic deployment tool. It is
a specific tool which is designed for those who want to get Ceph up and running
quickly with only the unavoidable initial configuration settings and without the
overhead of installing other tools like ``Chef``, ``Puppet`` or ``Juju``. Those
who want to customize security settings, partitions or directory locations and
want to set up a cluster following detailed manual steps should use other tools,
i.e., ``Chef``, ``Puppet``, ``Juju`` or ``Crowbar``.

With :program:`ceph-deploy`, you can install Ceph packages on remote nodes,
create a cluster, add monitors, gather/forget keys, add OSDs and metadata
servers, configure admin hosts or take down the cluster.

Commands
========

new
---

Start deploying a new cluster and write a configuration file and keyring for it.
It tries to copy ssh keys from admin node to gain passwordless ssh to monitor
node(s), validates host IP, creates a cluster with a new initial monitor node or
nodes for monitor quorum, a ceph configuration file, a monitor secret keyring and
a log file for the new cluster. It populates the newly created Ceph configuration
file with ``fsid`` of cluster, hostnames and IP addresses of initial monitor
members under ``[global]`` section.

Usage::

    ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (short hostname, i.e., ``hostname -s``).
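
For example, with three hypothetical monitor hosts named ``mon1``, ``mon2`` and
``mon3``, a typical invocation might be::

    ceph-deploy new mon1 mon2 mon3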

Other options like :option:`--no-ssh-copykey`, :option:`--fsid`,
:option:`--cluster-network` and :option:`--public-network` can also be used with
this command.

If more than one network interface is used, the ``public network`` setting has
to be added under the ``[global]`` section of the Ceph configuration file. If
the public subnet is given, the ``new`` command will choose the one IP from the
remote host that exists within the subnet range. The public network can also be
set at runtime using the :option:`--public-network` option with the command as
mentioned above.
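
A minimal sketch, assuming a public subnet of ``192.168.1.0/24`` and a
hypothetical monitor host ``mon1``::

    ceph-deploy new --public-network 192.168.1.0/24 mon1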

install
-------

Install Ceph packages on remote hosts. As a first step it installs
``yum-plugin-priorities`` on admin and other nodes using passwordless ssh and
sudo so that Ceph packages from the upstream repository get more priority. It
then detects the platform and distribution for the hosts and installs Ceph
normally by downloading distro-compatible packages if an adequate repo for Ceph
is already added. The ``--release`` flag is used to get the latest release for
installation. During detection of platform and distribution before installation,
if it finds the ``distro.init`` to be ``sysvinit`` (Fedora, CentOS/RHEL etc.),
it doesn't allow installation with a custom cluster name and uses the default
name ``ceph`` for the cluster.

If the user explicitly specifies a custom repo url with :option:`--repo-url` for
installation, anything detected from the configuration will be overridden and
the custom repository location will be used for installation of Ceph packages.
If required, valid custom repositories are also detected and installed. In case
of installation from a custom repo, a boolean is used to determine the logic
needed to proceed with a custom repo installation. A custom repo install helper
is used that goes through config checks to retrieve repos (and any extra repos
defined) and installs them. ``cd_conf`` is the object built from ``argparse``
that holds the flags and information needed to determine what metadata from the
configuration is to be used.

A user can also opt to install only the repository, without installing Ceph and
its dependencies, by using the :option:`--repo` option.

Usage::

    ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

An option ``--release`` is used to install a release known as CODENAME
(default: firefly).
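
For example, installing a specific release on two hypothetical nodes ``node1``
and ``node2`` might look like::

    ceph-deploy install --release firefly node1 node2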

Other options like :option:`--testing`, :option:`--dev`, :option:`--adjust-repos`,
:option:`--no-adjust-repos`, :option:`--repo`, :option:`--local-mirror`,
:option:`--repo-url` and :option:`--gpg-url` can also be used with this command.

mds
---

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and
the ``mds`` command is used to create one on the desired host node. It uses the
subcommand ``create`` to do so. ``create`` first gets the hostname and distro
information of the desired mds host. It then tries to read the ``bootstrap-mds``
key for the cluster and deploy it in the desired host. The key generally has a
format of ``{cluster}.bootstrap-mds.keyring``. If it doesn't find a keyring,
it runs ``gatherkeys`` to get the keyring. It then creates an mds on the desired
host under the path ``/var/lib/ceph/mds/`` in ``/var/lib/ceph/mds/{cluster}-{name}``
format and a bootstrap keyring under ``/var/lib/ceph/bootstrap-mds/`` in
``/var/lib/ceph/bootstrap-mds/{cluster}.keyring`` format. It then runs appropriate
commands based on ``distro.init`` to start the ``mds``.

Usage::

    ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
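
For example, creating an mds on a hypothetical host ``node1``, and another on
``node2`` with a custom daemon name, might look like::

    ceph-deploy mds create node1 node2:mds-a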

mon
---

Deploy Ceph monitor on remote hosts. ``mon`` makes use of certain subcommands
to deploy Ceph monitors on other nodes.

Subcommand ``create-initial`` deploys for monitors defined in
``mon initial members`` under the ``[global]`` section of the Ceph configuration
file, waits until they form quorum and then gathers the keys, reporting the
monitor status along the way. If the monitors don't form quorum the command will
eventually time out.

Usage::

    ceph-deploy mon create-initial

Subcommand ``create`` is used to deploy Ceph monitors by explicitly specifying
the hosts which are desired to be made monitors. If no hosts are specified it
will default to use the ``mon initial members`` defined under the ``[global]``
section of the Ceph configuration file. ``create`` first detects platform and
distro for the desired hosts and checks if the hostname is compatible for
deployment. It then uses the monitor keyring, initially created using the
``new`` command, and deploys the monitor in the desired host. If multiple hosts
were specified during the ``new`` command, i.e., if there are multiple hosts in
``mon initial members`` and multiple keyrings were created, then a concatenated
keyring is used for deployment of monitors. In this process a keyring parser is
used which looks for ``[entity]`` sections in monitor keyrings and returns a
list of those sections. A helper is then used to collect all keyrings into a
single blob that will be used to inject it to monitors with :option:`--mkfs` on
remote nodes. All keyring files are concatenated to be in a directory ending
with ``.keyring``. During this process the helper uses the list of sections
returned by the keyring parser to check if an entity is already present in a
keyring and, if not, adds it. The concatenated keyring is used for deployment
of monitors to the desired multiple hosts.

Usage::

    ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).
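
For example, deploying monitors on two hypothetical hosts ``mon1`` and ``mon2``::

    ceph-deploy mon create mon1 mon2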

Subcommand ``add`` is used to add a monitor to an existing cluster. It first
detects platform and distro for the desired host and checks if the hostname is
compatible for deployment. It then uses the monitor keyring, ensures
configuration for the new monitor host and adds the monitor to the cluster. If
the section for the monitor exists and defines a ``mon addr``, that address will
be used; otherwise it will fall back to resolving the hostname to an IP. If
:option:`--address` is used it will override all other options. After adding the
monitor to the cluster, it gives it some time to start. It then looks for any
monitor errors and checks monitor status. Monitor errors arise if the monitor is
not added in ``mon initial members``, if it doesn't exist in ``monmap`` and if
neither ``public_addr`` nor ``public_network`` keys were defined for monitors.
Under such conditions, monitors may not be able to form quorum. Monitor status
tells if the monitor is up and running normally. The status is checked by
running ``ceph daemon mon.hostname mon_status`` on the remote end, which
provides the output and returns a boolean status of what is going on. ``False``
means the monitor is not healthy even if it is up and running, while ``True``
means the monitor is up and running correctly.

Usage::

    ceph-deploy mon add [HOST]

    ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor
node. Please note, unlike other ``mon`` subcommands, only one node can be
specified at a time.
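
For example, adding a hypothetical monitor host ``mon4`` with an explicit IP
address might look like::

    ceph-deploy mon add mon4 --address 192.168.1.14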

Subcommand ``destroy`` is used to completely remove monitors on remote hosts.
It takes hostnames as arguments. It stops the monitor, verifies that the
``ceph-mon`` daemon has really stopped, creates an archive directory
``mon-remove`` under ``/var/lib/ceph/``, archives the old monitor directory in
``{cluster}-{hostname}-{stamp}`` format in it and removes the monitor from the
cluster by running the ``ceph remove...`` command.

Usage::

    ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.

gatherkeys
----------

Gather authentication keys for provisioning new nodes. It takes hostnames as
arguments. It checks for and fetches the ``client.admin`` keyring, the monitor
keyring and the ``bootstrap-mds``/``bootstrap-osd`` keyrings from the monitor
host. These authentication keys are used when new ``monitors/OSDs/MDS`` are
added to the cluster.

Usage::

    ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from where keys are to be pulled.
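
For example, pulling the keys from a hypothetical monitor host ``mon1``::

    ceph-deploy gatherkeys mon1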

disk
----

Manage disks on a remote host. It actually triggers the ``ceph-disk`` utility
and its subcommands to manage disks.

Subcommand ``list`` lists disk partitions and Ceph OSDs.

Usage::

    ceph-deploy disk list [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
creates a GPT partition, marks the partition with the Ceph type uuid, creates a
file system, marks the file system as ready for Ceph consumption, uses the
entire partition and adds a new partition to the journal disk.

Usage::

    ceph-deploy disk prepare [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``activate`` activates the Ceph OSD. It mounts the volume in a
temporary location, allocates an OSD id (if needed), remounts in the correct
location ``/var/lib/ceph/osd/$cluster-$id`` and starts ``ceph-osd``. It is
triggered by ``udev`` when it sees the OSD GPT partition type or on ceph service
start with ``ceph disk activate-all``.

Usage::

    ceph-deploy disk activate [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.

Subcommand ``zap`` zaps/erases/destroys a device's partition table and contents.
It actually uses ``sgdisk`` and its option ``--zap-all`` to destroy both GPT and
MBR data structures so that the disk becomes suitable for repartitioning.
``sgdisk`` then uses ``--mbrtogpt`` to convert the MBR or BSD disklabel disk to
a GPT disk. The ``prepare`` subcommand can now be executed, which will create a
new GPT partition.

Usage::

    ceph-deploy disk zap [HOST:[DISK]]

Here, [HOST] is the hostname of the node and [DISK] is the disk name or path.
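
For example, wiping a hypothetical disk ``sdb`` on host ``osdnode1`` before
repartitioning it might look like::

    ceph-deploy disk zap osdnode1:sdb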

osd
---

Manage OSDs by preparing a data disk on a remote host. ``osd`` makes use of
certain subcommands for managing OSDs.

Subcommand ``prepare`` prepares a directory, disk or drive for a Ceph OSD. It
first checks against multiple OSDs getting created and warns about the
possibility of more than the recommended number, which would cause issues with
the max allowed PIDs in a system. It then reads the bootstrap-osd key for the
cluster, or writes the bootstrap key if not found. It then uses the
:program:`ceph-disk` utility's ``prepare`` subcommand to prepare the disk and
journal and deploy the OSD on the desired host. Once prepared, it gives some
time to the OSD to settle and checks for any possible errors and, if found,
reports them to the user.

Usage::

    ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
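
For example, preparing a hypothetical data disk ``sdb`` with a journal on
``sdc`` on host ``osdnode1`` might look like::

    ceph-deploy osd prepare osdnode1:sdb:sdc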

Subcommand ``activate`` activates the OSD prepared using the ``prepare``
subcommand. It actually uses the :program:`ceph-disk` utility's ``activate``
subcommand with the appropriate init type based on distro to activate the OSD.
Once activated, it gives some time to the OSD to start and checks for any
possible errors and, if found, reports them to the user. It checks the status
of the prepared OSD, checks the OSD tree and makes sure the OSDs are up and in.

Usage::

    ceph-deploy osd activate HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

Subcommand ``create`` uses ``prepare`` and ``activate`` subcommands to create an
OSD.

Usage::

    ceph-deploy osd create HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]
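
For example, preparing and activating in one step on a hypothetical host::

    ceph-deploy osd create osdnode1:sdb:sdc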

Subcommand ``list`` lists disk partitions, Ceph OSDs and prints OSD metadata.
It gets the osd tree from a monitor host, uses the ``ceph-disk-list`` output
and gets the mount point by matching the line where the partition mentions
the OSD name, reads metadata from files, checks if a journal path exists,
if the OSD is in the OSD tree, and prints the OSD metadata.

Usage::

    ceph-deploy osd list HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL]...]

admin
-----

Push configuration and the ``client.admin`` key to a remote host. It takes
the ``{cluster}.client.admin.keyring`` from the admin node and writes it under
the ``/etc/ceph`` directory of the desired node.

Usage::

    ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.
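
For example, pushing the configuration and admin key to two hypothetical nodes::

    ceph-deploy admin node1 node2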

config
------

Push/pull a configuration file to/from a remote host. It uses the ``push``
subcommand to take the configuration file from the admin host and write it to
the remote host under the ``/etc/ceph`` directory. It uses the ``pull``
subcommand to do the opposite, i.e., pull the configuration file under the
``/etc/ceph`` directory of the remote host to the admin node.

Usage::

    ceph-deploy config push [HOST] [HOST...]

    ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed
to or pulled from.
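
For example, pushing the current configuration file to a hypothetical node
``node1`` and later pulling it back::

    ceph-deploy config push node1
    ceph-deploy config pull node1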

uninstall
---------

Remove Ceph packages from remote hosts. It detects the platform and distro of
the selected host and uninstalls Ceph packages from it. However, some
dependencies like ``librbd1`` and ``librados2`` will not be removed because
they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.

purge
-----

Remove Ceph packages from remote hosts and purge all data. It detects the
platform and distro of the selected host, uninstalls Ceph packages and purges
all data. However, some dependencies like ``librbd1`` and ``librados2`` will
not be removed because they can cause issues with ``qemu-kvm``.

Usage::

    ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be purged.

purgedata
---------

Purge (delete, destroy, discard, shred) any Ceph data from ``/var/lib/ceph``.
Once it detects the platform and distro of the desired host, it first checks if
Ceph is still installed on the selected host; if installed, it won't purge data
from it. If Ceph is already uninstalled from the host, it tries to remove the
contents of ``/var/lib/ceph``. If it fails, then probably OSDs are still mounted
and need to be unmounted to continue. It unmounts the OSDs, tries to remove the
contents of ``/var/lib/ceph`` again and checks for errors. It also removes the
contents of ``/etc/ceph``. Once all steps are successfully completed, all the
Ceph data from the selected host has been removed.

Usage::

    ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.
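
For example, removing the packages and then any remaining data from a
hypothetical node ``node1``::

    ceph-deploy purge node1
    ceph-deploy purgedata node1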

forgetkeys
----------

Remove authentication keys from the local directory. It removes all the
authentication keys, i.e., the monitor keyring, client.admin keyring,
bootstrap-osd and bootstrap-mds keyrings, from the node.

Usage::

    ceph-deploy forgetkeys

pkg
---

Manage packages on remote hosts. It is used for installing or removing packages
from remote hosts. The package names for installation or removal are to be
specified after the command. Two options :option:`--install` and
:option:`--remove` are used for this purpose.

Usage::

    ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

    ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the
hostname of the remote node where packages are to be installed or removed from.
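
For example, installing two hypothetical packages on a node named ``node1``
might look like::

    ceph-deploy pkg --install wget,curl node1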

calamari
--------

Install and configure Calamari nodes. It first checks if the distro is supported
for Calamari installation by ceph-deploy. An argument ``connect`` is used for
installation and configuration. It checks for the ``ceph-deploy`` configuration
file (cd_conf) and the Calamari release repo or ``calamari-minion`` repo. It
relies on the default for repo installation as it doesn't install Ceph unless
specified otherwise. An ``options`` dictionary is also defined because
``ceph-deploy`` pops items internally, which causes issues when those items are
needed to be available for every host. If the distro is Debian/Ubuntu, it is
ensured that the proxy is disabled for the ``calamari-minion`` repo. The
``calamari-minion`` package is then installed and custom repository files are
added. The minion config is placed prior to installation so that it is present
when the minion first starts. The config directory and calamari salt config are
created and the salt-minion package is installed. If the distro is
Redhat/CentOS, the salt-minion service needs to be started.

Usage::

    ceph-deploy calamari {connect} [HOST] [HOST...]

Here, [HOST] is the hostname where Calamari is to be installed.

An option ``--release`` can be used to use a given release from repositories
defined in :program:`ceph-deploy`'s configuration. Defaults to ``calamari-minion``.

Another option :option:`--master` can also be used with this command.

Options
=======

.. option:: --address

   IP address of the host node to be added to the cluster.

.. option:: --adjust-repos

   Install packages modifying source repos.

.. option:: --ceph-conf

   Use (or reuse) a given ``ceph.conf`` file.

.. option:: --cluster

   Name of the cluster.

.. option:: --dev

   Install a bleeding-edge build from a Git branch or tag (default: master).

.. option:: --cluster-network

   Specify the (internal) cluster network.

.. option:: --dmcrypt

   Encrypt [data-path] and/or journal devices with ``dm-crypt``.

.. option:: --dmcrypt-key-dir

   Directory where ``dm-crypt`` keys are stored.

.. option:: --install

   Comma-separated package(s) to install on remote hosts.

.. option:: --fs-type

   Filesystem to use to format the disk (``xfs``, ``btrfs`` or ``ext4``). Note
   that support for ``btrfs`` and ``ext4`` is no longer tested or recommended;
   please use ``xfs``.

.. option:: --fsid

   Provide an alternate FSID for ``ceph.conf`` generation.

.. option:: --gpg-url

   Specify a GPG key url to be used with custom repos (defaults to ceph.com).

.. option:: --keyrings

   Concatenate multiple keyrings to be seeded on new monitors.

.. option:: --local-mirror

   Fetch packages and push them to hosts for a local repo mirror.

.. option:: --master

   The domain for the Calamari master server.

.. option:: --mkfs

   Inject keys to MONs on remote nodes.

.. option:: --no-adjust-repos

   Install packages without modifying source repos.

.. option:: --no-ssh-copykey

   Do not attempt to copy ssh keys.

.. option:: --overwrite-conf

   Overwrite an existing conf file on remote host (if present).

.. option:: --public-network

   Specify the public network for a cluster.

.. option:: --remove

   Comma-separated package(s) to remove from remote hosts.

.. option:: --repo

   Install repo files only (skips package installation).

.. option:: --repo-url

   Specify a repo url that mirrors/contains Ceph packages.

.. option:: --testing

   Install the latest development release.

.. option:: --username

   The username to connect to the remote host.

.. option:: --version

   The current installed version of :program:`ceph-deploy`.

.. option:: --zap-disk

   Destroy the partition table and content of a disk.

Availability
============

:program:`ceph-deploy` is part of Ceph, a massively scalable, open-source,
distributed storage system. Please refer to the documentation at
https://ceph.com/ceph-deploy/docs for more information.

See also
========

:doc:`ceph-mon <ceph-mon>`\(8),
:doc:`ceph-osd <ceph-osd>`\(8),
:doc:`ceph-disk <ceph-disk>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)