.. _cephadm-adoption:

Converting an existing cluster to cephadm
=========================================

Cephadm allows you to convert an existing Ceph cluster that
has been deployed with ceph-deploy, ceph-ansible, DeepSea, or similar tools.

Limitations
-----------

* Cephadm only works with BlueStore OSDs. If there are FileStore OSDs
  in your cluster you cannot manage them.

Preparation
-----------

#. Get the ``cephadm`` command line tool on each host in the existing
   cluster. See :ref:`get-cephadm`.

#. Prepare each host for use by ``cephadm``::

     # cephadm prepare-host
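
   The step above must be run on every host. A small wrapper can fan the
   command out over SSH. This is only a sketch, not part of the official
   procedure: the hostnames are hypothetical, root SSH access is assumed,
   and the ``echo`` keeps it a dry run that only prints the commands it
   would execute.

   ```shell
   # Print (dry run) the prepare-host command for each host; drop the
   # "echo" to actually execute the command over SSH.
   prepare_hosts() {
       for host in "$@"; do
           echo "ssh root@$host cephadm prepare-host"
       done
   }

   prepare_hosts host1 host2 host3   # hypothetical hostnames
   ```

   Remove the ``echo`` only after you have confirmed the printed
   commands look right.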

#. Determine which Ceph version you will use. You can use any Octopus (15.2.z)
   release or later. For example, ``docker.io/ceph/ceph:v15.2.0``. The default
   will be the latest stable release, but if you are upgrading from an earlier
   release at the same time be sure to refer to the upgrade notes for any
   special steps to take while upgrading.

   The image is passed to cephadm with::

     # cephadm --image $IMAGE <rest of command goes here>

#. Cephadm can provide a list of all Ceph daemons on the current host::

     # cephadm ls

   Before starting, you should see that all existing daemons have a
   style of ``legacy`` in the resulting output. As the adoption
   process progresses, adopted daemons will appear with the style
   ``cephadm:v1``.
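
   To track progress, you can count how many daemons still report the
   ``legacy`` style in the JSON that ``cephadm ls`` emits. The sketch
   below uses a small sample document in place of real ``cephadm ls``
   output (on a host you would capture it with ``out=$(cephadm ls)``);
   the grep pattern is an assumption about the JSON field layout.

   ```shell
   # Count entries whose "style" field is still "legacy" in the sample
   # JSON; the pattern tolerates an optional space after the colon.
   out='[{"name":"mon.a","style":"legacy"},{"name":"osd.1","style":"cephadm:v1"}]'
   legacy=$(printf '%s' "$out" | grep -o '"style": *"legacy"' | wc -l | tr -d ' ')
   echo "daemons not yet adopted: $legacy"
   ```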

Adoption process
----------------

#. Ensure the ceph configuration is migrated to use the cluster config database.
   If the ``/etc/ceph/ceph.conf`` is identical on each host, then on one host::

     # ceph config assimilate-conf -i /etc/ceph/ceph.conf

   If there are configuration variations between hosts, you may need to
   repeat this command on each host. You can view the cluster's
   configuration to confirm that it is complete with::

     # ceph config dump
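
   For the varying-config case, one approach is to stream each host's
   copy of ``ceph.conf`` over SSH into ``assimilate-conf``. This is a
   sketch under assumptions: the hostnames are hypothetical, root SSH
   access is assumed, ``-i -`` reading from stdin is an assumption you
   should verify on your release, and the ``echo`` keeps it a dry run
   that only prints the pipelines to run.

   ```shell
   # Print (dry run) one assimilate pipeline per host; each would feed
   # that host's ceph.conf into the cluster config database.
   assimilate_all() {
       for host in "$@"; do
           echo "ssh root@$host cat /etc/ceph/ceph.conf | ceph config assimilate-conf -i -"
       done
   }

   assimilate_all host1 host2   # hypothetical hostnames
   ```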

#. Adopt each monitor::

     # cephadm adopt --style legacy --name mon.<hostname>

   Each legacy monitor should stop, quickly restart as a cephadm
   container, and rejoin the quorum.

#. Adopt each manager::

     # cephadm adopt --style legacy --name mgr.<hostname>

#. Enable cephadm::

     # ceph mgr module enable cephadm
     # ceph orch set backend cephadm

#. Generate an SSH key::

     # ceph cephadm generate-key
     # ceph cephadm get-pub-key > ceph.pub

#. Install the cluster SSH key on each host in the cluster::

     # ssh-copy-id -f -i ceph.pub root@<host>

   .. note::
     It is also possible to import an existing ssh key. See
     :ref:`ssh errors <cephadm-ssh-errors>` in the troubleshooting
     document for instructions describing how to import existing
     ssh keys.
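
   The ``ssh-copy-id`` invocation above is per host; a loop can cover
   the whole cluster. This is a sketch with hypothetical hostnames, and
   the ``echo`` keeps it a dry run that only prints the commands.

   ```shell
   # Print (dry run) one ssh-copy-id command per host; drop the "echo"
   # to actually install the public key.
   install_key() {
       pubkey=$1; shift
       for host in "$@"; do
           echo "ssh-copy-id -f -i $pubkey root@$host"
       done
   }

   install_key ceph.pub host1 host2 host3   # hypothetical hostnames
   ```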

#. Tell cephadm which hosts to manage::

     # ceph orch host add <hostname> [ip-address]

   This will perform a ``cephadm check-host`` on each host before
   adding it to ensure it is working. The IP address argument is only
   required if DNS does not allow you to connect to each host by its
   short name.

#. Verify that the adopted monitor and manager daemons are visible::

     # ceph orch ps

#. Adopt all OSDs in the cluster::

     # cephadm adopt --style legacy --name <name>

   For example::

     # cephadm adopt --style legacy --name osd.1
     # cephadm adopt --style legacy --name osd.2
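
   With many OSDs, generating the adopt commands from a list of ids
   avoids typos. This is a sketch: the ids are hypothetical (take the
   real ones from ``ceph osd ls`` or ``cephadm ls``), and the ``echo``
   keeps it a dry run that only prints the commands.

   ```shell
   # Print (dry run) one adopt command per OSD id; drop the "echo" to
   # actually adopt each OSD on the host where it runs.
   adopt_osds() {
       for id in "$@"; do
           echo "cephadm adopt --style legacy --name osd.$id"
       done
   }

   adopt_osds 1 2 3   # hypothetical OSD ids
   ```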

#. Redeploy MDS daemons by telling cephadm how many daemons to run for
   each file system. You can list file systems by name with ``ceph fs
   ls``. For each file system::

     # ceph orch apply mds <fs-name> [--placement=<placement>]

   For example, in a cluster with a single file system called `foo`::

     # ceph fs ls
     name: foo, metadata pool: foo_metadata, data pools: [foo_data ]
     # ceph orch apply mds foo 2

   Wait for the new MDS daemons to start with::

     # ceph orch ps --daemon-type mds

   Finally, stop and remove the legacy MDS daemons::

     # systemctl stop ceph-mds.target
     # rm -rf /var/lib/ceph/mds/ceph-*

#. Redeploy RGW daemons. Cephadm manages RGW daemons by zone. For each
   zone, deploy new RGW daemons with cephadm::

     # ceph orch apply rgw <realm> <zone> [--subcluster=<subcluster>] [--port=<port>] [--ssl] [--placement=<placement>]

   where *<placement>* can be a simple daemon count, or a list of
   specific hosts (see :ref:`orchestrator-cli-placement-spec`).

   Once the daemons have started and you have confirmed they are functioning,
   stop and remove the old legacy daemons::

     # systemctl stop ceph-rgw.target
     # rm -rf /var/lib/ceph/radosgw/ceph-*

   For adopting single-site systems without a realm, see also
   :ref:`rgw-multisite-migrate-from-single-site`.

#. Check the ``ceph health detail`` output for cephadm warnings about
   stray cluster daemons or hosts that are not yet managed.
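
   These warnings surface as the health codes ``CEPHADM_STRAY_DAEMON``
   and ``CEPHADM_STRAY_HOST``. The sketch below scans saved health
   output for them; the sample text stands in for real output, which on
   a cluster you would capture with ``health=$(ceph health detail)``.

   ```shell
   # Count lines mentioning a cephadm stray-warning code in the sample
   # health output; "|| true" keeps the script alive when grep finds none.
   health='[WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm'
   strays=$(printf '%s\n' "$health" | grep -c 'CEPHADM_STRAY' || true)
   echo "stray warnings found: $strays"
   ```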