Merge pull request #32595 from rs-fabrica/doc_install_upgrading-ceph_systemctl_use

doc/install/upgrading-ceph: systemctl in Ubuntu instructions

Reviewed-by: Kefu Chai <kchai@redhat.com>
Kefu Chai 2020-01-14 10:29:45 +08:00 committed by GitHub
commit b127d6d32e


@@ -29,19 +29,19 @@ release.
The `Upgrade Procedures`_ are relatively simple, but do look at the `release
notes document of your release`_ before upgrading. The basic process involves
three steps:

#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
   multiple hosts (using the ``ceph-deploy install`` command), or login to each
   host and upgrade the Ceph package `using your distro's package manager`_.
   For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
   look like this::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release firefly mon1 mon2 mon3

   **Note:** The ``ceph-deploy install`` command will upgrade the packages
   in the specified node(s) from the old release to the release you specify.
   There is no ``ceph-deploy upgrade`` command.

#. Log in to each Ceph node and restart each Ceph daemon.
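
Before you start (and again between steps), it can help to confirm that the
cluster is healthy. This is a suggested check, not part of the original three
steps::

    ceph health
    ceph -s        # fuller status: monitors, OSDs, placement groups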
@@ -62,7 +62,7 @@ Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::

Or::

    sudo apt-get install ceph-deploy

Or::

    sudo yum install ceph-deploy python-pushy
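
To confirm which ``ceph-deploy`` version is now installed on the admin node, a
quick check (an addition to the original text) is::

    ceph-deploy --version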
@@ -71,7 +71,7 @@ Or::

Upgrade Procedures
==================

The following sections describe the upgrade process.

.. important:: Each release of Ceph may have some additional steps. Refer to
   the `release notes document of your release`_ for details **BEFORE** you
@@ -83,16 +83,16 @@ Upgrading Monitors

To upgrade monitors, perform the following steps:

#. Upgrade the Ceph package for each daemon instance.

   You may use ``ceph-deploy`` to address all monitor nodes at once.
   For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer mon1 mon2 mon3

   You may also use the package manager for your Linux distribution on
   each individual node. To upgrade packages manually on each Debian/Ubuntu
   host, perform the following steps::

      ssh {mon-host}
@@ -102,19 +102,19 @@ To upgrade monitors, perform the following steps:

      ssh {mon-host}
      sudo yum update && sudo yum install ceph

#. Restart each monitor. For Ubuntu distributions, use::

      sudo systemctl restart ceph-mon@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart {mon-id}

   For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
   the monitor ID is usually ``mon.{hostname}``.

#. Ensure each monitor has rejoined the quorum::

      ceph mon stat
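
For a more detailed view than ``ceph mon stat``, you may also inspect the
quorum and monitor map directly; this is a supplementary check, not part of
the original steps::

    ceph quorum_status --format json-pretty
    ceph mon dump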
@@ -127,15 +127,15 @@ Upgrading an OSD

To upgrade a Ceph OSD Daemon, perform the following steps:

#. Upgrade the Ceph OSD Daemon package.

   You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
   once. For example::

      ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
      ceph-deploy install --release hammer osd1 osd2 osd3

   You may also use the package manager on each node to upgrade packages
   `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the
   following steps on each host::
@@ -148,24 +148,24 @@ To upgrade a Ceph OSD Daemon, perform the following steps:

      sudo yum update && sudo yum install ceph

#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::

      sudo systemctl restart ceph-osd@{N}.service

   For multiple OSDs on a host, you may restart all of them with systemd. ::

      sudo systemctl restart ceph-osd.target

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart N

#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::

      ceph osd stat

   Ensure that you have completed the upgrade cycle for all of your
   Ceph OSD Daemons.
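
As an additional, optional check after restarting each OSD, you can confirm
that it reports ``up`` and ``in`` and that its systemd unit is running
(``{N}`` is the OSD number, as above)::

    ceph osd tree
    sudo systemctl status ceph-osd@{N}.service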
@@ -174,8 +174,8 @@ Upgrading a Metadata Server

To upgrade a Ceph Metadata Server, perform the following steps:

#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
   address all Ceph Metadata Server nodes at once, or use the package manager
   on each node. For example::

      ceph-deploy install --release {release-name} ceph-node1
@@ -192,11 +192,11 @@ To upgrade a Ceph Metadata Server, perform the following steps:

      ssh {mon-host}
      sudo yum update && sudo yum install ceph-mds

#. Restart the metadata server. For Ubuntu, use::

      sudo systemctl restart ceph-mds@{hostname}.service

   For CentOS/Red Hat/Debian distributions, use::

      sudo /etc/init.d/ceph restart mds.{hostname}
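
If you want an extra verification that the metadata server came back after
the restart (not part of the original procedure), query its state and unit
status::

    ceph mds stat
    sudo systemctl status ceph-mds@{hostname}.service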
@@ -216,7 +216,7 @@ Once you have upgraded the packages and restarted daemons on your Ceph
cluster, we recommend upgrading ``ceph-common`` and client libraries
(``librbd1`` and ``librados2``) on your client nodes too.

#. Upgrade the package::

      ssh {client-host}
      sudo apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
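
To confirm that the client packages were actually upgraded (a suggested
check, assuming a Debian/Ubuntu client host), query the installed versions::

    ceph --version
    dpkg -l ceph-common librados2 librbd1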