doc/cephadm: fix formatting, typos

No need for [monitor 1] when accessing the CLI--this can happen from
any node or container that has a working CLI.

Signed-off-by: Sage Weil <sage@redhat.com>
Sage Weil 2020-03-07 09:13:23 -06:00
parent e025445232
commit e0161132bc


@@ -55,12 +55,11 @@ your Linux distribution::
Bootstrap a new cluster
=======================
-To create a new cluster, you need to know:
-* Which *IP address* to use for the cluster's first monitor. This is
-  normally just the IP for the first cluster node. If there are
-  multiple networks and interfaces, be sure to choose one that will be
-  accessible by any hosts accessing the Ceph cluster.
+To create a new cluster, you need to know which *IP address* to use
+for the cluster's first monitor. This is normally just the IP for the
+first cluster node. If there are multiple networks and interfaces, be
+sure to choose one that will be accessible by any hosts accessing the
+Ceph cluster.
To bootstrap the cluster run the following command::
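For example, assuming the first monitor host's IP address is ``192.168.0.1``, the bootstrap invocation takes roughly this form (the address is only a placeholder; substitute the one chosen above)::

  # cephadm bootstrap --mon-ip 192.168.0.1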
@@ -109,19 +108,19 @@ Watching cephadm log messages
Cephadm logs to the ``cephadm`` cluster log channel, which means you can monitor progress in realtime with::
-ceph -W cephadm
+# ceph -W cephadm
By default it will show info-level events and above. To see
-debug-level messages too,::
+debug-level messages too::
-ceph config set mgr mgr/cephadm/log_to_cluster_level debug
-ceph -W cephadm --watch-debug
+# ceph config set mgr mgr/cephadm/log_to_cluster_level debug
+# ceph -W cephadm --watch-debug
-Be careful: the debug messagse are very verbose!
+Be careful: the debug messages are very verbose!
You can see recent events with::
-ceph log last cephadm
+# ceph log last cephadm
These events are also logged to the ``ceph.cephadm.log`` file on
monitor hosts and/or to the monitor-daemon stderr.
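When the extra detail is no longer needed, the logging level can presumably be set back to the ``info`` default mentioned above::

  # ceph config set mgr mgr/cephadm/log_to_cluster_level info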
@@ -138,7 +137,7 @@ For each new host you'd like to add to the cluster, you need to do two things:
#. Tell Ceph that the new node is part of the cluster::
-[monitor 1] # ceph orch host add *newhost*
+# ceph orch host add *newhost*
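For example, with a hypothetical new host named ``host2``, the command becomes::

  # ceph orch host add host2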
Deploying additional monitors
=============================
@@ -164,18 +163,18 @@ To add OSDs to the cluster, you have two options:
1) You need to know the device name for the block device (hard disk or SSD)
that will be used. Then,::
-[monitor 1] # ceph orch osd create *<host>*:*<path-to-device>*
+# ceph orch osd create *<host>*:*<path-to-device>*
For example, to deploy an OSD on host *newhost*'s SSD,::
-[monitor 1] # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
+# ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
2) You need to describe your disk setup by it's properties (Drive Groups)
Link to DriveGroup docs.::
-[monitor 1] # ceph orchestrator osd create -i my_drivegroups.yml
+# ceph orchestrator osd create -i my_drivegroups.yml
.. _drivegroups: drivegroups::
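Before creating OSDs either way, it can help to see which devices the orchestrator has discovered on each host; a command along these lines should list them::

  # ceph orch device ls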
@@ -186,7 +185,7 @@ Deploying manager daemons
It is a good idea to have at least one backup manager daemon. To
deploy one or more new manager daemons,::
-[monitor 1] # ceph orch apply mgr *<new-num-mgrs>* [*<host1>* ...]
+# ceph orch apply mgr *<new-num-mgrs>* [*<host1>* ...]
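For example, to run three manager daemons on hypothetical hosts ``host1``, ``host2``, and ``host3``, the pattern above becomes::

  # ceph orch apply mgr 3 host1 host2 host3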
Deploying MDSs
==============