From 24b753f2c8111305cf73de9b40a393c0c1a9bae7 Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Fri, 23 Jul 2021 09:54:14 +0200
Subject: [PATCH 1/2] doc/cephadm: Move some sections from mon.rst to
 service-management.rst

Avoid duplication and instead only reference the corresponding sections.

Signed-off-by: Sebastian Wagner
---
 doc/cephadm/mon.rst                | 111 +++--------------------
 doc/cephadm/service-management.rst |  86 +++++++++++++++++++++-
 2 files changed, 91 insertions(+), 106 deletions(-)

diff --git a/doc/cephadm/mon.rst b/doc/cephadm/mon.rst
index 38ae493f5ff..2121c28a7c7 100644
--- a/doc/cephadm/mon.rst
+++ b/doc/cephadm/mon.rst
@@ -26,7 +26,11 @@ default to that subnet unless cephadm is instructed to do otherwise.
 If all of the ceph monitor daemons in your cluster are in the same subnet,
 manual administration of the ceph monitor daemons is not necessary.
 ``cephadm`` will automatically add up to five monitors to the subnet, as
-needed, as new hosts are added to the cluster.
+needed, as new hosts are added to the cluster.
+
+By default, cephadm will deploy 5 daemons on arbitrary hosts. See
+:ref:`orchestrator-cli-placement-spec`_ for details of specifying
+the placement of daemons.
 
 Designating a Particular Subnet for Monitors
 --------------------------------------------
@@ -48,67 +52,18 @@ format (e.g., ``10.1.2.0/24``):
 Cephadm deploys new monitor daemons only on hosts that have IP addresses in
 the designated subnet.
 
-Changing the number of monitors from the default
-------------------------------------------------
-
-If you want to adjust the default of 5 monitors, run this command:
+You can also specify two public networks by using a list of networks:
 
 .. prompt:: bash #
 
-  ceph orch apply mon *<number>*
-
-Deploying monitors only to specific hosts
------------------------------------------
-
-To deploy monitors on a specific set of hosts, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch apply mon *<host1,host2,host3,...>*
-
-  Be sure to include the first (bootstrap) host in this list.
-
-Using Host Labels
------------------
-
-You can control which hosts the monitors run on by making use of host labels.
-To set the ``mon`` label to the appropriate hosts, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch host label add *<hostname>* mon
-
-  To view the current hosts and labels, run this command:
-
-  .. prompt:: bash #
-
-    ceph orch host ls
+   ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*
 
 For example:
 
 .. prompt:: bash #
 
-    ceph orch host label add host1 mon
-    ceph orch host label add host2 mon
-    ceph orch host label add host3 mon
-    ceph orch host ls
+   ceph config set mon public_network 10.1.2.0/24,192.168.0.1/24
 
-  .. code-block:: bash
-
-    HOST   ADDR  LABELS  STATUS
-    host1        mon
-    host2        mon
-    host3        mon
-    host4
-    host5
-
-  Tell cephadm to deploy monitors based on the label by running this command:
-
-  .. prompt:: bash #
-
-    ceph orch apply mon label:mon
-
-See also :ref:`host labels <orchestrator-host-labels>`.
 
 Deploying Monitors on a Particular Network
 ------------------------------------------
@@ -136,53 +91,3 @@ run this command:
 
     ceph orch apply mon --unmanaged
     ceph orch daemon add mon newhost1:10.1.2.123
     ceph orch daemon add mon newhost2:10.1.2.0/24
-
-  .. note::
-     The **apply** command can be confusing. For this reason, we recommend using
-     YAML specifications.
-
-     Each ``ceph orch apply mon`` command supersedes the one before it.
-     This means that you must use the proper comma-separated list-based
-     syntax when you want to apply monitors to more than one host.
-     If you do not use the proper syntax, you will clobber your work
-     as you go.
-
-     For example:
-
-      .. prompt:: bash #
-
-       ceph orch apply mon host1
-       ceph orch apply mon host2
-       ceph orch apply mon host3
-
-     This results in only one host having a monitor applied to it: host 3.
-
-     (The first command creates a monitor on host1. Then the second command
-     clobbers the monitor on host1 and creates a monitor on host2. Then the
-     third command clobbers the monitor on host2 and creates a monitor on
-     host3. In this scenario, at this point, there is a monitor ONLY on
-     host3.)
-
-     To make certain that a monitor is applied to each of these three hosts,
-     run a command like this:
-
-      .. prompt:: bash #
-
-        ceph orch apply mon "host1,host2,host3"
-
-     There is another way to apply monitors to multiple hosts: a ``yaml`` file
-     can be used. Instead of using the "ceph orch apply mon" commands, run a
-     command of this form:
-
-      .. prompt:: bash #
-
-        ceph orch apply -i file.yaml
-
-     Here is a sample **file.yaml** file::
-
-       service_type: mon
-       placement:
-         hosts:
-          - host1
-          - host2
-          - host3
diff --git a/doc/cephadm/service-management.rst b/doc/cephadm/service-management.rst
index 9f819998a86..c8f47d9edd7 100644
--- a/doc/cephadm/service-management.rst
+++ b/doc/cephadm/service-management.rst
@@ -158,6 +158,54 @@ or in a YAML files.
     cephadm will not deploy daemons on hosts with the ``_no_schedule``
     label; see :ref:`cephadm-special-host-labels`.
 
+  .. note::
+    The **apply** command can be confusing. For this reason, we recommend using
+    YAML specifications.
+
+    Each ``ceph orch apply`` command supersedes the one before it.
+    If you do not use the proper syntax, you will clobber your work
+    as you go.
+
+    For example:
+
+      .. prompt:: bash #
+
+        ceph orch apply mon host1
+        ceph orch apply mon host2
+        ceph orch apply mon host3
+
+    This results in only one host having a monitor applied to it: host 3.
+
+    (The first command creates a monitor on host1. Then the second command
+    clobbers the monitor on host1 and creates a monitor on host2. Then the
+    third command clobbers the monitor on host2 and creates a monitor on
+    host3. In this scenario, at this point, there is a monitor ONLY on
+    host3.)
+
+    To make certain that a monitor is applied to each of these three hosts,
+    run a command like this:
+
+      .. prompt:: bash #
+
+        ceph orch apply mon "host1,host2,host3"
+
+    There is another way to apply monitors to multiple hosts: a ``yaml`` file
+    can be used. Instead of using the "ceph orch apply mon" commands, run a
+    command of this form:
+
+      .. prompt:: bash #
+
+        ceph orch apply -i file.yaml
+
+    Here is a sample **file.yaml** file::
+
+        service_type: mon
+        placement:
+          hosts:
+           - host1
+           - host2
+           - host3
+
 Explicit placements
 -------------------
@@ -192,7 +240,39 @@ and ``=name`` specifies the name of the new monitor.
 Placement by labels
 -------------------
 
-Daemons can be explicitly placed on hosts that match a specific label:
+Daemon placement can be limited to hosts that match a specific label. To set
+a label ``mylabel`` to the appropriate hosts, run this command:
+
+  .. prompt:: bash #
+
+    ceph orch host label add *<hostname>* mylabel
+
+  To view the current hosts and labels, run this command:
+
+  .. prompt:: bash #
+
+    ceph orch host ls
+
+  For example:
+
+  .. prompt:: bash #
+
+    ceph orch host label add host1 mylabel
+    ceph orch host label add host2 mylabel
+    ceph orch host label add host3 mylabel
+    ceph orch host ls
+
+  .. code-block:: bash
+
+    HOST   ADDR  LABELS   STATUS
+    host1        mylabel
+    host2        mylabel
+    host3        mylabel
+    host4
+    host5
+
+Now, tell cephadm to deploy daemons based on the label by running
+this command:
 
   .. prompt:: bash #
@@ -240,8 +320,8 @@ Or in YAML:
 
       host_pattern: "*"
 
-Setting a limit
----------------
+Changing the number of monitors
+-------------------------------
 
 By specifying ``count``, only the number of daemons specified will be
 created:

From 9df20490b210cf0e5fcf17920e91d390989cc4bf Mon Sep 17 00:00:00 2001
From: Sebastian Wagner
Date: Fri, 23 Jul 2021 10:09:08 +0200
Subject: [PATCH 2/2] doc/cephadm: MON IP change

Signed-off-by: Sebastian Wagner
---
 doc/cephadm/mon.rst | 82 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 2 deletions(-)

diff --git a/doc/cephadm/mon.rst b/doc/cephadm/mon.rst
index 2121c28a7c7..e66df617126 100644
--- a/doc/cephadm/mon.rst
+++ b/doc/cephadm/mon.rst
@@ -29,7 +29,7 @@ manual administration of the ceph monitor daemons is not necessary.
 needed, as new hosts are added to the cluster.
 
 By default, cephadm will deploy 5 daemons on arbitrary hosts. See
-:ref:`orchestrator-cli-placement-spec`_ for details of specifying
+:ref:`orchestrator-cli-placement-spec` for details of specifying
 the placement of daemons.
 
 Designating a Particular Subnet for Monitors
@@ -80,7 +80,7 @@ run this command:
 
   .. prompt:: bash #
 
-    ceph orch daemon add mon *<host1:ip-or-network1> [<host2:ip-or-network2>...]*
+    ceph orch daemon add mon *<host1:ip-or-network1>*
 
 For example, to deploy a second monitor on ``newhost1`` using an IP
 address ``10.1.2.123`` and a third monitor on ``newhost2`` in
@@ -91,3 +91,81 @@ run this command:
 
     ceph orch apply mon --unmanaged
     ceph orch daemon add mon newhost1:10.1.2.123
     ceph orch daemon add mon newhost2:10.1.2.0/24
+
+  Now, enable automatic placement of daemons:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run
+
+  See :ref:`orchestrator-cli-placement-spec` for details of specifying
+  the placement of daemons.
+
+  Finally, apply this new placement by dropping ``--dry-run``:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --placement="newhost1,newhost2,newhost3"
+
+
+Moving Monitors to a Different Network
+--------------------------------------
+
+To move monitors to a new network, deploy new monitors on the new network and
+subsequently remove monitors from the old network. It is not advised to
+modify and inject the ``monmap`` manually.
+
+First, disable the automated placement of daemons:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --unmanaged
+
+To deploy each additional monitor:
+
+  .. prompt:: bash #
+
+    ceph orch daemon add mon *<newhost1:ip-or-network1>*
+
+For example, to deploy a second monitor on ``newhost1`` using an IP
+address ``10.1.2.123`` and a third monitor on ``newhost2`` in
+network ``10.1.2.0/24``, run the following commands:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --unmanaged
+    ceph orch daemon add mon newhost1:10.1.2.123
+    ceph orch daemon add mon newhost2:10.1.2.0/24
+
+  Subsequently remove monitors from the old network:
+
+  .. prompt:: bash #
+
+    ceph orch daemon rm *mon.<oldhost1>*
+
+  Update the ``public_network``:
+
+  .. prompt:: bash #
+
+    ceph config set mon public_network *<mon-cidr-network>*
+
+  For example:
+
+  .. prompt:: bash #
+
+    ceph config set mon public_network 10.1.2.0/24
+
+  Now, enable automatic placement of daemons:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run
+
+  See :ref:`orchestrator-cli-placement-spec` for details of specifying
+  the placement of daemons.
+
+  Finally, apply this new placement by dropping ``--dry-run``:
+
+  .. prompt:: bash #
+
+    ceph orch apply mon --placement="newhost1,newhost2,newhost3"
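
Taken together, the two patches steer readers toward YAML service specifications for monitor placement. As a follow-on illustration, the label-based placement and the ``count`` limit that the moved sections document can be combined in a single spec. This is a minimal sketch, not text from the patches: the label ``mylabel`` reuses the illustrative name from patch 1, while the filename and the ``count`` value are hypothetical.

```yaml
# mon-spec.yaml -- illustrative monitor service specification (hypothetical).
# Apply (or re-apply) with: ceph orch apply -i mon-spec.yaml
service_type: mon
placement:
  label: mylabel   # deploy only on hosts previously labelled with
                   # `ceph orch host label add <hostname> mylabel`
  count: 3         # cap the number of monitor daemons at three
```

Because each ``ceph orch apply`` supersedes the previous one, keeping the complete desired placement in one file like this avoids the clobbering pitfall described in the note that patch 1 moves to service-management.rst.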