Merge PR #33131 into master

* refs/pull/33131/head:
	mgr/orch: 'ceph orchestrator ...' -> 'ceph orch ...'

Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Sebastian Wagner <swagner@suse.com>
commit 366d3fc33e
Sage Weil, 2020-02-10 12:17:45 -06:00
18 changed files with 115 additions and 115 deletions
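
The rename is mechanical: across the docs, qa suites, cephadm, and the
orchestrator CLI module, the ``orchestrator`` prefix becomes ``orch`` and the
arguments are left untouched. A representative before/after pair, taken from
the hunks below::

    # before this change
    ceph orchestrator host add <hostname>

    # after this change
    ceph orch host add <hostname>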


@ -102,7 +102,7 @@ For each new host you'd like to add to the cluster, you need to do two things:
#. Tell Ceph that the new node is part of the cluster::
[monitor 1] # ceph orchestrator host add *newhost*
[monitor 1] # ceph orch host add *newhost*
Deploying additional monitors
=============================
@ -114,12 +114,12 @@ either as a simple IP address or as a CIDR network name.
To deploy additional monitors,::
[monitor 1] # ceph orchestrator mon update *<new-num-monitors>* *<host1:network1> [<host1:network2>...]*
[monitor 1] # ceph orch mon update *<new-num-monitors>* *<host1:network1> [<host1:network2>...]*
For example, to deploy a second monitor on ``newhost`` using an IP
address in network ``10.1.2.0/24``,::
[monitor 1] # ceph orchestrator mon update 2 newhost:10.1.2.0/24
[monitor 1] # ceph orch mon update 2 newhost:10.1.2.0/24
Deploying OSDs
==============
@ -127,11 +127,11 @@ Deploying OSDs
To add an OSD to the cluster, you need to know the device name for the
block device (hard disk or SSD) that will be used. Then,::
[monitor 1] # ceph orchestrator osd create *<host>*:*<path-to-device>*
[monitor 1] # ceph orch osd create *<host>*:*<path-to-device>*
For example, to deploy an OSD on host *newhost*'s SSD,::
[monitor 1] # ceph orchestrator osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
[monitor 1] # ceph orch osd create newhost:/dev/disk/by-id/ata-WDC_WDS200T2B0A-00SM50_182294800028
Deploying manager daemons
=========================
@ -139,7 +139,7 @@ Deploying manager daemons
It is a good idea to have at least one backup manager daemon. To
deploy one or more new manager daemons,::
[monitor 1] # ceph orchestrator mgr update *<new-num-mgrs>* [*<host1>* ...]
[monitor 1] # ceph orch mgr update *<new-num-mgrs>* [*<host1>* ...]
Deploying MDSs
==============


@ -194,14 +194,14 @@ Add Storage for NFS-Ganesha Servers to prevent recovery conflicts::
[root@minikube /]# ceph osd pool create nfs-ganesha 64
pool 'nfs-ganesha' created
[root@minikube /]# ceph osd pool set nfs-ganesha size 1
[root@minikube /]# ceph orchestrator nfs add mynfs nfs-ganesha ganesha
[root@minikube /]# ceph orch nfs add mynfs nfs-ganesha ganesha
Here we have created an NFS-Ganesha cluster called "mynfs" in the "ganesha"
namespace with the "nfs-ganesha" OSD pool.
Scale out the NFS-Ganesha cluster::
[root@minikube /]# ceph orchestrator nfs update mynfs 2
[root@minikube /]# ceph orch nfs update mynfs 2
Configure NFS-Ganesha Exports
-----------------------------


@ -108,7 +108,7 @@ Second Node
#. Copy the public key from node 1 to node 2::
[node 1] $ sudo ./cephadm shell -c ceph.conf -k ceph.keyring
[ceph: root@node 1] $ ceph orchestrator host add 192.168.1.102
[ceph: root@node 1] $ ceph orch host add 192.168.1.102
Third Node
----------
@ -134,11 +134,11 @@ Third Node
#. Copy the public key from node 1 to node 3::
[node 1] $ sudo ./cephadm shell -c ceph.conf -k ceph.keyring
[ceph: root@node 1] $ ceph orchestrator host add 192.168.1.103
[ceph: root@node 1] $ ceph orch host add 192.168.1.103
#. On node 1, issue the command that adds node 3 to the cluster::
[node 1] $ sudo ceph orchestrator host add 192.168.1.103
[node 1] $ sudo ceph orch host add 192.168.1.103
Creating Two More Monitors
@ -146,13 +146,13 @@ Creating Two More Monitors
#. Set up a Ceph monitor on node 2 by issuing the following command on node 1. ::
[node 1] $ sudo ceph orchestrator mon update 2 192.168.1.102:192.168.1.0/24
[node 1] $ sudo ceph orch mon update 2 192.168.1.102:192.168.1.0/24
[sudo] password for user:
["(Re)deployed mon 192.168.1.102 on host '192.168.1.102'"]
[user@192-168-1-101 ~] $ \
#. Set up a Ceph monitor on node 3 by issuing the following command on node 1::
[node 1] $ sudo ceph orchestrator mon update 3 192.168.1.103:192.168.1.0/24
[node 1] $ sudo ceph orch mon update 3 192.168.1.103:192.168.1.0/24
[sudo] password for user:
["(Re)deployed mon 192.168.1.103 on host '192.168.1.103'"]
[user@192-168-1-101 ~]$
@ -166,7 +166,7 @@ Creating an OSD on the First Node
#. Use a command of the following form to create an OSD on node 1::
[node 1@192-168-1-101]$ sudo ceph orchestrator osd create 192-168-1-101:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928343
[node 1@192-168-1-101]$ sudo ceph orch osd create 192-168-1-101:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928343
["Created osd(s) on host '192-168-1-101'"]
[node 1@192-168-1-101]$
@ -176,7 +176,7 @@ Creating an OSD on the Second Node
#. Use a command of the following form ON NODE 1 to create an OSD on node 2::
[node 1@192-168-1-101]$ sudo ceph orchestrator osd create 192-168-1-102:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928383
[node 1@192-168-1-101]$ sudo ceph orch osd create 192-168-1-102:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928383
["Created osd(s) on host '192-168-1-102'"]
[node 1@192-168-1-101]$
@ -186,7 +186,7 @@ Creating an OSD on the Third Node
#. Use a command of the following form ON NODE 1 to create an OSD on node 3::
[node 1@192-168-1-101]$ sudo ceph orchestrator osd create 192-168-1-103:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928384
[node 1@192-168-1-101]$ sudo ceph orch osd create 192-168-1-103:/dev/by-id/ata-WDC+WDS_300T2C0A-00SM50_123405928384
["Created osd(s) on host '192-168-1-103'"]
[node 1@192-168-1-101]$


@ -47,11 +47,11 @@ CEPHADM_STRAY_HOST
One or more hosts have running Ceph daemons but are not registered as
hosts managed by *cephadm*. This means that those services cannot
currently be managed by cephadm (e.g., restarted, upgraded, included
in `ceph orchestrator service ls`).
in `ceph orch service ls`).
You can manage the host(s) with::
ceph orchestrator host add *<hostname>*
ceph orch host add *<hostname>*
Note that you may need to configure SSH access to the remote host
before this will work.
@ -71,7 +71,7 @@ One or more Ceph daemons are running but are not managed by
*cephadm*, perhaps because they were deployed using a different tool, or
were started manually. This means that those services cannot
currently be managed by cephadm (e.g., restarted, upgraded, included
in `ceph orchestrator service ls`).
in `ceph orch service ls`).
**FIXME:** We need to implement and document an adopt procedure here.
@ -94,7 +94,7 @@ You can manually run this check with::
You can remove a broken host from management with::
ceph orchestrator host rm *<hostname>*
ceph orch host rm *<hostname>*
You can disable this health warning with::


@ -52,23 +52,23 @@ Configuration
To enable the orchestrator, please select the orchestrator module to use
with the ``set backend`` command::
ceph orchestrator set backend <module>
ceph orch set backend <module>
For example, to enable the Rook orchestrator module and use it with the CLI::
ceph mgr module enable rook
ceph orchestrator set backend rook
ceph orch set backend rook
You can then check that the backend is properly configured::
ceph orchestrator status
ceph orch status
Disable the Orchestrator
~~~~~~~~~~~~~~~~~~~~~~~~
To disable the orchestrator again, use the empty string ``""``::
ceph orchestrator set backend ""
ceph orch set backend ""
ceph mgr module disable rook
Usage
@ -90,7 +90,7 @@ Status
::
ceph orchestrator status
ceph orch status
Show current orchestrator mode and high-level status (whether the module is able
to talk to it)
@ -102,12 +102,12 @@ Host Management
List hosts associated with the cluster::
ceph orchestrator host ls
ceph orch host ls
Add and remove hosts::
ceph orchestrator host add <host>
ceph orchestrator host rm <host>
ceph orch host add <host>
ceph orch host rm <host>
OSD Management
~~~~~~~~~~~~~~
@ -120,11 +120,11 @@ filtered to a particular node:
::
ceph orchestrator device ls [--host=...] [--refresh]
ceph orch device ls [--host=...] [--refresh]
Example::
# ceph orchestrator device ls
# ceph orch device ls
Host 192.168.121.206:
Device Path           Type   Size    Rotates  Available  Model
/dev/sdb              hdd    50.0G   True     True       ATA/QEMU HARDDISK
@ -143,8 +143,8 @@ Create OSDs
Create OSDs on a group of devices on a single host::
ceph orchestrator osd create <host>:<drive>
ceph orchestrator osd create -i <path-to-drive-group.json>
ceph orch osd create <host>:<drive>
ceph orch osd create -i <path-to-drive-group.json>
The output of ``osd create`` is not specified and may vary between orchestrator backends.
@ -154,7 +154,7 @@ Where ``drive.group.json`` is a JSON file containing the fields defined in
Example::
# ceph orchestrator osd create 192.168.121.206:/dev/sdc
# ceph orch osd create 192.168.121.206:/dev/sdc
{"status": "OK", "msg": "", "data": {"event": "playbook_on_stats", "uuid": "7082f3ba-f5b7-4b7c-9477-e74ca918afcb", "stdout": "\r\nPLAY RECAP *********************************************************************\r\n192.168.121.206 : ok=96 changed=3 unreachable=0 failed=0 \r\n", "counter": 932, "pid": 10294, "created": "2019-05-28T22:22:58.527821", "end_line": 1170, "runner_ident": "083cad3c-8197-11e9-b07a-2016b900e38f", "start_line": 1166, "event_data": {"ignored": 0, "skipped": {"192.168.121.206": 186}, "ok": {"192.168.121.206": 96}, "artifact_data": {}, "rescued": 0, "changed": {"192.168.121.206": 3}, "pid": 10294, "dark": {}, "playbook_uuid": "409364a6-9d49-4e44-8b7b-c28e5b3adf89", "playbook": "add-osd.yml", "failures": {}, "processed": {"192.168.121.206": 1}}, "parent_uuid": "409364a6-9d49-4e44-8b7b-c28e5b3adf89"}}
.. note::
@ -164,14 +164,14 @@ Decommission an OSD
^^^^^^^^^^^^^^^^^^^
::
ceph orchestrator osd rm <osd-id> [osd-id...]
ceph orch osd rm <osd-id> [osd-id...]
Removes one or more OSDs from the cluster and the host, if the OSDs are marked as
``destroyed``.
Example::
# ceph orchestrator osd rm 4
# ceph orch osd rm 4
{"status": "OK", "msg": "", "data": {"event": "playbook_on_stats", "uuid": "1a16e631-906d-48e0-9e24-fa7eb593cc0a", "stdout": "\r\nPLAY RECAP *********************************************************************\r\n192.168.121.158 : ok=2 changed=0 unreachable=0 failed=0 \r\n192.168.121.181 : ok=2 changed=0 unreachable=0 failed=0 \r\n192.168.121.206 : ok=2 changed=0 unreachable=0 failed=0 \r\nlocalhost : ok=31 changed=8 unreachable=0 failed=0 \r\n", "counter": 240, "pid": 10948, "created": "2019-05-28T22:26:09.264012", "end_line": 308, "runner_ident": "8c093db0-8197-11e9-b07a-2016b900e38f", "start_line": 301, "event_data": {"ignored": 0, "skipped": {"localhost": 37}, "ok": {"192.168.121.181": 2, "192.168.121.158": 2, "192.168.121.206": 2, "localhost": 31}, "artifact_data": {}, "rescued": 0, "changed": {"localhost": 8}, "pid": 10948, "dark": {}, "playbook_uuid": "a12ec40e-bce9-4bc9-b09e-2d8f76a5be02", "playbook": "shrink-osd.yml", "failures": {}, "processed": {"192.168.121.181": 1, "192.168.121.158": 1, "192.168.121.206": 1, "localhost": 1}}, "parent_uuid": "a12ec40e-bce9-4bc9-b09e-2d8f76a5be02"}}
.. note::
@ -182,24 +182,24 @@ Example::
^^^^^^^^^^^^^^^^^^^
::
ceph orchestrator device ident-on <dev_id>
ceph orchestrator device ident-on <dev_name> <host>
ceph orchestrator device fault-on <dev_id>
ceph orchestrator device fault-on <dev_name> <host>
ceph orch device ident-on <dev_id>
ceph orch device ident-on <dev_name> <host>
ceph orch device fault-on <dev_id>
ceph orch device fault-on <dev_name> <host>
ceph orchestrator device ident-off <dev_id> [--force=true]
ceph orchestrator device ident-off <dev_id> <host> [--force=true]
ceph orchestrator device fault-off <dev_id> [--force=true]
ceph orchestrator device fault-off <dev_id> <host> [--force=true]
ceph orch device ident-off <dev_id> [--force=true]
ceph orch device ident-off <dev_id> <host> [--force=true]
ceph orch device fault-off <dev_id> [--force=true]
ceph orch device fault-off <dev_id> <host> [--force=true]
where ``dev_id`` is the device id as listed in ``osd metadata``,
``dev_name`` is the name of the device on the system and ``host`` is the host as
returned by ``orchestrator host ls``
ceph orchestrator osd ident-on {primary,journal,db,wal,all} <osd-id>
ceph orchestrator osd ident-off {primary,journal,db,wal,all} <osd-id>
ceph orchestrator osd fault-on {primary,journal,db,wal,all} <osd-id>
ceph orchestrator osd fault-off {primary,journal,db,wal,all} <osd-id>
ceph orch osd ident-on {primary,journal,db,wal,all} <osd-id>
ceph orch osd ident-off {primary,journal,db,wal,all} <osd-id>
ceph orch osd fault-on {primary,journal,db,wal,all} <osd-id>
ceph orch osd fault-off {primary,journal,db,wal,all} <osd-id>
Where ``journal`` is the filestore journal, ``wal`` is the write ahead log of
bluestore and ``all`` stands for all devices associated with the osd
@ -213,13 +213,13 @@ error if it doesn't know how to do this transition.
Update the number of monitor nodes::
ceph orchestrator mon update <num> [host, host:network...]
ceph orch mon update <num> [host, host:network...]
Each host can optionally specify a network for the monitor to listen on.
Update the number of manager nodes::
ceph orchestrator mgr update <num> [host...]
ceph orch mgr update <num> [host...]
..
.. note::
@ -242,17 +242,17 @@ services of a particular type via optional --type parameter
::
ceph orchestrator service ls [--host host] [--svc_type type] [--refresh]
ceph orch service ls [--host host] [--svc_type type] [--refresh]
Discover the status of a particular service::
ceph orchestrator service ls --svc_type type --svc_id <name> [--refresh]
ceph orch service ls --svc_type type --svc_id <name> [--refresh]
Query the status of a particular service instance (mon, osd, mds, rgw). For OSDs
the id is the numeric OSD ID, for MDS services it is the file system name::
ceph orchestrator service-instance status <type> <instance-name> [--refresh]
ceph orch service-instance status <type> <instance-name> [--refresh]
@ -271,19 +271,19 @@ Sizing: the ``size`` parameter gives the number of daemons in the cluster
Creating/growing/shrinking/removing services::
ceph orchestrator {mds,rgw} update <name> <size> [host…]
ceph orchestrator {mds,rgw} add <name>
ceph orchestrator nfs update <name> <size> [host…]
ceph orchestrator nfs add <name> <pool> [--namespace=<namespace>]
ceph orchestrator {mds,rgw,nfs} rm <name>
ceph orch {mds,rgw} update <name> <size> [host…]
ceph orch {mds,rgw} add <name>
ceph orch nfs update <name> <size> [host…]
ceph orch nfs add <name> <pool> [--namespace=<namespace>]
ceph orch {mds,rgw,nfs} rm <name>
e.g., ``ceph orchestrator mds update myfs 3 host1 host2 host3``
e.g., ``ceph orch mds update myfs 3 host1 host2 host3``
Start/stop/reload::
ceph orchestrator service {stop,start,reload} <type> <name>
ceph orch service {stop,start,reload} <type> <name>
ceph orchestrator service-instance {start,stop,reload} <type> <instance-name>
ceph orch service-instance {start,stop,reload} <type> <instance-name>
Current Implementation Status


@ -414,11 +414,11 @@ def ceph_bootstrap(ctx, config):
log.info('Adding host %s to orchestrator...' % remote.shortname)
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'host', 'add',
'ceph', 'orch', 'host', 'add',
remote.shortname
])
r = _shell(ctx, cluster_name, remote,
['ceph', 'orchestrator', 'host', 'ls', '--format=json'],
['ceph', 'orch', 'host', 'ls', '--format=json'],
stdout=StringIO())
hosts = [node['host'] for node in json.loads(r.stdout.getvalue())]
assert remote.shortname in hosts
@ -464,7 +464,7 @@ def ceph_mons(ctx, config):
log.info('Adding %s on %s' % (mon, remote.shortname))
num_mons += 1
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'mon', 'update',
'ceph', 'orch', 'mon', 'update',
str(num_mons),
remote.shortname + ':' + ctx.ceph[cluster_name].mons[mon] + '=' + id_,
])
@ -499,11 +499,11 @@ def ceph_mons(ctx, config):
if teuthology.is_type('mon', cluster_name)(r)]:
c_, _, id_ = teuthology.split_role(mon)
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'service-instance', 'reconfig',
'ceph', 'orch', 'service-instance', 'reconfig',
'mon', id_,
])
_shell(ctx, cluster_name, ctx.ceph[cluster_name].bootstrap_remote, [
'ceph', 'orchestrator', 'service-instance', 'reconfig',
'ceph', 'orch', 'service-instance', 'reconfig',
'mgr', ctx.ceph[cluster_name].first_mgr,
])
@ -534,7 +534,7 @@ def ceph_mgrs(ctx, config):
daemons[mgr] = (remote, id_)
if nodes:
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'mgr', 'update',
'ceph', 'orch', 'mgr', 'update',
str(len(nodes) + 1)] + nodes
)
for mgr, i in daemons.items():
@ -587,7 +587,7 @@ def ceph_osds(ctx, config):
_shell(ctx, cluster_name, remote, [
'ceph-volume', 'lvm', 'zap', dev])
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'osd', 'create',
'ceph', 'orch', 'osd', 'create',
remote.shortname + ':' + short_dev
])
ctx.daemons.register_daemon(
@ -623,7 +623,7 @@ def ceph_mdss(ctx, config):
daemons[role] = (remote, id_)
if nodes:
_shell(ctx, cluster_name, remote, [
'ceph', 'orchestrator', 'mds', 'update',
'ceph', 'orch', 'mds', 'update',
'all',
str(len(nodes))] + nodes
)


@ -17,7 +17,7 @@ class HostControllerTest(DashboardTestCase):
super(HostControllerTest, cls).setUpClass()
cls._load_module("test_orchestrator")
cmd = ['orchestrator', 'set', 'backend', 'test_orchestrator']
cmd = ['orch', 'set', 'backend', 'test_orchestrator']
cls.mgr_cluster.mon_manager.raw_cluster_cmd(*cmd)
cmd = ['test_orchestrator', 'load_data', '-i', '-']


@ -71,7 +71,7 @@ class OrchestratorControllerTest(DashboardTestCase):
def setUpClass(cls):
super(OrchestratorControllerTest, cls).setUpClass()
cls._load_module('test_orchestrator')
cmd = ['orchestrator', 'set', 'backend', 'test_orchestrator']
cmd = ['orch', 'set', 'backend', 'test_orchestrator']
cls.mgr_cluster.mon_manager.raw_cluster_cmd(*cmd)
cmd = ['test_orchestrator', 'load_data', '-i', '-']


@ -18,7 +18,7 @@ class TestOrchestratorCli(MgrTestCase):
return self.mgr_cluster.mon_manager.raw_cluster_cmd(module, *args)
def _orch_cmd(self, *args):
return self._cmd("orchestrator", *args)
return self._cmd("orch", *args)
def _progress_cmd(self, *args):
return self.mgr_cluster.mon_manager.raw_cluster_cmd("progress", *args)
@ -27,7 +27,7 @@ class TestOrchestratorCli(MgrTestCase):
"""
raw_cluster_cmd doesn't support kwargs.
"""
return self.mgr_cluster.mon_manager.raw_cluster_cmd_result("orchestrator", *args, **kwargs)
return self.mgr_cluster.mon_manager.raw_cluster_cmd_result("orch", *args, **kwargs)
def _test_orchestrator_cmd_result(self, *args, **kwargs):
return self.mgr_cluster.mon_manager.raw_cluster_cmd_result("test_orchestrator", *args, **kwargs)


@ -206,7 +206,7 @@ $SUDO pvcreate $loop_dev && $SUDO vgcreate $OSD_VG_NAME $loop_dev
for id in `seq 0 $((--OSD_TO_CREATE))`; do
$SUDO lvcreate -l $((100/$OSD_TO_CREATE))%VG -n $OSD_LV_NAME.$id $OSD_VG_NAME
$CEPHADM shell --fsid $FSID --config $CONFIG --keyring $KEYRING -- \
ceph orchestrator osd create \
ceph orch osd create \
$(hostname):/dev/$OSD_VG_NAME/$OSD_LV_NAME.$id
done


@ -1890,7 +1890,7 @@ def command_bootstrap():
logger.info('Enabling cephadm module...')
cli(['mgr', 'module', 'enable', 'cephadm'])
logger.info('Setting orchestrator backend to cephadm...')
cli(['orchestrator', 'set', 'backend', 'cephadm'])
cli(['orch', 'set', 'backend', 'cephadm'])
logger.info('Generating ssh key...')
cli(['cephadm', 'generate-key'])
@ -1920,7 +1920,7 @@ def command_bootstrap():
host = get_hostname()
logger.info('Adding host %s...' % host)
cli(['orchestrator', 'host', 'add', host])
cli(['orch', 'host', 'add', host])
if not args.skip_dashboard:
logger.info('Enabling the dashboard module...')


@ -77,14 +77,14 @@ Add the newly created host(s) to the inventory.
::
# ceph orchestrator host add <host>
# ceph orch host add <host>
4) Verify the inventory
::
# ceph orchestrator host ls
# ceph orch host ls
You should see the hostname in the list.


@ -125,7 +125,7 @@ class NoOrchestrator(OrchestratorError):
"""
No orchestrator is configured.
"""
def __init__(self, msg="No orchestrator configured (try `ceph orchestrator set backend`)"):
def __init__(self, msg="No orchestrator configured (try `ceph orch set backend`)"):
super(NoOrchestrator, self).__init__(msg)


@ -156,7 +156,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return self.get_module_option("orchestrator")
@orchestrator._cli_write_command(
'orchestrator host add',
'orch host add',
'name=host,type=CephString,req=true '
'name=addr,type=CephString,req=false '
'name=labels,type=CephString,n=N,req=false',
@ -169,7 +169,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator host rm',
'orch host rm',
"name=host,type=CephString,req=true",
'Remove a host')
def _remove_host(self, host):
@ -190,7 +190,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_read_command(
'orchestrator host ls',
'orch host ls',
'name=format,type=CephChoices,strings=json|plain,req=false',
'List hosts')
def _get_hosts(self, format='plain'):
@ -214,7 +214,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=output)
@orchestrator._cli_write_command(
'orchestrator host label add',
'orch host label add',
'name=host,type=CephString '
'name=label,type=CephString',
'Add a host label')
@ -225,7 +225,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator host label rm',
'orch host label rm',
'name=host,type=CephString '
'name=label,type=CephString',
'Remove a host label')
@ -236,7 +236,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_read_command(
'orchestrator device ls',
'orch device ls',
"name=host,type=CephString,n=N,req=false "
"name=format,type=CephChoices,strings=json|plain,req=false "
"name=refresh,type=CephBool,req=false",
@ -288,7 +288,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout='\n'.join(out))
@orchestrator._cli_read_command(
'orchestrator service ls',
'orch service ls',
"name=host,type=CephString,req=false "
"name=svc_type,type=CephChoices,strings=mon|mgr|osd|mds|iscsi|nfs|rgw|rbd-mirror,req=false "
"name=svc_id,type=CephString,req=false "
@ -348,7 +348,7 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
return HandleCommandResult(stdout=table.get_string())
@orchestrator._cli_write_command(
'orchestrator osd create',
'orch osd create',
"name=svc_arg,type=CephString,req=false",
'Create an OSD service. Either --svc_arg=host:drives or -i <drive_group>')
def _create_osd(self, svc_arg=None, inbuf=None):
@ -357,8 +357,8 @@ class OrchestratorCli(orchestrator.OrchestratorClientMixin, MgrModule):
usage = """
Usage:
ceph orchestrator osd create -i <json_file/yaml_file>
ceph orchestrator osd create host:device1,device2,...
ceph orch osd create -i <json_file/yaml_file>
ceph orch osd create host:device1,device2,...
"""
if inbuf:
@ -388,7 +388,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator osd rm',
'orch osd rm',
"name=svc_id,type=CephString,n=N",
'Remove OSD services')
def _osd_rm(self, svc_id):
@ -403,7 +403,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rbd-mirror add',
'orch rbd-mirror add',
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false",
'Create an rbd-mirror service')
@ -417,7 +417,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rbd-mirror update',
'orch rbd-mirror update',
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false "
"name=label,type=CephString,req=false",
@ -432,7 +432,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rbd-mirror rm',
'orch rbd-mirror rm',
"name=name,type=CephString,req=false",
'Remove rbd-mirror service or rbd-mirror service instance')
def _rbd_mirror_rm(self, name=None):
@ -442,7 +442,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator mds add',
'orch mds add',
"name=fs_name,type=CephString "
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false",
@ -457,7 +457,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator mds update',
'orch mds update',
"name=fs_name,type=CephString "
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false "
@ -477,7 +477,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator mds rm',
'orch mds rm',
"name=name,type=CephString",
'Remove an MDS service (mds id or fs_name)')
def _mds_rm(self, name):
@ -487,7 +487,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rgw add',
'orch rgw add',
'name=realm_name,type=CephString '
'name=zone_name,type=CephString '
'name=num,type=CephInt,req=false '
@ -497,8 +497,8 @@ Usage:
def _rgw_add(self, realm_name, zone_name, num=1, hosts=None, inbuf=None):
usage = """
Usage:
ceph orchestrator rgw add -i <json_file>
ceph orchestrator rgw add <realm_name> <zone_name>
ceph orch rgw add -i <json_file>
ceph orch rgw add <realm_name> <zone_name>
"""
if inbuf:
try:
@ -517,7 +517,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rgw update',
'orch rgw update',
'name=realm_name,type=CephString '
'name=zone_name,type=CephString '
'name=num,type=CephInt,req=false '
@ -535,7 +535,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator rgw rm',
'orch rgw rm',
'name=realm_name,type=CephString '
'name=zone_name,type=CephString',
'Remove an RGW service')
@ -547,7 +547,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator nfs add',
'orch nfs add',
"name=svc_arg,type=CephString "
"name=pool,type=CephString "
"name=namespace,type=CephString,req=false "
@ -569,7 +569,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator nfs update',
'orch nfs update',
"name=svc_id,type=CephString "
'name=num,type=CephInt,req=false '
'name=hosts,type=CephString,n=N,req=false '
@ -586,7 +586,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator nfs rm',
'orch nfs rm',
"name=svc_id,type=CephString",
'Remove an NFS service')
def _nfs_rm(self, svc_id):
@ -596,7 +596,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator service',
'orch service',
"name=action,type=CephChoices,strings=start|stop|restart|redeploy|reconfig "
"name=svc_type,type=CephString "
"name=svc_name,type=CephString",
@ -608,7 +608,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator service-instance',
'orch service-instance',
"name=action,type=CephChoices,strings=start|stop|restart|redeploy|reconfig "
"name=svc_type,type=CephString "
"name=svc_id,type=CephString",
@ -620,7 +620,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator mgr update',
'orch mgr update',
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false "
"name=label,type=CephString,req=false",
@ -638,7 +638,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator mon update',
'orch mon update',
"name=num,type=CephInt,req=false "
"name=hosts,type=CephString,n=N,req=false "
"name=label,type=CephString,req=false",
@ -658,7 +658,7 @@ Usage:
return HandleCommandResult(stdout=completion.result_str())
@orchestrator._cli_write_command(
'orchestrator set backend',
'orch set backend',
"name=module_name,type=CephString,req=true",
'Select orchestrator module backend')
def _set_backend(self, module_name):
@ -707,7 +707,7 @@ Usage:
return HandleCommandResult(-errno.EINVAL, stderr="Module '{0}' not found".format(module_name))
@orchestrator._cli_write_command(
'orchestrator cancel',
'orch cancel',
desc='cancels ongoing operations')
def _cancel(self):
"""
@ -717,7 +717,7 @@ Usage:
return HandleCommandResult()
@orchestrator._cli_read_command(
'orchestrator status',
'orch status',
desc='Report configured backend and its status')
def _status(self):
o = self._select_orchestrator()


@ -380,7 +380,7 @@ class RookOrchestrator(MgrModule, orchestrator.Orchestrator):
a single DriveGroup for now.
You can work around it by invoking:
$: ceph orchestrator osd create -i <dg.file>
$: ceph orch osd create -i <dg.file>
multiple times. The drivegroup file must only contain one spec at a time.
"""


@ -2,12 +2,12 @@
You can activate the Ceph Manager module by running:
```
$ ceph mgr module enable test_orchestrator
$ ceph orchestrator set backend test_orchestrator
$ ceph orch set backend test_orchestrator
```
# Check status
```
ceph orchestrator status
ceph orch status
```
# Import dummy data


@ -195,7 +195,7 @@ class TestOrchestrator(MgrModule, orchestrator.Orchestrator):
a single DriveGroup for now.
You can work around it by invoking:
$: ceph orchestrator osd create -i <dg.file>
$: ceph orch osd create -i <dg.file>
multiple times. The drivegroup file must only contain one spec at a time.
"""


@ -92,7 +92,7 @@ pvcreate $loop_dev && vgcreate $OSD_VG_NAME $loop_dev
for id in `seq 0 $((--OSD_TO_CREATE))`; do
lvcreate -l $((100/$OSD_TO_CREATE))%VG -n $OSD_LV_NAME.$id $OSD_VG_NAME
$SUDO $CEPHADM shell --fsid $fsid --config c --keyring k -- \
ceph orchestrator osd create \
ceph orch osd create \
$(hostname):/dev/$OSD_VG_NAME/$OSD_LV_NAME.$id
done