.. _orchestrator-modules:

.. py:currentmodule:: orchestrator

ceph-mgr orchestrator modules
==============================

.. warning::

    This is developer documentation, describing Ceph internals that
    are only relevant to people writing ceph-mgr orchestrator modules.

In this context, *orchestrator* refers to some external service that
provides the ability to discover devices and create Ceph services. This
includes external projects such as ceph-ansible, DeepSea, and Rook.

An *orchestrator module* is a ceph-mgr module (:ref:`mgr-module-dev`)
which implements common management operations using a particular
orchestrator.

Orchestrator modules subclass the ``Orchestrator`` class: this class is
an interface; it only provides method definitions to be implemented
by subclasses. The purpose of defining this common interface
for different orchestrators is to enable common UI code, such as
the dashboard, to work with various different backends.
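A minimal sketch of what such a module looks like is shown below. The
class name and method bodies are hypothetical placeholders rather than a
real backend: actual modules in the Ceph tree (for example ``mgr/rook``)
implement many more of the ``Orchestrator`` methods and return the
completion objects described later in this document.

.. code-block:: python

    from mgr_module import MgrModule
    import orchestrator


    class MyOrchestrator(MgrModule, orchestrator.Orchestrator):
        """Hypothetical module driving an external orchestration service."""

        def available(self):
            # Report whether the external orchestrator can be reached.
            return True, ""

        def get_hosts(self):
            # A real implementation would return a read completion whose
            # result is the list of nodes known to the backend.
            raise NotImplementedError()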

.. graphviz::

    digraph G {
        subgraph cluster_1 {
            volumes [label="mgr/volumes"]
            rook [label="mgr/rook"]
            dashboard [label="mgr/dashboard"]
            orchestrator_cli [label="mgr/orchestrator_cli"]
            orchestrator [label="Orchestrator Interface"]
            ansible [label="mgr/ansible"]
            ssh [label="mgr/ssh"]
            deepsea [label="mgr/deepsea"]

            label = "ceph-mgr";
        }

        volumes -> orchestrator
        dashboard -> orchestrator
        orchestrator_cli -> orchestrator
        orchestrator -> rook -> rook_io
        orchestrator -> ansible -> ceph_ansible
        orchestrator -> deepsea -> suse_deepsea
        orchestrator -> ssh

        rook_io [label="Rook"]
        ceph_ansible [label="ceph-ansible"]
        suse_deepsea [label="DeepSea"]

        rankdir="TB";
    }

Behind all the abstraction, the purpose of orchestrator modules is simple:
enable Ceph to do things like discover available hardware, create and
destroy OSDs, and run MDS and RGW services.

A tutorial is not included here: for full and concrete examples, see
the existing implemented orchestrator modules in the Ceph source tree.

Glossary
---------

Stateful service
  a daemon that uses local storage, such as OSD or mon.

Stateless service
  a daemon that doesn't use any local storage, such
  as an MDS, RGW, nfs-ganesha, or iSCSI gateway.

Label
  an arbitrary string tag that may be applied by administrators
  to nodes. Typically administrators use labels to indicate
  which nodes should run which kinds of service. Labels are
  advisory (from human input) and do not guarantee that nodes
  have particular physical capabilities.

Drive group
  a collection of block devices with common/shared OSD
  formatting (typically one or more SSDs acting as
  journals/dbs for a group of HDDs).

Placement
  the choice of which node is used to run a service.

Key Concepts
-------------

The underlying orchestrator remains the source of truth for information
about whether a service is running, what is running where, which
nodes are available, etc. Orchestrator modules should avoid taking
any internal copies of this information, and read it directly from
the orchestrator backend as much as possible.

Bootstrapping nodes and adding them to the underlying orchestration
system is outside the scope of Ceph's orchestrator interface. Ceph
can only work on nodes when the orchestrator is already aware of them.

Calls to orchestrator modules are all asynchronous, and return *completion*
objects (see below) rather than returning values immediately.
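
As a hedged sketch of what this means for a caller: assuming an
``Orchestrator`` implementation ``orch`` and the ``is_complete`` and
``result`` accessors provided by the completion classes described below,
reading the host list looks roughly like this (the helper function itself
is hypothetical):

.. code-block:: python

    import time


    def get_node_names(orch):
        # The call returns immediately with a completion object rather
        # than with the host list itself.
        completion = orch.get_hosts()
        # Drive the operation by repeatedly handing the completion back
        # to the orchestrator's wait() until it reports completion.
        while not completion.is_complete:
            orch.wait([completion])
            time.sleep(1)
        return completion.result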

Where possible, placement of stateless services should be left up to the
orchestrator.

Completions and batching
-------------------------

All methods that read or modify the state of the system can potentially
be long running. To handle that, all such methods return a *completion*
object (a *ReadCompletion* or a *WriteCompletion*). Orchestrator modules
must implement the *wait* method: this takes a list of completions, and
is responsible for checking whether they're finished, and advancing the
underlying operations as needed.

Each orchestrator module implements its own underlying mechanisms
for completions. This might involve running the underlying operations
in threads, or batching the operations up and later executing them
in one go in the background. If implementing such a batching pattern, the
module would do no work on any operation until it appeared in a list
of completions passed into *wait*, as sketched below.
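
A rough sketch of such a batching backend follows. The queueing scheme and
the ``_BatchedCompletion`` class are hypothetical illustrations, not part
of the orchestrator interface; only ``Orchestrator``, ``ReadCompletion``
and the *wait* convention come from the real module.

.. code-block:: python

    import orchestrator


    class _BatchedCompletion(orchestrator.ReadCompletion):
        """Hypothetical completion whose work is deferred until wait()."""

        def __init__(self, callback):
            super(_BatchedCompletion, self).__init__()
            self.callback = callback   # work to perform later, in wait()
            self._result = None
            self._done = False

        @property
        def result(self):
            return self._result

        @property
        def is_complete(self):
            return self._done


    class BatchingOrchestrator(orchestrator.Orchestrator):
        """Hypothetical backend that batches all work into wait()."""

        def get_hosts(self):
            # No work happens here; we only record what to do later.
            return _BatchedCompletion(self._fetch_hosts_from_backend)

        def _fetch_hosts_from_backend(self):
            # Placeholder for a call into the external orchestrator.
            return []

        def wait(self, completions):
            # All deferred work is executed only once the caller hands
            # the completions back to wait().
            for c in completions:
                if not c.is_complete:
                    c._result = c.callback()
                    c._done = True
            return all(c.is_complete for c in completions)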

*WriteCompletion* objects have a two-stage execution. First they become
*persistent*, meaning that the write has made it to the orchestrator
itself, and been persisted there (e.g. a manifest file has been updated).
If ceph-mgr crashed at this point, the operation would still eventually take
effect. Second, the completion becomes *effective*, meaning that the
operation has really happened (e.g. a service has actually been started).
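
As a hedged illustration of those two stages, a backend-specific write
completion might track them with two flags, roughly as follows. The
``ExternalWriteCompletion`` class and its attributes are hypothetical;
``is_persistent`` and ``is_effective`` are the accessors the interface
expects subclasses to provide.

.. code-block:: python

    import orchestrator


    class ExternalWriteCompletion(orchestrator.WriteCompletion):
        """Hypothetical completion for a write handed to a backend."""

        def __init__(self):
            super(ExternalWriteCompletion, self).__init__()
            self._submitted = False   # durably recorded by the backend
            self._applied = False     # the change is actually in effect

        @property
        def is_persistent(self):
            # The write survives a ceph-mgr restart once the backend
            # has durably recorded it (e.g. updated its manifest).
            return self._submitted

        @property
        def is_effective(self):
            # The requested change has really happened, e.g. the
            # service is now running.
            return self._applied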

.. automethod:: Orchestrator.wait

.. autoclass:: _Completion
   :members:

.. autoclass:: ReadCompletion
   :members:

.. autoclass:: WriteCompletion
   :members:

Placement
----------

In general, stateless services do not require any specific placement
rules, as they can run anywhere that sufficient system resources
are available. However, some orchestrators may not include the
functionality to choose a location in this way, so we can optionally
specify a location when creating a stateless service.

OSD services generally require a specific placement choice, as this
will determine which storage devices are used.

Error Handling
---------------

The main goal of error handling within orchestrator modules is to provide
debug information to assist users when dealing with deployment errors.

.. autoclass:: OrchestratorError
.. autoclass:: NoOrchestrator
.. autoclass:: OrchestratorValidationError


In detail, orchestrators need to explicitly deal with different kinds of
errors:

1. No orchestrator configured

   See :class:`NoOrchestrator`.

2. An orchestrator doesn't implement a specific method.

   For example, an Orchestrator doesn't support ``add_host``.

   In this case, a ``NotImplementedError`` is raised (see the sketch at
   the end of this section).

3. Missing features within implemented methods.

   For example, optional parameters to a command that are not supported by
   the backend (such as the ``hosts`` field of :func:`Orchestrator.update_mons`
   with the Rook backend).

   See :class:`OrchestratorValidationError`.

4. Input validation errors

   The ``orchestrator_cli`` module and other calling modules are supposed to
   provide meaningful error messages.

   See :class:`OrchestratorValidationError`.

5. Errors when actually executing commands

   The resulting completion should contain an error string that assists in
   understanding the problem. In addition, :func:`_Completion.is_errored` is
   set to ``True``.

6. Invalid configuration in the orchestrator modules

   This can be handled in the same way as case 5.

All other errors are unexpected orchestrator issues and thus should raise an
exception that is then logged to the mgr log file. If there is a completion
object at that point, :func:`_Completion.result` may contain an error message.
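
The sketch below shows how a backend method might raise these errors. The
class, the method body, and the ``supports_host_filter`` flag are
hypothetical; only the exception classes and the ``NotImplementedError``
convention come from the interface described above.

.. code-block:: python

    import orchestrator


    class SketchOrchestrator(orchestrator.Orchestrator):
        """Hypothetical backend used only to illustrate error handling."""

        supports_host_filter = False

        def update_mons(self, num, hosts):
            # Case 3: the method is implemented, but an optional feature
            # (placing mons on specific hosts) is not supported.
            if hosts and not self.supports_host_filter:
                raise orchestrator.OrchestratorValidationError(
                    "this backend cannot place mons on specific hosts")
            # ... hand the request to the external orchestrator ...

        # Case 2: methods that are not implemented at all are simply left
        # out; callers then receive the default NotImplementedError from
        # the Orchestrator base class.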

Excluded functionality
-----------------------

- Ceph's orchestrator interface is not a general purpose framework for
  managing Linux servers -- it is deliberately constrained to manage
  the Ceph cluster's services only.
- Multipathed storage is not handled (multipathing is unnecessary for
  Ceph clusters). Each drive is assumed to be visible only on
  a single node.

Host management
----------------

.. automethod:: Orchestrator.add_host
.. automethod:: Orchestrator.remove_host
.. automethod:: Orchestrator.get_hosts

Inventory and status
---------------------

.. automethod:: Orchestrator.get_inventory
.. autoclass:: InventoryFilter
.. autoclass:: InventoryNode

.. autoclass:: InventoryDevice
   :members:

.. automethod:: Orchestrator.describe_service
.. autoclass:: ServiceDescription

Service Actions
----------------

.. automethod:: Orchestrator.service_action

OSD management
---------------

.. automethod:: Orchestrator.create_osds
.. automethod:: Orchestrator.remove_osds

.. py:currentmodule:: ceph.deployment.drive_group

.. autoclass:: DeviceSelection
   :members:

.. autoclass:: DriveGroupSpec
   :members:
   :exclude-members: from_json
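
As a hedged example of what a drive group looks like in code: assuming the
``host_pattern``, ``data_devices`` and ``paths`` parameters carry the
meaning documented above, a trivial spec selecting two data devices on
every host could be built roughly like this (the device paths are
placeholders).

.. code-block:: python

    from ceph.deployment.drive_group import DeviceSelection, DriveGroupSpec

    # Use /dev/vdb and /dev/vdc on all matched hosts as standalone data
    # devices (no separate DB/WAL devices).
    spec = DriveGroupSpec(
        host_pattern='*',
        data_devices=DeviceSelection(paths=['/dev/vdb', '/dev/vdc']),
    )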

.. py:currentmodule:: orchestrator

.. _orchestrator-osd-replace:

OSD Replacement
^^^^^^^^^^^^^^^^

See :ref:`rados-replacing-an-osd` for the underlying process.

Replacing OSDs is fundamentally a two-stage process, as users need to
physically replace drives. The orchestrator therefore exposes this
two-stage process.

Phase one is a call to :meth:`Orchestrator.remove_osds` with ``destroy=True``
in order to mark the OSD as destroyed.

Phase two is a call to :meth:`Orchestrator.create_osds` with a Drive Group that has

.. py:currentmodule:: ceph.deployment.drive_group

:attr:`DriveGroupSpec.osd_id_claims` set to the destroyed OSD IDs.

.. py:currentmodule:: orchestrator
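
A hedged sketch of the two phases from a calling module's point of view
follows. The OSD id, host name, and device path are placeholders, the
keyword arguments are assumed to match the signatures documented above,
and the mapping shape of ``osd_id_claims`` is an assumption; the returned
completions still need to be waited on as described earlier.

.. code-block:: python

    from ceph.deployment.drive_group import DeviceSelection, DriveGroupSpec


    def replace_osd(orch, host, device, osd_id):
        # Phase one: mark the OSD as destroyed so that its id can be
        # reused (assumes remove_osds(osd_ids, destroy=...) as above).
        orch.remove_osds([osd_id], destroy=True)

        # ... the administrator physically replaces the drive ...

        # Phase two: recreate the OSD on the replacement device, claiming
        # the destroyed id (host-keyed dict is an assumption here).
        spec = DriveGroupSpec(
            host_pattern=host,
            data_devices=DeviceSelection(paths=[device]),
            osd_id_claims={host: [osd_id]},
        )
        return orch.create_osds(spec)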

Stateless Services
-------------------

.. autoclass:: StatelessServiceSpec

.. automethod:: Orchestrator.add_mds
.. automethod:: Orchestrator.remove_mds
.. automethod:: Orchestrator.update_mds
.. automethod:: Orchestrator.add_rgw
.. automethod:: Orchestrator.remove_rgw
.. automethod:: Orchestrator.update_rgw

.. autoclass:: NFSServiceSpec

.. automethod:: Orchestrator.add_nfs
.. automethod:: Orchestrator.remove_nfs
.. automethod:: Orchestrator.update_nfs

Upgrades
---------

.. automethod:: Orchestrator.upgrade_available
.. automethod:: Orchestrator.upgrade_start
.. automethod:: Orchestrator.upgrade_status
.. autoclass:: UpgradeSpec
.. autoclass:: UpgradeStatusSpec

Utility
--------

.. automethod:: Orchestrator.available
.. automethod:: Orchestrator.get_feature_set

Client Modules
---------------

.. autoclass:: OrchestratorClientMixin
   :members:
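
A hedged sketch of a client module built on this mixin follows. The module
name and method are hypothetical; ``_orchestrator_wait`` is assumed to be
the helper provided by ``OrchestratorClientMixin`` for driving completions
to completion.

.. code-block:: python

    from mgr_module import MgrModule
    import orchestrator


    class MyClient(orchestrator.OrchestratorClientMixin, MgrModule):
        """Hypothetical module that talks to whichever orchestrator is active."""

        def list_hosts(self):
            # The mixin forwards this call to the active orchestrator
            # module, so it looks like a local call returning a completion.
            completion = self.get_hosts()
            # Block until the active orchestrator has finished the operation.
            self._orchestrator_wait([completion])
            self.log.info("hosts known to the orchestrator: %s", completion.result)
            return completion.result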