doc: Replaced "plugin" with "module" in the Mgr documentation

The documentation currently refers to Ceph Manager Modules as
"plugins" in many places, while the command-line interface uses
"module" to enable/disable modules. Replaced all occurrences
of "plugin" with "module" in the docs to avoid confusion and to
align with the CLI. Also fixed the capitalization of some
module chapter titles.

Fixes: https://tracker.ceph.com/issues/38481
Signed-off-by: Lenz Grimmer <lgrimmer@suse.com>
Lenz Grimmer 2019-02-27 13:49:47 +01:00
parent c5f21d2212
commit c3149421bc
15 changed files with 63 additions and 64 deletions

View File

@@ -1,13 +1,13 @@
Crash plugin
Crash Module
============
The crash plugin collects information about daemon crashdumps and stores
The crash module collects information about daemon crashdumps and stores
it in the Ceph cluster for later analysis.
Daemon crashdumps are dumped in /var/lib/ceph/crash by default; this can
be configured with the option 'crash dir'. Crash directories are named by
time and date and a randomly-generated UUID, and contain a metadata file
'meta' and a recent log file, with a "crash_id" that is the same.
This plugin allows the metadata about those dumps to be persisted in
This module allows the metadata about those dumps to be persisted in
the monitors' storage.
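As a rough sketch (assuming the module is enabled and a working ``ceph`` CLI and keyring on the node), the ``meta`` file of each crash directory can be handed to the module with ``ceph crash post``::

    import glob
    import subprocess

    # Each crash directory under /var/lib/ceph/crash holds a JSON 'meta' file
    # describing one crash (including its "crash_id").
    for meta in glob.glob("/var/lib/ceph/crash/*/meta"):
        # The crash module persists this metadata in the monitors' storage.
        subprocess.run(["ceph", "crash", "post", "-i", meta], check=True)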
Enabling

View File

@@ -1,10 +1,10 @@
=====================
DISKPREDICTION PLUGIN
Diskprediction Module
=====================
The *diskprediction* plugin supports two modes: cloud mode and local mode. In cloud mode, the disk and Ceph operating status information is collected from Ceph cluster and sent to a cloud-based DiskPrediction server over the Internet. DiskPrediction server analyzes the data and provides the analytics and prediction results of performance and disk health states for Ceph clusters.
The *diskprediction* module supports two modes: cloud mode and local mode. In cloud mode, the disk and Ceph operating status information is collected from the Ceph cluster and sent to a cloud-based DiskPrediction server over the Internet. The DiskPrediction server analyzes the data and provides analytics and prediction results of performance and disk health states for Ceph clusters.
Local mode doesn't require any external server for data analysis and output results. In local mode, the *diskprediction* plugin uses an internal predictor module for disk prediction service, and then returns the disk prediction result to the Ceph system.
Local mode doesn't require any external server for data analysis and output of results. In local mode, the *diskprediction* module uses an internal predictor module for the disk prediction service and then returns the disk prediction result to the Ceph system.
| Local predictor: 70% accuracy
| Cloud predictor for free: 95% accuracy
@@ -39,7 +39,7 @@ The connection settings are used for connection between Ceph and DiskPrediction
Local Mode
----------
The *diskprediction* plugin leverages Ceph device health check to collect disk health metrics and uses internal predictor module to produce the disk failure prediction and returns back to Ceph. Thus, no connection settings are required in local mode. The local predictor module requires at least six datasets of device health metrics to implement the prediction.
The *diskprediction* module leverages the Ceph device health check to collect disk health metrics and uses an internal predictor module to produce the disk failure prediction and return it to Ceph. Thus, no connection settings are required in local mode. The local predictor module requires at least six datasets of device health metrics to implement the prediction.
Run the following command to use the local predictor to predict device life expectancy.
@@ -86,7 +86,7 @@ Additional optional configuration settings are the following:
Diskprediction Data
===================
The *diskprediction* plugin actively sends/retrieves the following data to/from DiskPrediction server.
The *diskprediction* module actively sends/retrieves the following data to/from the DiskPrediction server.
Metrics Data
@@ -268,14 +268,14 @@ Osd:
+----------------------+-----------------------------------------+
- Correlation information for each Ceph object
- The plugin agent information
- The plugin agent cluster information
- The plugin agent host information
- The module agent information
- The module agent cluster information
- The module agent host information
SMART Data
-----------
- Ceph physical device SMART data (provided by Ceph *devicehealth* plugin)
- Ceph physical device SMART data (provided by Ceph *devicehealth* module)
Prediction Data
@@ -348,6 +348,6 @@ use the following command.
debug mgr = 20
With logging set to debug for the manager the plugin will print out logging
With logging set to debug for the manager, the module will print out logging
messages with the prefix *mgr[diskprediction]* for easy filtering.

View File

@@ -1,5 +1,5 @@
hello world
===========
Hello World Module
==================
This is a simple module skeleton for documentation purposes.
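A minimal sketch of such a skeleton, assuming only the ``MgrModule`` base class from ``mgr_module.py`` (the class name and log text here are illustrative)::

    from mgr_module import MgrModule

    class Hello(MgrModule):
        """Bare-bones manager module that only logs a greeting."""

        def serve(self):
            # Runs in a dedicated thread once the module is enabled.
            self.log.info("hello world")

        def shutdown(self):
            # Called when ceph-mgr stops the module.
            pass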
@@ -35,5 +35,5 @@ The log is found at::
Documenting
-----------
After adding a new mgr module/plugin, be sure to add its documentation to ``doc/mgr/plugin_name.rst``.
Also, add a link to your new plugin into ``doc/mgr/index.rst``.
After adding a new mgr module, be sure to add its documentation to ``doc/mgr/module_name.rst``.
Also, add a link to your new module into ``doc/mgr/index.rst``.

View File

@@ -26,23 +26,23 @@ sensible.
:maxdepth: 1
Installation and Configuration <administrator>
Writing plugins <plugins>
Writing modules <modules>
Writing orchestrator plugins <orchestrator_modules>
Dashboard plugin <dashboard>
DiskPrediction plugin <diskprediction>
Local pool plugin <localpool>
RESTful plugin <restful>
Zabbix plugin <zabbix>
Prometheus plugin <prometheus>
Influx plugin <influx>
Hello plugin <hello>
Telegraf plugin <telegraf>
Telemetry plugin <telemetry>
Iostat plugin <iostat>
Crash plugin <crash>
Orchestrator CLI plugin <orchestrator_cli>
Rook plugin <rook>
DeepSea plugin <deepsea>
Insights plugin <insights>
Ansible plugin <ansible>
Dashboard module <dashboard>
DiskPrediction module <diskprediction>
Local pool module <localpool>
RESTful module <restful>
Zabbix module <zabbix>
Prometheus module <prometheus>
Influx module <influx>
Hello module <hello>
Telegraf module <telegraf>
Telemetry module <telemetry>
Iostat module <iostat>
Crash module <crash>
Orchestrator CLI module <orchestrator_cli>
Rook module <rook>
DeepSea module <deepsea>
Insights module <insights>
Ansible module <ansible>
SSH orchestrator <ssh>

View File

@@ -1,11 +1,11 @@
=============
Influx Plugin
Influx Module
=============
The influx plugin continuously collects and sends time series data to an
The influx module continuously collects and sends time series data to an
influxdb database.
The influx plugin was introduced in the 13.x *Mimic* release.
The influx module was introduced in the 13.x *Mimic* release.
--------
Enabling

View File

@@ -1,7 +1,7 @@
Insights plugin
Insights Module
===============
The insights plugin collects and exposes system information to the Insights Core
The insights module collects and exposes system information to the Insights Core
data analysis framework. It is intended to replace explicit interrogation of
Ceph CLIs and daemon admin sockets, reducing the API surface that Insights
depends on. The insights report contains the following:

View File

@@ -3,7 +3,7 @@
iostat
======
This plugin shows the current throughput and IOPS done on the Ceph cluster.
This module shows the current throughput and IOPS done on the Ceph cluster.
Enabling
--------

View File

@@ -1,7 +1,7 @@
Local pool plugin
Local Pool Module
=================
The *localpool* plugin can automatically create RADOS pools that are
The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct rack in the cluster. This can be useful for some
deployments that want to distribute some data locally as well as globally across the cluster.

View File

@@ -10,7 +10,7 @@ ceph-mgr module developer's guide
This is developer documentation, describing Ceph internals that
are only relevant to people writing ceph-mgr modules.
Creating a plugin
Creating a module
-----------------
In pybind/mgr/, create a python module. Within your module, create a class
@@ -32,7 +32,7 @@ additional methods to the base ``MgrModule`` class. See
:ref:`Orchestrator modules <orchestrator-modules>` for more on
creating these modules.
Installing a plugin
Installing a module
-------------------
Once your module is present in the location set by the
@@ -59,7 +59,7 @@ severities 20, 4, 1 and 0 respectively.
Exposing commands
-----------------
Set the ``COMMANDS`` class attribute of your plugin to a list of dicts
Set the ``COMMANDS`` class attribute of your module to a list of dicts
like this::
COMMANDS = [
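    # Illustrative (hypothetical) entry: each dict gives the command
    # signature, a help string and the capability needed to run it.
    {
        "cmd": "hello name=person_name,type=CephString,req=false",
        "desc": "Say hello",
        "perm": "r"
    },
]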
@@ -197,7 +197,7 @@ an SQL database.
There are no consistency rules about access to cluster structures or
daemon metadata. For example, an OSD might exist in OSDMap but
have no metadata, or vice versa. On a healthy cluster these
will be very rare transient states, but plugins should be written
will be very rare transient states, but modules should be written
to cope with the possibility.
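For instance, a hedged sketch of coping with that inside a module, assuming the ``get("osd_map")`` and ``get_metadata()`` accessors described in this guide (the field names used are illustrative)::

    from mgr_module import MgrModule

    class Example(MgrModule):
        def serve(self):
            # The OSDMap and per-daemon metadata may briefly disagree, so
            # treat missing metadata as a normal case rather than an error.
            osd_map = self.get("osd_map")
            for osd in osd_map.get("osds", []):
                meta = self.get_metadata("osd", str(osd["osd"]))
                if meta is None:
                    continue  # present in the OSDMap, metadata not yet known
                self.log.debug("osd.%s runs on %s",
                               osd["osd"], meta.get("hostname"))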
Note that these accessors must not be called in the module's ``__init__``

View File

@@ -1,12 +1,12 @@
=================
Prometheus plugin
Prometheus Module
=================
Provides a Prometheus exporter to pass on Ceph performance counters
from the collection point in ceph-mgr. Ceph-mgr receives MMgrReport
messages from all MgrClient processes (mons and OSDs, for instance)
with performance counter schema data and actual counter data, and keeps
a circular buffer of the last N samples. This plugin creates an HTTP
a circular buffer of the last N samples. This module creates an HTTP
endpoint (like all Prometheus exporters) and retrieves the latest sample
of every counter when polled (or "scraped" in Prometheus terminology).
The HTTP path and query parameters are ignored; all extant counters

View File

@@ -1,7 +1,7 @@
restful plugin
Restful Module
==============
RESTful plugin offers the REST API access to the status of the cluster
The RESTful module offers REST API access to the status of the cluster
over an SSL-secured connection.
Enabling

View File

@@ -1,7 +1,7 @@
===============
Telegraf Plugin
Telegraf Module
===============
The Telegraf plugin collects and sends statistics series to a Telegraf agent.
The Telegraf module collects and sends statistics series to a Telegraf agent.
The Telegraf agent can buffer, aggregate, parse and process the data before
sending it to an output which can be InfluxDB, ElasticSearch and many more.
@@ -10,7 +10,7 @@ Currently the only way to send statistics to Telegraf from this module is to
use the socket listener. The module can send statistics over UDP, TCP or
a UNIX socket.
The Telegraf plugin was introduced in the 13.x *Mimic* release.
The Telegraf module was introduced in the 13.x *Mimic* release.
--------
Enabling

View File

@@ -1,6 +1,6 @@
Telemetry plugin
Telemetry Module
================
The telemetry plugin sends anonymous data about the cluster, in which it is running, back to the Ceph project.
The telemetry module sends anonymous data about the cluster in which it is running back to the Ceph project.
The data being sent back to the project does not contain any sensitive data like pool names, object names, object contents or hostnames.

View File

@@ -1,7 +1,7 @@
Zabbix plugin
Zabbix Module
=============
The Zabbix plugin actively sends information to a Zabbix server like:
The Zabbix module actively sends information to a Zabbix server, such as:
- Ceph status
- I/O operations
@@ -12,7 +12,7 @@ The Zabbix plugin actively sends information to a Zabbix server like:
Requirements
------------
The plugin requires that the *zabbix_sender* executable is present on *all*
The module requires that the *zabbix_sender* executable is present on *all*
machines running ceph-mgr. It can be installed on most distributions using
the package manager.
@@ -96,7 +96,7 @@ The current configuration of the module can also be shown:
Template
^^^^^^^^
A `template <https://raw.githubusercontent.com/ceph/ceph/9c54334b615362e0a60442c2f41849ed630598ab/src/pybind/mgr/zabbix/zabbix_template.xml>`_
(XML) to be used on the Zabbix server can be found in the source directory of the plugin.
(XML) to be used on the Zabbix server can be found in the source directory of the module.
This template contains all items and a few triggers. You can customize the triggers afterwards to fit your needs.
@@ -124,6 +124,5 @@ ceph-mgr and check the logs.
[mgr]
debug mgr = 20
With logging set to debug for the manager the plugin will print various logging
lines prefixed with *mgr[zabbix]* for easy filtering.
With logging set to debug for the manager, the module will print various logging
lines prefixed with *mgr[zabbix]* for easy filtering.

View File

@@ -27,7 +27,7 @@ required when running Ceph Filesystem clients.
responsible for keeping track of runtime metrics and the current
state of the Ceph cluster, including storage utilization, current
performance metrics, and system load. The Ceph Manager daemons also
host python-based plugins to manage and expose Ceph cluster
host python-based modules to manage and expose Ceph cluster
information, including a web-based :ref:`mgr-dashboard` and
`REST API`_. At least two managers are normally required for high
availability.