Merge pull request #23443 from alfredodeza/wip-rm24970

ceph-volume: `lvm batch` documentation and man page updates

Reviewed-by: Andrew Schoen <aschoen@redhat.com>
Andrew Schoen 2018-08-06 16:02:58 +00:00 committed by GitHub
commit ef6e10501a
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
3 changed files with 148 additions and 2 deletions

@ -20,7 +20,7 @@ that may have been deployed with ``ceph-disk``.
Migrating
---------
Starting on Ceph version 12.2.2, ``ceph-disk`` is deprecated. Deprecation
Starting on Ceph version 13.0.0, ``ceph-disk`` is deprecated. Deprecation
warnings will show up that will link to this page. It is strongly suggested
that users start consuming ``ceph-volume``. There are two paths for migrating:
@ -57,6 +57,7 @@ and ``ceph-disk`` is fully disabled. Encryption is fully supported.
systemd
lvm/index
lvm/activate
lvm/batch
lvm/encryption
lvm/prepare
lvm/create

@ -0,0 +1,115 @@
.. _ceph-volume-lvm-batch:
``batch``
===========
This subcommand creates multiple OSDs at the same time from an input list of
devices. Depending on the device type (spinning drive or solid state), the
internal engine decides on the best approach for creating the OSDs. This
decision abstracts away many of the nuances of creating an OSD: how large
should a ``block.db`` be? How can a solid state device be mixed with spinning
devices in an efficient way?
The process is similar to :ref:`ceph-volume-lvm-create`, and will do the
preparation and activation at once, following the same workflow for each OSD.
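
A minimal sketch of what a call could look like, assuming two illustrative
device paths::

    ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc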

All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
preventing ``systemd`` units from starting, and choosing bluestore or
filestore, are supported. Fine-grained options that affect a single OSD, such
as specifying where a journal should be placed, are not supported.
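
For instance, encryption can be requested for the whole batch; a sketch using
the ``--dmcrypt`` flag listed in the optional arguments of the man page
(device paths are illustrative)::

    ceph-volume lvm batch --bluestore --dmcrypt /dev/sdb /dev/sdc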
.. _ceph-volume-lvm-batch_bluestore:
``bluestore``
-------------
The :term:`bluestore` objectstore (the default) is used when creating multiple
OSDs with the ``batch`` sub-command. It handles a few different scenarios,
depending on the input devices:

#. Devices are all spinning HDDs: 1 OSD is created per device
#. Devices are all solid state drives (SSDs): 2 OSDs are created per device
#. Devices are a mix of HDDs and SSDs: data is placed on the spinning device,
   and the ``block.db`` is created on the SSD, as large as possible (see the
   example below)

.. note:: Although operations in ``ceph-volume lvm create`` allow usage of
   ``block.wal``, it isn't supported with the ``batch`` sub-command.
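
For the mixed scenario above, a call could look like the following sketch,
where ``/dev/nvme0n1`` stands in for the solid state device and the other
device names are illustrative::

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1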
.. _ceph-volume-lvm-batch_report:
Reporting
---------
When the tool is called to create OSDs, it pre-computes the outcome for the
given devices and prompts the user to confirm that the result is acceptable.
This report makes it easier to understand how the devices will be used. Once
the prompt is confirmed, the process continues.

Although the prompt is helpful for understanding outcomes, it is often useful
to try different inputs to find the best possible layout. With the
``--report`` flag, no actual operations are performed; the tool only reports
what the outcome of the given input would be.
**pretty reporting**
For two spinning devices, this is how the ``pretty`` report (the default) would
look::

    $ ceph-volume lvm batch --report /dev/sdb /dev/sdc

    Total OSDs: 2

      Type            Path                      LV Size         % of device
    --------------------------------------------------------------------------------
      [data]          /dev/sdb                  10.74 GB        100%
    --------------------------------------------------------------------------------
      [data]          /dev/sdc                  10.74 GB        100%
**JSON reporting**
Reporting can produce richer output with ``JSON``, which includes a few more
hints on sizing. This format is better suited for other tooling that needs to
consume and transform the information.
For two spinning devices, this is how the ``JSON`` report would look::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
    {
        "osds": [
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdb",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            },
            {
                "block.db": {},
                "data": {
                    "human_readable_size": "10.74 GB",
                    "parts": 1,
                    "path": "/dev/sdc",
                    "percentage": 100,
                    "size": 11534336000.0
                }
            }
        ],
        "vgs": [
            {
                "devices": [
                    "/dev/sdb"
                ],
                "parts": 1
            },
            {
                "devices": [
                    "/dev/sdc"
                ],
                "parts": 1
            }
        ]
    }
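
Because the report is plain ``JSON``, it can be piped into other tools. A
minimal sketch, assuming ``jq`` is installed, that extracts the data device of
each proposed OSD::

    $ ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc | \
          jq -r '.osds[].data.path'
    /dev/sdb
    /dev/sdc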

@ -13,7 +13,7 @@ Synopsis
| [--log-path LOG_PATH]
| **ceph-volume** **lvm** [ *trigger* | *create* | *activate* | *prepare*
| *zap* | *list*]
| *zap* | *list* | *batch*]
| **ceph-volume** **simple** [ *trigger* | *scan* | *activate* ]
@ -43,6 +43,36 @@ activated.
Subcommands:
**batch**
Creates OSDs from a list of devices using a ``filestore``
or ``bluestore`` (default) setup. It will create all necessary volume groups
and logical volumes required to have a working OSD.
Example usage with three devices::

    ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc
Optional arguments:

* [-h, --help]            show the help message and exit
* [--bluestore]           Use the bluestore objectstore (default)
* [--filestore]           Use the filestore objectstore
* [--yes]                 Skip the report and prompt to continue provisioning
* [--dmcrypt]             Enable encryption for the underlying OSD devices
* [--crush-device-class]  Define a CRUSH device class to assign the OSD to
* [--no-systemd]          Do not enable or create any systemd units
* [--report]              Report what the potential outcome would be for the
                          current input (requires devices to be passed in)
* [--format]              Output format when reporting (used along with
                          --report), can be one of 'pretty' (default) or 'json'

Required positional arguments:

* <DEVICE>  Full path to a raw device, like ``/dev/sda``. Multiple
            ``<DEVICE>`` paths can be passed in.
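
For example, a sketch of provisioning two illustrative devices without the
confirmation prompt, using the ``--yes`` flag described above::

    ceph-volume lvm batch --bluestore --yes /dev/sdb /dev/sdc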
**activate**
Enables a systemd unit that persists the OSD ID and its UUID (also called
``fsid`` in Ceph CLI tools), so that at boot time it can understand what OSD is