doc/rbd: readability and spelling

Signed-off-by: Anthony D'Atri <anthony.datri@gmail.com>
Anthony D'Atri 2020-10-01 21:09:56 -07:00
parent d98075628e
commit d3f9c6853d
8 changed files with 74 additions and 69 deletions


@@ -4,18 +4,18 @@
.. index:: Ceph Block Device; introduction
A block is a sequence of bytes (for example, a 512-byte block of data).
Block-based storage interfaces are the most common way to store data with
rotating media such as hard disks, CDs, floppy disks, and even traditional
9-track tape. The ubiquity of block device interfaces makes a virtual block
device an ideal candidate to interact with a mass data storage system like Ceph.
A block is a sequence of bytes (often 512).
Block-based storage interfaces are a mature and common way to store data on
media including HDDs, SSDs, CDs, floppy disks, and even tape.
The ubiquity of block device interfaces makes them a perfect fit for
interacting with mass data storage systems including Ceph.
Ceph block devices are thin-provisioned, resizable and store data striped over
multiple OSDs in a Ceph cluster. Ceph block devices leverage
Ceph block devices are thin-provisioned, resizable, and store data striped over
multiple OSDs. Ceph block devices leverage
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` capabilities
such as snapshotting, replication and consistency. Ceph's
:abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD)
interact with OSDs using kernel modules or the ``librbd`` library.
including snapshotting, replication and strong consistency. Ceph block
storage clients communicate with Ceph clusters through kernel modules or
the ``librbd`` library.
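
As an illustrative sketch (the pool and image names below are placeholders,
not part of this commit), an image can be created with the ``rbd`` CLI and
then exposed either through the kernel module or directly to a ``librbd``
client::

    # create a 4 GiB image in a pool named 'rbd'
    rbd create --size 4096 rbd/vm-disk-1

    # map it through the krbd kernel module (appears as /dev/rbd*)
    sudo rbd map rbd/vm-disk-1

    # inspect it; librbd-based clients such as QEMU attach it without mapping
    rbd info rbd/vm-disk-1
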
.. ditaa::
@@ -30,7 +30,7 @@ interact with OSDs using kernel modules or the ``librbd`` library.
.. note:: Kernel modules can use Linux page caching. For ``librbd``-based
applications, Ceph supports `RBD Caching`_.
Ceph's block devices deliver high performance with infinite scalability to
Ceph's block devices deliver high performance with vast scalability to
`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster


@@ -4,7 +4,7 @@ iSCSI Initiator for Microsoft Windows
**Prerequisite:**
- Microsoft Windows Server 2016
- Microsoft Windows Server 2016 or later
**iSCSI Initiator, Discovery and Setup:**


@@ -1,15 +1,15 @@
-----------------------------
Monitoring the iSCSI gateways
-----------------------------
------------------------------
Monitoring Ceph iSCSI gateways
------------------------------
Ceph provides an additional tool for iSCSI gateway environments
Ceph provides a tool for iSCSI gateway environments
to monitor performance of exported RADOS Block Device (RBD) images.
The ``gwtop`` tool is a ``top``-like tool that displays aggregated
performance metrics of RBD images that are exported to clients over
iSCSI. The metrics are sourced from a Performance Metrics Domain Agent
(PMDA). Information from the Linux-IO target (LIO) PMDA is used to list
each exported RBD image with the connected client and its associated I/O
each exported RBD image, the connected client, and its associated I/O
metrics.
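
As a hedged example, assuming the ``gwtop`` utility (typically shipped in a
package such as ``ceph-iscsi-tools``) and the LIO PMDA are installed on the
gateway node, monitoring is simply::

    # run on an iSCSI gateway node; output format varies by version
    gwtop
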
**Requirements:**


@@ -4,21 +4,22 @@
Ceph iSCSI Gateway
==================
The iSCSI gateway is integrating Ceph Storage with the iSCSI standard to provide
The iSCSI Gateway presents
a Highly Available (HA) iSCSI target that exports RADOS Block Device (RBD) images
as SCSI disks. The iSCSI protocol allows clients (initiators) to send SCSI commands
to SCSI storage devices (targets) over a TCP/IP network. This allows for heterogeneous
clients, such as Microsoft Windows, to access the Ceph Storage cluster.
to storage devices (targets) over a TCP/IP network, enabling clients without
native Ceph client support to access Ceph block storage. These include
Microsoft Windows and even BIOS.
Each iSCSI gateway runs the Linux IO target kernel subsystem (LIO) to provide the
iSCSI protocol support. LIO utilizes a userspace passthrough (TCMU) to interact
Each iSCSI gateway exploits the Linux IO target kernel subsystem (LIO) to provide
iSCSI protocol support. LIO utilizes userspace passthrough (TCMU) to interact
with Ceph's librbd library and expose RBD images to iSCSI clients. With Ceph's
iSCSI gateway you can effectively run a fully integrated block-storage
iSCSI gateway you can provision a fully integrated block-storage
infrastructure with all the features and benefits of a conventional Storage Area
Network (SAN).
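
As a quick health check on a running gateway (a sketch only; these unit names
assume a ceph-iscsi 3.x deployment and may differ between releases), the
gateway services can be inspected with::

    systemctl status rbd-target-api rbd-target-gw
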
.. ditaa::
Cluster Network
Cluster Network (optional)
+-------------------------------------------+
| | | |
+-------+ +-------+ +-------+ +-------+


@@ -2,29 +2,26 @@
iSCSI Gateway Requirements
==========================
To implement the Ceph iSCSI gateway there are a few requirements. It is recommended
to use two to four iSCSI gateway nodes for a highly available Ceph iSCSI gateway
solution.
It is recommended to provision two to four iSCSI gateway nodes to
realize a highly available Ceph iSCSI gateway solution.
For hardware recommendations, see :ref:`hardware-recommendations` for more
details.
For hardware recommendations, see :ref:`hardware-recommendations`.
.. note::
On the iSCSI gateway nodes, the memory footprint of the RBD images
can grow to a large size. Plan memory requirements accordingly based
off the number RBD images mapped.
On iSCSI gateway nodes the memory footprint is a function of
the RBD images mapped and can grow to be large. Plan memory
requirements accordingly based on the number of RBD images to be mapped.
There are no specific iSCSI gateway options for the Ceph Monitors or
OSDs, but it is important to lower the default timers for detecting
down OSDs to reduce the possibility of initiator timeouts. The following
configuration options are suggested for each OSD node in the storage
cluster::
OSDs, but it is important to lower the default heartbeat interval for
detecting down OSDs to reduce the possibility of initiator timeouts.
The following configuration options are suggested::
[osd]
osd heartbeat grace = 20
osd heartbeat interval = 5
- Online Updating Using the Ceph Monitor
- Updating Running State From a Ceph Monitor Node
::
@@ -32,10 +29,10 @@ cluster::
::
ceph tell osd.0 config set osd_heartbeat_grace 20
ceph tell osd.0 config set osd_heartbeat_interval 5
ceph tell osd.* config set osd_heartbeat_grace 20
ceph tell osd.* config set osd_heartbeat_interval 5
- Online Updating on the OSD Node
- Updating Running State On Each OSD Node
::
@@ -47,4 +44,8 @@ cluster::
ceph daemon osd.0 config set osd_heartbeat_interval 5
For more details on setting Ceph's configuration options, see
:ref:`configuring-ceph`.
:ref:`configuring-ceph`. Be sure to persist these settings in
``/etc/ceph/ceph.conf`` or, on Mimic and later releases, in the
centralized config store.
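
For example, the same values can be stored in the centralized config store
on Mimic and later (a sketch; pick one mechanism and apply it consistently)::

    ceph config set osd osd_heartbeat_grace 20
    ceph config set osd osd_heartbeat_interval 5
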


@@ -3,9 +3,9 @@ Configuring the iSCSI Target using Ansible
==========================================
The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client
node. The Ceph iSCSI gateway can be a standalone node or be colocated on
a Ceph Object Store Disk (OSD) node. Completing the following steps will
install, and configure the Ceph iSCSI gateway for basic operation.
node. The Ceph iSCSI gateway can be provisioned on a dedicated node
or be colocated on a Ceph Object Store Disk (OSD) node. The following steps will
install and configure the Ceph iSCSI gateway for basic operation.
**Requirements:**
@@ -15,7 +15,7 @@ install, and configure the Ceph iSCSI gateway for basic operation.
- The ``ceph-iscsi`` package installed on all the iSCSI gateway nodes
**Installing:**
**Installation:**
#. On the Ansible installer node, which could be either the administration node
or a dedicated deployment node, perform the following steps:
@@ -38,7 +38,7 @@ install, and configure the Ceph iSCSI gateway for basic operation.
If co-locating the iSCSI gateway with an OSD node, then add the OSD node to the
``[iscsigws]`` section.
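
For illustration only (the host names below are placeholders), the Ansible
inventory entries might look like::

    [iscsigws]
    ceph-igw-1
    ceph-igw-2
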
**Configuring:**
**Configuration:**
The ``ceph-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
directory called ``iscsigws.yml.sample``. Create a copy of this sample file named
@@ -94,9 +94,9 @@ advanced variables.
| | nodes have access. |
+--------------------------------------+--------------------------------------+
**Deploying:**
**Deployment:**
On the Ansible installer node, perform the following steps.
Perform the following steps on the Ansible installer node.
#. As ``root``, execute the Ansible playbook:
@@ -124,10 +124,10 @@ On the Ansible installer node, perform the following steps.
.. important::
Attempting to use the ``targetcli`` tool to change the configuration will
result in the following issues, such as ALUA misconfiguration and path failover
problems. There is the potential to corrupt data, to have mismatched
cause problems including ALUA misconfiguration and path failover
issues. There is the potential to corrupt data, to have mismatched
configuration across iSCSI gateways, and to have mismatched WWN information,
which will lead to client multipath problems.
leading to client multipath problems.
**Service Management:**


@@ -2,10 +2,13 @@
Configuring the iSCSI Target using the Command Line Interface
=============================================================
The Ceph iSCSI gateway is the iSCSI target node and also a Ceph client
node. The Ceph iSCSI gateway can be a standalone node or be colocated on
a Ceph Object Store Disk (OSD) node. Completing the following steps will
install, and configure the Ceph iSCSI gateway for basic operation.
The Ceph iSCSI gateway is both an iSCSI target and a Ceph client;
think of it as a "translator" between Ceph's RBD interface
and the iSCSI standard. The Ceph iSCSI gateway can run on a
standalone node or be colocated with other daemons, e.g. on
a Ceph Object Store Disk (OSD) node. When co-locating, ensure
that sufficient CPU and memory are available to share.
The following steps install and configure the Ceph iSCSI gateway for basic operation.
**Requirements:**
@@ -120,7 +123,7 @@ For rpm based instructions execute the following commands:
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.
# To support the API, the bear minimum settings are:
# To support the API, the bare minimum settings are:
api_secure = false
# Additional API configuration options are as follows, defaults shown.
@@ -130,8 +133,8 @@ For rpm based instructions execute the following commands:
# trusted_ip_list = 192.168.0.10,192.168.0.11
.. note::
trusted_ip_list is a list of IP addresses on each iscsi gateway that
will be used for management operations like target creation, lun
trusted_ip_list is a list of IP addresses on each iSCSI gateway that
will be used for management operations like target creation, LUN
exporting, etc. The IP can be the same one that will be used for iSCSI
data, like READ/WRITE commands to/from the RBD image, but using
separate IPs is recommended.
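
As a minimal sketch of how these settings fit together (the IP addresses are
placeholders and the option names reflect ceph-iscsi 3.x; consult the sample
file shipped with your release), ``/etc/ceph/iscsi-gateway.cfg`` might
contain::

    [config]
    cluster_name = ceph
    gateway_keyring = ceph.client.admin.keyring
    api_secure = false
    trusted_ip_list = 192.168.0.10,192.168.0.11
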
@@ -160,7 +163,7 @@ For rpm based instructions execute the following commands:
gwcli will create and configure the iSCSI target and RBD images and copy the
configuration across the gateways set up in the last section. Lower level
tools, like targetcli and rbd, can be used to query the local configuration,
tools including targetcli and rbd can be used to query the local configuration,
but should not be used to modify it. This next section will demonstrate how
to create an iSCSI target and export an RBD image as LUN 0.
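
The rough shape of that workflow, sketched with placeholder names (the IQN,
pool, image, and size are illustrative, and the prompt layout varies by
ceph-iscsi release), is::

    # gwcli
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=10G
    /disks> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
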


@@ -6,11 +6,11 @@
The most frequent Ceph Block Device use case involves providing block device
images to virtual machines. For example, a user may create a "golden" image
with an OS and any relevant software in an ideal configuration. Then, the user
takes a snapshot of the image. Finally, the user clones the snapshot (usually
with an OS and any relevant software in an ideal configuration. Then the user
takes a snapshot of the image. Finally the user clones the snapshot (potentially
many times). See `Snapshots`_ for details. The ability to make copy-on-write
clones of a snapshot means that Ceph can provision block device images to
virtual machines quickly, because the client doesn't have to download an entire
virtual machines quickly, because the client doesn't have to download the entire
image each time it spins up a new virtual machine.
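
A minimal sketch of that golden-image flow with the ``rbd`` CLI (the pool,
image, and snapshot names here are invented for illustration)::

    # snapshot the prepared golden image and protect it so it can be cloned
    rbd snap create rbd/golden@base
    rbd snap protect rbd/golden@base

    # each clone is a copy-on-write child of the protected snapshot
    rbd clone rbd/golden@base rbd/vm-disk-1
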
@@ -27,7 +27,7 @@ image each time it spins up a new virtual machine.
+------------------------+ +------------------------+
Ceph Block Devices can integrate with the QEMU virtual machine. For details on
Ceph Block Devices attach to QEMU virtual machines. For details on
QEMU, see `QEMU Open Source Processor Emulator`_. For QEMU documentation, see
`QEMU Manual`_. For installation details, see `Installation`_.
@@ -38,10 +38,10 @@ QEMU, see `QEMU Open Source Processor Emulator`_. For QEMU documentation, see
Usage
=====
The QEMU command line expects you to specify the pool name and image name. You
may also specify a snapshot name.
The QEMU command line expects you to specify the Ceph pool and image name. You
may also specify a snapshot.
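
For example (the pool, image, and snapshot names are placeholders), the
pool/image pair, optionally with a snapshot, is passed using QEMU's ``rbd:``
syntax::

    qemu-img info rbd:rbd/vm-disk-1
    qemu-img info rbd:rbd/golden@base
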
QEMU will assume that the Ceph configuration file resides in the default
QEMU will assume that Ceph configuration resides in the default
location (e.g., ``/etc/ceph/$cluster.conf``) and that you are executing
commands as the default ``client.admin`` user unless you expressly specify
another Ceph configuration file path or another user. When specifying a user,
@@ -116,9 +116,9 @@ Running QEMU with RBD
QEMU can pass a block device from the host on to a guest, but since
QEMU 0.15, there's no need to map an image as a block device on
the host. Instead, QEMU can access an image as a virtual block
device directly via ``librbd``. This performs better because it avoids
an additional context switch, and can take advantage of `RBD caching`_.
the host. Instead, QEMU attaches an image as a virtual block
device directly via ``librbd``. This strategy increases performance
by avoiding context switches and taking advantage of `RBD caching`_.
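
A hedged sketch of attaching an image through ``librbd`` (the memory size,
pool, and image name are placeholders)::

    qemu -m 1024 -drive format=raw,file=rbd:rbd/vm-disk-1,cache=writeback

The ``cache`` setting interacts with `RBD caching`_, so choose it to match
the caching behavior you want from ``librbd``.
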
You can use ``qemu-img`` to convert existing virtual machine images to Ceph
block device images. For example, if you have a qcow2 image, you could run::
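
    # A hedged sketch with placeholder names for the source image, pool, and
    # destination RBD image:
    qemu-img convert -f qcow2 -O raw debian.qcow2 rbd:rbd/debian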