===============
 Ceph Glossary
===============

.. glossary::


    Application
        More properly called a :term:`client`, an application is any
        program external to Ceph that uses a Ceph Cluster to store and
        replicate data.

    :ref:`BlueStore<rados_config_storage_devices_bluestore>`
        OSD BlueStore is a storage back end used by OSD daemons, and
        was designed specifically for use with Ceph. BlueStore was
        introduced in the Ceph Kraken release. The Luminous release of
        Ceph promoted BlueStore to the default OSD back end,
        supplanting FileStore. As of the Reef release, FileStore is no
        longer available as a storage back end.

        BlueStore stores objects directly on raw block devices or
        partitions, and does not interact with mounted file systems.
        BlueStore uses the RocksDB key/value database to map object
        names to block locations on disk.

    Bucket
        In the context of :term:`RGW`, a bucket is a group of objects.
        In a filesystem-based analogy in which objects are the
        counterpart of files, buckets are the counterpart of
        directories. :ref:`Multisite sync
        policies<radosgw-multisite-sync-policy>` can be set on buckets
        to provide fine-grained control of data movement from one zone
        to another zone.

        The concept of the bucket has been taken from AWS S3. See also
        `the AWS S3 page on creating buckets <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html>`_
        and `the AWS S3 'Buckets Overview' page <https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html>`_.

        OpenStack Swift uses the term "containers" for what RGW and AWS
        call "buckets". See `the OpenStack Storage API overview page
        <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.
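
        Because RGW exposes the S3 API, buckets can be created and
        listed with any standard S3 client. The following is a minimal
        sketch using the ``boto3`` Python library; the endpoint URL,
        access key, secret key, and bucket name are placeholders chosen
        for the example, not values defined anywhere in Ceph:

        .. code-block:: python

            import boto3

            # Point an ordinary S3 client at an RGW endpoint
            # (placeholder URL and credentials).
            s3 = boto3.client(
                "s3",
                endpoint_url="http://rgw.example.com:8080",
                aws_access_key_id="ACCESS_KEY",
                aws_secret_access_key="SECRET_KEY",
            )

            s3.create_bucket(Bucket="my-bucket")   # create a bucket in RGW
            names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
            print(names)                            # e.g. ['my-bucket']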

    Ceph
        Ceph is a distributed network storage and file system with
        distributed metadata management and POSIX semantics.

    `ceph-ansible <https://docs.ceph.com/projects/ceph-ansible/en/latest/index.html>`_
        A GitHub repository, supported from the Jewel release to the
        Quincy release, that facilitates the installation of a Ceph
        cluster.

    Ceph Block Device
        Also called "RADOS Block Device" and :term:`RBD`. A software
        instrument that orchestrates the storage of block-based data in
        Ceph. Ceph Block Device splits block-based application data
        into "chunks". RADOS stores these chunks as objects. Ceph Block
        Device orchestrates the storage of those objects across the
        storage cluster.
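
        As a rough sketch of how this looks from an application's point
        of view, the ``rbd`` Python bindings can create an image in a
        pool and write to it. The configuration file path, pool name,
        and image name below are illustrative assumptions:

        .. code-block:: python

            import rados
            import rbd

            # Connect to the cluster and open an I/O context on a pool
            # (conffile path and pool name are assumptions).
            cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
            cluster.connect()
            ioctx = cluster.open_ioctx("rbd")

            # Create a 4 GiB image, then write 4 KiB of data at offset 0.
            rbd.RBD().create(ioctx, "myimage", 4 * 1024**3)
            with rbd.Image(ioctx, "myimage") as image:
                image.write(b"x" * 4096, 0)

            ioctx.close()
            cluster.shutdown()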

    Ceph Block Storage
        One of the three kinds of storage supported by Ceph (the other
        two are object storage and file storage). Ceph Block Storage is
        the block storage "product", which refers to block-storage
        related services and capabilities when used in conjunction with
        the collection of (1) ``librbd`` (a library that provides
        file-like access to :term:`RBD` images), (2) a hypervisor such
        as QEMU or Xen, and (3) a hypervisor abstraction layer such as
        ``libvirt``.

    :ref:`Ceph Client <architecture_ceph_clients>`
        Any of the Ceph components that can access a Ceph Storage
        Cluster. This includes the Ceph Object Gateway, the Ceph Block
        Device, the Ceph File System, and their corresponding
        libraries. It also includes kernel modules and FUSE clients
        (Filesystems in Userspace).

    Ceph Client Libraries
        The collection of libraries that can be used to interact with
        components of the Ceph Cluster.

    Ceph Cluster Map
        See :term:`Cluster Map`.

    Ceph Dashboard
        :ref:`The Ceph Dashboard<mgr-dashboard>` is a built-in
        web-based Ceph management and monitoring application through
        which you can inspect and administer various resources within
        the cluster. It is implemented as a :ref:`ceph-manager-daemon`
        module.

    Ceph File System
        See :term:`CephFS`.

    :ref:`CephFS<ceph-file-system>`
        The **Ceph F**\ile **S**\ystem, or CephFS, is a POSIX-compliant
        file system built on top of Ceph's distributed object store,
        RADOS. See :ref:`CephFS Architecture <arch-cephfs>` for more
        details.

    :ref:`ceph-fuse <man-ceph-fuse>`
        :ref:`ceph-fuse <man-ceph-fuse>` is a FUSE ("**F**\ilesystem in
        **USE**\rspace") client for CephFS. ceph-fuse mounts a CephFS
        at a specified mount point.

    Ceph Interim Release
        See :term:`Releases`.

    Ceph Kernel Modules
        The collection of kernel modules that can be used to interact
        with the Ceph Cluster (for example: ``ceph.ko``, ``rbd.ko``).

    :ref:`Ceph Manager<ceph-manager-daemon>`
        The Ceph manager daemon (ceph-mgr) runs alongside monitor
        daemons to provide monitoring and interfacing to external
        monitoring and management systems. Since the Luminous release
        (12.x), no Ceph cluster functions properly unless it contains a
        running ceph-mgr daemon.

    Ceph Manager Dashboard
        See :term:`Ceph Dashboard`.

    Ceph Metadata Server
        See :term:`MDS`.

    Ceph Monitor
        A daemon that maintains a map of the state of the cluster. This
        "cluster state" includes the monitor map, the manager map, the
        OSD map, and the CRUSH map. A Ceph cluster must contain a
        minimum of three running monitors in order to be both redundant
        and highly available. Ceph monitors and the nodes on which they
        run are often referred to as "mons". See :ref:`Monitor Config
        Reference <monitor-config-reference>`.

    Ceph Node
        A Ceph node is a unit of the Ceph Cluster that communicates with
        other nodes in the Ceph Cluster in order to replicate and
        redistribute data. All of the nodes together are called the
        :term:`Ceph Storage Cluster`. Ceph nodes include :term:`OSD`\s,
        :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s, and
        :term:`MDS`\es. The term "node" is usually equivalent to "host"
        in the Ceph documentation. If you have a running Ceph Cluster,
        you can list all of the nodes in it by running the command
        ``ceph node ls all``.

    :ref:`Ceph Object Gateway<object-gateway>`
        An object storage interface built on top of librados. Ceph
        Object Gateway provides a RESTful gateway between applications
        and Ceph storage clusters.

    Ceph Object Storage
        See :term:`Ceph Object Store`.

    Ceph Object Store
        A Ceph Object Store consists of a :term:`Ceph Storage Cluster`
        and a :term:`Ceph Object Gateway` (RGW).

    :ref:`Ceph OSD<rados_configuration_storage-devices_ceph_osd>`
        Ceph **O**\bject **S**\torage **D**\aemon. The Ceph OSD
        software, which interacts with logical disks (:term:`OSD`).
        Around 2013, there was an attempt by "research and industry"
        (Sage's own words) to insist on using the term "OSD" to mean
        only "Object Storage Device", but the Ceph community has always
        persisted in using the term to mean "Object Storage Daemon". No
        less an authority than Sage Weil himself confirmed in November
        2022 that "Daemon is more accurate for how Ceph is built"
        (private correspondence between Zac Dover and Sage Weil, 07 Nov
        2022).

    Ceph OSD Daemon
        See :term:`Ceph OSD`.

    Ceph OSD Daemons
        See :term:`Ceph OSD`.

    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `https://github.com/ceph`_.

    Ceph Point Release
        See :term:`Releases`.

    Ceph Project
        The aggregate term for the people, software, mission and
        infrastructure of Ceph.

    Ceph Release
        See :term:`Releases`.

    Ceph Release Candidate
        See :term:`Releases`.

    Ceph Stable Release
        See :term:`Releases`.

    Ceph Stack
        A collection of two or more components of Ceph.

    :ref:`Ceph Storage Cluster<arch-ceph-storage-cluster>`
        The collection of :term:`Ceph Monitor`\s, :term:`Ceph
        Manager`\s, :term:`Ceph Metadata Server`\s, and :term:`OSD`\s
        that work together to store and replicate data for use by
        applications, Ceph Users, and :term:`Ceph Client`\s. Ceph
        Storage Clusters receive data from :term:`Ceph Client`\s.

    CephX
        The Ceph authentication protocol. CephX authenticates users and
        daemons. CephX operates like Kerberos, but it has no single
        point of failure. See the :ref:`High-availability
        Authentication section<arch_high_availability_authentication>`
        of the Architecture document and the :ref:`CephX Configuration
        Reference<rados-cephx-config-ref>`.

    Client
        A client is any program external to Ceph that uses a Ceph
        Cluster to store and replicate data.

    Cloud Platforms
    Cloud Stacks
        Third-party cloud provisioning platforms such as OpenStack,
        CloudStack, OpenNebula, and Proxmox VE.

    Cluster Map
        The set of maps consisting of the monitor map, OSD map, PG map,
        MDS map, and CRUSH map, which together report the state of the
        Ceph cluster. See :ref:`the "Cluster Map" section of the
        Architecture document<architecture_cluster_map>` for details.

    Crimson
        A next-generation OSD architecture whose main aim is the
        reduction of latency costs incurred due to cross-core
        communications. Crimson's redesign of the OSD reduces lock
        contention by reducing communication between shards in the data
        path. Crimson improves upon the performance of classic Ceph
        OSDs by eliminating reliance on thread pools. See `Crimson:
        Next-generation Ceph OSD for Multi-core Scalability
        <https://ceph.io/en/news/blog/2023/crimson-multi-core-scalability/>`_.
        See the :ref:`Crimson developer
        documentation<crimson_dev_doc>`.

    CRUSH
        **C**\ontrolled **R**\eplication **U**\nder **S**\calable
        **H**\ashing. The algorithm that Ceph uses to compute object
        storage locations. See `CRUSH: Controlled, Scalable,
        Decentralized Placement of Replicated Data
        <https://ceph.com/assets/pdfs/weil-crush-sc06.pdf>`_.

    CRUSH rule
        The CRUSH data placement rule that applies to a particular
        pool or pools.

    DAS
        **D**\irect-\ **A**\ttached **S**\torage. Storage that is
        attached directly to the computer accessing it, without passing
        through a network. Contrast with NAS and SAN.

    :ref:`Dashboard<mgr-dashboard>`
        A built-in web-based Ceph management and monitoring application
        to administer various aspects and objects of the cluster. The
        dashboard is implemented as a Ceph Manager module. See
        :ref:`mgr-dashboard` for more details.

    Dashboard Module
        Another name for :term:`Dashboard`.

    Dashboard Plugin
        See :term:`Dashboard`.

    Flapping OSD
        An OSD that is repeatedly marked ``up`` and then ``down`` in
        rapid succession. See :ref:`rados_tshooting_flapping_osd`.

    FQDN
        **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
        that is applied to a node in a network and that specifies the
        node's exact location in the tree hierarchy of the DNS.

        In the context of Ceph cluster administration, FQDNs are often
        applied to hosts. In this documentation, the term "FQDN" is
        used mostly to distinguish between FQDNs and relatively simpler
        hostnames, which do not specify the exact location of the host
        in the tree hierarchy of the DNS but merely name the host.

    Host
        Any single machine or server in a Ceph Cluster. See :term:`Ceph
        Node`.

    Hybrid OSD
        Refers to an OSD that has both HDD and SSD drives.

    librados
        An API that can be used to create a custom interface to a Ceph
        storage cluster. ``librados`` makes it possible to interact
        with Ceph Monitors and with OSDs. See :ref:`Introduction to
        librados <librados-intro>`. See :ref:`librados (Python)
        <librados-python>`.
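
        As a minimal sketch of what the Python ``rados`` bindings look
        like in use, the following connects to a cluster, writes an
        object into a pool, and reads it back. The configuration file
        path, pool name, and object name are assumptions chosen for the
        example:

        .. code-block:: python

            import rados

            # Connect using a cluster configuration file (path is an assumption).
            cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
            cluster.connect()

            # Open an I/O context on an existing pool (name is an assumption),
            # then write an object and read it back.
            ioctx = cluster.open_ioctx("mypool")
            ioctx.write_full("hello-object", b"hello, RADOS")
            print(ioctx.read("hello-object"))   # b'hello, RADOS'

            ioctx.close()
            cluster.shutdown()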

    LVM tags
        **L**\ogical **V**\olume **M**\anager tags. Extensible metadata
        for LVM volumes and groups. They are used to store
        Ceph-specific information about devices and their relationship
        with OSDs.

    MDS
        The Ceph **M**\eta\ **D**\ata **S**\erver daemon. Also referred
        to as "ceph-mds". The Ceph metadata server daemon must be
        running in any Ceph cluster that runs the CephFS file system.
        The MDS stores all filesystem metadata. :term:`Client`\s work
        together with either a single MDS or a group of MDSes to
        maintain a distributed metadata cache that is required by
        CephFS.

        See :ref:`Deploying Metadata Servers<cephfs_add_remote_mds>`.

        See the :ref:`ceph-mds man page<ceph_mds_man>`.

    MGR
        The Ceph manager software, which collects all the state from
        the whole cluster in one place.

    :ref:`MON<arch_monitor>`
        The Ceph monitor software.

    Monitor Store
        The persistent storage that is used by the Monitor. This
        includes the Monitor's RocksDB and all related files in
        ``/var/lib/ceph``.

    Node
        See :term:`Ceph Node`.

    Object Storage
        Object storage is one of three kinds of storage relevant to
        Ceph. The other two kinds of storage relevant to Ceph are file
        storage and block storage. Object storage is the category of
        storage most fundamental to Ceph.

    Object Storage Device
        See :term:`OSD`.

    OMAP
        "object map". A key-value store (a database) that is used to
        reduce the time it takes to read data from and to write to the
        Ceph cluster. RGW bucket indexes are stored as OMAPs.
        Erasure-coded pools cannot store RADOS OMAP data structures.

        Run the command ``ceph osd df`` to see your OMAPs.

        See Eleanor Cawthon's 2012 paper `A Distributed Key-Value Store
        using Ceph
        <https://ceph.io/assets/pdfs/CawthonKeyValueStore.pdf>`_ (17
        pages).
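
        As a rough illustration of OMAP from a client's perspective,
        the Python ``rados`` bindings can attach key-value pairs to a
        RADOS object and read them back. The pool, object, and key
        names below are assumptions made for the example:

        .. code-block:: python

            import rados

            cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path is an assumption
            cluster.connect()
            ioctx = cluster.open_ioctx("mypool")

            # Write a small key-value map into the object's OMAP.
            with rados.WriteOpCtx() as write_op:
                ioctx.set_omap(write_op, ("color", "size"), (b"blue", b"large"))
                ioctx.operate_write_op(write_op, "my-object")

            # Read the OMAP entries back.
            with rados.ReadOpCtx() as read_op:
                entries, _ = ioctx.get_omap_vals(read_op, "", "", 10)
                ioctx.operate_read_op(read_op, "my-object")
                print(dict(entries))

            ioctx.close()
            cluster.shutdown()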

    OpenStack Swift
        In the context of Ceph, OpenStack Swift is one of the two APIs
        supported by the Ceph Object Store. The other API supported by
        the Ceph Object Store is S3.

        See `the OpenStack Storage API overview page
        <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.

    OSD
        Probably :term:`Ceph OSD`, but not necessarily. Sometimes
        (especially in older correspondence, and especially in
        documentation that is not written specifically for Ceph), "OSD"
        means "**O**\bject **S**\torage **D**\evice", which refers to a
        physical or logical storage unit (for example: LUN). The Ceph
        community has always used the term "OSD" to refer to
        :term:`Ceph OSD Daemon` despite an industry push in the
        mid-2010s to insist that "OSD" should refer to "Object Storage
        Device", so it is important to know which meaning is intended.

    OSD, flapping
        See :term:`Flapping OSD`.

    OSD FSID
        The OSD FSID is a unique identifier for an OSD. It is found in
        the OSD path in a file called ``osd_fsid``. The term ``FSID``
        is used interchangeably with ``UUID``.

    OSD ID
        The OSD ID is an integer that is unique to each OSD. Each OSD
        ID is generated by the monitors during the creation of its
        associated OSD.

    OSD UUID
        The OSD UUID is the unique identifier of an OSD. This term is
        used interchangeably with ``FSID``.

    Period
        In the context of :term:`RGW`, a period is the configuration
        state of the :term:`Realm`. The period stores the configuration
        state of a multi-site configuration. When the period is
        updated, its "epoch" changes.

    Placement Groups (PGs)
        Placement groups (PGs) are subsets of each logical Ceph pool.
        Placement groups perform the function of placing objects (as a
        group) into OSDs. Ceph manages data internally at
        placement-group granularity: this scales better than would
        managing individual (and therefore more numerous) RADOS
        objects. A cluster that has a larger number of placement groups
        (for example, 100 per OSD) is better balanced than an otherwise
        identical cluster with a smaller number of placement groups.

        Ceph's internal RADOS objects are each mapped to a specific
        placement group, and each placement group belongs to exactly
        one Ceph pool.
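
        The idea can be illustrated with a deliberately simplified
        sketch: an object name is hashed to one of the pool's placement
        groups, and it is the placement group (not the individual
        object) that is then mapped by CRUSH to a set of OSDs. This is
        not Ceph's actual hash function or CRUSH implementation, only
        an illustration of the grouping step:

        .. code-block:: python

            import zlib

            def object_to_pg(object_name: str, pg_num: int) -> int:
                """Map an object name to one of pg_num placement groups
                (simplified illustration, not Ceph's real algorithm)."""
                return zlib.crc32(object_name.encode()) % pg_num

            pg_num = 128  # example pg_num for a hypothetical pool
            for name in ("img.0001", "img.0002", "db.backup"):
                print(name, "-> PG", object_to_pg(name, pg_num))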

    PLP
        **P**\ower **L**\oss **P**\rotection. A technology that
        protects the data of solid-state drives by using capacitors to
        extend the amount of time available for transferring data from
        the DRAM cache to the SSD's permanent memory. Consumer-grade
        SSDs are rarely equipped with PLP.

    :ref:`Pool<rados_pools>`
        A pool is a logical partition used to store objects.
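
        As a small sketch of how pools appear through the ``rados``
        Python bindings, a client can list the cluster's pools and
        create a new one. The configuration file path and pool name are
        assumptions made for the example:

        .. code-block:: python

            import rados

            cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # path is an assumption
            cluster.connect()

            print(cluster.list_pools())          # e.g. ['.mgr', 'rbd', ...]
            if not cluster.pool_exists("mypool"):
                cluster.create_pool("mypool")    # pool name is an assumption

            cluster.shutdown()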

    Pools
        See :term:`pool`.

    :ref:`Primary Affinity <rados_ops_primary_affinity>`
        The characteristic of an OSD that governs the likelihood that
        a given OSD will be selected as the primary OSD (or "lead
        OSD") in an acting set. Primary affinity was introduced in
        Firefly (v. 0.80). See :ref:`Primary Affinity
        <rados_ops_primary_affinity>`.

    :ref:`Prometheus <mgr-prometheus>`
        An open-source monitoring and alerting toolkit. Ceph offers a
        :ref:`"Prometheus module" <mgr-prometheus>`, which provides a
        Prometheus exporter that passes performance counters from a
        collection point in ``ceph-mgr`` to Prometheus.

    Quorum
        Quorum is the state that exists when a majority of the
        :ref:`Monitors<arch_monitor>` in the cluster are ``up``. A
        minimum of three :ref:`Monitors<arch_monitor>` must exist in
        the cluster in order for Quorum to be possible.

    RADOS
        **R**\eliable **A**\utonomic **D**\istributed **O**\bject
        **S**\tore. RADOS is the object store that provides a scalable
        service for variably-sized objects. The RADOS object store is
        the core component of a Ceph cluster. `This blog post from 2009
        <https://ceph.io/en/news/blog/2009/the-rados-distributed-object-store/>`_
        provides a beginner's introduction to RADOS. Readers interested
        in a deeper understanding of RADOS are directed to `RADOS: A
        Scalable, Reliable Storage Service for Petabyte-scale Storage
        Clusters <https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf>`_.

    RADOS Cluster
        A proper subset of the Ceph Cluster consisting of
        :term:`OSD`\s, :term:`Ceph Monitor`\s, and :term:`Ceph
        Manager`\s.

    RADOS Gateway
        See :term:`RGW`.

    RBD
        **R**\ADOS **B**\lock **D**\evice. See :term:`Ceph Block
        Device`.

    :ref:`Realm<rgw-realms>`
        In the context of RADOS Gateway (RGW), a realm is a globally
        unique namespace that consists of one or more zonegroups.

    Releases

        Ceph Interim Release
            A version of Ceph that has not yet been put through quality
            assurance testing. May contain new features.

        Ceph Point Release
            Any ad hoc release that includes only bug fixes and
            security fixes.

        Ceph Release
            Any distinct numbered version of Ceph.

        Ceph Release Candidate
            A major version of Ceph that has undergone initial quality
            assurance testing and is ready for beta testers.

        Ceph Stable Release
            A major version of Ceph where all features from the
            preceding interim releases have been put through quality
            assurance testing successfully.

    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data
        (MON+OSD). See also :term:`RADOS`.

    :ref:`RGW<object-gateway>`
        **R**\ADOS **G**\ate\ **w**\ay.

        Also called "Ceph Object Gateway". The component of Ceph that
        provides a gateway to both the Amazon S3 RESTful API and the
        OpenStack Swift API.

    S3
        In the context of Ceph, S3 is one of the two APIs supported by
        the Ceph Object Store. The other API supported by the Ceph
        Object Store is OpenStack Swift.

        See `the Amazon S3 overview page
        <https://aws.amazon.com/s3/>`_.

    scrubs
        The processes by which Ceph ensures data integrity. During the
        process of scrubbing, Ceph generates a catalog of all objects
        in a placement group, then ensures that none of the objects are
        missing or mismatched by comparing each primary object against
        its replicas, which are stored across other OSDs. Any PG that
        is determined to have a copy of an object that is different
        from the other copies or that is missing entirely is marked
        "inconsistent".

        There are two kinds of scrubbing: light scrubbing and deep
        scrubbing (also called "shallow scrubbing" and "deep
        scrubbing", respectively). Light scrubbing is performed daily
        and does nothing more than confirm that a given object exists
        and that its metadata is correct. Deep scrubbing is performed
        weekly and reads the data and uses checksums to ensure data
        integrity.

        See :ref:`Scrubbing <rados_config_scrubbing>` in the RADOS OSD
        Configuration Reference Guide and page 141 of *Mastering Ceph,
        second edition* (Fisk, Nick. 2019).

    secrets
        Secrets are credentials used to perform digital authentication
        whenever privileged users must access systems that require
        authentication. Secrets can be passwords, API keys, tokens, SSH
        keys, private certificates, or encryption keys.

    SDS
        **S**\oftware-**d**\efined **S**\torage.

    systemd oneshot
        A systemd ``type`` in which the command defined in
        ``ExecStart`` exits upon completion (it is not intended to
        daemonize).

    Swift
        See :term:`OpenStack Swift`.

    Teuthology
        The collection of software that performs scripted tests on
        Ceph.

    User
        An individual or a system actor (for example, an application)
        that uses Ceph clients to interact with the :term:`Ceph Storage
        Cluster`. See :ref:`User<rados-ops-user>` and :ref:`User
        Management<user-management>`.

    Zone
        In the context of :term:`RGW`, a zone is a logical group that
        consists of one or more :term:`RGW` instances. A zone's
        configuration state is stored in the :term:`period`. See
        :ref:`Zones<radosgw-zones>`.

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map