===============
 Ceph Glossary
===============

.. glossary::

   Application
      More properly called a :term:`client`, an application is any
      program external to Ceph that uses a Ceph Cluster to store and
      replicate data.
   :ref:`BlueStore<rados_config_storage_devices_bluestore>`
      OSD BlueStore is a storage back end used by OSD daemons, and
      was designed specifically for use with Ceph. BlueStore was
      introduced in the Ceph Kraken release. The Luminous release of
      Ceph promoted BlueStore to the default OSD back end,
      supplanting FileStore. As of the Reef release, FileStore is no
      longer available as a storage back end.

      BlueStore stores objects directly on Ceph block devices without
      a mounted file system.
   Bucket
      In the context of :term:`RGW`, a bucket is a group of objects.
      In a filesystem-based analogy in which objects are the
      counterpart of files, buckets are the counterpart of
      directories. :ref:`Multisite sync
      policies<radosgw-multisite-sync-policy>` can be set on buckets
      to provide fine-grained control of data movement from one zone
      to another zone.

      The concept of the bucket has been taken from AWS S3. See also
      `the AWS S3 page on creating buckets <https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-buckets-s3.html>`_
      and `the AWS S3 'Buckets Overview' page <https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html>`_.

      OpenStack Swift uses the term "containers" for what RGW and AWS
      call "buckets". See `the OpenStack Storage API overview page
      <https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html>`_.
   Ceph
      Ceph is a distributed network storage and file system with
      distributed metadata management and POSIX semantics.
   Ceph Block Device
      Also called "RADOS Block Device" and :term:`RBD`. A software
      instrument that orchestrates the storage of block-based data in
      Ceph. Ceph Block Device splits block-based application data
      into "chunks". RADOS stores these chunks as objects. Ceph Block
      Device orchestrates the storage of those objects across the
      storage cluster.
   Ceph Block Storage
      One of the three kinds of storage supported by Ceph (the other
      two are object storage and file storage). Ceph Block Storage is
      the block storage "product": the block-storage-related services
      and capabilities that are available when Ceph is used in
      conjunction with the collection of (1) ``librbd`` (a library
      that provides file-like access to :term:`RBD` images), (2) a
      hypervisor such as QEMU or Xen, and (3) a hypervisor
      abstraction layer such as ``libvirt``.
   :ref:`Ceph Client <architecture_ceph_clients>`
      Any of the Ceph components that can access a Ceph Storage
      Cluster. This includes the Ceph Object Gateway, the Ceph Block
      Device, the Ceph File System, and their corresponding
      libraries. It also includes kernel modules and FUSE (Filesystem
      in Userspace) clients.
   Ceph Client Libraries
      The collection of libraries that can be used to interact with
      components of the Ceph Cluster.
   Ceph Cluster Map
      See :term:`Cluster Map`.
   Ceph Dashboard
      :ref:`The Ceph Dashboard<mgr-dashboard>` is a built-in
      web-based Ceph management and monitoring application through
      which you can inspect and administer various resources within
      the cluster. It is implemented as a :ref:`ceph-manager-daemon`
      module.
   Ceph File System
      See :term:`CephFS`.
   :ref:`CephFS<ceph-file-system>`
      The **Ceph F**\ile **S**\ystem, or CephFS, is a
      POSIX-compliant file system built on top of Ceph’s distributed
      object store, RADOS. See :ref:`CephFS Architecture
      <arch-cephfs>` for more details.
   Ceph Interim Release
      See :term:`Releases`.
   Ceph Kernel Modules
      The collection of kernel modules that can be used to interact
      with the Ceph Cluster (for example: ``ceph.ko``, ``rbd.ko``).
   :ref:`Ceph Manager<ceph-manager-daemon>`
      The Ceph manager daemon (ceph-mgr) is a daemon that runs
      alongside monitor daemons to provide monitoring and interfacing
      to external monitoring and management systems. Since the
      Luminous release (12.x), no Ceph cluster functions properly
      unless it contains a running ceph-mgr daemon.
   Ceph Manager Dashboard
      See :term:`Ceph Dashboard`.
   Ceph Metadata Server
      See :term:`MDS`.
   Ceph Monitor
      A daemon that maintains a map of the state of the cluster. This
      "cluster state" includes the monitor map, the manager map, the
      OSD map, and the CRUSH map. A Ceph cluster must contain a
      minimum of three running monitors in order to be both redundant
      and highly available. Ceph monitors and the nodes on which they
      run are often referred to as "mons". See :ref:`Monitor Config
      Reference <monitor-config-reference>`.
   Ceph Node
      A Ceph node is a unit of the Ceph Cluster that communicates
      with other nodes in the Ceph Cluster in order to replicate and
      redistribute data. All of the nodes together are called the
      :term:`Ceph Storage Cluster`. Ceph nodes include :term:`OSD`\s,
      :term:`Ceph Monitor`\s, :term:`Ceph Manager`\s, and
      :term:`MDS`\es. The term "node" is usually equivalent to "host"
      in the Ceph documentation. If you have a running Ceph Cluster,
      you can list all of the nodes in it by running the command
      ``ceph node ls all``.
   :ref:`Ceph Object Gateway<object-gateway>`
      An object storage interface built on top of librados. Ceph
      Object Gateway provides a RESTful gateway between applications
      and Ceph storage clusters.
   Ceph Object Storage
      See :term:`Ceph Object Store`.
   Ceph Object Store
      A Ceph Object Store consists of a :term:`Ceph Storage Cluster`
      and a :term:`Ceph Object Gateway` (RGW).
   :ref:`Ceph OSD<rados_configuration_storage-devices_ceph_osd>`
      Ceph **O**\bject **S**\torage **D**\aemon. The Ceph OSD
      software, which interacts with logical disks (:term:`OSD`).
      Around 2013, there was an attempt by "research and industry"
      (Sage's own words) to insist on using the term "OSD" to mean
      only "Object Storage Device", but the Ceph community has always
      persisted in using the term to mean "Object Storage Daemon",
      and no less an authority than Sage Weil himself confirmed in
      November of 2022 that "Daemon is more accurate for how Ceph is
      built" (private correspondence between Zac Dover and Sage Weil,
      07 Nov 2022).
   Ceph OSD Daemon
      See :term:`Ceph OSD`.
   Ceph OSD Daemons
      See :term:`Ceph OSD`.
   Ceph Platform
      All Ceph software, which includes any piece of code hosted at
      `https://github.com/ceph`_.
   Ceph Point Release
      See :term:`Releases`.
   Ceph Project
      The aggregate term for the people, software, mission and
      infrastructure of Ceph.
   Ceph Release
      See :term:`Releases`.
   Ceph Release Candidate
      See :term:`Releases`.
   Ceph Stable Release
      See :term:`Releases`.
   Ceph Stack
      A collection of two or more components of Ceph.
   :ref:`Ceph Storage Cluster<arch-ceph-storage-cluster>`
      The collection of :term:`Ceph Monitor`\s, :term:`Ceph
      Manager`\s, :term:`Ceph Metadata Server`\s, and :term:`OSD`\s
      that work together to store and replicate data for use by
      applications, Ceph Users, and :term:`Ceph Client`\s. Ceph
      Storage Clusters receive data from :term:`Ceph Client`\s.
   CephX
      The Ceph authentication protocol. CephX authenticates users and
      daemons. CephX operates like Kerberos, but it has no single
      point of failure. See the :ref:`High-availability
      Authentication section<arch_high_availability_authentication>`
      of the Architecture document and the :ref:`CephX Configuration
      Reference<rados-cephx-config-ref>`.
   Client
      A client is any program external to Ceph that uses a Ceph
      Cluster to store and replicate data.
   Cloud Platforms
   Cloud Stacks
      Third party cloud provisioning platforms such as OpenStack,
      CloudStack, OpenNebula, and Proxmox VE.
   Cluster Map
      The set of maps consisting of the monitor map, OSD map, PG map,
      MDS map, and CRUSH map, which together report the state of the
      Ceph cluster. See :ref:`the "Cluster Map" section of the
      Architecture document<architecture_cluster_map>` for details.
   CRUSH
      **C**\ontrolled **R**\eplication **U**\nder **S**\calable
      **H**\ashing. The algorithm that Ceph uses to compute object
      storage locations.
   CRUSH rule
      The CRUSH data placement rule that applies to a particular
      pool or pools.
   DAS
      **D**\irect-\ **A**\ttached **S**\torage. Storage that is
      attached directly to the computer accessing it, without passing
      through a network. Contrast with NAS and SAN.
   :ref:`Dashboard<mgr-dashboard>`
      A built-in web-based Ceph management and monitoring application
      to administer various aspects and objects of the cluster. The
      dashboard is implemented as a Ceph Manager module. See
      :ref:`mgr-dashboard` for more details.
   Dashboard Module
      Another name for :term:`Dashboard`.
   Dashboard Plugin
      Another name for :term:`Dashboard`.
   FQDN
      **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
      that is applied to a node in a network and that specifies the
      node's exact location in the tree hierarchy of the DNS.

      In the context of Ceph cluster administration, FQDNs are often
      applied to hosts. In this documentation, the term "FQDN" is
      used mostly to distinguish between FQDNs and relatively simpler
      hostnames, which do not specify the exact location of the host
      in the tree hierarchy of the DNS but merely name the host.
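      As an illustration of the distinction, Python's standard
      library can report both forms for the local machine (a minimal
      sketch; the values depend entirely on your host and DNS
      configuration)::

          import socket

          # Bare hostname: merely names the host, e.g. "ceph-node1".
          hostname = socket.gethostname()

          # FQDN: may add the DNS domain, e.g. "ceph-node1.example.com".
          fqdn = socket.getfqdn()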
   Host
      Any single machine or server in a Ceph Cluster. See :term:`Ceph
      Node`.
   Hybrid OSD
      Refers to an OSD that has both HDD and SSD drives.
   LVM tags
      **L**\ogical **V**\olume **M**\anager tags. Extensible metadata
      for LVM volumes and groups. They are used to store
      Ceph-specific information about devices and their relationship
      with OSDs.
   :ref:`MDS<cephfs_add_remote_mds>`
      The Ceph **M**\eta\ **D**\ata **S**\erver daemon. Also referred
      to as "ceph-mds". The Ceph metadata server daemon must be
      running in any Ceph cluster that runs the CephFS file system.
      The MDS stores all filesystem metadata.
   MGR
      The Ceph manager software, which collects all the state from
      the whole cluster in one place.
   MON
      The Ceph monitor software.
   Node
      See :term:`Ceph Node`.
   Object Storage Device
      See :term:`OSD`.
   OSD
      Probably :term:`Ceph OSD`, but not necessarily. Sometimes
      (especially in older correspondence, and especially in
      documentation that is not written specifically for Ceph), "OSD"
      means "**O**\bject **S**\torage **D**\evice", which refers to a
      physical or logical storage unit (for example: LUN). The Ceph
      community has always used the term "OSD" to refer to
      :term:`Ceph OSD Daemon` despite an industry push in the
      mid-2010s to insist that "OSD" should refer to "Object Storage
      Device", so it is important to know which meaning is intended.
   OSD fsid
      A unique identifier used to identify an OSD. It is found in the
      OSD path in a file called ``osd_fsid``. The term ``fsid`` is
      used interchangeably with ``uuid``.
   OSD id
      The integer that defines an OSD. It is generated by the
      monitors during the creation of each OSD.
   OSD uuid
      The unique identifier of an OSD. This term is used
      interchangeably with ``fsid``.
   Period
      In the context of :term:`RGW`, a period is the configuration
      state of the :term:`Realm`. The period stores the configuration
      state of a multi-site configuration. When the period is
      updated, the "epoch" of the period is changed.
   Placement Groups (PGs)
      Placement groups (PGs) are subsets of each logical Ceph pool.
      Placement groups perform the function of placing objects (as a
      group) into OSDs. Ceph manages data internally at
      placement-group granularity: this scales better than would
      managing individual (and therefore more numerous) RADOS
      objects. A cluster that has a larger number of placement groups
      (for example, 100 per OSD) is better balanced than an otherwise
      identical cluster with a smaller number of placement groups.

      Ceph's internal RADOS objects are each mapped to a specific
      placement group, and each placement group belongs to exactly
      one Ceph pool.
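      The object-to-PG mapping can be sketched in Python. This is a
      deliberately simplified illustration, not Ceph's real placement
      code: Ceph hashes the object name (rjenkins by default) and
      folds the hash into the pool's ``pg_num`` with a "stable mod"
      step, whereas ``zlib.crc32`` and a plain modulo stand in for
      both here::

          import zlib

          def object_to_pg(object_name: str, pg_num: int) -> int:
              """Map an object name to a placement-group id (simplified)."""
              return zlib.crc32(object_name.encode()) % pg_num

          # Every object deterministically lands in exactly one PG of its pool.
          pg = object_to_pg("rbd_data.0001", 128)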
   :ref:`Pool<rados_pools>`
      A pool is a logical partition used to store objects.
   Pools
      See :term:`pool`.
   :ref:`Primary Affinity <rados_ops_primary_affinity>`
      The characteristic of an OSD that governs the likelihood that
      a given OSD will be selected as the primary OSD (or "lead
      OSD") in an acting set. Primary affinity was introduced in
      Firefly (v. 0.80). See :ref:`Primary Affinity
      <rados_ops_primary_affinity>`.
   RADOS
      **R**\eliable **A**\utonomic **D**\istributed **O**\bject
      **S**\tore. RADOS is the object store that provides a scalable
      service for variably-sized objects. The RADOS object store is
      the core component of a Ceph cluster. `This blog post from 2009
      <https://ceph.io/en/news/blog/2009/the-rados-distributed-object-store/>`_
      provides a beginner's introduction to RADOS. Readers interested
      in a deeper understanding of RADOS are directed to `RADOS: A
      Scalable, Reliable Storage Service for Petabyte-scale Storage
      Clusters <https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf>`_.
   RADOS Cluster
      A proper subset of the Ceph Cluster consisting of
      :term:`OSD`\s, :term:`Ceph Monitor`\s, and :term:`Ceph
      Manager`\s.
   RADOS Gateway
      See :term:`RGW`.
   RBD
      **R**\ADOS **B**\lock **D**\evice. See :term:`Ceph Block
      Device`.
   :ref:`Realm<rgw-realms>`
      In the context of RADOS Gateway (RGW), a realm is a globally
      unique namespace that consists of one or more zonegroups.
   Releases

      Ceph Interim Release
         A version of Ceph that has not yet been put through quality
         assurance testing. May contain new features.

      Ceph Point Release
         Any ad hoc release that includes only bug fixes and security
         fixes.

      Ceph Release
         Any distinct numbered version of Ceph.

      Ceph Release Candidate
         A major version of Ceph that has undergone initial quality
         assurance testing and is ready for beta testers.

      Ceph Stable Release
         A major version of Ceph where all features from the
         preceding interim releases have been put through quality
         assurance testing successfully.
   Reliable Autonomic Distributed Object Store
      The core set of storage software which stores the user's data
      (MON+OSD). See also :term:`RADOS`.
   :ref:`RGW<object-gateway>`
      **R**\ADOS **G**\ate\ **w**\ay.

      Also called "Ceph Object Gateway". The component of Ceph that
      provides a gateway to both the Amazon S3 RESTful API and the
      OpenStack Swift API.
   scrubs
      The processes by which Ceph ensures data integrity. During the
      process of scrubbing, Ceph generates a catalog of all objects
      in a placement group, then ensures that none of the objects are
      missing or mismatched by comparing each primary object against
      its replicas, which are stored across other OSDs. Any PG that
      is determined to have a copy of an object that is different
      from the other copies, or that is missing an object entirely,
      is marked "inconsistent".

      There are two kinds of scrubbing: light scrubbing and deep
      scrubbing (also called "normal scrubbing" and "deep scrubbing",
      respectively). Light scrubbing is performed daily and does
      nothing more than confirm that a given object exists and that
      its metadata is correct. Deep scrubbing is performed weekly and
      reads the data and uses checksums to ensure data integrity.

      See :ref:`Scrubbing <rados_config_scrubbing>` in the RADOS OSD
      Configuration Reference Guide and page 141 of *Mastering Ceph,
      second edition* (Fisk, Nick. 2019).
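      The replica comparison at the heart of a deep scrub can be
      sketched in Python. This is an illustrative model only (the
      OSD names are hypothetical, and real deep scrubbing works on
      on-disk object data using Ceph's own checksums rather than
      SHA-256)::

          import hashlib

          def replicas_consistent(replicas: dict) -> bool:
              """Return True if every replica of an object has the same checksum."""
              digests = {hashlib.sha256(data).hexdigest()
                         for data in replicas.values()}
              return len(digests) == 1  # one distinct digest => consistent

          # osd.2 holds a corrupted copy, so the PG would be marked inconsistent.
          replicas = {"osd.0": b"payload", "osd.1": b"payload", "osd.2": b"payl0ad"}
          consistent = replicas_consistent(replicas)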
   secrets
      Secrets are credentials used to perform digital authentication
      whenever privileged users must access systems that require
      authentication. Secrets can be passwords, API keys, tokens, SSH
      keys, private certificates, or encryption keys.
   SDS
      **S**\oftware-**d**\efined **S**\torage.
   systemd oneshot
      A systemd ``Type`` in which the command defined in
      ``ExecStart`` exits upon completion (it is not intended to
      daemonize).
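      A minimal unit file of this type might look as follows (the
      unit and script names are hypothetical, shown only to
      illustrate the ``oneshot`` type)::

          [Unit]
          Description=Run a one-time setup task

          [Service]
          Type=oneshot
          ExecStart=/usr/local/bin/setup-task.sh
          RemainAfterExit=yes

          [Install]
          WantedBy=multi-user.target

      ``RemainAfterExit=yes`` keeps the unit reported as "active"
      after the command exits, which is a common companion setting
      for oneshot units.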
   Teuthology
      The collection of software that performs scripted tests on
      Ceph.
   User
      An individual or a system actor (for example, an application)
      that uses Ceph clients to interact with the :term:`Ceph Storage
      Cluster`. See :ref:`User<rados-ops-user>` and :ref:`User
      Management<user-management>`.
   Zone
      In the context of :term:`RGW`, a zone is a logical group that
      consists of one or more :term:`RGW` instances. A zone's
      configuration state is stored in the :term:`period`. See
      :ref:`Zones<radosgw-zones>`.

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map