mirror of
https://github.com/ceph/ceph
synced 2025-02-01 07:52:57 +00:00
doc/glossary.rst: alphabetize glossary terms
This commit (finally) alphabetizes the terms in the glossary. This is not a grammar-correcting or usage-correcting commit.

Signed-off-by: Zac Dover <zac.dover@gmail.com>
This commit is contained in:
parent 3b364de9e3
commit 10b33bdabe
doc/glossary.rst | 309
@@ -14,70 +14,148 @@ reflect either technical terms or legacy ways of referring to Ceph systems.

.. glossary::

    bluestore
        OSD BlueStore is a new back end for OSD daemons (kraken and
        newer versions). Unlike :term:`filestore` it stores objects
        directly on the Ceph block devices without any file system
        interface.
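For illustration only (not part of this commit): provisioning a BlueStore-backed OSD with ``ceph-volume``; the device path is a placeholder.

.. code-block:: console

   # BlueStore has been the default back end since Luminous; the flag makes it explicit.
   $ sudo ceph-volume lvm create --bluestore --data /dev/sdb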
    Ceph
    Ceph Block Device
    Ceph Block Storage
        The block storage "product," service or capabilities when used
        in conjunction with ``librbd``, a hypervisor such as QEMU or
        Xen, and a hypervisor abstraction layer such as ``libvirt``.
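A sketch of that librbd path, with invented pool and image names: create an image, then let QEMU reach it through the ``rbd:`` URI.

.. code-block:: console

   $ rbd create --size 4096 rbd/vm-disk-0    # 4 GiB image in pool "rbd"
   $ qemu-img info rbd:rbd/vm-disk-0         # QEMU opens it via librbd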
    Ceph Client
        The collection of Ceph components which can access a Ceph
        Storage Cluster. These include the Ceph Object Gateway, the
        Ceph Block Device, the Ceph File System, and their
        corresponding libraries, kernel modules, and FUSEs.

    Ceph Client Libraries
        The collection of libraries that can be used to interact with
        components of the Ceph System.

    Ceph Clients
    Ceph Cluster Map
    Ceph Dashboard
    Ceph File System
    CephFS
    Ceph FS
        The POSIX filesystem components of Ceph. Refer :ref:`CephFS
        Architecture <arch-cephfs>` and :ref:`ceph-file-system` for
        more details.

    Ceph Interim Release
        Versions of Ceph that have not yet been put through quality
        assurance testing, but may contain new features.

    Ceph Kernel Modules
        The collection of kernel modules which can be used to interact
        with the Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).
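As an aside (assumes a host with the kernel client installed), loading and checking those modules might look like:

.. code-block:: console

   $ sudo modprobe rbd    # pulls in libceph.ko as a dependency
   $ lsmod | grep -E 'rbd|ceph'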
    Ceph Manager
    Ceph Manager Dashboard
    Ceph Metadata Server
    Ceph Monitor
    Ceph Node
    Ceph Object Gateway
    Ceph Object Storage
        The object storage "product", service or capabilities, which
        consists essentially of a Ceph Storage Cluster and a Ceph Object
        Gateway.

    Ceph Object Store
    Ceph OSD
        The Ceph OSD software, which interacts with a logical
        disk (:term:`OSD`). Sometimes, Ceph users use the
        term "OSD" to refer to "Ceph OSD Daemon", though the
        proper term is "Ceph OSD".
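For orientation (not from the glossary file; output depends on the cluster):

.. code-block:: console

   $ ceph osd tree    # OSDs laid out by CRUSH hierarchy, with up/down state
   $ ceph osd ls      # bare list of OSD ids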
    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `https://github.com/ceph`_.

    Ceph Point Release
        Any ad-hoc release that includes only bug or security fixes.

    Ceph Project
        The aggregate term for the people, software, mission and
        infrastructure of Ceph.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality
        assurance testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding
        interim releases have been put through quality assurance
        testing successfully.

    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Storage Cluster
    Ceph System
    Ceph Test Framework
    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos,
        but it has no single point of failure.
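A short sketch of cephx in practice (the client name and pool are invented): the monitors issue keys, with capabilities granted per daemon type.

.. code-block:: console

   $ ceph auth get-or-create client.demo mon 'allow r' osd 'allow rw pool=demo-pool'
   $ ceph auth get client.demo    # show the key and caps just granted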
    Cloud Platforms
    Cloud Stacks
        Third party cloud provisioning platforms such as OpenStack,
        CloudStack, OpenNebula, Proxmox VE, etc.

    Cluster Map
        The set of maps comprising the monitor map, OSD map, PG map,
        MDS map and CRUSH map. See `Cluster Map`_ for details.
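Each constituent map can be inspected on its own; a sketch (epochs and output vary):

.. code-block:: console

   $ ceph mon dump         # monitor map
   $ ceph osd dump         # OSD map
   $ ceph pg dump          # PG map (verbose)
   $ ceph fs dump          # MDS map
   $ ceph osd crush dump   # decoded CRUSH map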
    CRUSH
        Controlled Replication Under Scalable Hashing. It is the
        algorithm Ceph uses to compute object storage locations.

    CRUSH rule
        The CRUSH data placement rule that applies to a particular
        pool(s).
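For example (``replicated_rule`` is the usual default name, but not guaranteed):

.. code-block:: console

   $ ceph osd crush rule ls
   $ ceph osd crush rule dump replicated_rule   # the placement steps the rule takes
   $ ceph osd pool get <pool-name> crush_rule   # which rule a given pool uses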
    Dashboard
        A built-in web-based Ceph management and monitoring application
        to administer various aspects and objects of the cluster. The
        dashboard is implemented as a Ceph Manager module. See
        :ref:`mgr-dashboard` for more details.
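Because it is a Manager module, the dashboard is enabled like any other module; a sketch:

.. code-block:: console

   $ ceph mgr module enable dashboard
   $ ceph mgr services    # lists the dashboard URL once it is serving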
    Dashboard Module
    Dashboard Plugin
    filestore
        A back end for OSD daemons, where a Journal is needed and files
        are written to the filesystem.

    Host
        Any single machine or server in a Ceph System.

    LVM tags
        Extensible metadata for LVM volumes and groups. It is used to
        store Ceph-specific information about devices and its
        relationship with OSDs.
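A peek at those tags (volume names vary; the ``ceph.*`` keys shown are the ones ``ceph-volume`` typically writes, quoted from memory):

.. code-block:: console

   $ sudo lvs -o lv_name,lv_tags --noheadings
   # e.g. osd-block-...  ceph.osd_id=0,ceph.type=block,ceph.cluster_fsid=...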
    MDS
        The Ceph metadata software.

    MGR
        The Ceph manager software, which collects all the state from
        the whole cluster in one place.

    MON
        The Ceph monitor software.
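One way to see all three daemon types at once (assumes an admin keyring on the host):

.. code-block:: console

   $ ceph -s    # mon quorum, active mgr, osd/mds counts
   $ ceph mon stat
   $ ceph mgr stat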
    Node
    Object Storage Device
    OSD
        A physical or logical storage unit (*e.g.*, LUN).

@@ -85,115 +163,44 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
        term "OSD" to refer to :term:`Ceph OSD Daemon`, though the
        proper term is "Ceph OSD".
    OSD fsid
        This is a unique identifier used to further improve the
        uniqueness of an OSD and it is found in the OSD path in a file
        called ``osd_fsid``. This ``fsid`` term is used interchangeably
        with ``uuid``

    OSD id
        The integer that defines an OSD. It is generated by the
        monitors as part of the creation of a new OSD.

    OSD uuid
        Just like the OSD fsid, this is the OSD unique identifier and
        is used interchangeably with ``fsid``
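To see the id and fsid side by side (the path assumes the conventional ``/var/lib/ceph/osd/<cluster>-<id>`` layout):

.. code-block:: console

   $ ceph osd ls                          # integer ids
   $ cat /var/lib/ceph/osd/ceph-0/fsid    # that OSD's uuid on its host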
    Pool
    Pools
        Pools are logical partitions for storing objects.
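For instance (the name and PG count are arbitrary):

.. code-block:: console

   $ ceph osd pool create demo-pool 32    # 32 placement groups
   $ ceph osd lspools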
    RADOS
    RADOS Cluster
    RADOS Gateway
    RBD
        The block storage component of Ceph.

    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data
        (MON+OSD).

    RGW
        The S3/Swift gateway component of Ceph.
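First contact with RGW might look like this (user and host are invented; any S3 client would do):

.. code-block:: console

   $ radosgw-admin user create --uid=demo --display-name="Demo User"
   $ s3cmd --host=rgw.example.net ls    # point an S3 client at the gateway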
    systemd oneshot
        A systemd ``type`` where a command is defined in ``ExecStart``
        which will exit upon completion (it is not intended to
        daemonize)
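A minimal sketch of such a unit (the name and command are hypothetical):

.. code-block:: console

   $ cat /etc/systemd/system/demo-oneshot.service
   [Unit]
   Description=Demo one-shot task

   [Service]
   Type=oneshot
   ExecStart=/usr/bin/true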
    Teuthology
        The collection of software that performs scripted tests on Ceph.

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map