Mirror of https://github.com/ceph/ceph, synced 2025-01-20 18:21:57 +00:00
doc/glossary.rst: alphabetize glossary terms
This commit (finally) alphabetizes the terms in the glossary. This is not a
grammar-correcting or usage-correcting commit.

Signed-off-by: Zac Dover <zac.dover@gmail.com>
parent 3b364de9e3
commit 10b33bdabe

doc/glossary.rst: 309 changed lines
@@ -14,70 +14,148 @@ reflect either technical terms or legacy ways of referring to Ceph systems.

.. glossary::

    Ceph Project
        The aggregate term for the people, software, mission and infrastructure
        of Ceph.

    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos, but it
        has no single point of failure.
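
        A minimal sketch of a client authenticating with cephx through the
        ``librados`` Python binding; the user name, conffile and keyring path
        are assumptions for the example, not fixed by this glossary entry:

        .. code-block:: python

           import rados

           # Connect as a cephx-authenticated user; the monitors verify the
           # key stored in the referenced keyring.
           cluster = rados.Rados(
               name='client.admin',
               conffile='/etc/ceph/ceph.conf',
               conf={'keyring': '/etc/ceph/ceph.client.admin.keyring'},
           )
           cluster.connect()
           print('connected, cluster fsid:', cluster.get_fsid())
           cluster.shutdown()
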
    bluestore
        OSD BlueStore is a new back end for OSD daemons (kraken and
        newer versions). Unlike :term:`filestore` it stores objects
        directly on the Ceph block devices without any file system
        interface.

    Ceph
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `https://github.com/ceph`_.

    Ceph System
    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Node
    Node
    Host
        Any single machine or server in a Ceph System.

    Ceph Storage Cluster
    Ceph Object Store
    RADOS
    RADOS Cluster
    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data (MON+OSD).

    Ceph Cluster Map
    Cluster Map
        The set of maps comprising the monitor map, OSD map, PG map, MDS map and
        CRUSH map. See `Cluster Map`_ for details.

    Ceph Object Storage
        The object storage "product", service or capabilities, which consists
        essentially of a Ceph Storage Cluster and a Ceph Object Gateway.

    Ceph Object Gateway
    RADOS Gateway
    RGW
        The S3/Swift gateway component of Ceph.

    Ceph Block Device
    RBD
        The block storage component of Ceph.
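
        A minimal sketch of creating and writing a block device image with the
        ``rbd`` Python binding; the pool name ``rbd``, the image name and the
        image size are assumptions for the example:

        .. code-block:: python

           import rados
           import rbd

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()
           ioctx = cluster.open_ioctx('rbd')
           try:
               # Create a 4 GiB image and write a few bytes into it.
               rbd.RBD().create(ioctx, 'demo-image', 4 * 1024**3)
               with rbd.Image(ioctx, 'demo-image') as image:
                   image.write(b'hello block device', 0)
           finally:
               ioctx.close()
               cluster.shutdown()
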
    Ceph Block Storage
        The block storage "product," service or capabilities when used in
        conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and a
        hypervisor abstraction layer such as ``libvirt``.
        The block storage "product," service or capabilities when used
        in conjunction with ``librbd``, a hypervisor such as QEMU or
        Xen, and a hypervisor abstraction layer such as ``libvirt``.

    Ceph Client
        The collection of Ceph components which can access a Ceph
        Storage Cluster. These include the Ceph Object Gateway, the
        Ceph Block Device, the Ceph File System, and their
        corresponding libraries, kernel modules, and FUSEs.

    Ceph Client Libraries
        The collection of libraries that can be used to interact with
        components of the Ceph System.

    Ceph Clients
    Ceph Cluster Map
    Ceph Dashboard
    Ceph File System
    CephFS
    Ceph FS
        The POSIX filesystem components of Ceph. Refer to
        :ref:`CephFS Architecture <arch-cephfs>` and :ref:`ceph-file-system` for
        more details.
        The POSIX filesystem components of Ceph. Refer to :ref:`CephFS
        Architecture <arch-cephfs>` and :ref:`ceph-file-system` for
        more details.
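
        A minimal sketch of using the CephFS Python binding (``cephfs``); the
        conffile, directory name and mode are assumptions for the example, and
        the exact binding API may differ between Ceph releases:

        .. code-block:: python

           import cephfs

           fs = cephfs.LibCephFS()
           fs.conf_read_file('/etc/ceph/ceph.conf')
           fs.mount()                 # mount the default filesystem at "/"
           fs.mkdir('/demo', 0o755)   # create a directory over the POSIX-like API
           print(fs.stat('/demo'))
           fs.unmount()
           fs.shutdown()
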
    Ceph Interim Release
        Versions of Ceph that have not yet been put through quality
        assurance testing, but may contain new features.

    Ceph Kernel Modules
        The collection of kernel modules which can be used to interact
        with the Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

    Ceph Manager
    Ceph Manager Dashboard
    Ceph Metadata Server
    Ceph Monitor
    Ceph Node
    Ceph Object Gateway
    Ceph Object Storage
        The object storage "product", service or capabilities, which
        consists essentially of a Ceph Storage Cluster and a Ceph Object
        Gateway.

    Ceph Object Store
    Ceph OSD
        The Ceph OSD software, which interacts with a logical
        disk (:term:`OSD`). Sometimes, Ceph users use the
        term "OSD" to refer to "Ceph OSD Daemon", though the
        proper term is "Ceph OSD".

    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph Platform
        All Ceph software, which includes any piece of code hosted at
        `https://github.com/ceph`_.

    Ceph Point Release
        Any ad-hoc release that includes only bug or security fixes.

    Ceph Project
        The aggregate term for the people, software, mission and
        infrastructure of Ceph.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality
        assurance testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding
        interim releases have been put through quality assurance
        testing successfully.

    Ceph Stack
        A collection of two or more components of Ceph.

    Ceph Storage Cluster
    Ceph System
    Ceph Test Framework
    cephx
        The Ceph authentication protocol. Cephx operates like Kerberos,
        but it has no single point of failure.

    Cloud Platforms
    Cloud Stacks
        Third party cloud provisioning platforms such as OpenStack, CloudStack,
        OpenNebula, Proxmox VE, etc.
        Third party cloud provisioning platforms such as OpenStack,
        CloudStack, OpenNebula, Proxmox VE, etc.

    Cluster Map
        The set of maps comprising the monitor map, OSD map, PG map,
        MDS map and CRUSH map. See `Cluster Map`_ for details.
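
        A minimal sketch of fetching one piece of the cluster map (the OSD map)
        from the monitors with the ``librados`` Python binding; the conffile
        path is an assumption for the example:

        .. code-block:: python

           import json
           import rados

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()
           # Ask the monitors for the current OSD map in JSON form.
           cmd = json.dumps({'prefix': 'osd dump', 'format': 'json'})
           ret, outbuf, errs = cluster.mon_command(cmd, b'')
           print('osdmap epoch:', json.loads(outbuf)['epoch'])
           cluster.shutdown()
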
    CRUSH
        Controlled Replication Under Scalable Hashing. It is the
        algorithm Ceph uses to compute object storage locations.

    CRUSH rule
        The CRUSH data placement rule that applies to a particular
        pool or pools.
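
        A minimal sketch that asks the cluster where CRUSH places an object;
        the pool name and object name are assumptions, and the JSON parameter
        names mirror the ``ceph osd map`` CLI command:

        .. code-block:: python

           import json
           import rados

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()
           # Equivalent to "ceph osd map rbd demo-object": report the PG and
           # the set of OSDs that CRUSH selects for this object.
           cmd = json.dumps({'prefix': 'osd map', 'pool': 'rbd',
                             'object': 'demo-object', 'format': 'json'})
           ret, outbuf, errs = cluster.mon_command(cmd, b'')
           print(json.loads(outbuf))
           cluster.shutdown()
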
    Dashboard
        A built-in web-based Ceph management and monitoring application
        to administer various aspects and objects of the cluster. The
        dashboard is implemented as a Ceph Manager module. See
        :ref:`mgr-dashboard` for more details.

    Dashboard Module
    Dashboard Plugin
    filestore
        A back end for OSD daemons, where a Journal is needed and files
        are written to the filesystem.

    Host
        Any single machine or server in a Ceph System.

    LVM tags
        Extensible metadata for LVM volumes and groups. It is used to
        store Ceph-specific information about devices and their
        relationship with OSDs.
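
        A minimal sketch of listing the LVM tags that ``ceph-volume`` attaches
        to logical volumes; the tag names shown (``ceph.osd_id`` and so on) and
        the use of ``lvs --reportformat json`` are assumptions for the example:

        .. code-block:: python

           import json
           import subprocess

           # Report logical volumes together with their LVM tags.
           out = subprocess.run(
               ['lvs', '--reportformat', 'json', '-o', 'lv_name,vg_name,lv_tags'],
               check=True, capture_output=True, text=True,
           ).stdout
           for lv in json.loads(out)['report'][0]['lv']:
               if 'ceph.osd_id' in lv['lv_tags']:
                   print(lv['lv_name'], lv['lv_tags'])
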
    MDS
        The Ceph metadata software.

    MGR
        The Ceph manager software, which collects all the state from
        the whole cluster in one place.

    MON
        The Ceph monitor software.

    Node
    Object Storage Device
    OSD
        A physical or logical storage unit (*e.g.*, LUN).

@@ -85,115 +163,44 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
        term "OSD" to refer to :term:`Ceph OSD Daemon`, though the
        proper term is "Ceph OSD".

    Ceph OSD Daemon
    Ceph OSD Daemons
    Ceph OSD
        The Ceph OSD software, which interacts with a logical
        disk (:term:`OSD`). Sometimes, Ceph users use the
        term "OSD" to refer to "Ceph OSD Daemon", though the
        proper term is "Ceph OSD".
    OSD fsid
        This is a unique identifier used to further improve the
        uniqueness of an OSD and it is found in the OSD path in a file
        called ``osd_fsid``. This ``fsid`` term is used interchangeably
        with ``uuid``.

    OSD id
        The integer that defines an OSD. It is generated by the monitors as part
        of the creation of a new OSD.

    OSD fsid
        This is a unique identifier used to further improve the uniqueness of an
        OSD and it is found in the OSD path in a file called ``osd_fsid``. This
        ``fsid`` term is used interchangeably with ``uuid``.
        The integer that defines an OSD. It is generated by the
        monitors as part of the creation of a new OSD.

    OSD uuid
        Just like the OSD fsid, this is the OSD unique identifier and is used
        interchangeably with ``fsid``.
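
        A minimal sketch of reading an OSD's identifier from its data
        directory; the OSD id ``0``, the cluster name ``ceph`` and the
        candidate file names are assumptions for the example:

        .. code-block:: python

           from pathlib import Path

           # The OSD data directory holds the OSD's unique identifier; the
           # file name can differ depending on how the OSD was deployed.
           osd_dir = Path('/var/lib/ceph/osd/ceph-0')
           for name in ('osd_fsid', 'fsid'):
               path = osd_dir / name
               if path.exists():
                   print(name, '=', path.read_text().strip())
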
    bluestore
        OSD BlueStore is a new back end for OSD daemons (kraken and newer
        versions). Unlike :term:`filestore` it stores objects directly on the
        Ceph block devices without any file system interface.

    filestore
        A back end for OSD daemons, where a Journal is needed and files are
        written to the filesystem.

    Ceph Monitor
    MON
        The Ceph monitor software.

    Ceph Manager
    MGR
        The Ceph manager software, which collects all the state from the whole
        cluster in one place.

    Ceph Manager Dashboard
    Ceph Dashboard
    Dashboard Module
    Dashboard Plugin
    Dashboard
        A built-in web-based Ceph management and monitoring application to
        administer various aspects and objects of the cluster. The dashboard is
        implemented as a Ceph Manager module. See :ref:`mgr-dashboard` for more
        details.

    Ceph Metadata Server
    MDS
        The Ceph metadata software.

    Ceph Clients
    Ceph Client
        The collection of Ceph components which can access a Ceph Storage
        Cluster. These include the Ceph Object Gateway, the Ceph Block Device,
        the Ceph File System, and their corresponding libraries, kernel modules,
        and FUSEs.

    Ceph Kernel Modules
        The collection of kernel modules which can be used to interact with the
        Ceph System (e.g., ``ceph.ko``, ``rbd.ko``).

    Ceph Client Libraries
        The collection of libraries that can be used to interact with components
        of the Ceph System.

    Ceph Release
        Any distinct numbered version of Ceph.

    Ceph Point Release
        Any ad-hoc release that includes only bug or security fixes.

    Ceph Interim Release
        Versions of Ceph that have not yet been put through quality assurance
        testing, but may contain new features.

    Ceph Release Candidate
        A major version of Ceph that has undergone initial quality assurance
        testing and is ready for beta testers.

    Ceph Stable Release
        A major version of Ceph where all features from the preceding interim
        releases have been put through quality assurance testing successfully.

    Ceph Test Framework
    Teuthology
        The collection of software that performs scripted tests on Ceph.

    CRUSH
        Controlled Replication Under Scalable Hashing. It is the algorithm
        Ceph uses to compute object storage locations.

    CRUSH rule
        The CRUSH data placement rule that applies to a particular pool or pools.
        Just like the OSD fsid, this is the OSD unique identifier and
        is used interchangeably with ``fsid``.

    Pool
    Pools
        Pools are logical partitions for storing objects.
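
        A minimal sketch of creating a pool and storing an object in it with
        the ``librados`` Python binding; the pool name, object name and payload
        are assumptions for the example:

        .. code-block:: python

           import rados

           cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
           cluster.connect()
           if not cluster.pool_exists('demo-pool'):
               cluster.create_pool('demo-pool')
           ioctx = cluster.open_ioctx('demo-pool')
           ioctx.write_full('greeting', b'stored in a pool')   # object in the pool
           print(ioctx.read('greeting'))
           ioctx.close()
           cluster.shutdown()
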
    systemd oneshot
        A systemd ``type`` where a command is defined in ``ExecStart`` which will
        exit upon completion (it is not intended to daemonize).
    RADOS
    RADOS Cluster
    RADOS Gateway
    RBD
        The block storage component of Ceph.

    LVM tags
        Extensible metadata for LVM volumes and groups. It is used to store
        Ceph-specific information about devices and their relationship with
        OSDs.
    Reliable Autonomic Distributed Object Store
        The core set of storage software which stores the user's data
        (MON+OSD).

    RGW
        The S3/Swift gateway component of Ceph.
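
        A minimal sketch of talking to the S3 API exposed by a Ceph Object
        Gateway with ``boto3``; the endpoint URL, access key, secret key and
        bucket name are placeholders, not values defined by Ceph:

        .. code-block:: python

           import boto3

           s3 = boto3.client(
               's3',
               endpoint_url='http://rgw.example.com:8080',
               aws_access_key_id='ACCESS_KEY',
               aws_secret_access_key='SECRET_KEY',
           )
           s3.create_bucket(Bucket='demo-bucket')
           s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'stored via RGW')
           obj = s3.get_object(Bucket='demo-bucket', Key='hello.txt')
           print(obj['Body'].read())
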
    systemd oneshot
        A systemd ``type`` where a command is defined in ``ExecStart``
        which will exit upon completion (it is not intended to
        daemonize).

    Teuthology
        The collection of software that performs scripted tests on Ceph.

.. _https://github.com/ceph: https://github.com/ceph
.. _Cluster Map: ../architecture#cluster-map