docs: add warnings and remove some doc sections for Filestore
Update docs after filestore removal.

Signed-off-by: Nitzan Mordechai <nmordech@redhat.com>
Parent: 7870a6290d
Commit: d79f2a8154
Changed paths: doc/
@@ -65,10 +65,8 @@ comes through a :term:`Ceph Block Device`, :term:`Ceph Object Storage`, the
 :term:`Ceph File System` or a custom implementation you create using
 ``librados``-- which is stored as RADOS objects. Each object is stored on an
 :term:`Object Storage Device`. Ceph OSD Daemons handle read, write, and
-replication operations on storage drives. With the older Filestore back end,
-each RADOS object was stored as a separate file on a conventional filesystem
-(usually XFS). With the new and default BlueStore back end, objects are
-stored in a monolithic database-like fashion.
+replication operations on storage drives. With the default BlueStore back end,
+objects are stored in a monolithic database-like fashion.

 .. ditaa::

@@ -81,15 +81,11 @@ The systemd unit will look for the matching OSD device, and by looking at its
 #. Mount the device in the corresponding location (by convention this is
    ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)

-#. Ensure that all required devices are ready for that OSD. In the case of
-   a journal (when ``--filestore`` is selected) the device will be queried (with
-   ``blkid`` for partitions, and lvm for logical volumes) to ensure that the
-   correct device is being linked. The symbolic link will *always* be re-done to
-   ensure that the correct device is linked.
+#. Ensure that all required devices are ready for that OSD.

 #. Start the ``ceph-osd@0`` systemd unit

-.. note:: The system infers the objectstore type (filestore or bluestore) by
+.. note:: The system infers the objectstore type by
           inspecting the LVM tags applied to the OSD devices

 Existing OSDs
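For context, the activation flow documented in this hunk is normally driven through ``ceph-volume``; a minimal sketch (the OSD id ``0`` and the fsid are placeholders, and ``--bluestore`` is optional now that BlueStore is the only supported backend)::

    # activate a prepared BlueStore OSD; this enables and starts the matching systemd units
    sudo ceph-volume lvm activate --bluestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
    # the per-OSD unit can also be started by hand
    sudo systemctl start ceph-osd@0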
@@ -112,10 +108,3 @@ To recap the ``activate`` process for :term:`bluestore`:
    pointing it to the OSD ``block`` device.
 #. The systemd unit will ensure all devices are ready and linked
 #. The matching ``ceph-osd`` systemd unit will get started
-
-And for :term:`filestore`:
-
-#. Require both :term:`OSD id` and :term:`OSD uuid`
-#. Enable the system unit with matching id and uuid
-#. The systemd unit will ensure all devices are ready and mounted (if needed)
-#. The matching ``ceph-osd`` systemd unit will get started
@@ -12,8 +12,8 @@ same code path. All ``batch`` does is to calculate the appropriate sizes of all
 volumes and skip over already created volumes.

 All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
-avoiding ``systemd`` units from starting, defining bluestore or filestore,
-are supported.
+avoiding ``systemd`` units from starting, defining bluestore,
+is supported.


 .. _ceph-volume-lvm-batch_auto:
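For reference, the ``batch`` flow described here boils down to a single call; a sketch with placeholder device names::

    # create BlueStore OSDs (the default) from several raw devices in one pass
    sudo ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/sdc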
@@ -17,7 +17,6 @@ immediately after completion.

 The backing objectstore can be specified with:

-* :ref:`--filestore <ceph-volume-lvm-prepare_filestore>`
 * :ref:`--bluestore <ceph-volume-lvm-prepare_bluestore>`

 All command line flags and options are the same as ``ceph-volume lvm prepare``.
@@ -62,8 +62,8 @@ compatibility and prevent ceph-disk from breaking, ceph-volume uses the same
 naming convention *although it does not make sense for the new encryption
 workflow*.

-After the common steps of setting up the OSD during the "prepare stage" (either
-with :term:`filestore` or :term:`bluestore`), the logical volume is left ready
+After the common steps of setting up the OSD during the "prepare stage" (
+with :term:`bluestore`), the logical volume is left ready
 to be activated, regardless of the state of the device (encrypted or
 decrypted).
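As an illustration of the encryption workflow referenced above, a prepare call with ``--dmcrypt`` might look like this (volume group and logical volume names are placeholders)::

    # prepare an encrypted BlueStore OSD; ceph-volume handles the dm-crypt layer
    sudo ceph-volume lvm prepare --bluestore --dmcrypt --data vg/lv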
@@ -19,15 +19,13 @@ play in the Ceph cluster (for example: BlueStore data or BlueStore WAL+DB).
 :term:`BlueStore<bluestore>` is the default backend. Ceph permits changing
 the backend, which can be done by using the following flags and arguments:

-* :ref:`--filestore <ceph-volume-lvm-prepare_filestore>`
 * :ref:`--bluestore <ceph-volume-lvm-prepare_bluestore>`

 .. _ceph-volume-lvm-prepare_bluestore:

 ``bluestore``
 -------------
-:term:`Bluestore<bluestore>` is the default backend for new OSDs. It
-offers more flexibility for devices than :term:`filestore` does. Bluestore
+:term:`Bluestore<bluestore>` is the default backend for new OSDs. Bluestore
 supports the following configurations:

 * a block device, a block.wal device, and a block.db device
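A sketch of the multi-device BlueStore configuration listed above (all volume names are placeholders)::

    # data, WAL and DB on separate logical volumes
    sudo ceph-volume lvm prepare --bluestore --data vg/data-lv --block.wal vg/wal-lv --block.db vg/db-lv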
@@ -103,8 +101,10 @@ a volume group and a logical volume using the following conventions:

 ``filestore``
 -------------
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
+
 ``Filestore<filestore>`` is the OSD backend that prepares logical volumes for a
-:term:`filestore`-backed object-store OSD.
+`filestore`-backed object-store OSD.


 ``Filestore<filestore>`` uses a logical volume to store OSD data and it uses
@@ -270,8 +270,7 @@ can be started later (for detailed metadata description see
 Crush device class
 ------------------

-To set the crush device class for the OSD, use the ``--crush-device-class`` flag. This will
-work for both bluestore and filestore OSDs::
+To set the crush device class for the OSD, use the ``--crush-device-class`` flag.

     ceph-volume lvm prepare --bluestore --data vg/lv --crush-device-class foo

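To complement the ``--crush-device-class`` example kept above, the class can be checked (and changed after creation) with the usual CRUSH commands; a hedged sketch, ``foo`` and ``osd.0`` being placeholders::

    # list device classes known to the cluster
    ceph osd crush class ls
    # re-assign a class later (clear the old one first)
    ceph osd crush rm-device-class osd.0
    ceph osd crush set-device-class foo osd.0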
@@ -306,11 +305,6 @@ regardless of the type of volume (journal or data) or OSD objectstore:
 * ``osd_id``
 * ``crush_device_class``

-For :term:`filestore` these tags will be added:
-
-* ``journal_device``
-* ``journal_uuid``
-
 For :term:`bluestore` these tags will be added:

 * ``block_device``
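The tags listed above are stored as plain LVM tags on the logical volumes, so they can be inspected with standard LVM tooling; a minimal sketch::

    # show the ceph.* tags that ceph-volume applied (osd_id, crush_device_class, block_device, ...)
    sudo lvs -o +lv_tags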
@@ -336,15 +330,3 @@ To recap the ``prepare`` process for :term:`bluestore`:
 #. monmap is fetched for activation
 #. Data directory is populated by ``ceph-osd``
 #. Logical Volumes are assigned all the Ceph metadata using lvm tags
-
-
-And the ``prepare`` process for :term:`filestore`:
-
-#. Accepts raw physical devices, partitions on physical devices or logical volumes as arguments.
-#. Generate a UUID for the OSD
-#. Ask the monitor get an OSD ID reusing the generated UUID
-#. OSD data directory is created and data volume mounted
-#. Journal is symlinked from data volume to journal location
-#. monmap is fetched for activation
-#. devices is mounted and data directory is populated by ``ceph-osd``
-#. data and journal volumes are assigned all the Ceph metadata using lvm tags
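The metadata recorded during the ``prepare`` steps recapped above can be reviewed afterwards; a sketch::

    # print the OSDs ceph-volume knows about on this host, including their lvm tags
    sudo ceph-volume lvm list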
@@ -73,8 +73,7 @@ identify the OSD and its devices, and it will proceed to:
 # mount the device in the corresponding location (by convention this is
   ``/var/lib/ceph/osd/<cluster name>-<osd id>/``)

-# ensure that all required devices are ready for that OSD and properly linked,
-regardless of objectstore used (filestore or bluestore). The symbolic link will
-**always** be re-done to ensure that the correct device is linked.
+# ensure that all required devices are ready for that OSD and properly linked.
+The symbolic link will **always** be re-done to ensure that the correct device is linked.

 # start the ``ceph-osd@0`` systemd unit
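For context on the units involved here: activation is wired through a ``ceph-volume@`` unit whose instance name encodes the OSD id and uuid (the uuid below is a placeholder, and the exact template name should be checked against the installed units), which in turn starts the matching ``ceph-osd@`` unit::

    # enable the activation unit for OSD 0 (instance name is <type>-<osd id>-<osd uuid>)
    sudo systemctl enable ceph-volume@lvm-0-a7f64266-0894-4f1e-a635-d0aeaca0e993
    # the OSD daemon itself runs under the ceph-osd@ template unit
    sudo systemctl start ceph-osd@0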
@@ -14,8 +14,7 @@ clusters can be converted to a state in which they can be managed by
 Limitations
 -----------

-* Cephadm works only with BlueStore OSDs. FileStore OSDs that are in your
-  cluster cannot be managed with ``cephadm``.
+* Cephadm works only with BlueStore OSDs.

 Preparation
 -----------
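Since only BlueStore OSDs can be adopted, the per-daemon conversion this document covers is typically done with ``cephadm adopt``; a sketch, with ``osd.0`` as a placeholder::

    # convert a legacy (ceph-volume deployed) BlueStore OSD to cephadm management
    sudo cephadm adopt --style legacy --name osd.0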
@@ -37,10 +37,6 @@ Options

    Create an erasure pool.

-.. option:: -f, --filestore
-
-   Use filestore as the osd objectstore backend.
-
 .. option:: --hitset <pool> <hit_set_type>

    Enable hitset tracking.
@@ -52,14 +52,9 @@
     "PrimaryLogPG" -> "ObjectStore"
     "PrimaryLogPG" -> "OSDMap"

-    "ObjectStore" -> "FileStore"
     "ObjectStore" -> "BlueStore"

     "BlueStore" -> "rocksdb"
-
-    "FileStore" -> "xfs"
-    "FileStore" -> "btrfs"
-    "FileStore" -> "ext4"
 }

@@ -1,93 +0,0 @@
-=============
-OSD Throttles
-=============
-
-There are three significant throttles in the FileStore OSD back end:
-wbthrottle, op_queue_throttle, and a throttle based on journal usage.
-
-WBThrottle
-----------
-The WBThrottle is defined in src/os/filestore/WBThrottle.[h,cc] and
-included in FileStore as FileStore::wbthrottle. The intention is to
-bound the amount of outstanding IO we need to do to flush the journal.
-At the same time, we don't want to necessarily do it inline in case we
-might be able to combine several IOs on the same object close together
-in time. Thus, in FileStore::_write, we queue the fd for asynchronous
-flushing and block in FileStore::_do_op if we have exceeded any hard
-limits until the background flusher catches up.
-
-The relevant config options are filestore_wbthrottle*. There are
-different defaults for XFS and Btrfs. Each set has hard and soft
-limits on bytes (total dirty bytes), ios (total dirty ios), and
-inodes (total dirty fds). The WBThrottle will begin flushing
-when any of these hits the soft limit and will block in throttle()
-while any has exceeded the hard limit.
-
-Tighter soft limits will cause writeback to happen more quickly,
-but may cause the OSD to miss opportunities for write coalescing.
-Tighter hard limits may cause a reduction in latency variance by
-reducing time spent flushing the journal, but may reduce writeback
-parallelism.
-
-op_queue_throttle
------------------
-The op queue throttle is intended to bound the amount of queued but
-uncompleted work in the filestore by delaying threads calling
-queue_transactions more and more based on how many ops and bytes are
-currently queued. The throttle is taken in queue_transactions and
-released when the op is applied to the file system. This period
-includes time spent in the journal queue, time spent writing to the
-journal, time spent in the actual op queue, time spent waiting for the
-wbthrottle to open up (thus, the wbthrottle can push back indirectly
-on the queue_transactions caller), and time spent actually applying
-the op to the file system. A BackoffThrottle is used to gradually
-delay the queueing thread after each throttle becomes more than
-filestore_queue_low_threshhold full (a ratio of
-filestore_queue_max_(bytes|ops)). The throttles will block once the
-max value is reached (filestore_queue_max_(bytes|ops)).
-
-The significant config options are:
-filestore_queue_low_threshhold
-filestore_queue_high_threshhold
-filestore_expected_throughput_ops
-filestore_expected_throughput_bytes
-filestore_queue_high_delay_multiple
-filestore_queue_max_delay_multiple
-
-While each throttle is at less than low_threshold of the max,
-no delay happens. Between low and high, the throttle will
-inject a per-op delay (per op or byte) ramping from 0 at low to
-high_delay_multiple/expected_throughput at high. From high to
-1, the delay will ramp from high_delay_multiple/expected_throughput
-to max_delay_multiple/expected_throughput.
-
-filestore_queue_high_delay_multiple and
-filestore_queue_max_delay_multiple probably do not need to be
-changed.
-
-Setting these properly should help to smooth out op latencies by
-mostly avoiding the hard limit.
-
-See FileStore::throttle_ops and FileSTore::throttle_bytes.
-
-journal usage throttle
-----------------------
-See src/os/filestore/JournalThrottle.h/cc
-
-The intention of the journal usage throttle is to gradually slow
-down queue_transactions callers as the journal fills up in order
-to smooth out hiccup during filestore syncs. JournalThrottle
-wraps a BackoffThrottle and tracks journaled but not flushed
-journal entries so that the throttle can be released when the
-journal is flushed. The configs work very similarly to the
-op_queue_throttle.
-
-The significant config options are:
-journal_throttle_low_threshhold
-journal_throttle_high_threshhold
-filestore_expected_throughput_ops
-filestore_expected_throughput_bytes
-journal_throttle_high_multiple
-journal_throttle_max_multiple
-
-.. literalinclude:: osd_throttles.txt
@@ -1,21 +0,0 @@
-[ASCII-art diagram removed with this file: the FileStore write path and its throttles. It charted the Messenger throttle (number and size); the FileStore op_queue throttle (number and size, including a soft throttle based on filestore_expected_throughput_(ops|bytes)); the WBThrottle; and the Journal (size, with a soft throttle based on filestore_expected_throughput_bytes) against the op flow: Read Header --DispatchQ--> OSD::_dispatch --OpWQ--> PG::do_request --journalq--> Journal --FileStore::OpWQ--> Apply Thread --Finisher--> op_applied --> Complete, with the sub-op path mirroring it through sub_op_applied and flushed/synced markers at the end.]
@@ -14,10 +14,10 @@
         was designed specifically for use with Ceph. BlueStore was
         introduced in the Ceph Kraken release. In the Ceph Luminous
         release, BlueStore became Ceph's default storage back end,
-        supplanting FileStore. Unlike :term:`filestore`, BlueStore
-        stores objects directly on Ceph block devices without any file
-        system interface. Since Luminous (12.2), BlueStore has been
-        Ceph's default and recommended storage back end.
+        supplanting FileStore. BlueStore stores objects directly on
+        Ceph block devices without any file system interface.
+        Since Luminous (12.2), BlueStore has been Ceph's default
+        and recommended storage back end.

     Bucket
         In the context of :term:`RGW`, a bucket is a group of objects.
@@ -234,10 +234,6 @@
         Another name for :term:`Dashboard`.

     Dashboard Plugin
-    filestore
-        A back end for OSD daemons, where a Journal is needed and files
-        are written to the filesystem.
-
     FQDN
         **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name
         that is applied to a node in a network and that specifies the
@@ -353,45 +353,6 @@ activate):
    sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993


-filestore
-^^^^^^^^^
-#. Create the OSD. ::
-
-     ssh {osd node}
-     sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path}
-
-   For example::
-
-     ssh osd-node1
-     sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2
-
-   Alternatively, the creation process can be split in two phases (prepare, and
-   activate):
-
-#. Prepare the OSD. ::
-
-     ssh {node-name}
-     sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path}
-
-   For example::
-
-     ssh osd-node1
-     sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2
-
-   Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for
-   activation. These can be obtained by listing OSDs in the current server::
-
-     sudo ceph-volume lvm list
-
-#. Activate the OSD::
-
-     sudo ceph-volume lvm activate --filestore {ID} {FSID}
-
-   For example::
-
-     sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993
-
-
 Long Form
 ---------

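With the Filestore short form removed, the surviving BlueStore workflow follows the same shape; a sketch with placeholder paths and ids::

    ssh {osd node}
    # single-step creation
    sudo ceph-volume lvm create --bluestore --data {data-path}
    # or split into prepare + activate
    sudo ceph-volume lvm prepare --bluestore --data {data-path}
    sudo ceph-volume lvm list          # look up the ID and FSID of the prepared OSD
    sudo ceph-volume lvm activate --bluestore {ID} {FSID}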
@@ -80,9 +80,8 @@ batch

 .. program:: ceph-volume lvm batch

-Creates OSDs from a list of devices using a ``filestore``
-or ``bluestore`` (default) setup. It will create all necessary volume groups
-and logical volumes required to have a working OSD.
+Creates OSDs from a list of devices using a ``bluestore`` (default) setup.
+It will create all necessary volume groups and logical volumes required to have a working OSD.

 Example usage with three devices::

@@ -98,10 +97,6 @@ Optional arguments:

    Use the bluestore objectstore (default)

-.. option:: --filestore
-
-   Use the filestore objectstore
-
 .. option:: --yes

    Skip the report and prompt to continue provisioning
@@ -179,10 +174,6 @@ Optional Arguments:

    bluestore objectstore (default)

-.. option:: --filestore
-
-   filestore objectstore
-
 .. option:: --all

    Activate all OSDs found in the system
@@ -202,13 +193,12 @@ prepare

 .. program:: ceph-volume lvm prepare

-Prepares a logical volume to be used as an OSD and journal using a ``filestore``
-or ``bluestore`` (default) setup. It will not create or modify the logical volumes
-except for adding extra metadata.
+Prepares a logical volume to be used as an OSD and journal using a ``bluestore`` (default) setup.
+It will not create or modify the logical volumes except for adding extra metadata.

 Usage::

-    ceph-volume lvm prepare --filestore --data <data lv> --journal <journal device>
+    ceph-volume lvm prepare --bluestore --data <data lv> --journal <journal device>

 Optional arguments:

@@ -232,10 +222,6 @@ Optional arguments:

    Path to a bluestore block.db logical volume or partition

-.. option:: --filestore
-
-   Use the filestore objectstore
-
 .. option:: --dmcrypt

    Enable encryption for the underlying OSD devices
@@ -493,10 +479,6 @@ Optional Arguments:

    bluestore objectstore (default)

-.. option:: --filestore
-
-   filestore objectstore
-
 .. note::

    It requires a matching JSON file with the following format::
@@ -831,8 +831,7 @@ Per mapping (block device) `rbd device map` options:
   drop discards that are too small. For bluestore, the recommended setting is
   bluestore_min_alloc_size (currently set to 4K for all types of drives,
   previously used to be set to 64K for hard disk drives and 16K for
-  solid-state drives). For filestore with filestore_punch_hole = false, the
-  recommended setting is image object size (typically 4M).
+  solid-state drives).

 * crush_location=x - Specify the location of the client in terms of CRUSH
   hierarchy (since 5.8). This is a set of key-value pairs separated from
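Assuming the setting being described here is the ``alloc_size`` map option of this man page, applying the BlueStore-oriented recommendation would look roughly like this (pool and image names are placeholders)::

    # map an image with the discard granularity aligned to bluestore_min_alloc_size (4K)
    sudo rbd device map -o alloc_size=4096 rbd/myimage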
@@ -103,20 +103,6 @@ Reference`_.
 OSDs
 ====

-When Ceph production clusters deploy :term:`Ceph OSD Daemons`, the typical
-arrangement is that one node has one OSD daemon running Filestore on one
-storage device. BlueStore is now the default back end, but when using Filestore
-you must specify a journal size. For example:
-
-.. code-block:: ini
-
-    [osd]
-    osd_journal_size = 10000
-
-    [osd.0]
-    host = {hostname} #manual deployments only.
-
-
 By default, Ceph expects to store a Ceph OSD Daemon's data on the following
 path::

@@ -1,6 +1,7 @@
 ============================
  Filestore Config Reference
 ============================
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.

 The Filestore back end is no longer the default when creating new OSDs,
 though Filestore OSDs are still supported.
@@ -1,7 +1,7 @@
 ==========================
  Journal Config Reference
 ==========================
-
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
 .. index:: journal; journal configuration

 Filestore OSDs use a journal for two reasons: speed and consistency. Note
@@ -7,10 +7,6 @@
 QoS support in Ceph is implemented using a queuing scheduler based on `the
 dmClock algorithm`_. See :ref:`dmclock-qos` section for more details.

-.. note:: The *mclock_scheduler* is supported for BlueStore OSDs. For Filestore
-          OSDs the *osd_op_queue* is set to *wpq* and is enforced even if you
-          attempt to change it.
-
 To make the usage of mclock more user-friendly and intuitive, mclock config
 profiles are introduced. The mclock profiles mask the low level details from
 users, making it easier to configure and use mclock.
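Since mclock now applies to (BlueStore) OSDs without the Filestore caveat, enabling and tuning it goes through the usual config options; a hedged sketch::

    # use the mclock scheduler (wpq remains the alternative value of osd_op_queue)
    ceph config set osd osd_op_queue mclock_scheduler
    # pick one of the built-in profiles, e.g. high_client_ops
    ceph config set osd osd_mclock_profile high_client_ops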
@@ -7,7 +7,7 @@
 You can configure Ceph OSD Daemons in the Ceph configuration file (or in recent
 releases, the central config store), but Ceph OSD
 Daemons can use the default values and a very minimal configuration. A minimal
-Ceph OSD Daemon configuration sets ``osd journal size`` (for Filestore), ``host``, and
+Ceph OSD Daemon configuration sets ``host`` and
 uses default values for nearly everything else.

 Ceph OSD Daemons are numerically identified in incremental fashion, beginning
@@ -71,6 +71,8 @@ For more information, see :doc:`bluestore-config-ref` and :doc:`/rados/operation

 FileStore
 ---------
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
+

 FileStore is the legacy approach to storing objects in Ceph. It
 relies on a standard file system (normally XFS) in combination with a
@@ -1,6 +1,8 @@
 =====================
  BlueStore Migration
 =====================
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
+             Please migrate to BlueStore.

 Each OSD must be formatted as either Filestore or BlueStore. However, a Ceph
 cluster can operate with a mixture of both Filestore OSDs and BlueStore OSDs.
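A quick way to see whether any OSDs in such a mixed cluster are still running Filestore before migrating them to BlueStore; commands as found in current Ceph releases, treat as a sketch::

    # per-objectstore OSD counts for the whole cluster
    ceph osd count-metadata osd_objectstore
    # objectstore backend of a single OSD
    ceph osd metadata 0 | grep osd_objectstore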