doc/cephfs: add note to isolate metadata pool osds
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
commit 4e2e61f164 (parent 4c13a61ab2)
@@ -15,6 +15,10 @@ There are important considerations when planning these pools:
 - We recommend the fastest feasible low-latency storage devices (NVMe, Optane,
   or at the very least SAS/SATA SSD) for the metadata pool, as this will
   directly affect the latency of client file system operations.
+- We strongly suggest that the CephFS metadata pool be provisioned on dedicated
+  SSD / NVMe OSDs. This ensures that high client workload does not adversely
+  impact metadata operations. See :ref:`device_classes` to configure pools this
+  way.
 - The data pool used to create the file system is the "default" data pool and
   the location for storing all inode backtrace information, which is used for
   hard link management and disaster recovery. For this reason, all CephFS inodes
@@ -221,6 +221,8 @@ To view the contents of the rules, run the following command:
 
    ceph osd crush rule dump
 
+.. _device_classes:
+
 Device classes
 --------------
 
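
The device-class configuration that the new note points to could look roughly like the sketch below. This is a minimal, hedged example, not part of this commit: the rule name ``fast-metadata`` and the pool name ``cephfs_metadata`` are illustrative placeholders, and the root (``default``) and failure domain (``host``) should match the actual CRUSH hierarchy.

   # Create a replicated CRUSH rule that places data only on OSDs with the
   # "ssd" device class (root "default", failure domain "host").
   # Names here are illustrative placeholders.
   ceph osd crush rule create-replicated fast-metadata default host ssd

   # Assign the CephFS metadata pool to that rule so metadata is served from
   # the dedicated SSD/NVMe OSDs.
   ceph osd pool set cephfs_metadata crush_rule fast-metadata

   # Confirm which rule the pool now uses.
   ceph osd pool get cephfs_metadata crush_rule

Inspecting the result with ``ceph osd crush rule dump``, as shown in the second hunk, is one way to verify that the rule restricts placement to the intended device class.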