doc/cephfs: add note to isolate metadata pool osds

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Patrick Donnelly 2023-07-07 08:42:58 -04:00
parent 4c13a61ab2
commit 4e2e61f164
2 changed files with 6 additions and 0 deletions


@@ -15,6 +15,10 @@ There are important considerations when planning these pools:
- We recommend the fastest feasible low-latency storage devices (NVMe, Optane,
or at the very least SAS/SATA SSD) for the metadata pool, as this will
directly affect the latency of client file system operations.
- We strongly suggest that the CephFS metadata pool be provisioned on dedicated
SSD / NVMe OSDs. This ensures that a high client workload does not adversely
impact metadata operations. See :ref:`device_classes` for how to configure
pools this way; a brief command sketch follows below.
- The data pool used to create the file system is the "default" data pool and
the location for storing all inode backtrace information, which is used for hard link
management and disaster recovery. For this reason, all CephFS inodes
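
Below is a minimal sketch of the configuration the new note refers to. The
pool name ``cephfs.fs.meta``, the rule name ``ssd-metadata``, and the ``ssd``
device class are placeholders for illustration; see :ref:`device_classes` for
the full background.

# Create a replicated CRUSH rule that places replicas only on OSDs
# tagged with the "ssd" device class, with "host" as the failure domain.
ceph osd crush rule create-replicated ssd-metadata default host ssd

# Point the (assumed) metadata pool at that rule so its objects land
# only on the dedicated SSD/NVMe OSDs.
ceph osd pool set cephfs.fs.meta crush_rule ssd-metadata

The assignment can then be checked with
``ceph osd pool get cephfs.fs.meta crush_rule``.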


@@ -221,6 +221,8 @@ To view the contents of the rules, run the following command:
ceph osd crush rule dump
.. _device_classes:
Device classes
--------------
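
As a hedged illustration of what the ``device_classes`` section covers, the
commands below show how device classes can be inspected and reassigned. The
OSD IDs and the ``nvme`` class are placeholders.

# List the device classes CRUSH currently knows about.
ceph osd crush class ls

# Reassign OSDs 0-2 to the "nvme" class; the old class must be removed
# before a new one can be set.
ceph osd crush rm-device-class osd.0 osd.1 osd.2
ceph osd crush set-device-class nvme osd.0 osd.1 osd.2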