
doc/rbd: improve nvmeof-requirements.rst

Address quibbles raised in the course of assessing
https://github.com/ceph/ceph/pull/61982.

Signed-off-by: Zac Dover <zac.dover@proton.me>
Zac Dover 2025-02-27 22:07:39 +10:00
parent 05ec90da5d
commit fa0598af33

@@ -2,28 +2,27 @@
 NVMe-oF Gateway Requirements
 ============================
-- At least 8 GB of RAM dedicated to each gateway instance
-- It is hightly recommended to dedicate at least four CPU threads / vcores to each
-  gateway. One can work but performance may be below expectations. It is
-  ideal to dedicate servers to NVMe-oF gateway service so that these
-  and other Ceph services do not degrade each other.
-- At minimum a 10 Gb/s network link to the Ceph public network. For best
-  latency and throughput we recommend 25 Gb/s or 100 Gb/s links.
-- Bonding of network links, with an appropriate xmit hash policy, is ideal
-  for high availability. Note that the throughput of a given NVMe-oF client
-  can be no higher than that of a single link within a bond. Thus, if four
-  10 Gb/s links are bonded together on gateway nodes, no one client will
-  realize more than 10 Gb/s throughput. Moreover, remember that Ceph
-  NVMe-oF gateways also communicate with backing OSDs over the public
-  network at the same time, which contends with traffic between clients
-  and gateways. Provision networking generously to avoid congestion and
-  saturation.
-- Provision at least two NVMe-oF gateways in a gateway group, on separate
-  Ceph cluster nodes, for a highly-availability Ceph NVMe/TCP solution.
+- At least 8 GB of RAM dedicated to each NVMe-oF gateway instance
+- We highly recommend dedicating at least four CPU threads or vcores to each
+  NVMe-oF gateway. A setup with only one CPU thread or vcore can work, but
+  performance may be below expectations. It is preferable to dedicate servers
+  to the NVMe-oF gateway service so that these and other Ceph services do not
+  degrade each other.
+- Provide at minimum a 10 Gb/s network link to the Ceph public network. For
+  best latency and throughput, we recommend 25 Gb/s or 100 Gb/s links.
+- Bonding of network links, with an appropriate xmit hash policy, is ideal for
+  high availability. Note that the throughput of a given NVMe-oF client can be
+  no higher than that of a single link within a bond. Thus, if four 10 Gb/s
+  links are bonded together on gateway nodes, no one client will realize more
+  than 10 Gb/s throughput. Remember that Ceph NVMe-oF gateways also communicate
+  with backing OSDs over the public network at the same time, which contends
+  with traffic between clients and gateways. Make sure to provision networking
+  resources generously to avoid congestion and saturation.
+- Provision at least two NVMe-oF gateways in a gateway group, on separate Ceph
+  cluster nodes, for a highly-available Ceph NVMe/TCP solution.
 - Ceph NVMe-oF gateway containers comprise multiple components that communicate
-  with each other. If the nodes running these containers require HTTP/HTTPS
-  proxy configuration to reach container registries or other external resources,
-  these settings may confound this internal communication. If you experience
-  gRPC or other errors when provisioning NVMe-oF gateways, you may need to
-  adjust your proxy configuration.
+  with each other. If the nodes that run these containers require HTTP/HTTPS
+  proxy configuration to reach container registries or other external
+  resources, these settings may confound this internal communication. If you
+  experience gRPC or other errors when provisioning NVMe-oF gateways, you may
+  need to adjust your proxy configuration.
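
As an illustration of the bonding guidance in the updated text, the following
sketch creates an LACP (802.3ad) bond with a layer3+4 transmit hash policy
using NetworkManager's nmcli. It is not part of the patch; the interface names
eth0 and eth1, the connection names, and the bonding mode are placeholders and
must match your NICs and switch configuration::

   # Create the bond with LACP and a layer3+4 transmit hash policy.
   nmcli connection add type bond con-name bond0 ifname bond0 \
       bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"

   # Attach two physical links to the bond (interface names are placeholders).
   nmcli connection add type ethernet slave-type bond con-name bond0-port1 \
       ifname eth0 master bond0
   nmcli connection add type ethernet slave-type bond con-name bond0-port2 \
       ifname eth1 master bond0

   # Activate the bond.
   nmcli connection up bond0

A layer3+4 hash distributes flows across bond members by IP address and TCP
port, so separate NVMe/TCP connections can use different links, but any single
connection is still limited to the bandwidth of one member link, as noted
above.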
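
To illustrate the recommendation to provision at least two gateways in a
gateway group, a cephadm service specification along the following lines
places gateways on two separate cluster nodes. This is a sketch only: the pool
name, group name, service id, and host names are placeholders, and the exact
set of spec fields varies by Ceph release, so check the cephadm NVMe-oF
documentation for your version::

   # nvmeof-gw.yaml (hypothetical example)
   service_type: nvmeof
   service_id: rbd.group1
   placement:
     hosts:
       - ceph-node-1
       - ceph-node-2
   spec:
     pool: rbd
     group: group1

Apply the specification with ``ceph orch apply -i nvmeof-gw.yaml``. Placing
the two gateways on separate nodes is what allows one gateway to fail or be
upgraded without taking down the NVMe/TCP service.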
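
Regarding the proxy note, the usual remedy is to make sure that proxy
environment variables exclude loopback addresses and the Ceph public network,
so that the gateway components' internal gRPC traffic is not routed through
the proxy. A minimal sketch, assuming proxy settings are supplied through
/etc/environment and that the proxy host and addresses shown are
placeholders::

   # /etc/environment (or the container engine's proxy drop-in)
   HTTP_PROXY=http://proxy.example.com:3128
   HTTPS_PROXY=http://proxy.example.com:3128
   # Keep localhost and the Ceph public network out of the proxy path.
   # Whether CIDR entries in NO_PROXY are honored depends on the tool.
   NO_PROXY=localhost,127.0.0.1,::1,10.1.0.0/16

After adjusting the proxy settings, restart or redeploy the affected gateway
daemons so that the containers pick up the new environment.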