============================
NVMe-oF Gateway Requirements
============================

We recommend that you provision at least two NVMe/TCP gateways on different
nodes to implement a highly-available Ceph NVMe/TCP solution.
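
For example, with a ``cephadm``-managed cluster, two gateway daemons can be
placed on separate nodes by listing both hosts in the placement
specification. This is a minimal sketch; the pool name ``rbd_pool`` and the
host names ``node-1`` and ``node-2`` are placeholders:

.. prompt:: bash #

   ceph orch apply nvmeof rbd_pool --placement="node-1 node-2"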

We recommend at least a single 10Gb Ethernet link in the Ceph public
network for the gateway. For hardware recommendations, see
:ref:`hardware-recommendations`.

.. note:: On the NVMe-oF gateway, the memory footprint is a function of the
   number of mapped RBD images and can grow to be large. Plan memory
   requirements accordingly, based on the number of RBD images to be mapped.