Merge pull request #46659 from anthonyeleven/anthonyeleven-46637-followup

doc/start: Polish network section of hardware-recommendations.rst
Anthony D'Atri 2022-06-13 16:58:08 -07:00 committed by GitHub
commit bb5f95a15a


@@ -375,43 +375,47 @@ multiple OSDs per host.
Networks
========
Provision at least 10 Gb/s networking in your racks.
Speed
-----
It takes three hours to replicate 1 TB of data across a 1 Gb/s network and it
takes thirty hours to replicate 10 TB across a 1 Gb/s network. But it takes only
twenty minutes to replicate 1 TB across a 10 Gb/s network, and it takes
only one hour to replicate 10 TB across a 10 Gb/s network.
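As a rough sanity check on those figures (assuming approximately 1 Gb/s and
10 Gb/s of raw line rate; real links lose some capacity to protocol and
replication overhead, which accounts for the difference)::

    1 TB ≈ 8 x 10^12 bits
    8 x 10^12 bits / 10^9  bits/s = 8000 s ≈ 2.2 hours  (≈ 3 hours with overhead)
    8 x 10^12 bits / 10^10 bits/s =  800 s ≈ 13 minutes (≈ 20 minutes with overhead)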
Cost
----
The larger the Ceph cluster, the more common OSD failures will be.
The faster that a placement group (PG) can recover from a ``degraded`` state to
an ``active + clean`` state, the better. Notably, fast recovery minimizes
the likelihood of multiple, overlapping failures that can cause data to become
temporarily unavailable or even lost. Of course, when provisioning your
network, you will have to balance price against performance.
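PG and recovery state can be watched from the command line while a failed OSD
rebuilds; a minimal sketch (output and timing will vary by cluster)::

    ceph status           # overall health plus current recovery / backfill rates
    ceph pg stat          # PG counts by state, e.g. degraded, backfilling, active+clean
    ceph health detail    # lists which PGs are degraded or undersized and why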
Some deployment tools employ VLANs to make hardware and network cabling more
manageable. VLANs that use the 802.1q protocol require VLAN-capable NICs and
switches. The added expense of this hardware may be offset by the operational
cost savings on network setup and maintenance. When using VLANs to handle VM
traffic between the cluster and compute stacks (e.g., OpenStack, CloudStack,
etc.), there is additional value in using 10 Gb/s Ethernet or better; as of 2022,
40 Gb/s or 25/50/100 Gb/s networking is common for production clusters.
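As an illustration only (the interface name ``eth0``, VLAN ID ``42``, and the
address are placeholders, not recommendations), an 802.1q tagged interface can
be created on a Linux host with the ``ip`` tool::

    ip link add link eth0 name eth0.42 type vlan id 42
    ip addr add 192.0.2.10/24 dev eth0.42
    ip link set dev eth0.42 up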
Top-of-rack (TOR) switches also need fast and redundant uplinks to
spine switches / routers, often at least 40 Gb/s.
Baseboard Management Controller (BMC)
-------------------------------------
Your server chassis should have a Baseboard Management Controller (BMC).
Well-known examples are iDRAC (Dell), CIMC (Cisco UCS), and iLO (HPE).
Administration and deployment tools may also use BMCs extensively, especially
via IPMI or Redfish, so consider the cost/benefit tradeoff of an out-of-band
network for security and administration. Hypervisor SSH access, VM image uploads,
OS image installs, management sockets, etc. can impose significant loads on a network.
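For example, out-of-band power control and sensor queries commonly go through
IPMI; a minimal ``ipmitool`` sketch (the hostname and credentials are
placeholders)::

    ipmitool -I lanplus -H bmc.example.com -U admin -P secret power status
    ipmitool -I lanplus -H bmc.example.com -U admin -P secret sensor list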
Running three networks may seem like overkill, but each traffic path represents
a potential capacity, throughput and/or performance bottleneck that you should
carefully consider before deploying a large-scale data cluster.
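Two of these networks likely map to Ceph options: the public (client-facing)
network and the optional cluster (replication and heartbeat) network; the third
is the out-of-band BMC/management network, which Ceph itself does not configure.
A sketch with example subnets (substitute your own)::

    ceph config set global public_network  10.0.0.0/24
    ceph config set global cluster_network 10.1.0.0/24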