doc/start: update hardware recs
This PR picks up the parts of https://github.com/ceph/ceph/pull/44466 that
were not merged back in January, when that pull request was raised. Matters
added here:

* improved organization of the material
* emphasis on IOPs per core over cores per OSD

Signed-off-by: Zac Dover <zac.dover@gmail.com>
parent 45ecfc6ac7
commit 29c44a3bd6
@@ -21,21 +21,30 @@ data cluster (e.g., OpenStack, CloudStack, etc).
CPU
===

-CephFS metadata servers are CPU intensive, so they should have significant
-processing power (e.g., quad core or better CPUs) and benefit from higher clock
-rate (frequency in GHz). Ceph OSDs run the :term:`RADOS` service, calculate
-data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
-cluster map. Therefore, OSD nodes should have a reasonable amount of processing
-power. Requirements vary by use-case; a starting point might be one core per
-OSD for light / archival usage, and two cores per OSD for heavy workloads such
-as RBD volumes attached to VMs. Monitor / manager nodes do not have heavy CPU
-demands so a modest processor can be chosen for them. Also consider whether the
-host machine will run CPU-intensive processes in addition to Ceph daemons. For
-example, if your hosts will run computing VMs (e.g., OpenStack Nova), you will
-need to ensure that these other processes leave sufficient processing power for
-Ceph daemons. We recommend running additional CPU-intensive processes on
-separate hosts to avoid resource contention.
+CephFS metadata servers (MDS) are CPU-intensive and should therefore have
+quad-core (or better) CPUs and high clock rates (GHz). OSD nodes need enough
+processing power to run the RADOS service, to calculate data placement with
+CRUSH, to replicate data, and to maintain their own copies of the cluster map.
+
+The requirements of one Ceph cluster are not the same as the requirements of
+another, but here are some general guidelines.
+
+In earlier versions of Ceph, we made hardware recommendations based on the
+number of cores per OSD, but the cores-per-OSD metric is no longer as useful
+as the number of cycles per IOP and the number of IOPs per OSD. For example,
+for NVMe drives, Ceph can easily utilize five or six cores on real clusters
+and up to about fourteen cores on single OSDs in isolation, so cores per OSD
+is no longer as pressing a concern as it once was. When selecting hardware,
+select for IOPs per core.
+
+Monitor nodes and manager nodes have no heavy CPU demands and require only
+modest processors. If your host machines will run CPU-intensive processes in
+addition to Ceph daemons, make sure that you have enough processing power for
+both. (OpenStack Nova is one example of such a CPU-intensive process.) We
+recommend running non-Ceph CPU-intensive processes on separate hosts (that
+is, on hosts other than your monitor and manager nodes) to avoid resource
+contention.

RAM
===
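A note on the "select for IOPs per core" guidance added above: the relationship
between IOPs per OSD, cycles per IOP, and core count is easy to sanity-check
with a back-of-the-envelope calculation. The sketch below is illustrative only;
the function and every number in it (target IOPs per OSD, cycles per IOP, clock
rate) are hypothetical assumptions chosen to land near the "five or six cores"
NVMe example, not figures taken from Ceph documentation::

    # Back-of-the-envelope sizing: cores needed per OSD, derived from a target
    # IOPs-per-OSD figure and an assumed CPU cost (cycles) per IOP.
    # Every number here is a hypothetical assumption, not a Ceph-published figure.

    def cores_per_osd(target_iops, cycles_per_iop, clock_hz):
        """Cores one OSD needs to sustain target_iops at the given per-IOP cost."""
        return (target_iops * cycles_per_iop) / clock_hz

    # An NVMe-backed OSD expected to serve 80,000 4 KiB IOPs, at an assumed
    # ~200,000 cycles of OSD work per IOP, on 3.0 GHz cores:
    needed = cores_per_osd(target_iops=80_000, cycles_per_iop=200_000, clock_hz=3.0e9)
    print(f"~{needed:.1f} cores per OSD")            # ~5.3 cores
    print(f"~{3.0e9 / 200_000:,.0f} IOPs per core")  # ~15,000 IOPs per core

Swapping in measured IOPs and per-IOP cycle counts for your own drives and CPUs
gives a more meaningful estimate than any fixed cores-per-OSD rule.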
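The added recommendation to keep CPU-intensive non-Ceph workloads off monitor
and manager hosts is easier to act on when you can see how much CPU the Ceph
daemons themselves consume. The following rough check is a sketch, not part of
Ceph's tooling: it assumes the third-party psutil package is installed and that
daemon processes use the usual ceph-osd, ceph-mon, ceph-mgr, and ceph-mds name
prefixes::

    # Rough check of CPU use by Ceph daemons versus everything else on one host.
    # Sketch only: needs the third-party `psutil` package, and the daemon-name
    # prefixes below are assumptions about a typical deployment.
    import time
    import psutil

    CEPH_PREFIXES = ("ceph-osd", "ceph-mon", "ceph-mgr", "ceph-mds")

    def cpu_by_group(sample_seconds=5.0):
        procs = list(psutil.process_iter(attrs=["name"]))
        for p in procs:
            try:
                p.cpu_percent(None)          # prime per-process CPU counters
            except psutil.Error:
                pass
        time.sleep(sample_seconds)
        ceph = other = 0.0
        for p in procs:
            try:
                usage = p.cpu_percent(None)  # percent of one core since priming
            except psutil.Error:
                continue
            name = p.info.get("name") or ""
            if name.startswith(CEPH_PREFIXES):
                ceph += usage
            else:
                other += usage
        return ceph, other

    ceph, other = cpu_by_group()
    capacity = psutil.cpu_count() * 100.0
    print(f"Ceph daemons:    {ceph:6.1f}% of {capacity:.0f}% core capacity")
    print(f"everything else: {other:6.1f}%")

If the non-Ceph share regularly crowds the Ceph daemons, that is the contention
the paragraph above recommends avoiding by moving those workloads to separate
hosts.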