.. _hardware-recommendations:

==========================
 Hardware Recommendations
==========================

Ceph was designed to run on commodity hardware, which makes building and
maintaining petabyte-scale data clusters economically feasible.
When planning out your cluster hardware, you will need to balance a number
of considerations, including failure domains and potential performance
issues. Hardware planning should include distributing Ceph daemons and
other processes that use Ceph across many hosts. Generally, we recommend
running Ceph daemons of a specific type on a host configured for that type
of daemon. We recommend using other hosts for processes that utilize your
data cluster (e.g., OpenStack, CloudStack, etc.).

.. tip:: Check out the `Ceph blog`_ too.

CPU
===

CephFS metadata servers are CPU intensive, so they should have significant
processing power (e.g., quad core or better CPUs) and benefit from higher clock
rates (frequency in GHz). Ceph OSDs run the :term:`RADOS` service, calculate
data placement with :term:`CRUSH`, replicate data, and maintain their own copy of the
cluster map. Therefore, OSD nodes should have a reasonable amount of processing
power. Requirements vary by use case; a starting point might be one core per
OSD for light / archival usage, and two cores per OSD for heavy workloads such
as RBD volumes attached to VMs. Monitor / manager nodes do not have heavy CPU
demands, so a modest processor can be chosen for them. Also consider whether the
host machine will run CPU-intensive processes in addition to Ceph daemons. For
example, if your hosts will run computing VMs (e.g., OpenStack Nova), you will
need to ensure that these other processes leave sufficient processing power for
Ceph daemons. We recommend running additional CPU-intensive processes on
separate hosts to avoid resource contention.

RAM
===

Generally, more RAM is better. Monitor / manager nodes for a modest cluster
might do fine with 64GB; for a larger cluster with hundreds of OSDs, 128GB
is a reasonable target. There is a memory target for BlueStore OSDs that
defaults to 4GB. Factor in a prudent margin for the operating system and
administrative tasks (like monitoring and metrics) as well as increased
consumption during recovery: provisioning ~8GB per BlueStore OSD
is advised.

Monitors and managers (ceph-mon and ceph-mgr)
---------------------------------------------

Monitor and manager daemon memory usage generally scales with the size of the
cluster. Note that at boot-time and during topology changes and recovery these
daemons will need more RAM than they do during steady-state operation, so plan
for peak usage. For very small clusters, 32 GB suffices. For clusters of up to,
say, 300 OSDs, go with 64GB. For clusters built with (or which will grow to)
even more OSDs you should provision 128GB. You may also want to consider
tuning the following settings (see the example after this list):

* :confval:`mon_osd_cache_size`
* :confval:`rocksdb_cache_size`

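These options can be changed at runtime with ``ceph config set``. The following
is a minimal sketch; the values are illustrative placeholders rather than tuned
recommendations, and some options only take full effect after the daemons are
restarted:

.. code-block:: console

   # ceph config set mon mon_osd_cache_size 1000        # cache more OSDMaps (default 500)
   # ceph config set mon rocksdb_cache_size 1073741824  # RocksDB cache in bytes (1 GiB here)
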
Metadata servers (ceph-mds)
---------------------------

The metadata daemon memory utilization depends on how much memory its cache is
configured to consume. We recommend 1 GB as a minimum for most systems. See
:confval:`mds_cache_memory_limit`.

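For example, the cache limit can be raised at runtime with ``ceph config set``.
This is a minimal sketch; the 2 GiB value is illustrative only and should be
sized to your metadata working set and available RAM:

.. code-block:: console

   # ceph config set mds mds_cache_memory_limit 2147483648   # 2 GiB, in bytes

Note that the MDS process typically consumes somewhat more memory than this
limit, because the setting governs only the cache.
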
Memory
======

BlueStore uses its own memory to cache data rather than relying on the
operating system's page cache. With BlueStore you can adjust the amount of
memory that the OSD attempts to consume with the :confval:`osd_memory_target`
configuration option; an example of changing it at runtime follows the list
below.

- Setting the :confval:`osd_memory_target` below 2GB is typically not
  recommended (it may fail to keep memory consumption that low and may also
  cause extremely slow performance).

- Setting the memory target between 2GB and 4GB typically works but may result
  in degraded performance: metadata may be read from disk during IO unless the
  active data set is relatively small.

- 4GB is the current default :confval:`osd_memory_target` size. This default
  was chosen for typical use cases, and is intended to balance memory
  requirements and OSD performance.

- Setting the :confval:`osd_memory_target` higher than 4GB can improve
  performance when there are many (small) objects or when large (256GB/OSD
  or more) data sets are processed.

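As an example of adjusting the target at runtime, the current value can be
inspected and changed with ``ceph config``. This is a minimal sketch; the 6 GiB
figure is purely illustrative and should be chosen to fit within your host's
RAM budget, including the extra headroom discussed below:

.. code-block:: console

   # ceph config get osd osd_memory_target
   4294967296
   # ceph config set osd osd_memory_target 6442450944   # 6 GiB, applied to all OSDs
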
.. important:: OSD memory autotuning is "best effort". Although the OSD may
   unmap memory to allow the kernel to reclaim it, there is no guarantee that
   the kernel will actually reclaim freed memory within a specific time
   frame. This is especially true in older versions of Ceph, where transparent
   huge pages can prevent the kernel from reclaiming memory that was freed from
   fragmented huge pages. Modern versions of Ceph disable transparent huge
   pages at the application level to avoid this, but that does not
   guarantee that the kernel will immediately reclaim unmapped memory. The OSD
   may still at times exceed its memory target. We recommend budgeting
   approximately 20% extra memory on your system to prevent OSDs from going OOM
   (**O**\ut **O**\f **M**\emory) during temporary spikes or due to delay in
   the kernel reclaiming freed pages. That 20% value might be more or less than
   needed, depending on the exact configuration of the system.

When using the legacy FileStore backend, the page cache is used for caching
data, so no tuning is normally needed. In this case, OSD memory consumption is
generally related to the number of PGs per daemon in the system.

Data Storage
============

Plan your data storage configuration carefully. There are significant cost and
performance tradeoffs to consider when planning for data storage. Simultaneous
OS operations and simultaneous requests from multiple daemons for read and
write operations against a single drive can slow performance.

Hard Disk Drives
----------------

OSDs should have plenty of storage drive space for object data. We recommend a
minimum disk drive size of 1 terabyte. Consider the cost-per-gigabyte advantage
of larger disks. We recommend dividing the price of the disk drive by the
number of gigabytes to arrive at a cost per gigabyte, because larger drives may
have a significant impact on the cost-per-gigabyte. For example, a 1 terabyte
hard disk priced at $75.00 has a cost of $0.07 per gigabyte (i.e., $75 / 1024 =
0.0732). By contrast, a 3 terabyte disk priced at $150.00 has a cost of $0.05
per gigabyte (i.e., $150 / 3072 = 0.0488). In the foregoing example, using the
1 terabyte disks would generally increase the cost per gigabyte by
40%, rendering your cluster substantially less cost efficient.

.. tip:: Running multiple OSDs on a single SAS / SATA drive
   is **NOT** a good idea. NVMe drives, however, can achieve
   improved performance by being split into two or more OSDs.

.. tip:: Running an OSD and a monitor or a metadata server on a single
   drive is also **NOT** a good idea.

.. tip:: With spinning disks, the SATA and SAS interface increasingly
   becomes a bottleneck at larger capacities. See also the `Storage Networking
   Industry Association's Total Cost of Ownership calculator`_.

Storage drives are subject to limitations on seek time, access time, read and
write times, as well as total throughput. These physical limitations affect
overall system performance, especially during recovery. We recommend using a
dedicated (ideally mirrored) drive for the operating system and software, and
one drive for each Ceph OSD Daemon you run on the host (modulo NVMe above).
Many "slow OSD" issues (when they are not attributable to hardware failure)
arise from running an operating system and multiple OSDs on the same drive.

It is technically possible to run multiple Ceph OSD Daemons per SAS / SATA
drive, but this will lead to resource contention and diminish overall
throughput.

To get the best performance out of Ceph, run the following on separate drives:
(1) operating systems, (2) OSD data, and (3) BlueStore db. For more
information on how to effectively use a mix of fast drives and slow drives in
your Ceph cluster, see the `block and block.db`_ section of the BlueStore
Configuration Reference.

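For example, when OSDs are created directly with ``ceph-volume``, the BlueStore
db can be placed on a faster device. This is a minimal sketch; the device paths
are placeholders for your own HDD and NVMe partition, and orchestrated
deployments (e.g., cephadm service specs) express the same split differently:

.. code-block:: console

   # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
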
Solid State Drives
------------------

Ceph performance can be improved by using solid-state drives (SSDs). This
reduces random access time and reduces latency while accelerating throughput.

SSDs cost more per gigabyte than do hard disk drives, but SSDs often offer
access times that are, at a minimum, 100 times faster than hard disk drives.
SSDs avoid hotspot issues and bottleneck issues within busy clusters, and
they may offer better economics when TCO is evaluated holistically.

SSDs do not have moving mechanical parts, so they are not necessarily subject
to the same types of limitations as hard disk drives. SSDs do have significant
limitations though. When evaluating SSDs, it is important to consider the
performance of sequential reads and writes.

.. important:: We recommend exploring the use of SSDs to improve performance.
   However, before making a significant investment in SSDs, we **strongly
   recommend** reviewing the performance metrics of an SSD and testing the
   SSD in a test configuration in order to gauge performance.

Relatively inexpensive SSDs may appeal to your sense of economy. Use caution.
Acceptable IOPS are not the only factor to consider when selecting an SSD for
use with Ceph.

SSDs have historically been cost prohibitive for object storage, but emerging
QLC drives are closing the gap, offering greater density with lower power
consumption and less power spent on cooling. HDD OSDs may see a significant
performance improvement by offloading WAL+DB onto an SSD.

To get a better sense of the factors that determine the cost of storage, you
might use the `Storage Networking Industry Association's Total Cost of
Ownership calculator`_.

Partition Alignment
~~~~~~~~~~~~~~~~~~~

When using SSDs with Ceph, make sure that your partitions are properly aligned.
Improperly aligned partitions suffer slower data transfer speeds than do
properly aligned partitions. For more information about proper partition
alignment and example commands that show how to align partitions properly, see
`Werner Fischer's blog post on partition alignment`_.

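As a quick sanity check, ``parted`` can report whether an existing partition is
aligned to the device's optimal I/O boundaries. This is a minimal sketch; the
device and partition number are placeholders:

.. code-block:: console

   # parted /dev/nvme0n1 align-check optimal 1
   1 aligned
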
CephFS Metadata Segregation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

One way that Ceph accelerates CephFS file system performance is by segregating
the storage of CephFS metadata from the storage of the CephFS file contents.
Ceph provides a default ``metadata`` pool for CephFS metadata. You will never
have to create a pool for CephFS metadata, but you can create a CRUSH map
hierarchy for your CephFS metadata pool that points only to SSD storage media.
See :ref:`CRUSH Device Class<crush-map-device-class>` for details.

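A minimal sketch of such a setup is shown below. It assumes that your OSDs
already report an ``ssd`` device class and that the metadata pool is named
``cephfs_metadata``; substitute your own pool and rule names:

.. code-block:: console

   # ceph osd crush rule create-replicated metadata-ssd default host ssd
   # ceph osd pool set cephfs_metadata crush_rule metadata-ssd
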
Controllers
-----------

Disk controllers (HBAs) can have a significant impact on write throughput.
Carefully consider your selection of HBAs to ensure that they do not create a
performance bottleneck. Notably, RAID-mode (IR) HBAs may exhibit higher latency
than simpler "JBOD" (IT) mode HBAs. The RAID SoC, write cache, and battery
backup can substantially increase hardware and maintenance costs. Some RAID
HBAs can be configured with an IT-mode "personality".

.. tip:: The `Ceph blog`_ is often an excellent source of information on Ceph
   performance issues. See `Ceph Write Throughput 1`_ and `Ceph Write
   Throughput 2`_ for additional details.

Benchmarking
------------

BlueStore opens block devices in O_DIRECT and uses fsync frequently to ensure
that data is safely persisted to media. You can evaluate a drive's low-level
write performance using ``fio``. For example, 4kB random write performance is
measured as follows:

.. code-block:: console

   # fio --name=/dev/sdX --ioengine=libaio --direct=1 --fsync=1 --readwrite=randwrite --blocksize=4k --runtime=300

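Large sequential I/O (for example during backfill and recovery) behaves quite
differently from small random writes, so it can also be worth measuring
synchronous sequential write throughput. The following sketch uses the same
style; the device path is a placeholder, and note that writing to a raw device
destroys any data on it:

.. code-block:: console

   # fio --name=/dev/sdX --ioengine=libaio --direct=1 --fsync=1 --readwrite=write --blocksize=1M --runtime=300
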
Write Caches
------------

Enterprise SSDs and HDDs normally include power loss protection features which
use multi-level caches to speed up direct or synchronous writes. These devices
can be toggled between two caching modes: a volatile cache flushed to
persistent media with fsync, or a non-volatile cache written synchronously.

These two modes are selected by either "enabling" or "disabling" the write
(volatile) cache. When the volatile cache is enabled, Linux uses a device in
"write back" mode, and when disabled, it uses "write through".

The default configuration (usually with caching enabled) may not be optimal:
OSD performance may be dramatically increased in terms of increased IOPS and
decreased ``commit_latency`` by disabling this write cache.

Users are therefore encouraged to benchmark their devices with ``fio`` as
described earlier and persist the optimal cache configuration for their
devices.

The cache configuration can be queried with ``hdparm``, ``sdparm``,
``smartctl`` or by reading the values in ``/sys/class/scsi_disk/*/cache_type``,
for example:

.. code-block:: console

   # hdparm -W /dev/sda

   /dev/sda:
    write-caching =  1 (on)

   # sdparm --get WCE /dev/sda
   /dev/sda: ATA TOSHIBA MG07ACA1 0101
   WCE 1 [cha: y]

   # smartctl -g wcache /dev/sda
   smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
   Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

   Write cache is: Enabled

   # cat /sys/class/scsi_disk/0\:0\:0\:0/cache_type
   write back

The write cache can be disabled with those same tools:

.. code-block:: console

   # hdparm -W0 /dev/sda

   /dev/sda:
    setting drive write-caching to 0 (off)
    write-caching =  0 (off)

   # sdparm --clear WCE /dev/sda
   /dev/sda: ATA TOSHIBA MG07ACA1 0101

   # smartctl -s wcache,off /dev/sda
   smartctl 7.1 2020-04-05 r5049 [x86_64-linux-4.18.0-305.19.1.el8_4.x86_64] (local build)
   Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

   === START OF ENABLE/DISABLE COMMANDS SECTION ===
   Write cache disabled

Normally, disabling the cache using ``hdparm``, ``sdparm``, or ``smartctl``
results in the cache_type changing automatically to "write through". If this is
not the case, you can try setting it directly as follows. (Users should note
that setting cache_type also correctly persists the caching mode of the device
until the next reboot):

.. code-block:: console

   # echo "write through" > /sys/class/scsi_disk/0\:0\:0\:0/cache_type

   # hdparm -W /dev/sda

   /dev/sda:
    write-caching =  0 (off)

.. tip:: This udev rule (tested on CentOS 8) will set all SATA/SAS device
   cache_types to "write through":

   .. code-block:: console

      # cat /etc/udev/rules.d/99-ceph-write-through.rules
      ACTION=="add", SUBSYSTEM=="scsi_disk", ATTR{cache_type}:="write through"

.. tip:: This udev rule (tested on CentOS 7) will set all SATA/SAS device
   cache_types to "write through":

   .. code-block:: console

      # cat /etc/udev/rules.d/99-ceph-write-through-el7.rules
      ACTION=="add", SUBSYSTEM=="scsi_disk", RUN+="/bin/sh -c 'echo write through > /sys/class/scsi_disk/$kernel/cache_type'"

.. tip:: The ``sdparm`` utility can be used to view/change the volatile write
   cache on several devices at once:

   .. code-block:: console

      # sdparm --get WCE /dev/sd*
      /dev/sda: ATA TOSHIBA MG07ACA1 0101
      WCE 0 [cha: y]
      /dev/sdb: ATA TOSHIBA MG07ACA1 0101
      WCE 0 [cha: y]

      # sdparm --clear WCE /dev/sd*
      /dev/sda: ATA TOSHIBA MG07ACA1 0101
      /dev/sdb: ATA TOSHIBA MG07ACA1 0101

Additional Considerations
-------------------------

You typically will run multiple OSDs per host, but you should ensure that the
aggregate throughput of your OSD drives doesn't exceed the network bandwidth
required to service a client's need to read or write data. You should also
consider what percentage of the overall data the cluster stores on each host. If
the percentage on a particular host is large and the host fails, it can lead to
problems such as exceeding the ``full ratio``, which causes Ceph to halt
operations as a safety precaution that prevents data loss.

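The ratios that govern this safety behavior are recorded in the OSD map and can
be checked quickly; the values shown below are the usual defaults and may
differ on your cluster:

.. code-block:: console

   # ceph osd dump | grep ratio
   full_ratio 0.95
   backfillfull_ratio 0.9
   nearfull_ratio 0.85
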
When you run multiple OSDs per host, you also need to ensure that the kernel
is up to date. See `OS Recommendations`_ for notes on ``glibc`` and
``syncfs(2)`` to ensure that your hardware performs as expected when running
multiple OSDs per host.

Networks
========

Provision at least 10Gbps networking in your racks. Replicating 1TB of data
across a 1Gbps network takes 3 hours, and 10TB takes 30 hours! By contrast,
with a 10Gbps network, the replication times would be 20 minutes and 1 hour
respectively. In a petabyte-scale cluster, failure of an OSD drive is an
expectation, not an exception. System administrators will appreciate PGs
recovering from a ``degraded`` state to an ``active + clean`` state as rapidly
as possible, with price / performance tradeoffs taken into consideration.
Additionally, some deployment tools employ VLANs to make hardware and network
cabling more manageable. VLANs that use the 802.1q protocol require VLAN-capable
NICs and switches. The added hardware expense may be offset by the operational
cost savings for network setup and maintenance. When using VLANs to handle VM
traffic between the cluster and compute stacks (e.g., OpenStack, CloudStack,
etc.), there is additional value in using 10G Ethernet or better; as of 2020,
40Gb or 25/50/100 Gb networking is common for production clusters.

Top-of-rack routers for each network also need to be able to communicate with
spine routers that have even faster throughput, often 40Gb/s or more.

Your server hardware should have a Baseboard Management Controller (BMC).
Administration and deployment tools may also use BMCs extensively, especially
via IPMI or Redfish, so consider the cost/benefit tradeoff of an out-of-band
network for administration. Hypervisor SSH access, VM image uploads, OS image
installs, management sockets, etc. can impose significant loads on a network.
Running three networks may seem like overkill, but each traffic path represents
a potential capacity, throughput and/or performance bottleneck that you should
carefully consider before deploying a large scale data cluster.

Failure Domains
===============

A failure domain is any failure that prevents access to one or more OSDs. That
could be a stopped daemon on a host, a disk failure, an OS crash, a
malfunctioning NIC, a failed power supply, a network outage, a power outage,
and so forth. When planning out your hardware needs, you must balance the
temptation to reduce costs by placing too many responsibilities into too few
failure domains against the added costs of isolating every potential failure
domain.

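In Ceph, failure domains are ultimately expressed in the CRUSH map. As an
illustration (assuming your CRUSH hierarchy already contains ``rack`` buckets;
the rule name is a placeholder), a replicated rule that spreads copies across
racks can be created as follows:

.. code-block:: console

   # ceph osd crush rule create-replicated replicated-by-rack default rack
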
Minimum Hardware Recommendations
================================

Ceph can run on inexpensive commodity hardware. Small production clusters
and development clusters can run successfully with modest hardware.

+--------------+----------------+-----------------------------------------+
| Process      | Criteria       | Minimum Recommended                     |
+==============+================+=========================================+
| ``ceph-osd`` | Processor      | - 1 core minimum                        |
|              |                | - 1 core per 200-500 MB/s               |
|              |                | - 1 core per 1000-3000 IOPS             |
|              |                |                                         |
|              |                | * Results are before replication.       |
|              |                | * Results may vary with different       |
|              |                |   CPU models and Ceph features.         |
|              |                |   (erasure coding, compression, etc)    |
|              |                | * ARM processors specifically may       |
|              |                |   require additional cores.             |
|              |                | * Actual performance depends on many    |
|              |                |   factors including drives, net, and    |
|              |                |   client throughput and latency.        |
|              |                |   Benchmarking is highly recommended.   |
|              +----------------+-----------------------------------------+
|              | RAM            | - 4GB+ per daemon (more is better)      |
|              |                | - 2-4GB often functions (may be slow)   |
|              |                | - Less than 2GB not recommended         |
|              +----------------+-----------------------------------------+
|              | Volume Storage | 1x storage drive per daemon             |
|              +----------------+-----------------------------------------+
|              | DB/WAL         | 1x SSD partition per daemon (optional)  |
|              +----------------+-----------------------------------------+
|              | Network        | 1x 1GbE+ NICs (10GbE+ recommended)      |
+--------------+----------------+-----------------------------------------+
| ``ceph-mon`` | Processor      | - 2 cores minimum                       |
|              +----------------+-----------------------------------------+
|              | RAM            | 2-4GB+ per daemon                       |
|              +----------------+-----------------------------------------+
|              | Disk Space     | 60 GB per daemon                        |
|              +----------------+-----------------------------------------+
|              | Network        | 1x 1GbE+ NICs                           |
+--------------+----------------+-----------------------------------------+
| ``ceph-mds`` | Processor      | - 2 cores minimum                       |
|              +----------------+-----------------------------------------+
|              | RAM            | 2GB+ per daemon                         |
|              +----------------+-----------------------------------------+
|              | Disk Space     | 1 MB per daemon                         |
|              +----------------+-----------------------------------------+
|              | Network        | 1x 1GbE+ NICs                           |
+--------------+----------------+-----------------------------------------+

.. tip:: If you are running an OSD with a single disk, create a
   partition for your volume storage that is separate from the partition
   containing the OS. Generally, we recommend separate disks for the
   OS and the volume storage.

.. _block and block.db: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
.. _Ceph blog: https://ceph.com/community/blog/
.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
.. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/
.. _Mapping Pools to Different Types of OSDs: ../../rados/operations/crush-map#placing-different-pools-on-different-osds
.. _OS Recommendations: ../os-recommendations
.. _Storage Networking Industry Association's Total Cost of Ownership calculator: https://www.snia.org/forums/cmsi/programs/TCOcalc
.. _Werner Fischer's blog post on partition alignment: https://www.thomas-krenn.com/en/wiki/Partition_Alignment_detailed_explanation