About this Document

This document contains planning and implementation procedures for Ceph. The audience for this document includes technical support personnel, installation engineers, system administrators, and quality assurance staff.

Prerequisites

Users of this document must be familiar with the Linux command line. They must also be familiar with the overall Ceph product.
Before You Begin

Before implementing a new Ceph system, first answer the questions in the Ceph Getting Started Guide to determine your configuration needs. Once you have determined your hardware and configuration needs, make the following decisions:

* Determine what level of technical support you need. Pick from the Ceph technical support options in the next section.

* Determine how much and what level of training your organization needs.
Ceph Technical Support Options

The Ceph technical support model provides four tiers of support:

1st - This option is for brand-new customers that need installation, configuration, and setup on their production environment.

2nd - This level of support requires a trouble ticket to be generated on a case-by-case basis as customer difficulties arise. Customers can choose between two maintenance options: they can either purchase a yearly maintenance contract, or pay for each trouble resolution as it occurs.

3rd - This option comes with our bundled packages for customers who have also purchased our hosting plans. In this case, the customer is a service provider. The Help Desk can generally provide this level of incident resolution. (NEED MORE INFO)

4th - This level of support requires a Service Level Agreement (SLA) between the customer and Dreamhost. This level is used for handling the most difficult or advanced problems.
Planning a Ceph Cluster Configuration

The following section contains guidelines for planning the deployment of a Ceph cluster. A Ceph cluster consists of the following core components:

* Monitors - These must be an odd number, such as one, three, or five. Three is the preferred configuration.

* Object Storage Devices (OSDs) - These are used as storage nodes.

* Metadata Servers (MDS)

For redundancy, you should deploy multiple instances of each of these components.
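As a point of reference, the sketch below shows how one instance of each component type might appear in a ceph.conf file. The host names, addresses, and paths are placeholders, and the exact option spellings can vary between Ceph versions:

    ; one section per daemon instance
    [mon.a]
            host = mon-host-1
            mon addr = 192.168.0.10:6789

    [osd.0]
            host = osd-host-1
            osd data = /var/lib/ceph/osd0

    [mds.a]
            host = mds-host-1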
Monitors

The monitors handle central cluster management, configuration, and state.

Hardware Requirements:

* A few gigabytes of local disk space

* A fixed network address
Warning: Never configure two monitors per cluster. With two monitors, a majority requires both of them to be up all of the time, so losing either one stops the cluster; two monitors reduce availability rather than improving it.
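For example, a three-monitor layout might be sketched in ceph.conf as follows. The addresses and paths are placeholders; the point is that each monitor gets its own section and a fixed address:

    [mon]
            mon data = /var/lib/ceph/mon$id

    [mon.a]
            host = mon-host-1
            mon addr = 192.168.0.10:6789
    [mon.b]
            host = mon-host-2
            mon addr = 192.168.0.11:6789
    [mon.c]
            host = mon-host-3
            mon addr = 192.168.0.12:6789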
Object Storage Devices

The OSDs store the actual data on the disks. A minimum of two is required.

Hardware Requirements:

* As many disks as possible for faster performance and scalability

* An SSD or NVRAM for a journal, or a RAID controller with a battery-backed NVRAM

* Ample RAM for better file system caching

* A fast network
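To illustrate, an OSD section that keeps its journal on a separate fast device might be sketched like this. The paths and sizes are placeholders (with /ssd assumed to be an SSD-backed mount), and option spellings may differ between Ceph versions:

    [osd]
            osd data = /var/lib/ceph/osd$id
            ; placing the journal on an SSD or NVRAM-backed device keeps write latency low
            osd journal = /ssd/osd$id/journal
            ; journal size in megabytes
            osd journal size = 1000

    [osd.0]
            host = disk-host-1
    [osd.1]
            host = disk-host-2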
Metadata Servers

The metadata server daemons (cmds) act as a distributed, coherent cache of file system metadata. They do not store data locally; all metadata is stored on disk via the storage nodes.

Metadata servers can be added to the cluster on an as-needed basis, and the load is balanced automatically. The max_mds parameter controls how many cmds instances are active. Any additional running instances are put in standby mode and can be activated if one of the active daemons becomes unresponsive.
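For example, to run two active metadata servers with a third held in standby, the configuration might be sketched as follows. Whether max mds is set in ceph.conf or through the ceph command line depends on the Ceph version, so treat the exact placement and spelling as an assumption:

    [mon]
            ; two cmds instances active; any extra running instance becomes a standby
            max mds = 2

    [mds.a]
            host = mds-host-1
    [mds.b]
            host = mds-host-2
    [mds.c]
            ; started alongside the others, but held in standby
            host = mds-host-3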
Hardware Requirements:

* A large amount of RAM

* A fast CPU

* A fast (low-latency) network

* At least two servers for redundancy and load balancing
TIP: If you have just a few nodes, put cmon, cmds, and cosd on the same node. For moderate-sized configurations, put cmon and cmds together, and cosd on the disk nodes. For large configurations, put cmon, cmds, and cosd each on a dedicated machine.
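As a sketch of the moderate-sized layout described above (host names are placeholders), cmon and cmds share a front-end host while the cosd instances run on the disk nodes:

    [mon.a]
            host = front-node-1
    [mds.a]
            host = front-node-1

    [osd.0]
            host = disk-node-1
    [osd.1]
            host = disk-node-2
    [osd.2]
            host = disk-node-3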