<div class="mainsegment">
<h3>Tasks</h3>
<div>
Although Ceph is currently a working prototype that demonstrates the key features of the architecture, a number of components still need to be implemented in order to make Ceph a stable file system that can be used in production environments. Some of the key tasks are outlined below. If you are a kernel or file system developer and are interested in contributing to Ceph, please join the email list and <a href="mailto:ceph-devel@lists.sourceforge.net">drop us a line</a>.
<p>
<h4>Ebofs</h4>
<div>
Each Ceph OSD (storage node) runs a custom object "file system" called EBOFS to store objects on locally attached disks. Although the current implementation of EBOFS is fully functional and already demonstrates promising performance (outperforming ext2/3, XFS, and ReiserFS under the workloads we anticipate), a range of improvements will be needed before it is ready for prime time. These include:
<ul>
<li><b>Intelligent use of NVRAM for metadata and/or data journaling.</b> A subset of write requests needs to be committed synchronously to stable storage in order to provide strong file system integrity guarantees, and doing so quickly is key to good performance. Even a small amount of NVRAM (whether it is FLASH, MRAM, battery-backed DRAM, etc.) will enable very low latency for small writes while maintaining data safety; a rough sketch of such a write path appears after this list.
<li><b>RAID-aware allocation.</b> Although we conceptually think of each OSD as a disk with an attached CPU, memory, and network interface, the OSDs actually deployed in production systems are more likely to be small to medium-sized storage servers: standard servers with a locally attached array of SAS or SATA disks. To properly take advantage of the parallelism inherent in the use of multiple disks, the EBOFS allocator and disk scheduling algorithms have to be aware of the underlying structure of the array (be it RAID0, 1, 5, 10, etc.) in order to reap the performance and reliability rewards.
</ul>
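<p>
As a rough illustration of the NVRAM journaling item above, the sketch below (in C, with hypothetical names; this is not EBOFS code) acknowledges a small synchronous write as soon as its record is stable in NVRAM and leaves the on-disk copy to be written back off the critical path:
<p>
<pre>
/*
 * Illustrative sketch only; not EBOFS code.  nvram_append(), ack_client()
 * and disk_write_sync() are hypothetical names.
 */
#include <stddef.h>
#include <string.h>

#define SMALL_WRITE (64 * 1024)     /* threshold chosen arbitrarily here */

struct write_req {
    unsigned long long oid;         /* object id        */
    unsigned long long off;         /* offset in object */
    size_t             len;
    const char        *data;
};

static char   nvram[1 << 20];       /* stands in for a mapped NVRAM region */
static size_t nvram_head;

/* Append a record to the NVRAM journal; returns 0 if it does not fit. */
static int nvram_append(const struct write_req *r)
{
    size_t hdr = sizeof(r->oid) + sizeof(r->off) + sizeof(r->len);
    size_t p   = nvram_head;
    if (p + hdr + r->len > sizeof(nvram))
        return 0;                                   /* journal full */
    memcpy(nvram + p, &r->oid, sizeof(r->oid));  p += sizeof(r->oid);
    memcpy(nvram + p, &r->off, sizeof(r->off));  p += sizeof(r->off);
    memcpy(nvram + p, &r->len, sizeof(r->len));  p += sizeof(r->len);
    memcpy(nvram + p, r->data, r->len);          p += r->len;
    nvram_head = p;
    return 1;                                       /* record is now stable */
}

/* Placeholder stubs standing in for the rest of the write path. */
static void ack_client(const struct write_req *r)      { (void)r; }
static void disk_write_sync(const struct write_req *r) { (void)r; }

/* Small synchronous writes commit via NVRAM; everything else goes to disk. */
void handle_sync_write(const struct write_req *r)
{
    if (r->len <= SMALL_WRITE && nvram_append(r)) {
        ack_client(r);              /* low latency: data is safe in NVRAM */
        /* the on-disk copy is flushed later, off the critical path */
    } else {
        disk_write_sync(r);         /* large write or full journal */
        ack_client(r);
    }
}
</pre>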
</div>
<h4>Native kernel client</h4>
<div>
The prototype Ceph client is implemented as a user-space library. Although it can be mounted under Linux via the <a href="http://fuse.sourceforge.net/">FUSE (file system in userspace)</a> library, this incurs a significant performance penalty and limits Ceph's ability to provide strong POSIX semantics and consistency. A native Linux kernel implementation of the client is needed in order to properly take advantage of the performance and consistency features of Ceph. Because the client's interaction with the Ceph MDS and OSDs is more complicated than that of existing network file systems like NFS, this is a non-trivial endeavor. We are actively looking for experienced kernel programmers to help us out!
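<p>
To make the FUSE overhead concrete, here is a generic FUSE skeleton (the ceph_client_* functions are placeholder stubs, not the actual client library interface): every VFS operation the kernel sees becomes a callback into user space and back, which is exactly the round trip a native kernel client would avoid.
<p>
<pre>
/*
 * Generic FUSE skeleton for illustration only.  The ceph_client_* stubs
 * stand in for the user-space client library; they are not its real API.
 * Compile with: gcc cephfuse.c `pkg-config fuse --cflags --libs`
 */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <sys/stat.h>

/* Placeholder stubs; the real library would contact the MDS and OSDs. */
static int ceph_client_stat(const char *path, struct stat *st)
{ (void)path; (void)st; return -1; }
static int ceph_client_read(const char *path, char *buf, size_t len, off_t off)
{ (void)path; (void)buf; (void)len; (void)off; return 0; }

/* Each callback below is one kernel -> user space -> kernel round trip. */
static int cephfuse_getattr(const char *path, struct stat *st)
{
    return ceph_client_stat(path, st) ? -ENOENT : 0;
}

static int cephfuse_read(const char *path, char *buf, size_t size,
                         off_t offset, struct fuse_file_info *fi)
{
    (void)fi;
    return ceph_client_read(path, buf, size, offset);
}

static struct fuse_operations cephfuse_ops = {
    .getattr = cephfuse_getattr,
    .read    = cephfuse_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &cephfuse_ops, NULL);
}
</pre>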
</div>
<h4>CRUSH tool</h4>
<div>
Ceph utilizes a novel data distribution function called CRUSH to distribute data (in the form of objects) to storage nodes (OSDs). CRUSH is designed to generate a balanced distribution while allowing the storage cluster to be dynamically expanded or contracted, and to separate object replicas across failure domains to enhance data safety. There is a certain amount of finesse involved in properly managing the OSD hierarchy from which CRUSH generates its distribution in order to minimize the data migration that results from changes. An administrator tool would be useful for managing the CRUSH mapping function to best exploit the available storage and network infrastructure. For more information, please refer to the technical paper describing <a href="publications.html">CRUSH</a>.
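<p>
The toy program below is not the CRUSH algorithm itself; it is a greatly simplified sketch of the underlying idea, namely that placement is computed from a hash of the object and a weighted description of the storage (here just a flat list of OSD weights), so any client or server can independently arrive at the same mapping without consulting a central allocation table:
<p>
<pre>
/*
 * Greatly simplified illustration of deterministic, weight-aware placement.
 * This is NOT the CRUSH algorithm or its code: real CRUSH walks a hierarchy
 * of buckets, handles replicas and failure domains, and uses its own hash.
 */
#include <stdio.h>

/* A small mixing hash (illustrative only). */
static unsigned int mix(unsigned int a, unsigned int b)
{
    unsigned int h = a * 2654435761u ^ b * 2246822519u;
    h ^= h >> 15;
    h *= 2654435761u;
    h ^= h >> 13;
    return h;
}

/* Each OSD draws a pseudo-random "straw" from hash(object, osd), scaled by
 * its weight; the largest straw wins, so bigger OSDs win more often. */
static int place(unsigned int object_id, const double *weight, int num_osd)
{
    int best = -1;
    double best_straw = -1.0;
    for (int i = 0; i < num_osd; i++) {
        double straw = (mix(object_id, (unsigned int)i) / 4294967295.0)
                       * weight[i];
        if (straw > best_straw) {
            best_straw = straw;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    double weight[4] = { 1.0, 1.0, 2.0, 0.5 };      /* relative capacities */
    for (unsigned int obj = 0; obj < 8; obj++)
        printf("object %u -> osd %d\n", obj, place(obj, weight, 4));
    return 0;
}
</pre>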
</div>
<p>The Ceph project is always looking for more participants. If any of these projects sound interesting to you, please <a href="http://lists.sourceforge.net/mailman/listinfo/ceph-devel">join our mailing list</a> and <a href="mailto:ceph-devel@lists.sourceforge.net">drop us a line</a>.
</div>
</div>
<b>Please feel free to <a href="mailto:sage@newdream.net">contact us</a> with any questions or comments.</b>