.. _ceph-file-system:

=================
Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker mediating
data I/O for clients.
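
For example, a userspace client linked against ``libcephfs`` sends its metadata
operations to an MDS but reads and writes file data straight to the OSDs. The
sketch below uses the Python bindings (``python3-cephfs``); the configuration
path, directory, and file names are placeholders::

   # Minimal sketch of a libcephfs client; assumes the python3-cephfs bindings
   # are installed and that /etc/ceph/ceph.conf points at a reachable cluster.
   import cephfs

   client = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
   client.mount()                          # attach to the (default) file system

   client.mkdir(b'/scratch', 0o755)        # metadata update, handled by an MDS
   fd = client.open(b'/scratch/hello.txt', 'w', 0o644)
   client.write(fd, b'hello cephfs\n', 0)  # file data goes directly to the OSDs
   client.close(fd)

   client.unmount()
   client.shutdown()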

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each MDS
into a series of efficient writes to a journal on RADOS; no metadata state is
stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).

.. note:: If you are evaluating CephFS for the first time, please review
          the best practices for deployment: :doc:`/cephfs/best-practices`

Using CephFS
============

Using the Ceph File System requires at least one :term:`Ceph Metadata Server` in
your Ceph Storage Cluster.
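
If you are not sure whether a file system and an active MDS already exist, you
can ask the monitors. A rough sketch with the ``python3-rados`` bindings is
shown below; the configuration path and client name are placeholders, and the
``ceph fs ls`` and ``ceph mds stat`` commands report the same information::

   # Query the monitors for existing CephFS file systems and MDS daemons.
   import json
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
   cluster.connect()

   ret, out, errs = cluster.mon_command(
       json.dumps({'prefix': 'fs ls', 'format': 'json'}), b'')
   print(json.loads(out) if ret == 0 else errs)    # file systems and their pools

   ret, out, errs = cluster.mon_command(json.dumps({'prefix': 'mds stat'}), b'')
   print(out.decode().strip() or errs)             # summary of MDS daemons

   cluster.shutdown()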

.. raw:: html

   <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
   <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Metadata Server</h3>

To run the Ceph File System, you must have a running Ceph Storage Cluster with
at least one :term:`Ceph Metadata Server`.

.. toctree::
   :maxdepth: 1

   Provision/Add/Remove MDS(s) <add-remove-mds>
   MDS failover and standby configuration <standby>
   MDS Configuration Settings <mds-config-ref>
   Client Configuration Settings <client-config-ref>
   Journaler Configuration <journaler>
   Manpage ceph-mds <../../man/8/ceph-mds>

.. raw:: html

   </td><td><h3>Step 2: Mount CephFS</h3>

Once you have a healthy Ceph Storage Cluster with at least
one Ceph Metadata Server, you may create and mount your Ceph File System.
Ensure that your client has network connectivity and the proper
authentication keyring.
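
As a quick pre-flight check, you can confirm from the client node that the
monitors are reachable with the keyring you intend to use, for example with the
``python3-rados`` bindings (a sketch only; the client name and keyring path are
placeholders for your own credentials)::

   # Verify monitor connectivity and the client keyring before mounting.
   import rados

   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                         name='client.foo',
                         conf={'keyring': '/etc/ceph/ceph.client.foo.keyring'})
   cluster.connect()        # fails with an error if the monitors or keyring are wrong
   print('connected to cluster', cluster.get_fsid())
   cluster.shutdown()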

.. toctree::
   :maxdepth: 1

   Create a CephFS file system <createfs>
   Mount CephFS with the Kernel Driver <kernel>
   Mount CephFS as FUSE <fuse>
   Mount CephFS in fstab <fstab>
   Use the CephFS Shell <cephfs-shell>
   Supported Features of Kernel Driver <kernel-features>
   Manpage ceph-fuse <../../man/8/ceph-fuse>
   Manpage mount.ceph <../../man/8/mount.ceph>
   Manpage mount.fuse.ceph <../../man/8/mount.fuse.ceph>

.. raw:: html

   </td><td><h3>Additional Details</h3>

.. toctree::
   :maxdepth: 1

   Deployment best practices <best-practices>
   MDS States <mds-states>
   Administrative commands <administration>
   Understanding MDS Cache Size Limits <cache-size-limits>
   POSIX compatibility <posix>
   Experimental Features <experimental-features>
   CephFS Quotas <quota>
   Using Ceph with Hadoop <hadoop>
   cephfs-journal-tool <cephfs-journal-tool>
   File layouts <file-layouts>
   Client eviction <eviction>
   Handling full file systems <full>
   Health messages <health-messages>
   Troubleshooting <troubleshooting>
   Disaster recovery <disaster-recovery>
   Client authentication <client-auth>
   Upgrading old file systems <upgrading>
   Configuring directory fragmentation <dirfrags>
   Configuring multiple active MDS daemons <multimds>
   Export over NFS <nfs>
   Application best practices <app-best-practices>
   Scrub <scrub>
   LazyIO <lazyio>
   Distributed Metadata Cache <mdcache>

.. toctree::
   :hidden:

   Advanced: Metadata repair <disaster-recovery-experts>

.. raw:: html

   </td></tr></tbody></table>

For developers
==============

.. toctree::
   :maxdepth: 1

   Client's Capabilities <capabilities>
   libcephfs <../../api/libcephfs-java/>
   Mantle <mantle>