.. _ceph-file-system:

=================
 Ceph File System
=================

The Ceph File System, or **CephFS**, is a POSIX-compliant file system built on
top of Ceph's distributed object store, **RADOS**. CephFS endeavors to provide
a state-of-the-art, multi-use, highly available, and performant file store for
a variety of applications, including traditional use-cases like shared home
directories, HPC scratch space, and distributed workflow shared storage.

CephFS achieves these goals through the use of some novel architectural
choices. Notably, file metadata is stored in a separate RADOS pool from file
data and served via a resizable cluster of *Metadata Servers*, or **MDS**,
which may scale to support higher-throughput metadata workloads. Clients of
the file system have direct access to RADOS for reading and writing file data
blocks. For this reason, workloads may scale linearly with the size of the
underlying RADOS object store; that is, there is no gateway or broker
mediating data I/O for clients.

Access to data is coordinated through the cluster of MDS, which serve as
authorities for the state of the distributed metadata cache cooperatively
maintained by clients and MDS. Mutations to metadata are aggregated by each
MDS into a series of efficient writes to a journal on RADOS; no metadata state
is stored locally by the MDS. This model allows for coherent and rapid
collaboration between clients within the context of a POSIX file system.

.. image:: cephfs-architecture.svg

CephFS is the subject of numerous academic papers for its novel designs and
contributions to file system research. It is the oldest storage interface in
Ceph and was once the primary use-case for RADOS. Now it is joined by two
other storage interfaces to form a modern unified storage system: RBD (Ceph
Block Devices) and RGW (Ceph Object Storage Gateway).


Getting Started with CephFS
^^^^^^^^^^^^^^^^^^^^^^^^^^^

For most deployments of Ceph, setting up a CephFS file system is as simple as:

.. code:: bash

    ceph fs volume create <fs name>

The Ceph `Orchestrator`_ will automatically create and configure MDS for your
file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually as
needed`_.

Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, the `cephfs-shell`_ command-line utility
is available for interactive access or scripting.

.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell
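If your deployment does not use the orchestrator-backed volume interface, the
separate metadata and data pools described above can also be created and
joined into a file system by hand. The following is a minimal sketch assuming
example names (``cephfs``, ``cephfs_metadata``, ``cephfs_data``,
``client.foo``) and an example mount point (``/mnt/mycephfs``); adapt these to
your cluster.

.. code:: bash

    # Create separate RADOS pools for file data and file metadata
    # (pool names here are placeholders).
    ceph osd pool create cephfs_data
    ceph osd pool create cephfs_metadata

    # Join the two pools into a file system named "cephfs".
    ceph fs new cephfs cephfs_metadata cephfs_data

    # Verify the file system and check that an MDS has become active.
    ceph fs ls
    ceph mds stat

    # Authorize an example client for the file system root and save its key.
    ceph fs authorize cephfs client.foo / rw | sudo tee /etc/ceph/ceph.client.foo.keyring

    # Mount with the FUSE client using the example credentials.
    ceph-fuse -n client.foo /mnt/mycephfs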
.. toctree::
   :maxdepth: 1
   :hidden:

    Create a CephFS file system
    Administrative commands
    Provision/Add/Remove MDS(s)
    MDS failover and standby configuration
    MDS Cache Size Limits
    MDS Configuration Settings
    Manual: ceph-mds <../../man/8/ceph-mds>
    Export over NFS
    Export over NFS with volume nfs interface
    Application best practices
    FS volume and subvolumes
    CephFS Quotas
    Health messages
    Upgrading old file systems

.. toctree::
   :maxdepth: 1
   :hidden:

    Client Configuration Settings
    Client Authentication
    Mount CephFS: Prerequisites
    Mount CephFS using Kernel Driver
    Mount CephFS using FUSE
    Use the CephFS Shell
    Supported Features of Kernel Driver
    Manual: ceph-fuse <../../man/8/ceph-fuse>
    Manual: mount.ceph <../../man/8/mount.ceph>
    Manual: mount.fuse.ceph <../../man/8/mount.fuse.ceph>

.. toctree::
   :maxdepth: 1
   :hidden:

    MDS States
    POSIX compatibility
    MDS Journaling
    File layouts
    Distributed Metadata Cache
    Dynamic Metadata Management in CephFS
    CephFS IO Path
    LazyIO
    Directory fragmentation
    Multiple active MDS daemons

.. toctree::
   :hidden:

    Client eviction
    Scrubbing the File System
    Handling full file systems
    Metadata repair
    Troubleshooting
    Disaster recovery
    cephfs-journal-tool

.. toctree::
   :maxdepth: 1
   :hidden:

    Journaler Configuration
    Client's Capabilities
    Java and Python bindings
    Mantle

.. toctree::
   :maxdepth: 1
   :hidden:

    Experimental Features