=================
 Ceph Filesystem
=================

The :term:`Ceph Filesystem` (Ceph FS) is a POSIX-compliant filesystem that
uses a Ceph Storage Cluster to store its data. The Ceph filesystem uses the
same Ceph Storage Cluster system as Ceph Block Devices, Ceph Object Storage
with its S3 and Swift APIs, or native bindings (librados).

.. important:: CephFS currently lacks a robust 'fsck' check and repair
   function. Please use caution when storing important data as the disaster
   recovery tools are still under development. For more information about
   using CephFS today, see :doc:`/cephfs/early-adopters`.

.. ditaa::
            +-----------------------+  +------------------------+
            |                       |  |      CephFS FUSE       |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |  CephFS Kernel Object |  |     CephFS Library     |
            |                       |  +------------------------+
            |                       |
            |                       |  +------------------------+
            |                       |  |        librados        |
            +-----------------------+  +------------------------+

            +---------------+  +---------------+  +---------------+
            |      OSDs     |  |      MDSs     |  |    Monitors   |
            +---------------+  +---------------+  +---------------+

Using the Ceph Filesystem requires at least one :term:`Ceph Metadata Server`
in your Ceph Storage Cluster.
Step 1: Metadata Server
=======================

To run the Ceph Filesystem, you must have a running Ceph Storage Cluster with
at least one :term:`Ceph Metadata Server` running.

.. toctree::
   :maxdepth: 1

   Add/Remove MDS <../../rados/deployment/ceph-deploy-mds>
   MDS Configuration

Step 2: Mount CephFS
====================

Once you have a healthy Ceph Storage Cluster with at least one Ceph Metadata
Server, you may create and mount your Ceph Filesystem. Ensure that your
client has network connectivity and the proper authentication keyring.

.. toctree::
   :maxdepth: 1

   Create CephFS

Additional Details
==================

.. toctree::
   :maxdepth: 1

   CephFS Quotas
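As a rough sketch of the create-and-mount workflow described above — assuming
a hypothetical monitor at ``192.168.0.1``, an ``admin`` user whose secret is
stored in ``/etc/ceph/admin.secret``, and illustrative pool names and PG
counts that you should adjust for your own cluster — the steps might look
like:

.. code-block:: shell

   # Create the data and metadata pools, then the filesystem itself.
   # Pool names and PG counts here are placeholders.
   ceph osd pool create cephfs_data 64
   ceph osd pool create cephfs_metadata 64
   ceph fs new mycephfs cephfs_metadata cephfs_data

   # Mount with the kernel client (monitor address and secret file
   # are assumptions for this example).
   sudo mkdir -p /mnt/mycephfs
   sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs \
       -o name=admin,secretfile=/etc/ceph/admin.secret

   # Or mount with the FUSE client instead.
   sudo ceph-fuse -m 192.168.0.1:6789 /mnt/mycephfs

Whether you use the kernel client or ``ceph-fuse`` generally depends on your
kernel version; the FUSE client runs in userspace and is easier to update,
while the kernel client typically offers better performance.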