=========================
Create a Ceph file system
=========================
Creating pools
==============
A Ceph file system requires at least two RADOS pools, one for data and one for metadata.
When configuring these pools, you might consider:

- Using a higher replication level for the metadata pool, as any data loss in
  this pool can render the whole file system inaccessible.

- Using lower-latency storage such as SSDs for the metadata pool, as this will
  directly affect the observed latency of file system operations on clients.

- The data pool used to create the file system is the "default" data pool and
  the location for storing all inode backtrace information, used for hard link
  management and disaster recovery. For this reason, all inodes created in
  CephFS have at least one object in the default data pool. If erasure-coded
  pools are planned for the file system, it is usually better to use a
  replicated pool for the default data pool to improve small-object write and
  read performance when updating backtraces. Separately, another erasure-coded
  data pool can be added (see also :ref:`ecpool`) that can be used on an entire
  hierarchy of directories and files (see also :ref:`file-layouts`); a sketch
  of this arrangement follows the list.
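As an illustration of that arrangement, the following is a minimal sketch. It
assumes the file system has already been created (see `Creating a file system`_
below), that a hypothetical erasure-coded pool named ``cephfs_data_ec`` exists
with overwrites enabled (see `Using Erasure Coded pools with CephFS`_ below),
and that a client has the file system mounted at ``/mnt/cephfs``:

.. code:: bash

   # Hypothetical pool and path names; adjust to your deployment.
   # Attach the erasure-coded pool to the file system as an additional data pool.
   $ ceph fs add_data_pool cephfs cephfs_data_ec

   # On a client, direct a directory hierarchy at the new pool via a file
   # layout attribute; new files under this directory are stored in
   # cephfs_data_ec, while backtraces stay in the replicated default data pool.
   $ setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive
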
Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
example, to create two pools with default settings for use with a file system,
you might run the following commands:

.. code:: bash

   $ ceph osd pool create cephfs_data
   $ ceph osd pool create cephfs_metadata

Generally, the metadata pool will have at most a few gigabytes of data. For
this reason, a smaller PG count is usually recommended. 64 or 128 is commonly
used in practice for large clusters.
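For instance, if you prefer to set the PG count of the metadata pool explicitly
rather than accepting the default, a hedged example (64 is simply the commonly
used value mentioned above) would be:

.. code:: bash

   # Create the metadata pool with an explicit PG count of 64.
   $ ceph osd pool create cephfs_metadata 64
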
.. note:: The names of the file systems, metadata pools, and data pools can
   only have characters in the set [a-zA-Z0-9\_-.].
Creating a file system
======================
Once the pools are created, you may enable the file system using the ``fs new``
command:

.. code:: bash

   $ ceph fs new <fs_name> <metadata> <data>

For example:

.. code:: bash

   $ ceph fs new cephfs cephfs_metadata cephfs_data
   $ ceph fs ls
   name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Once a file system has been created, your MDS(s) will be able to enter
an *active* state. For example, in a single MDS system:

.. code:: bash

   $ ceph mds stat
   cephfs-1/1/1 up {0=a=up:active}

Once the file system is created and the MDS is active, you are ready to mount
the file system. If you have created more than one file system, you will
choose which to use when mounting.

- `Mount CephFS`_
- `Mount CephFS as FUSE`_
.. _Mount CephFS: ../../cephfs/mount-using-kernel-driver
.. _Mount CephFS as FUSE: ../../cephfs/mount-using-fuse
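As a quick orientation only (the pages linked above cover authentication and
mount options in detail), minimal mount invocations might look like the
following; the mount point ``/mnt/cephfs`` and the ``admin`` client name are
illustrative:

.. code:: bash

   # Kernel driver: monitor addresses and keys are looked up via ceph.conf
   # and the client keyring.
   $ sudo mount -t ceph :/ /mnt/cephfs -o name=admin

   # FUSE client:
   $ sudo ceph-fuse /mnt/cephfs
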
If you have created more than one file system, and a client does not
specify a file system when mounting, you can control which file system
they will see by using the ``ceph fs set-default`` command.
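For example, to make a file system named ``cephfs`` the default for clients
that do not specify one:

.. code:: bash

   $ ceph fs set-default cephfs
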
Using Erasure Coded pools with CephFS
=====================================

You may use Erasure Coded pools as CephFS data pools as long as they have
overwrites enabled, which is done as follows:

.. code:: bash

   $ ceph osd pool set my_ec_pool allow_ec_overwrites true

Note that EC overwrites are only supported when using OSDs with the BlueStore
backend.

You may not use Erasure Coded pools as CephFS metadata pools, because CephFS
metadata is stored using RADOS *OMAP* data structures, which EC pools cannot
store.
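For example, a minimal end-to-end sketch with a hypothetical pool name,
assuming a file system named ``cephfs`` already exists and the OSDs use
BlueStore, might look like:

.. code:: bash

   # Create an erasure-coded pool (using the default erasure-code profile)
   # and allow overwrites so CephFS can use it as a data pool.
   $ ceph osd pool create my_ec_pool erasure
   $ ceph osd pool set my_ec_pool allow_ec_overwrites true

   # Attach it to an existing file system as an additional data pool.
   $ ceph fs add_data_pool cephfs my_ec_pool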