Experimental Features
=====================
CephFS includes a number of experimental features which are not fully stabilized
or qualified for users to turn on in real deployments. We generally do our best
to clearly demarcate these and fence them off so they cannot be used by mistake.

Some of these features are closer to being done than others, though. We describe
each of them with an approximation of how risky they are and briefly describe
what is required to enable them. Note that enabling a feature will *irrevocably*
mark the maps in the monitor as having once had that flag set, to aid debugging and
support processes.

Inline data
-----------
By default, all CephFS file data is stored in RADOS objects. The inline data
feature enables small files (generally <2KB) to be stored in the inode
and served out of the MDS. This may improve small-file performance but increases
load on the MDS. It is not sufficiently tested for us to support at this time, although
failures within it are unlikely to make non-inlined data inaccessible.

Inline data has always been off by default and requires setting
the ``inline_data`` flag.
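
For illustration only (the exact command shape is an assumption here, following
the ``fs set`` pattern used elsewhere on this page; some releases may also
require an explicit confirmation switch), enabling it looks something like::

    ceph fs set <filesystem name> inline_data true
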
Mantle: Programmable Metadata Load Balancer
-------------------------------------------
Mantle is a programmable metadata balancer built into the MDS. The idea is to
protect the mechanisms for balancing load (migration, replication,
fragmentation) but stub out the balancing policies using Lua. For details, see
:doc:`/cephfs/mantle`.
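
As a rough, hedged sketch of what selecting a policy looks like (the syntax and
the ``greedyspill.lua`` policy name are assumptions here; consult the Mantle
page above for the authoritative interface), a Lua balancer is set per
filesystem::

    ceph fs set <filesystem name> balancer greedyspill.lua
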
Snapshots
---------
Like multiple active MDSes, CephFS is designed from the ground up to support
snapshotting of arbitrary directories. There are no known bugs at the time of
writing, but there is insufficient testing to provide stability guarantees and
every expansion of testing has generally revealed new issues. If you do enable
snapshots and experience failure, manual intervention will be needed.

Snapshots are known not to work properly with multiple filesystems (below) in
some cases. Specifically, if you share a pool for multiple FSes and delete
a snapshot in one FS, expect to lose snapshotted file data in any other FS using
snapshots. See the :doc:`/dev/cephfs-snapshots` page for more information.

For somewhat obscure implementation reasons, the kernel client only supports up
to 400 snapshots (http://tracker.ceph.com/issues/21420).

Snapshotting was blocked off with the ``allow_new_snaps`` flag prior to Mimic.
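
As a hedged sketch of lifting that gate and then taking a snapshot (the mount
point and snapshot name are placeholders, and older releases may also demand an
explicit confirmation switch on the ``fs set`` command)::

    ceph fs set <filesystem name> allow_new_snaps true
    mkdir /mnt/cephfs/some/dir/.snap/my-snapshot
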
Multiple filesystems within a Ceph cluster
------------------------------------------
Code merged prior to the Jewel release enables administrators to create
multiple independent CephFS filesystems within a single Ceph cluster.
These independent filesystems have their own set of active MDSes, cluster maps,
and data. But the feature required extensive changes to data structures which
are not yet fully qualified, and has security implications which are not yet
fully understood or resolved.

There are no known bugs, but any failures which do result from having multiple
active filesystems in your cluster will require manual intervention and, so far,
will not have been experienced by anybody else -- knowledgeable help will be
extremely limited. You also probably do not have the security or isolation
guarantees you want or think you have.

Note that snapshots and multiple filesystems are *not* tested in combination
and may not work together; see above.

Multiple filesystems were available starting in the Jewel release candidates
but must be turned on via the ``enable_multiple`` flag until declared stable.
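
For reference, this is a cluster-wide flag rather than a per-filesystem setting,
and changing it requires an explicit confirmation switch. A hedged sketch of
enabling it and then creating a second filesystem (the filesystem and pool names
are placeholders)::

    ceph fs flag set enable_multiple true --yes-i-really-mean-it
    ceph fs new <second fs name> <metadata pool> <data pool>
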
Previously experimental features
================================

Directory Fragmentation
-----------------------
Directory fragmentation was considered experimental prior to the *Luminous*
(12.2.x) release. It is now enabled by default on new filesystems. To enable directory
fragmentation on filesystems created with older versions of Ceph, set
the ``allow_dirfrags`` flag on the filesystem:

::

    ceph fs set <filesystem name> allow_dirfrags 1

Multiple active metadata servers
--------------------------------
Prior to the *Luminous* (12.2.x) release, running multiple active metadata
servers within a single filesystem was considered experimental. Creating
multiple active metadata servers is now permitted by default on new
filesystems.

Filesystems created with older versions of Ceph still require explicitly
enabling multiple active metadata servers as follows:

::

    ceph fs set <filesystem name> allow_multimds 1

Note that the default size of the active mds cluster (``max_mds``) is
still set to 1 initially.
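
To actually run more than one active daemon, raise ``max_mds`` on the
filesystem afterwards (the target of 2 below is only an example)::

    ceph fs set <filesystem name> max_mds 2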