
Merge PR into master

* refs/pull/32522/head:
	doc/cephfs: improve wording in mount-prerequisites.rst
	doc: migrate best practices recommendations to relevant docs
	doc/cephfs: rename doc/cephfs/kernel.rst & doc/cephfs/fuse.rst

Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Patrick Donnelly 2020-01-21 15:15:12 -08:00
commit c74a261e5b
No known key found for this signature in database
GPG Key ID: 3A2A7E25BEA8AADB
7 changed files with 189 additions and 217 deletions

View File

@ -1,88 +0,0 @@
CephFS best practices
=====================
This guide provides recommendations for best results when deploying CephFS.
For the actual configuration guide for CephFS, please see the instructions
at :doc:`/cephfs/index`.
Which Ceph version?
-------------------
Use at least the Jewel (v10.2.0) release of Ceph. This is the first
release to include stable CephFS code and fsck/repair tools. Make sure
you are using the latest point release to get bug fixes.
Note that Ceph releases do not include a kernel; the kernel is versioned
and released separately. See below for guidance on choosing an
appropriate kernel version if you are using the kernel client
for CephFS.
Most stable configuration
-------------------------
Some features in CephFS are still experimental. See
:doc:`/cephfs/experimental-features` for guidance on these.
For the best chance of a happy, healthy file system, use a **single active MDS**
and **do not use snapshots**. Both of these are the default settings.
Note that creating multiple MDS daemons is fine, as these will simply be
used as standbys. However, for best stability you should avoid
adjusting ``max_mds`` upwards, as this would cause multiple MDS
daemons to be active at once.
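If you want to confirm how many active MDS daemons your file system allows, the
current ``max_mds`` value can be checked with the Ceph CLI; a minimal check
(the file system name is a placeholder)::
ceph fs get <file system name> | grep max_mds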
Which client?
-------------
The FUSE client is the most accessible and the easiest to upgrade to the
version of Ceph used by the storage cluster, while the kernel client will
often give better performance.
The clients do not always provide equivalent functionality; for example,
the FUSE client supports client-enforced quotas while the kernel client
does not.
When encountering bugs or performance issues, it is often instructive to
try using the other client, in order to find out whether the bug was
client-specific or not (and then to let the developers know).
Which kernel version?
---------------------
Because the kernel client is distributed as part of the Linux kernel (not
as part of packaged Ceph releases),
you will need to consider which kernel version to use on your client nodes.
Older kernels are known to include buggy Ceph clients, and may not support
features that more recent Ceph clusters support.
Remember that the "latest" kernel in a stable Linux distribution is likely
to be years behind the latest upstream Linux kernel where Ceph development
takes place (including bug fixes).
As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a
4.x kernel. If you absolutely have to use an older kernel, you should use
the FUSE client instead of the kernel client.
This advice does not apply if you are using a Linux distribution that
includes CephFS support, as in this case the distributor will be responsible
for backporting fixes to their stable kernel: check with your vendor.
Reporting issues
----------------
If you have identified a specific issue, please report it with as much
information as possible. Especially important information:
* Ceph versions installed on client and server
* Whether you are using the kernel or fuse client
* If you are using the kernel client, what kernel version?
* How many clients are in play, doing what kind of workload?
* If a system is 'stuck', is that affecting all clients or just one?
* Any ceph health messages
* Any backtraces in the ceph logs from crashes
If you are satisfied that you have found a bug, please file it on
`the tracker <http://tracker.ceph.com>`_. For more general queries please write
to the `ceph-users mailing list <http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com/>`_.

View File

@ -1,16 +1,17 @@
=====================
Experimental Features
=====================
CephFS includes a number of experimental features which are not fully stabilized
or qualified for users to turn on in real deployments. We generally do our best
to clearly demarcate these and fence them off so they cannot be used by mistake.
CephFS includes a number of experimental features which are not fully
stabilized or qualified for users to turn on in real deployments. We generally
do our best to clearly demarcate these and fence them off so they cannot be
used by mistake.
Some of these features are closer to being done than others, though. We describe
each of them with an approximation of how risky they are and briefly describe
what is required to enable them. Note that doing so will *irrevocably* flag maps
in the monitor as having once enabled this flag to improve debugging and
support processes.
Some of these features are closer to being done than others, though. We
describe each of them with an approximation of how risky they are and briefly
describe what is required to enable them. Note that doing so will
*irrevocably* flag maps in the monitor as having once enabled this flag to
improve debugging and support processes.
Inline data
-----------
@ -52,7 +53,7 @@ to 400 snapshots (http://tracker.ceph.com/issues/21420).
Snapshotting was blocked off with the ``allow_new_snaps`` flag prior to Mimic.
Multiple file systems within a Ceph cluster
Multiple File Systems within a Ceph Cluster
-------------------------------------------
Code was merged prior to the Jewel release which enables administrators
to create multiple independent CephFS file systems within a single Ceph cluster.
@ -62,9 +63,9 @@ are not yet fully qualified, and has security implications which are not all
apparent nor resolved.
There are no known bugs, but any failures which do result from having multiple
active file systems in your cluster will require manual intervention and, so far,
will not have been experienced by anybody else -- knowledgeable help will be
extremely limited. You also probably do not have the security or isolation
active file systems in your cluster will require manual intervention and, so
far, will not have been experienced by anybody else -- knowledgeable help will
be extremely limited. You also probably do not have the security or isolation
guarantees you want or think you have upon doing so.
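For reference, a second file system cannot be created until the relevant
cluster flag has been set explicitly; a sketch of doing so with the
``ceph fs flag set`` command::
ceph fs flag set enable_multiple true --yes-i-really-mean-it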
Note that snapshots and multiple file systems are *not* tested in combination
@ -86,11 +87,9 @@ Directory Fragmentation
-----------------------
Directory fragmentation was considered experimental prior to the *Luminous*
(12.2.x). It is now enabled by default on new file systems. To enable directory
fragmentation on file systems created with older versions of Ceph, set
the ``allow_dirfrags`` flag on the file system:
::
(12.2.x). It is now enabled by default on new file systems. To enable
directory fragmentation on file systems created with older versions of Ceph,
set the ``allow_dirfrags`` flag on the file system::
ceph fs set <file system name> allow_dirfrags 1
@ -103,12 +102,9 @@ multiple active metadata servers is now permitted by default on new
file systems.
File Systems created with older versions of Ceph still require explicitly
enabling multiple active metadata servers as follows:
::
enabling multiple active metadata servers as follows::
ceph fs set <file system name> allow_multimds 1
Note that the default size of the active mds cluster (``max_mds``) is
still set to 1 initially.
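Once multiple active metadata servers have been allowed, the active count is
raised by adjusting ``max_mds``; for example (the file system name is a
placeholder)::
ceph fs set <file system name> max_mds 2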

View File

@ -44,15 +44,20 @@ For most deployments of Ceph, setting up a CephFS file system is as simple as:
ceph fs volume create <fs name>
The Ceph `Orchestrator`_ will automatically create and configure MDS for your
file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please :doc:`deploy MDS manually
as needed </cephfs/add-remove-mds>`.
The Ceph `Orchestrator`_ will automatically create and configure MDS for
your file system if the back-end deployment technology supports it (see
`Orchestrator deployment table`_). Otherwise, please `deploy MDS manually
as needed`_.
Finally, to mount CephFS on your client nodes, setup a :doc:`FUSE mount
</cephfs/fuse>` or :doc:`kernel mount </cephfs/kernel>`. Additionally, a
command-line shell utility is available for interactive access or scripting via
the :doc:`cephfs-shell </cephfs/cephfs-shell>`.
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
for interactive access or scripting via the `cephfs-shell`_.
.. _Orchestrator: ../mgr/orchestrator_cli
.. _deploy MDS manually as needed: add-remove-mds
.. _Orchestrator deployment table: ../mgr/orchestrator_cli/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
.. _cephfs-shell: cephfs-shell
.. raw:: html
@ -70,7 +75,6 @@ Administration
:maxdepth: 1
:hidden:
Deployment best practices <best-practices>
Create a CephFS file system <createfs>
Administrative commands <administration>
Provision/Add/Remove MDS(s) <add-remove-mds>
@ -102,9 +106,10 @@ Mounting CephFS
:hidden:
Client Configuration Settings <client-config-ref>
Client authentication <client-auth>
Mount CephFS using Kernel Driver <kernel>
Mount CephFS using FUSE <fuse>
Client Authentication <client-auth>
Mount CephFS: Prerequisites <mount-prerequisites>
Mount CephFS using Kernel Driver <mount-using-kernel-driver>
Mount CephFS using FUSE <mount-using-fuse>
Use the CephFS Shell <cephfs-shell>
Supported Features of Kernel Driver <kernel-features>
Manual: ceph-fuse <../../man/8/ceph-fuse>
@ -200,7 +205,3 @@ Additional Details
Experimental Features <experimental-features>
Using Ceph with Hadoop <hadoop>
.. _Orchestrator: ../mgr/orchestrator_cli
.. _Orchestrator deployment table: ..//mgr/orchestrator_cli/#current-implementation-status

View File

@ -0,0 +1,70 @@
Mount CephFS: Prerequisites
===========================
You can use CephFS by mounting it on your local filesystem or by using
`cephfs-shell`_. CephFS can be mounted `using kernel`_ as well as `using
FUSE`_. Both have their own advantages. Read the following section to
understand more about both of these ways to mount CephFS.
Which CephFS Client?
--------------------
The FUSE client is the most accessible and the easiest to upgrade to the
version of Ceph used by the storage cluster, while the kernel client will
often give better performance.
When encountering bugs or performance issues, it is often instructive to
try using the other client, in order to find out whether the bug was
client-specific or not (and then to let the developers know).
General Prerequisites for Mounting CephFS
-----------------------------------------
Before mounting CephFS, ensure that the client host (where CephFS will be
mounted and used) has a copy of the Ceph configuration file (i.e.
``ceph.conf``) and a keyring of the CephX user that has permission to access
the MDS. Both of these files must already be present on the host where the
Ceph MON resides.
#. Generate a minimal conf file for the client host and place it at a
standard location::
# on client host
mkdir -p -m 755 /etc/ceph
ssh {user}@{mon-host} "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
Alternatively, you may copy the conf file. But the above method generates
a conf with minimal details which is usually sufficient. For more
information, see `Client Authentication`_ and :ref:`bootstrap-options`.
#. Ensure that the conf has appropriate permissions::
chmod 644 /etc/ceph/ceph.conf
#. Create a CephX user and get its secret key::
ssh {user}@{mon-host} "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
In the above command, replace ``cephfs`` with the name of your CephFS, ``foo``
with the name you want for your CephX user, and ``/`` with the path within your
CephFS to which you want to allow the client host access; ``rw`` grants both
read and write permissions. Alternatively, you may copy the Ceph keyring from
the MON host to the client host at ``/etc/ceph``, but creating a keyring
specific to the client host is better. While creating a CephX keyring/client,
using the same client name across multiple machines is perfectly fine.
.. note:: If you are prompted for a password twice while running any of the
above commands, run ``sudo ls`` (or any other trivial command with
sudo) immediately before these commands.
#. Ensure that the keyring has appropriate permissions::
chmod 600 /etc/ceph/ceph.client.foo.keyring
.. note:: There might be a few more prerequisites specific to kernel and FUSE
mounts; please check the respective mount documents. A consolidated
example of the steps above is shown below.
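Putting the steps above together, a consolidated session might look like the
following sketch; the MON host ``mon1``, the CephX user ``foo`` and the file
system name ``cephfs`` are illustrative placeholders::
# on client host; mon1, foo and cephfs are illustrative placeholders
mkdir -p -m 755 /etc/ceph
ssh {user}@mon1 "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
chmod 644 /etc/ceph/ceph.conf
ssh {user}@mon1 "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
chmod 600 /etc/ceph/ceph.client.foo.keyring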
.. _Client Authentication: ../client-auth
.. _cephfs-shell: ../cephfs-shell
.. _using kernel: ../mount-using-kernel-driver
.. _using FUSE: ../mount-using-fuse

View File

@ -2,57 +2,33 @@
Mount CephFS using FUSE
========================
Prerequisite
------------
Before mounting CephFS, ensure that the client host (where CephFS has to be
mounted and used) has a copy of the Ceph configuration file (i.e.
``ceph.conf``) and a keyring of the CephX user that has CAPS for the Ceph MDS.
Both of these files must be present on the host where the Ceph MON resides.
`ceph-fuse`_ is an alternative way of mounting CephFS, although it mounts it
in userspace. Therefore, the performance of FUSE can be relatively lower, but
FUSE clients can be more manageable, especially while upgrading CephFS.
#. Generate a minimal conf for the client host. The conf file should be
placed at ``/etc/ceph``::
Prerequisites
=============
# on client host
mkdir /etc/ceph
ssh {user}@{mon-host} "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
Complete General Prerequisites
------------------------------
Go through the prerequisites required by both kernel and FUSE mounts, on the
`Mount CephFS: Prerequisites`_ page.
Alternatively, you may copy the conf file. But the method which generates
the minimal config is usually sufficient. For more information, see
:ref:`bootstrap-options`.
``fuse.conf`` option
--------------------
#. Ensure that the conf has appropriate permissions::
chmod 644 /etc/ceph/ceph.conf
#. Create the CephX user and get its secret key::
ssh {user}@{mon-host} "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
In the above command, replace ``cephfs`` with the name of your CephFS, ``foo``
with the name you want for your CephX user, and ``/`` with the path within your
CephFS to which you want to allow the client host access; ``rw`` grants both
read and write permissions. Alternatively, you may copy the Ceph keyring from
the MON host to the client host at ``/etc/ceph``, but creating a keyring
specific to the client host is better. While creating a CephX keyring/client,
using the same client name across multiple machines is perfectly fine.
.. note:: If you are prompted for a password twice while running any of the
above commands, run ``sudo ls`` (or any other trivial command with sudo)
immediately before these commands.
#. Ensure that the keyring has appropriate permissions::
chmod 600 /etc/ceph/ceph.client.foo.keyring
#. If you are mounting CephFS with FUSE as a non-root (non-superuser) user,
you will need to add the option ``user_allow_other`` to ``/etc/fuse.conf``
(under no section in the conf).
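For example, after this step ``/etc/fuse.conf`` would simply contain the bare
option on a line of its own (any pre-existing contents of the file are left as
they are)::
# /etc/fuse.conf
user_allow_other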
Synopsis
--------
========
In general, the command to mount CephFS via FUSE looks like this::
ceph-fuse {mountpoint} {options}
Mounting CephFS
---------------
===============
To FUSE-mount the Ceph file system, use the ``ceph-fuse`` command::
mkdir /mnt/mycephfs
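# illustrative only: the CephX user foo from the prerequisites is assumed;
# ceph-fuse picks up its keyring for client.foo from /etc/ceph
ceph-fuse --id foo /mnt/mycephfs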
@ -86,7 +62,7 @@ If you have more than one FS on your Ceph cluster, use the option
You may also add a ``client_mds_namespace`` setting to your ``ceph.conf``
Unmounting CephFS
-----------------
=================
Use ``umount`` to unmount CephFS like any other FS::
@ -96,7 +72,7 @@ Use ``umount`` to unmount CephFS like any other FS::
executing this command.
Persistent Mounts
-----------------
=================
To mount CephFS as a file system in user space, add the following to ``/etc/fstab``::
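# illustrative sketch only -- the CephX user foo and the mount point are assumptions
none    /mnt/mycephfs  fuse.ceph  ceph.id=foo,_netdev,defaults  0 0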
@ -129,3 +105,4 @@ manual for more options it can take. For troubleshooting, see
:ref:`ceph_fuse_debugging`.
.. _ceph-fuse: ../../man/8/ceph-fuse/#options
.. _Mount CephFS\: Prerequisites: ../mount-prerequisites

View File

@ -2,64 +2,58 @@
Mount CephFS using Kernel Driver
=================================
Prerequisite
------------
Before mounting CephFS, copy the Ceph configuration file and keyring for the
CephX user that has CAPS to mount MDS to the client host (where CephFS will be
mounted and used) from the host where Ceph Monitor resides. Please note that
it's possible to mount CephFS without conf and keyring, but in that case, you
would have to pass the MON's socket and CephX user's secret key manually to
every mount command you run.
The CephFS kernel driver is part of the Linux kernel. It allows mounting
CephFS as a regular file system with native kernel performance. It is the
client of choice for most use-cases.
#. Generate a minimal conf file for the client host and place it at a
standard location::
Prerequisites
=============
# on client host
mkdir /etc/ceph
ssh {user}@{mon-host} "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
Complete General Prerequisites
------------------------------
Go through the prerequisites required by both kernel and FUSE mounts, on the
`Mount CephFS: Prerequisites`_ page.
Alternatively, you may copy the conf file. But the above method creates a
conf with minimum details which is better.
#. Ensure that the conf file has appropriate permissions::
chmod 644 /etc/ceph/ceph.conf
#. Create a CephX user and get its secret key::
ssh {user}@{mon-host} "sudo ceph fs authorize cephfs client.foo / rw" | sudo tee /etc/ceph/ceph.client.foo.keyring
In the above command, replace ``cephfs`` with the name of your CephFS, ``foo``
with the name you want for the CephX user, and ``/`` with the path within your
CephFS to which you want to allow the client access; ``rw`` grants both read
and write permissions. Alternatively, you may copy the Ceph keyring from the
MON host to the client host at ``/etc/ceph``, but creating a keyring specific
to the client host is better.
.. note:: If you are prompted for a password twice while running any of the
above commands, run ``sudo ls`` (or any other trivial command with sudo)
immediately before these commands.
#. Ensure that the keyring has appropriate permissions::
chmod 600 /etc/ceph/ceph.client.foo.keyring
#. The ``mount.ceph`` helper is installed with the Ceph packages. If for some
reason installing these packages is not feasible and/or ``mount.ceph`` is not
present on the system, you can still mount CephFS, but you'll need to
explicitly pass the monitor addresses and the CephX user keyring. To verify
that it is installed, do::
Is the mount helper present?
----------------------------
The ``mount.ceph`` helper is installed by the Ceph packages. The helper passes
the monitor address(es) and CephX user keyring automatically, saving the Ceph
admin the effort of passing these details explicitly while mounting CephFS. If
the helper is not present on the client machine, CephFS can still be mounted
with the kernel driver, but only by passing these details explicitly to the
``mount`` command. To check whether the helper is present on your system, do::
stat /sbin/mount.ceph
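If the helper turns out to be missing, the monitor address and the CephX
secret have to be passed on the ``mount`` command line directly; a rough
sketch, with the monitor address, the user ``foo`` and its key as
placeholders::
mount -t ceph {mon-ip}:6789:/ /mnt/mycephfs -o name=foo,secret={key-of-client.foo}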
Which Kernel Version?
---------------------
Because the kernel client is distributed as part of the Linux kernel (not
as part of packaged Ceph releases), you will need to consider which kernel
version to use on your client nodes. Older kernels are known to include buggy
Ceph clients, and may not support features that more recent Ceph clusters
support.
Remember that the "latest" kernel in a stable Linux distribution is likely
to be years behind the latest upstream Linux kernel where Ceph development
takes place (including bug fixes).
As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a 4.x
kernel. If you absolutely have to use an older kernel, you should use the
FUSE client instead of the kernel client.
This advice does not apply if you are using a Linux distribution that
includes CephFS support, as in this case the distributor will be responsible
for backporting fixes to their stable kernel: check with your vendor.
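As a quick check, the kernel version a client node is running can be printed
with::
uname -r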
Synopsis
--------
========
In general, the command to mount CephFS via kernel driver looks like this::
mount -t ceph {device-string}:{path-to-mounted} {mount-point} -o {key-value-args} {other-args}
Mounting CephFS
---------------
===============
On Ceph clusters, CephX is enabled by default. Use the ``mount`` command to
mount CephFS with the kernel driver::
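# hedged sketch -- the mount point and the CephX user foo are illustrative
mkdir /mnt/mycephfs
mount -t ceph :/ /mnt/mycephfs -o name=foo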
@ -103,7 +97,7 @@ non-default FS on your local FS as follows::
mount -t ceph :/ /mnt/mycephfs2 -o name=fs,mds_namespace=mycephfs2
Unmounting CephFS
-----------------
=================
To unmount the Ceph file system, use the ``umount`` command as usual::
umount /mnt/mycephfs
@ -112,7 +106,7 @@ To unmount the Ceph file system, use the ``umount`` command as usual::
executing this command.
Persistent Mounts
------------------
==================
To mount CephFS in your file systems table as a kernel driver, add the
following to ``/etc/fstab``::
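# a sketch only -- the CephX user foo and the mount point are assumptions
:/  /mnt/mycephfs  ceph  name=foo,noatime,_netdev  0  0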
@ -132,5 +126,6 @@ manual for more options it can take. For troubleshooting, see
:ref:`kernel_mount_debugging`.
.. _fstab: ../fstab/#kernel-driver
.. _User Management: ../../rados/operations/user-management/
.. _Mount CephFS\: Prerequisites: ../mount-prerequisites
.. _mount.ceph: ../../man/8/mount.ceph/
.. _User Management: ../../rados/operations/user-management/

View File

@ -187,3 +187,24 @@ Dynamic Debugging
You can enable dynamic debug against the CephFS module.
Please see: https://github.com/ceph/ceph/blob/master/src/script/kcon_all.sh
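As a rough sketch of what that script does (assuming debugfs is mounted at the
usual ``/sys/kernel/debug``), dynamic debug output for the CephFS-related
kernel modules can be switched on like this::
# enable pr_debug output for the CephFS-related kernel modules
echo 'module ceph +p'    | sudo tee /sys/kernel/debug/dynamic_debug/control
echo 'module libceph +p' | sudo tee /sys/kernel/debug/dynamic_debug/control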
Reporting Issues
================
If you have identified a specific issue, please report it with as much
information as possible. Especially important information:
* Ceph versions installed on client and server
* Whether you are using the kernel or fuse client
* If you are using the kernel client, what kernel version?
* How many clients are in play, doing what kind of workload?
* If a system is 'stuck', is that affecting all clients or just one?
* Any ceph health messages
* Any backtraces in the ceph logs from crashes
If you are satisfied that you have found a bug, please file it on `the bug
tracker`_. For more general queries, please write to the `ceph-users mailing
list`_.
.. _the bug tracker: http://tracker.ceph.com
.. _ceph-users mailing list: http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com/