doc: update CephFS Quick Start doc

Also, skip the details about CephX user's keyring and monitor's socket
since the kernel driver can figure out these details automatically now.

Fixes: https://tracker.ceph.com/issues/41872
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Rishabh Dave 2019-09-16 18:48:49 +05:30
parent e5b766ed46
commit cd3e0acb8d


@ -3,117 +3,170 @@
===================
To use the :term:`CephFS` Quick Start guide, you must have executed the
procedures in the `Storage Cluster Quick Start`_ guide first. Execute this
quick start on the admin host.
Prerequisites
=============
#. Verify that you have an appropriate version of the Linux kernel.
See `OS Recommendations`_ for details. ::
lsb_release -a
uname -r
#. On the admin node, use ``ceph-deploy`` to install Ceph on your
``ceph-client`` node. ::
ceph-deploy install ceph-client
#. Optionally, if you want a FUSE-mounted file system, you will need to
install the ``ceph-fuse`` package as well; see the example after this list.
#. Ensure that the :term:`Ceph Storage Cluster` is running and in an ``active +
clean`` state. ::
ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]
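For the optional ``ceph-fuse`` package mentioned above, a minimal sketch,
assuming a Debian-based client with the Ceph repositories already set up by
``ceph-deploy`` (use ``yum``/``dnf`` on RPM-based distributions)::

sudo apt-get install ceph-fuse   # package name is the same on RPM-based distros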
Create a File System
====================
You have already created an MDS (`Storage Cluster Quick Start`_) but it will not
become active until you create some pools and a file system. See
:doc:`/cephfs/createfs`. ::
ceph osd pool create cephfs_data 32
ceph osd pool create cephfs_meta 32
ceph fs new mycephfs cephfs_meta cephfs_data
.. note:: In case you have multiple Ceph applications and/or have multiple
CephFSs on the same cluster, it would be easier to name your pools as
<application>.<fs-name>.<pool-name>. In that case, the above pools would
be named cephfs.mycephfs.data and cephfs.mycephfs.meta.
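Once the pools and the file system exist, you can optionally verify that the
file system is listed and that the MDS has gone active; these are standard
status commands rather than part of this quick start::

ceph fs ls      # should list mycephfs with its metadata and data pools
ceph mds stat   # should eventually show the MDS as active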
Quick word about Pools and PGs
------------------------------
Replication Number/Pool Size
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since the default replication number/size is 3, you'd need 3 OSDs to get
``active+clean`` for all PGs. Alternatively, you may change the replication
number for the pools to match the number of OSDs::
ceph osd pool set cephfs_data size {number-of-osds}
ceph osd pool set cephfs_meta size {number-of-osds}
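To confirm the change, you can read back the replication size of each pool
(an optional check)::

ceph osd pool get cephfs_data size
ceph osd pool get cephfs_meta size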
Usually, setting ``pg_num`` to 32 gives a perfectly healthy cluster. To pick
an appropriate value for ``pg_num``, refer to `Placement Group`_. You can also
use the pg_autoscaler plugin instead. Introduced in the Nautilus release, it
can automatically increase/decrease the value of ``pg_num``; refer to
`Placement Group`_ to find out more about it.
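As a rough sketch of the autoscaler route, assuming a Nautilus or later
cluster, the module can be enabled and switched on per pool like this::

ceph mgr module enable pg_autoscaler
ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_meta pg_autoscale_mode on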
When all OSDs are on the same node...
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
And, in case you have deployed all of the OSDs on the same node, you would need
to create a new CRUSH rule to replicate data across OSDs and set the rule on the
CephFS pools, since the default CRUSH rule is to replicate data across
different nodes::
ceph osd crush rule create-replicated rule_foo default osd
ceph osd pool set cephfs_data crush_rule rule_foo
ceph osd pool set cephfs_meta crush_rule rule_foo
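To double-check that the pools now use the new rule, you can list the CRUSH
rules and query a pool (optional)::

ceph osd crush rule ls
ceph osd pool get cephfs_data crush_rule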
Using Erasure Coded pools
^^^^^^^^^^^^^^^^^^^^^^^^^
You may also use Erasure Coded pools, which can be more efficient and
cost-saving since they allow striping object data across OSDs and storing
these stripes along with encoded redundancy information. The number of OSDs
across which the data is striped is ``k`` and the number of coding
(redundancy) chunks is ``m``. You'll need to pick these values before
creating the CephFS pools. The following commands create an erasure code
profile, create a pool that'll use it, enable overwrites on the pool and add
it to the file system as an extra data pool::
ceph osd erasure-code-profile set ec-42-profile k=4 m=2 crush-failure-domain=host crush-device-class=ssd
ceph osd pool create cephfs_data_ec42 64 erasure ec-42-profile
ceph osd pool set cephfs_data_ec42 allow_ec_overwrites true
ceph fs add_data_pool mycephfs cephfs_data_ec42
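If you want to verify the profile and the new pool before using them, the
following standard commands show their details (optional)::

ceph osd erasure-code-profile get ec-42-profile
ceph osd pool ls detail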
You can also mark directories so that they are only stored on certain pools::
setfattr -n ceph.dir.layout -v pool=cephfs_data_ec42 /mnt/mycephfs/logs
This way you can choose the replication strategy for each directory on your
Ceph file system.
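To read a directory's layout back, for instance to confirm the pool set
above, ``getfattr`` can be used (this assumes the ``attr`` package providing
``getfattr`` is installed on the client)::

getfattr -n ceph.dir.layout /mnt/mycephfs/logs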
.. note:: Erasure Coded pools cannot be used for CephFS metadata pools.
Erasure coded pools were introduced in Firefly and can be used directly by
CephFS from Luminous onwards. Refer to `this article <https://ceph.io/community/new-luminous-erasure-coding-rbd-cephfs/>`_
by Sage Weil to understand EC, its background, limitations and other details
in Ceph's context. Read more about `Erasure Code`_ here.
Mounting the File System
========================
Using Kernel Driver
-------------------
The command to mount CephFS using the kernel driver looks like this::
sudo mount -t ceph :{path-to-be-mounted} {mount-point} -o name={user-name}
sudo mount -t ceph :/ /mnt/mycephfs -o name=admin # usable version
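If the mount point does not exist yet, create it first::

sudo mkdir /mnt/mycephfs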
``{path-to-be-mounted}`` is the path within CephFS that will be mounted,
``{mount-point}`` is the point in your file system upon which CephFS will be
mounted and ``{user-name}`` is the name of the CephX user that has the
authorization to mount CephFS on the machine. The following command is the
extended form, however these extra details are automatically figured out by
the ``mount.ceph`` helper program::
sudo mount -t ceph {ip-address-of-MON}:{port-number-of-MON}:{path-to-be-mounted} -o name={user-name},secret={secret-key} {mount-point}
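If you want the mount to persist across reboots, an ``/etc/fstab`` entry
along the following lines can be used; this is a sketch only, adjust the
mount point and options to your setup and see the `mount.ceph man page`_ for
the exact format::

# CephFS kernel mount, resolved via mount.ceph and the local ceph.conf/keyring
:/     /mnt/mycephfs    ceph    name=admin,noatime,_netdev    0    2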
If you have multiple file systems on your cluster, you would need to pass the
``mds_namespace={fs-name}`` option via the ``-o`` option of the ``mount`` command::
sudo mount -t ceph :/ /mnt/kcephfs2 -o name=admin,mds_namespace=mycephfs2
Refer to the `mount.ceph man page`_ and `Mount CephFS using Kernel Driver`_ to
read more about this.
Using FUSE
----------
To mount CephFS using FUSE (Filesystem in User Space) run::
sudo ceph-fuse /mnt/mycephfs
To mount a particular directory within CephFS you can use ``-r``::
sudo ceph-fuse -r {path-to-be-mounted} /mnt/mycephfs
If you have multiple file systems on your cluster you would need to pass
``--client_mds_namespace {fs-name}`` to the ``ceph-fuse`` command::
sudo ceph-fuse /mnt/mycephfs2 --client_mds_namespace mycephfs2
Refer to the `ceph-fuse man page`_ and `Mount CephFS using FUSE`_ to read more
about this.
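To unmount CephFS again, a plain ``umount`` of the mount point is enough for
both the kernel driver and FUSE (``fusermount -u`` also works for FUSE
mounts)::

sudo umount /mnt/mycephfs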
Additional Information
======================
See `CephFS`_ for additional information. See `Troubleshooting`_ if you
encounter trouble.
.. _Storage Cluster Quick Start: ../quick-ceph-deploy
.. _CephFS: ../../cephfs/
.. _Troubleshooting: ../../cephfs/troubleshooting
.. _OS Recommendations: ../os-recommendations
.. _Placement Group: ../../rados/operations/placement-groups
.. _mount.ceph man page: ../../man/8/mount.ceph
.. _Mount CephFS using Kernel Driver: ../../cephfs/kernel
.. _ceph-fuse man page: ../../man/8/ceph-fuse
.. _Mount CephFS using FUSE: ../../cephfs/fuse
.. _Erasure Code: ../../rados/operations/erasure-code