=======================================
 mkcephfs -- create a ceph file system
=======================================

.. program:: mkcephfs

Synopsis
========

| **mkcephfs** -c *ceph.conf* [ --mkbtrfs ] [ -a, --allhosts [ -k */path/to/admin.keyring* ] ]


Description
===========

**mkcephfs** is used to create an empty Ceph file system, possibly
spanning multiple hosts. The ceph.conf file describes the composition
of the entire Ceph cluster, including which hosts are participating,
which daemons run where, and which paths are used to store file
system data or metadata.

The mkcephfs tool can be used in two ways. If ``-a`` is used, it will
use ssh and scp to connect to remote hosts on your behalf and set up
the entire cluster. This is the easiest solution, but it can also be
inconvenient (if ssh cannot connect without prompting for a password)
or slow (if you have a large cluster).

Alternatively, you can run each setup phase manually. First, you need
to prepare a monmap that will be shared by each node::

        master# mkdir /tmp/foo
        master# mkcephfs -c /etc/ceph/ceph.conf \
                --prepare-monmap -d /tmp/foo

Share the ``/tmp/foo`` directory with other nodes in whatever way is
convenient for you. On each OSD and MDS node::

        osdnode# mkcephfs --init-local-daemons osd -d /tmp/foo
        mdsnode# mkcephfs --init-local-daemons mds -d /tmp/foo

Collect the contents of the ``/tmp/foo`` directories back onto a
single node, and then::

        master# mkcephfs --prepare-mon -d /tmp/foo

Finally, distribute ``/tmp/foo`` to all monitor nodes and, on each of
those nodes::

        monnode# mkcephfs --init-local-daemons mon -d /tmp/foo

Options
=======

.. option:: -a, --allhosts

   Performs the necessary initialization steps on all hosts in the
   cluster, executing commands via SSH.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use the given conf file instead of the default
   ``/etc/ceph/ceph.conf``.
.. option:: -k /path/to/keyring

   When ``-a`` is used, we can specify a location to copy the
   client.admin keyring, which is used to administer the cluster. The
   default is ``/etc/ceph/keyring`` (or whatever is specified in the
   config file).

.. option:: --mkbtrfs

   Create and mount any btrfs file systems specified in the ceph.conf
   for OSD data storage using mkfs.btrfs. The ``btrfs devs`` and (if
   it differs from ``osd data``) ``btrfs path`` options must be
   defined.

   **NOTE** Btrfs is still considered experimental. This option can
   ease some configuration pain, but the use of btrfs is not required
   when ``osd data`` directories are mounted manually by the
   administrator.

   **NOTE** This option is deprecated and will be removed in a future
   release.

.. option:: --no-copy-conf

   By default, mkcephfs with ``-a`` will copy the new configuration
   to ``/etc/ceph/ceph.conf`` on each node in the cluster. This
   option disables that behavior.

Subcommands
===========

The sub-commands performed during cluster setup can be run
individually with:

.. option:: --prepare-monmap -d dir -c ceph.conf

   Create an initial monmap with a random fsid/uuid and store it and
   the ceph.conf in ``dir``.

.. option:: --init-local-daemons type -d dir

   Initialize any daemons of type ``type`` on the local host using
   the monmap in ``dir``. For types osd and mds, the resulting
   authentication keys will be placed in ``dir``. For type mon, the
   initial data files generated by ``--prepare-mon`` (below) are
   expected in ``dir``.

.. option:: --prepare-mon -d dir

   Prepare the initial monitor data based on the monmap, OSD, and MDS
   authentication keys collected in ``dir``, and put the result in
   ``dir``.

Availability
============

**mkcephfs** is part of the Ceph distributed file system. Please
refer to the Ceph documentation at http://ceph.com/docs for more
information.

See also
========

:doc:`ceph <ceph>`\(8),
:doc:`monmaptool <monmaptool>`\(8),
:doc:`osdmaptool <osdmaptool>`\(8),
:doc:`crushtool <crushtool>`\(8)
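
Example configuration
=====================

The cluster composition that **mkcephfs** reads from ceph.conf can be
illustrated with a minimal sketch. The section naming follows the
usual mon/osd/mds convention; the hostnames, address, and paths below
are hypothetical and must be adapted to your cluster::

        [global]
                ; keyring written by mkcephfs for cluster administration
                keyring = /etc/ceph/keyring

        [mon.a]
                host = monnode
                mon addr = 192.168.0.10:6789
                mon data = /var/lib/ceph/mon.a

        [osd.0]
                host = osdnode
                osd data = /var/lib/ceph/osd.0

        [mds.a]
                host = mdsnode

With ``-a``, mkcephfs connects to each ``host`` listed above over ssh
and initializes the daemons defined for it.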