.TH MKCEPHFS 8
.SH NAME
mkcephfs \- create a ceph file system
.SH SYNOPSIS
.B mkcephfs
[ \fB\-c\fP\fI ceph.conf\fP ]
[ \fB\-k\fI /path/to/admin.keyring\fP ]
[ \fB\-\-mkbtrfs\fP ]
[ \fB\-a\fR, \fB\-\-allhosts\fP ]
.SH DESCRIPTION
.B mkcephfs
is used to create an empty Ceph file system, possibly spanning
multiple hosts. The \fIceph.conf\fP file describes the composition of
the entire Ceph cluster, including which hosts are participating,
which daemons run where, and which paths are used to store file system
data or metadata.
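.P
For example, a minimal \fIceph.conf\fP might look like the following
(the host names and paths are illustrative only):
.IP
.nf
; example values only
[mon.a]
	host = mon-host-1
	mon data = /data/mon.a
[mds.a]
	host = mds-host-1
[osd.0]
	host = osd-host-1
	osd data = /data/osd.0
.fi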
.P
The
.B mkcephfs
tool can be used in two ways. If \fB\-a\fR is used, it will use ssh
and scp to connect to remote hosts on your behalf and do the setup of
the entire cluster. This is the easiest solution, but it can also be
inconvenient (if you cannot ssh to the remote hosts without being
prompted for a password) or slow (if you have a large cluster).
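.P
For example, assuming passwordless ssh access to every host named in
\fIceph.conf\fP, the whole cluster could be initialized with a single
command such as:
.IP
.nf
# the keyring path shown is the default location
master# mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring
.fi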
.P
Alternatively, you can run each setup phase manually. First, you need to prepare
a monmap that will be shared by each node:
.IP
.nf
# prepare
master# mkdir /tmp/foo
master# mkcephfs -c /etc/ceph/ceph.conf \\
	--prepare-monmap -d /tmp/foo
.fi
.P
Share the /tmp/foo directory with other nodes in whatever way is convenient for you. On each
OSD and MDS node,
.IP
.nf
osdnode# mkcephfs --init-local-daemons osd -d /tmp/foo
mdsnode# mkcephfs --init-local-daemons mds -d /tmp/foo
.fi
.P
Collect the contents of the /tmp/foo directories back onto a single node, and then
.IP
.nf
master# mkcephfs --prepare-mon -d /tmp/foo
.fi
.P
Finally, distribute /tmp/foo to all monitor nodes and, on each of those nodes,
.IP
.nf
monnode# mkcephfs --init-local-daemons mon -d /tmp/foo
.fi
.SH OPTIONS
.TP
\fB\-a\fR, \fB\-\-allhosts\fR
Performs the necessary initialization steps on all hosts in the cluster,
executing commands via SSH.
.TP
\fB\-c\fI ceph.conf\fR, \fB\-\-conf=\fIceph.conf\fR
Use the given conf file instead of the default \fI/etc/ceph/ceph.conf\fP.
.TP
\fB\-k\fI /path/to/keyring\fR
Location to write the client.admin keyring, which is used to administer
the cluster. The default is \fI/etc/ceph/keyring\fP.
.TP
\fB\-\-mkbtrfs\fR
Create and mount any btrfs file systems specified in the
\fBceph.conf\fP for OSD data storage using \fBmkfs.btrfs\fP. The
"btrfs devs" option must be defined, as must "btrfs path" if it
differs from "osd data".
.SH SUBCOMMANDS
The sub-commands performed during cluster setup can be run individually
with the following options.
.TP
\fB\-\-prepare\-monmap\fR \fB\-d\fR \fIdir\fR
Create an initial monmap with a random fsid/uuid and store it in
\fIdir\fR.
.TP
\fB\-\-init\-local\-daemons\fR \fItype\fR \fB\-d\fR \fIdir\fR
Initialize any daemons of type \fItype\fR on the local host using the
monmap in \fIdir\fR. For types \fIosd\fR and \fImds\fR, the resulting
authentication keys will be placed in \fIdir\fR. For type \fImon\fR,
the initial data files generated by \fB\-\-prepare\-mon\fR (below) are
expected in \fIdir\fR.
.TP
\fB\-\-prepare\-mon\fR \fB\-d\fR \fIdir\fR
Prepare the initial monitor data based on the monmap, OSD, and MDS
authentication keys collected in \fIdir\fR, and put the result in
\fIdir\fR.
.SH AVAILABILITY
.B mkcephfs
is part of the Ceph distributed file system. Please refer to the Ceph wiki at
http://ceph.newdream.net/wiki for more information.
.SH SEE ALSO
.BR ceph (8),
.BR monmaptool (8),
.BR osdmaptool (8),
.BR crushmaptool (8)