.TH MKCEPHFS 8
.SH NAME
mkcephfs \- create a ceph file system
.SH SYNOPSIS
.B mkcephfs
[ \fB\-c\fP\fI ceph.conf\fP ]
[ \fB\-\-mkbtrfs\fP ]
[ \fB\-a\fR, \fB\-\-allhosts\fP [ \fB\-k\fI /path/to/admin.keyring\fP ] ]
.SH DESCRIPTION
.B mkcephfs
is used to create an empty Ceph file system, possibly spanning
multiple hosts. The \fIceph.conf\fP file describes the composition of
the entire Ceph cluster, including which hosts are participating,
which daemons run where, and which paths are used to store file system
data or metadata.
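.P
For illustration, a \fIceph.conf\fP for a small cluster might look something
like the following (the host names and data paths are placeholders, not
defaults):
.IP
.nf
[mon.a]
        host = monhost
        mon data = /data/mon.a
[mds.a]
        host = mdshost
[osd.0]
        host = osdhost
        osd data = /data/osd.0
.fi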
.P
The
.B mkcephfs
tool can be used in two ways. If \fB\-a\fR is used, it will use ssh
and scp to connect to remote hosts on your behalf and set up the
entire cluster. This is the easiest approach, but it can also be
inconvenient (if ssh is not configured to connect without prompting for
passwords) or slow (if you have a large cluster).
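.P
For example, assuming passwordless ssh access to every host named in
\fIceph.conf\fP, the whole cluster can be set up in one step:
.IP
.nf
master# mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring
.fi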
.P
Alternatively, you can run each setup phase manually. First, you need to prepare
a monmap that will be shared by each node:
.IP
.nf
# prepare
master# mkdir /tmp/foo
master# mkcephfs -c /etc/ceph/ceph.conf \\
        --prepare-monmap -d /tmp/foo
.fi
.P
Share the /tmp/foo directory with other nodes in whatever way is convenient for you. On each
OSD and MDS node,
.IP
.nf
osdnode# mkcephfs --init-local-daemons osd -d /tmp/foo
mdsnode# mkcephfs --init-local-daemons mds -d /tmp/foo
.fi
.P
Collect the contents of the /tmp/foo directories back onto a single node, and then
.IP
.nf
master# mkcephfs --prepare-mon -d /tmp/foo
.fi
.P
Finally, distribute /tmp/foo to all monitor nodes and, on each of those nodes,
.IP
.nf
monnode# mkcephfs --init-local-daemons mon -d /tmp/foo
.fi
.SH OPTIONS
.TP
\fB\-a\fR, \fB\-\-allhosts\fR
Performs the necessary initialization steps on all hosts in the cluster,
executing commands via SSH.
.TP
\fB\-c\fI ceph.conf\fR, \fB\-\-conf=\fIceph.conf\fR
Use the given conf file instead of the default \fI/etc/ceph/ceph.conf\fP.
.TP
\fB\-k\fI /path/to/keyring\fR
When \fB\-a\fR is used, we can specify a location to copy the
client.admin keyring, which is used to administer the cluster. The
default is \fI/etc/ceph/keyring\fP (or whatever is specified in the
config file).
.TP
\fB\-\-mkbtrfs\fR
Create and mount any btrfs file systems specified in the
\fBceph.conf\fP for OSD data storage using \fBmkfs.btrfs\fP. The
"btrfs devs" and (if it differs from
"osd data") "btrfs path" options must be defined.
.SH SUBCOMMANDS
The sub-commands performed during cluster setup can be run individually with
.TP
\fB\-\-prepare\-monmap\fR \fB\-d \fIdir\fR \fB\-c \fIceph.conf\fR
Create an initial monmap with a random fsid/uuid and store it and
the \fIceph.conf\fR in \fIdir\fR.
.TP
\fB\-\-init\-local\-daemons\fR \fItype\fR \fB\-d \fIdir\fR
Initialize any daemons of type \fItype\fR on the local host using the
monmap in \fIdir\fR. For types \fIosd\fR and \fImds\fR, the resulting
authentication keys will be placed in \fIdir\fR. For type \fImon\fR,
the initial data files generated by \fB\-\-prepare\-mon\fR (below) are
expected in \fIdir\fR.
|
.TP
|
|
\fB\-\-prepare\-mon\fR \fB\-d \fIdir\fB
|
|
Prepare the initial monitor data based on the monmap, OSD, and MDS
|
|
authentication keys collected in \fIdir\fR, and put the result in
|
|
\fIdir\fR.
|
|
|
|
.SH AVAILABILITY
.B mkcephfs
is part of the Ceph distributed file system. Please refer to the Ceph wiki at
http://ceph.newdream.net/wiki for more information.
.SH SEE ALSO
.BR ceph (8),
.BR monmaptool (8),
.BR osdmaptool (8),
.BR crushmaptool (8)