.\" Man page generated from reStructuredText.
.
.TH "CEPH-SYN" "8" "December 09, 2013" "dev" "Ceph"
.SH NAME
ceph-syn \- ceph synthetic workload generator
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.SH SYNOPSIS
.nf
\fBceph\-syn\fP [ \-m \fImonaddr\fP:\fIport\fP ] \-\-syn \fIcommand\fP \fI\&...\fP
.fi
.sp
.SH DESCRIPTION
.sp
\fBceph\-syn\fP is a simple synthetic workload generator for the Ceph
distributed file system. It uses the userspace client library to
generate simple workloads against a currently running file system. The
file system need not be mounted via ceph\-fuse(8) or the kernel client.
.sp
One or more \fB\-\-syn\fP command arguments specify the particular
workload, as documented below.
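.sp
For example, a minimal invocation (the monitor address shown is a
placeholder) that walks an existing file system might look like:
.sp
.nf
ceph\-syn \-m 192.168.0.1:6789 \-\-syn walk
.fi
.sp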
.SH OPTIONS
.INDENT 0.0
.TP
.B \-d
Detach from console and daemonize after startup.
.UNINDENT
.INDENT 0.0
.TP
.B \-c ceph.conf, \-\-conf=ceph.conf
Use \fIceph.conf\fP configuration file instead of the default
\fB/etc/ceph/ceph.conf\fP to determine monitor addresses during
startup.
.UNINDENT
.INDENT 0.0
.TP
.B \-m monaddress[:port]
Connect to specified monitor (instead of looking through
\fBceph.conf\fP).
.UNINDENT
.INDENT 0.0
.TP
.B \-\-num_client num
Run \fInum\fP different clients, each in a separate thread.
.UNINDENT
.INDENT 0.0
.TP
.B \-\-syn workloadspec
Run the given workload. May be specified as many times as
needed. Workloads will normally run sequentially.
.UNINDENT
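.sp
These options can be combined on a single command line. As an
illustrative sketch (the monitor address, client count, and workload
arguments are placeholders), the following runs four client threads,
each building a directory tree:
.sp
.nf
ceph\-syn \-m 192.168.0.1:6789 \-\-num_client 4 \-\-syn makedirs 5 10 3
.fi
.sp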
.SH WORKLOADS
.sp
Each workload should be preceded by \fB\-\-syn\fP on the command
line. This is not a complete list.
.INDENT 0.0
.TP
.B \fBmksnap\fP \fIpath\fP \fIsnapname\fP
Create a snapshot called \fIsnapname\fP on \fIpath\fP\&.
.TP
.B \fBrmsnap\fP \fIpath\fP \fIsnapname\fP
Delete the snapshot called \fIsnapname\fP on \fIpath\fP\&.
.TP
.B \fBrmfile\fP \fIpath\fP
Delete/unlink \fIpath\fP\&.
.TP
.B \fBwritefile\fP \fIsizeinmb\fP \fIblocksize\fP
Create a file, named after our client id, that is \fIsizeinmb\fP MB,
writing it in \fIblocksize\fP chunks.
.TP
.B \fBreadfile\fP \fIsizeinmb\fP \fIblocksize\fP
Read a file, named after our client id, that is \fIsizeinmb\fP MB,
reading it in \fIblocksize\fP chunks.
.TP
.B \fBrw\fP \fIsizeinmb\fP \fIblocksize\fP
Write a file, then read it back, as above.
.TP
.B \fBmakedirs\fP \fInumsubdirs\fP \fInumfiles\fP \fIdepth\fP
Create a hierarchy of directories that is \fIdepth\fP levels deep. Give
each directory \fInumsubdirs\fP subdirectories and \fInumfiles\fP files.
.TP
.B \fBwalk\fP
Recursively walk the file system (like \fBfind\fP(1)).
.UNINDENT
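.sp
Workloads may be chained by repeating \fB\-\-syn\fP, and normally run
in sequence. For example (sizes are illustrative, and the unit of
\fIblocksize\fP is assumed here to be bytes):
.sp
.nf
ceph\-syn \-\-syn writefile 64 16384 \-\-syn readfile 64 16384
.fi
.sp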
.SH AVAILABILITY
.sp
\fBceph\-syn\fP is part of the Ceph distributed storage system. Please refer to
the Ceph documentation at \fI\%http://ceph.com/docs\fP for more information.
.SH SEE ALSO
.sp
\fBceph\fP(8),
\fBceph\-fuse\fP(8)
.SH COPYRIGHT
2010-2013, Inktank Storage, Inc. and contributors. Licensed under Creative Commons BY-SA
.\" Generated by docutils manpage writer.
.