.\" Man page generated from reStructuredText.
.
.TH "RADOS" "8" "May 29, 2014" "dev" "Ceph"
.SH NAME
rados \- rados object storage utility
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.SH SYNOPSIS
.nf
\fBrados\fP [ \-m \fImonaddr\fP ] [ mkpool | rmpool \fIfoo\fP ] [ \-p | \-\-pool
\fIpool\fP ] [ \-s | \-\-snap \fIsnap\fP ] [ \-i \fIinfile\fP ] [ \-o \fIoutfile\fP ]
\fIcommand\fP ...
.fi
.sp
.SH DESCRIPTION
.sp
\fBrados\fP is a utility for interacting with a Ceph object storage
cluster (RADOS), part of the Ceph distributed storage system.
.SH OPTIONS
.INDENT 0.0
.TP
.B \-p pool, \-\-pool pool
Interact with the given pool. Required by most commands.
.UNINDENT
.INDENT 0.0
.TP
.B \-s snap, \-\-snap snap
Read from the given pool snapshot. Valid for all pool\-specific read operations.
.UNINDENT
.INDENT 0.0
.TP
.B \-i infile
Specify an input file to be passed along as a payload with the
command to the monitor cluster. This is only used for specific
monitor commands.
.UNINDENT
.INDENT 0.0
.TP
.B \-o outfile
Write any payload returned by the monitor cluster with its reply to
outfile. Only specific monitor commands (e.g. osd getmap) return a
payload.
.UNINDENT
.INDENT 0.0
.TP
.B \-c ceph.conf, \-\-conf=ceph.conf
Use ceph.conf configuration file instead of the default
/etc/ceph/ceph.conf to determine monitor addresses during startup.
.UNINDENT
.INDENT 0.0
.TP
.B \-m monaddress[:port]
Connect to specified monitor (instead of looking through ceph.conf).
.UNINDENT
.SH GLOBAL COMMANDS
.INDENT 0.0
.TP
.B \fBlspools\fP
List object pools.
.TP
.B \fBdf\fP
Show utilization statistics, including disk usage (bytes) and object
counts, over the entire system and broken down by pool.
.TP
.B \fBmkpool\fP \fIfoo\fP
Create a pool with name \fIfoo\fP\&.
.TP
.B \fBrmpool\fP \fIfoo\fP [ \fIfoo\fP \-\-yes\-i\-really\-really\-mean\-it ]
Delete the pool \fIfoo\fP (and all its data). As a safety check, the
pool name must be given twice, together with the confirmation flag.
.UNINDENT
.SH POOL SPECIFIC COMMANDS
.INDENT 0.0
.TP
.B \fBget\fP \fIname\fP \fIoutfile\fP
Read object \fIname\fP from the cluster and write it to \fIoutfile\fP\&.
.TP
.B \fBput\fP \fIname\fP \fIinfile\fP
Write object \fIname\fP to the cluster with contents from \fIinfile\fP\&.
.TP
.B \fBrm\fP \fIname\fP
Remove object \fIname\fP\&.
.TP
.B \fBls\fP \fIoutfile\fP
List objects in the given pool and write the names to \fIoutfile\fP\&.
.TP
.B \fBlssnap\fP
List snapshots for the given pool.
.TP
.B \fBclonedata\fP \fIsrcname\fP \fIdstname\fP \-\-object\-locator \fIkey\fP
Clone object byte data from \fIsrcname\fP to \fIdstname\fP\&. Both objects must be stored with the locator key \fIkey\fP (usually either \fIsrcname\fP or \fIdstname\fP). Object attributes and omap keys are not copied or cloned.
.TP
.B \fBmksnap\fP \fIfoo\fP
Create pool snapshot named \fIfoo\fP\&.
.TP
.B \fBrmsnap\fP \fIfoo\fP
Remove pool snapshot named \fIfoo\fP\&.
.TP
.B \fBbench\fP \fIseconds\fP \fImode\fP [ \-b \fIobjsize\fP ] [ \-t \fIthreads\fP ]
Benchmark for \fIseconds\fP\&. The mode can be \fIwrite\fP, \fIseq\fP, or
\fIrand\fP\&. \fIseq\fP and \fIrand\fP are read benchmarks, either
sequential or random. Before running one of the reading benchmarks,
run a write benchmark with the \fI\-\-no\-cleanup\fP option. The default
object size is 4 MB, and the default number of simulated threads
(parallel writes) is 16.
.TP
.B \fBcleanup\fP
Remove the objects created by a previous \fBbench\fP \fIwrite\fP run
that used the \fI\-\-no\-cleanup\fP option.
.TP
.B \fBlistomapkeys\fP \fIname\fP
List all the keys stored in the object map of object \fIname\fP\&.
.TP
.B \fBlistomapvals\fP \fIname\fP
List all key/value pairs stored in the object map of object \fIname\fP\&.
The values are dumped in hexadecimal.
.TP
.B \fBgetomapval\fP \fIname\fP \fIkey\fP
Dump the hexadecimal value of \fIkey\fP in the object map of object \fIname\fP\&.
.TP
.B \fBsetomapval\fP \fIname\fP \fIkey\fP \fIvalue\fP
Set the value of \fIkey\fP in the object map of object \fIname\fP\&.
.TP
.B \fBrmomapkey\fP \fIname\fP \fIkey\fP
Remove \fIkey\fP from the object map of object \fIname\fP\&.
.TP
.B \fBgetomapheader\fP \fIname\fP
Dump the hexadecimal value of the object map header of object \fIname\fP\&.
.TP
.B \fBsetomapheader\fP \fIname\fP \fIvalue\fP
Set the value of the object map header of object \fIname\fP\&.
.UNINDENT
.SH EXAMPLES
.sp
To view cluster utilization:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados df
.ft P
.fi
.UNINDENT
.UNINDENT
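.sp
To create a pool named foo and later delete it (deletion requires
repeating the pool name together with the confirmation flag):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados mkpool foo
rados rmpool foo foo \-\-yes\-i\-really\-really\-mean\-it
.ft P
.fi
.UNINDENT
.UNINDENT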
.sp
To list the objects in pool foo, writing the names to stdout:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo ls \-
.ft P
.fi
.UNINDENT
.UNINDENT
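.sp
To do the same while connecting to a specific monitor (the address is
illustrative) instead of reading it from ceph.conf:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-m 192.168.0.1:6789 \-p foo ls \-
.ft P
.fi
.UNINDENT
.UNINDENT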
.sp
To write an object:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo put myobject blah.txt
.ft P
.fi
.UNINDENT
.UNINDENT
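.sp
To read the object back into a local file:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo get myobject blah.txt.copy
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
To clone the data of myobject to myobject.clone, assuming both objects
are stored under the locator key myobject (per the \fBclonedata\fP
description, the locator is usually the source or destination name):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo clonedata myobject myobject.clone \-\-object\-locator myobject
.ft P
.fi
.UNINDENT
.UNINDENT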
.sp
To create a snapshot:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo mksnap mysnap
.ft P
.fi
.UNINDENT
.UNINDENT
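.sp
To list the snapshots of pool foo:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo lssnap
.ft P
.fi
.UNINDENT
.UNINDENT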
.sp
To delete the object:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo rm myobject
.ft P
.fi
.UNINDENT
.UNINDENT
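.sp
To set a key in the object map of myobject (mykey and myvalue are
illustrative names), read it back, and then list all omap keys:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo setomapval myobject mykey myvalue
rados \-p foo getomapval myobject mykey
rados \-p foo listomapkeys myobject
.ft P
.fi
.UNINDENT
.UNINDENT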
.sp
To read a previously snapshotted version of an object:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo \-s mysnap get myobject blah.txt.old
.ft P
.fi
.UNINDENT
.UNINDENT
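.sp
To run a 60\-second write benchmark, keep the written objects so they
can be read back sequentially, and remove them afterwards:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
rados \-p foo bench 60 write \-\-no\-cleanup
rados \-p foo bench 60 seq
rados \-p foo cleanup
.ft P
.fi
.UNINDENT
.UNINDENT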
.SH AVAILABILITY
.sp
\fBrados\fP is part of the Ceph distributed storage system. Please refer to
the Ceph documentation at \fI\%http://ceph.com/docs\fP for more information.
.SH SEE ALSO
.sp
\fBceph\fP(8)
.SH COPYRIGHT
2010-2014, Inktank Storage, Inc. and contributors. Licensed under Creative Commons BY-SA
.\" Generated by docutils manpage writer.
.