
Ceph - a scalable distributed storage system
--------------------------------------------

Please see http://ceph.newdream.net/ for current info.

----

To build the server daemons and the FUSE client:

$ ./autogen.sh
$ ./configure
$ make

(Note that the FUSE client will only be built if libfuse is present.)
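
Installation follows the usual autotools conventions; for instance (the
--prefix override is optional and shown only as an illustration, the
default prefix being /usr/local):

$ ./configure --prefix=/usr
$ make
$ sudo make install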

----

A quick summary of the binaries that will be built in src/:

daemons:
 ceph-mon -- monitor daemon.  Handles cluster state and configuration
         information.
 ceph-osd -- storage daemon.  Stores objects on a given block device.
 ceph-mds -- metadata daemon.  Handles the file system namespace.
 ceph-fuse -- FUSE client.
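
As a rough sketch, daemons are typically started with an instance id (-i)
and a configuration file (-c); the id and config path below are
illustrative, so consult the man pages for the full set of options:

$ ceph-mon -i a -c /etc/ceph/ceph.conf
$ ceph-osd -i 0 -c /etc/ceph/ceph.conf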

tools:
 ceph -- send management commands to the monitor cluster.
 rados -- interact with the object store.
 rbd -- manipulate rados block device images.
 monmaptool -- create/edit a monitor map.
 osdmaptool -- create/edit an OSD map.
 crushtool -- create/edit a CRUSH map.
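
For example, against a running cluster (file names here are hypothetical):

$ ceph -s                        # print a cluster status summary
$ rados lspools                  # list pools in the object store
$ rbd list                       # list rados block device images
$ crushtool -d crushmap -o map.txt   # decompile a crush map to text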

scripts:
 mkcephfs -- cluster mkfs tool
 init-ceph -- init.d start/stop script
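
A minimal sketch of bringing up a cluster with these scripts, assuming a
finished configuration in /etc/ceph/ceph.conf (paths and flags are the
common ones; see the mkcephfs man page for details):

$ mkcephfs -a -c /etc/ceph/ceph.conf   # create the file system on all hosts in the conf
$ /etc/init.d/ceph start               # start the daemons defined for this host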