=================================
Developer Guide (Quick)
=================================

This guide will describe how to build and test Ceph for development.

Development
-----------

The ``run-make-check.sh`` script will install Ceph dependencies,
compile everything in debug mode and run a number of tests to verify
the result behaves as expected.

.. code::

    $ ./run-make-check.sh

Running a development deployment
--------------------------------

Ceph contains a script called ``vstart.sh`` (see also `deploying a development cluster <https://ceph.com/docs/master/dev/dev_cluster_deployement/>`_) which allows developers to quickly test their code using
a simple deployment on your development system. Once the build finishes successfully, start the Ceph
deployment using the following command:

.. code::

    $ cd src
    $ ./vstart.sh -d -n -x

You can also configure ``vstart.sh`` to use only one monitor and one metadata server by using the following:

.. code::

    $ MON=1 MDS=1 ./vstart.sh -d -n -x
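
If you want to confirm that the monitors, OSDs and MDS came up, the standard ``ceph -s`` status command works against this development cluster as well (run it from ``src``, like the other commands in this guide):

.. code::

    $ ./ceph -s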

The system creates three pools on startup: ``cephfs_data``, ``cephfs_metadata``, and ``rbd``. Let's get some stats on
the current pools:

.. code::

    $ ./ceph osd pool stats
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    pool rbd id 0
      nothing is going on

    pool cephfs_data id 1
      nothing is going on

    pool cephfs_metadata id 2
      nothing is going on

    $ ./ceph osd pool stats cephfs_data
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    pool cephfs_data id 1
      nothing is going on

    $ ./rados df
    pool name        category  KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
    rbd              -          0        0       0         0        0   0      0   0      0
    cephfs_data      -          0        0       0         0        0   0      0   0      0
    cephfs_metadata  -          2       20       0        40        0   0      0  21      8
      total used      12771536        20
      total avail   3697045460
      total space   3709816996

Make a pool and run some benchmarks against it:

.. code::

    $ ./rados mkpool mypool
    $ ./rados -p mypool bench 10 write -b 123
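
If you also want to benchmark reads, one option is to keep the objects written by the write benchmark with ``--no-cleanup`` and then run a sequential read pass; both are standard ``rados bench`` modes, and the durations below are only examples:

.. code::

    $ ./rados -p mypool bench 10 write -b 123 --no-cleanup
    $ ./rados -p mypool bench 10 seq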

Place a file into the new pool:

.. code::

    $ ./rados -p mypool put objectone <somefile>
    $ ./rados -p mypool put objecttwo <anotherfile>

List the objects in the pool:

.. code::

    $ ./rados -p mypool ls
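
To read an object back out of the pool into a local file (the output path here is just an example), use ``rados get``:

.. code::

    $ ./rados -p mypool get objectone /tmp/objectone.out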

Once you are done, type the following to stop the development Ceph deployment:

.. code::

    $ ./stop.sh

Running a RadosGW development environment
-----------------------------------------

Add the ``-r`` option to ``vstart.sh`` to enable the RadosGW:

.. code::

    $ cd src
    $ ./vstart.sh -d -n -x -r

You can now use the Swift Python client to communicate with the RadosGW.

.. code::

    $ swift -A http://localhost:8000/auth -U tester:testing -K asdf list
    $ swift -A http://localhost:8000/auth -U tester:testing -K asdf upload mycontainer ceph
    $ swift -A http://localhost:8000/auth -U tester:testing -K asdf list
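
The same client can also fetch the data back; for example, ``swift download`` retrieves everything in the container that was just uploaded (using the same example credentials):

.. code::

    $ swift -A http://localhost:8000/auth -U tester:testing -K asdf download mycontainer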

Run unit tests
--------------

The tests are located in ``src/test``. To run them, type:

.. code::

    $ make check
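
If you only want to exercise one area of the code, the individual gtest binaries that ``make check`` builds can usually be run directly from ``src``; the binary name below is only an example and may differ in your tree:

.. code::

    $ ./unittest_bufferlist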