=================================
Developer Guide (Quick)
=================================

This guide will describe how to build and test Ceph for development.

Development
-----------

The ``run-make-check.sh`` script will install Ceph dependencies,
compile everything in debug mode and run a number of tests to verify
the result behaves as expected.

.. code::

    $ ./run-make-check.sh

Running a development deployment
--------------------------------

Ceph contains a script called ``vstart.sh`` (see also :doc:`/dev/dev_cluster_deployement`)
which allows developers to quickly test their code using a simple deployment on your
development system. Once the build finishes successfully, start the Ceph deployment
using the following command:

.. code::

    $ cd ceph/build  # Assuming this is where you ran cmake
    $ make vstart
    $ ../src/vstart.sh -d -n -x
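
Once ``vstart.sh`` completes, you can confirm that the monitors, OSDs, and other daemons
came up by asking for the cluster status (run from the same ``build`` directory, which is
where vstart places its ``bin/`` wrappers):

.. code::

    $ bin/ceph -s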

You can also configure ``vstart.sh`` to use only one monitor and one metadata server by using the following:

.. code::

    $ MON=1 MDS=1 ../src/vstart.sh -d -n -x

The system creates three pools on startup: ``cephfs_data``, ``cephfs_metadata``, and
``rbd``. Let's get some stats on the current pools:

.. code::

    $ bin/ceph osd pool stats
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    pool rbd id 0
      nothing is going on

    pool cephfs_data id 1
      nothing is going on

    pool cephfs_metadata id 2
      nothing is going on

    $ bin/ceph osd pool stats cephfs_data
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    pool cephfs_data id 1
      nothing is going on

    $ bin/rados df
    pool name        category  KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
    rbd              -          0        0       0         0        0   0      0   0      0
    cephfs_data      -          0        0       0         0        0   0      0   0      0
    cephfs_metadata  -          2       20       0        40        0   0      0  21      8
      total used        12771536           20
      total avail     3697045460
      total space     3709816996

Make a pool and run some benchmarks against it:

.. code::

    $ bin/rados mkpool mypool
    $ bin/rados -p mypool bench 10 write -b 123

Place a file into the new pool:

.. code::

    $ bin/rados -p mypool put objectone <somefile>
    $ bin/rados -p mypool put objecttwo <anotherfile>

List the objects in the pool:

.. code::

    $ bin/rados -p mypool ls
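
To verify the round trip, you can also read an object back out of the pool into a local
file; the output path below is just an example:

.. code::

    $ bin/rados -p mypool get objectone /tmp/objectone.out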

Once you are done, type the following to stop the development Ceph deployment:

.. code::

    $ ../src/stop.sh

Resetting your vstart environment
---------------------------------

The vstart script creates ``out/`` and ``dev/`` directories which contain
the cluster's state. If you want to quickly reset your environment,
you might do something like this:

.. code::

    [build]$ ../src/stop.sh
    [build]$ rm -rf out dev
    [build]$ MDS=1 MON=1 OSD=3 ../src/vstart.sh -n -d

Running a RadosGW development environment
-----------------------------------------

Set the ``RGW`` environment variable when running ``vstart.sh`` to enable the RadosGW.

.. code::

    $ cd build
    $ RGW=1 ../src/vstart.sh -d -n -x

You can now use the Swift Python client to communicate with the RadosGW.

.. code::

    $ swift -A http://localhost:8000/auth -U test:tester -K testing list
    $ swift -A http://localhost:8000/auth -U test:tester -K testing upload mycontainer ceph
    $ swift -A http://localhost:8000/auth -U test:tester -K testing list
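
If the ``swift`` commands fail, a quick sanity check is to confirm that the RadosGW is
actually listening; the port (8000) matches the URLs above and is vstart's default:

.. code::

    $ curl http://localhost:8000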

Run unit tests
--------------

The tests are located in ``src/test``. To run them, type:

.. code::

    $ make check
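
``make check`` runs the whole suite, which can take a long time. The individual test
binaries are built into the ``bin/`` directory of your build tree, so you can usually
run just the one you care about directly; the binary name below is only an example:

.. code::

    $ bin/unittest_bufferlist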