
doc: dev: improve the s3tests with vstart document

update with the current qa workunits example and drop old info; also
update the vstart doc, dropping the -r and --num options in favor of env vars

Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
commit 8dcf8d6e5f (parent a4727f3fef)
Author: Abhishek Lekshmanan <abhishek@suse.com>  2017-05-19 18:32:31 +02:00
Committed by: Abhishek Lekshmanan

2 changed files with 7 additions and 67 deletions

@@ -45,10 +45,6 @@ Options
 
    Add *config* to all sections in the ceph configuration.
 
-.. option:: -r
-
-   Start radosgw on port starting from 8000.
-
 .. option:: --nodaemon
 
    Use ceph-run as wrapper for mon/osd/mds.
@@ -73,10 +69,6 @@ Options
 
    Launch the osd/mds/mon/all the ceph binaries using valgrind with the specified tool and arguments.
 
-.. option:: --{mon,osd,mds}_num
-
-   Set the count of mon/osd/mds daemons
-
 .. option:: --bluestore
 
    Use bluestore as the objectstore backend for osds
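
With ``-r`` and ``--{mon,osd,mds}_num`` dropped, the radosgw and the per-daemon
counts are selected through environment variables when invoking ``vstart.sh``.
A minimal sketch of the replacement invocation (assuming the ``MON``, ``OSD``,
``MDS`` and ``RGW`` variables are honored, and following the command form used
later in this commit)::

    cd build/
    # one monitor, three OSDs, no MDS, and a radosgw
    MON=1 OSD=3 MDS=0 RGW=1 ../vstart.sh -n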

@@ -1434,28 +1434,14 @@ Refer to :doc:`install/build-ceph`.
 
 You can do step 2 separately while it is building.
 
-Step 2 - s3-tests
------------------
-
-The test suite is in a separate git repo, and is written in python. Perform the
-following steps for jewel::
-
-    git clone git://github.com/ceph/s3-tests
-    cd s3-tests
-    git checkout ceph-jewel
-    ./bootstrap
-
-For kraken, checkout the ``ceph-kraken`` branch instead of ``ceph-jewel``. For
-master, use ``ceph-master``.
-
-Step 3 - vstart
+Step 2 - vstart
 ---------------
 
 When the build completes, and still in the top-level directory of the git
-clone where you built Ceph, do the following::
+clone where you built Ceph, do the following, for cmake builds::
 
-    cd src/
-    ./vstart.sh -n -r --mds_num 0
+    cd build/
+    RGW=1 ../vstart.sh -n
 
 This will produce a lot of output as the vstart cluster is started up. At the
 end you should see a message like::
@@ -1464,51 +1450,13 @@ end you should see a message like::
 
 This means the cluster is running.
 
-Step 4 - prepare S3 environment
--------------------------------
-
-The s3-tests suite expects to run in a particular environment (S3 users, keys,
-configuration file).
-
-Before you try to prepare the environment, make sure you don't have any
-existing keyring or ``ceph.conf`` files in ``/etc/ceph``.
-
-For jewel, Abhishek Lekshmanan wrote a script that can be used for this
-purpose. Assuming you are testing jewel, run the following commands from the
-``src/`` directory of your ceph clone (where you just started the vstart
-cluster)::
-
-    pushd ~
-    wget https://gist.githubusercontent.com/theanalyst/2fee6bc2780f67c79cad7802040fcddc/raw/b497ddba053d9a6fb5d91b73924cbafcfc32f137/s3tests-bootstrap.sh
-    popd
-    sh ~/s3tests-bootstrap.sh
-
-If the script is successful, it will display a blob of JSON and create a file
-called ``s3.conf`` in the current directory.
-
-Step 5 - run s3-tests
+Step 3 - run s3-tests
 ---------------------
 
-To actually run the tests, take note of the full path to the ``s3.conf`` file
-created in the previous step and then move to the directory where you cloned
-``s3-tests`` in Step 2.
-
-First, verify that the test suite is there and can be run::
-
-    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v --collect-only
-
-This should complete quickly - it is like a "dry run" of all the tests in the
-suite.
-
-Finally, run the test suite itself::
-
-    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v
-
-Note: the following test is expected to error - this is a problem in the test
-setup (WIP), not an actual test failure::
-
-    ERROR: s3tests.functional.test_s3.test_bucket_acl_grant_email
+To run the s3tests suite do the following::
+
+    $ ../qa/workunits/rgw/run-s3tests.sh
 
 .. WIP
 .. ===
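
If only part of the suite needs to run, the direct invocation that the removed
steps used still illustrates the mechanics. A rough sketch (assuming a
bootstrapped ``s3-tests`` checkout and an ``s3.conf`` prepared for the vstart
cluster; the paths and the module filter are illustrative)::

    cd /path/to/s3-tests
    S3TEST_CONF=/path/to/s3.conf ./virtualenv/bin/nosetests -a '!fails_on_rgw' -v s3tests.functional.test_s3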