Merge PR #29907 into master

* refs/pull/29907/head:
	doc: add a doc for vstart_runner.py

Reviewed-by: Varsha Rao <varao@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Patrick Donnelly 2019-09-13 12:04:44 -07:00
commit faac21f54f
2 changed files with 101 additions and 5 deletions


@@ -102,3 +102,4 @@ GabrielBrascher Gabriel Brascher <gabriel@apache.org>
BlaineEXE Blaine Gardner <bgardner@suse.com>
travisn Travis Nielsen <tnielsen@redhat.com>
sidharthanup Sidharth Anupkrishnan <sanupkri@redhat.com>
varshar16 Varsha Rao <varao@redhat.com>


@@ -1535,9 +1535,11 @@ server list`` on the teuthology machine, but the target VM hostnames (e.g.
``target149202171058.teuthology``) are resolvable within the teuthology
cluster.
Running tests from `qa/` locally
===================================
Testing - how to run s3-tests locally
=====================================
How to run s3-tests locally
-------------------------------------
RGW code can be tested by building Ceph locally from source, starting a vstart
cluster, and running the "s3-tests" suite against it.
@@ -1545,14 +1547,14 @@ cluster, and running the "s3-tests" suite against it.
The following instructions should work on jewel and above.
Step 1 - build Ceph
-------------------
^^^^^^^^^^^^^^^^^^^
Refer to :doc:`/install/build-ceph`.
You can do step 2 separately while it is building.
Step 2 - vstart
---------------
^^^^^^^^^^^^^^^
When the build completes, and still in the top-level directory of the git
clone where you built Ceph, do the following, for cmake builds::
@@ -1569,12 +1571,105 @@ This means the cluster is running.
Step 3 - run s3-tests
---------------------
^^^^^^^^^^^^^^^^^^^^^

To run the s3-tests suite, do the following::

   $ ../qa/workunits/rgw/run-s3tests.sh

Running tests using vstart_runner.py
--------------------------------------

CephFS and Ceph Manager code can be tested using `vstart_runner.py`_.

Running your first test
^^^^^^^^^^^^^^^^^^^^^^^^^^

The Python tests in the Ceph repository can be executed on your local machine
using `vstart_runner.py`_. To do that, you need `teuthology`_ installed::

   $ git clone https://github.com/ceph/teuthology
   $ cd teuthology/
   $ virtualenv ./venv
   $ source venv/bin/activate
   $ pip install --upgrade pip
   $ pip install -r requirements.txt
   $ python setup.py develop
   $ deactivate

.. note:: The ``pip`` command above refers to ``pip2``, not ``pip3``.

The above steps install teuthology in a virtual environment. Before running
a test locally, build Ceph successfully from source (refer to
:doc:`/install/build-ceph`) and do::

   $ cd build
   $ ../src/vstart.sh -n -d -l
   $ source ~/path/to/teuthology/venv/bin/activate
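
At this point you can optionally confirm that the vstart cluster is up by
checking its status from the ``build`` directory (the exact output depends on
your local cluster)::

   $ ./bin/ceph -s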

To run a specific test, say `test_reconnect_timeout`_ from
`TestClientRecovery`_ in ``qa/tasks/cephfs/test_client_recovery.py``, you can
do::

   $ python2 ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout

The above command runs vstart_runner.py and passes the test to be executed as
an argument to vstart_runner.py. In a similar way, you can also run a group
of tests as follows::

   $ # run all tests in class TestClientRecovery
   $ python2 ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery.TestClientRecovery
   $ # run all tests in test_client_recovery.py
   $ python2 ../qa/tasks/vstart_runner.py tasks.cephfs.test_client_recovery

Based on the argument passed, vstart_runner.py collects the tests and executes
them the same way it would execute a single test.

.. note:: vstart_runner.py, as well as most tests in ``qa/``, is only
          compatible with ``python2``. Therefore, use ``python2`` to run the
          tests locally.

vstart_runner.py can take three options:

--create               create the Ceph cluster before running a test
--create-cluster-only  create the cluster and quit; tests can be issued
                       later
--interactive          drop into a Python shell when a test fails
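
These options can be combined with a test name. For example, to create a fresh
cluster before the run and drop into a Python shell if the test fails, you
could do something like::

   $ python2 ../qa/tasks/vstart_runner.py --create --interactive tasks.cephfs.test_client_recovery.TestClientRecovery.test_reconnect_timeout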

Internal workings of vstart_runner.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

vstart_runner.py primarily does three things:

* collects and runs the tests

  vstart_runner.py sets up and tears down the cluster, then collects and runs
  the tests. This is implemented using the methods ``scan_tests()``,
  ``load_tests()`` and ``exec_test()``. This is where all the options that
  vstart_runner.py takes are implemented, along with other features like
  logging and copying the traceback to the bottom of the log.

* provides an interface for issuing and testing shell commands

  The tests are written assuming that the cluster exists on remote machines.
  vstart_runner.py provides an interface to run the same tests against a
  cluster that exists on the local machine. This is done using the class
  ``LocalRemote``. Class ``LocalRemoteProcess`` can manage the process that
  executes the commands issued by ``LocalRemote``, class ``LocalDaemon``
  provides an interface to handle Ceph daemons, and class ``LocalFuseMount``
  can create and handle FUSE mounts.

* provides an interface to operate the Ceph cluster

  ``LocalCephManager`` provides methods to run Ceph cluster commands with
  and without the admin socket, and ``LocalCephCluster`` provides methods to
  set or clear ``ceph.conf``.

.. note:: vstart_runner.py can mount CephFS only with FUSE. Therefore, make
          sure that the FUSE package is installed and enabled on your
          system.

.. note:: Make sure that ``user_allow_other`` is added to ``/etc/fuse.conf``.
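
If that line is missing, you can append it yourself; editing
``/etc/fuse.conf`` typically requires root privileges. For example::

   $ echo 'user_allow_other' | sudo tee -a /etc/fuse.conf
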
.. _vstart_runner.py: https://github.com/ceph/ceph/blob/master/qa/tasks/vstart_runner.py
.. _test_reconnect_timeout: https://github.com/ceph/ceph/blob/master/qa/tasks/cephfs/test_client_recovery.py#L133
.. _TestClientRecovery: https://github.com/ceph/ceph/blob/master/qa/tasks/cephfs/test_client_recovery.py#L86
.. WIP
.. ===
..