Merge branch 'wip-teuthREAD-wusui'

Warren Usui 2013-07-19 19:18:28 -07:00
commit 4036547e04


@@ -168,6 +168,44 @@ context managers is the ``contextlib.contextmanager`` decorator; look
for that string in the existing tasks to see examples, and note where
they use ``yield``.
Further details on some of the more complex tasks such as install or workunit
can be obtained via python help. For example::

    >>> import teuthology.task.workunit
    >>> help(teuthology.task.workunit)

displays a page of further documentation and more concrete examples.
Some of the more important / commonly used tasks include:
* chef -- run chef (ceph-qa-chef) on the test machines.
* ceph -- bring up a Ceph cluster.
* install -- by default, the install task goes to gitbuilder and installs the results of the latest build. You can, however, add additional parameters to the test configuration to cause it to install any branch, SHA, archive or URL. The following are valid parameters:

  - branch -- specify a branch (bobtail, cuttlefish...)
  - flavor -- specify a flavor (next, unstable...). Flavors can be thought of as subsets of branches. Sometimes (unstable, for example) they may have a predefined meaning.
  - project -- specify a project (ceph, samba...)
  - sha1 -- install the build with this sha1 value.
  - tag -- specify a tag/identifying text for this build (v47.2, v48.1...)

* overrides -- override behavior. Typically, this includes sub-tasks being overridden. Sub-tasks can nest further information. For example, overrides of install tasks are project specific, so the following section of a yaml file would cause all ceph installations to default to the cuttlefish branch::

    overrides:
      install:
        ceph:
          branch: cuttlefish

* workunit -- workunits are a way of grouping tasks and behavior on targets.
* sequential -- group the sub-tasks into a unit where the sub-tasks run sequentially as listed.
* parallel -- group the sub-tasks into a unit where the sub-tasks all run in parallel.

Sequential and parallel tasks can be nested. Tasks run sequentially if not
specified; see the sketch after this list for an example that combines several
of these tasks.
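
Putting several of these together, a job fragment might look like the
following. This is only a sketch: the branch name and the workunit scripts
are illustrative, and it assumes that gitbuilder has a build for that branch
and that the scripts exist in ceph-qa-suite::

    tasks:
    - install:
        branch: cuttlefish
    - ceph:
    - parallel:
      - workunit:
          clients:
            client.0:
              - suites/pjd.sh
      - sequential:
        - workunit:
            clients:
              client.1:
                - suites/bonnie.sh
        - workunit:
            clients:
              client.1:
                - suites/iozone.sh

Here the two workunits under ``sequential`` run one after the other, while
that whole group runs in parallel with the workunit on client.0.
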
The above list is a very incomplete description of the tasks available in
teuthology. The teuthology/task subdirectory contains all the python files
that implement tasks.
Many of these tasks are used to run shell scripts that are defined in the
ceph/ceph-qa-suite repository.
Troubleshooting
===============
@@ -184,7 +222,7 @@ chance to inspect the system -- both through Teuthology and via extra
SSH connections -- and the cleanup completes only when you choose so.
Just exit the interactive Python session to continue the cleanup.
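
To pause a run deliberately and poke at the cluster without waiting for a
failure, the ``interactive`` task (see teuthology/task/interactive.py) can be
placed between other tasks. A minimal, illustrative fragment::

    tasks:
    - install:
    - ceph:
    - interactive:
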
Note that this error handling only catches exceptions *between* the tasks. If a task
calls multiple subtasks, e.g. with ``contextutil.nested``, those
cleanups *will* be performed. Later on, we can let tasks communicate
the subtasks they wish to invoke to the top-level runner, avoiding
@@ -212,12 +250,13 @@ directory.
VIRTUAL MACHINE SUPPORT
=======================
Teuthology also supports virtual machines, which can function like
physical machines but differ in the following ways:
VPSHOST:
--------
A new entry, vpshost, has been added to the teuthology database of
available machines. For physical machines, this value is null. For
@@ -230,7 +269,41 @@ any other machine. The existence of a vpshost field is how teuthology
knows whether or not a database entry represents a physical or a virtual
machine.
The following needs to be set in ~/.libvirt/libvirt.conf in order to get the
right virtual machine associations for the Inktank lab::

    uri_aliases = [
        'mira001=qemu+ssh://ubuntu@mira001.front.sepia.ceph.com/system?no_tty',
        'mira003=qemu+ssh://ubuntu@mira003.front.sepia.ceph.com/system?no_tty',
        'mira004=qemu+ssh://ubuntu@mira004.front.sepia.ceph.com/system?no_tty',
        'mira006=qemu+ssh://ubuntu@mira006.front.sepia.ceph.com/system?no_tty',
        'mira007=qemu+ssh://ubuntu@mira007.front.sepia.ceph.com/system?no_tty',
        'mira008=qemu+ssh://ubuntu@mira008.front.sepia.ceph.com/system?no_tty',
        'mira009=qemu+ssh://ubuntu@mira009.front.sepia.ceph.com/system?no_tty',
        'mira010=qemu+ssh://ubuntu@mira010.front.sepia.ceph.com/system?no_tty',
        'mira011=qemu+ssh://ubuntu@mira011.front.sepia.ceph.com/system?no_tty',
        'mira013=qemu+ssh://ubuntu@mira013.front.sepia.ceph.com/system?no_tty',
        'mira014=qemu+ssh://ubuntu@mira014.front.sepia.ceph.com/system?no_tty',
        'mira015=qemu+ssh://ubuntu@mira015.front.sepia.ceph.com/system?no_tty',
        'mira017=qemu+ssh://ubuntu@mira017.front.sepia.ceph.com/system?no_tty',
        'mira018=qemu+ssh://ubuntu@mira018.front.sepia.ceph.com/system?no_tty',
        'mira020=qemu+ssh://ubuntu@mira020.front.sepia.ceph.com/system?no_tty',
        'vercoi01=qemu+ssh://ubuntu@vercoi01.front.sepia.ceph.com/system?no_tty',
        'vercoi02=qemu+ssh://ubuntu@vercoi02.front.sepia.ceph.com/system?no_tty',
        'vercoi03=qemu+ssh://ubuntu@vercoi03.front.sepia.ceph.com/system?no_tty',
        'vercoi04=qemu+ssh://ubuntu@vercoi04.front.sepia.ceph.com/system?no_tty',
        'vercoi05=qemu+ssh://ubuntu@vercoi05.front.sepia.ceph.com/system?no_tty',
        'vercoi06=qemu+ssh://ubuntu@vercoi06.front.sepia.ceph.com/system?no_tty',
        'vercoi07=qemu+ssh://ubuntu@vercoi07.front.sepia.ceph.com/system?no_tty',
        'vercoi08=qemu+ssh://ubuntu@vercoi08.front.sepia.ceph.com/system?no_tty',
        'senta01=qemu+ssh://ubuntu@senta01.front.sepia.ceph.com/system?no_tty',
        'senta02=qemu+ssh://ubuntu@senta02.front.sepia.ceph.com/system?no_tty',
        'senta03=qemu+ssh://ubuntu@senta03.front.sepia.ceph.com/system?no_tty',
        'senta04=qemu+ssh://ubuntu@senta04.front.sepia.ceph.com/system?no_tty',
    ]

DOWNBURST:
----------
When a virtual machine is locked, downburst is run on that machine to
install a new image. This allows the user to set different virtual
@@ -255,6 +328,7 @@ downburst:
These values are used by downburst to create the virtual machine.
HOST KEYS:
----------
Because teuthology reinstalls each new machine, a new host key is generated.
After locking, once a connection is established to the new machine,
@@ -263,12 +337,87 @@ the new keys. When vps machines are locked using the --lock-many option,
a message is displayed indicating that --list-targets should be run later.
CEPH-QA-CHEF:
-------------
When teuthology first comes up after a new vm is installed, it checks for the
existence of /ceph-qa-ready. If this file is not present, ceph-qa-chef is run.
ASSUMPTIONS:
------------
It is assumed that downburst is on the user's PATH.
Test Suites
===========
Most of the current teuthology test suite execution scripts automatically
download their tests from the master branch of the appropriate github
repository. People who want to run experimental test suites usually modify
the download method in the teuthology/task script to use some other branch
or repository. This should be generalized in later teuthology releases.
Teuthology QA suites can be found in src/ceph-qa-suite. Make sure that this
directory exists in your source tree before running the test suites.
Each suite name is determined by the name of the directory in ceph-qa-suite
that contains that suite. The directory contains subdirectories and yaml files,
which, when assembled, produce valid tests that can be run. The test suite
application generates combinations of these files and thus ends up running
a set of tests based on the data in the directory for the suite.
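
For illustration only -- the directory and file names below are hypothetical --
a suite might be laid out like this::

    suites/mysuite/
        clusters/small.yaml        # cluster/role layout fragments
        tasks/workunit_a.yaml      # task list fragments
        tasks/workunit_b.yaml

Combining ``clusters/small.yaml`` with ``tasks/workunit_a.yaml`` yields one
runnable test, and combining it with ``tasks/workunit_b.yaml`` yields another;
the suite application schedules each such combination as a separate job.
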
To run a suite, enter::

    ./schedule_suite.sh <suite> <ceph> <kernel> <email> <flavor> <teuth> <mtype> <template>

where:
* *suite* -- the name of the suite (the directory in ceph-qa-suite).
* *ceph* -- ceph branch to be used.
* *kernel* -- version of the kernel to be used.
* *email* -- email address to send the results to.
* *flavor* -- flavor of the test.
* *teuth* -- version of teuthology to run.
* *mtype* -- machine type of the run.
* *template* -- template file used for further modifying the suite (optional).
For example, consider::

    schedule_suite.sh rbd wip-fix cuttlefish bob.smith@foo.com master cuttlefish plana

The above command runs the rbd suite using wip-fix as the ceph branch,
a straight cuttlefish kernel, and the master flavor of cuttlefish teuthology.
It will run on plana machines.
In order for a queued task to be run, a teuthworker thread on
teuthology.front.sepia.ceph.com needs to remove the task from the queue.
On teuthology.front.sepia.ceph.com, run ``ps aux | grep teuthology-worker``
to view currently running tasks. If no processes are reading from the test
version that you are running, additional teuthworker tasks need to be started.
To start these tasks:
* copy your build tree to /home/teuthworker on teuthology.front.sepia.ceph.com.
Give it a unique name (in this example, xxx)
* start up some number of worker threads (as many as the number of machines you
  are testing with; there are 60 running for the default queue)::

      /home/virtualenv/bin/python \
          /var/lib/teuthworker/xxx/virtualenv/bin/teuthworker \
          /var/lib/teuthworker/archive --tube xxx \
          --log-dir /var/lib/teuthworker/archive/worker_logs

Note: The threads on teuthology.front.sepia.ceph.com are started via
~/teuthworker/start.sh. You can use that file as a model for your
own threads, or add to this file if you want your threads to be
more permanent.
Once the suite completes, an email message is sent to the users specified,
and a large amount of information is left on teuthology.front.sepia.ceph.com
in /var/lib/teuthworker/archive. This is symbolically linked to /a for
convenience. A new directory is created whose name consists of a concatenation
of the date and time that the suite was started, the name of the suite,
the ceph branch tested, the kernel used, and the flavor. For every test run
there is a directory whose name is the pid of that test process.
Each of these directories contains a copy of the teuthology.log for that process.
Other information from the suite is stored in files in the directory, and
task-specific yaml files and other logs are saved in the subdirectories.
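
To make that layout concrete, a results tree might look roughly like the
following; the names are hypothetical and only illustrate the naming pattern
described above::

    /a/2013-07-19_19:20:00-rbd-wip-fix-cuttlefish-master/
        4721/
            teuthology.log
        4722/
            teuthology.log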