Rather than blocking the main op queue, just pause for the configured
amount of time between state machine cycles.
Also, add osd_snap_trim_sleep to a few of the thrasher yamls.
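Purely as an illustration of that approach, a minimal Python sketch (hypothetical;
the real change lives in the OSD's snap trim code, and the sleep value comes from
the osd_snap_trim_sleep option):

    import time

    def trim_one(obj):
        pass  # placeholder for the per-object trim work

    def snap_trim_loop(objects_to_trim, osd_snap_trim_sleep=0.0):
        # Sleep between state machine cycles rather than while holding up
        # the main op queue.
        for obj in objects_to_trim:
            trim_one(obj)                       # one trim cycle
            if osd_snap_trim_sleep > 0:
                time.sleep(osd_snap_trim_sleep)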
Signed-off-by: Samuel Just <sjust@redhat.com>
If caps are not found in the given keyring file, alert the user and do
not allow the import.
'ceph auth list' keeps all the keyrings with their caps, and importing a
'client.admin' keyring without caps locks the cluster with error [1],
because the admin caps are then missing from 'ceph auth'.
[1] Error connecting to cluster: PermissionDeniedError
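A hedged sketch of the intended check (illustrative Python; the entry layout and
names are assumptions, the actual change is in the Ceph tooling):

    def entries_missing_caps(keyring_entries):
        # keyring_entries: {entity: {'key': ..., 'caps': {...}}} (assumed layout).
        # Importing an entry such as client.admin without caps would leave
        # 'ceph auth' without admin caps, so such entries must be rejected.
        return [name for name, entry in keyring_entries.items()
                if not entry.get('caps')]

    bad = entries_missing_caps({'client.admin': {'key': 'AQ...', 'caps': {}}})
    if bad:
        raise SystemExit('refusing to import keyring entries without caps: %s' % bad)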
Fixes: http://tracker.ceph.com/issues/18932
Signed-off-by: Vikhyat Umrao <vumrao@redhat.com>
We should not update pools_to_fix_pgp_num if the pool was not expanded or
its pg_num was not increased due to pgs still being created. Doing so
prevents us from fixing the pgp_num after thrashing is done: we actually
did nothing when trying to fix the pgp_num while thrashing, yet we still
removed the pool from pools_to_fix_pgp_num after set_pool_pgpnum() returned.
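A hedged method sketch of the intended bookkeeping (names taken from the message
above; whether set_pool_pgpnum() reports a change is an assumption about
qa/tasks/ceph_manager.py):

    def fix_pgp_num(self, pool):
        # Assumed: set_pool_pgpnum() returns True only if it actually
        # adjusted the pgp_num (it can be a no-op while pgs are still being
        # created or when the pool was not expanded).
        changed = self.set_pool_pgpnum(pool)
        if changed:
            # Only forget the pool once its pgp_num was really fixed, so the
            # post-thrashing pass can retry the ones that were skipped.
            self.pools_to_fix_pgp_num.discard(pool)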
Signed-off-by: Kefu Chai <kchai@redhat.com>
This script currently has a syntax error, but still exits with
success, which is hiding that failure. Expose it by allowing
the 'sudo' exit code to be the script's exit code.
Signed-off-by: Dan Mick <dan.mick@redhat.com>
This is based on a script that I've been using for a while for basic
smoke testing. The matrix has exploded with the addition of data-pool
and now it's primarily a data-pool test fixture that takes minutes to
run, so turn it into a workunit.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
as "workunits" reside in ceph/qa/workunits, it's more intuitive to
respect suite-repo option when cloning workunits.
Signed-off-by: Kefu Chai <kchai@redhat.com>
osd: have clients resend ops on pg split
Reviewed-by: Greg Farnum <gfarnum@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Samuel Just <sjust@redhat.com>
We have
2017-02-04T16:15:46.090 INFO:tasks.workunit.client.0.mira032.stdout:error in 22088
2017-02-04T16:15:46.092 INFO:tasks.workunit.client.0.mira032.stderr:bash: line 1: 22092 Alarm clock ceph_test_rados_api_aio 2>&1
2017-02-04T16:15:46.096 INFO:tasks.workunit.client.0.mira032.stderr: 22093 Done | tee ceph_test_rados_api_aio.log
2017-02-04T16:15:46.099 INFO:tasks.workunit.client.0.mira032.stderr: 22094 Done | sed "s/^/ api_aio: /"
2017-02-04T16:15:46.102 INFO:tasks.workunit.client.0.mira032.stderr:+
in teuthology.log if a unit test in rados/test.sh fails, but it would
be desirable to have the name of the failed test in the "error in
22088" line.
Signed-off-by: Kefu Chai <kchai@redhat.com>
It should live in teuthology, not in Ceph. It is also currently broken,
so there is no need to keep it around.
Fixes: http://tracker.ceph.com/issues/18846
Signed-off-by: Loic Dachary <loic@dachary.org>
These were running so few ops that they weren't
giving any meaningful exercise to a multimds
system beyond what we're already covering in
the fs suite.
Signed-off-by: John Spray <john.spray@redhat.com>
There were some cases where we would leave behind a mountpoint
that would cause the teuthology teardown to get hung up
when it tried to look inside cephtest/.
Signed-off-by: John Spray <john.spray@redhat.com>
Thrashing MDS will often result in failures which do not stop the
test. The failure may also cause the test to stall, which forces the
machines to be needlessly locked until a timeout is reached. This
watchdog will unmount mounts and kill daemons when a failure is
detected.
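A hedged sketch of the watchdog idea (thread-based and with illustrative helper
names; the actual qa task is built on teuthology's machinery):

    import threading
    import time

    class DaemonWatchdog(threading.Thread):
        # Illustrative sketch: poll for failures and clean up promptly
        # instead of letting the run hang until the global timeout.
        def __init__(self, daemons, mounts, failure_detected, interval=30):
            super().__init__(daemon=True)
            self.daemons = daemons
            self.mounts = mounts
            self.failure_detected = failure_detected
            self.interval = interval

        def run(self):
            while not self.failure_detected():
                time.sleep(self.interval)
            for mount in self.mounts:          # unmount clients first
                mount.umount_wait(force=True)
            for daemon in self.daemons:        # then kill the daemons
                daemon.stop()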
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
While the thrasher supports the behavior desired by issue 10792 [1], the
bugs uncovered due to deactivating MDS (and sometimes killing
deactivating MDS) are presently a distraction from addressing issues
during normal failures. So now thrashing max_mds is turned off by
default. I have added a TODO to deactivate ranks in order (configurably)
as random deactivation causes a lot of other problems.
This also fixes a bug: random.randrange(0.0, 1.0) always returns 0.
Oops.
[1] http://tracker.ceph.com/issues/10792
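For reference, the randrange bug and the usual fix (Python 2 behaviour, which
teuthology used at the time; newer Pythons reject float arguments to randrange()):

    import random

    # Buggy: random.randrange(0.0, 1.0) truncates its arguments, so the only
    # possible result is 0 and the probabilistic branch never fires.
    # The usual fix is to compare a uniform draw against the probability:
    p = 0.5
    take_action = random.random() < p   # True with probability p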
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
In some scenarios the thrasher expects the cluster to stabilize with a
new MDS taking over even when there are no standbys available. This can
cause the thrasher to quit because the cluster never stabilizes.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Currently multimds is prone to many failures when killing an active or
stopping MDS while there are MDSs in the cluster which have been
deactivated (stopping). Have this turned off by default for now.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The thrasher can enter an infinite loop waiting for an MDS to take a
certain rank when a replacement may not be possible. For example,
when max_mds actives are already running.
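A hedged sketch of the guard (all names are illustrative, not the actual thrasher
code): check whether a replacement is even possible before waiting for the rank,
and bound the wait.

    import time

    def wait_for_rank_taken(rank_is_taken, replacement_possible, timeout=300):
        # Give up early when no MDS can take the rank (e.g. max_mds actives
        # are already running) instead of looping forever.
        if not replacement_possible():
            return False
        deadline = time.time() + timeout
        while time.time() < deadline:
            if rank_is_taken():
                return True
            time.sleep(5)
        raise RuntimeError('timed out waiting for the rank to be taken')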
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
During the course of thrashing max_mds, the ranks assigned to MDSs may
develop holes. This causes the thrasher to try to wrongly deactivate
ranks that are not assigned.
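A hedged sketch (the mdsmap layout shown is an assumption): choose deactivation
targets from the ranks actually assigned rather than from a contiguous
0..max_mds-1 range.

    import random

    def choose_rank_to_deactivate(mdsmap):
        # Ranks may have holes after max_mds thrashing, so never assume
        # range(max_mds); use the ranks that are actually assigned.
        assigned = sorted(info['rank'] for info in mdsmap['info'].values()
                          if info['rank'] >= 0)
        candidates = [r for r in assigned if r != 0]   # keep rank 0 running
        return random.choice(candidates) if candidates else None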
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
https://github.com/ceph/ceph/pull/13194 introduced a regression:
2017-02-06T16:14:23.162 INFO:tasks.thrashosds.thrasher:Traceback (most recent call last):
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 722, in wrapper
return func(self)
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 839, in do_thrash
self.choose_action()()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 305, in kill_osd
output = proc.stderr.getvalue()
AttributeError: 'NoneType' object has no attribute 'getvalue'
This is because the original patch failed to pass "stderr=StringIO()" to run().
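Roughly, the fix gives run() a buffer to capture stderr into, e.g. (a sketch; the
command shown is illustrative and 'remote' is assumed to be a teuthology Remote):

    from StringIO import StringIO   # Python 2, as teuthology used at the time

    def capture_tool_stderr(remote):
        # Without stderr=StringIO(), run() leaves proc.stderr as None and
        # proc.stderr.getvalue() raises AttributeError.
        proc = remote.run(
            args=['sudo', 'ceph-objectstore-tool', '--help'],  # illustrative
            stderr=StringIO(),
            check_status=False,
        )
        return proc.stderr.getvalue()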
Fixes: http://tracker.ceph.com/issues/16263
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Signed-off-by: Kefu Chai <kchai@redhat.com>
`set +o` prints out the full command line, which is echoed if "xtrace" is
enabled; this increases the verbosity of get_timeout_delays().
In this change, we follow the way kill_daemons() suppresses the extra
output; see aefcf6d.
Signed-off-by: Kefu Chai <kchai@redhat.com>
If Thrasher.__init__() spawns the do_thrash thread before initializing the
ceph_objectstore_tool property, do_thrash races with the rest
of Thrasher.__init__() and in some cases do_thrash can call kill_osd() before
Thrasher.__init__() progresses much further. This can lead to an exception
("AttributeError: Thrasher instance has no attribute 'ceph_objectstore_tool'")
being thrown in kill_osd().
This commit eliminates the race by making sure the ceph_objectstore_tool
attribute is initialized before the do_thrash thread is spawned.
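A hedged sketch of the ordering (attribute names from the traceback; the rest is
illustrative, not the full qa/tasks/ceph_manager.py class):

    import gevent

    class Thrasher(object):
        def __init__(self, manager, config):
            self.manager = manager
            self.config = config or {}
            # Everything the thrashing thread (and kill_osd()) may touch must
            # be assigned before the thread is spawned, otherwise do_thrash
            # can run while __init__ is still executing and hit a missing
            # attribute.
            self.ceph_objectstore_tool = self.config.get(
                'ceph_objectstore_tool', False)
            self.thread = gevent.spawn(self.do_thrash)   # spawn last

        def do_thrash(self):
            pass  # the real loop repeatedly picks and runs thrash actions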
Fixes: http://tracker.ceph.com/issues/18799
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The umount process can get stuck, in which case
we want to fail the test rather than waiting around for it.
During teardown of the kclient task, catch this
timeout explicitly so that we will power cycle the node if
needed.
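A hedged sketch of the teardown flow (the exception handling and helper names are
assumptions, not the exact kclient task code):

    def teardown_kclient(mount, timeout=900):
        # Treat a hung umount as a test failure: catch the timeout, fall back
        # to the forced path (which may power cycle the node) and surface it.
        try:
            mount.umount(timeout=timeout)
        except Exception:
            mount.umount_wait(force=True)   # big hammer: may power cycle
            raise RuntimeError('umount timed out; node was forcibly cleaned up')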
Signed-off-by: John Spray <john.spray@redhat.com>
No need to mention ceph_dev_branch explicitly; it will be taken from the
ceph branch value given in the teuthology-suite command.
Signed-off-by: Tamil Muthamizhan <tmuthami@redhat.com>
This var is mostly used when running rbd_mirror test scripts on
teuthology. It can be used locally though to speed up re-running the
tests:
Set a test temp directory:
export RBD_MIRROR_TEMDIR=/tmp/tmp.rbd_mirror
Run the tests the first time with the NOCLEANUP flag (the cluster and
daemons are not stopped on finish):
RBD_MIRROR_NOCLEANUP=1 ../qa/workunits/rbd/rbd_mirror.sh
Now, to re-run the test without restarting the cluster, run cleanup
with the USE_EXISTING_CLUSTER flag:
RBD_MIRROR_USE_EXISTING_CLUSTER=1 \
../qa/workunits/rbd/rbd_mirror_ha.sh cleanup
and then run the tests:
RBD_MIRROR_USE_EXISTING_CLUSTER=1 \
../qa/workunits/rbd/rbd_mirror_ha.sh
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
by optionally specifying the daemon instance after the cluster name and
a colon, like:
start_mirror ${cluster}:${instance}
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
Currently, if the user performs an image rename operation and gives the
pool name as an optional parameter (--pool=<pool_name>), that pool name
is used only for the source pool while the destination pool falls back
to the default pool name.
With this fix, if the user provides the pool name as an optional
parameter, both the source and destination pool names are taken from
that optional parameter.
Fixes: http://tracker.ceph.com/issues/18326
Reported-by: МАРК КОРЕНБЕРГ <socketpair@gmail.com>
Signed-off-by: Gaurav Kumar Garg <garg.gaurav52@gmail.com>
Do the write after opening the file, so that we get good
behaviour with respect to the change in Mount.open_background that uses
file existence to confirm that the open happened.
Signed-off-by: John Spray <john.spray@redhat.com>
Previously we could readily end up hanging on teardown
when something had gone wrong with umount. Forcing
is a big hammer (umount_wait will power cycle the node
if umount isn't working), so if we had to resort to that,
raise an exception to indicate that something
was wrong with the test.
Fixes: http://tracker.ceph.com/issues/18663
Signed-off-by: John Spray <john.spray@redhat.com>
Using cephfs_[meta]data collides with the pools that teuthology
already creates if an mds is defined.
This became a (noticeable) problem with 052c3d3f68
Signed-off-by: Sage Weil <sage@redhat.com>
This mimics the OpenStack Tempest tests that OpenStack
Zuul executes as a gate.
Fixes: http://tracker.ceph.com/issues/18594
Signed-off-by: Jason Dillaman <dillaman@redhat.com>