We were kicking off the timeout as soon as we started; it's better, however,
to kick it off only when we are told to stop (as long as 'at-least-once'
is true).
Signed-off-by: Joao Eduardo Luis <jecluis@gmail.com>
This change pulls from the ceph repo mirror instead of the GitHub tarball,
which can be unreliable at times, and serves the same function. It now uses
'git archive' and no longer uses wget; because of this, less tar-fu is
needed to extract the necessary files, as the extraction can be done
directly through 'git archive'.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Reviewed-by: Sam Lang <sam.lang@inktank.com>
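A minimal sketch of the mechanics, using a throwaway local repository in
place of the ceph mirror (the paths and file names here are illustrative
only, not the actual ones the task uses):

```shell
# Stand-in for the ceph mirror: a throwaway local repository.
repo=$(mktemp -d); out=$(mktemp -d)
git -C "$repo" init -q
mkdir -p "$repo/qa/workunits"
echo 'echo ok' > "$repo/qa/workunits/task.sh"
git -C "$repo" add -A
git -C "$repo" -c user.name=t -c user.email=t@t commit -qm seed
# The wget+tar-fu replacement: archive one subtree, pipe straight to tar.
git -C "$repo" archive HEAD:qa/workunits | tar -x -C "$out"
ls "$out"    # task.sh
```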
The workunit task assumes that a mount exists
at /tmp/cephtest/mnt.{id}
This patch creates the path if it doesn't
exist, enabling workunits to run in the absence
of kclient or ceph-fuse tasks.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Sam Lang <sam.lang@inktank.com>
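The fix itself amounts to an idempotent mkdir; a minimal local sketch,
where '0' stands in for the role {id}, which varies per job:

```shell
mnt="/tmp/cephtest/mnt.0"        # '0' is a stand-in for the role {id}
mkdir -p "$mnt"                  # create it only if missing; no-op otherwise
test -d "$mnt" && echo present   # prints: present
```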
at-least-once Runs at least once, even if we are told to stop.
(default: True)
at-least-once-timeout If we were told to stop but we are attempting to
run at least once, timeout after this many
seconds. (default: 300)
Fixes: #3854
Signed-off-by: Joao Eduardo Luis <jecluis@gmail.com>
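As a hedged illustration, the options above would appear in a job yaml
roughly like this; the task name is a stand-in, only the option names and
defaults come from the text above:

```yaml
tasks:
- sometask:                  # hypothetical task name
    at-least-once: true
    at-least-once-timeout: 300
```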
Test 167 was failing due to running out of space on the scratch
file system. The test reserves 21MB in a file, and repeats 50
times. It required just over 1GB, so I bumped the default size
for the testing device to 1200 MB. I increased the test device
size as well.
This resolves http://tracker.newdream.net/issues/3864.
Signed-off-by: Alex Elder <elder@inktank.com>
This runs cram tests, which are an easy way to test that output
stays consistent. We already use cram for basic CLI tests with no cluster,
and now we can use it for whole system tests too.
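For reference, a cram test is a file of indented shell commands followed by
their expected output; a minimal sketch (the command and output below are
illustrative only):

```
  $ echo HEALTH_OK
  HEALTH_OK
```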
ceph.git master now separates across crush hosts without this setting.
For teuthology clusters, we don't want that (unless the test specifies
otherwise).
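A hedged sketch of the kind of override this implies; the option name
'osd crush chooseleaf type' and its placement are my assumptions for how to
keep OSDs from being separated across hosts (0 selects the device level):

```yaml
overrides:
  ceph:
    conf:
      osd:
        osd crush chooseleaf type: 0
```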
This patch adds the ability to barrier between
parallel exec tasks so that all tasks will perform
the following step (after the barrier) at the same
time.
Signed-off-by: Sam Lang <sam.lang@inktank.com>
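A rough shell analogue of the barrier semantics, with 'wait' standing in
for the barrier primitive (the host names are made up):

```shell
log=$(mktemp)
# Step one runs on every "host" in parallel.
for host in a b c; do
    echo "step1 $host" >> "$log" &
done
wait    # the barrier: nothing below starts until every job above is done
# Only now does step two start everywhere.
for host in a b c; do
    echo "step2 $host" >> "$log" &
done
wait
wc -l < "$log"    # 6 lines: three hosts x two steps
```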
We don't want to do an exec per role, but per host. We
were already doing an exec per host, but the names were confusing.
This fixes the names up and removes the role parameters.
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Will run for as long as teuthology runs. By default, fails if any clock
skews higher than 0.05 seconds are detected, but will only fail when the
teuthology run finishes and after reporting a list of all the detected
skews.
Accepted options:
interval number of seconds to wait between checks. (default: 30.0)
max-skew maximum skew, in seconds, that is considered tolerable
before issuing a warning. (default: 0.05)
expect-skew 'true' or 'false', to indicate whether to expect a skew
during the run or not. If 'true', the test will fail if no
skew is found, and succeed if a skew is indeed found; if
'false', it's the other way around. (default: false)
never-fail Don't fail the run if a skew is detected and we weren't
expecting it, or if no skew is detected and we were
expecting it. (default: False)
Signed-off-by: Joao Eduardo Luis <jecluis@gmail.com>
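A hedged yaml sketch of how these options might be supplied; the task name
is a placeholder, only the option names and defaults come from the text
above:

```yaml
tasks:
- clock_skew_check:          # hypothetical task name
    interval: 30.0
    max-skew: 0.05
    expect-skew: false
    never-fail: false
```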
This new config option defaults to 'true', not only to maintain
compatibility, but also because it makes sense.
Signed-off-by: Joao Eduardo Luis <jecluis@gmail.com>
This task configures and starts a Hadoop cluster.
It does not run any jobs; that must be done after
this task runs.
Can run on either Ceph or HDFS.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
This generates a new keypair, pushes it to all nodes
in the context, and adds every host's key to every other
host's .ssh/authorized_keys file.
Cleans up all keys and authorized_keys entries
afterwards.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Sam Lang <sam.lang@inktank.com>
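A local-only sketch of the key plumbing; the real task pushes the key to
every remote host, while here a temp directory stands in for a host's
~/.ssh:

```shell
dir=$(mktemp -d)                                 # stand-in for ~/.ssh
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"      # new throwaway keypair
cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"  # authorize it
grep -q ssh-rsa "$dir/authorized_keys" && echo authorized
rm -r "$dir"    # clean up keys and authorized_keys afterwards, as the task does
```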