In cf24535, we use $CEPH_ROOT to specify $top_srcdir in order to unify
cmake and autotools, but this breaks ceph-qa-suite/tasks/workunit.py,
which clones only the necessary qa/workunits directory and does not
pass $CEPH_ROOT to the test scripts. So we need to set a default
$CEPH_ROOT when it is not set.
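For example, a workunit script can fall back to a default when the variable
is absent; the ".." value below is only illustrative:

    : ${CEPH_ROOT:=..}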
Signed-off-by: Kefu Chai <kchai@redhat.com>
Replaced relative paths in test/cephtool-test-mon.sh and
qa/workunits/cephtool/test.sh
to work with CEPH_FOO environment variables set in cmake.
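For instance, an invocation that used a relative path can use one of those
variables instead; CEPH_BIN is an illustrative name, the commit only
guarantees CEPH_FOO-style variables:

    ./ceph osd dump               # before: relative path
    $CEPH_BIN/ceph osd dump       # after: path taken from a cmake-set variable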
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Replaced relative paths in encode-decode-non-regression.sh
to work with CEPH_FOO environment variables set in
cmake.
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Moved all the libraries into CMAKE_BINARY_DIR/lib
and all the binaries into CMAKE_BINARY_DIR/bin. Set
various environment variables for test-ceph-helpers
and used those variables throughout
qa/workunits/ceph-helpers.sh.
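A rough sketch of the kind of defaults involved; the variable names are
assumptions, the commit only says "various environment variables":

    : ${CEPH_BIN:=$CEPH_BUILD_DIR/bin}   # binaries now live under CMAKE_BINARY_DIR/bin
    : ${CEPH_LIB:=$CEPH_BUILD_DIR/lib}   # libraries now live under CMAKE_BINARY_DIR/lib
    export PATH=$CEPH_BIN:$PATH
    export LD_LIBRARY_PATH=$CEPH_LIB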
NOTE: This is a very rough draft of these fixes.
Signed-off-by: Ali Maredia <amaredia@redhat.com>
FreeBSD once in a while forgets to remove *.pid files (this is probably a bug).
But cleaning them up this way is probably much in line with what actually needs to be done.
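A minimal sketch of such a cleanup, assuming the pid files live in the
per-test directory:

    rm -f $dir/*.pid    # drop any pid files the daemons left behind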
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
Protect a number of unstable/experimental features behind durable flags
https://github.com/ceph/ceph/pull/8383
Reviewed-by: John Spray <john.spray@redhat.com>
Method preprocess_remove_snaps() is designed to quickly check whether
we can safely handle a remove-snaps request without changing the osdmap.
The original design is to be able to handle snaps from multiple pools,
including snaps from a non-existent pool, by simply skipping
over them. However, this method quits as soon as it detects
any valid snap that truly needs to be removed and forwards the
request to prepare_remove_snaps() for further processing.
From the above analysis, the prepare_remove_snaps() method can
therefore also encounter snaps which belong to non-existent pools.
This PR solves the above problem by adding a sanity check that the
pool associated with the specified snap still exists before removing it,
which should be considered a defensive move and makes prepare_remove_snaps()
more robust.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
The current code waited 10s, expecting the file to be put within that time.
If the file was put in less than 10s, the test just waited for
nothing, slowing that test down.
This patch simply checks every second, for up to 10 seconds, whether the
file is actually available, so the loop can exit prematurely.
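A minimal sketch of the poll loop; the pool and object names are assumptions:

    for i in $(seq 1 10) ; do
        rados --pool $poolname ls | grep -q $objname && break
        sleep 1
    done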
This patch saves exactly 10 seconds on a local system, surely a little bit
less on a shared infra, but it still saves time.
Signed-off-by: Erwan Velu <erwan@redhat.com>
The current code doubles the wait time between two calls, leading to a
possible 511s of waiting time, which sounds a little bit excessive.
This patch reduces the global wait time to 300s and checks the rados
status more often to exit the loop earlier. In a local test, that
saves 6 secs per run.
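A hedged sketch of the fixed-interval polling described above; the helper
name and the command being polled are assumptions:

    wait_for_rados() {
        local timeout=300
        while ! ceph status > /dev/null 2>&1 ; do
            sleep 1
            timeout=$((timeout - 1))
            test $timeout -gt 0 || return 1   # give up after 300s instead of 511s
        done
    }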
Signed-off-by: Erwan Velu <erwan@redhat.com>
ceph_watch_wait() does a sleep _before_ doing the test that could
stop this loop.
It is better to do the check first, as it could exit immediately and
avoid a useless sleep.
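In other words, the loop now checks first and only sleeps when the condition
is not met yet; the file and pattern names are illustrative:

    timeout=30
    while [ $timeout -gt 0 ] ; do
        grep -q "$regexp" $dir/watch.out && break   # check first: exit immediately on a match
        sleep 1
        timeout=$((timeout - 1))
    done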
That's a minor optimization, but everything counts when trying to get
something smooth.
Signed-off-by: Erwan Velu <erwan@redhat.com>
OSDs take some time to come up, but waiting 10 secs between two loops
seems excessive here. In the worst case, we can end up waiting 10 secs
for nothing, because the OSD came up just a few microsecs after the
previous check.
This patch simply reduces the sleep from 10 seconds to 1 second.
Signed-off-by: Erwan Velu <erwan@redhat.com>
It may sound like nothing, but the current sleep ramp-up is
counterproductive.
The code does: kill <proc>; sleep 0; kill <proc>; sleep 0; kill <proc>;
sleep 1; and then ramps up smoothly to 120 seconds.
But in practice there is almost no chance the process dies that fast, meaning
that by default we end up at the sleep 1.
Moving from sleep 0 to sleep 1 does not seem like a big win, but as
kill_daemons() is called very often we can save a lot of time in the
end.
This patch sleeps for 1/10th of a second first, instead of 0, and then
1/20th of a second, instead of 0.
The sleep call is also moved after the kill call, as there is no need to
wait before executing the command.
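A hedged sketch of the resulting kill/retry loop; the exact delay list is an
assumption:

    for delay in 0.1 0.05 1 1 2 3 5 10 ; do
        kill $pid 2> /dev/null || break   # stop as soon as the process is gone
        sleep $delay
    done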
This patch makes the running time of a test like osd-scrub-repair.sh
drop from 7m30 to 7m7.
Saving another ~30 seconds is an interesting win at the make check level.
Signed-off-by: Erwan Velu <erwan@redhat.com>
wait_for_clean() is a very common call when running make check.
It waits for the cluster to be stable before continuing.
This script was doing the same calls twice and could be optimized by
making the useful calls only once.
The is_clean() function was checking num_pgs & get_num_active_clean(),
and the main loop itself was also calling get_num_active_clean().
This patch inlines is_clean() inside this loop to benefit from a
single get_num_active_clean() call. This avoids a useless (ceph +
xmlstarlet) call.
This patch also moves all the 'timer reset' conditions into an else
branch, avoiding another ceph + xmlstarlet call when we already know we
should reset the timer.
The last modification is to reduce the sleeping time, as the state of the
cluster changes very fast.
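A rough sketch of the inlined loop, assuming the get_num_active_clean() and
get_num_pgs() helpers from ceph-helpers.sh:

    while true ; do
        num_active_clean=$(get_num_active_clean)
        if test $num_active_clean = $(get_num_pgs) ; then
            break
        else
            : # the 'timer reset' checks run only here, when the cluster is not clean yet
        fi
        sleep 1   # shorter sleep, the cluster state changes fast
    done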
This whole patch may not look like a big win, but for a test
like test/osd/osd-scrub-repair.sh we drop from 9m56 to 9m30 while
reducing the number of system calls.
At the scale of make check, that's a lot of saving.
Signed-off-by: Erwan Velu <erwan@redhat.com>
get_num_active_clean() is called very often but spawns one useless process.
The current "grep -v | wc -l" can easily be replaced by "grep -cv", which
does the same thing while spawning one process less.
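For example; the command and the filtering pattern are only illustrative:

    ceph pg dump pgs | grep -v "active+clean" | wc -l   # before: grep plus wc
    ceph pg dump pgs | grep -cv "active+clean"          # after: grep alone does the counting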
Signed-off-by: Erwan Velu <erwan@redhat.com>
The current code in kill_daemons() was killing daemons one after the
other, waiting for each to actually die before switching to the next one.
This patch makes the kill_daemons() loop run in parallel to avoid
this bottleneck.
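A hedged sketch of the parallel form, with _kill_one_daemon standing in for
whatever per-daemon logic the loop runs:

    pids=""
    for pidfile in $dir/*.pid ; do
        _kill_one_daemon $pidfile &   # hypothetical per-daemon kill helper
        pids="$pids $!"
    done
    wait $pids   # wait for all the background kills to complete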
Signed-off-by: Erwan Velu <erwan@redhat.com>
This commit introduces two new functions in ceph-helpers.sh to ease
parallelism in tests: run_in_background() & wait_background().
The first one lets you spawn processes or functions in the background and saves
the associated pid in a variable passed as the first argument.
The second one waits for those pids to complete and reports their exit status.
If one or more failed, then wait_background() reports a failure.
A typical usage looks like:
pids1=""
run_in_background pids1 bash -c 'sleep 5; exit 0'
run_in_background pids1 bash -c 'sleep 1; exit 1'
run_in_background pids1 my_bash_function
wait_background pids1
The variable that contains the pids is local, making it possible to nest
calls to these two new functions.
Signed-off-by: Erwan Velu <erwan@redhat.com>
Since the merge of PR #7693, using 'ceph command' to get the help is invalid.
As a result, the 'test/cephtool-test-mon.sh' test was broken.
This patch simply changes 'ceph command' into 'ceph --help command'.
With this change, the test passes again.
Signed-off-by: Erwan Velu <erwan@redhat.com>
"mds stat" now gives fsmap output rather than
mdsmap. Update the rest api test's expectations.
Fixes: http://tracker.ceph.com/issues/15309
Signed-off-by: John Spray <john.spray@redhat.com>
Since the ceph task was already creating filesystems
during setup, I presume that these calls only ever
worked when they were using the same names as
the existing filesystem.
Fixes: http://tracker.ceph.com/issues/15309
Signed-off-by: John Spray <john.spray@redhat.com>
The new default must be taken into account by the make check scripts,
otherwise they fail.
Follow-up of 5b3da26.
Signed-off-by: Loic Dachary <loic@dachary.org>
The python scripts are not yet compatible with python3, yet it is the
default on jessie. Force the creation of the virtualenv to use python2.7
instead. The wheelhouse is already explicitly populated for both python3
and python2.7 by install-deps.sh, regardless of the default interpreter.
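A hedged example of forcing the interpreter when creating the virtualenv:

    virtualenv --python python2.7 ./virtualenv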
Signed-off-by: Loic Dachary <loic@dachary.org>