This is to allow running CephFSTestCase tests
against a vstart cluster, for much faster turnaround
during development than running teuthology against
built ceph packages.
Not everything will be runnable this way, but for
certain things like filesystem repair scenarios we
have everything we need within a vstart environment.
Signed-off-by: John Spray <john.spray@redhat.com>
For tests to advertise that they need the client
to be able to trim its cache (i.e. currently that
means the test must run as root)
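For example, a test could declare the requirement during setup;
this is only a sketch, and the helper name below is an assumption
rather than the real API:

    # Hypothetical sketch; the real advertisement mechanism may differ.
    import os

    def require_cache_trimming(test_case):
        # Trimming the client cache currently requires running as root
        if os.geteuid() != 0:
            test_case.skipTest(
                "client cache trimming requires running as root")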
Signed-off-by: John Spray <john.spray@redhat.com>
A means for test cases to mark particular methods
as long running, so that the vstart runner can skip
them when running for developers.
This is not a scientific threshold: anything that
takes more than about two minutes due to lots of
iteration or sleeps counts as long running.
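For illustration, a minimal sketch of such a marker (the decorator
and attribute names are assumptions, not necessarily what the runner
uses):

    # Hypothetical sketch; real names may differ.
    def long_running(f):
        # Mark a test method so the vstart runner can skip it
        f.is_long_running = True
        return f

    # In the vstart runner, something like:
    #   if getattr(method, "is_long_running", False):
    #       continue  # skip slow tests during local development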
Signed-off-by: John Spray <john.spray@redhat.com>
In teuthology this isn't needed because we join the
mds child processes after killing them. In vstart
we're killing them asynchronously, so be a bit more
careful to ensure they can't re-insert themselves
into the mdsmap between our call to fail and our
call to fs rm.
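A sketch of the intended ordering (the ceph subcommands are real;
the helper objects and method names are assumptions):

    # Sketch only: stop daemons, mark ranks failed, then remove the fs.
    def destroy_fs(mon, fs_name, mds_daemons, active_ranks):
        for daemon in mds_daemons:
            daemon.stop()  # asynchronous kill under vstart
        for rank in active_ranks:
            # Mark the rank failed so a dying daemon can't rejoin the mdsmap
            mon.raw_cluster_cmd("mds", "fail", str(rank))
        mon.raw_cluster_cmd("fs", "rm", fs_name, "--yes-i-really-mean-it")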
Signed-off-by: John Spray <john.spray@redhat.com>
...into the part that requires a network-isolated
client and the part that doesn't.
The former also happens to be the part that won't work
with vstart, while the latter will. The teuthology yaml
will still pick up and run both parts.
Signed-off-by: John Spray <john.spray@redhat.com>
* Instead of creating files in the background, create
them in the foreground (simpler).
* Instead of creating max_requests*2 files, just create
max_requests plus a little bit.
* Set max_requests to 1000 instead of 5000 to run a bit
faster.
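Roughly, the creation loop now looks like this (a sketch; the
mount.run_shell helper is assumed to run a command on the client
mount):

    # Sketch of the foreground creation loop.
    def create_files(mount, max_requests=1000):
        # Create a few more files than max_requests so the limit is hit
        for i in range(0, max_requests + 100):
            mount.run_shell(["touch", "file_{0}".format(i)])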
Signed-off-by: John Spray <john.spray@redhat.com>
We weren't waiting for the export dir operation to
complete (the asok command just starts the process).
This wasn't noticeable when running remotely, due to
latency between the test runner and the MDS, but it
shows up when running against a local vstart cluster.
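A sketch of the wait (the "get subtrees" asok command is real, but
the helper and the output field names here are assumptions):

    # Sketch: poll the MDS until the exported dir lands on the target rank.
    import time

    def wait_for_export(fs, path, rank, timeout=60):
        for _ in range(timeout):
            subtrees = fs.mds_asok(["get", "subtrees"])  # parsed JSON assumed
            if any(s["dir"]["path"] == path and s["auth_first"] == rank
                   for s in subtrees):
                return
            time.sleep(1)
        raise RuntimeError(
            "export of {0} to rank {1} did not complete".format(path, rank))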
Signed-off-by: John Spray <john.spray@redhat.com>
I am seeing a strange thing where sometimes an ls of
/sys/fs/fuse/connections returns empty even though
connections do exist. It is pretty easy to make this
a non-issue by waiting for "more conns than we started
with" instead of "the list of conns is different", so
do that.
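A sketch of the new wait (the helper name is illustrative):

    # Sketch: wait for strictly more fuse connections than before the mount.
    import os
    import time

    def wait_for_new_conn(pre_mount_conns, timeout=30):
        for _ in range(timeout):
            post_mount_conns = set(os.listdir("/sys/fs/fuse/connections"))
            if len(post_mount_conns) > len(pre_mount_conns):
                return post_mount_conns
            time.sleep(1)
        raise RuntimeError("timed out waiting for a new fuse connection")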
Signed-off-by: John Spray <john.spray@redhat.com>
Previously, failure to stat the mount dir was interpreted
as being unmounted. For a "transport endpoint not connected"
error we do want to recognise that it is mounted, albeit
with no ceph-fuse process.
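A sketch of the interpretation (the helper name and the success-path
handling are simplified assumptions):

    # Sketch: ENOTCONN means the mount exists but ceph-fuse has died.
    import errno
    import os

    def check_mounted(mountpoint):
        try:
            os.stat(mountpoint)
        except OSError as e:
            if e.errno == errno.ENOTCONN:
                # Transport endpoint is not connected: still a mount,
                # just with no ceph-fuse process behind it
                return True
            return False
        return True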
Signed-off-by: John Spray <john.spray@redhat.com>
Use this during test setup to check whether
a filesystem is configured at all, before
trying to tear it down.
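For example, something along these lines (the "ceph fs ls" command
is real; the helper name and the mon handle are assumptions):

    # Sketch: check whether a named filesystem is configured at all.
    import json

    def filesystem_exists(mon, name):
        fs_list = json.loads(mon.raw_cluster_cmd("fs", "ls", "--format=json"))
        return any(f["name"] == name for f in fs_list)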
Signed-off-by: John Spray <john.spray@redhat.com>
So that my vstart subclass can put ./ before
all the commands.
One could set $PATH, but I like to unambiguously point
it at the locally built binaries in case someone also
has some system-wide packages installed.
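A hypothetical sketch of the override point (class and method names
are assumptions):

    # Sketch: prefix commands so the locally built binaries are used.
    import subprocess

    class LocalBinaryRunner(object):
        prefix = "./"  # point at the local build, not $PATH

        def run(self, args):
            # e.g. ["ceph", "-s"] becomes ["./ceph", "-s"]
            return subprocess.check_output(
                [self.prefix + args[0]] + list(args[1:]))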
Signed-off-by: John Spray <john.spray@redhat.com>
A new test verifies that writes are stopped by the pool quota (and
that we get the right error messages or block). See ceph.git
32962740ce.
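For reference, a quota like the one the test relies on can be applied
with the (real) set-quota subcommand; the helper, pool name and size
here are illustrative:

    # Sketch: cap the pool at 10 MB so writes hit the quota quickly.
    def set_pool_quota(mon, pool="quota-pool", max_bytes=10 * 1024 * 1024):
        mon.raw_cluster_cmd("osd", "pool", "set-quota", pool,
                            "max_bytes", str(max_bytes))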
Signed-off-by: Sage Weil <sage@redhat.com>
The existing logic is to run ceph-deploy osd create --zap-disk,
which zaps the data device before preparing it. However it does
not zap the journal device (see http://tracker.ceph.com/issues/13291).
If ceph-deploy osd create fails, a fallback zaps both the data
device and the journal and tries to prepare again. This could work
if device preparation and activation were synchronous and caught
all errors that an unclean journal device could cause. However,
activation is asynchronous and it is entirely possible for a device
to be prepared successfully and then fail to activate in the
background.
Instead, always zap both the data and journal devices before calling
ceph-deploy osd create. The logic is simpler and the overhead is low.
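A sketch of the simplified flow (the ceph-deploy subcommands are
real; the execute() helper and device names are illustrative):

    # Sketch: always zap data and journal, then create the OSD.
    def create_osd(execute, node, data, journal):
        execute(["ceph-deploy", "disk", "zap", "{0}:{1}".format(node, data)])
        execute(["ceph-deploy", "disk", "zap", "{0}:{1}".format(node, journal)])
        execute(["ceph-deploy", "osd", "create",
                 "{0}:{1}:{2}".format(node, data, journal)])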
http://tracker.ceph.com/issues/13000
Fixes: #13000
Signed-off-by: Loic Dachary <loic@dachary.org>
Blackhole filestore ops so that we ensure the OSD doesn't
complete the pg deletions before the restart function does
a clean shutdown, etc.
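For illustration only: "filestore blackhole" is a real config option
that drops filestore ops; whether this test injects it at runtime as
shown is an assumption:

    # Sketch: make the OSD silently drop filestore ops.
    def blackhole_osd(manager, osd_id):
        manager.raw_cluster_cmd("tell", "osd.{0}".format(osd_id),
                                "injectargs", "--filestore-blackhole=true")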
Signed-off-by: Sage Weil <sage@redhat.com>
Restart can be slow enough that osd.1 and osd.2 finish deleting
the pgs. Verifying that one osd sees the instance is sufficient.
Signed-off-by: Sage Weil <sage@redhat.com>
CentOS 6.5 needs to install a package and reboot to grow the root
file system. Instead of assuming a common user-data.txt file can fit
all operating systems, make one user-data file per os-type/os-version
combination.
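A hypothetical sketch of the selection logic (the file-name layout
and fallback are assumptions):

    # Sketch: prefer a per-OS user-data file, fall back to the common one.
    import os

    def user_data_path(os_type, os_version, directory="."):
        specific = os.path.join(
            directory, "user-data-{0}-{1}.txt".format(os_type, os_version))
        default = os.path.join(directory, "user-data.txt")
        return specific if os.path.exists(specific) else default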
Signed-off-by: Loic Dachary <loic@dachary.org>