This relies on quota-related functionality that doesn't exist
in kclient. We can still run the outer part of the test,
though.
Signed-off-by: John Spray <john.spray@redhat.com>
This should have been umount_wait, not umount (i.e.
the blocking foreground version). It happened to matter
because umount_wait is more tolerant of being called when
the client is not already mounted.
Signed-off-by: John Spray <john.spray@redhat.com>
This test was probably buggy and only happened to work
with ceph-fuse, because it expects the MDS to immediately
respond to updates to the client's auth caps, but that
doesn't happen.
Signed-off-by: John Spray <john.spray@redhat.com>
With the kernel client, this was proceeding to attempt
a split before the directory had persisted, because
there was no fsync after creating it.
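For reference, a minimal sketch of the kind of directory fsync that was
missing; the helper name and path handling are illustrative, not the
test's actual code:

    import os

    def fsync_dir(path):
        # Open the directory itself and fsync it so the newly created
        # entry is persisted before the test goes on to trigger the split.
        fd = os.open(path, os.O_RDONLY | os.O_DIRECTORY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)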
Signed-off-by: John Spray <john.spray@redhat.com>
This tests a fuse-only feature; the ticket for adding
it in kclient is:
http://tracker.ceph.com/issues/17805
Signed-off-by: John Spray <john.spray@redhat.com>
Change the Mount interface to take it as an argument to
mount() instead of setting it out of band in a config file,
as we used to do for the fuse client.
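Roughly the interface shape intended, as a hedged sketch; the class and
parameter names here are assumptions, since the message doesn't spell
out which setting was moved:

    class CephFSMountSketch(object):
        def mount(self, mount_path=None):
            # Each Mount implementation builds its mount command from the
            # argument passed here, rather than reading it out of band
            # from a client config section.
            raise NotImplementedError()

    class FuseMountSketch(CephFSMountSketch):
        def mount(self, mount_path=None):
            args = ["ceph-fuse"]
            if mount_path is not None:
                args += ["-r", mount_path]
            # ... run args against the test cluster ...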
Signed-off-by: John Spray <john.spray@redhat.com>
It was not correct to expect a client to block after
eviction unless it was also deauthorised. I guess this was
working with fuse because fuse does a worse job of
re-establishing a session than the kclient?
Signed-off-by: John Spray <john.spray@redhat.com>
Instead of asserting in configure_auth (which in fact
works fine with KernelMount.write_secret_file), raise
a SkipTest in test_session_reject (because the kernel
client cannot handle the client_metadata setting to
inject bogus data).
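The skip looks roughly like this hedged sketch (the helper name is
illustrative and the FuseMount import path is assumed from the qa tree;
the actual change raises SkipTest directly in the test):

    from unittest.case import SkipTest

    from tasks.cephfs.fuse_mount import FuseMount

    def require_fuse(mount):
        # The kernel client has no way to inject bogus client_metadata,
        # so the test only makes sense against ceph-fuse.
        if not isinstance(mount, FuseMount):
            raise SkipTest("Requires FUSE client to inject client metadata")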
Signed-off-by: John Spray <john.spray@redhat.com>
Credit to John Spray for identifying the problem/cause.
Fixes: http://tracker.ceph.com/issues/17894
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This commit amends the MDS thrasher task to also work on multimds
clusters. Main changes:
o New FSStatus class in tasks/cephfs/filesystem.py which gets a snapshot
of the fsmap (`ceph fs dump`). This allows consecutive operations on
the same fsmap without repeated fs dumps.
o Only one MDSThrasher is started for each file system.
o The MDSThrasher operates on ranks instead of names (and groups of
standbys following the initial active).
o The MDSThrasher will also change the max_mds for the cluster to a new
value [1, current) or (current, starting max_mds]. When reduced,
randomly selected MDSs other than rank 0 will be deactivated to reach
the new max_mds. The likelihood of changing max_mds in a given cycle of
the MDSThrasher is set by the "thrash_max_mds" config (a rough sketch
follows this list).
o The MDSThrasher prints out stats on completion, e.g. the number of
MDSs deactivated or times max_mds was changed.
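A rough sketch of the max_mds selection described above; the function
names and exact handling are illustrative, not the thrasher's actual
code:

    import random

    def choose_new_max_mds(current, starting_max_mds):
        # New value drawn from [1, current) or (current, starting_max_mds].
        options = [i for i in range(1, starting_max_mds + 1) if i != current]
        return random.choice(options) if options else current

    def maybe_thrash_max_mds(thrash_max_mds, current, starting_max_mds):
        # thrash_max_mds is the configured probability of changing
        # max_mds in a given thrasher cycle.
        if random.random() < thrash_max_mds:
            return choose_new_max_mds(current, starting_max_mds)
        return current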
Pre-requisite for: http://tracker.ceph.com/issues/10792
Partially fixes: http://tracker.ceph.com/issues/15134
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
A more generic CephTestCase and CephCluster, for
writing non-cephfs test cases.
This avoids overloading one class with the functionality
needed by lots of different subsystems.
Signed-off-by: John Spray <john.spray@redhat.com>
The branches got mixed up and the merged one wasn't
the same one that was tested. This is the one that
works!
Signed-off-by: John Spray <john.spray@redhat.com>
Check that the total size shown by the df output of a mounted volume
is the same as the volume size and the quota set on the volume.
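As a hedged illustration of the relation being checked (the test itself
goes through the mount object, so the names here are illustrative):

    import os

    def df_total_bytes(mount_point):
        # Total size as df reports it: f_blocks * f_frsize.
        st = os.statvfs(mount_point)
        return st.f_blocks * st.f_frsize

    # Compared against the size/quota the volume was created with, e.g.:
    #   assert df_total_bytes("/mnt/volume") == volume_size_bytes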
Signed-off-by: Ramana Raja <rraja@redhat.com>
So that for folks with sources in typical locations
(or typical on my workstation at least!) invoking
vstart_runner is less of a mouthful.
Signed-off-by: John Spray <john.spray@redhat.com>
vstart_runner can't find the arguments of ceph daemons to identify them
with ps -x because command lines are cut off at the terminal width. Add
-ww for wide output.
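Illustrative only (not vstart_runner's exact invocation), assuming a
procps-style ps:

    import subprocess

    # "ww" gives ps unlimited output width, so a ceph daemon's full
    # command line (used to identify it) isn't truncated at the
    # terminal width.
    output = subprocess.check_output(["ps", "auxww"]).decode()
    mds_lines = [l for l in output.splitlines() if "ceph-mds" in l]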
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Fortunately we already have a test that creates the condition,
so just tweak it to exceed the 150% threshold for the health warning,
and check that the health message appears.
Signed-off-by: John Spray <john.spray@redhat.com>
Test the use cases for the authentication metadata stored
by the volume client (a rough sketch follows the list):
* Obtain the list of auth IDs having access to a volume.
* Restrict volume access to auth IDs of a single (OpenStack)
tenant to enforce strong tenant isolation of volumes.
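A hedged sketch of the use cases above; the CephFSVolumeClient call
signatures are assumptions and may not match the volume client exactly:

    from ceph_volume_client import CephFSVolumeClient, VolumePath

    vc = CephFSVolumeClient("manila", "/etc/ceph/ceph.conf", "ceph")
    vc.connect()
    vp = VolumePath(None, "volume1")

    # Tie the auth ID to a tenant; a different tenant reusing the same
    # auth ID should then be refused access.
    vc.authorize(vp, "guest-client", tenant_id="tenant-a")

    # Obtain the list of auth IDs having access to the volume.
    print(vc.get_authorized_ids(vp))

    vc.disconnect()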
Signed-off-by: Ramana Raja <rraja@redhat.com>
The message was logged as if the filesystem were already
mounted, even though vstart_runner is only attempting to
mount it at this stage.
Signed-off-by: Ramana Raja <rraja@redhat.com>
... instead of iterating over all blacklist entries. Fall
back to the old way if the new way doesn't work (i.e. on
older Ceph).
Signed-off-by: John Spray <john.spray@redhat.com>
``long_running`` needs a better name; it's really just a
marker that says a test is for use in teuthology but not vstart.
Signed-off-by: John Spray <john.spray@redhat.com>