This commit amends the MDS thrasher task to also work on multimds
clusters. Main changes:
o New FSStatus class in tasks/cephfs/filesystem.py which gets a snapshot
of the fsmap (`ceph fs dump`). This allows consecutive operations on
the same fsmap without repeated fs dumps.
o Only one MDSThrasher is started for each file system.
o The MDSThrasher operates on ranks instead of names (and groups of
standbys following the initial active).
o The MDSThrasher will also change the cluster's max_mds to a new
value in [1, current) or (current, starting max_mds]. When max_mds is
reduced, randomly selected MDSs other than rank 0 are deactivated to
reach the new max_mds (see the sketch after this list). The
likelihood of changing max_mds in a given cycle of the MDSThrasher is
set by the "thrash_max_mds" config.
o The MDSThrasher prints out stats on completion, e.g. the number of
MDSs deactivated or the number of max_mds changes.
Pre-requisite for: http://tracker.ceph.com/issues/10792
Partially fixes: http://tracker.ceph.com/issues/15134
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Don't construct Filesystem and MDSCluster if there
are no MDSs in the system, and don't construct
MgrCluster if there are no mgrs in the system.
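A minimal sketch of that guard; the factory function, the MgrCluster
import path, and the daemon-count parameters are assumptions, not the
actual setup code:

    from tasks.cephfs.filesystem import Filesystem, MDSCluster
    from tasks.mgr.mgr_test_case import MgrCluster  # path assumed

    def build_helpers(ctx, num_mds, num_mgrs):
        """Construct per-subsystem helpers only for daemons that
        actually exist in the run (hypothetical factory)."""
        helpers = {}
        if num_mds > 0:
            helpers['mds_cluster'] = MDSCluster(ctx)
            helpers['fs'] = Filesystem(ctx)
        if num_mgrs > 0:
            helpers['mgr_cluster'] = MgrCluster(ctx)
        return helpers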
Signed-off-by: John Spray <john.spray@redhat.com>
A more generic CephTestCase and CephCluster, for
writing non-CephFS test cases.
This avoids overloading one class with the functionality
needed by lots of different subsystems.
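A rough sketch of the split this enables; the class contents are
illustrative rather than the actual definitions:

    import unittest

    class CephCluster(object):
        """Generic handle on a running cluster (config, daemons)."""
        def __init__(self, ctx):
            self._ctx = ctx

    class CephTestCase(unittest.TestCase):
        """Holds only subsystem-agnostic helpers, so rados, rbd, or
        mgr tests don't inherit CephFS-specific machinery."""
        ceph_cluster = None

    class CephFSTestCase(CephTestCase):
        """CephFS-specific assertions and fixtures layer on top."""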
Signed-off-by: John Spray <john.spray@redhat.com>
The branches got mixed up and the merged one wasn't
the same one that was tested. This is the one that
works!
Signed-off-by: John Spray <john.spray@redhat.com>
The mon_seesaw task replaces a monitor with a newly deployed one. In
a single-mon test including this task, OSDs will not be able to
connect to the cluster if tracker #17558 is not fixed on the monitor
side.
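A condensed sketch of the seesaw step, assuming a CephManager-style
handle; provisioning details (mkfs, monmap handling) are elided:

    def replace_mon(manager, victim, replacement, replacement_addr):
        """Swap one monitor out for a fresh one (outline only)."""
        manager.kill_mon(victim)
        manager.raw_cluster_cmd('mon', 'remove', victim)
        # ... provision the replacement's data dir with --mkfs ...
        manager.raw_cluster_cmd('mon', 'add', replacement,
                                replacement_addr)
        manager.revive_mon(replacement)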
http://tracker.ceph.com/issues/17558
Signed-off-by: Kefu Chai <kchai@redhat.com>
This was only used in this task, and it is much too
ceph-specific to belong in teuthology.
Fixes: http://tracker.ceph.com/issues/17614
Signed-off-by: John Spray <john.spray@redhat.com>
Check that the total size shown by the df output of a mounted volume
is the same as the volume size and the quota set on the volume.
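A minimal, self-contained version of the check; the helper name and
the way the quota value is obtained are illustrative:

    import os

    def check_df_total(mount_path, quota_bytes):
        """The total size reported by statvfs on the mount should
        equal the quota configured on the volume."""
        st = os.statvfs(mount_path)
        total = st.f_blocks * st.f_frsize
        assert total == quota_bytes, \
            "df total %d != quota %d" % (total, quota_bytes)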
Signed-off-by: Ramana Raja <rraja@redhat.com>
Fail the 'ceph-monstore-tool' command if the caps are empty, so that
a user can't assign a key without any caps when rebuilding the
monstore.
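The check itself lives in the tool's C++ code; a Python stand-in for
the idea:

    def validate_caps(keyring_entities):
        """Reject any entity whose caps are empty (illustrative
        stand-in for the tool's validation)."""
        for name, caps in keyring_entities.items():
            if not caps:
                raise ValueError("no caps assigned to %s" % name)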
Signed-off-by: Kefu Chai <kchai@redhat.com>
This forces them to be unclean, *then* stale. This ensures
that after they are both down, they are both *always* unclean,
whereas previously it would be possible for them to be only
stale and not unclean.
Signed-off-by: Sage Weil <sage@redhat.com>
So that for folks with sources in typical locations
(or typical on my workstation at least!) invoking
vstart_runner is less of a mouthful.
Signed-off-by: John Spray <john.spray@redhat.com>
Only do the failure injection 50% of the time; otherwise, just
kill as usual.
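A minimal sketch of that coin flip, with illustrative helper names:

    import random

    def thrash_once(inject_failure, kill):
        """Use failure injection half the time; otherwise fall back
        to a plain kill (hypothetical helpers)."""
        if random.random() < 0.5:
            inject_failure()
        else:
            kill()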
Signed-off-by: Sage Weil <sage@redhat.com>
* tasks/rebuild_mondb.py: this task
1. removes store.db on all monitors
2. rebuilds the store.db for the first mon
3. starts the first mon
4. runs mkfs on the other mons
5. and revives them
(a sketch of this flow follows below)
* suites/rados/singleton/all/rebuild-mon-db.yaml
1. runs rados/test.sh
2. runs the rebuild_mondb task
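A condensed sketch of the flow above, using teuthology's daemon
iteration API; the store-removal and rebuild steps are elided because
they depend on ceph-monstore-tool specifics:

    def task(ctx, config):
        """Outline of rebuild_mondb (not the task's full code)."""
        mons = list(ctx.daemons.iter_daemons_of_role('mon'))
        for mon in mons:
            mon.stop()
            # ... remove store.db under the mon's data dir ...
        first, rest = mons[0], mons[1:]
        # ... rebuild first's store.db via ceph-monstore-tool ...
        first.restart()
        for mon in rest:
            # ... ceph-mon --mkfs for a fresh store ...
            mon.restart()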
Fixes: http://tracker.ceph.com/issues/17179
Signed-off-by: Kefu Chai <kchai@redhat.com>