Do not exclude the ceph-test package; otherwise the ceph-coverage
executable is not installed.
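For illustration, a package-exclusion override that keeps ceph-test might look
like the sketch below; the override path and the package names listed are
assumptions, only the requirement to leave ceph-test in place comes from this
change.

    overrides:
      install:
        ceph:
          exclude_packages:
            # hypothetical list: ceph-test is deliberately absent so the
            # ceph-coverage executable it ships remains installed
            - ceph-dbg
            - ceph-common-dbg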
Fixes: http://tracker.ceph.com/issues/16506
Signed-off-by: Loic Dachary <loic@dachary.org>
* tasks/rebuild_mondb.py: this task
  1. removes the store.db on all monitors
  2. rebuilds the store.db for the first mon
  3. starts the first mon
  4. runs mkfs on the other mons
  5. and revives them
* suites/rados/singleton/all/rebuild-mon-db.yaml
  1. runs rados/test.sh
  2. runs the rebuild_mondb task (see the yaml sketch below)
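A minimal sketch of the suite fragment, following the ordering described
above; the install/ceph boilerplate and exact task arguments are assumptions:

    tasks:
    - install:
    - ceph:
    - workunit:
        clients:
          all:
            - rados/test.sh
    - rebuild_mondb: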
Fixes: http://tracker.ceph.com/issues/17179
Signed-off-by: Kefu Chai <kchai@redhat.com>
This is to make sure that load_pgs() is finished before checking its output.
Fixes: http://tracker.ceph.com/issues/16157
Signed-off-by: Kefu Chai <kchai@redhat.com>
Otherwise a pre-single-major kernel override is a headache,
particularly with non-standard yaml configs.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
The unmap test uses one remote, so the end result is the same.
However, overriding the most specific role is nicer and allows
scheduling with
  kernel:
    client:
      branch: testing
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Kernel 3.13, which is used in the pre-single-major.yaml test, doesn't
support firefly tunables (the default in jewel, up from bobtail tunables).
This went unnoticed for a while because of a kernel task regression -
the pre-single-major override was ignored.
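As a sketch only, a test could pin the cluster to tunables that a 3.13 kernel
understands before mapping images; whether this suite handles it this way, and
where, is an assumption:

    tasks:
    - exec:
        client.0:
          # bobtail tunables are readable by kernel 3.13; firefly tunables are not
          - ceph osd crush tunables bobtail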
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Similarly to how single-major-off.yaml and single-major-on.yaml check
the value of /sys/module/rbd/parameters/single_major, assert that it's
not there for pre-single-major.yaml.
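A sketch of how that assertion could look, assuming the same exec-style check
used by the single-major-on/off fragments:

    tasks:
    - exec:
        client.0:
          # pre-single-major kernels do not expose the module parameter at all
          - test ! -e /sys/module/rbd/parameters/single_major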
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
The v10.2.x cls_rbd test case will not pass against a v10.2.0
OSD. Disable the offending test.
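A generic way to skip a single case is a gtest filter exclusion; the test name
below is a placeholder, not necessarily the case this commit disables:

    tasks:
    - exec:
        client.0:
          # hypothetical: run the cls_rbd tests with one case filtered out
          - ceph_test_cls_rbd --gtest_filter='-TestClsRbd.placeholder_case'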
Fixes: http://tracker.ceph.com/issues/16529
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Ported from Zheng's #22: 2e283ce6d7
"This can reduce the test time becuase it avoids sending getattr request
whenever the kernel checks inode permission."
This is part of an effort to eliminate unnecessary differences between
multimds and fs suites.
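If the ported setting is the usual way to avoid those getattrs, it would let
ceph-fuse handle permission checks instead of the kernel; that this is the
option involved is my assumption, not stated above:

    overrides:
      ceph:
        conf:
          client:
            # kernel-side permission checks trigger a getattr per check;
            # doing them in ceph-fuse avoids that round trip
            fuse default permissions: false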
Signed-off-by: Patrick Donnelly <batrick@batbytes.com>
The fragment configuration uses 10000 for the fragment max size. The reason
is that many tests add 1000 files to a single directory, which will hit this
limit without fragmentation catching up (see the conf sketch after the list
below).
The test_dirfrag_limit test confirms:
o That the directory fragment size cannot exceed mds_bal_fragment_size_max (using a limit of 50 in all configurations).
o That fragmentation (forced) will allow more entries to be created.
o That unlink fails when the stray directory fragment becomes too large and that unlinking may continue once those strays are purged.
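A sketch of how the fragment configuration could be expressed as a conf
override; only the option name and the 10000 value come from the text above,
the placement is assumed:

    overrides:
      ceph:
        conf:
          mds:
            # large enough that tests creating ~1000 files in one directory
            # do not trip the limit before fragmentation catches up
            mds bal fragment size max: 10000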
Tests: https://github.com/ceph/ceph/pull/9789
Issue: http://tracker.ceph.com/issues/16164
Signed-off-by: Patrick Donnelly <batrick@batbytes.com>
ceph.restart now marks the osds down, so the objects are actually being
created while the slowest of the osds boots. That causes a ton of 1-byte
objects to be created in a degraded state and makes the cleanup take
a long time. Also, reduce the length of the bench since it's only being used
to ensure the osds came up correctly.
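For illustration, a shortened bench could look like this; the client, duration,
and placement are assumptions, only the idea of a shorter run comes from this
change:

    tasks:
    - radosbench:
        clients: [client.0]
        # just long enough to confirm the osds came up
        time: 60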
Signed-off-by: Samuel Just <sjust@redhat.com>