If given only 8GB RAM, ceph_test_msgr may abort with buffer::bad_alloc.
http://tracker.ceph.com/issues/11260
Fixes: #11260
Signed-off-by: Loic Dachary <loic@dachary.org>
A new test verifies that we are stopped by the pool quota (and get
the right error messages or block). See ceph.git
32962740ce.
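For context, wiring such a test into a suite usually amounts to a workunit fragment like the sketch below; the script path is an assumption based on the description, not taken from the referenced commit:

    tasks:
    - install:
    - ceph:
    - workunit:
        clients:
          all:
            - rados/test_pool_quota.sh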
Signed-off-by: Sage Weil <sage@redhat.com>
Blackhole filestore ops to ensure the OSD does not complete the pg
deletions before the restart function does a clean shutdown.
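As a point of reference, filestore ops can be blackholed with the filestore_blackhole option, e.g. via a ceph.conf override in a suite fragment; this is a minimal sketch, and the test may instead flip the option at runtime from task code:

    overrides:
      ceph:
        conf:
          osd:
            filestore blackhole: true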
Signed-off-by: Sage Weil <sage@redhat.com>
The restart can be slow enough that osd.1 and osd.2 finish deleting the
pgs. Verifying that one osd sees the pg instance is sufficient.
Signed-off-by: Sage Weil <sage@redhat.com>
Link the distro directory to the directory containing all supported
distros. Add the x86_64 arch constraint required by the isa plugin to an
isolated file that is combined with all jobs.
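A minimal sketch of such an isolated fragment (the file path is an assumption):

    # e.g. arch/x86_64.yaml, combined with every job in the suite (path is an assumption)
    arch: x86_64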
Signed-off-by: Loic Dachary <loic@dachary.org>
- simplify this.. lots of extra cruft we don't need
- restart twice at hammer to ensure that we can still load pgs
  post-upgrade (see the sketch below)
- do the same for the final version.
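A rough sketch of the double restart step, assuming the standard ceph.restart task; the daemon lists are placeholders:

    tasks:
    - ceph.restart:
        daemons: [osd.0, osd.1, osd.2]
        wait-for-healthy: true
    - ceph.restart:
        daemons: [osd.0, osd.1, osd.2]
        wait-for-healthy: true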
Fixes: #11429 (again, for ~infernalis)
Fixes: #13060
Signed-off-by: Sage Weil <sage@redhat.com>
- in general, test simple vs async vs random (sketched below)
- not for msgr-less workloads
- not for thrash-erasure-*.. the regular thrash
should cover it.
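For example, a msgr fragment pinning the messenger type could look like this (the file name is an assumption):

    # e.g. msgr/async.yaml (file name is an assumption)
    overrides:
      ceph:
        conf:
          global:
            ms type: async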
Signed-off-by: Sage Weil <sage@redhat.com>
Add a workload that uses the lrc erasure code plugin. Instead of adding
it to suites/rados/thrash-erasure-code/workloads, a new suite is created
at suites/rados/thrash-erasure-code-big because it needs more OSDs than
other erasure code plugins. The alternative would be to increase the
number of OSDs for all erasure code plugins, but that would needlessly
increase the resource requirements.
* cluster/12-osds.yaml creates a cluster with 12 OSDs and 3 MONs (see the
  sketch below)
* thrash-erasure-code-big/thrashers/*.yaml are the same as
  thrash-erasure-code/thrashers/*.yaml except they require that at
  least 8 OSDs are in at all times (instead of 4), because lrc PGs with
  k=4, m=2, l=3 use k + m + (k+m)/l = 8 chunks and are undersized with
  fewer than 8 OSDs. It is possible that crush fails to map 8 OSDs when
  only 8 OSDs are available, but that must not disturb the workload
  because min_size is 4.
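Put together, the combined fragments amount to roughly the sketch below; the role layout, op weights, and failure domain are illustrative assumptions based on the description above, not a copy of the suite files:

    roles:
    - [mon.a, osd.0, osd.1, osd.2, osd.3]
    - [mon.b, osd.4, osd.5, osd.6, osd.7]
    - [mon.c, osd.8, osd.9, osd.10, osd.11, client.0]
    tasks:
    - install:
    - ceph:
    - thrashosds:
        min_in: 8
    - rados:
        clients: [client.0]
        ops: 4000
        objects: 50
        ec_pool: true
        erasure_code_profile:
          name: lrcprofile
          plugin: lrc
          k: 4
          m: 2
          l: 3
          ruleset-failure-domain: osd
        op_weights:
          read: 100
          write: 0
          append: 100
          delete: 50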
http://tracker.ceph.com/issues/11666
Fixes: #11666
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Create divergent priors and a split and then move a pg using
ceph-objectstore-tool export/import
Add yaml file to run the reg11184 task
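A minimal sketch of what that yaml could look like; the roles are assumptions, only the reg11184 task name comes from this change:

    roles:
    - [mon.a, osd.0, osd.1, osd.2, client.0]
    tasks:
    - install:
    - ceph:
    - reg11184: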
Fixes: #11343
Signed-off-by: David Zafman <dzafman@redhat.com>
Based on tasks/divergent_priors.py, but also does a simple export/remove/import
on the same osd.
Add yaml file to run the divergent_priors2 task
Signed-off-by: David Zafman <dzafman@redhat.com>
Flake8 fixes
Use new set_recovery_delay admin socket command
Fix bad value set for filestore_blackhole
Make sure log trims and only require 100 objects
Use kick_recovery_wq to properly set osd_recovery_delay_start to 0
Write and remove a divergent object and verify the removal was undone
Fix to make compatible with wip-10809-11135-10290
Make sure to set_recovery_delay in a non-racy way (while the osd is running but marked down)
Leave the divergent osd "in" so its PGs aren't treated as strays
Add yaml file to run the divergent_priors task
Signed-off-by: David Zafman <dzafman@redhat.com>
This patch also adds some convenience facilities for making
some of the ceph_manager methods into tasks usable from a
yaml file.
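A hypothetical illustration of what invoking such a method from a suite yaml might look like; the task name and argument format are assumptions, not the actual interface added by this patch:

    tasks:
    - install:
    - ceph:
    # hypothetical: call a ceph_manager method directly from yaml
    - ceph_manager.wait_for_clean: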
Signed-off-by: Samuel Just <sjust@redhat.com>