This uses the recent teuthology feature [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory
in a position-independent way, so that copies of a suite to another location
do not break any symlinks.
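A sketch of the convention this enables (the exact names and paths here are
illustrative): each directory carries a hidden .qa symlink that chains up to
the top-level qa directory, and teuthology now ignores those hidden entries
when expanding the matrix:

  mkdir -p qa/suites/rados/basic qa/objectstore
  ln -s . qa/.qa                          # top level resolves to itself
  ln -s ../.qa qa/suites/.qa              # each level chains one directory up
  ln -s ../.qa qa/suites/rados/.qa
  ln -s ../.qa qa/suites/rados/basic/.qa
  # a fragment can then reach shared yaml without a fragile ../../.. chain:
  ln -s .qa/objectstore qa/suites/rados/basic/objectstore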
[1] https://github.com/ceph/teuthology/pull/1185
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
A long run of lost coin flips can lead to a timeout in
test_large_omap_detection.py.
Fixes: http://tracker.ceph.com/issues/23578
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
The newly introduced 'device-class' can be used to separate
different types of devices into different pools, e.g., hdd-pool
for backup data and all-flash-pool for DB applications.
However, if any osd of the cluster is currently running out
of space (exceeding the predefined 'full' threshold), Ceph
will mark the whole cluster as full and prevent writes to all pools,
which turns out to be very wrong.
This patch instead enforces the 'full' control at pool granularity;
it leverages the existing pool quota logic and should solve
the above problem.
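For illustration, the per-pool quota plumbing this builds on (pool name and
size are hypothetical); a pool that exceeds its quota is flagged full on its
own, without affecting writes to other pools:

  ceph osd pool set-quota hdd-backup max_bytes $((100 * 1024**3))  # 100 GiB cap
  ceph osd pool ls detail   # a pool at its quota shows a 'full' flag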
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This avoids warnings like

  grep: Unmatched ( or \(

which occur because we pass the whitelisted string to `egrep -v "$1"` directly.
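A minimal reproduction of the failure mode (the pattern is illustrative):

  echo ok | egrep -v "slow request ("    # egrep: Unmatched ( or \(
  echo ok | egrep -v "slow request \("   # escaped form parses fine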
Signed-off-by: Kefu Chai <kchai@redhat.com>
Valgrind runs itself on forked children, and does its cleanup when they
complete, and this is slow... slow enough that it frequently makes the
test time out.
Valgrind lets you ignore child *processes* that you exec, but I can't
find a way to skip forked children in the same address space.
Work around this by skipping this validation when running under valgrind.
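A sketch of the workaround (the pid variable and validation helper are
hypothetical): detect valgrind on the daemon's command line and skip the
check outright:

  if ps -o args= -p "$osd_pid" | grep -q valgrind; then
      echo "pid $osd_pid runs under valgrind; skipping fork validation"
  else
      validate_forked_children "$osd_pid"   # hypothetical helper
  fi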
Fixes: http://tracker.ceph.com/issues/20602
Signed-off-by: Sage Weil <sage@redhat.com>
This reverts 693bd23851, which was
added in response to http://tracker.ceph.com/issues/18126. But
we updated the Ubuntu packages in sepia, so it should be good to go.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
Set the pool size back to 2 so we don't have to wait for backfill to
complete (despite the rejection probability) in order to get back to
healthy. This way we scrub on cleanup.
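For reference, the corresponding command (pool name illustrative):

  ceph osd pool set testpool size 2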
Signed-off-by: Sage Weil <sage@redhat.com>
If we leave the quota set, the proxied ops will block indefinitely,
which in turn blocks scrubbing on the cache tier pgs indefinitely.
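Clearing the quota before cleanup (0 means unlimited; the pool name is
illustrative) lets the proxied ops drain so the scrubs can proceed:

  ceph osd pool set-quota base max_objects 0
  ceph osd pool set-quota base max_bytes 0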
Signed-off-by: Sage Weil <sage@redhat.com>
This parsed out as
  tasks:
  - install: null
  - ceph:
      conf:
        osd: osd max object name len = 400 osd max object namespace len = 64
  - workunit:
      clients:
        all:
        - rados/test_health_warnings.sh
which is clearly not correct.
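Presumably the intent was for each option to be its own key under the osd
section, i.e. something like:

  - ceph:
      conf:
        osd:
          osd max object name len: 400
          osd max object namespace len: 64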
Signed-off-by: Sage Weil <sage@redhat.com>