This is mostly for testing: many tests assume that no pools exist yet.
These tests relied on a config to turn off creation of the
"device_health_metrics" pool, which otherwise exists in any new Ceph
cluster. It would be better to make these tests tolerant of the new .mgr
pool, but there are clearly too many of them, so just convert the config
so it keeps working.
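For reference, a minimal sketch of the kind of toggle involved (assuming
the devicehealth mgr module's monitoring switch; the exact option the
tests used may differ):

    # keep the devicehealth module from creating its metrics pool
    ceph config set mgr mgr/devicehealth/enable_monitoring false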
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The balancer triggers peering, which may make PGs briefly go inactive,
possibly before they have ever been active. E.g.:

    "PG_AVAILABILITY": {
        "severity": "HEALTH_WARN",
        "summary": {
            "message": "Reduced data availability: 3 pgs inactive, 3 pgs peering",
            "count": 6
        },
        "detail": [
            {
                "message": "pg 2.6 is stuck peering since forever, current state peering, last acting [2,0]"
            },
            {
                "message": "pg 2.1c is stuck peering since forever, current state peering, last acting [2,1]"
            },
            {
                "message": "pg 2.7a is stuck peering since forever, current state peering, last acting [2,0]"
            }
        ]
    }
Signed-off-by: Sage Weil <sage@redhat.com>
This utilizes a recent teuthology feature [1] that skips hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa
directory in a position-independent way, so that copying a suite to
another location does not break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
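As an illustration of the resulting layout (a hedged sketch; the actual
link targets depend on each directory's depth), a suite directory gets a
hidden symlink back to the top-level qa tree, and other symlinks route
through it:

    $ ls -l qa/suites/rados/basic
    .qa -> ../../..            # hidden, so skipped by the job matrix
    clusters -> .qa/clusters   # position-independent; survives copying the suite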
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
A long run of lost coin flips can lead to a timeout in
test_large_omap_detection.py.
Fixes: http://tracker.ceph.com/issues/23578
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
The newly introduced 'device-class' can be used to separate
different types of devices into different pools, e.g., an hdd-pool
for backup data and an all-flash-pool for DB applications.
However, if any OSD in the cluster is currently running out
of space (exceeding the predefined 'full' threshold), Ceph
marks the whole cluster as full and prevents writes to all pools,
which is clearly wrong in this setup.
This patch instead enforces the space 'full' control at pool
granularity, reusing the existing pool quota logic to solve
the above problem.
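The pool quota machinery being reused can be exercised directly (real
CLI; the pool names here are hypothetical):

    # per-pool limits, enforced independently of any other pool
    ceph osd pool set-quota hdd-pool max_bytes 1099511627776     # 1 TiB
    ceph osd pool set-quota all-flash-pool max_objects 1000000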
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
so we can avoid warnings like
  grep: Unmatched ( or \(
which occur because we pass the whitelisted string to `egrep -v "$1"`
directly.
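A minimal reproduction, assuming a whitelist entry containing an
unescaped parenthesis (the literal pattern here is made up):

    $ echo ok | egrep -v "slow request ("
    grep: Unmatched ( or \(
    $ echo ok | egrep -v "slow request \("   # escaped: '(' matched literally
    ok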
Signed-off-by: Kefu Chai <kchai@redhat.com>
Valgrind runs itself on forked children and does its cleanup when they
complete, and this is slow... slow enough that it frequently makes the
test time out.
Valgrind lets you ignore child *processes* that you exec, but I can't
find a way to skip forked children in the same address space.
Work around this by skipping this validation when running under valgrind.
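For contrast, the knob that does exist covers only exec'd children (a
real valgrind option; the command line is a hypothetical example), which
is why it does not help here:

    # children that exec are not traced, but forked children that never
    # exec keep running inside the same valgrind instance regardless
    valgrind --trace-children=no ceph-osd -f -i 0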
Fixes: http://tracker.ceph.com/issues/20602
Signed-off-by: Sage Weil <sage@redhat.com>
This reverts 693bd23851, which was
added in response to http://tracker.ceph.com/issues/18126. But we have
since updated the Ubuntu packages in sepia, so this should be good to go.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
Set pool size back to 2 so we don't need backfill to complete
(despite the rejection probability) in order to get back to
healthy. This way we scrub on cleanup.
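The knob in question (real CLI; the pool name is hypothetical):

    ceph osd pool set test-pool size 2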
Signed-off-by: Sage Weil <sage@redhat.com>
If we leave the quota set, the proxied ops will block
indefinitely, which in turn blocks scrubbing on the cache tier
PGs.
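The corresponding cleanup (real CLI; a value of 0 removes the quota, and
the pool name is hypothetical):

    # clear the quotas so proxied ops can complete and scrubbing can proceed
    ceph osd pool set-quota cache-pool max_objects 0
    ceph osd pool set-quota cache-pool max_bytes 0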
Signed-off-by: Sage Weil <sage@redhat.com>