Increase osd_object_clean_region_max_num_intervals to track more
clean regions, resulting in more partial recovery.
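For illustration, a suite fragment bumping this option would follow the usual
overrides/conf layout; the value shown here is only an example, not
necessarily the one chosen by this change:

    overrides:
      ceph:
        conf:
          osd:
            osd_object_clean_region_max_num_intervals: 1000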
Signed-off-by: Neha Ojha <nojha@redhat.com>
When we are doing cache tiering, we are more sensitive to short PG logs
because the dup op entries are not perfectly promoted from the base to
the cache.
See:
http://tracker.ceph.com/issues/38358
http://tracker.ceph.com/issues/24320
This works around the problem by not testing short pg logs in combination
with cache tiering. This works because the short_pg_log.yaml fragment
sets the short log in the [global] section but the cache workloads override
it (back to a large/default value) in the [osd] section.
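As a rough sketch of that layering (option names and values are illustrative,
not copied from the actual fragments), [global] sets the short log and the
more specific [osd] section wins:

    # short_pg_log.yaml (sketch)
    overrides:
      ceph:
        conf:
          global:
            osd_min_pg_log_entries: 1
            osd_max_pg_log_entries: 2

    # cache workload fragment (sketch): [osd] overrides [global]
    overrides:
      ceph:
        conf:
          osd:
            osd_max_pg_log_entries: 3000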
Signed-off-by: Sage Weil <sage@redhat.com>
We are seeing some hangs when the mon forwards mgr commands (pg deep-scrub)
to the mgr. This is a buggy test (it should send the command to the mgr
directly), but it is helpful for verifying that the mon forwarding behavior
works.
Signed-off-by: Sage Weil <sage@redhat.com>
With automatic balancing on and the mode set to upmap, the balancer will
fail silently if min_compat_client is lower than luminous.
You can't figure that out unless you take a closer look at the mgr log,
which is super annoying.
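For reference, upmap only engages once the minimum compat client has been
raised first; a minimal sketch of that ordering using the exec task (role
name and exact commands are illustrative):

    tasks:
    - exec:
        mon.a:
          - ceph osd set-require-min-compat-client luminous
          - ceph balancer mode upmap
          - ceph balancer on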
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
This utilizes the recent feature in teuthology [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory in a
position-independent way such that copies of a suite to another location do not
break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
PGs are created when pg-num and pgp-num are increased, and at that moment
PG_AVAILABILITY is reported. So whitelist it in all tests which run
rados/test.sh; that script exercises ceph_test_rados_api_list.
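The whitelist entry is the usual escaped health-warning pattern, e.g. a
fragment along these lines (sketch):

    overrides:
      ceph:
        log-whitelist:
          - \(PG_AVAILABILITY\)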
Fixes: http://tracker.ceph.com/issues/23763
Signed-off-by: Kefu Chai <kchai@redhat.com>
1. Add a tier_promote op for the redirect and chunked cases (see the sketch
   below).
2. Rename set-chunk.yaml, because the current chunked object is only for the
   read case.
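The sketch below shows the kind of workload fragment involved, assuming the
rados task exposes the new op via op_weights and the --set_chunk flag as
set_chunk; weights and surrounding options are illustrative only:

    tasks:
    - rados:
        clients: [client.0]
        ops: 4000
        objects: 300
        set_chunk: true
        op_weights:
          read: 100
          write: 100
          tier_promote: 10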
Signed-off-by: Myoungwon Oh <omwmw@sk.com>
The current chunked object and ChunkReadOp are only for the read case.
Write ops and promote_object() are still tested without ChunkReadOp by
another ceph_test_rados run in the same test suite (with --set_chunk).
Signed-off-by: Myoungwon Oh <omwmw@sk.com>
We can't mix the balancer compat-set testing with firefly tunables because
it requires that all buckets be straw2.
Signed-off-by: Sage Weil <sage@redhat.com>
- snapdir conversion (at-end) stuff
- merge luminous-specific collections that avoided the above back
into their normal locations
Signed-off-by: Sage Weil <sage@redhat.com>
so we can avoid the warnings like
grep: Unmatched ( or \(
because we pass the whitelisted string to `egrep -v "$1"` directly.
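In other words, parentheses in whitelist entries need to be escaped so egrep
matches them literally instead of parsing them, e.g. (sketch):

    overrides:
      ceph:
        log-whitelist:
          # escaped: matched as literal text; an unescaped, unbalanced "("
          # would make egrep print "Unmatched ( or \("
          - \(OSD_DOWN\)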
Signed-off-by: Kefu Chai <kchai@redhat.com>
With the peering deletes change, setting luminous sets the osdmap flag
which triggers a new peering interval. That can lead to health warnings
about PG_AVAILABILITY or PG_DEGRADED. Ignore those!
Fixes: http://tracker.ceph.com/issues/20693
Signed-off-by: Sage Weil <sage@redhat.com>