qa/suites/rbd: drop cache tiering workload tests

Cache tiering facets have been a constant source of job timeouts
accompanied by "slow request" warnings on the OSDs for at least two
years.  The same workloads pass without pool/small-cache-pool.yaml or
thrashers/cache.yaml.

See the cache tiering deprecation note added in commit 535b8db33e
("doc: deprecate the cache tiering").

Fixes: https://tracker.ceph.com/issues/63149
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Ilya Dryomov 2023-09-30 13:34:44 +02:00
parent 425704acdf
commit 194dd09263
13 changed files with 0 additions and 167 deletions

@@ -1 +0,0 @@
../.qa/

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250
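
For reference, the four pool-set commands at the end of this facet are what make the cache pool deliberately small and busy. Per the Ceph cache tiering documentation they mean roughly the following (a commented restatement for context, not part of the deleted file):

  # track object access with bloom-filter HitSets
  sudo ceph osd pool set cache hit_set_type bloom
  # keep 8 HitSets, each covering a 60-second window
  sudo ceph osd pool set cache hit_set_count 8
  sudo ceph osd pool set cache hit_set_period 60
  # the tiering agent starts flushing/evicting as the pool nears 250 objects
  sudo ceph osd pool set cache target_max_objects 250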

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250

@@ -1 +0,0 @@
../.qa/

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250

@@ -1,21 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd erasure-code-profile set teuthologyprofile crush-failure-domain=osd m=1 k=2
      - sudo ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
      - sudo ceph osd pool create rbd 4 4 erasure teuthologyprofile
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250
      - rbd pool init rbd
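
For context, this facet recreated the rbd pool as erasure-coded and layered a replicated cache pool in front of it, because RBD cannot write to an EC pool without either a cache tier or overwrite support. Since Luminous the cache tier is unnecessary for that purpose: an EC data pool with allow_ec_overwrites can hold RBD data directly. A minimal sketch of that approach (the rbd-data pool name and image name are hypothetical; allow_ec_overwrites requires BlueStore OSDs):

  sudo ceph osd erasure-code-profile set teuthologyprofile crush-failure-domain=osd m=1 k=2
  sudo ceph osd pool create rbd-data 4 4 erasure teuthologyprofile
  # enable partial overwrites so RBD can write to the EC pool
  sudo ceph osd pool set rbd-data allow_ec_overwrites true
  # image metadata stays in the replicated rbd pool; data objects go to rbd-data
  rbd create --size 1G --data-pool rbd-data rbd/test-image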

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250

@@ -1,21 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd erasure-code-profile set teuthologyprofile crush-failure-domain=osd m=1 k=2
      - sudo ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
      - sudo ceph osd pool create rbd 4 4 erasure teuthologyprofile
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250
      - rbd pool init rbd

@@ -1,17 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250

@@ -1,21 +0,0 @@
overrides:
  ceph:
    log-ignorelist:
      - but it is still running
      - objects unfound and apparently lost
      - overall HEALTH_
      - \(CACHE_POOL_NEAR_FULL\)
      - \(CACHE_POOL_NO_HIT_SET\)
tasks:
- exec:
    client.0:
      - sudo ceph osd pool create cache 4
      - sudo ceph osd tier add rbd cache
      - sudo ceph osd tier cache-mode cache writeback
      - sudo ceph osd tier set-overlay rbd cache
      - sudo ceph osd pool set cache hit_set_type bloom
      - sudo ceph osd pool set cache hit_set_count 8
      - sudo ceph osd pool set cache hit_set_period 60
      - sudo ceph osd pool set cache target_max_objects 250
- thrashosds:
    timeout: 1200
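
As background for the deprecation note referenced above, detaching a writeback tier like the one these facets set up follows the documented removal sequence, sketched here with the pool names from the deleted yaml (background only, not something this commit performs):

  # stop new writes from being cached, proxying them to the base pool instead
  sudo ceph osd tier cache-mode cache proxy
  # flush and evict whatever is still held in the cache pool
  sudo rados -p cache cache-flush-evict-all
  # detach the overlay from rbd and remove the tier relationship
  sudo ceph osd tier remove-overlay rbd
  sudo ceph osd tier remove rbd cache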