Test `test_subvolume_create_with_desired_mode_in_group()` creates three
subvolumes in a subvolume group. During cleanup, it removes only two of
the three subvolumes. This causes a failure when removing the subvolume
group, since the group is not empty.
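A minimal sketch of the cleanup fix, assuming the `_fs_cmd` helper and
variable names used in qa/tasks/cephfs/test_volumes.py (the subvolume
names here are illustrative):

    # remove every subvolume created by the test, not just two of them
    for subvolume in ("subvol1", "subvol2", "subvol3"):
        self._fs_cmd("subvolume", "rm", self.volname, subvolume, group)
    # the group is now empty, so removing it no longer fails
    self._fs_cmd("subvolumegroup", "rm", self.volname, group)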
Signed-off-by: Venky Shankar <vshankar@redhat.com>
mgr/dashboard: Pool list shows current r/w byte usage in graph
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Ricardo Marques <rimarques@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
* refs/pull/27073/head:
qa/tasks: Check MDS failover during mon_thrash
qa/tasks: Compare two FSStatuses
qa/suites/fs: renamed default.yaml to mds.yaml
qa/suites/fs: mon_thrash test for fs
qa/tasks: Fix typo in the comment
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/28642/head:
mds: check last laggy before marking unresponsive client stale
mds: remove the code that skips evicting the only client
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
to be specific, ignore errors when querying an erasure coded pool's
erasure-code-profile. the pool might be removed after
"test_pool_min_size" lists all pools and before it queries the pool's
erasure-code-profile. in that case, we should just continue on with the
next pool.
normally, the pools are created by the "radosbench" tasks, and those
tasks don't delete the ec profiles after removing the ec pools that use
them, but i don't want to rely on this fact. so, in this change, the
`try` block guards both the `ceph osd pool get <pool_name> erasure_code_profile`
and the `ceph osd erasure-code-profile get <profile>` calls.
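a minimal sketch of the guarded queries, assuming a CephManager
instance `manager` and a `pools` list as in qa/tasks/ceph_manager.py:

    from teuthology.orchestra.run import CommandFailedError

    for pool_name in pools:
        try:
            # both queries can race with pool deletion, so a single
            # try block guards them both
            out = manager.raw_cluster_cmd(
                'osd', 'pool', 'get', pool_name, 'erasure_code_profile')
            profile_name = out.split(':')[1].strip()
            manager.raw_cluster_cmd(
                'osd', 'erasure-code-profile', 'get', profile_name)
        except CommandFailedError:
            # the pool was removed in the meantime; just continue on
            # with the next pool
            continue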
Fixes: http://tracker.ceph.com/issues/40533
Signed-off-by: Kefu Chai <kchai@redhat.com>
... of fs subvolumes and subvolume groups during their creation.
Fixes: https://tracker.ceph.com/issues/40299
Signed-off-by: Ramana Raja <rraja@redhat.com>
... of fs subvolumes and subvolume groups during their creation.
Fixes: https://tracker.ceph.com/issues/40431
Signed-off-by: Ramana Raja <rraja@redhat.com>
this case introduces multiple quotes in the caps line; it will trigger
the bug described in http://tracker.ceph.com/issues/22227
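for illustration, a caps line of the shape this case exercises (the
pool names are hypothetical, not from the actual test):

    # a caps value containing multiple nested quotes, the kind of
    # input that used to confuse the caps handling
    caps = "mon 'allow r' osd 'allow rwx pool=\"images\", allow rwx pool=\"volumes\"'"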
Signed-off-by: Gu Zhongyan <guzhongyan@360.cn>
mon: Improve health status for backfill_toofull and recovery_toofull
Reviewed-by: Joao Eduardo Luis <joao@suse.de>
Reviewed-by: Neha Ojha <nojha@redhat.com>
* refs/pull/28561/head:
vstart_runner: upgrade the check for commands to be run as another user
vstart_runner: split unicode arguments into lists
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Rectify the condition that checks whether a command to be issued as
another user via sudo is passed as a single argument after "-c".
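A minimal sketch of the intended check (the helper name is
hypothetical; the real logic lives in qa/tasks/vstart_runner.py):

    def issued_as_single_arg_after_dash_c(args):
        # commands run as another user look like:
        #   ['sudo', '-u', user, 'sh', '-c', '<entire command>']
        # i.e. everything after "-c" must be exactly one argument
        return ('sudo' in args and '-c' in args
                and args.index('-c') == len(args) - 2)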
Signed-off-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/28194/head:
test_volume_client: declare only one default for python version
test_volume_client: don't shadow class variable
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Treat backfill_toofull as a warning condition, because it can resolve itself.
Includes a test case for PG_BACKFILL_FULL
Includes a test case for recovery_toofull / PG_RECOVERY_FULL
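A minimal sketch of asserting the new severity, assuming a `manager`
CephManager instance and the JSON shape of `ceph health --format=json`:

    import json

    # backfill_toofull now surfaces as HEALTH_WARN rather than HEALTH_ERR
    health = json.loads(manager.raw_cluster_cmd('health', '--format=json'))
    check = health['checks'].get('PG_BACKFILL_FULL')
    if check:
        assert check['severity'] == 'HEALTH_WARN'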
Fixes: https://tracker.ceph.com/issues/39555
Signed-off-by: David Zafman <dzafman@redhat.com>
When we are doing cache tiering, we are more sensitive to short PG logs
because the dup op entries are not perfectly promoted from the base to
the cache.
See:
http://tracker.ceph.com/issues/38358
http://tracker.ceph.com/issues/24320
This works around the problem by not testing short pg logs in combination
with cache tiering. It works because the short_pg_log.yaml fragment
sets the short log in the [global] section, but the cache workloads override
it (back to a large/default value) in the [osd] section.
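A minimal sketch of the precedence that makes this work (the lookup
helper and values are illustrative, not the real merge logic):

    # per-daemon sections take precedence over [global] when both set a key
    def effective(conf, section, key):
        return conf.get(section, {}).get(key, conf['global'].get(key))

    conf = {
        'global': {'osd_max_pg_log_entries': 2},      # short_pg_log.yaml
        'osd':    {'osd_max_pg_log_entries': 10000},  # cache workload override
    }
    assert effective(conf, 'osd', 'osd_max_pg_log_entries') == 10000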
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/28453/head:
qa/valgrind.supp: be slightly less specific on suppression
msg/async, v2: make the reset_recv_state() unconditional.
Reviewed-by: Sage Weil <sage@redhat.com>
There is already logic that defers marking an unresponsive client stale.
There is no reason to defer evicting the only stale client.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Our test admin has been asking for this for the past few years :-)
Besides, this is also useful for operating on large Ceph clusters with
multiple storage pools possibly spanning all OSDs.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>