* refs/pull/36155/head:
qa: Fix traceback during fs cleanup between tests
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/35944/head:
qa: defer cleaning the mountpoint's netnses and the bridge
qa/tasks/cephfs/mount.py: remove the stale netnses and bridge
qa/tasks/cephfs/mount.py: try to flush the stale ceph-brx dev info
qa/tasks/cephfs/mount.py: switch to run_shell_payload() helper
qa/tasks/cephfs/mount.py: clean up the unused code
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
mgr/cephadm: fix call to cephadm for daemon restarts etc
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
* extract get_ragweed_branch() out of the download() task, for better
  readability.
* use a loop to retry when the first clone fails (see the sketch
  below).
* drop the `raise ValueError()` clause, as that path can never be
  reached. We could use an assert() here, but I don't think it is
  necessary anyway.
* use sh() instead of run() for better readability.
* always set ragweed_repo. Before this change, this variable was left
  unbound if `force-branch` was set.
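A minimal sketch of the shape described above, assuming `remote` is a
teuthology Remote (Remote.sh() exists); clone_ragweed() and its
`retries` parameter are hypothetical names, not the actual code:

    def get_ragweed_branch(config):
        # Extracted from download() for readability; always returns a
        # value, so the caller's variable can no longer be left unbound.
        force_branch = config.get('force-branch')
        if force_branch:
            return force_branch
        return config.get('branch', 'master')

    def clone_ragweed(remote, repo, branch, retries=3):
        # Retry in a loop instead of giving up when the first clone fails.
        for attempt in range(1, retries + 1):
            try:
                remote.sh('git clone -b {} {} ragweed'.format(branch, repo))
                return
            except Exception:
                if attempt == retries:
                    raise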
Fixes: https://tracker.ceph.com/issues/46771
Signed-off-by: Kefu Chai <kchai@redhat.com>
The 'mon_allow_pool_delete' option is set to 'True'
in the 'setUp' of 'TestVolumes' and is cleared in the
corresponding 'tearDown' function. Hence, any pool
deletion in parent classes such as 'CephFSTestCase'
would fail. This patch fixes that by setting the
'mon_allow_pool_delete' option in 'CephFSTestCase'
itself.
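A sketch of the shape of the fix; raw_cluster_cmd() is how qa tests
usually issue cluster commands, but the exact call site here is an
assumption:

    class CephFSTestCase(CephTestCase):  # CephTestCase: existing qa base class
        def setUp(self):
            super(CephFSTestCase, self).setUp()
            # Set in the parent class too, so its own pool cleanup does
            # not fail after a subclass's tearDown cleared the option.
            self.mds_cluster.mon_manager.raw_cluster_cmd(
                'config', 'set', 'mon', 'mon_allow_pool_delete', 'true')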
Fixes: https://tracker.ceph.com/issues/46597
Signed-off-by: Kotresh HR <khiremat@redhat.com>
The netnses may be created and deleted many times over the whole run of
the test cases, so we can defer cleaning them until the last mountpoint
is unmounted or the test is exiting.
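A sketch of the deferred-cleanup idea, with a hypothetical class-wide
mount refcount (_mount_count and cleanup_netns() are illustrative
names, not the actual code):

    class CephFSMount(object):
        _mount_count = 0  # class-wide count of live mountpoints

        def mount(self):
            CephFSMount._mount_count += 1
            # ... set up the netns and ceph-brx bridge as needed ...

        def umount(self):
            # ... unmount the client ...
            CephFSMount._mount_count -= 1
            # Defer tearing down the netnses and the ceph-brx bridge:
            # clean them up only when the last mountpoint goes away,
            # rather than on every mount/umount cycle.
            if CephFSMount._mount_count == 0:
                self.cleanup_netns()

        def cleanup_netns(self):
            # hypothetical: remove the netnses and the ceph-brx bridge
            pass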
Fixes: https://tracker.ceph.com/issues/46282
Signed-off-by: Xiubo Li <xiubli@redhat.com>
mgr/dashboard: wait longer for health status to be cleared
Reviewed-by: Ni-Feng Chang <kiefer.chang@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Stephan Müller <smueller@suse.com>
We could get the following log message:
tasks.ceph:Waiting for all PGs to be active+clean and split+merged, waiting on ['2.6', '2.5', '1.0', '2.4'] to go clean and/or [] to split/merge
if the cluster has non-active+clean PGs when the "ceph" task is about
to end. But this message is a little confusing, in the sense that it
lists "[]" in it.
With this change, only the PGs actually being waited on are listed.
Also, some cleanups (sketched below):
* use "else" to check whether the loop was terminated by a break
* remove the redundant "0" from the range() call
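A sketch of both cleanups in one place; wait_until_clean(), pgs,
is_clean, tries, and delay are hypothetical names:

    import time
    import logging

    log = logging.getLogger(__name__)

    def wait_until_clean(pgs, is_clean, tries=30, delay=10):
        for _ in range(tries):          # range(0, n) is just range(n)
            waiting = [pg for pg in pgs if not is_clean(pg)]
            if not waiting:
                break
            # Only list the PGs actually being waited on, so the
            # message never contains an empty "[]".
            log.info('Waiting on %s to go clean', waiting)
            time.sleep(delay)
        else:
            # "else" on a for loop runs only if no break happened,
            # i.e. we exhausted all the tries.
            raise RuntimeError('timed out waiting for PGs to go clean')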
Signed-off-by: Kefu Chai <kchai@redhat.com>
If previous test cases failed, stale netnses and the bridge may be left
behind. Remove them when new test cases begin.
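A sketch of the stale-state removal, invoking ip(8) from Python; the
"ceph-ns-" prefix is an assumed naming scheme, while "ceph-brx" comes
from the messages here:

    import subprocess

    def remove_stale_netnses_and_bridge():
        # Delete any network namespaces a failed previous run left behind.
        out = subprocess.run(['ip', 'netns', 'list'],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            name = line.split()[0]
            if name.startswith('ceph-ns-'):   # assumed prefix
                subprocess.run(['ip', 'netns', 'delete', name])
        # Drop the stale bridge too, ignoring "does not exist" errors.
        subprocess.run(['ip', 'link', 'delete', 'ceph-brx'],
                       stderr=subprocess.DEVNULL)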
Fixes: https://tracker.ceph.com/issues/45806
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Once we have run the test cases and the ceph-brx bridge is set up, its
config is saved in "/etc/sysconfig/network-scripts/ifcfg-ceph-brx"
or somewhere else, and it is kept even after the ceph-brx bridge is
removed. So the next time the ceph-brx bridge is created or added, the
config is read back from that file, and when we configure the bridge
again we get an error like:
"RTNETLINK answers: File exists"
So we need to flush the device before configuring it.
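A sketch of the flush-before-configure step; configure_brx_addr() is a
hypothetical name, and "ip addr flush" is what clears the leftover
address:

    import subprocess

    def configure_brx_addr(addr):
        # Flush whatever address the stale ifcfg-ceph-brx config left
        # on the device before adding ours, to avoid
        # "RTNETLINK answers: File exists".
        subprocess.run(['ip', 'addr', 'flush', 'dev', 'ceph-brx'],
                       check=True)
        subprocess.run(['ip', 'addr', 'add', addr, 'dev', 'ceph-brx'],
                       check=True)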
Fixes: https://tracker.ceph.com/issues/45817
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Test that the osd doesn't crash when it gets a bad incremental osdmap.
Related-to: https://tracker.ceph.com/issues/46443
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
The cluster needs more time to recover from HEALTH_WARN while changes
are made by `test_pool_update_metadata`. Let's wait several times for
the cluster status to be HEALTH_OK again.
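A sketch of the repeated wait; get_health_status is a hypothetical
callable standing in for however the test reads the cluster status:

    import time

    def wait_for_health_ok(get_health_status, tries=30, delay=10):
        # Poll several times instead of asserting HEALTH_OK once:
        # recovery from the HEALTH_WARN caused by
        # test_pool_update_metadata can take a while.
        for _ in range(tries):
            if get_health_status() == 'HEALTH_OK':
                return
            time.sleep(delay)
        raise AssertionError('cluster did not return to HEALTH_OK')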
Fixes: https://tracker.ceph.com/issues/46573
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
mgr/dashboard: increase API test coverage in API controllers
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
keystone's dependencies are installed using its tox.ini,
which in turn uses the constraints file at
https://releases.openstack.org/constraints/upper/ussuri,
which pins cliff to 3.1.0. That cliff cannot fulfill the requirement
of osc-lib 2.2.0, as it needs cliff>=3.2.0. Per
https://releases.openstack.org/ussuri/, the latest osc-lib for
ussuri is 2.0.0, and osc-lib>=2.0.0 is required by
python-openstackclient 5.2.1, so let's use that instead of the latest
one.
If we install cliff==3.1.0 along with python-openstackclient==5.2.1,
we get the following error, because the
`CommandManager.add_command_group()` method was only added to cliff in
3.2.0, see
8477c4dbd0,
so cliff fails to work with the latest openstackclient:
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 134, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: ret_val = super(OpenStackShell, self).run(argv)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
In this change the python-openstackclient version is pinned to the
latest stable release, 5.2.1. A separate PR will bump up the cliff
version on the teuthology side.
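A minimal sketch of applying the pin, assuming `remote` is a teuthology
Remote; where exactly the qa task installs the client is not shown
here:

    def pin_openstackclient(remote):
        # The pinned version (5.2.1) is the one named above.
        remote.sh('python3 -m pip install python-openstackclient==5.2.1')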
Signed-off-by: Kefu Chai <kchai@redhat.com>
rpm,deb,qa,python-common,test: drop python2 support
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Sebastian Wagner <sebastian.wagner@suse.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>