If a subvolume's mode or uid/gid values are changed after a snapshot is
taken, and a clone of a snapshot from before the change is initiated,
the clone inherits the current source subvolume's attributes rather
than the snapshot's attributes.
Fix this by using the snapshot's subvolume root attributes to create
the clone subvolume's root.
The following attributes are picked from the source subvolume snapshot;
a sketch of the idea follows the list:
- uid, gid, mode, data pool, pool namespace, quota
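A minimal sketch of the idea, assuming plain filesystem paths for the
snapshot and clone roots (the helper name and xattr handling here are
illustrative, not the actual mgr/volumes code):

    import os
    import stat

    def copy_snapshot_attrs_to_clone(snap_root, clone_root):
        st = os.lstat(snap_root)
        os.chmod(clone_root, stat.S_IMODE(st.st_mode))   # mode
        os.chown(clone_root, st.st_uid, st.st_gid)       # uid/gid
        # data pool, pool namespace and quota live in ceph xattrs
        for name in ("ceph.dir.layout.pool",
                     "ceph.dir.layout.pool_namespace",
                     "ceph.quota.max_bytes"):
            try:
                os.setxattr(clone_root, name, os.getxattr(snap_root, name))
            except OSError:
                # attribute not set on the snapshot root; keep the default
                pass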
Fixes: https://tracker.ceph.com/issues/46163
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Otherwise the files generated are not actually under the sub-directory!
This is correcting a confusing aspect of the test infrastructure but
doesn't actually require any changes to the tests.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
To make this easier to catch. It is still a RuntimeError so it should
not affect current tests by default.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
we should redirect stderr for the crimson flavor, not for the default
flavor. this change addresses a regression introduced by
da76f46461
Signed-off-by: Kefu Chai <kchai@redhat.com>
The API call is a task, and the response status is determined by whether
the call completes within a pre-defined duration (2 seconds) or not.
We should also accept the status that is returned when the call takes longer.
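A hedged illustration of the intent (the endpoint, port and use of
requests are assumptions, not the dashboard test helpers): treat both
the "finished within 2 seconds" and the "still running" responses as
success.

    import requests

    resp = requests.post("https://localhost:8443/api/example", json={}, verify=False)
    # 200: the task finished within the 2-second window
    # 202: the task is still executing and will complete later
    assert resp.status_code in (200, 202)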
Fixes: https://tracker.ceph.com/issues/46812
Signed-off-by: Kiefer Chang <kiefer.chang@suse.com>
* refs/pull/36155/head:
qa: Fix traceback during fs cleanup between tests
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/35944/head:
qa: defer cleaning the mountpoint's netnses and the bridge
qa/tasks/cephfs/mount.py: remove the stale netnses and bridge
qa/tasks/cephfs/mount.py: try to flush the stale ceph-brx dev info
qa/tasks/cephfs/mount.py: switch to run_shell_payload() helper
qa/tasks/cephfs/mount.py: clean up the none used code
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
mgr/cephadm: fix call to cephadm for daemon restarts etc
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
* extract get_ragweed_branch() out of the download() task, for better
readability.
* use a loop to retry when the first clone fails (see the sketch after
this list)
* drop the `raise ValueError()` clause as it never happens. we could use
an assert() here, but i don't think it is necessary anyway.
* use sh() instead of run() for better readability.
* always set ragweed_repo. before this change this variable was unbound
if `force-branch` was set.
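a rough sketch of the retry idea using plain subprocess calls; the repo
URL handling, branch order and function name are illustrative, not the
task's actual code:

    import subprocess

    def clone_ragweed(repo, branches, dest):
        # try the requested branch first, then fall back (e.g. to "master")
        for branch in branches:
            try:
                subprocess.run(["git", "clone", "-b", branch, repo, dest],
                               check=True)
                return branch
            except subprocess.CalledProcessError:
                continue
        raise RuntimeError("could not clone any of %s from %s" % (branches, repo))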
Fixes: https://tracker.ceph.com/issues/46771
Signed-off-by: Kefu Chai <kchai@redhat.com>
The 'mon_allow_pool_delete' option is set to 'True'
in the 'setUp' of 'TestVolumes' and is cleared in the
corresponding 'tearDown' function. Hence, any pool
deletion in parent classes such as 'CephFSTestCase'
would fail. This patch fixes that by setting the
'mon_allow_pool_delete' option in 'CephFSTestCase'
itself.
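A minimal illustration of the effect, expressed as a plain ceph CLI call
rather than the actual CephFSTestCase helper:

    import subprocess

    # allow pool deletion for every CephFS test, not just TestVolumes
    subprocess.run(
        ["ceph", "config", "set", "mon", "mon_allow_pool_delete", "true"],
        check=True)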
Fixes: https://tracker.ceph.com/issues/46597
Signed-off-by: Kotresh HR <khiremat@redhat.com>
The netnses may be created/deleted many times over the whole set of test
cases, so we can defer cleaning them until the last mountpoint is
unmounted or the test is exiting.
Fixes: https://tracker.ceph.com/issues/46282
Signed-off-by: Xiubo Li <xiubli@redhat.com>
we could have the following logging message:
tasks.ceph:Waiting for all PGs to be active+clean and split+merged, waiting on ['2.6', '2.5', '1.0', '2.4'] to go clean and/or [] to split/merge
if the cluster has non-active+clean pgs when the "ceph" task is about
to end. but this message is a little bit confusing, in the sense that
it lists "[]" in it.
in this change, only the PGs actually being waited on are listed. also,
some cleanups are added:
* use "else" to check whether the loop was terminated by a break
* remove the "0" from the range() call
Signed-off-by: Kefu Chai <kchai@redhat.com>
If the previous test cases failed, the netnses and bridge may be left
behind. Remove them when new test cases begin.
Fixes: https://tracker.ceph.com/issues/45806
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Once we have run the test cases and the ceph-brx bridge is set up, its
config is saved in "/etc/sysconfig/network-scripts/ifcfg-ceph-brx" or
somewhere else, and it is kept even after the ceph-brx bridge is
removed.
So the next time the ceph-brx bridge is created or added, the stale
config is read back in, and when we configure the bridge again we get
an error like:
"RTNETLINK answers: File exists"
We need to flush the stale config before configuring the bridge.
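A hedged sketch of the flush-before-configure step; the bridge address
is an assumption, and the qa task drives the equivalent commands through
its remote shell helpers:

    import subprocess

    # drop any stale addresses left over from a previous run ...
    subprocess.run(["ip", "addr", "flush", "dev", "ceph-brx"], check=False)
    # ... then add the address without hitting "RTNETLINK answers: File exists"
    subprocess.run(["ip", "addr", "add", "192.168.255.254/16", "dev", "ceph-brx"],
                   check=True)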
Fixes: https://tracker.ceph.com/issues/45817
Signed-off-by: Xiubo Li <xiubli@redhat.com>
For various reasons the cluster needs more time to recover from
HEALTH_WARN while changes are made by `test_pool_update_metadata`.
Let's wait several times for the cluster status to be HEALTH_OK
again.
Fixes: https://tracker.ceph.com/issues/46573
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
mgr/dashboard: increase API test coverage in API controllers
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
keystone's dependencies are installed using its tox.ini,
which in turn uses the constraints file at
https://releases.openstack.org/constraints/upper/ussuri,
which pins cliff to 3.1.0. that version is not able to fulfill the
requirement of osc-lib 2.2.0, as the latter needs cliff>=3.2.0. per
https://releases.openstack.org/ussuri/, the latest osc-lib for
ussuri is 2.0.0, and osc-lib>=2.0.0 is required by
python-openstackclient 5.2.1, so let's use it instead of using the
latest one.
if we install cliff==3.1.0 along with python-openstackclient==5.2.1,
we get the following error, as the `CommandManager.add_command_group()`
method was only added to cliff in 3.2.0, see
8477c4dbd0,
so cliff fails to work with the latest openstackclient, like:
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 134, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: ret_val = super(OpenStackShell, self).run(argv)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
in this change the openstackclient version is pinned to the latest
stable release, 5.2.1. a separate PR will bump up the cliff version on
the teuthology side.
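purely as an illustration of the pin (the qa task installs the client
into the keystone tox venv, so the exact invocation there differs):

    import subprocess

    subprocess.run(["pip", "install", "python-openstackclient==5.2.1"], check=True)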
Signed-off-by: Kefu Chai <kchai@redhat.com>
rpm,deb,qa,python-common,test: drop python2 support
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Sebastian Wagner <sebastian.wagner@suse.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Subvolume operations throw a traceback if the volume
doesn't exist. This patch fixes that.
Fixes: https://tracker.ceph.com/issues/46496
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Addresses the name collisions of volumes, subvolumes,
clones, subvolume groups and snapshots within the tests.
Fixes: https://tracker.ceph.com/issues/43517
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/35755/head:
mgr/volumes: Deprecate protect/unprotect CLI calls for subvolume snapshots
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
Reviewed-by: Victoria Martinez de la Cruz <vkmc@redhat.com>
Reviewed-by: Goutham Pacha Ravi <gouthamr@redhat.com>
Subvolume snapshots were required to be protected prior to cloning them.
Also, protected snapshots were not allowed to be unprotected or removed
if there were in-flight clones whose source was the snapshot being
removed.
Explicitly protecting snapshots is not required, as their removal can
be prevented based only on the in-flight clone checks. This commit
hence deprecates the additional protect/unprotect requirements prior
to cloning a snapshot.
In addition to deprecating the above, support to query a subvolume for
its supported features, via the info command, is added. The feature
list is set to "clone" and "auto-protect", where the latter is useful
to decide whether the protect/unprotect commands are required or not.
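A hedged example of how a caller could use the new feature list; the
volume/subvolume names are illustrative and the feature strings follow
the text above:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "fs", "subvolume", "info", "vol_a", "subvol_1", "--format=json"])
    features = json.loads(out).get("features", [])
    # only issue protect/unprotect when the subvolume does not auto-protect
    needs_protect = "auto-protect" not in features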
Fixes: https://tracker.ceph.com/issues/45371
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
* refs/pull/34861/head:
test: adjust scrub control tests for optional scrub status
mgr: set `task_dirty_status` on reconnect
mds: send scrub status to ceph-mgr only when scrub is running
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Currently the Dashboard pool usage calculation does not match the
output of the 'ceph df' command.
Fixes: https://tracker.ceph.com/issues/45185
Signed-off-by: Ernesto Puerta <epuertat@redhat.com>
Check container_image_name only if the ceph cluster image is not
pre-defined in the config.
We shouldn't care about container_image_name if cephadm or ceph already
has an image defined.
Signed-off-by: Georgios Kyratsas <gkyratsas@suse.com>
* refs/pull/35743/head:
qa/tasks/test_nfs: Add test for cluster info
mgr/volumes/nfs: Add cluster show info command
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
fuse_mount.py. This isn't critical at all for vstart_runner.py runs,
but this patch should dramatically reduce the time it takes when the
command fails with "permission denied" due to a lack of superuser
privileges, since in that case the command is re-run 9 more times, each
separated by a 5-second sleep.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Volume deletion wasn't validating the mon_allow_pool_delete config
before destroying the volume metadata. Hence, when mon_allow_pool_delete
is set to false, it deleted the metadata but failed to delete the pools,
resulting in an inconsistent state. This patch validates the config
before going ahead with the deletion.
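A minimal sketch of the pre-check, expressed against the ceph CLI rather
than the mgr/volumes internals:

    import subprocess

    allowed = subprocess.check_output(
        ["ceph", "config", "get", "mon", "mon_allow_pool_delete"]).strip()
    if allowed != b"true":
        # refuse to touch the volume metadata if the pools cannot be deleted
        raise RuntimeError("mon_allow_pool_delete is false; not deleting the volume")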
Fixes: https://tracker.ceph.com/issues/45662
Signed-off-by: Kotresh HR <khiremat@redhat.com>
qa/tasks: make sh() in vstart_runner.py identical with teuthology.orchestra.remote.sh
Reviewed-by: Kyr Shatskyy <kyrylo.shatskyy@suse.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
based on qemu task, use immutable_object_cache task to test parent cache
based on rbd_fio task, use immutable_object_cache task to test parent cache
Signed-off-by: Yin Congmin <congmin.yin@intel.com>
Signed-off-by: Feng Hualong <hualong.feng@intel.com>