This change makes the Dashboard support two types of Ganesha clusters:
- Orchestrator clusters (Since Octopus)
- Deployed by the Orchestrator.
- The Dashboard gets the pool/namespace that stores Ganesha
configuration objects from the Orchestrator.
- The Dashboard gets the daemons in a cluster from the Orchestrator.
- User-defined clusters (Since Nautilus)
- Clusters defined by using the `ceph dashboard
set-ganesha-clusters-rados-pool-namespace` command are treated as
user-defined clusters.
- Each daemon has its own RADOS configuration objects. The
Dashboard uses these objects to deduce daemons.
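A minimal sketch, with hypothetical helper names, of how the Dashboard could resolve the RADOS location of a cluster's configuration objects under this split:

    # Illustrative only: `orchestrator` and `user_defined_map` stand in for
    # the Dashboard's actual lookups.
    def rados_config_location(cluster_id, orchestrator, user_defined_map):
        if cluster_id in orchestrator.known_nfs_clusters():
            # Orchestrator cluster: pool/namespace come from the Orchestrator
            return orchestrator.nfs_config_pool_namespace(cluster_id)
        # user-defined cluster: pool/namespace come from the CLI setting
        return user_defined_map[cluster_id]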
Fixes: https://tracker.ceph.com/issues/46492
Signed-off-by: Kiefer Chang <kiefer.chang@suse.com>
After deciding to always enable log tracking in the early phase, there's no
need to keep the "log_early" option here, so remove it directly.
Suggested-by: Kefu Chai <kefu@redhat.com>
Signed-off-by: Changcheng Liu <changcheng.liu@aliyun.com>
qa/tasks: add support for a 'parallel' option in the cram task
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Jason Dillaman <dillaman@redhat.com>
The mount.cleanup method will remove the mount point, so this `rm -rf` is
always a no-op (it exits with status 0).
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
vstart_runner unconditionally omits the result line ("Ran X tests in Y")
generated by unittest. Don't do so when vstart_runner triggers an entire
test module at once.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add an option that makes it possible to not quit the running testsuite on
a test failure. This way the user can get a better idea of the current
state of the testsuite and test their own code patches more effectively
and easily.
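A minimal sketch of the intended semantics, with illustrative names rather than vstart_runner's actual code:

    def run_suite(tests, exit_on_failure=True):
        failures = []
        for test in tests:
            try:
                test()
            except AssertionError as e:
                if exit_on_failure:
                    raise
                failures.append((test.__name__, e))  # record and keep going
        return failures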
Signed-off-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/29951/head:
test: add tests for validating MDS metrics via `perf stats` module
test: Filesystem class helpers to grow and shrink MDS cluster
mgr/stats: mds performance stats module
mds: support sending empty perf metrics to ceph-manager
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
For the ceph-iscsi test case we need to run the tests sequentially,
because the client test depends on the gateway ones.
Signed-off-by: Xiubo Li <xiubli@redhat.com>
mgr/dashboard: fix the error when exporting CephFS path "/" in NFS exports
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Stephan Müller <smueller@suse.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
The admin API BucketInfo endpoint now returns 404 for buckets that
are not found when only the bucket name is passed as a parameter.
Fixes: https://tracker.ceph.com/issues/45193
Signed-off-by: Nick Janus <njanus@digitalocean.com>
... into smaller test classes. This enables breaking up the volumes yaml
into smaller yaml fragments.
Fixes: https://tracker.ceph.com/issues/47160
Signed-off-by: Ramana Raja <rraja@redhat.com>
mgr/dashboard: fix error when typing existing paths in the Ganesha form
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Stephan Müller <smueller@suse.com>
* refs/pull/36554/head:
mgr/volumes: Make number of cloner threads configurable
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Shyamsundar R <srangana@redhat.com>
Disable the clone action when the RBD snapshot is not protected
and the clone format version is 1.
Fixes: https://tracker.ceph.com/issues/37873
Signed-off-by: Tiago Melo <tmelo@suse.com>
Consider the values given by the `daemons` parameter
instead of relying on hard-coded values only.
Fixes: https://tracker.ceph.com/issues/47759
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
- The `CephFS.ls_dir()` implementation in the backend has changed; the code in the
UI endpoint `/ui-api/nfs-ganesha/lsdir` needs to adapt.
- Add fs_name as resource_id in `/ui-api/nfs-ganesha/lsdir/<fs_name>` to distinguish between file systems.
- Add more checks and unit tests.
Fixes: https://tracker.ceph.com/issues/47372
Signed-off-by: Kiefer Chang <kiefer.chang@suse.com>
Rearrange the code that triggers the test runner so that later
commits can change the way tests are triggered.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Implement a root_squash mode in MDS auth caps to deny operations for
clients with uid=0 or gid=0 that need write access. It's mainly to
prevent operations such as accidental `sudo rm -rf /path`.
The root squash mode can be enforced in one of the following ways in
the MDS caps:
'allow rw root_squash'
(across file systems)
or
'allow rw fsname=a root_squash'
(on a file system)
or
'allow rw fsname=a path=/vol/group/subvol00 root_squash'
(on a file system path)
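For example, such a cap could be granted from a qa-style test via the mon (a sketch; the client name and the non-MDS caps are illustrative):

    # create a client whose uid-0/gid-0 write operations will be denied
    fs.mon_manager.raw_cluster_cmd(
        'auth', 'get-or-create', 'client.squasher',
        'mon', 'allow r',
        'osd', 'allow rw tag cephfs data=a',
        'mds', 'allow rw fsname=a root_squash')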
Fixes: https://tracker.ceph.com/issues/42451
Signed-off-by: Ramana Raja <rraja@redhat.com>
... methods in LocalRemote class. These methods are called in some of
the recently added cephfs tests. They were implemented in teuthology's
Remote class, but not in vstart_runner's LocalRemote class. Hence some
cephfs tests couldn't be run locally using vstart_runner without this
change.
Signed-off-by: Ramana Raja <rraja@redhat.com>
mgr/dashboard: fix perf. issue when listing large amounts of buckets
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
This is hopefully a transient issue that can be ignored.
Fixes: https://tracker.ceph.com/issues/42433
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
mds_cluster.mds_fail() runs the command "mds fail", not "fs fail". The reason
for the failure was PR #32581, which accidentally changed the return code
from 0 to EINVAL. Since this was reversed in PR #37159, the change
introduced by 04ed58f is not only incorrect but also redundant.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Added a separate endpoint for osd/histogram - api/osd/{svc_id}/histogram
Fixes: https://tracker.ceph.com/issues/46898
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
In filesystem.py, don't set the value of reset_obj_attrs to False.
Fixes: https://tracker.ceph.com/issues/47526
Signed-off-by: Rishabh Dave <ridave@redhat.com>
distinguish unfound + impossible to find, vs start some down OSDs to get
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
This avoids unnecessary MDS_ALL_DOWN messages because the MDS daemons
have not yet been spawned.
Fixes: https://tracker.ceph.com/issues/47518
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Added required files for testing of the AssumeRole and GetSessionToken APIs and modified s3tests.py to handle the same.
(cherry picked from commit c2c90eaf52)
Signed-off-by: Kalpesh Pandya <kapandya@redhat.com>
qa/cephadm: Add iSCSI
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Georgios Kyratsas <gkyratsas@suse.com>
Reviewed-by: Matthew Oliver <moliver@suse.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Right now, only client IDs are stashed and restored, but with the recent
changes (specifically, the addition of more attributes to mount objects)
this is not enough. Saving and restoring these details before and after
tests respectively ensures that mount commands run smoothly. Not doing
this typically leads to a mount command failure for the second test in the
testsuite under execution, since the client IDs are saved and restored in
CephFSTestCase.setUp and CephFSTestCase.tearDown respectively, but the
rest of the details are not.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add testsuite for testing authorization on Ceph cluster with multiple
file systems and enable it to be executable with Teuthology framework.
Also add helper methods required to setup the test environment for
multi-FS tests.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
The number of cloner threads is set to 4 and it can't be
configured. This patch makes the number of cloner threads
configurable via the mgr config option "max_concurrent_clones".
On an increase in the number of cloner threads, it just spawns
the difference between the existing number of cloner threads
and the new configuration. It does not cancel
the running cloner threads.
On a decrease in the number of cloner threads, the cases are as
follows (see the sketch after this list).
1. If all cloner threads are waiting for a job:
In this case, all threads are notified and the required number of
threads are terminated.
2. If all the cloner threads are processing a job:
In this case, the condition is validated for each thread after
the current job is finished, and the thread is terminated if the
condition for the required number of cloner threads is not satisfied.
3. If a few cloner threads are processing and others are waiting:
The threads which are waiting are notified to validate the
number of threads required. If terminating those doesn't satisfy the
required number of threads, the remaining threads are terminated
upon completion of their current job.
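A minimal sketch of this reconciliation logic, with illustrative names rather than the module's actual code:

    import queue
    import threading

    class ClonerPool:
        def __init__(self, nthreads=4):
            self.jobs = queue.Queue()
            self.cond = threading.Condition()
            self.target = nthreads
            self.alive = 0
            self._spawn(nthreads)

        def _spawn(self, n):
            with self.cond:
                self.alive += n
            for _ in range(n):
                threading.Thread(target=self._loop, daemon=True).start()

        def submit(self, job):
            with self.cond:
                self.jobs.put(job)
                self.cond.notify()

        def resize(self, n):
            with self.cond:
                grow = n - self.target
                self.target = n
                self.cond.notify_all()   # wake idle workers so extras exit
            if grow > 0:
                self._spawn(grow)        # spawn only the difference

        def _loop(self):
            while True:
                with self.cond:
                    while self.jobs.empty():
                        if self.alive > self.target:
                            self.alive -= 1   # excess idle worker exits
                            return
                        self.cond.wait()
                    job = self.jobs.get()
                job()                         # running jobs are not cancelled
                with self.cond:
                    if self.alive > self.target:
                        self.alive -= 1       # re-check after finishing a job
                        return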
Fixes: https://tracker.ceph.com/issues/46892
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Modify the filesystem.Filesystem.delete_all_filesystems() method to make it
more succinct, move it to the MDSCluster class instead, and update every call
to it accordingly.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Modify the cephfs.filesystem.Filesystem.recreate() method to delete only the
FS represented by the object instead of deleting every FS on the
Ceph cluster.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
And add a reset_obj_attrs parameter to it so that the caller of the method can
choose to destroy the Ceph FS represented by the object without
disturbing the object's attributes.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This commit adds a new argument, check_status, to the mount methods of
KernelMount, FuseMount, LocalKernelMount and LocalFuseMount. When the value
of this argument is False, these methods catch the
CommandFailedError exception and return a tuple consisting of the
exception itself and the stdout and stderr of the mount command. This
allows reusing these mount methods while running negative tests for
commands.
The name "check_status" was chosen because teuthology's run() and
vstart_runner's run() use a variable with the same name for the very same
purpose.
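A minimal sketch of the changed method shape (the mount command and attribute names are illustrative, not the actual qa/tasks/cephfs/mount.py code):

    from io import StringIO
    from teuthology.orchestra.run import CommandFailedError

    def mount(self, check_status=True):
        stdout, stderr = StringIO(), StringIO()
        try:
            self.client_remote.run(
                args=['sudo', 'mount', '-t', 'ceph',
                      self.device, self.hostfs_mntpt],
                stdout=stdout, stderr=stderr)
        except CommandFailedError as e:
            if not check_status:
                # negative tests inspect the failure instead of aborting
                return (e, stdout.getvalue(), stderr.getvalue())
            raise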
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This commit introduces the following two sets of changes.
First, make the client keyring path, the mountpoints on the host FS and on
CephFS, and CephFS's name attributes of the object representing the mount,
and update all the mount object creation calls accordingly. Also,
rewrite all the mount object creation calls to use keyword arguments instead
of positional arguments to avoid mistakes, especially since a new
argument was added in this commit.
Second, add a remount method to mount.py so that it's possible to unmount
safely, modify the attributes of the object representing the mount, and
mount again based on the new state of the object *in a single call*. The
method is placed in mount.py to avoid duplication.
This change leads to two more changes: upgrading the interfaces of
mount() and mount_wait(), and upgrading the testsuites to adapt to these
changes.
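A minimal sketch of the remount method's shape (attribute names illustrative):

    def remount(self, **kwargs):
        # unmount safely, update the mount object's state, and mount again,
        # all in a single call
        self.umount_wait()
        for attr, value in kwargs.items():
            setattr(self, attr, value)   # e.g. cephfs_name, client_keyring_path
        self.mount()
        self.wait_until_mounted()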
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Instead of 4 different GET requests, there is now one PUT request
that parses the desired action from the request body.
A description was added to document the usage.
* OSD flag endpoints were grouped with OSD endpoints
* A few mypy issues were also resolved
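A framework-agnostic sketch of the consolidation (the action names and helpers are illustrative, not the PR's controller code):

    def mon_command(cmd: str) -> None:
        print('would run: ceph', cmd)   # stand-in for the real mon call

    def osd_put(svc_id: str, body: dict) -> None:
        # one PUT dispatching on the body replaces several GET endpoints
        actions = {
            'out': lambda: mon_command('osd out %s' % svc_id),
            'in': lambda: mon_command('osd in %s' % svc_id),
            'down': lambda: mon_command('osd down %s' % svc_id),
            'lost': lambda: mon_command('osd lost %s' % svc_id),
        }
        action = body.get('action')
        if action not in actions:
            raise ValueError('invalid action: %r' % (action,))
        actions[action]()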
Fixes: https://tracker.ceph.com/issues/46181
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit f67d5aa8fa30859d15d314ef5fbb3d4cda87cd5b)
(cherry picked from commit dcc2ceb0a0ce30f67aee8d8096d6e1714d3e56e7)
(cherry picked from commit 5ee7f8c9b731fe249daeec6f114cf6e9c998af66)
* refs/pull/36546/head:
vstart_runner: log commands in a more usable form
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
No need to check for their existence, and prepare a replacement,
because we've migrated to Python 3, and we only support Python 3.6 and up.
Signed-off-by: Kefu Chai <kchai@redhat.com>
Currently commands are printed as ['arg1', 'arg2', 'arg3']. Instead, log
them as '> arg1 arg2 arg3' so that it's simpler to copy and run them
manually. The reason behind prepending '> ' to these log entries is
just to follow the practice of teuthology logs.
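A minimal sketch of the formatting change (helper name assumed):

    import shlex

    def log_command(log, args):
        # ['arg1', 'arg2', 'arg3'] -> "> arg1 arg2 arg3"
        log.info('> ' + ' '.join(shlex.quote(a) for a in args))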
Signed-off-by: Rishabh Dave <ridave@redhat.com>
CephFS endpoints are inconsistent with other endpoints.
* Change rm endpoints from GET to DELETE requests
* Change set_quotas from POST to PUT
* Rename endpoints accordingly
* Change related Angular files
* Update a few lines related to mypy
* Remove the future import as python2 compatibility is not needed
Fixes: https://tracker.ceph.com/issues/46160
Signed-off-by: Fabrizio D'Angelo <fdangelo@redhat.com>
(cherry picked from commit 222286b08bde3600d50a6deb25e7425c17e9e482)
This commit is intended as a code cleanup of the Histogram component.
Fixes: https://tracker.ceph.com/issues/26954
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
* refs/pull/36773/head:
mgr/volumes: Prevent subvolume recreate if trash is not-empty
mgr/volumes: Disallow subvolume group level snapshots
mgr/volumes: Add test case to ensure subvolume is marked
mgr/volumes: handle idempotent subvolume marks
mgr/volumes: Tests amended and added to ensure subvolume trash functionality
mgr/volumes: Mark subvolume root with the vxattr ceph.dir.subvolume
mgr/volumes: Move incarnations for v2 subvolumes, to subvolume trash
mgr/volumes: maintain per subvolume trash directory
mgr/volumes: make subvolume_v2::_is_retained() object property
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
with the vxattr ceph.dir.subvolume set to true.
Fixes: https://tracker.ceph.com/issues/47154
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Amended a few test cases to ensure created subvolumes and snaps
are removed and the trash stays empty at the end of the test.
Further added one test case for create errors in a retained
v2 subvolume, to ensure the metadata is sane and the created incarnation
is not present.
Fixes: https://tracker.ceph.com/issues/47154
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
* refs/pull/36684/head:
qa/tasks/nfs: Test mounting of export created with nfs command
qa/tasks/nfs: Add helper method to check nfs cluster status
qa/tasks/nfs: Cleanup created filesystem
qa/tasks/nfs: Remove unused port status function and 'stdin' keyword argument
Reviewed-by: Sebastian Wagner <swagner@suse.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* add a comment to _run_tests()
* use `os.path.commonpath()` instead of using string matching directly
for matching the given workunit spec with executables (see the sketch below)
* allow passing optional args to the workunit
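A minimal sketch of the `os.path.commonpath()` check (function name illustrative):

    import os.path

    def spec_matches(spec_dir: str, script: str) -> bool:
        # plain string matching would wrongly let 'fs/misc' match
        # 'fs/miscellaneous/foo.sh'; commonpath() compares path components
        return os.path.commonpath([spec_dir, script]) == spec_dir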
Signed-off-by: Kefu Chai <kchai@redhat.com>
As it is the shell that interprets ">>" and redirects stderr to the given
file, but the shell process is launched as ubuntu:ubuntu without using
sudo, the command fails with a "Permission denied" error. To address
this issue, in this change a file with the proper privileges is created
beforehand using `install`, so the shell is able to write to it.
Also, instead of creating this file in `maybe_redirect_stderr()`, that
function now returns the command to create the log file.
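A minimal sketch of the returned command (the function name, owner and mode are assumptions):

    def stderr_log_create_cmd(log_path, owner='ubuntu'):
        # run with sudo before the daemon starts; `install` creates an empty
        # file with the given owner and mode, so the non-root shell can later
        # append to it via '2>> log_path'
        return ['sudo', 'install', '-o', owner, '-g', owner,
                '-m', '644', '/dev/null', log_path]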
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/36131/head:
doc: document cephfs mirroring dev work
test: add tests for `ceph fs mirror` family of commands
mds: track filesystem mirror peers in fsmap
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This new method should allow better control over the process launched by
the passed command. This is achieved by accepting the arguments provided by
teuthology.orchestra.run.run().
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Allow only [A-Za-z0-9-_.] characters for FS, volume, subvolume and
subvolume group names and add test for the same.
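A minimal sketch of such a check (helper name illustrative):

    import re

    def validate_name(name: str) -> None:
        if not re.fullmatch(r'[A-Za-z0-9-_.]+', name):
            raise ValueError('invalid name: %r' % name)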
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This commit adds the file paths of the Ceph log directories to the job's
info.yaml log file. The motivation behind this is that, in case of a job
timeout, the logs would still be transferred to the teuthology host
before the test machines are nuked, using these Ceph log directory paths
in the job's info.yaml log file.
Signed-off-by: Shraddha Agrawal <shraddha.agrawal000@gmail.com>
The test for diskprediction_cloud is never enabled, and the cloud-based
service it used is not reachable anymore. Let's just remove the dead
code.
Signed-off-by: Kefu Chai <kchai@redhat.com>
If a subvolume's mode or uid/gid values are changed after a snapshot,
and a clone of a snapshot prior to the change is initiated, the clone
inherits the current source subvolume's attributes rather than the
snapshot's attributes.
Fix this by using the snapshot's subvolume root attributes to create
the clone subvolume's root.
The following attributes are picked from the source subvolume snapshot:
- uid, gid, mode, data pool, pool namespace, quota
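A minimal sketch of seeding the clone root from the snapshot (names illustrative):

    SNAP_ATTRS = ('uid', 'gid', 'mode', 'data_pool', 'pool_namespace', 'quota')

    def clone_root_attrs(snap_root_attrs: dict) -> dict:
        # copy from the snapshot, not from the live (possibly changed) subvolume
        return {attr: snap_root_attrs[attr] for attr in SNAP_ATTRS}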
Fixes: https://tracker.ceph.com/issues/46163
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Otherwise the files generated are not actually under the sub-directory!
This is correcting a confusing aspect of the test infrastructure but
doesn't actually require any changes to the tests.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
To make this easier to catch. It is still a RuntimeError so it should
not affect current tests by default.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
We should redirect stderr for the crimson flavor instead of the default
flavor. This change addresses a regression introduced by
da76f46461
Signed-off-by: Kefu Chai <kchai@redhat.com>
The API call is a task whose response status is determined by whether
the call completes within a pre-defined duration (2 seconds) or not.
We should also allow the status returned when the call takes longer.
Fixes: https://tracker.ceph.com/issues/46812
Signed-off-by: Kiefer Chang <kiefer.chang@suse.com>
* refs/pull/36155/head:
qa: Fix traceback during fs cleanup between tests
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/35944/head:
qa: defer cleaning the mountpoint's netnses and the bridge
qa/tasks/cephfs/mount.py: remove the stale netnses and bridge
qa/tasks/cephfs/mount.py: try to flush the stale ceph-brx dev info
qa/tasks/cephfs/mount.py: switch to run_shell_payload() helper
qa/tasks/cephfs/mount.py: clean up the unused code
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
mgr/cephadm: fix call to cephadm for daemon restarts etc
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Varsha Rao <varao@redhat.com>
* extract get_ragweed_branch() out of the download() task, for better
readability.
* use a loop to retry when the first clone fails (see the sketch below)
* drop the `raise ValueError()` clause as it never happens. we could use
an assert() here, but i don't think it is necessary anyway.
* use sh() instead of run() for better readability.
* always set ragweed_repo. before this change this variable was
unbound if `force-branch` is set.
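A minimal sketch of the retry loop (using subprocess here for illustration; the task itself uses sh()):

    import subprocess

    def clone_ragweed(url, branches, dest):
        # try each candidate branch in turn; the first success wins
        for branch in branches:
            try:
                subprocess.run(['git', 'clone', '-b', branch, url, dest],
                               check=True)
                return branch
            except subprocess.CalledProcessError:
                continue
        raise RuntimeError('unable to clone %s (tried %s)' % (url, branches))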
Fixes: https://tracker.ceph.com/issues/46771
Signed-off-by: Kefu Chai <kchai@redhat.com>
The 'mon_allow_pool_delete' option is set to 'True'
in the 'setUp' of 'TestVolumes' and is cleared in the
corresponding 'tearDown' function. Hence, any pool
deletion in parent classes such as 'CephFSTestCase'
would fail. This patch fixes that by setting the
'mon_allow_pool_delete' option in
'CephFSTestCase' itself.
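A minimal sketch of the setUp change (the exact helper used may differ):

    # inside CephFSTestCase.setUp(), after the parent setUp():
    self.fs.mon_manager.raw_cluster_cmd(
        'config', 'set', 'mon', 'mon_allow_pool_delete', 'true')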
Fixes: https://tracker.ceph.com/issues/46597
Signed-off-by: Kotresh HR <khiremat@redhat.com>
The netnses may be created/deleted many times over the whole set of test
cases; we can defer cleaning them until the last mountpoint is unmounted
or the test is exiting.
Fixes: https://tracker.ceph.com/issues/46282
Signed-off-by: Xiubo Li <xiubli@redhat.com>
We could have the following logging message:
tasks.ceph:Waiting for all PGs to be active+clean and split+merged, waiting on ['2.6', '2.5', '1.0', '2.4'] to go clean and/or [] to split/merge
if the cluster has non-active+clean PGs when the "ceph" task is about to
end. But this message is a little bit confusing in the sense that it
lists "[]" in it.
In this change, only the PGs being waited on are listed. Also, added some
cleanups:
* use "else" to check if the loop was terminated by a break
* remove "0" from the range() call
Signed-off-by: Kefu Chai <kchai@redhat.com>
If the previous test cases failed, the netnses and bridge will be
left behind. Remove them here when new test cases begin.
Fixes: https://tracker.ceph.com/issues/45806
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Once we have run the test cases and the ceph-brx bridge is set up,
its config is saved in "/etc/sysconfig/network-scripts/ifcfg-ceph-brx"
or somewhere else, and it is kept after the ceph-brx bridge is removed.
So the next time the ceph-brx bridge is created or added, the config is
read from that file, and when we configure the bridge again we get an
error like:
"RTNETLINK answers: File exists"
Here we need to flush the config before configuring the bridge.
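A minimal sketch of the flush (the remote-run helper and address handling are illustrative):

    def setup_brx_addr(remote, addr):
        # drop any address config left over from a previous run, then
        # reconfigure; without the flush, 'ip addr add' fails with
        # 'RTNETLINK answers: File exists'
        remote.run(args=['sudo', 'ip', 'addr', 'flush', 'dev', 'ceph-brx'])
        remote.run(args=['sudo', 'ip', 'addr', 'add', addr, 'dev', 'ceph-brx'])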
Fixes: https://tracker.ceph.com/issues/45817
Signed-off-by: Xiubo Li <xiubli@redhat.com>
The cluster needs more time to recover from
HEALTH_WARN while changes are made by `test_pool_update_metadata`.
Let's wait several times for the cluster status to be HEALTH_OK
again.
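A minimal sketch of the repeated wait (helper names illustrative):

    import time

    def wait_for_health_ok(get_health, tries=30, delay=10):
        for _ in range(tries):
            if get_health() == 'HEALTH_OK':
                return
            time.sleep(delay)
        raise RuntimeError('cluster did not return to HEALTH_OK')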
Fixes: https://tracker.ceph.com/issues/46573
Signed-off-by: Tatjana Dehler <tdehler@suse.com>
mgr/dashboard: increase API test coverage in API controllers
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
keystone's dependencies are installed using its tox.ini,
which in turn uses the constraints file at
https://releases.openstack.org/constraints/upper/ussuri,
which pins cliff to 3.1.0. cliff 3.1.0 is not able to fulfill the
requirement of osc-lib 2.2.0, as the latter needs cliff>=3.2.0. Per
https://releases.openstack.org/ussuri/, the latest osc-lib for
ussuri is 2.0.0, and osc-lib>=2.0.0 is required by
python-openstackclient 5.2.1, so let's use it instead of using the latest
one.
If we install cliff==3.1.0 along with python-openstackclient==5.2.1,
we get the following error, as the `CommandManager.add_command_group()`
method was only added to cliff in 3.2.0 (see
8477c4dbd0),
so cliff fails to work with the latest openstackclient:
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.402 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.403 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
2020-06-29T17:26:23.404 INFO:teuthology.orchestra.run.smithi039.stderr:Traceback (most recent call last):
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 134, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: ret_val = super(OpenStackShell, self).run(argv)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/cliff/app.py", line 264, in run
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: self.initialize_app(remainder)
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 133, in initialize_app
2020-06-29T17:26:23.405 INFO:teuthology.orchestra.run.smithi039.stderr: super(OpenStackShell, self).initialize_app(argv)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/osc_lib/shell.py", line 442, in initialize_app
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self._load_plugins()
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: File "/home/ubuntu/cephtest/keystone/.tox/venv/lib/python3.6/site-packages/openstackclient/shell.py", line 104, in _load_plugins
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr: self.command_manager.add_command_group(cmd_group)
2020-06-29T17:26:23.406 INFO:teuthology.orchestra.run.smithi039.stderr:AttributeError: 'CommandManager' object has no attribute 'add_command_group'
In this change the openstackclient version is pinned to the
latest stable, 5.2.1. A separate PR will bump up
the cliff version on the teuthology side.
Signed-off-by: Kefu Chai <kchai@redhat.com>
rpm,deb,qa,python-common,test: drop python2 support
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Sebastian Wagner <sebastian.wagner@suse.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>