Commit Graph

15 Commits

Rishabh Dave
485841b255 qa: import CommandFailedError from exceptions not run
Stop importing CommandFailedError from teuthology.orchestra.run; it is
actually defined in teuthology.exceptions.

Fixes: https://tracker.ceph.com/issues/51226
Signed-off-by: Rishabh Dave <ridave@redhat.com>
2021-10-05 23:41:09 +05:30
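
A minimal sketch of the corrected import; the surrounding helper is illustrative, not part of the diff:

    # Import the exception from the module that defines it ...
    from teuthology.exceptions import CommandFailedError

    # ... instead of from the module that merely re-exports it:
    # from teuthology.orchestra.run import CommandFailedError

    def ignore_failure(remote, args):
        # Hypothetical usage: tolerate a failing remote command in a test.
        try:
            remote.run(args=args)
        except CommandFailedError:
            pass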
Patrick Donnelly
0f505dc299
qa: update scrub start code to use comma sep scrubopts
The documentation specifies this in [1] and yet we were using (I
believe) an older syntax:

    ceph tell mds.foo:0 scrub start / recursive force

instead of

    ceph tell mds.foo:0 scrub start / recursive,force

Oddly, the former still works, at least as recently as [2]:

    2021-06-03T07:11:42.071 DEBUG:teuthology.orchestra.run.smithi025:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell mds.1:0 scrub start / recursive force
    ...
    2021-06-03T07:11:42.268 INFO:teuthology.orchestra.run.smithi025.stdout:{
    2021-06-03T07:11:42.268 INFO:teuthology.orchestra.run.smithi025.stdout:    "return_code": 0,
    2021-06-03T07:11:42.268 INFO:teuthology.orchestra.run.smithi025.stdout:    "scrub_tag": "cf7a74b2-3eb2-4657-9274-ea504b1ebf8f",
    2021-06-03T07:11:42.269 INFO:teuthology.orchestra.run.smithi025.stdout:    "mode": "asynchronous"
    2021-06-03T07:11:42.269 INFO:teuthology.orchestra.run.smithi025.stdout:}

[1] https://docs.ceph.com/en/latest/cephfs/scrub/
[2] /ceph/teuthology-archive/pdonnell-2021-06-03_03:40:33-fs-wip-pdonnell-testing-20210603.020013-distro-basic-smithi/6148097/teuthology.log

Fixes: https://tracker.ceph.com/issues/51146
See-also: https://tracker.ceph.com/issues/51145
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2021-06-09 07:23:05 -07:00
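
A sketch of the qa-side fix; the rank_tell plumbing is an assumption, only the comma-joined scrubopts token reflects what the commit describes:

    def start_scrub(rank_tell, path, scrubopts):
        # scrubopts must be a single comma-separated token, e.g.
        # "recursive,force", not space-separated words.
        return rank_tell(["scrub", "start", path, ",".join(scrubopts)])

    # e.g. start_scrub(fs.rank_tell, "/", ["recursive", "force"])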
Patrick Donnelly
5292e88201
qa: reduce dependence on teuthology role list for mds
It's not yet possible to completely remove the dependency on
mds_ids/mds_daemons in the CephFS tests, but this commit reduces it
enough for most code paths to work with cephadm.

The main change here is the use of CephManager.do_rados, with some
improvements to that helper.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2021-03-21 10:35:07 -07:00
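
The call shape below is illustrative only; the exact do_rados signature is an assumption, the point being that rados operations go through the cluster manager rather than a remote resolved from an MDS role:

    # Hypothetical usage; signature assumed, not taken from the actual API.
    manager.do_rados(["ls"], pool=fs.get_data_pool_name())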
Patrick Donnelly
0825d6aa9e
qa: simplify tests which stop MDS ranks
Instead of stopping and then individually failing MDS daemons, just
fail the ranks or the entire file system, where possible.

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2021-03-21 10:35:06 -07:00
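
A sketch of the simplification; the helper names are assumptions, while the wrapped CLI commands are real:

    # Before (roughly): stop each MDS daemon, then mark each one failed.
    # After: fail a rank, or the whole file system, directly.
    fs.rank_fail(rank=0)  # assumed wrapper around "ceph mds fail"
    fs.fail()             # assumed wrapper around "ceph fs fail <name>"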
Patrick Donnelly
eccf961697
qa: add Filesystem.reset helper
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2021-03-21 10:35:06 -07:00
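
A sketch of what such a reset helper would wrap; the method body is assumed, the CLI command and its confirmation flag are real:

    def reset(self):
        # "ceph fs reset" refuses to run without explicit confirmation.
        self.mon_manager.raw_cluster_cmd(
            "fs", "reset", str(self.name), "--yes-i-really-mean-it")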
Rishabh Dave
07e493ffb5 qa/cephfs: allow reusing mount objects and add remount method
This commit introduces the following two sets of changes -

First, make the client keyring path, the mountpoint on the host FS,
the mountpoint on CephFS, and the CephFS name attributes of the object
representing the mount, and update all mount object creation calls
accordingly. Also, rewrite all mount object creation calls to use
keyword arguments instead of positional arguments to avoid mistakes,
especially since a new argument was added in this commit.

Second, add a remount method to mount.py so that it is possible to
unmount safely, modify the attributes of the object representing the
mount, and mount again based on the new state of the object *in a
single call*. The method is placed in mount.py to avoid duplication.

This change leads to two more changes: upgrading the interface of
mount() and mount_wait(), and updating the test suites to adapt to
these changes.

Signed-off-by: Rishabh Dave <ridave@redhat.com>
2020-09-10 17:10:51 +05:30
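
A minimal sketch of the remount idea described above; the body is assumed, not the actual implementation:

    def remount(self, **kwargs):
        # Unmount safely, update the mount object's attributes to the new
        # desired state, then mount again based on that state -- all in a
        # single call.
        self.umount_wait()
        for attr, value in kwargs.items():
            setattr(self, attr, value)
        self.mount_wait()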
Xiubo Li
5c24d91327 qa/tasks/cephfs: add mount_wait() support to simplify the code
Mostly we should wait for the mountpoint to get ready; for FUSE
mountpoints especially, it may take a few seconds.

Fixes: https://tracker.ceph.com/issues/44044
Signed-off-by: Xiubo Li <xiubli@redhat.com>
2020-04-14 07:47:04 -04:00
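
A sketch of what mount_wait() adds over plain mount(); the body is assumed:

    def mount_wait(self, **kwargs):
        # Mount, then block until the mountpoint is actually usable; FUSE
        # mounts in particular can take a few seconds to appear.
        self.mount(**kwargs)
        self.wait_until_mounted()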
Thomas Bechtold
0127cd1e88 qa: Enable flake8 tox and fix failures
flake8 found a number of problems in the qa/ directory (most of them
are fixed now). Enabling flake8 during the usual check runs hopefully
avoids introducing new issues in the future.

Signed-off-by: Thomas Bechtold <tbechtold@suse.com>
2019-12-12 10:21:01 +01:00
Venky Shankar
1eb33745a8 test: switch using "scrub start" tell interface to initiate scrub
... and fix up the documentation too.

Signed-off-by: Venky Shankar <vshankar@redhat.com>
2019-01-02 08:51:00 -05:00
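
A sketch of the switch; the qa helper names are assumptions:

    # Before: admin-socket interface on a specific daemon (assumed helper):
    #   fs.mds_asok(["scrub_path", "/", "recursive"])
    # After: the "scrub start" tell interface addressed to a rank:
    fs.rank_tell(["scrub", "start", "/", "recursive"])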
James Page
20aa3ad17b Correct usage of collections.abc
Some classes should still be imported directly from collections;
only Iterable and Callable (in the context of the ceph codebase) are
found in collections.abc, while OrderedDict still comes from plain
collections.

The current code works due to the fallback support for Python 2.

Signed-off-by: James Page <james.page@ubuntu.com>
2018-11-29 09:48:43 +00:00
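
For reference, the correct split on Python 3 (standard-library fact, not taken from this diff):

    # The ABCs moved to collections.abc; concrete containers did not.
    from collections import OrderedDict, defaultdict
    from collections.abc import Callable, Iterable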
Ganesh Maharaj Mahalingam
625868974b fix python collections module warning for v3.7 and above
Python 3.7 now shows a warning like the one below.

/usr/bin/ceph:128: DeprecationWarning: Using or importing the ABCs from
'collections' instead of from 'collections.abc' is deprecated, and in
3.8 it will stop working
  import rados

This patch addresses that particular issue.

Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
2018-10-28 23:32:22 -07:00
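
A common compatibility pattern for code that still had to run on Python 2 at the time (a sketch, not necessarily the exact change):

    try:
        from collections.abc import Callable  # Python 3.3+
    except ImportError:
        from collections import Callable      # Python 2 fallback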
Venky Shankar
f65193d955 test: make rank argument mandatory when running journal_tool
Also, fix a number of quirky journal_tool invocations that pass the
"--rank" argument as a command argument rather than passing it as a
function argument.

Fixes: https://tracker.ceph.com/issues/24780
Signed-off-by: Venky Shankar <vshankar@redhat.com>
2018-09-21 06:09:39 -04:00
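
A sketch of the cleanup, assuming a journal_tool helper that takes the rank as a keyword argument:

    # Quirky: the rank smuggled into the command arguments.
    #   fs.journal_tool(["--rank", "cephfs:0", "journal", "inspect"])
    # Fixed: the rank passed as a (now mandatory) function argument.
    fs.journal_tool(["journal", "inspect"], rank=0)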
Patrick Donnelly
a2ff87d4e2
qa: remove dead code
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-07-13 18:15:03 -07:00
Patrick Donnelly
fa25d6c8d1
qa: run asok command on correct machine
The MDS may not be on the same machine where the cluster command is run.

Fixes: http://tracker.ceph.com/issues/24858

Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
2018-07-13 16:07:30 -07:00
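
A sketch of the fix, assuming a find_remote-style lookup; the point is to execute the admin-socket command on the node that hosts the MDS:

    # Run "ceph daemon mds.<id> ..." on the remote that actually hosts
    # that MDS, not on whichever node drives the test.
    remote = manager.find_remote("mds", mds_id)
    remote.run(args=["sudo", "ceph", "daemon", "mds.{}".format(mds_id),
                     "session", "ls"])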
Douglas Fuller
6af2ae80d3 qa/cephfs: test CephFS recovery pools
Test recovering metadata into a separate RADOS pool with
cephfs_data_scan and friends.

Signed-off-by: Douglas Fuller <dfuller@redhat.com>
2017-08-30 09:02:44 -04:00
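
Roughly the flow such a test exercises; the cephfs-data-scan phases are real subcommands, but the exact flags varied over time and the run() helper stands in for whatever remote-execution wrapper the test uses:

    # Rebuild metadata into a separate (recovery) metadata pool.
    run(["cephfs-data-scan", "init", "--force-init",
         "--alternate-pool", "recovery"])
    run(["cephfs-data-scan", "scan_extents", "--alternate-pool", "recovery",
         "cephfs_data"])
    run(["cephfs-data-scan", "scan_inodes", "--alternate-pool", "recovery",
         "cephfs_data"])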