instead of get_ceph_cmd_stdout().
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit c7c38ba558)
Conflicts:
qa/tasks/cephfs/test_mirroring.py
- Commit e4dd0e41a3 was not present on main when this commit was
originally written, but it is now present on main as well as on
Reef, which leads to a conflict.
- The line located right before one of the patches in this commit
was modified in the latest Reef branch, thus creating a conflict
when the PR branch was rebased on the latest Reef.
Add method get_ceph_cmd_stdout() to class CephFSTestCase so that one
doesn't have to type something as long as
"self.mds_cluster.mon_manager.raw_cluster_cmd()" to execute a
command and get its output. Also delete CephFSTestCase.run_cluster_cmd()
and replace its uses with the new method.
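For illustration, a minimal sketch of the wrapper being described; the
actual method on CephFSTestCase may differ:

    # Illustrative only: forward to the existing mon_manager helper so
    # call sites don't have to spell out the full chain.
    def get_ceph_cmd_stdout(self, *args, **kwargs):
        return self.mds_cluster.mon_manager.raw_cluster_cmd(*args,
                                                            **kwargs)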
Signed-off-by: Rishabh Dave <ridave@redhat.com>
(cherry picked from commit 13168834e3)
Conflicts:
qa/tasks/cephfs/caps_helper.py
- This file is very different in Reef.
qa/tasks/cephfs/test_mirroring.py
- Commit e4dd0e41a3 was not present on main when this commit was
originally written, but it is now present on main as well as on
Reef, which leads to a conflict.
- On the Reef branch, the line before that patch in this commit was
modified, thus creating a conflict when the PR branch for this
commit series was rebased on the latest Reef.
The code has been changed: in order to scrub ~mdsdir at the root, the
recursive flag now also needs to be provided along with scrub_mdsdir.
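As a hedged sketch of the resulting usage inside a CephFSTestCase (the
exact command form is an assumption based on the MDS scrub interface):

    # Assumption: scrubbing ~mdsdir from the root needs both flags.
    out = self.get_ceph_cmd_stdout(
        'tell', 'mds.{0}:0'.format(self.fs.name),
        'scrub', 'start', '/', 'recursive,scrub_mdsdir')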
Fixes: https://tracker.ceph.com/issues/59350
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit e40ca408a1)
- test_stray_evaluation_with_scrub
this ensures that evaluating strays with scrub works fine and that
no crash occurs.
- test_flag_scrub_mdsdir
tests the new flag to scrub ~mdsdir at the CephFS root
Fixes: https://tracker.ceph.com/issues/51824
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
(cherry picked from commit 632c8b04cc)
Wait for scrub to finish during test_scrub_pause_and_resume_with_abort,
which otherwise races and fails with an incorrect assertion.
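One hedged way to wait for the scrub to drain before asserting; the
polling loop and the status string are assumptions, not the verbatim
fix:

    import json
    import time

    def wait_for_scrub_complete(self, timeout=120):
        # Poll 'scrub status' until the MDS reports no active scrubs.
        end = time.time() + timeout
        while time.time() < end:
            out = self.get_ceph_cmd_stdout(
                'tell', 'mds.{0}:0'.format(self.fs.name),
                'scrub', 'status')
            if 'no active scrubs' in json.loads(out)['status']:
                return
            time.sleep(2)
        raise RuntimeError('scrub did not finish in %ds' % timeout)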
Fixes: https://tracker.ceph.com/issues/48812
Signed-off-by: Milind Changire <mchangir@redhat.com>
It's not yet possible to completely remove the dependency on
mds_ids/mds_daemons in the CephFS tests but this commit reduces it
enough for most code paths to work with cephadm.
The main change here is the use of CephManager.do_rados, with some
improvements.
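A hedged sketch of the kind of call site this enables; the keyword
arguments are assumptions about the improved interface, not a verbatim
signature:

    # 'manager' is a CephManager instance; passing the pool as a
    # keyword is an assumption about the improvements mentioned above.
    manager.do_rados(['put', 'test_obj', '/etc/hosts'],
                     pool='cephfs_data')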
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
We should wait for the mountpoint to become ready, especially for a
FUSE mountpoint, which may sometimes take a few seconds.
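A minimal polling sketch; the helper name and timeout are assumptions,
not the exact qa change:

    import os
    import time

    def wait_for_mount(mountpoint, timeout=30):
        # A FUSE mount can take a few seconds to appear; poll until
        # the kernel reports the path as a mountpoint.
        end = time.time() + timeout
        while time.time() < end:
            if os.path.ismount(mountpoint):
                return
            time.sleep(1)
        raise RuntimeError('%s did not become ready' % mountpoint)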
Fixes: https://tracker.ceph.com/issues/44044
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Normal ceph services can send task status updates to the manager.
Task status is tracked in the service map, implying that normal
ceph services have entries in the service map and in the daemon
tracking index (daemon state). But the manager prunes entries from
daemon state when it receives an updated map (fs, mon, etc.). This
causes periodic pruning of service map entries to fail for normal
ceph services (those which send task status updates), since it
expects a corresponding entry in daemon state.
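A hedged sketch of the failure mode described above; every name here
is hypothetical, not the actual mgr code:

    def prune_service_map(service_map, daemon_state):
        for daemon in list(service_map):
            # Pruning assumes a matching daemon-state entry, but the
            # manager may already have dropped it on receiving an
            # updated map, so the lookup fails for normal services
            # that only send task status updates.
            entry = daemon_state[daemon]  # KeyError for such daemons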
Signed-off-by: Venky Shankar <vshankar@redhat.com>
To be able to catch problems with python2 *and* python3, run flake8
with both versions. From the flake8 homepage:
It is very important to install Flake8 on the correct version of
Python for your needs. If you want Flake8 to properly parse new
language features in Python 3.5 (for example), you need it to be
installed on 3.5 for Flake8 to understand those features. In many
ways, Flake8 is tied to the version of Python on which it runs.
Also fix the problems with python3 along the way.
Note: this now requires the six module for teuthology, but six is
already an install_require in teuthology itself.
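The idea in a hedged Python sketch (the paths and exact CI wiring are
assumptions):

    import subprocess

    # Run flake8 under each interpreter so that version-specific
    # syntax is parsed by the matching Python.
    for python in ('python2', 'python3'):
        subprocess.check_call([python, '-m', 'flake8', 'qa/'])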
Signed-off-by: Thomas Bechtold <tbechtold@suse.com>
There were a couple of problems found by flake8 in the qa/ directory
(most of them are fixed now). Enabling flake8 during the usual check
runs hopefully avoids introducing new issues in the future.
Signed-off-by: Thomas Bechtold <tbechtold@suse.com>