* refs/pull/49460/head:
qa: fix issue with fn unable to fetch port and ip
qa: fix helper function _check_nfs_cluster_status()
qa: fix testcase 'test_cluster_set_user_config_with_non_existing_clusterid'
qa: fix cluster creation failure in test_nfs.py
qa: test export creation at filepath and symlink
qa: added test case test_nfs_export_with_invalid_path
mgr/nfs: disallow non-existent paths when creating export
mgr/nfs/tests: mock check_cephfs_path
mgr/nfs/utils: add helper func to check cephfs path
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
cephadm: mount host /etc/hosts for daemon containers in podman deployments
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
Today's scheduled run failed because the newest build of main
had failed. Adding `-n 10` to the command makes the run start
at the newest build and backtrack through up to 10 older builds
if necessary.
A higher number than that is not needed: if the suite still
fails to run, that signals to us that more than the last
10 main builds are broken in Shaman.
Signed-off-by: Laura Flores <lflores@redhat.com>
_get_port_ip_info() fails to fetch the port and ip because the 'backend' key is empty:
2023-02-24T20:49:09.084 DEBUG:teuthology.orchestra.run.smithi042:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info test
2023-02-24T20:49:09.471 INFO:teuthology.orchestra.run.smithi042.stdout:{
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "test": {
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "backend": [],
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "virtual_ip": null
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: }
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout:}
it then raises:
2023-02-24T20:49:10.323 INFO:tasks.cephfs_test_runner: info_output = json.loads(self._nfs_cmd('cluster', 'info', self.cluster_id))['test']['backend'][0]
2023-02-24T20:49:10.323 INFO:tasks.cephfs_test_runner:IndexError: list index out of range
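As a rough illustration only (not the exact fix), the helper could poll
'nfs cluster info' until the 'backend' list is populated instead of
indexing it right away; the retry counts and backend entry keys below are
assumptions:

    import json
    import time

    def _get_port_ip_info(self):
        # poll 'nfs cluster info' until the 'backend' list is populated
        # instead of indexing an empty list straight away
        for _ in range(12):
            info = json.loads(
                self._nfs_cmd('cluster', 'info', self.cluster_id))
            backend = info[self.cluster_id]['backend']
            if backend:
                return backend[0]['port'], backend[0]['ip']
            time.sleep(5)
        self.fail("'backend' was never populated in 'nfs cluster info' output")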
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
The comment in the code says to wait for two minutes because cluster
creation takes time, but the test was actually waiting for thirteen
minutes. Waiting that long is not required; a minute is more than
enough. Also switched to using safe_while().
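A minimal sketch of what the safe_while() based wait could look like,
polling every 5 seconds for up to a minute (the condition checked here,
'nfs cluster ls' listing the cluster, is an assumption; the real test may
check something else):

    from teuthology.contextutil import safe_while

    # poll every 5 seconds, at most 12 times (~1 minute) instead of a fixed
    # multi-minute sleep; safe_while raises MaxWhileTries if we never break
    with safe_while(sleep=5, tries=12) as proceed:
        while proceed():
            if self.cluster_id in self._nfs_cmd('cluster', 'ls'):
                break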
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
Also adds a function _nfs_complete_cmd() that returns the process object so
that stdout/stderr can be used for evaluation (_nfs_cmd() uses
raw_cluster_cmd(), which returns just stdout, making it difficult to track
cluster creation errors in _test_create_cluster()).
It takes some time for the cluster data to be updated, so running the
command set (check nfs server status -> nfs cluster create test -> check
cluster status) in a loop (at most six iterations with a sleep of 5 secs
between each) fixes the issue.
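A rough sketch of that retry loop; the helper name comes from above, but
the proc attributes used here are assumptions for illustration:

    import time

    # run the command set up to six times, sleeping 5 seconds between
    # attempts; _nfs_complete_cmd() returns the proc, so stderr can be
    # inspected directly
    for _ in range(6):
        proc = self._nfs_complete_cmd('cluster', 'create', self.cluster_id)
        if not proc.stderr.getvalue():
            # no errors from creation; the real test re-checks the nfs
            # server and cluster status here before breaking out
            break
        time.sleep(5)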
Fixes: https://tracker.ceph.com/issues/58744
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
* refs/pull/47649/head:
mds: adjust MDSRank::command_tag_path invocation of enqueue_scrub()
doc/scrub: documented stray evaluation using recursive scrub
qa: added testcases
mds: make `scrub status` print flag `scrub_mdsdir`
mds: add scrub_mdsdir to ScrubHeader
mds: do not dump multiple JSON obj
mds: evaluate strays while performing scrub on root path
mds: remove inode from scrub_stack if being purged
mds: do not scrub inode if it is purging
Reviewed-by: Venky Shankar <vshankar@redhat.com>
* refs/pull/50053/head:
libcephfs: move ClearSetuid to suidsgid.cc
libcephfs: add test cases for dropping the suid/sgid in write/truncate
libcephfs: add test cases for dropping the suid/sgid in fallocate
libcephfs: fix ClearSetuid incorrectly using SETATTR_MODE mask
client: switch to clear_suid_sgid for ftruncate
client: switch to clear_suid_sgid for _write()
mds/client: clear the suid/sgid in fallocate path
client: allow unprivileged users to clear suid/sgid
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
* refs/pull/49773/head:
mds: add config to decide whether to mark dentry bad
qa: add missing scan_links step for data scan recovery
qa/tasks/cephfs: test damage to dentry's first is caught
qa/tasks/cephfs: use rank_asok and allow specifying rank
qa/tasks: allow specifying timeout command prefix to ceph
mds: provide test configs for creating first corruption
mds: catch damage to dentry's first field
mds: add debugging for pre_cow_old_inode
mds: cleanup code
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
This helper instantiates CephfsClient. The instantiation was initially
planned in the ExportMgr class in export.py, but a make check failure,
in which the main python thread hit a deadlock, was eventually traced to
instantiating CephfsClient in ExportMgr. To achieve singleton behavior
instead, a function was added inside this helper that restricts
instantiation using functools' lru_cache.
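A minimal sketch of that shape (the signature and error handling are
assumptions; mgr_util's CephfsClient and open_filesystem are used for
illustration):

    import functools

    from mgr_util import CephfsClient, open_filesystem

    def check_cephfs_path(mgr, fs_name, path):
        # the lru_cache'd inner factory restricts CephfsClient instantiation,
        # keeping it out of ExportMgr where instantiating it deadlocked the
        # main python thread during make check
        @functools.lru_cache(maxsize=1)
        def _get_cephfs_client():
            return CephfsClient(mgr)

        # raises if the path cannot be stat'ed, i.e. it does not exist
        with open_filesystem(_get_cephfs_client(), fs_name) as fs_handle:
            fs_handle.stat(path)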
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
When possible, abort the MDS before the damage can be written to the
journal/directory.
This is part of a series to address corruption first observed in [1].
How the corruption is introduced is yet unknown.
[1] https://tracker.ceph.com/issues/38452#note-10
Fixes: http://tracker.ceph.com/issues/58482
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>