Add `# type: ignore` comments to two dashboard functions that attempt
to set manager properties. There appear to be two approaches to fixing
the problem:
1. The _MgrProxy object that the dashboard uses has a __getattr__ method
for pulling values from the underlying mgr object. It does not have a
__setattr__ method. This means that setting values on _MgrProxy does
not propagate down to the original mgr.
mypy detects that the object doesn't have __setattr__ and complains.
One could add a __setattr__ to the proxy type to satisfy mypy (see
the sketch below).
2. We can just suppress the type check with the comment.
Because I have no idea why the _MgrProxy exists or why it's implemented
the way it is, I feel that option 2 is simpler. It is easy enough to go
back later and clean up the comments, rather than investing a lot of
time now to understand the dashboard's approach just to bump up the
version of mypy.
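For reference, approach 1 would look roughly like this (a minimal
sketch with illustrative names, not the dashboard's actual code):

    class _MgrProxy:
        def __init__(self, mgr):
            # Bypass our own __setattr__ while storing the wrapped object.
            object.__setattr__(self, '_mgr', mgr)

        def __getattr__(self, name):
            # Only called when normal lookup fails: read from the real mgr.
            return getattr(self._mgr, name)

        def __setattr__(self, name, value):
            # Propagate writes down to the original mgr object.
            setattr(self._mgr, name, value)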
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Add a `# type: ignore` comment to the dashboard's exception handling
module, just like the instance two lines below. This module does not
already import typing, so I'm not going to add it.
This change is needed in order to run mypy 0.981.
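For illustration, the suppression is just a trailing comment on the
offending assignment (the line below is a stand-in, not the module's
actual code):

    e.errno = errno_value  # type: ignore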
Signed-off-by: John Mulligan <jmulligan@redhat.com>
cephadm: mount host /etc/hosts for daemon containers in podman deployments
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
Today's scheduled run failed since the newest build of main had
failed. Adding `-n 10` to the command makes it start at the newest
build and backtrack through up to 10 older builds if necessary.
A higher number than that is not necessary: if more than the last
10 main builds are broken in Shaman, the suite failing to run will
signal that to us.
Signed-off-by: Laura Flores <lflores@redhat.com>
_get_port_ip_info() fails to fetch the port and IP due to an empty 'backend' key:
2023-02-24T20:49:09.084 DEBUG:teuthology.orchestra.run.smithi042:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info test
2023-02-24T20:49:09.471 INFO:teuthology.orchestra.run.smithi042.stdout:{
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "test": {
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "backend": [],
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: "virtual_ip": null
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout: }
2023-02-24T20:49:09.472 INFO:teuthology.orchestra.run.smithi042.stdout:}
It then raises:
2023-02-24T20:49:10.323 INFO:tasks.cephfs_test_runner: info_output = json.loads(self._nfs_cmd('cluster', 'info', self.cluster_id))['test']['backend'][0]
2023-02-24T20:49:10.323 INFO:tasks.cephfs_test_runner:IndexError: list index out of range
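The failure boils down to indexing an empty list. A minimal sketch of
the defensive pattern, with illustrative names (the actual fix lives
in the qa task code):

    import json

    def backend_of(cluster_info_json, cluster_id):
        info = json.loads(cluster_info_json)[cluster_id]
        backend = info['backend']
        if not backend:
            # Cluster data not populated yet; let the caller retry
            # instead of hitting IndexError on backend[0].
            return None
        return backend[0]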
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
The comment in the code says to wait for two minutes since cluster
creation takes time, but it actually waits for thirteen minutes. It's
not required to wait that long; I think a minute here is more than
enough. Also switched to using safe_while().
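For reference, safe_while() from teuthology.contextutil bounds the
wait; a rough sketch (cluster_created() is a stand-in for the actual
check, and sleep=5/tries=12 gives roughly the minute mentioned above):

    from teuthology.contextutil import safe_while

    with safe_while(sleep=5, tries=12) as proceed:
        while proceed():
            if cluster_created():
                break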
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
Also adds a function _nfs_complete_cmd() that returns the process
object so that stdout/stderr can be used for evaluation (_nfs_cmd()
uses raw_cluster_cmd(), which returns just stdout, and it became
difficult to diagnose cluster creation errors in
_test_create_cluster()).
It takes some time for the cluster data to update, therefore running
the command set (check nfs server status -> nfs cluster create test ->
check cluster status) in a loop (a maximum of six iterations with a
sleep of 5 secs at each iteration) fixes the issue.
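A rough sketch of that loop (the helper names follow this commit's
description; the exact body is an assumption):

    import time

    def _test_create_cluster(self, tries=6, sleep=5):
        for _ in range(tries):
            self._check_nfs_server_status()
            proc = self._nfs_complete_cmd('cluster', 'create', self.cluster_id)
            # proc is the process object, so stderr is available here,
            # unlike _nfs_cmd()/raw_cluster_cmd() which return only stdout.
            if proc.exitstatus == 0 and self._cluster_status_ok():
                return
            time.sleep(sleep)  # cluster data takes a few seconds to update
        raise RuntimeError('nfs cluster did not come up')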
Fixes: https://tracker.ceph.com/issues/58744
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
* refs/pull/47649/head:
mds: adjust MDSRank::command_tag_path invocation of enqueue_scrub()
doc/scrub: documented stray evaluation using recursive scrub
qa: added testcases
mds: make `scrub status` print flag `scrub_mdsdir`
mds: add scrub_mdsdir to ScrubHeader
mds: do not dump multiple JSON obj
mds: evaluate strays while performing scrub on root path
mds: remove inode from scrub_stack if being purged
mds: do not scrub inode if it is purging
Reviewed-by: Venky Shankar <vshankar@redhat.com>
* refs/pull/50053/head:
libcephfs: move ClearSetuid to suidsgid.cc
libcephfs: add test cases for dropping the suid/sgid in write/truncate
libcephfs: add test cases for dropping the suid/sgid in fallocate
libcephfs: fix ClearSetuid incorrectly using SETATTR_MODE mask
client: switch to clear_suid_sgid for ftruncate
client: switch to clear_suid_sgid for _write()
mds/client: clear the suid/sgid in fallocate path
client: allow unprivileged users to clear suid/sgid
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Milind Changire <mchangir@redhat.com>
Multipart uploads are missing encryption when a bucket encryption
policy is set. Fix this by fetching the bucket encryption policy and
resolving the defaults at the multipart init op.
Fixes: https://tracker.ceph.com/issues/59218
Signed-off-by: Tongliang Deng <dengtongliang@gmail.com>
* refs/pull/49773/head:
mds: add config to decide whether to mark dentry bad
qa: add missing scan_links step for data scan recovery
qa/tasks/cephfs: test damage to dentry's first is caught
qa/tasks/cephfs: use rank_asok and allow specifying rank
qa/tasks: allow specifying timeout command prefix to ceph
mds: provide test configs for creating first corruption
mds: catch damage to dentry's first field
mds: add debugging for pre_cow_old_inode
mds: cleanup code
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
This helper instantiates CephfsClient. That instantiation was
initially planned for the ExportMgr class in export.py, but a make
check failure, in which the main python thread experienced a deadlock,
was after several debugging efforts traced to the instantiation of
CephfsClient in ExportMgr. To achieve singleton behavior, it was
decided to add a function inside this helper that restricts
instantiation using functools' lru_cache.
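A minimal sketch of the pattern (assuming CephfsClient from mgr_util;
the function name is illustrative):

    from functools import lru_cache

    from mgr_util import CephfsClient

    @lru_cache(maxsize=None)
    def cephfs_client(mgr):
        # The first call creates the CephfsClient; every later call with
        # the same mgr returns the cached instance (singleton behavior).
        return CephfsClient(mgr)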
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
When possible, abort the MDS before the corruption can be written to
the journal/directory.
This is part of a series to address corruption first observed in [1].
How the corruption is introduced is yet unknown.
[1] https://tracker.ceph.com/issues/38452#note-10
Fixes: http://tracker.ceph.com/issues/58482
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>