The delays were applied everywhere and needlessly interfered with test
commands sent to the mons via the ceph admin command. Furthermore, the
delays did not affect the kernel client. Now the delays are applied by
the MDS to clients.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Filter out inherited snapshots, i.e. those that result from a snapshot
taken at an ancestor level, when listing the snapshots of a subvolume
or subvolume group.
Also, fail the snapshot info command on an inherited snapshot.
Fixes: https://tracker.ceph.com/issues/48501
Signed-off-by: Kotresh HR <khiremat@redhat.com>
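A minimal sketch of the filtering idea, assuming the CephFS convention
that a snapshot taken at an ancestor directory shows up in a descendant's
.snap directory under an underscore-prefixed long name; the helper and
paths below are illustrative, not the plugin's actual code.

    import os

    def list_own_snapshots(snap_dir):
        # Snapshots created on this directory keep their plain names;
        # snapshots inherited from an ancestor appear with a
        # "_<name>_<ino>" style long name (assumption based on CephFS
        # .snap semantics), so drop those when listing a subvolume's
        # or group's snapshots.
        return [name for name in os.listdir(snap_dir)
                if not name.startswith('_')]

    # "snapshot info" on a name outside this filtered list (i.e. an
    # inherited snapshot) is now expected to fail rather than report it.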
* refs/pull/38640/head:
qa: add test for reserved keyword feature
qa: use no client for required client feature tests
mds: do not allow setting a reserved feature by name
mds: return sv for efficiency
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
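A hedged illustration of the new guard, not the actual qa test: adding a
reserved feature by name to a filesystem's required client features should
now be rejected. The filesystem and feature names are illustrative.

    import subprocess

    def add_required_feature(fs_name, feature):
        # Wraps "ceph fs required_client_features <fs> add <feature>".
        return subprocess.run(
            ["ceph", "fs", "required_client_features", fs_name,
             "add", feature],
            capture_output=True, text=True,
        )

    # A reserved placeholder name is refused instead of being set.
    result = add_required_feature("cephfs", "reserved")
    assert result.returncode != 0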
* refs/pull/38108/head:
doc, man: man page for `cephfs-top` utility
doc: document `cephfs-top` utility
test: selftest for `cephfs-top` utility
spec, deb: package cephfs-top utility
cephfs-top: top(1) like utility for Ceph Filesystem
mgr/stats: include kernel version (for kclients) in `perf stats` command output
mgr/stats: include version with `perf stats` output
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
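A sketch of how the extra metadata could be consumed; only the presence
of a version field and, for kernel clients, a kernel version is implied
by the commits above, so the exact JSON key names used here are
assumptions.

    import json
    import subprocess

    # "ceph fs perf stats" is the mgr/stats query that cephfs-top
    # builds on.
    out = subprocess.run(["ceph", "fs", "perf", "stats"],
                         capture_output=True, text=True,
                         check=True).stdout
    stats = json.loads(out)

    print("stats format version:", stats.get("version"))
    for client, meta in stats.get("client_metadata", {}).items():
        # kernel_version is expected only for kernel clients.
        print(client, meta.get("kernel_version", "n/a"))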
* refs/pull/38664/head:
qa: bump scrub timeout
qa: move cephfs_ec_profile under cephfs
qa: do not use ec pools for default data pool
qa: skip check-counters for light workloads
qa: remove empty multimds suite
qa: merge multimds:verify with fs:verify
qa: merge multimds:thrash to fs:thrash
qa: remove dead multimds:basic
qa: move functional multimds tests to fs:functional
qa: migrate multimds workloads to fs:workloads
qa: only run valgrind on cephfs daemons
qa: stop testing filestore on cephfs suites
qa: load data pools before deleting fs
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Tested-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/38693/head:
qa: execute scrubs only on rank 0
client: print debug information about resolved MDS
qa: let Client::resolve_mds lookup the rank
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The orchestrator defines a service name as "<service_type>.<service_id>". The
ganesha common config object name in the orchestrator is "conf-<service_name>".
The volumes nfs plugin deploys nfs-ganesha clusters with 'ganesha' prefixed to
the cluster id and the common config object, which can cause unnecessary issues
in Rook and cephadm. So let's remove the prefix.
Fixes: https://tracker.ceph.com/issues/48514
Signed-off-by: Varsha Rao <varao@redhat.com>
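A rough before/after sketch of the naming with a hypothetical cluster id,
just to show how dropping the prefix lines the plugin up with the
orchestrator's "<service_type>.<service_id>" convention.

    cluster_id = "mynfs"   # hypothetical id passed to the nfs plugin

    # Before: the volumes nfs plugin prefixed the id, so plugin-side
    # and orchestrator-side names diverged.
    old_service_name = f"nfs.ganesha-{cluster_id}"
    old_config_obj = f"conf-{old_service_name}"   # conf-nfs.ganesha-mynfs

    # After: the plain cluster id is used everywhere.
    service_name = f"nfs.{cluster_id}"            # <service_type>.<service_id>
    config_obj = f"conf-{service_name}"           # conf-nfs.mynfs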
The default log level was recently changed to INFO, but there was no way
to restore visibility of DEBUG messages.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
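A generic sketch of the mechanism being restored, not the actual patch:
an opt-in switch that drops the logger threshold back to DEBUG now that
the default is INFO. The flag name is illustrative.

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument("--debug", action="store_true")  # hypothetical flag
    args = parser.parse_args()

    logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO)
    logging.debug("visible only when --debug is given")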
rados/cephadm/smoke* does not use the install task, which is what provides
adjust-ulimits. create_rbd_pool needs adjust-ulimits, so for now we disable
create_rbd_pool by default and only set it to true for the upgrade suite.
Signed-off-by: Neha Ojha <nojha@redhat.com>
Otherwise it always looks at the default data pool. For ec pools, this
may not be where the file data is.
Fixes: https://tracker.ceph.com/issues/48756
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
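One way to check which pool actually holds a file's data, which matters
when the file layout points at a non-default (e.g. EC) data pool; the
mount point and path are illustrative, the virtual xattr name is standard
CephFS.

    import os

    path = "/mnt/cephfs/dir/file"   # illustrative CephFS mount and path

    # The layout's pool, not the filesystem's default data pool, is
    # where this file's objects live.
    pool = os.getxattr(path, "ceph.file.layout.pool").decode()
    print("file data lives in pool:", pool)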
The ceph task already does this, and a number of tests rely on this pool
already being present. It can be disabled by setting create_rbd_pool to False.
Signed-off-by: Neha Ojha <nojha@redhat.com>
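A hedged sketch of how the knob gates pool creation; the helper name and
exact command are stand-ins for whatever the ceph task really runs, only
the create_rbd_pool flag itself comes from the commits above.

    # Illustrative only, not the real teuthology task code.
    def maybe_create_rbd_pool(remote, config):
        if not config.get('create_rbd_pool', True):
            # e.g. rados/cephadm/smoke* turns this off because the
            # adjust-ulimits wrapper from the install task is missing.
            return
        remote.run(args=['sudo', 'ceph', 'osd', 'pool', 'create',
                         'rbd', '8'])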