If the given Ceph FS, or the default Ceph FS when no Ceph FS is given,
is absent, abort the execution with an AssertionError and an error
message.
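A minimal sketch of such a check, assuming the file system list comes from
parsed "ceph fs ls --format=json" output (helper and variable names are
illustrative, not the actual qa code):
    def assert_fs_present(fs_ls_json, fs_name="cephfs"):
        # fs_ls_json: parsed JSON output of "ceph fs ls --format=json"
        names = [fs["name"] for fs in fs_ls_json]
        assert fs_name in names, (
            "file system '%s' is absent on this cluster" % fs_name)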
Signed-off-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/39138/head:
qa: valgrind test for cephfs-mirror daemon
cephfs-mirror: use preforker for daemonizing
test: adjust sleep time to account for valgrind runs
cephfs-mirror: gracefully shutdown threads, timers, etc..
cephfs-mirror: call ceph_release() to cleanup mount alloc
cephfs-mirror: shutdown filesystem/cluster connections on shutdown
cephfs-mirror: set init failed flag on FSMirror::init() failure
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
With ceph_volume_client and mgr/volumes co-existing
for some time, the versions of both need to be the same.
ceph_volume_client versions <= 5 can't decode the
'subvolumes' key in the auth-metadata file. Hence, to
handle the version incompatibility, the version of
ceph_volume_client is bumped up to 6 and the same
needs to be done in mgr/volumes' AuthMetadataManager.
Fixes: https://tracker.ceph.com/issues/49374
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Recovering a dirty auth metadata file might not retain the order, so
fix the comparison in 'test_recover_auth_metadata_during_authorize'
and 'test_recover_auth_metadata_during_deauthorize'.
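Roughly, the comparison becomes order-insensitive by comparing the parsed
structures instead of the raw file contents (a sketch assuming the auth
metadata is JSON; not the exact test code):
    import json

    def auth_metadata_equal(expected_str, recovered_str):
        # A recovered (dirty) auth metadata file may serialize its keys in a
        # different order, so compare the decoded objects, not the strings.
        return json.loads(expected_str) == json.loads(recovered_str)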
Fixes: https://tracker.ceph.com/issues/49192
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/39147/head:
qa/tasks/ceph_fuse: do not createfs
qa/tasks/cephfs/fuse_mount: pass admin_socket path
qa/suites/fs/cephadm/multivolume: add basic multivolume test
mgr/mds_autoscaler: some fixes and cleanup
mgr/volumes: deploy MDSs when creating fs
Reviewed-by: Milind Changire <mchangir@redhat.com>
For ceph.py this comes from ceph.conf.template, but it's not there
for cephadm. Instead of inflicting this on the config inside the
container, just pass it to the ceph-fuse incantation here.
Signed-off-by: Sage Weil <sage@newdream.net>
mgr/volumes: Evict clients based on auth-IDs and subvolume path
Reviewed-by: Victoria Martinez de la Cruz <victoria@redhat.com>
Reviewed-by: Ramana Raja <rraja@redhat.com>
Add a subvolume evict command which evicts the subvolume mounts
that were mounted using a particular auth ID.
Fixes: https://tracker.ceph.com/issues/44928
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Also filter out client IDs starting with "mirror" when
cleaning leftover auth IDs, since teuthology would be
configured to create the client.mirror and client.mirror_remote
clients before executing the mirroring tests.
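Schematically, the cleanup filter is just (illustrative):
    def leftover_auth_ids(auth_ids):
        # skip the auth IDs reserved for mirroring; teuthology pre-creates
        # client.mirror and client.mirror_remote for the mirroring tests
        return [auth_id for auth_id in auth_ids
                if not auth_id.startswith("mirror")]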
Signed-off-by: Venky Shankar <vshankar@redhat.com>
Filter out inherited snapshots that result from a snapshot
at the ancestor level when listing snapshots of a subvolume
or subvolume group.
Also, fail the snapshot info command on an inherited snapshot.
Fixes: https://tracker.ceph.com/issues/48501
Signed-off-by: Kotresh HR <khiremat@redhat.com>
This library has been obsolete since the mgr volumes plugin arrived in
Nautilus. The last remaining user of this library was Manila, which will
be using the volumes plugin from Pacific onwards.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/38640/head:
qa: add test for reserved keyword feature
qa: use no client for required client feature tests
mds: do not allow setting a reserved feature by name
mds: return sv for efficiency
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
The MDS scrub code has removed the detailed info about scrubbing
from the response message.
Fixes: https://tracker.ceph.com/issues/48559
Signed-off-by: Xiubo Li <xiubli@redhat.com>
* refs/pull/38664/head:
qa: bump scrub timeout
qa: move cephfs_ec_profile under cephfs
qa: do not use ec pools for default data pool
qa: skip check-counters for light workloads
qa: remove empty multimds suite
qa: merge multimds:verify with fs:verify
qa: merge multimds:thrash to fs:thrash
qa: remove dead multimds:basic
qa: move functional multimds tests to fs:functional
qa: migrate multimds workloads to fs:workloads
qa: only run valgrind on cephfs daemons
qa: stop testing filestore on cephfs suites
qa: load data pools before deleting fs
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Tested-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/38693/head:
qa: execute scrubs only on rank 0
client: print debug information about resolved MDS
qa: let Client::resolve_mds lookup the rank
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The orchestrator defines the service name as "<service_type>.<service_id>".
The ganesha common config object name in the orchestrator is
"conf-<service_name>". The volumes nfs plugin deploys nfs-ganesha clusters
with 'ganesha' prefixed to the cluster id and the common config object.
This can cause unnecessary issues in rook and cephadm, so let's remove the
prefix.
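For illustration, with the prefix removed the names line up with the
orchestrator convention (the cluster id below is an example value):
    service_type = "nfs"
    cluster_id = "mynfs"                                  # example value
    service_name = "%s.%s" % (service_type, cluster_id)   # "nfs.mynfs"
    common_conf_obj = "conf-%s" % service_name            # "conf-nfs.mynfs"
    # previously the nfs plugin prefixed 'ganesha' to the cluster id and the
    # common config object, which did not match what rook/cephadm expect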
Fixes: https://tracker.ceph.com/issues/48514
Signed-off-by: Varsha Rao <varao@redhat.com>
mgr/dashboard: minimize console log traces of Ceph backend API test
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
If the "ceph.cluster_fsid" and "ceph.client_id" vxattrs or the
"metric" debug file are not support yet, will assume the test
succeeds.
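A hedged sketch of the skip logic for the vxattr case (a standalone
illustration, not the actual qa helper):
    import errno
    import os

    def cluster_fsid_or_none(mount_path):
        # If the client does not support the "ceph.cluster_fsid" vxattr yet,
        # return None so the caller can treat the test as having succeeded.
        try:
            return os.getxattr(mount_path, "ceph.cluster_fsid")
        except OSError as e:
            if e.errno in (errno.ENODATA, errno.EOPNOTSUPP):
                return None
            raise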
Fixes: https://tracker.ceph.com/issues/48053
Signed-off-by: Xiubo Li <xiubli@redhat.com>
1. Add a test case for authorizing an auth_id which was not added by
ceph_volume_client.
2. Add a test case to test the 'allow_existing_id' option.
3. Add a test case for deauthorizing an auth_id which has had its caps
updated out of band.
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Along with the host IP, Docker container IPs sometimes show up in
'hostname -I' output. Since this output is variable, just check whether
the host IP is present in the cluster info output.
Fixes: https://tracker.ceph.com/issues/48491
Signed-off-by: Varsha Rao <varao@redhat.com>
* refs/pull/37708/head:
qa/suites/fs: enable thrashing in multifs environment
qa/workunits/fs/snaps: allow tests to be run
qa/tasks/{kclient,ceph_fuse}: allow mounting
qa/tasks: allow per file system config setting
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/38093/head:
qa: add debug information for client address for kclient
qa/kernel_mount.py: rename _read_debug_file to read_debug_file
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
... filesystems other than 'cephfs'.
It is not required to set 'allow_new_snaps' to True to allow snapshots
to be created on a file system, so remove that setting.
Remove 'fs/snaps/snaptest-0.sh', which is racy when run in parallel on
another client that mounted the same file system. Include a similar
test in qa/tasks/cephfs/test_snapshots.py.
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/37982/head:
qa/cephfs: add code for when config is None in __init__
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
The 'op_r' counter just accounts for the CEPH_OSD_FLAG_READ flag, which
includes some opcodes that don't really read data, such as CEPH_OSD_OP_STAT.
Signed-off-by: Xiubo Li <xiubli@redhat.com>
When tests are launched with the kernel client using vstart_runner.py,
config is None and, therefore, the call "config.get()" leads to a crash.
Assigning None to self.rbytes is important since leaving it undefined
will lead to a crash, because code executed later assumes that
self.rbytes is defined.
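The guard amounts to something like this (an illustrative stand-in class,
not the exact diff in kernel_mount.py):
    class MountSketch(object):
        # stand-in for the qa mount classes, to show the config=None guard
        def __init__(self, config=None):
            if config is None:
                config = {}
            # vstart_runner.py passes config=None, so guard before .get();
            # self.rbytes must also end up defined because later code reads it
            self.rbytes = config.get('rbytes', None)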
Fixes: https://tracker.ceph.com/issues/48147
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Four file systems will use all MDSs and generate this warning:
2020-11-02T03:48:33.407 INFO:teuthology.orchestra.run.smithi003.stdout:2020-11-02T03:24:21.817337+0000 mon.a (mon.0) 481 : cluster [WRN] Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
Fixes: https://tracker.ceph.com/issues/23718
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Example:
2020-10-30T03:37:33.752 INFO:tasks.cephfs_test_runner:======================================================================
2020-10-30T03:37:33.752 INFO:tasks.cephfs_test_runner:FAIL: test_mount_mon_and_osd_caps_present_mds_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
2020-10-30T03:37:33.752 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-10-30T03:37:33.753 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-10-30T03:37:33.753 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/github.com_batrick_ceph_cephfs-qa-reorg/qa/tasks/cephfs/test_multifs_auth.py", line 311, in test_mount_mon_and_osd_caps_present_mds_caps_absent
2020-10-30T03:37:33.753 INFO:tasks.cephfs_test_runner: self.check_that_mount_failed_for_right_reason(retval[2])
2020-10-30T03:37:33.753 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/github.com_batrick_ceph_cephfs-qa-reorg/qa/tasks/cephfs/test_multifs_auth.py", line 269, in check_that_mount_failed_for_right_reason
2020-10-30T03:37:33.753 INFO:tasks.cephfs_test_runner: raise AssertionError('can\'t find expected set of words in the '
2020-10-30T03:37:33.754 INFO:tasks.cephfs_test_runner:AssertionError: can't find expected set of words in the stderr
2020-10-30T03:37:33.754 INFO:tasks.cephfs_test_runner:self.errmsgs - ('permission denied', 'no mds server is up or the cluster is laggy', 'no such file or directory')
2020-10-30T03:37:33.754 INFO:tasks.cephfs_test_runner:stderr - mount error 5 = input/output error
From: /ceph/teuthology-archive/pdonnell-2020-10-30_02:26:51-fs-master-distro-basic-smithi/5573109/teuthology.log
Fixes: https://tracker.ceph.com/issues/23718
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
These were not tested with kcephfs before; let's see if there are any
bugs!
Fixes: https://tracker.ceph.com/issues/23718
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/37629/head:
qa/cephfs: add session_timeout option support
qa/cephfs: move the cephfs's opertions setting to create()
qa/cephfs: add 'cephfs:' section support
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/37652/head:
qa/tasks: tear down the background process before unmounting
qa/tasks: switch to _kill_background() helper to terminate the daemons
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
When the MDS revokes the Fwbl caps, the clients need to flush
their dirty data back to the OSDs, but the flush may overload the OSDs
and make them slow, so it may take more than 60 seconds to
finish. The MDS daemons will then report WRN messages.
For the teuthology test cases, let's just increase the timeout
value to make it work.
Fixes: https://tracker.ceph.com/issues/47565
Signed-off-by: Xiubo Li <xiubli@redhat.com>
A trivial "find" command on a large directory hierarchy will cause the
client to receive caps significantly faster than it will release. The
MDS will try to have the client reduce its caps below the
mds_max_caps_per_client limit but the recall throttles prevent it from
catching up to the pace of acquisition. The solution is to throttle
readdir from client. This patch does the same.
The readdir is throttled on the condition that the number of caps
acquired is greater than certain percentage of mds_max_caps_per_client
(default is 10%) and cap acquisition via readdir is certain percentage
of mds_max_caps_per_client (the default is 50%). When the above
condition is met, the readdir request is retried after
'mds_cap_acquisition_throttle_retry_request_timeout' (default is 0.5)
seconds.
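In rough pseudocode (the real logic lives in the C++ MDS server; the
structure below is only illustrative):
    def should_throttle_readdir(session_caps, caps_acquired_via_readdir,
                                mds_max_caps_per_client,
                                caps_throttle_ratio=0.10,
                                acquisition_throttle_ratio=0.50):
        # Throttle (i.e. retry the readdir after
        # mds_cap_acquisition_throttle_retry_request_timeout, 0.5s by default)
        # only when both conditions hold.
        return (session_caps > caps_throttle_ratio * mds_max_caps_per_client
                and caps_acquired_via_readdir >
                    acquisition_throttle_ratio * mds_max_caps_per_client)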
Fixes: https://tracker.ceph.com/issues/47307
Signed-off-by: Kotresh HR <khiremat@redhat.com>
If the background process keeps running while holding the mountpoint
directory open, the unmount will fail with EBUSY.
Fixes: https://tracker.ceph.com/issues/46883
Signed-off-by: Xiubo Li <xiubli@redhat.com>
The mount.cleanup method will remove the mount point. This `rm -rf` will
always fail (with exit status 0).
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/29951/head:
test: add tests for validating MDS metrics via `perf stats` module
test: Filesystem class helpers to grow and shrink MDS cluster
mgr/stats: mds performance stats module
mds: support sending empty perf metrics to ceph-manager
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
... into smaller test classes. This enables breaking up the volumes yaml
into smaller yaml fragments.
Fixes: https://tracker.ceph.com/issues/47160
Signed-off-by: Ramana Raja <rraja@redhat.com>
* refs/pull/36554/head:
mgr/volumes: Make number of cloner threads configurable
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Shyamsundar R <srangana@redhat.com>
Implement a root_squash mode in MDS auth caps to deny operations for
clients with uid=0 or gid=0 that need write access. It's mainly to
prevent operations such as an accidental `sudo rm -rf /path`.
The root squash mode can be enforced in one of the following ways in
the MDS caps:
'allow rw root_squash'
(across file systems)
or
'allow rw fsname=a root_squash'
(on a file system)
or
'allow rw fsname=a path=/vol/group/subvol00 root_squash'
(on a file system path)
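Conceptually the check behaves as follows (a Python sketch of the C++ MDS
auth cap logic, heavily simplified):
    def allowed_with_root_squash(uid, gid, needs_write, root_squash=True):
        # With root_squash on the matching grant, deny any operation that
        # needs write access when the client acts as root (uid 0 or gid 0).
        if root_squash and needs_write and (uid == 0 or gid == 0):
            return False
        return True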
Fixes: https://tracker.ceph.com/issues/42451
Signed-off-by: Ramana Raja <rraja@redhat.com>
In filesystem.py, don't set the value of reset_obj_attrs to False.
Fixes: https://tracker.ceph.com/issues/47526
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Right now, only client IDs are stashed and restored, but with the recent
changes (specifically, the addition of more attributes to mount objects)
this is not enough. Saving and restoring these details before and after
tests respectively ensures that mount commands run smoothly. Not doing
this typically leads to a mount command failure for the second test in the
test suite under execution, since the client IDs are saved and restored in
CephFSTestCase.setUp and CephFSTestCase.tearDown respectively but the
rest of the details are not.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add testsuite for testing authorization on Ceph cluster with multiple
file systems and enable it to be executable with Teuthology framework.
Also add helper methods required to setup the test environment for
multi-FS tests.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
The number of cloner threads is set to 4 and can't be
configured. This patch makes the number of cloner threads
configurable via the mgr config option "max_concurrent_clones".
On an increase in the number of cloner threads, it just
spawns the difference between the existing number of
cloner threads and the new configuration. It does not cancel
the running cloner threads.
On a decrease in the number of cloner threads, the cases are as follows
(see the sketch after this list).
1. If all cloner threads are waiting for a job:
In this case, all threads are notified and the required number of
threads are terminated.
2. If all cloner threads are processing a job:
In this case, the condition is validated for each thread after
the current job is finished, and the thread is terminated if the
condition for the required number of cloner threads is not satisfied.
3. If a few cloner threads are processing and others are waiting:
The threads which are waiting are notified to validate the
number of threads required. If terminating those doesn't satisfy the
required number of threads, the remaining threads are terminated
upon completion of their existing jobs.
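A simplified sketch of the resize behaviour described above (not the actual
mgr/volumes async cloner code; locking and the job queue are omitted):
    class ClonerPoolSketch(object):
        def __init__(self, nr_threads=4):
            self.nr_threads = nr_threads      # mirrors max_concurrent_clones
            self.workers = []

        def reconfigure(self, new_count):
            self.nr_threads = new_count
            if new_count > len(self.workers):
                # spawn only the difference; running cloners are never cancelled
                for _ in range(new_count - len(self.workers)):
                    self.workers.append(self._spawn_worker())
            # on a decrease nothing is cancelled here: each worker re-checks
            # self.nr_threads when idle or after finishing its current job and
            # terminates itself if the pool is over the new limit

        def _spawn_worker(self):
            # placeholder for starting a cloner thread
            return object()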
Fixes: https://tracker.ceph.com/issues/46892
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Modify the filesystem.Filesystem.delete_all_filesystems() method to make
it more succinct, move it to the MDSCluster class instead, and update
every call to it accordingly.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Modify the cephfs.filesystem.Filesystem.recreate() method to delete only
the FS represented by the object instead of deleting every FS on the
Ceph cluster.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add a reset_obj_attrs parameter to it so that the caller of the method
can choose to destroy the Ceph FS represented by the object without
disturbing the object's attributes.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This commit adds a new argument, check_status, to the mount methods of
KernelMount, FuseMount, LocalKernelMount and LocalFuseMount. When the
value of this argument is False, these methods catch the
CommandFailedError exception and return a tuple consisting of the
exception itself and the stdout and stderr of the mount command. This
allows reusing these mount methods while running negative tests for
commands.
The name "check_status" is chosen because teuthology's run() and
vstart_runner's run() use a variable with the same name for the very same
purpose.
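The intent, shown as a wrapper-style sketch (the real change is inside the
mount() methods themselves; how stdout/stderr are surfaced here is only
illustrative):
    from teuthology.exceptions import CommandFailedError

    def mount_for_negative_test(mount_obj, check_status=True, **kwargs):
        # Sketch of the check_status semantics; in the real mount methods
        # the command's stdout/stderr are captured internally and returned
        # alongside the exception.
        try:
            return mount_obj.mount(**kwargs)
        except CommandFailedError as e:
            if check_status:
                raise
            # negative tests inspect the failure instead of aborting on it
            return (e, getattr(e, "stdout", None), getattr(e, "stderr", None))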
Signed-off-by: Rishabh Dave <ridave@redhat.com>
This commit introduces the following two sets of changes.
First, make the client keyring path, the mountpoint on the host FS and on
CephFS, and the CephFS name attributes of the object representing the
mount, and update all the mount object creation calls accordingly. Also,
rewrite all the mount object creation calls to use keyword arguments
instead of positional arguments to avoid mistakes, especially since a new
argument was added in this commit.
Second, add a remount method to mount.py so that it's possible to unmount
safely, modify the attributes of the object representing the mount, and
mount again based on the new state of the object *in a single call*. The
method is placed in mount.py to avoid duplication.
This change leads to two more changes: upgrading the interfaces of
mount() and mount_wait(), and updating the test suites to adapt to these
changes.
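A condensed sketch of the remount idea, meant to live on the mount class
(illustrative; the real method in qa/tasks/cephfs/mount.py handles more
state):
    def remount(self, **new_attrs):
        # unmount safely, update the attributes of the mount object (e.g.
        # client keyring path, mountpoint, CephFS name), then mount again
        # based on the new state -- all in a single call
        self.umount_wait()
        for name, value in new_attrs.items():
            setattr(self, name, value)
        return self.mount_wait()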
Signed-off-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/36773/head:
mgr/volumes: Prevent subvolume recreate if trash is not-empty
mgr/volumes: Disallow subvolume group level snapshots
mgr/volumes: Add test case to ensure subvolume is marked
mgr/volumes: handle idempotent subvolume marks
mgr/volumes: Tests amended and added to ensure subvolume trash functionality
mgr/volumes: Mark subvolume root with the vxattr ceph.dir.subvolume
mgr/volumes: Move incarnations for v2 subvolumes, to subvolume trash
mgr/volumes: maintain per subvolume trash directory
mgr/volumes: make subvolume_v2::_is_retained() object property
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
with the vxattr ceph.dir.subvolume set to true.
Fixes: https://tracker.ceph.com/issues/47154
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Amended a few test cases to ensure created subvolumes and snapshots
are removed and the trash stays empty at the end of the test.
Further added one test case for create errors in a retained
v2 subvolume, to ensure the metadata is sane and the created incarnation
is not present.
Fixes: https://tracker.ceph.com/issues/47154
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
* refs/pull/36684/head:
qa/tasks/nfs: Test mounting of export created with nfs command
qa/tasks/nfs: Add helper method to check nfs cluster status
qa/tasks/nfs: Cleanup created filesystem
qa/tasks/nfs: Remove unused port status function and 'stdin' keyword argument
Reviewed-by: Sebastian Wagner <swagner@suse.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/36131/head:
doc: document cephfs mirroring dev work
test: add tests for `ceph fs mirror` family of commands
mds: track filesystem mirror peers in fsmap
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Allow only [A-Za-z0-9-_.] characters for FS, volume, subvolume and
subvolume group names, and add a test for the same.
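For example, the validation reduces to a check along these lines
(illustrative):
    import re

    def valid_ceph_name(name):
        # FS, volume, subvolume and subvolume group names may contain only
        # alphanumerics plus '-', '_' and '.'
        return re.fullmatch(r"[A-Za-z0-9._-]+", name) is not None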
Signed-off-by: Rishabh Dave <ridave@redhat.com>
If a subvolume's mode or uid/gid values are changed after a snapshot,
and a clone of a snapshot taken prior to the change is initiated, the
clone inherits the current source subvolume's attributes rather than the
snapshot's attributes.
Fix this by using the snapshot's subvolume root attributes to create
the clone subvolume's root.
The following attributes are picked from the source subvolume snapshot:
- uid, gid, mode, data pool, pool namespace, quota
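Schematically (not the real mgr/volumes code):
    SNAP_ATTRS_FOR_CLONE = ("uid", "gid", "mode", "data_pool",
                            "pool_namespace", "quota")

    def clone_root_attrs(snapshot_attrs):
        # build the clone subvolume's root from the snapshot's attributes,
        # not from the current state of the source subvolume
        return {key: snapshot_attrs[key] for key in SNAP_ATTRS_FOR_CLONE}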
Fixes: https://tracker.ceph.com/issues/46163
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
Otherwise the files generated are not actually under the sub-directory!
This is correcting a confusing aspect of the test infrastructure but
doesn't actually require any changes to the tests.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/36155/head:
qa: Fix traceback during fs cleanup between tests
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
* refs/pull/35944/head:
qa: defer cleaning the mountpoint's netnses and the bridge
qa/tasks/cephfs/mount.py: remove the stale netnses and bridge
qa/tasks/cephfs/mount.py: try to flush the stale ceph-brx dev info
qa/tasks/cephfs/mount.py: switch to run_shell_payload() helper
qa/tasks/cephfs/mount.py: clean up the none used code
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
The 'mon_allow_pool_delete' option is set to 'True'
in the 'setUp' of 'TestVolumes' and is cleared in the
corresponding 'tearDown' function. Hence, any pool
deletion in parent classes such as 'CephFSTestCase'
would fail. This patch fixes that by setting
the 'mon_allow_pool_delete' option in
'CephFSTestCase' instead.
Fixes: https://tracker.ceph.com/issues/46597
Signed-off-by: Kotresh HR <khiremat@redhat.com>
The netnses may be created/deleted many times over the whole set of test
cases; we can defer cleaning them until the last mountpoint is unmounted
or the test is exiting.
Fixes: https://tracker.ceph.com/issues/46282
Signed-off-by: Xiubo Li <xiubli@redhat.com>
If the previous test cases failed, the netnses and bridge will be
left behind. Remove them when new test cases begin.
Fixes: https://tracker.ceph.com/issues/45806
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Once we have run the test cases and the ceph-brx bridge is set up,
its config will be saved in "/etc/sysconfig/network-scripts/ifcfg-ceph-brx"
or somewhere else, and it will be kept after the ceph-brx bridge is
removed. So the next time the ceph-brx bridge is created or added, it will
read the config from that file, and when we configure it again we will
get an error like:
"RTNETLINK answers: File exists"
So we need to flush it before configuring it.
Fixes: https://tracker.ceph.com/issues/45817
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Subvolume operations throw a traceback if the volume
doesn't exist. This patch fixes that.
Fixes: https://tracker.ceph.com/issues/46496
Signed-off-by: Kotresh HR <khiremat@redhat.com>
Addresses the name collisions of volumes, subvolumes,
clones, subvolume groups and snapshots within the tests.
Fixes: https://tracker.ceph.com/issues/43517
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/35755/head:
mgr/volumes: Deprecate protect/unprotect CLI calls for subvolume snapshots
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
Reviewed-by: Victoria Martinez de la Cruz <vkmc@redhat.com>
Reviewed-by: Goutham Pacha Ravi <gouthamr@redhat.com>
Subvolume snapshots were required to be protected prior to cloning them.
Also, protected snapshots were not allowed to be unprotected or removed
if there were in-flight clones whose source was the snapshot being
removed.
Explicitly protecting snapshots is not required, as they can be
prevented from being removed based only on the in-flight clone checks.
This commit hence deprecates the additional protect/unprotect requirements
prior to cloning a snapshot.
In addition to deprecating the above, support for querying a subvolume for
its supported features, via the info command, is added. The feature list
is set to "clone" and "auto-protect", where the latter is useful to
decide whether protect/unprotect commands are required.
Fixes: https://tracker.ceph.com/issues/45371
Signed-off-by: Shyamsundar Ranganathan <srangana@redhat.com>
fuse_mount.py. This isn't critical at all to vstart_runner.py runs, but
this patch should dramatically reduce the time it takes in case the
command fails with "permission denied" due to lack of superuser
privileges, since in this case the command is re-run 9 more times, each
separated by a 5-second sleep.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Volume deletion wasn't validating the mon_allow_pool_delete config
before destroying the volume metadata. Hence, when mon_allow_pool_delete
was set to false, it deleted the metadata but failed to delete the pools,
resulting in an inconsistent state. This patch validates the config
before going ahead with the deletion.
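In outline, the fix checks the option up front (a sketch with illustrative
callables, not the actual mgr/volumes code):
    def delete_volume(volume_name, get_mon_config, remove_metadata, remove_pools):
        # refuse to touch the volume metadata unless pools can actually be
        # deleted, so a failure cannot leave the volume half-removed
        if str(get_mon_config("mon_allow_pool_delete")).lower() != "true":
            raise RuntimeError(
                "pool deletion is disabled; set mon_allow_pool_delete to true "
                "before deleting volume '%s'" % volume_name)
        remove_metadata(volume_name)
        remove_pools(volume_name)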
Fixes: https://tracker.ceph.com/issues/45662
Signed-off-by: Kotresh HR <khiremat@redhat.com>
* refs/pull/35420/head:
mgr/volumes: Fix pool removal on volume deletion
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
* refs/pull/35664/head:
qa: add omit_sudo=False for commands ran with sudo
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>