src/common: proper handling of units in `strict_iec_cast`
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Rishabh Dave <ridave@redhat.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
smb.test is an invalid earmark now; it should be either smb or
smb.cluster.<cluster_id>. Update test_volumes.py to set valid earmarks
wherever they are used.
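A minimal sketch of what this means in the test, assuming the
`ceph fs subvolume earmark set` interface and the usual test_volumes.py
helpers (the exact call sites differ):

    # "smb.test" is no longer accepted; use "smb" or
    # "smb.cluster.<cluster_id>" instead.
    self._fs_cmd("subvolume", "earmark", "set", self.volname, subvolume,
                 "--earmark", "smb.cluster.cluster1")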
Fixes: https://tracker.ceph.com/issues/68448
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Add tests to ensure that when the cluster has any health warning,
especially MDS_TRIM, the confirmation flag is mandatory to change
max_mds.
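A hedged sketch of what such a test checks; the command helpers and the
exact confirmation flag are assumptions based on common qa conventions,
not the actual test code:

    # CommandFailedError comes from teuthology.exceptions in the qa suite.
    # With a health warning (e.g. MDS_TRIM) present, changing max_mds
    # without the confirmation flag is expected to fail ...
    with self.assertRaises(CommandFailedError):
        self.run_ceph_cmd(f'fs set {self.fs.name} max_mds 2')
    # ... and to succeed once the confirmation flag is passed.
    self.run_ceph_cmd(f'fs set {self.fs.name} max_mds 2 --yes-i-really-mean-it')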
Signed-off-by: Rishabh Dave <ridave@redhat.com>
mgr/nfs: generate user_id & access_key for apply_export(CephFS)
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
Reviewed-by: John Mulligan <jmulligan@redhat.com>
* refs/pull/55421/head:
qa/cephfs: add test to verify backtrace update failure on deleted data pool
mds: batch backtrace updates by pool-id when expiring a log segment
mds: dump log segment in segment expiry callback
mds: dump log segment end along with offset
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Fixes some doc lint and also fixes the qa tests for having both
protocols 3 & 4 by default in the export config.
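For reference, a sketch of the shape the tests now expect from a default
export spec; the values below are illustrative, not the exact qa fixture:

    export = {
        "export_id": 1,
        "path": "/",
        "pseudo": "/cephfs",
        "access_type": "RW",
        "protocols": [3, 4],   # both protocols enabled by default now
        "fsal": {"name": "CEPH", "fs_name": "myfs"},
    }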
Add unit tests for unique user ID generation, deletion and `cmount_path` handling in FSAL exports
- Ensure unique user ID generation for different FSAL blocks when creating exports.
- Test deletion behavior when multiple exports share the same user ID and one has a unique ID.
- Test default behavior when no `cmount_path` is provided (defaults to `/`);
see the sketch after this list.
- Add tests to validate error handling for invalid `cmount_path` values.
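A minimal sketch of the default-`cmount_path` case; the helper used here
is hypothetical, not the actual mgr/nfs test code:

    # If the FSAL block carries no cmount_path, it should default to "/".
    fsal = {"name": "CEPH", "fs_name": "myfs"}      # no cmount_path given
    export = self._create_export_for_test(fsal)     # hypothetical helper
    self.assertEqual(export['fsal']['cmount_path'], '/')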
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
* refs/pull/58419/head:
mds: generate correct path for unlinked snapped files
qa: add test for cephx path check on unlinked snapped dir tree
mds: add debugging for stray_prior_path
Reviewed-by: Milind Changire <mchangir@redhat.com>
The journal reset effectively cleared the cache so the rank may not have the
dirfrag in memory when we verify alternate name recovery.
Fixes: https://tracker.ceph.com/issues/67511
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* Make all replayer threads busy and then query for the 'syncing' state
instead of just fetching the current status once (see the sketch after
this list).
* Drop the 'current_syncing_snap' check, as it's not compulsory for this
test. The actual intention is to get the threads into 'syncing' status,
and the 'current_syncing_snap' check is not necessary for that.
* Drop the 'snaps_deleted' metrics check in
test_cephfs_mirror_cancel_mirroring_and_readd, which primarily focuses
on the synchronization of the newly added directory paths after removal
of the previously added/syncing directory paths; checking the
'snaps_deleted' metric is unnecessary here.
* Wait longer for the new snapshot creations and the sync backoff to
finish. We need to wait longer in
test_cephfs_mirror_cancel_mirroring_and_readd, as the test makes all
replayer threads busy.
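The first point boils down to polling rather than sampling. A minimal,
self-contained sketch of that polling pattern, assuming a
get_mirror_state() callable that returns the replayer state (the real
test uses its own status helpers):

    import time

    def wait_for_syncing_state(get_mirror_state, timeout=120, interval=5):
        # Re-check the mirror daemon status until a replayer reports
        # 'syncing', instead of fetching the status a single time.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if get_mirror_state() == 'syncing':
                return True
            time.sleep(interval)
        return False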
Fixes: https://tracker.ceph.com/issues/64711
Signed-off-by: Jos Collin <jcollin@redhat.com>
The test is test_subvolume_snapshot_info_if_clone_pending_for_no_group,
located in class TestSubvolumeSnapshotClones in test_volumes.py.
5 seconds can (sometimes) be insufficient as the value of the config
option "snapshot_clone_delay" in this test. Increase it to avoid
unnecessary race conditions which lead to irrelevant failures.
Following is an example where 5 seconds was insufficient as the waiting
period, since it instead took 8 seconds -
2024-07-28T18:16:10.088 DEBUG:teuthology.orchestra.run.smithi064:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config set mgr mgr/volumes/snapshot_clone_no_wait False
...
2024-07-28T18:16:18.694 DEBUG:teuthology.orchestra.run.smithi064:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume snapshot info cephfs subvol79370 subvol_snap40980
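The fix itself is just a larger delay. A sketch of the kind of call
involved, assuming the standard config_set() qa helper (the exact value
used by the test may differ):

    # 5 seconds sometimes races with the snapshot info query; use a
    # larger delay so the clone is still pending when its info is fetched.
    self.config_set('mgr', 'mgr/volumes/snapshot_clone_delay', 15)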
This issue was seen during testing of the PR to which this commit
belongs.
This commit has been separated from the commit that adds tests for clone
progress reporting so that it is easy to document the need for this code
patch and also to track it.
This commit is not being moved to a different PR and has been kept on
the same PR since the issue can't be reproduced otherwise. This also
ensures that the commit is backported to older releases along with the
code that caused the issue, so that nobody has to hunt for this commit
during a backporting effort.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Clone progress is shown to the user through the "ceph fs clone status"
output and through the "ceph status" output. Test both these features.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
TestVolumesHelper._do_subvolume_io() is a helper method that allows
users to generate data for testing. The mgr/vol code that reports
progress made by clone jobs depends on the value set for the rbytes
xattr, and it takes a bit of time for rbytes to be set.
Therefore, all tests in TestCloneProgressReporter need to wait for the
subvolume's rbytes xattr to be set to the actual amount of data present
on the subvolume before proceeding with the actual testing.
To make this possible, make _do_subvolume_io() return the size of the
data it has generated.
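A sketch of how a test can use the returned size, assuming the standard
wait_until_true() and getfattr() qa helpers (subvol_path is
illustrative):

    size = self._do_subvolume_io(subvolume, number_of_files=10, file_size=1)
    # Wait until the recursive stats catch up with the data just written.
    self.wait_until_true(
        lambda: int(self.mount_a.getfattr(subvol_path, 'ceph.dir.rbytes')) == size,
        timeout=120)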
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Add a helper method that accepts command arguments (along with the rest
of the parameters accepted by the method run_shell()) and returns the
stdout of the command.
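A minimal sketch of such a helper; the name is hypothetical and
run_shell() is assumed to expose stdout as a StringIO, as the qa mount
objects do:

    def run_shell_stdout(self, args, **kwargs):
        # Run the command via run_shell() and hand back only its stdout.
        return self.run_shell(args, **kwargs).stdout.getvalue().strip()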
Signed-off-by: Rishabh Dave <ridave@redhat.com>
1. Let the caller check for multiple states. It might happen that a
clone finishes while it is being cancelled; in such cases the user might
want to check for both.
2. Add a helper method to check if a clone is in the pending state and a
separate method to check if a clone is in the cancelled state (a sketch
of both follows).
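A hedged sketch of both points; the helper names and the underlying
_get_clone_state() are assumptions, not the actual test_volumes.py code:

    def _check_clone_state(self, clone, expected_states, timeout=120):
        # Accept several states so that, e.g., a clone completing while
        # it is being cancelled does not fail the check.
        self.wait_until_true(
            lambda: self._get_clone_state(clone) in expected_states,
            timeout=timeout)

    def _check_clone_pending(self, clone):
        self._check_clone_state(clone, ('pending',))

    def _check_clone_canceled(self, clone):
        self._check_clone_state(clone, ('canceled',))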
Signed-off-by: Rishabh Dave <ridave@redhat.com>