qa/cephfs: set joinable on FS before exiting tests in TestFSFail
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Edit doc/cephfs/fs-volumes up to the section "Cloning Snapshots" (but
not including the section "Cloning Snapshots").
Follows https://github.com/ceph/ceph/pull/57415
Signed-off-by: Zac Dover <zac.dover@proton.me>
common/options: link to mon_osd_blocklist_default_expire from RBD
Reviewed-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Reviewed-by: N Balachandran <nibalach@redhat.com>
Temporarily block test_idem_unaffected_root_squash and
test_multifs_single_path_rootsquash.
These tests fail due to a known bug. Block them temporarily so that
test_admin.py can run fully and PRs under QA can be tested fully.
Otherwise, these tests fail and halt test_admin.py, which leaves the
PR partially untested.
The failure is then seen as an unrelated failure, which lets the buggy
code get merged. This has happened recently.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
crimson/os/seastore/transaction_manager: correct the offset of the data copied from the original extents
Reviewed-by: Yingxin Cheng <yingxin.cheng@intel.com>
MDS_CLIENTS_BROKEN_ROOTSQUASH is generated and expected by
test_rootsquash_nofeature, but it hasn't been added to the ignorelist.
As a result, the QA code marks the job as failed even though all tests
finished running successfully.
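For illustration, the missing piece is an ignorelist entry; in the qa
suites this lives in a yaml fragment, shown here as the equivalent
Python dict (structure assumed from teuthology conventions):

    overrides = {
        "ceph": {
            "log-ignorelist": [
                "MDS_CLIENTS_BROKEN_ROOTSQUASH",
            ],
        },
    }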
Introduced-by: bccc8ceb47
Fixes: https://tracker.ceph.com/issues/66075
Signed-off-by: Rishabh Dave <ridave@redhat.com>
After running TestFSFail, CephFSTestCase.tearDown() fails attempting
to unmount CephFS. Set joinable on the FS and wait for the MDS to be
up before exiting the test. This ensures that unmounting succeeds
during teardown.
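A minimal sketch of the intended pattern, using qa/tasks/cephfs
Filesystem helpers (the test body itself is elided):

    def test_fs_fail(self):
        self.fs.fail()               # scenario under test
        ...                          # assertions against the failed FS
        self.fs.set_joinable()       # allow the MDS to rejoin
        self.fs.wait_for_daemons()   # wait for the MDS to be up so
                                     # that tearDown() can unmount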
Fixes: https://tracker.ceph.com/issues/65841
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Since dumping ops with --flags=locks takes the mds_lock and dumps
thousands of ops, it may take a long time to complete for each
individual MDS. The entire quiesce set may time out (and all quiesce
ops be killed) before we finish dumping ops.
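For reference, this is the slow path in question, sketched via the
CLI ("mds.a" is a placeholder daemon name):

    import subprocess

    # Takes the mds_lock and formats lock state for every op; with
    # thousands of ops this can outlast the quiesce set timeout.
    subprocess.check_call(
        ["ceph", "tell", "mds.a", "ops", "--flags=locks"])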
Fixes: https://tracker.ceph.com/issues/65823
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
common/pick_address: check if address is in subnet for all public addresses
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Prashant D <pdhange@redhat.com>
"number of seconds to blocklist - set to 0 for OSD default" in the
description of rbd_blocklist_expire_seconds refers to the value that is
controlled by mon_osd_blocklist_default_expire.
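An illustration of the relationship, assuming the usual ceph config
CLI; setting the option to 0 defers to the OSD default:

    import subprocess

    # With rbd_blocklist_expire_seconds = 0, the effective expiry is
    # taken from mon_osd_blocklist_default_expire instead.
    subprocess.check_call(
        ["ceph", "config", "set", "client",
         "rbd_blocklist_expire_seconds", "0"])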
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* refs/pull/56941/head:
mds: find a new head for the batch ops when the head is dead
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/57275/head:
qa/fsx: use a specified sha1 to build the xfstest-dev
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Leonid Usov <leonid.usov@ibm.com>
We are currently conducting regular ceph-dencoder tests for backward
compatibility, but we are omitting tests for forward compatibility.
This suite introduces tests against ceph-object-corpus to address
forward compatibility issues that may arise.
The script will install the N-2 version and run it against the latest
corpus objects that we have, then install the N-1 and N versions and
check them as well.
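An illustrative outline of that flow; install_ceph and
run_dencoder_corpus_check are hypothetical helpers standing in for
the actual script, and "N" stands for the latest release:

    def forward_compat_check():
        for release in ("N-2", "N-1", "N"):
            install_ceph(release)  # install this Ceph version
            # decode the newest corpus objects with this version's
            # ceph-dencoder; a failure indicates a forward-compat break
            run_dencoder_corpus_check(corpus="latest")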
Signed-off-by: Nitzan Mordechai <nmordech@redhat.com>
* refs/pull/57454/head:
mds/quiesce-db: optimize peer updates
mds/quiesce-db: track db epoch separately from the membership epoch
mds/quiesce-db: test that a peer on a newer membership epoch can ack a root
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/57274/head:
mds: don't stall the asok thread for flush commands
qa/quiescer: relax some timing requirements in the quiescer
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Venky Shankar <vshankar@redhat.com>
We're getting the following error while initializing 64MB disks
on WS 2019: "The disk is not large enough to support a GPT
partition style."
For this reason, we'll use MBR instead.
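A rough sketch of the fallback, assuming the test harness shells out
to PowerShell for disk setup:

    import subprocess

    def init_disk_mbr(disk_number):
        # GPT needs more space than a 64MB disk offers on WS 2019,
        # so initialize the disk as MBR instead.
        subprocess.check_call([
            "powershell.exe", "-Command",
            f"Initialize-Disk -Number {disk_number} "
            "-PartitionStyle MBR",
        ])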
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
We're adding a test (sketched below) that:
* maps a configurable number of images
* runs a specified test - we're reusing the ones from stress_test,
making just a few minor changes to allow running the same test
multiple times
* restarts the ceph-rbd Windows service
* waits for the images to be reconnected and refreshes the mount
information
* reruns the test
* repeats the above workflow for a specified number of times,
reusing the same images
This test ensures that:
* mounted images are still available after a service restart
* drive letters are retained
* the image content is retained
* there are no race conditions when connecting or disconnecting
a large number of images in parallel
* the driver is capable of mapping a specified number of images
simultaneously
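A rough sketch of that workflow; map_images, run_stress_test,
restart_service, wait_for_reconnect and refresh_mounts are
hypothetical helpers standing in for the actual test code:

    def run_service_restart_test(image_count, iterations):
        images = map_images(image_count)    # map the images once
        run_stress_test(images)             # initial pass
        for _ in range(iterations):
            restart_service("ceph-rbd")     # restart the service
            wait_for_reconnect(images)      # images must come back
            refresh_mounts(images)          # refresh drive letters
            run_stress_test(images)         # rerun on the same images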
Signed-off-by: Lucian Petrut <lpetrut@cloudbasesolutions.com>
osd: CEPH_OSD_OP_FLAG_BYPASS_CLEAN_CACHE flag is passed from ECBackend
Reviewed-by: Igor Fedotov <ifedotov@suse.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
osd: fix for segmentation fault on OSD fast shutdown
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Reviewed-by: Igor Fedotov <ifedotov@suse.com>