Previously, the peer uuid variable was empty, which resulted in a failure
to remove the duplicate peer.
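For illustration, a rough sketch of the intended lookup (the pool name and
the jq-based parsing are placeholders, not the test's actual code):

  # Look up the uuid of the already-configured peer before removing it;
  # the bug was that an empty uuid was being passed to "peer remove".
  POOL=mirror
  peer_uuid=$(rbd mirror pool info ${POOL} --format json |
      jq -r '.peers[0].uuid // empty')
  if [ -n "${peer_uuid}" ]; then
      rbd mirror pool peer remove ${POOL} ${peer_uuid}
  fi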
Fixes: https://tracker.ceph.com/issues/47007
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
We need to temporarily disable "exit on error" mode so it does not
abort when `rbd mirror pool peer add` returns "already exists"
error code.
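As a rough sketch of the approach (cluster, pool, and peer names are
placeholders):

  # Temporarily drop errexit so an "already exists" result from
  # "rbd mirror pool peer add" does not abort the whole test run.
  set +e
  out=$(rbd --cluster cluster1 mirror pool peer add mirror \
      client.rbd-mirror-peer@cluster2 2>&1)
  rc=$?
  set -e
  if [ ${rc} -ne 0 ]; then
      # Tolerate only the "already exists" case; anything else is fatal.
      echo "${out}" | grep -q 'already exists' || exit ${rc}
  fi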
Signed-off-by: Mykola Golub <mgolub@suse.com>
We might race with the remote rbd-mirror daemon creating a
tx-only peer when adding a new peer. Therefore, delete the
tx-only peer and attempt to re-create it.
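A minimal sketch of that fallback, assuming placeholder pool and site names
and that the peer direction is visible in the JSON output of
"rbd mirror pool info":

  # If the remote daemon already auto-created a tx-only peer, remove it
  # so the full peer can be re-created without an "already exists" error.
  POOL=mirror
  SITE=cluster2
  tx_uuid=$(rbd mirror pool info ${POOL} --format json |
      jq -r '.peers[] | select(.direction == "tx-only") | .uuid')
  if [ -n "${tx_uuid}" ]; then
      rbd mirror pool peer remove ${POOL} ${tx_uuid}
  fi
  rbd mirror pool peer add ${POOL} client.rbd-mirror-peer@${SITE}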
Fixes: https://tracker.ceph.com/issues/44938
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Ensure that snapshot-based mirroring is tested in different RBD image
feature combinations.
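For example, an illustrative loop over feature sets (the actual matrix used
by the test may differ):

  # Create images with several feature combinations and enable
  # snapshot-based mirroring on each of them.
  for features in layering \
                  layering,exclusive-lock \
                  layering,exclusive-lock,object-map,fast-diff; do
      image="img-$(echo "${features}" | tr ',' '-')"
      rbd create --size 128M --image-feature "${features}" "mirror/${image}"
      rbd mirror image enable "mirror/${image}" snapshot
  done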
Fixes: https://tracker.ceph.com/issues/44396
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The 'ceph' CLI and 'rbd mirror pool/image status' commands should revert
to using the admin user so that they have the proper credentials for the
cluster.
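A sketch of the idea, assuming the helpers normally inject the restricted
mirror user via CEPH_ARGS:

  # Clear CEPH_ARGS for status queries so the default admin credentials
  # are used instead of the restricted mirror user.
  CEPH_ARGS='' ceph --cluster cluster1 -s
  CEPH_ARGS='' rbd --cluster cluster1 mirror pool status mirror --verbose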
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The mirroring site name is stored in the MON config which requires
higher privileges than the standard "client.mirror" user.
Fixes: https://tracker.ceph.com/issues/44066
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
A new functional test for snapshot-based mirroring will be created and
the other stress-tests should eventually be applied to both snapshot-
and journal-based mirroring.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Due to the pipe, the sdiff return code was ignored. Also, the compare
functions spent most of their time in xxd, which was usually unnecessary.
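A sketch of the faster comparison (not the helper's exact code): compare
with cmp first and only produce an xxd/sdiff dump when the files differ,
so the return code is checked directly rather than being lost behind a
pipe:

  compare_images()
  {
      local img1=$1 img2=$2
      if ! cmp -s "${img1}" "${img2}"; then
          # Dump a readable diff for debugging, then fail explicitly.
          sdiff -s <(xxd "${img1}") <(xxd "${img2}") | head -20
          return 1
      fi
  }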
Signed-off-by: Mykola Golub <mgolub@suse.com>
This better mimics the behavior of teuthology and tests the rbd-mirror
daemon's ability to handle a pool deletion.
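Roughly, the test now does something like the following (pool and cluster
names are placeholders):

  # Delete the mirrored pool while rbd-mirror is still running and make
  # sure the daemon survives instead of crashing.
  ceph --cluster cluster1 osd pool rm mirror mirror \
      --yes-i-really-really-mean-it
  sleep 5
  pgrep -f 'rbd-mirror.*--cluster cluster1' > /dev/null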
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The test extracts the mon addresses from the monmap, but with the
recent v2 format change it extracted an invalid address.
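A sketch of pulling a usable legacy (v1) address from the JSON monmap
instead of scraping the text dump (the exact JSON layout is an assumption):

  # Extract a v1 mon address; the text monmap now prints v2-style
  # address vectors that the old parsing mishandled.
  mon_addr=$(ceph --cluster cluster1 mon dump --format json 2>/dev/null |
      jq -r '.mons[0].public_addrs.addrvec[] |
             select(.type == "v1") | .addr')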
Fixes: http://tracker.ceph.com/issues/38385
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
It is particularly useful when running multiple rbd-mirror instances
in Active-Passive or Active-Active mode.
Signed-off-by: Mykola Golub <mgolub@suse.com>
This var is mostly used when running rbd_mirror test scripts on
teuthology. It can also be used locally to speed up re-running the
tests:
Set a test temp directory:
  export RBD_MIRROR_TEMDIR=/tmp/tmp.rbd_mirror
Run the tests the first time with the NOCLEANUP flag (the cluster and
daemons are not stopped on finish):
  RBD_MIRROR_NOCLEANUP=1 ../qa/workunits/rbd/rbd_mirror.sh
Now, to re-run the test without restarting the cluster, run cleanup
with the USE_EXISTING_CLUSTER flag:
  RBD_MIRROR_USE_EXISTING_CLUSTER=1 \
    ../qa/workunits/rbd/rbd_mirror_ha.sh cleanup
and then run the tests:
  RBD_MIRROR_USE_EXISTING_CLUSTER=1 \
    ../qa/workunits/rbd/rbd_mirror_ha.sh
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
by optionally specifying the daemon instance after the cluster name and
a colon, like:
  start_mirror ${cluster}:${instance}
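A sketch of how such a "cluster[:instance]" spec could be split, with the
instance defaulting to 0 when no colon is given (hypothetical helper, not
the script's actual code):

  parse_daemon_spec()
  {
      local spec=$1
      local cluster=${spec%%:*}
      local instance=0
      [ "${spec}" != "${cluster}" ] && instance=${spec##*:}
      echo "${cluster} ${instance}"
  }
  parse_daemon_spec cluster1:2   # -> "cluster1 2"
  parse_daemon_spec cluster1     # -> "cluster1 0"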
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
Otherwise, it does not work as expected in statements like the one below:
  set -e
  test_status_in_pool_dir ... && ...
(e.g. in wait_for_status_in_pool_dir)
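The underlying bash behavior, in sketch form: when a function is invoked as
part of an "&&" list, errexit is suppressed inside it, so a failing command
does not abort the function and the caller only sees the status of its last
command:

  set -e
  check()
  {
      false                  # errexit is suppressed; execution continues
      echo "still running"
  }                          # check returns 0 (status of the last command)
  check && echo "reported success despite the failed command"
  # hence the helper must propagate failures explicitly, e.g. "|| return 1"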
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
This fixes a race in resync tests leading to false negative results.
Fixes: http://tracker.ceph.com/issues/18048
Signed-off-by: Mykola Golub <mgolub@mirantis.com>