This commit adds a call to the `ceph-facts` role in the first play of this
playbook. This is needed so `ceph-validate` won't fail with the
following error:
```
fatal: [osd0]: FAILED! => {}
MSG:
'osd_pool_default_size' is undefined
```
`osd_pool_default_size` is set in ceph-facts.
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
When no lockers are obtained, an ImageNotFound exception is output even
though the image actually exists. When the number of lockers is zero,
no exception should be output.
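A hedged sketch of the intended behaviour in Python (hypothetical helper names, not the actual rbd code):
```
def image_lockers(image):
    lockers = image.list_lockers()   # hypothetical helper returning a list
    if not lockers:
        # Zero lockers is a normal state for an existing image; return an
        # empty result instead of surfacing ImageNotFound.
        return []
    return lockers
```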
Fixes: https://tracker.ceph.com/issues/44613
Signed-off-by: zhangdaolong <zhangdaolong@fiberhome.com>
The owner of "changcheng.liu@aliyun.com" is an employee of Intel.
Update the info for the upcoming statistics.
Signed-off-by: Changcheng Liu <changcheng.liu@aliyun.com>
is_replace=true means the reset connection is going to be replaced by
another accepting connection with the same peer_addr, which currently
only happens under lossy policy when both sides wish to connect to each
other.
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
Allow connecting to a specific peer by entity_name_t, with the required
internal validation during the v2 handshake.
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
The objecter_finisher is already started in Client::Client(), but in
the failure path when initializing and starting the Client object, we
may not get a chance to call Client::shutdown() to stop the Finisher
thread, which may still be holding its mutex lock. Then, when the
Finisher object is destructed, pthread_mutex_destroy() will fail.
This fix delays starting the objecter_finisher thread until ::init(),
by which point we are ready to call Client::shutdown() on any error.
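The pattern, sketched in Python rather than the actual C++ Client code: construct the worker up front, but only start its thread once initialization has progressed far enough that shutdown() will run on any later error.
```
import threading

class Client:
    def __init__(self):
        # Do not start the finisher thread here; a failure before init()
        # would leave it running with no shutdown() to stop it.
        self._finisher = threading.Thread(target=self._finish_loop,
                                          daemon=True)
        self._started = False

    def init(self):
        # From here on, errors go through shutdown(), which can stop the
        # thread safely, so it is now safe to start it.
        self._finisher.start()
        self._started = True

    def shutdown(self):
        if self._started:
            self._started = False
            self._finisher.join()

    def _finish_loop(self):
        pass  # stand-in for the Finisher work loop
```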
Fixes: https://tracker.ceph.com/issues/44389
Signed-off-by: Xiubo Li <xiubli@redhat.com>
otherwise, this error gets returned by RGWPSDataSyncModule::start_sync()
and data sync fails to start
Fixes: https://tracker.ceph.com/issues/44857
Signed-off-by: Casey Bodley <cbodley@redhat.com>
For example, when there are two RBD clients on the same teuthology node,
no matter what the result of the test case is, it always leads to the error below:
"Error : test -f /home/ubuntu/cephtest/archive/qemu/client.1/success"
The main reason is that _setup_nfs_mount and _teardown_nfs_mount only
support a single mount point.
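A hedged sketch of the idea in Python (hypothetical helper names, not the actual qemu task code): derive one NFS export/mount directory per client so that two RBD clients on the same node no longer collide.
```
def nfs_mount_dir(archive_dir, client):
    # One directory per client, e.g. .../qemu/client.0 and .../qemu/client.1
    return "{}/qemu/{}".format(archive_dir, client)

def setup_nfs_mounts(archive_dir, clients):
    # Map each client to its own mount point instead of assuming a single one.
    return {client: nfs_mount_dir(archive_dir, client) for client in clients}
```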
Signed-off-by: Dehao Shang <dehao.shang@intel.com>
While testing the upgrade to Angular 9,
these 2 unit tests were consistently failing.
Fixes: https://tracker.ceph.com/issues/42929
Signed-off-by: Tiago Melo <tmelo@suse.com>
Passing an empty 'args' dict as a data argument when calling
requests.get somehow confuses the transaction, causing it to fail. Pass
'None' instead.
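A minimal illustration of the change (hypothetical URL, not the actual script being fixed):
```
import requests

url = "https://example.com/api"  # hypothetical endpoint
args = {}

# Pass None rather than an empty dict as the data argument, per the fix
# described above.
resp = requests.get(url, data=args if args else None)
```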
Fixes: https://tracker.ceph.com/issues/43720
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
Normal ceph services can send task status updates to the manager.
Task status is tracked in the service map, implying that normal
ceph services have entries in the service map and the daemon tracking
index (daemon state). But the manager prunes entries from daemon
state when it receives an updated map (fs, mon, etc...). This
causes periodic pruning of service map entries to fail for normal
ceph services (those which send task status updates), since it
expects a corresponding entry in daemon state.
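One way to express the rule described above, as an illustration in Python (not the mgr implementation): the prune should not assume that every service-map entry has a corresponding daemon-state entry.
```
def prune_service_map(service_map, daemon_state, is_stale):
    for daemon, entry in list(service_map.items()):
        state = daemon_state.get(daemon)
        if state is None:
            # Services that only report task status have no daemon-state
            # entry; judge staleness from the service-map entry itself
            # instead of failing the prune.
            if is_stale(entry):
                del service_map[daemon]
        elif is_stale(state):
            del service_map[daemon]
```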
Signed-off-by: Venky Shankar <vshankar@redhat.com>
per Mark Nelson,
> yeah, 5% variation is way too low
> Sometimes we can stay within 5%, but especially if we are pushing the
> system hard we can see closer to 10-20% sometimes.
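For illustration only (hypothetical names, not the actual test harness), a tolerance check reflecting the wider variation described above:
```
# Allow up to 20% deviation from the baseline instead of 5% before
# flagging a result as a regression.
TOLERANCE = 0.20  # previously 0.05

def within_tolerance(result, baseline, tolerance=TOLERANCE):
    return abs(result - baseline) <= tolerance * baseline
```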
Signed-off-by: Kefu Chai <kchai@redhat.com>
If there happens to be a kcephfs entry in /etc/fstab for ceph-fuse's
mount point, 'mount -o remount' may pick up options from that entry.
fuse may not understand some of those options (e.g. the 'name' option).
Fixes: https://tracker.ceph.com/issues/44771
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
show 'Error ENOENT: New host example (example) failed check: ["Can't communicate with
remote host, possibly because python3 is not installed there"]' instead of a traceback
with "OSError: cannot send (already closed?)" when adding a host if python3 is not on the host
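A rough sketch of the behaviour in Python (hypothetical helper and connection object, not the actual cephadm module):
```
def check_host(conn, hostname):
    try:
        # conn.check_host() stands in for the remote host check.
        return conn.check_host(hostname)
    except OSError:
        # Translate the low-level connection error (e.g. the remote python3
        # interpreter being absent) into a readable message instead of
        # letting the traceback escape.
        raise RuntimeError(
            "New host {h} ({h}) failed check: [\"Can't communicate with "
            "remote host, possibly because python3 is not installed "
            "there\"]".format(h=hostname))
```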
Signed-off-by: Daniel-Pivonka <dpivonka@redhat.com>
We want to support N - 3 client backward compatibility (with a special case
to support Jewel since it was an LTS release). The "get_snapshot_timestamp"
cls method does not exist in Jewel clusters, so librbd should fall back
to excluding the op if it fails.
Note that this N - 3 policy also needs to apply to downstream releases,
which implies we still need to support Jewel for the time being.
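A hedged illustration of the fallback in Python (the real change is in librbd's C++ refresh path; fetch_timestamp stands in for the "get_snapshot_timestamp" cls call):
```
import errno

def snapshot_timestamps(snap_ids, fetch_timestamp):
    try:
        return {s: fetch_timestamp(s) for s in snap_ids}
    except OSError as e:
        if e.errno != errno.EOPNOTSUPP:
            raise
        # Jewel OSDs do not implement the cls method; exclude the op and
        # return no timestamps instead of failing.
        return {s: None for s in snap_ids}
```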
Fixes: http://tracker.ceph.com/issues/39450
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit c644121820)
Conflicts:
src/test/librbd/image/test_mock_RefreshRequest.cc: tweaked to support pool configs
The Luminous release did not support adding images to a group (it only
included the bare-minimum support for creating groups). Commit f76df32666
incorrectly dropped support for ignoring this possible failure. This
prevents Nautilus-release clients from opening images contained within
a Luminous-release cluster.
Fixes: http://tracker.ceph.com/issues/38834
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 6f29dc69a0)