* refs/pull/34098/head:
cephadm: relabel /etc/ganesha mount
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Sebastian Wagner <swagner@suse.com>
First, we don't know how big the pools should be or what they should
look like. The caller should already know that, and/or radosgw can
create the pools itself.
This depends on https://github.com/rook/rook/pull/5058
Signed-off-by: Sage Weil <sage@redhat.com>
A few caveats here:
- enforce that realm == zone, since that is all rook does at the moment.
- force a (bad!) pool configuration, since rook requires that the pools
be present (instead of allowing radosgw or the caller to create them).
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/34068/head:
mgr/cephadm: clean up client.crash.* container_image settings after upgrade
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Sebastian Wagner <swagner@suse.com>
When a new leader acquires the lock, it will send out a lock acquired
notification along with periodic heartbeats. The 'get locker' request is
scheduled to run immediately, but if a heartbeat arrives before it
executes, the heartbeat cancels the timer and reschedules it for the
future. This process repeats for each periodic heartbeat, so the locker
is never re-read from the OSD.
This is an issue only for namespace replayers, due to the delayed
fashion in which the leader instance id is retrieved.
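A minimal sketch of the starvation pattern, with hypothetical timer and
callback names rather than the actual rbd-mirror classes:

    #include <functional>
    #include <iostream>

    struct Timer {
      std::function<void()> task;
      bool pending = false;
      void schedule(std::function<void()> t) { task = std::move(t); pending = true; }
      void cancel() { pending = false; }
      void fire() { if (pending) { pending = false; task(); } }
    };

    int main() {
      Timer timer;
      auto schedule_get_locker = [&] {
        timer.schedule([] { std::cout << "re-read locker from OSD\n"; });
      };
      schedule_get_locker();    // lock-acquired notification: run "immediately"
      for (int hb = 0; hb < 3; ++hb) {
        timer.cancel();         // BUG: heartbeat arrives before the timer fires...
        schedule_get_locker();  // ...and pushes the task into the future again
      }
      // timer.fire() never runs between heartbeats, so the locker state
      // stays stale indefinitely.
    }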
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
If the sync request was locally canceled, we need to resume the paused
shut-down logic instead of just notifying the image replayer state
machine of the change, since it has already requested a shut down and
will not re-request it.
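A hedged sketch of the intended control flow, with hypothetical names
(not the actual image replayer code):

    #include <cerrno>
    #include <iostream>

    struct SyncStateMachine {
      bool shut_down_pending = false;

      void handle_sync_complete(int r) {
        if (r == -ECANCELED && shut_down_pending) {
          // the image replayer already requested a shut down and will not
          // re-request it, so resume the paused shut-down logic here
          resume_shut_down();
          return;
        }
        notify_image_replayer(r);  // normal path: report the state change
      }

      void resume_shut_down() { std::cout << "resuming shut down\n"; }
      void notify_image_replayer(int r) { std::cout << "notify r=" << r << "\n"; }
    };

    int main() {
      SyncStateMachine sm;
      sm.shut_down_pending = true;
      sm.handle_sync_complete(-ECANCELED);  // resumes shut down, no notify
    }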
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Even if the image is in use, moving it to the trash does not remove any
data. This also resolves a race between snapshot-based mirroring
shutting down and being able to move a mirrored image to the trash.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Snapshot-based mirroring did not have any throttling to prevent too
many concurrent syncs from running. Since each sync might need to
iterate over every object of an image, this could put an extreme burden
on the remote cluster.
A future PR will add a more intelligent throttle based on the actual
number of objects that need to be scanned.
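For illustration, a minimal count-based throttle along these lines (a
hypothetical class, not the actual rbd-mirror throttler): at most
max_concurrent syncs run, and the rest queue until a slot frees up.

    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <mutex>

    class SyncThrottle {
     public:
      explicit SyncThrottle(size_t max_concurrent) : m_max(max_concurrent) {}

      void start_sync(std::function<void()> sync) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_in_flight < m_max) {
          ++m_in_flight;
          sync();                              // run immediately
        } else {
          m_queue.push_back(std::move(sync));  // defer until a slot frees up
        }
      }

      void finish_sync() {
        std::function<void()> next;
        {
          std::lock_guard<std::mutex> lock(m_mutex);
          if (m_queue.empty()) {
            --m_in_flight;
            return;
          }
          next = std::move(m_queue.front());
          m_queue.pop_front();
        }
        next();  // slot handed directly to the next queued sync
      }

     private:
      std::mutex m_mutex;
      size_t m_max;
      size_t m_in_flight = 0;
      std::deque<std::function<void()>> m_queue;
    };

    int main() {
      SyncThrottle throttle(2);
      for (int i = 0; i < 4; ++i)
        throttle.start_sync([] { /* image sync body */ });
      throttle.finish_sync();  // frees a slot; first queued sync starts
    }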
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Although it is not necessary to mark_down the connection in its
ms_handle_reset() event, it can be more convenient to allow it, and
Heartbeat already encounters this assertion failure.
So move the assertion to close_clean(), which will help identify
problems if we happen to make ms_handle_reset() wait for messenger
shutdown.
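A rough sketch of the intent, with hypothetical names (the real crimson
code differs):

    #include <cassert>

    struct Connection {
      bool in_reset_dispatch = false;  // true while ms_handle_reset() runs

      // the assertion now lives here instead of in the reset path: a
      // clean close awaited from inside ms_handle_reset() would mean the
      // dispatch callback is waiting on messenger shutdown
      void close_clean() {
        assert(!in_reset_dispatch);
      }
    };

    int main() {
      Connection c;
      c.close_clean();  // fine outside of a reset dispatch
    }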
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
* be explicit that mark_down() won't trigger a reset event (see the
sketch below);
* return void so no deadlock is possible and memory is still
safeguarded by Messenger::shutdown();
* related changes in crimson/osd;
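A simplified sketch of that contract (not the actual crimson::net
interface):

    struct Connection {
      bool closed = false;

      // explicitly does NOT trigger a ms_handle_reset() event; returning
      // void means callers never wait on the close, so no deadlock is
      // possible, and any in-flight memory stays valid until
      // Messenger::shutdown() completes
      void mark_down() {
        closed = true;
      }
    };

    int main() {
      Connection c;
      c.mark_down();  // fire-and-forget: no reset event, nothing to await
    }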
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
When a new connection tries to replace the old one, the event order
should be:
1. reset(old);
2. accept(new);
This means we cannot just reschedule the reset event asynchronously,
and we still need to make sure the internal state is consistent when
the connection is reset.
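A minimal sketch of the required ordering, using hypothetical
dispatcher hooks:

    #include <iostream>

    struct Dispatcher {
      void ms_handle_reset()  { std::cout << "1. reset(old)\n"; }
      void ms_handle_accept() { std::cout << "2. accept(new)\n"; }
    };

    // the reset must be delivered synchronously, before the accept; if
    // it were rescheduled asynchronously, a dispatcher could observe
    // accept(new) ahead of reset(old)
    void replace_connection(Dispatcher& d) {
      d.ms_handle_reset();   // tear down the old connection's state first
      d.ms_handle_accept();  // only then surface the replacement
    }

    int main() {
      Dispatcher d;
      replace_connection(d);
    }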
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>