The justification for this change is the behaviour of the classical OSD:
it calls PrimaryLogPG::find_object_context() well before iterating over
the OSDOps in ::do_osd_ops().
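For illustration, a simplified, self-contained sketch of that ordering
(the types here are stand-ins, not the actual PrimaryLogPG code):

    #include <vector>

    struct ObjectContext {};
    struct OSDOp {};

    // the object context is resolved once, up front...
    ObjectContext find_object_context() { return {}; }

    // ...and only afterwards does each OSDOp execute against it
    void do_osd_ops(ObjectContext& obc, std::vector<OSDOp>& ops) {
      for ([[maybe_unused]] auto& op : ops) { /* per-op work */ }
    }

    int main() {
      auto obc = find_object_context();
      std::vector<OSDOp> ops(2);
      do_osd_ops(obc, ops);
    }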
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
we should stop the config service *after* the osd is stopped, as the osd
depends on a working config subsystem while stopping itself. for instance,
the destructor of AuthRegistry unregisters itself from the ObserverMgr,
which is in turn a member variable of ConfigProxy, so if ConfigProxy is
destroyed before mon::Client, we get a segfault with the following
backtrace:
ObserverMgr<ceph::md_config_obs_impl<ceph::common::ConfigProxy>
>::remove_observer(ceph::md_config_obs_impl<ceph::common::ConfigProxy>*)
at /var/ssd/ceph/build/../src/common/config_obs_mgr.h:78
AuthRegistry::~AuthRegistry() at
/var/ssd/ceph/build/../src/crimson/common/config_proxy.h:101
(inlined by) AuthRegistry::~AuthRegistry() at
/var/ssd/ceph/build/../src/auth/AuthRegistry.cc:28
ceph::mon::Client::~Client() at
/var/ssd/ceph/build/../src/crimson/mon/MonClient.h:44
ceph::mon::Client::~Client() at
/var/ssd/ceph/build/../src/crimson/mon/MonClient.h:44
OSD::~OSD() at /usr/include/c++/9/bits/unique_ptr.h:81
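to make the constraint concrete, a minimal self-contained sketch of the
required teardown order (names are hypothetical, not the crimson code):

    #include <memory>

    struct ConfigService { /* observers register here */ };

    struct Osd {
      ConfigService* conf = nullptr;
      // the destructor unregisters observers, so *conf must still be alive
      ~Osd() { /* conf->remove_observer(...); */ }
    };

    int main() {
      auto conf = std::make_unique<ConfigService>();
      auto osd = std::make_unique<Osd>();
      osd->conf = conf.get();
      osd.reset();   // stop and destroy the osd first...
      conf.reset();  // ...then tear down the config service it depends on
    }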
Signed-off-by: Kefu Chai <kchai@redhat.com>
global/pidfile: pass string_view instead of ConfigProxy to pidfile_wr…
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Currently vstart.sh only supports deploying one OSD backed by an NVMe SSD.
The following two cases will cause errors:
1. There are two or more NVMe SSDs from the same vendor on the machine.
2. Trying to deploy two or more OSDs when only one pci_id is available.
Add support for deploying multiple OSDs on a machine with multiple NVMe
SSDs.
Change-Id: I6016435c1438bb4d16aff31f4575e03ccd3c9b3d
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
The "unmap" request is asynchronous, so wait for a short amount
of time for the "rbd-nbd" daemon process to exit.
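An illustrative sketch of such a bounded wait on POSIX (how the pid is
obtained is out of scope here; this is not the actual rbd-nbd code):

    #include <cerrno>
    #include <chrono>
    #include <thread>
    #include <signal.h>
    #include <sys/types.h>

    // poll for a short grace period until the daemon process is gone
    bool wait_for_exit(pid_t pid, std::chrono::milliseconds timeout) {
      using namespace std::chrono;
      const auto deadline = steady_clock::now() + timeout;
      while (steady_clock::now() < deadline) {
        if (kill(pid, 0) == -1 && errno == ESRCH) {
          return true;  // process has exited
        }
        std::this_thread::sleep_for(milliseconds(50));
      }
      return false;  // still running after the grace period
    }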
Fixes: http://tracker.ceph.com/issues/39598
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
writing an empty timestamp to the bilog prevents other zones from
applying the delete. this means that the --bypass-gc flag for
'radosgw-admin bucket rm' doesn't work in multisite
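the gist of the fix, as a hedged sketch with stand-in types (the actual
rgw structures and field names differ):

    #include <chrono>

    struct BilogEntry {
      // zero-initialized == the "empty" timestamp other zones ignore
      std::chrono::system_clock::time_point timestamp{};
    };

    BilogEntry make_delete_entry() {
      BilogEntry e;
      // record a real deletion time so peer zones apply the delete
      // when replaying the bilog, instead of leaving it empty
      e.timestamp = std::chrono::system_clock::now();
      return e;
    }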
Fixes: http://tracker.ceph.com/issues/24991
Signed-off-by: Casey Bodley <cbodley@redhat.com>
mgr/dashboard: Allow decreasing the PGs of an existing pool
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Tiago Melo <tmelo@suse.com>
since luminous, recovery_deletes is always true, and octopus won't
maintain compatibility with jewel OSDs
Signed-off-by: lishuhao <lishuhao@unitedstack.com>
there is no need to pass ConfigProxy to this function, and passing a
string_view also makes it easier to reuse this function.
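a simplified sketch of the reshaped function (error handling and the real
implementation details omitted):

    #include <fstream>
    #include <string>
    #include <string_view>
    #include <unistd.h>

    // before: the function took a ConfigProxy just to read the path out of
    // it; now any caller with a path in hand can use it
    int pidfile_write(std::string_view path) {
      if (path.empty()) {
        return 0;  // no pidfile requested
      }
      std::ofstream f{std::string(path)};
      if (!f) {
        return -1;
      }
      f << getpid() << '\n';
      return 0;
    }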
Signed-off-by: Kefu Chai <kchai@redhat.com>
These commands help clean up stale expired objects left behind after a
reshard has happened.
Contains the following changesets:
* rgw_bucket: add an AdminOp function to clean up stale expired objects.
There is also a dry-run option, which lists the objects for review before
we actually delete them (see the sketch after this list).
* rgw_bucket: introduce the rgw_object_get_attr helper function.
Since this could be reused across other functions that fetch a single
xattr, both the get-acl path and the new object expire-stale commands now
use this function to read the value of a single xattr.
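A minimal sketch of the dry-run shape described above (hypothetical names,
not the rgw code):

    #include <iostream>
    #include <string>
    #include <vector>

    void process_stale(const std::vector<std::string>& stale_oids,
                       bool dry_run) {
      for (const auto& oid : stale_oids) {
        std::cout << oid << '\n';  // always list, so the admin can inspect
        if (!dry_run) {
          // remove_stale_object(oid);  // hypothetical removal call
        }
      }
    }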
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Previously we fetched the bucket instance info, which changes during a
reshard, causing the fetch to fail. Since the subsequent checks assume
such a failure means a deleted bucket, the expiry hint was purged as well,
leaving a non-deleted object alongside a deleted hint. This should fix
newer runs of the object expiry process. Finding stale expired objects
from earlier runs will require more complex rgw-admin tooling support.
Fixes: https://tracker.ceph.com/issues/39495
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
we merge ObjectCleanRegions to calculate the data that needs recovery when
merging logs.
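a simplified, self-contained sketch of the idea (assuming sorted,
non-overlapping intervals; the real ObjectCleanRegions is interval_set
based): a byte range stays clean only if both sides agree, so the merge
intersects the clean sets and everything outside the result needs
recovery.

    #include <algorithm>
    #include <cstdint>
    #include <utility>
    #include <vector>

    using Interval = std::pair<uint64_t, uint64_t>;  // [begin, end)

    std::vector<Interval> merge_clean(const std::vector<Interval>& a,
                                      const std::vector<Interval>& b) {
      std::vector<Interval> out;
      size_t i = 0, j = 0;
      while (i < a.size() && j < b.size()) {
        const uint64_t lo = std::max(a[i].first, b[j].first);
        const uint64_t hi = std::min(a[i].second, b[j].second);
        if (lo < hi) {
          out.emplace_back(lo, hi);  // clean on both sides stays clean
        }
        (a[i].second < b[j].second ? i : j)++;  // advance the one ending first
      }
      return out;  // ranges outside these intervals need recovery
    }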
Signed-off-by: Ning Yao <yaoning@unitedstack.com>
Signed-off-by: lishuhao <lishuhao@unitedstack.com>