A default collection has pool(0) and NO_SHARD, so the ghobject_t
should carry the same settings. Also, the default ghobject_t hash is
not within 1<<(32-4), so set the collection bits to 0.
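A minimal sketch of the containment rule at play, assuming a zero-prefix
split check; `contains` is a hypothetical stand-in, not the actual SeaStore
implementation:
```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: with N split bits, a zero-prefixed collection only
// covers hashes below 1 << (32 - N); with bits = 0 the prefix is empty and
// every hash is covered, which is why the collection bits are set to 0.
static bool contains(uint32_t obj_hash, int bits) {
  if (bits == 0)
    return true;                          // no prefix: matches everything
  return (obj_hash >> (32 - bits)) == 0;  // zero-prefix collection
}

int main() {
  assert(!contains(0xF0000000u, 4));  // not within 1 << (32 - 4)
  assert(contains(0xF0000000u, 0));   // bits = 0: always contained
}
```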
Signed-off-by: chunmei-liu <chunmei.liu@intel.com>
Before the patch it was possible for `OSDConnectionPriv` to get
destructed before a `PipelineHandle` instance that was still using
it. The reason is that our remote-handling operations store `conn` directly
while `handle` is defined in a parent class. Under the C++ destruction
rules, derived-class members are destroyed before the base subobject,
so the former gets deinitialized earlier.
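A minimal sketch of that destruction-order rule, with hypothetical types
rather than the actual crimson classes:
```cpp
#include <iostream>

struct Conn   { ~Conn()   { std::cout << "conn destroyed first\n"; } };
struct Handle { ~Handle() { std::cout << "handle destroyed last\n"; } };

struct Parent {
  Handle handle;  // lives in the base subobject
};

struct Operation : Parent {
  Conn conn;      // lives in the derived class
};

int main() {
  Operation op;
  // On scope exit the derived members go first, then the base ones:
  //   conn destroyed first
  //   handle destroyed last
  // If ~Handle() still touches Conn's resources, that is a use-after-free.
}
```
With the real classes this manifested as a heap-use-after-free reported
by AddressSanitizer: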
```
==756032==ERROR: AddressSanitizer: heap-use-after-free on address 0x615000039684 at pc 0x0000020bdfa2 bp 0x7ffd3abfa370 sp 0x7ffd3abfa360
READ of size 1 at 0x615000039684 thread T0
Reactor stalled for 261 ms on shard 0. Backtrace: 0x45d9d 0xe90f6d1 0xe6b8a1d 0xe6d1205 0xe6d16a8 0xe6d1938 0xe6d1c03 0x12cdf 0xccebf 0x7f6447161b1e 0x7f644714aee8 0x7f644714eed6 0x7f644714fb36 0x7f64471420b5 0x7f6447143f3a 0xd61d0 0x32412 0xbd8a7 0xbd134 0xbdc1a 0x20bdfa1 0x20c184e 0x352eb7f 0x352fa28 0x20b04a5 0x1be30e5 0xe694bc4 0xe6ebb8a 0xe843a11 0xe845a22 0xe29f497 0xe2a3ccd 0x1ab1841 0x3aca2 0x175698d
#0 0x20bdfa1 in seastar::shared_mutex::unlock() ../src/seastar/include/seastar/core/shared_mutex.hh:122
#1 0x20c184e in crimson::OrderedExclusivePhaseT<crimson::osd::ConnectionPipeline::GetPG>::exit() ../src/crimson/common/operation.h:548
#2 0x20c184e in crimson::OrderedExclusivePhaseT<crimson::osd::ConnectionPipeline::GetPG>::ExitBarrier::exit() ../src/crimson/common/operation.h:533
#3 0x20c184e in crimson::OrderedExclusivePhaseT<crimson::osd::ConnectionPipeline::GetPG>::ExitBarrier::cancel() ../src/crimson/common/operation.h:539
#4 0x20c184e in crimson::OrderedExclusivePhaseT<crimson::osd::ConnectionPipeline::GetPG>::ExitBarrier::~ExitBarrier() ../src/crimson/common/operation.h:543
#5 0x20c184e in crimson::OrderedExclusivePhaseT<crimson::osd::ConnectionPipeline::GetPG>::ExitBarrier::~ExitBarrier() ../src/crimson/common/operation.h:544
#6 0x352eb7f in std::default_delete<crimson::PipelineExitBarrierI>::operator()(crimson::PipelineExitBarrierI*) const /opt/rh/gcc-toolset-11/root/usr/include/c++/11/bits/unique_ptr.h:85
#7 0x352eb7f in std::unique_ptr<crimson::PipelineExitBarrierI, std::default_delete<crimson::PipelineExitBarrierI> >::~unique_ptr() /opt/rh/gcc-toolset-11/root/usr/include/c++/11/bits/unique_ptr.h:361
#8 0x352eb7f in crimson::PipelineHandle::~PipelineHandle() ../src/crimson/common/operation.h:457
#9 0x352eb7f in crimson::osd::PhasedOperationT<crimson::osd::ClientRequest>::~PhasedOperationT() ../src/crimson/osd/osd_operation.h:152
#10 0x352eb7f in crimson::osd::ClientRequest::~ClientRequest() ../src/crimson/osd/osd_operations/client_request.cc:64
#11 ...
```
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
doc/cephadm/services: document the config section of service specs
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Anthony D'Atri <anthonyeleven@users.noreply.github.com>
Make it clearer that not passing --zap will leave the VG/LV intact, and
therefore all the LVM metadata associated with the OSD being removed
from the Ceph cluster will remain.
Fixes: https://tracker.ceph.com/issues/56092
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
Previously we treated `--no-mon-config` as one of the
`seastar_n_early_args`. However, this was wrong, as it truly
belongs to `config_proxy_args`:
```cpp
int md_config_t::parse_argv(ConfigValues& values,
                            const ConfigTracker& tracker,
                            std::vector<const char*>& args, int level)
{
  // ...
  else if (ceph_argparse_flag(args, i, "--no-mon-config", (char*)NULL)) {
    values.no_mon_config = true;
  }
  // ...
}
```
The net result was that `--no-mon-config` got ignored, which was
the reason behind many dead jobs at Sepia.
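A minimal sketch of why the classification matters; the predicate below is
hypothetical, not the actual crimson argv plumbing. Anything consumed as a
Seastar early arg never reaches `md_config_t::parse_argv()`, so a flag
handled there must be routed to `config_proxy_args`:
```cpp
#include <cstring>
#include <vector>

// Hypothetical predicate standing in for the real early-args matching.
static bool is_seastar_early_arg(const char* a) {
  return std::strcmp(a, "--smp") == 0;  // e.g. a genuine Seastar option
}

int main(int argc, char** argv) {
  std::vector<const char*> early_args;        // stripped before config parsing
  std::vector<const char*> config_proxy_args; // reach md_config_t::parse_argv()
  for (int i = 1; i < argc; ++i) {
    if (is_seastar_early_arg(argv[i])) {
      early_args.push_back(argv[i]);          // parse_argv() never sees these
    } else {
      config_proxy_args.push_back(argv[i]);   // --no-mon-config must land here
    }
  }
}
```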
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
mds: skip fetching the dirfrags if not a directory
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jos Collin <jcollin@redhat.com>
Reviewed-by: Ramana Raja <rraja@redhat.com>
If any clones are in the pending or in-progress state, show them
in the 'fs subvolume snapshot info' command output.
Fixes: https://tracker.ceph.com/issues/55041
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
If any clones are in the pending or in-progress state, show them
in the 'fs subvolume snapshot info' command output. The field exists
only when clones are in the pending or in-progress state.
Fixes: https://tracker.ceph.com/issues/55041
Signed-off-by: Nikhilkumar Shelke <nshelke@redhat.com>
mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Ramana Raja <rraja@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
crimson: bring alloc hints to OpsExecuter, ignore them in SeaStore
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Nitzan Mordechai <nmordech@redhat.com>