This was attempted in commit 69a7ed4eab ("run-make-check: enable
WITH_RBD_RWL when WITH_PMEM is true") but never completed. We soon
bumped the requirement on libpmem, so WITH_SYSTEM_PMDK=ON wouldn't
have worked anyway.
Enable the RWL mode conditionally based on the WITH_RBD_RWL variable.
Enable the SSD mode unconditionally as it has no special dependencies
and can be built on any architecture.
Fixes: https://tracker.ceph.com/issues/55285
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Take bf0b161115 ("test/encoding/check-generated.sh: show diff if cmp
fails") a bit further. Suggesting "cmp $tmp1 $tmp2" isn't very helpful
since cmp would report just the mismatch offset.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Initializing the individual bit field members leaves the remaining two
bits uninitialized and that garbage state gets persisted.
In general, using bit fields in a structure where the layout actually
matters is not desirable. Even with a few single bits, such as here,
their order, strictly speaking, is not guaranteed:
    An implementation may allocate any addressable storage unit large
    enough to hold a bit-field. If enough space remains, a bit-field
    that immediately follows another bit-field in a structure shall be
    packed into adjacent bits of the same unit. If insufficient space
    remains, whether a bit-field that does not fit is put into the next
    unit or overlaps adjacent units is implementation-defined. The
    order of allocation of bit-fields within a unit (high-order to
    low-order or low-order to high-order) is implementation-defined.
    The alignment of the addressable storage unit is unspecified.
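To illustrate the failure mode (with a made-up struct, not the actual
on-disk layout), zero-initializing the whole storage unit before setting
the named bits avoids persisting garbage in the unnamed bits:

    #include <cstdint>
    #include <cstring>

    struct flags_t {          // hypothetical example, not the real struct
      uint8_t exists   : 1;
      uint8_t valid    : 1;
      uint8_t updating : 1;
      uint8_t dirty    : 1;
      uint8_t in_use   : 1;
      uint8_t evicting : 1;
      // the remaining two bits of the byte are not named
    };

    void init_bad(flags_t& f) {
      f.exists = 1;           // the two unnamed bits keep whatever garbage
      f.valid = 1;            // happened to be in memory
    }

    void init_good(flags_t& f) {
      std::memset(&f, 0, sizeof(f));   // clear the whole unit first
      f.exists = 1;
      f.valid = 1;
    }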
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Given a prefix, fetch only those objects matching the prefix.
In addition, skip the entries containing "delim" and instead include
those entries in common_prefixes.
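A rough sketch of the intended listing semantics (illustrative only; the
names below are not the actual RGW code):

    #include <set>
    #include <string>
    #include <vector>

    void list_objects(const std::vector<std::string>& keys,
                      const std::string& prefix, const std::string& delim,
                      std::vector<std::string>& objects,
                      std::set<std::string>& common_prefixes) {
      for (const auto& key : keys) {
        if (key.compare(0, prefix.size(), prefix) != 0)
          continue;                     // fetch only keys matching the prefix
        auto pos = delim.empty() ? std::string::npos
                                 : key.find(delim, prefix.size());
        if (pos == std::string::npos)
          objects.push_back(key);       // regular listing entry
        else                            // delimiter found after the prefix:
          common_prefixes.insert(key.substr(0, pos + delim.size()));
      }
    }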
Signed-off-by: Soumya Koduri <skoduri@redhat.com>
The purpose of this patch is to add initial support for offloading
memory/pmem operations, when used synchronously, through the hardware
path in the DML library.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
In `master` the milestone step exits and causes the remaining tasks not to be run. I previously tried with the `continue-on-error` flag, but it didn't work, so let's try putting that step at the end.
Signed-off-by: Ernesto Puerta <epuertat@redhat.com>
Limits RocksDB omap Seek operations to the relevant key range of the object's omap.
This prevents RocksDB from unnecessarily iterating over delete range tombstones in
irrelevant omap CF shards. Avoids extreme performance degradation commonly caused
by tombstones generated from RGW bucket resharding cleanup. Also prefers
CFIteratorImpl over ShardMergeIteratorImpl when we can determine that all
keys within the specified IteratorBounds must be in a single CF.
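For context, a minimal sketch of the underlying RocksDB mechanism (assumed
helper names, not the RocksDBStore code itself): the iterator bounds keep
seeks from ever touching keys or tombstones outside the object's omap key
range.

    #include <rocksdb/db.h>
    #include <memory>
    #include <string>
    #include <vector>

    std::vector<std::string>
    list_omap_keys(rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
                   const std::string& lower, const std::string& upper) {
      rocksdb::Slice lb(lower), ub(upper);   // must outlive the iterator
      rocksdb::ReadOptions opts;
      opts.iterate_lower_bound = &lb;        // keys < lower are never visited
      opts.iterate_upper_bound = &ub;        // keys >= upper are never visited

      std::vector<std::string> keys;
      std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(opts, cf));
      for (it->Seek(lb); it->Valid(); it->Next())
        keys.push_back(it->key().ToString());
      return keys;
    }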
Fixes: https://tracker.ceph.com/issues/55324
Signed-off-by: Cory Snyder <csnyder@iland.com>
mgr/cephadm: allow setting insecure_skip_verify for alertmanager
Reviewed-by: Francesco Pantano <fpantano@redhat.com>
Reviewed-by: Patrick Seidensal <pseidensal@suse.com>
Save the cephadm mgr container logs inside a folder and later on
archive them and retrieve them as an artifact in the cephadm dashboard e2e jobs.
Fixes: https://tracker.ceph.com/issues/55247
Signed-off-by: Nizamudeen A <nia@redhat.com>
For old ceph clusters the clients won't send any metrics to
them by default unless this commit has been backported, but the
option 'client_collect_and_send_global_metrics' can still
be used to enable this manually.
This will fix the crash bug when upgrading from old ceph clusters,
whose MDSes will crash once they receive unknown metrics.
Fixes: https://tracker.ceph.com/issues/54411
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Be careful when enabling this because it may crash the old MDSes while
upgrading.
Fixes: https://tracker.ceph.com/issues/54411
Signed-off-by: Xiubo Li <xiubli@redhat.com>
If the connection was accidentally closed due to a socket issue or
something else, the client will try to reopen its already-open sessions,
but for now the MDS will just discard the session open request.
The client will then keep waiting for the reply from the mds forever.
We need to tell the clients what has happened instead of discarding the
request directly. Then, when the client gets the session open reply, it
can do whatever is needed.
Fixes: https://tracker.ceph.com/issues/53911
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Since the two sentences in the note aren't strictly related to
each other, it's better to split that note into two.
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Otherwise the tests may run forever. This was already done for the
mds upgrade sequence; just adding it in the other two places here.
Related to: https://tracker.ceph.com/issues/53939
Signed-off-by: Adam King <adking@redhat.com>
If another python3 with a higher version is found by
find_package(Python3), CMake's install script would just
install the Python modules/extensions into that python3's
dist-packages directory, and the packaging script would fail
to find these artifacts when trying to package them.
So we need to ensure that the install directories for Python
modules/extensions are always "versioned" with the WITH_PYTHON3
CMake option.
Signed-off-by: Kefu Chai <tchaikov@gmail.com>
mgr/cephadm: retry mgr fail over in case of transient failure
Reviewed-by: Michael Fritch <mfritch@suse.com>
Reviewed-by: Redouane Kachach <rkachach@redhat.com>
The global snaprealm would be created and then destroyed immediately
every time it was updated.
Fixes: https://tracker.ceph.com/issues/54362
Signed-off-by: Xiubo Li <xiubli@redhat.com>
The purpose is to make the pmem device usage more flexible
than the current solution, and to prepare for potential
offloading to a hardware engine later.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Add a "secure" parameter to alertmanager spec that will cause it
to deploy alertmanagers with insecure_skip_verify as true or false
depending on the value given for "secure".
NOTE: alertmanager must still be reconfigured after applying a yaml
with this option changed.
Fixes: https://tracker.ceph.com/issues/55272
Fixes: https://tracker.ceph.com/issues/55333
Signed-off-by: Adam King <adking@redhat.com>
The type of 'num_fwd' in ceph 'MClientRequestForward' is 'int32_t',
while in 'ceph_mds_request_head' the type is '__u8'. So in case
the request bounces between MDSes more than 256 times, the client
will get stuck.
In this case it's usually a bug in the MDS, and continuing to bounce
the request makes no sense.
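A tiny standalone illustration of the truncation (simplified types, not
the actual wire structs):

    #include <cassert>
    #include <cstdint>

    int main() {
      int32_t num_fwd = 256;           // MClientRequestForward counter
      uint8_t wire_num_fwd = num_fwd;  // __u8 field in ceph_mds_request_head
      assert(wire_num_fwd == 0);       // the 256th forward wraps back to 0
      return 0;
    }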
Fixes: https://tracker.ceph.com/issues/55129
Signed-off-by: Xiubo Li <xiubli@redhat.com>