When a cache tier promotes an object with one or more error PG log
entries, these errors need to be propagated and recorded for dup
op detection.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
If the base tier records an error against an operation, the cache
tier currently might incorrectly respond with a success return code.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
A                               B
SharedBlobSet::lookup()
  takes lock
  nref is not 0
                                SharedBlob::put()
                                  --nref
  returns SharedBlobRef,
  ++nref
                                  takes cache lock
                                  SharedBlobSet::remove
                                    takes lock
                                    removes
                                  deletes SharedBlob

-> A ends up with a ref to deleted SharedBlob
Fix by verifying that nref is still zero in SharedBlobSet::remove(),
while we are holding the SharedBlobSet::lock. Because lookup() increments
the ref under that same lock, by the time remove() holds the lock any
racing lookup has already bumped nref, so remove() can check that nref is
still zero before removing the entry. If it is not, we have raced, and
put() bails out and does nothing.
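A minimal sketch of the fixed check, using simplified stand-ins for the
real BlueStore types (the actual SharedBlob and SharedBlobSet carry far
more state):

  #include <atomic>
  #include <cstdint>
  #include <map>
  #include <mutex>

  struct SharedBlob {
    std::atomic<int> nref{0};
    uint64_t sbid = 0;
  };

  struct SharedBlobSet {
    std::mutex lock;
    std::map<uint64_t, SharedBlob*> sb_map;

    // lookup() takes the new ref while still holding the set lock, so a
    // concurrent remove() holding the same lock will observe the bumped nref.
    SharedBlob* lookup(uint64_t sbid) {
      std::lock_guard<std::mutex> l(lock);
      auto p = sb_map.find(sbid);
      if (p == sb_map.end())
        return nullptr;
      ++p->second->nref;
      return p->second;
    }

    // remove() erases the entry only if nref is still zero; if a lookup()
    // raced and revived the blob, bail out so put() skips the deletion.
    bool remove(SharedBlob* sb) {
      std::lock_guard<std::mutex> l(lock);
      if (sb->nref != 0)
        return false;        // raced with lookup(); keep the entry
      sb_map.erase(sb->sbid);
      return true;
    }
  };

put() then deletes the SharedBlob only when remove() reports that the
entry was actually removed.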
Fixes: http://tracker.ceph.com/issues/36526
Signed-off-by: Sage Weil <sage@redhat.com>
After the recent logging rework, ManyGatherLog and
ManyGatherLogStringAssign are identical barring the string.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Unlike ConcreteEntry, MutableEntry can be appended to. Reserving the
exact number of elements before every append is harmful: vector will
likely reallocate each time and grow linearly instead of geometrically.
This results in quadratic behaviour when we spill past the preallocated
capacity and doesn't benefit the fast path in any way.
The new test case takes half a second with this patch and many hours
spinning in memmove without this patch.
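A standalone illustration of the effect (not the actual entry code):

  #include <vector>

  // anti-pattern: reserve(size() + 1) before every push_back() typically
  // grows the buffer to exactly the requested size, so the next append
  // reallocates and copies everything again -- O(n^2) over n appends.
  void append_exact_reserve(std::vector<char>& v, char c) {
    v.reserve(v.size() + 1);
    v.push_back(c);
  }

  // letting push_back() manage capacity preserves geometric growth,
  // so n appends stay O(n) amortized.
  void append_plain(std::vector<char>& v, char c) {
    v.push_back(c);
  }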
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
we pack the assert() params for smaller code size, but this creates an
inlined `assert_data_ctx` instance for every compilation unit which calls
a ceph_assert() defined in a .h file.
__PRETTY_FUNCTION__ is likely to be referenced by `assert_data_ctx`
sections which are included by different compiled object files. if the
ceph_assert() call is used in a header file, then there will be multiple
`assert_data_ctx` sections sharing the same identifier. these sections are
defined as "COMDAT" group sections, i.e. common data sections. when the linker
sees multiple COMDAT sections with the same identifier, it simply discards
the duplicated ones and keeps only a single copy. without enabling
ASan, GCC can always handle this problem just fine. but the dedup feature
does not work well with ASan. if ASan is enabled, and we link the objects
in the wrong order, some references will be pointing to the discarded
sections.
to address this issue, we could audit the link command line and inspect
all .o files to make sure they are properly ordered. but this is
non-trivial. as a workaround, in this change, the assert params are not
packed, and are sent to the __ceph_assert_fail() overload which accepts
unpacked params directly, so the COMDAT section is not created.
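roughly, the difference looks like this (hypothetical, heavily simplified
macros; the real ceph_assert machinery has more pieces):

  #include <cstdio>
  #include <cstdlib>

  struct assert_data {
    const char* assertion;
    const char* file;
    int line;
    const char* function;
  };

  [[noreturn]] void __ceph_assert_fail(const char* assertion, const char* file,
                                       int line, const char* function) {
    std::fprintf(stderr, "%s:%d: %s: assertion `%s' failed\n",
                 file, line, function, assertion);
    std::abort();
  }

  [[noreturn]] void __ceph_assert_fail(const assert_data& ctx) {
    __ceph_assert_fail(ctx.assertion, ctx.file, ctx.line, ctx.function);
  }

  // packed form: when expanded inside an inline function defined in a
  // header, every including object file emits its own COMDAT copy of the
  // static `assert_data_ctx`, all sharing the same identifier.
  #define ceph_assert_packed(expr)                                          \
    do {                                                                    \
      static const assert_data assert_data_ctx = {                          \
        #expr, __FILE__, __LINE__, __PRETTY_FUNCTION__};                    \
      if (!(expr)) __ceph_assert_fail(assert_data_ctx);                     \
    } while (false)

  // unpacked form (this change): no static object, hence no COMDAT section;
  // the params go straight to the overload that takes them individually.
  #define ceph_assert_unpacked(expr)                                        \
    do {                                                                    \
      if (!(expr))                                                          \
        __ceph_assert_fail(#expr, __FILE__, __LINE__, __PRETTY_FUNCTION__); \
    } while (false)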
Signed-off-by: Kefu Chai <kchai@redhat.com>
If there is a workunit task associated with the same client, the two
tasks will attempt to clone the suite repo to the same directory.
Worse, if the tasks run in parallel, the two clones will clobber each
other.
Fixes: http://tracker.ceph.com/issues/36542
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
- rename from foo to bar
- foo onode is moved to bar in onode_map
- keys removed at position foo as part of txc
- new onode for foo is installed at foo in map
...
- cache trims foo
...
- new txn B does get_onode on foo, reads old foo (now bar) onode into foo ***
- txn A commits
-> onode cache has foo with stale bar content
Fix by holding a ref to the replacement foo onode so that get_onode cannot
read stale metadata out of kvdb before txn A commits.
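A simplified sketch of the idea with stand-in types; exactly where the ref
is held (here, a hypothetical per-transaction pin list) is illustrative
only:

  #include <map>
  #include <memory>
  #include <string>
  #include <vector>

  struct Onode {
    std::string key;
  };
  using OnodeRef = std::shared_ptr<Onode>;

  struct OnodeCache {
    std::map<std::string, OnodeRef> onode_map;

    // trim() may only evict entries that nobody else holds a ref to.
    void trim() {
      for (auto p = onode_map.begin(); p != onode_map.end();) {
        if (p->second.use_count() == 1)
          p = onode_map.erase(p);
        else
          ++p;
      }
    }
  };

  struct TransContext {
    std::vector<OnodeRef> pinned;   // refs held until the txn commits
  };

  // rename foo -> bar: move the old onode to bar, install a fresh onode at
  // foo, and pin the fresh onode in the txn so trim() cannot drop it and a
  // later get_onode("foo") cannot repopulate the slot from stale kv data
  // before the rename commits.
  void rename(OnodeCache& cache, TransContext& txc,
              const std::string& from, const std::string& to) {
    OnodeRef oldo = cache.onode_map[from];
    cache.onode_map[to] = oldo;
    auto newo = std::make_shared<Onode>();
    newo->key = from;
    cache.onode_map[from] = newo;
    txc.pinned.push_back(newo);     // the fix: hold a ref to the new foo onode
  }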
Fixes: http://tracker.ceph.com/issues/36541
Signed-off-by: Sage Weil <sage@redhat.com>
ErasureCodeProfileService was being provided twice and that was causing
problems in production mode.
Fixes: https://tracker.ceph.com/issues/36544
Signed-off-by: Tiago Melo <tmelo@suse.com>
When initializing the Device structure, it has to run is_valid() to
ensure the data structures (_is_valid & rejected_reasons) are populated
according to the device state.
If we cannot open a block device with O_RDWR in exclusive mode, it means
someone is actually using it, e.g. as a raw database or similar.
In that case, the device should be considered unusable, as OSDs will
not be in a position to use it.
Signed-off-by: Erwan Velu <erwan@redhat.com>
We are already reporting the rotational & scheduler attributes of a disk
device. Reporting nr_requests could be useful to see how many concurrent
IOs the device supports/reports.
That could help detect badly detected/configured devices.
Signed-off-by: Erwan Velu <erwan@redhat.com>
We are already reporting the model & vendor of a given disk, so let's also
report the revision of the firmware. That is useful to filter out some
known broken revisions.
Signed-off-by: Erwan Velu <erwan@redhat.com>
If a device is said to be read-only, there is no chance we can actually
use it. So let's report it as unusable.
Signed-off-by: Erwan Velu <erwan@redhat.com>
A block device can be filtered-out/ignored because it has features that
don't match Ceph's expectations.
As of today, the current code rejected removable devices, but that was
pretty hidden from the user and implicit in the get_devices() function.
This patch creates a new is_valid() function to perform all the
rejection tests and return whether this device can be used in the Ceph
context or not.
If is_valid() returns False, the 'rejected_reasons' list reports all
the reasons why that device got rejected.
Signed-off-by: Erwan Velu <erwan@redhat.com>
otherwise we will have
/usr/bin/ld: libzstd/lib/libzstd.a(error_private.c.o): relocation
R_X86_64_32S against `.rodata' can not be used when making a shared
object; recompile with -fPIC
Signed-off-by: Kefu Chai <kchai@redhat.com>
in python's distutils.ccompiler, linker_exe is composed using CC instead
of LDFLAGS. the latter only affects how it builds (shared) libraries.
also put CMAKE_C_FLAGS into the cflags of the compiler for building
python C extensions; it's more consistent this way. more importantly,
if we build with ASan enabled, the canary program, a.k.a. rados_dummy.c,
won't link without the proper CFLAGS.
without this change, rados.so fails to build with errors like:
/usr/bin/ld: /var/ssd/ceph/build/lib/librados.so: undefined reference to
`__asan_stack_free_10'
/usr/bin/ld: /var/ssd/ceph/build/lib/librados.so: undefined reference to
`__asan_report_exp_store8'
...
...
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
Link Error: RADOS library not found
make[3]: ***
[src/pybind/rados/CMakeFiles/cython_rados.dir/build.make:57:
src/pybind/rados/CMakeFiles/cython_rados] Error 1
Signed-off-by: Kefu Chai <kchai@redhat.com>
otherwise "cmake -DWITH_ASAN=ON -DCMAKE_BUILD_TYPE=Debug" will fail to
build with
/usr/bin/ld: //var/ssd/ceph/build/lib/libceph-common.so.0: undefined
reference to `TextTable::endrow'
Signed-off-by: Kefu Chai <kchai@redhat.com>
- manifest unset op to foo-chunk object
- remove manifest flag
- commit
- send an ack to a client
- send decrement messages ("chunk_put") to old chunks (bar-chunk)
The current unit test (ManifestUnset) sends a "chunk_read" command (to bar-chunk)
in order to see whether the chunk's reference count has been decreased.
But, as described above, the "chunk_read" event can be triggered after a client
(test application) receives an ack. Therefore, there is a corner case where
bar-chunk (in the chunk pool) receives "chunk_read" first instead of "chunk_put".
The reference count model of dedup/tiering is based on false positives (#24230),
so decreasing the reference count is not guaranteed. If a reference mismatch
occurs, chunk-scrub (this is WIP) will fix it.
One guaranteed thing is that the existing manifest flag is removed.
So, the solution of this commit is to just re-send the unset op, and then
check that the return value is -EOPNOTSUPP (this means the manifest flag is removed).
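A rough sketch of that re-check, with a hypothetical UnsetOp callback
standing in for the test's librados plumbing:

  #include <cerrno>
  #include <functional>
  #include <string>

  // hypothetical helper type: issues the manifest-unset op against `oid`
  // and returns the op's return value.
  using UnsetOp = std::function<int(const std::string& oid)>;

  // the only guaranteed post-condition is that the manifest flag is gone:
  // once the first unset has been applied, a second unset fails with
  // -EOPNOTSUPP.  poll on that instead of expecting the non-guaranteed
  // "chunk_put" refcount decrement on bar-chunk.
  bool wait_until_manifest_removed(const UnsetOp& unset,
                                   const std::string& oid, int max_tries) {
    for (int i = 0; i < max_tries; ++i) {
      if (unset(oid) == -EOPNOTSUPP)
        return true;    // manifest flag already removed
      // first unset may still be in flight; retry
    }
    return false;
  }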
Fixes: http://tracker.ceph.com/issues/24485
Signed-off-by: Myoungwon Oh <omwmw@sk.com>