cls/rgw: rgw_dir_suggest_changes detects race with completion
Reviewed-by: Matt Benjamin <mbenjami@redhat.com>
Reviewed-by: Mark Kogan <mkogan@redhat.com>
Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
rgw: Update "CEPH_RGW_DIR_SUGGEST_LOG_OP" for remove entries
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
Reviewed-by: Matt Benjamin <mbenjami@redhat.com>
It's needed to address an FTBFS due to Seastar's
no-locking-when-throwing hack.
Tags: seastar submodule
Signed-off-by: Radosław Zarzyński <rzarzyns@redhat.com>
It's necessary since 710a1bfdc02202fe9e59df8ea31de5b82b893fb4
in Seastar.
This change is part of an ongoing upgrade of Seastar which will
be completed in a follow-up PR, after merging another change
into Seastar's upstream.
Signed-off-by: Radosław Zarzyński <rzarzyns@redhat.com>
On review, this constraint was correct--it does reliably prevent
same-cycle re-runs when lc threads rendezvous on a bucket.
Also, for concurrent (or stale) and already processed buckets,
remember to advance head past the corresponding buckets.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Restore (and robustify) the assertion that, in general, each bucket
shard should be processed once per scheduling cycle.
If the prior cycle did not finish, processing in the current cycle
will continue from the marker where the last cycle left off.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
E.g.,
[
  {
    "bucket": ":bucket1:f2f4a8dd-7ec9-4758-bc4f-c8f5fbc85109.4137.2",
    "shard": "lc.6",
    "started": "Fri, 18 Feb 2022 17:30:16 GMT",
    "status": "COMPLETE"
  },
  ...
]
The prototyped approach adds a copy of the shard name (which is
assured to be a small string) to rgw::sal::LCEntry. It's not
expected to be represented in underlying store types.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
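A minimal sketch of the idea, with illustrative names (the real rgw::sal::LCEntry carries more state and sits behind the SAL abstraction): the shard name is just a small in-memory string copied onto the entry for reporting, not something persisted by the underlying store.

  #include <cstdint>
  #include <string>

  struct LCEntrySketch {
    std::string bucket;      // bucket entry key
    uint64_t start_time{0};  // when processing of this entry started
    uint32_t status{0};      // e.g. UNINITIAL / PROCESSING / COMPLETE
    std::string shard;       // transient copy of the owning lc shard name, e.g. "lc.6"
  };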
Conveniently, this arose after removing all lifecycle shards from
RADOS, proving it could be done safely.
A restart is currently needed to recognize new lifecycle shards,
if rgw_gc_max_objs also changed.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Remove the now-unused RGWLC::bucket_lc_prepare. Wrap serializer calls
in RGWLC::process(int index...) with a simple backoff, limited to 5
retries.
In RGWLC::process(int index...), also open-code the behavior of
RGWLC::bucket_lc_prepare(...), as the lock sharing between these
methods is error-prone. For now, that method still exists so that it
can be called from the single-bucket process path.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
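A sketch of the retry-with-backoff pattern described above; the callable and the delay constants are illustrative, not the actual RGWLC serializer API:

  #include <chrono>
  #include <functional>
  #include <thread>

  // Try to take the lc shard lock, backing off linearly and giving up
  // after 5 attempts; the caller then skips or reschedules this shard.
  bool lock_with_backoff(const std::function<bool()>& try_lock) {
    constexpr int max_retries = 5;
    constexpr auto base_delay = std::chrono::milliseconds(200); // illustrative
    for (int attempt = 0; attempt < max_retries; ++attempt) {
      if (try_lock()) {
        return true;
      }
      std::this_thread::sleep_for(base_delay * (attempt + 1));
    }
    return false;
  }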
This is an alternative solution to the (newly exposed) lifecycle
shard starvation problem reported by Jeegen Chen.
There was always a starvation condition implied by the
reset of the lc shard head at the start of processing. The introduction
of "stale sessions" in parallel lifecycle changes made it more
visible, in particular when rgw_lc_debug_interval was set to a small
value and many buckets had lifecycle policy.
My hypothesis in this change is that lifecycle processing for each
lc shard should /always/ continue through the full set of eligible
buckets for the shard, regardless of how many processing cycles might
be required to do so. In general, restarting at the first eligible
bucket on each reschedule invites starvation when processing "gets
behind", so just avoid it.
Fixes: https://tracker.ceph.com/issues/49446
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 6e2ae13adced6b3dbb2fe16b547a30e9d68dfa06)
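In toy form, the change amounts to resuming each cycle from the shard's persisted marker rather than resetting it; the container and names below are illustrative, not the actual lc shard representation:

  #include <optional>
  #include <string>
  #include <vector>

  using Entries = std::vector<std::string>;   // sorted bucket entry keys

  // First entry strictly greater than 'marker' ("" means the shard head).
  std::optional<std::string> next_entry(const Entries& e, const std::string& marker) {
    for (const auto& b : e) {
      if (b > marker) return b;
    }
    return std::nullopt;
  }

  // One cycle: resume from the persisted marker and advance it as buckets
  // are handled, so an interrupted cycle is continued, not restarted.
  std::string run_cycle(const Entries& e, std::string marker, int budget) {
    while (budget-- > 0) {
      auto b = next_entry(e, marker);
      if (!b) {
        return {};    // full pass finished; clear the marker
      }
      marker = *b;    // "process" *b, then record progress
    }
    return marker;    // out of time; the next cycle resumes here
  }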
rgwlc: add a wraparound to continued shard processing
If the full set of buckets for a given lc shard couldn't be
processed in the prior cycle, processing will start with a
non-empty marker. Note the initial marker position; then,
when the end of the shard is reached, allow processing to wrap
around to the logical beginning of the shard and proceed
through the initial marker.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
(cherry picked from commit 0b8f683d3cf444cc68fd30c3f179b9aa0ea08e7c)
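A reduced illustration of the wraparound (again with stand-in types, not the RGWLC code): the pass starts just past the initial marker, runs to the end of the shard, wraps to the logical beginning, and stops once it has proceeded through the entries up to the initial marker, so each eligible bucket is visited once per full pass.

  #include <algorithm>
  #include <string>
  #include <vector>

  using Entries = std::vector<std::string>;   // sorted bucket entry keys

  void run_cycle_with_wrap(const Entries& e, const std::string& initial_marker) {
    auto start = std::upper_bound(e.begin(), e.end(), initial_marker);
    for (auto it = start; it != e.end(); ++it) {
      // process *it -- the tail of the shard, as before
    }
    for (auto it = e.begin(); it != start; ++it) {
      // process *it -- wrapped around, proceeding through the initial marker
    }
  }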
don't report clearing incorrectly
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
The intent is to permit tracing of the bucket processing scheduler, without
expiring or transitioning any objects.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Provide an option to disable automatic clearing of stale sessions--
which, unless disabled, happens after 2 lifecycle scheduling cycles.
The default behavior is most likely not desired when debugging or
testing lifecycle processing with rgw_lc_debug_interval set, since
re-entering a running session after 2 scheduling cycles is then
fairly likely.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
to address the following error when compiling with the C++20 standard:
../src/rgw/cls_fifo_legacy.cc:2217:22: error: ISO C++20 considers use of overloaded operator '==' (with operand types 'rados::cls::fifo::journal_entry' and 'rados::cls::fifo::journal_entry') to be ambiguous despite there being a unique best viable function [-Werror,-Wambiguous-reversed-operator]
!(jiter->second == e)) {
~~~~~~~~~~~~~ ^ ~
../src/cls/fifo/cls_fifo_types.h:148:8: note: ambiguity is between a regular call to this operator and a call with the argument order reversed
bool operator ==(const journal_entry& e) {
^
Signed-off-by: Kefu Chai <tchaikov@gmail.com>
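For context, a reduced illustration of the diagnostic and of one conventional remedy, const-qualifying the member operator== so the non-reversed candidate is unambiguously preferred under C++20 (stand-in type, not the cls_fifo code):

  struct journal_entry_like {
    int part{0};

    // A non-const member operator== makes 'a == b' ambiguous in C++20,
    // because the synthesized reversed candidate 'b.operator==(a)' binds
    // the operands differently; clang reports -Wambiguous-reversed-operator.
    // Declaring the operator const gives both candidates identical
    // conversion sequences, and the non-reversed one is then preferred.
    bool operator==(const journal_entry_like& rhs) const { return part == rhs.part; }
  };

  bool same(journal_entry_like& a, journal_entry_like& b) {
    return a == b;   // unambiguous with the const-qualified operator
  }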
the std::allocator<T> member function destroy() was deprecated in
c++17 and removed in c++20. call the static functions on
std::allocator_traits<T> for destroy() and deallocate() instead.
resolves the c++20 compilation error with clang 13:
In file included from ceph/src/test/cls_fifo/bench_cls_fifo.cc:38:
ceph/src/neorados/cls/fifo.h:684:7: error: no member named 'destroy' in 'std::allocator<neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>>'
a.destroy(t);
~ ^
ceph/src/neorados/cls/fifo.h:1728:11: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::assoc_delete<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>, neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>>' requested here
FIFO::assoc_delete(h, this);
^
ceph/src/neorados/cls/fifo.h:1605:6: note: in instantiation of member function 'neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::handle' requested here
handle(errc::inconsistency);
^
ceph/src/neorados/cls/fifo.h:857:8: note: in instantiation of member function 'neorados::cls::fifo::detail::JournalProcessor<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::process' requested here
p->process();
^
/usr/include/boost/asio/bind_executor.hpp:407:12: note: in instantiation of member function 'neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>::operator()' requested here
return this->target_(BOOST_ASIO_MOVE_CAST(Args)(args)...);
^
ceph/src/common/async/bind_allocator.h:179:12: note: in instantiation of function template specialization 'boost::asio::executor_binder<neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>, boost::asio::executor>::operator()<boost::system::error_code &, bool>' requested here
return this->target(std::forward<Args>(args)...);
^
ceph/src/neorados/cls/fifo.h:939:5: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_update_meta<ceph::async::allocator_binder<boost::asio::executor_binder<neorados::cls::fifo::FIFO::NewPartPreparer<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>, boost::asio::executor>, std::allocator<void>>>' requested here
_update_meta(fifo::update{}.journal_entries_add(jentries),
^
ceph/src/neorados/cls/fifo.h:1008:7: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_prepare_new_part<ceph::async::allocator_binder<boost::asio::executor_binder<(lambda at ceph/src/neorados/cls/fifo.h:1012:4), boost::asio::executor>, std::allocator<void>>>' requested here
_prepare_new_part(
^
ceph/src/neorados/cls/fifo.h:524:7: note: in instantiation of function template specialization 'neorados::cls::fifo::FIFO::_prepare_new_head<ceph::async::allocator_binder<boost::asio::executor_binder<neorados::cls::fifo::FIFO::Pusher<spawn::detail::coro_handler<boost::asio::executor_binder<void (*)(), boost::asio::executor>, void>>, boost::asio::executor>, std::allocator<void>>>' requested here
_prepare_new_head(std::move(p));
^
Signed-off-by: Casey Bodley <cbodley@redhat.com>
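The substitution itself is mechanical; a sketch with illustrative names (the neorados code applies it to its own allocator handles):

  #include <memory>

  // Route destruction and deallocation through std::allocator_traits,
  // which keeps working in C++20 where std::allocator<T>::destroy() is gone.
  template <typename T, typename Alloc = std::allocator<T>>
  void destroy_and_free(Alloc& a, T* t) {
    using traits = std::allocator_traits<Alloc>;
    traits::destroy(a, t);         // instead of a.destroy(t)
    traits::deallocate(a, t, 1);   // instead of a.deallocate(t, 1)
  }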
resolves a c++20 compilation error with clang 13:
In file included from ceph/src/client/Client.cc:55:
In file included from ceph/src/messages/MClientCaps.h:19:
In file included from ceph/src/mds/mdstypes.h:22:
ceph/src/include/xlist.h:212:27: warning: ISO C++20 considers use of overloaded operator '!=' (with operand types 'xlist<Dentry *>::const_iterator' and 'xlist<Dentry *>::const_iterator') to be ambiguous despite there being a unique best viable function with non-reversed arguments [-Wambiguous-reversed-operator]
for (const auto &item : list) {
^
ceph/src/client/Client.cc:3299:63: note: in instantiation of member function 'operator<<' requested here
ldout(cct, 20) << "link inode " << in << " parents now " << in->dentries << dendl;
^
ceph/src/include/xlist.h:202:10: note: candidate function with non-reversed arguments
bool operator!=(const_iterator& rhs) const {
^
ceph/src/include/xlist.h:199:10: note: ambiguous candidate function with reversed arguments
bool operator==(const_iterator& rhs) const {
^
Signed-off-by: Casey Bodley <cbodley@redhat.com>
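A reduced illustration of this second case (stand-in type, not the xlist code, and not necessarily the exact fix applied): here the member operators are already const, but they take their argument by non-const reference, which again leaves the reversed operator== candidate ambiguous with operator!=; taking the argument by const reference restores a unique best candidate.

  struct const_iterator_like {
    const void* cur{nullptr};

    // Taking 'rhs' by non-const reference reproduces the warning above;
    // a const reference parameter (as below) avoids it.
    bool operator==(const const_iterator_like& rhs) const { return cur == rhs.cur; }
    bool operator!=(const const_iterator_like& rhs) const { return cur != rhs.cur; }
  };

  bool at_end(const const_iterator_like& it, const const_iterator_like& end) {
    return it != end;   // resolves to operator!= without ambiguity
  }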