These are intended to replace do_osd_ops*. The implementation
is simpler and does not involve passing success and failure
callbacks. It also moves responsibility for the MOSDOpReply and
client-related error handling over to ClientRequest.
do_osd_op* will be removed once users are switched over.
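As a rough, std-only sketch of the shape change (names, types, and
signatures here are hypothetical stand-ins, not the actual crimson
declarations, which use interruptible futures):

  #include <functional>
  #include <future>

  struct OpResult { int err = 0; };

  // Old shape: success/failure continuations are threaded through and
  // the callee is responsible for replying.
  void do_osd_ops_cb(std::function<void(OpResult)> on_success,
                     std::function<void(int)> on_failure) {
    OpResult r;
    if (r.err == 0) on_success(r); else on_failure(r.err);
  }

  // New shape: a single future is returned; the caller (ClientRequest
  // in the real code) builds the MOSDOpReply and maps errors itself.
  std::future<OpResult> do_osd_ops_fut() {
    return std::async(std::launch::deferred, [] { return OpResult{}; });
  }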
Signed-off-by: Samuel Just <sjust@redhat.com>
It seems like the motivation here was to allow do_osd_ops_execute to
communicate that it didn't submit an error log by making
maybe_submit_error_log return a std::optional<eversion_t>. However,
submit_error_log itself always returns a version. Fix submit_error_log
to return eversion_t unconditionally and compensate in
do_osd_ops_execute.
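A compilable, std-only sketch of the resulting split (placeholder
values; the real functions return interruptible futures):

  #include <optional>

  struct eversion_t { unsigned epoch = 0; unsigned version = 0; };

  // submit_error_log always logs, so it always yields a version.
  eversion_t submit_error_log() {
    return eversion_t{1, 42};  // placeholder value
  }

  // do_osd_ops_execute compensates: the optional lives at the layer
  // that decides whether an error log is submitted at all.
  std::optional<eversion_t> maybe_submit_error_log(bool need_error_log) {
    if (!need_error_log) {
      return std::nullopt;
    }
    return submit_error_log();
  }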
Signed-off-by: Samuel Just <sjust@redhat.com>
The return signature previously suggested that the second future
returned could be an error. This seemed necessary due to how
effects are handled:
template <typename MutFunc>
OpsExecuter::rep_op_fut_t
OpsExecuter::flush_changes_n_do_ops_effects(
  const std::vector<OSDOp>& ops,
  SnapMapper& snap_mapper,
  OSDriver& osdriver,
  MutFunc mut_func) &&
{
  ...
  all_completed =
    std::move(all_completed).then_interruptible([this, pg=this->pg] {
      // let's do the cleaning of `op_effects` in destructor
      return interruptor::do_for_each(op_effects,
                                      [pg=std::move(pg)](auto& op_effect) {
        return op_effect->execute(pg);
      });
However, all of the actual execute implementations (created via
OpsExecuter::with_effect_on_obc) return a bare seastar::future and
cannot fail.
In a larger sense, it's actually critical that neither future returned
from flush_changes_n_do_ops_effects may fail -- they represent applying
the transaction locally and remotely. If either portion fails, there
would need to be an interval change to recover.
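In std-only terms (std::future standing in for crimson's interruptible
future aliases), the tightened contract is roughly:

  #include <future>
  #include <utility>

  // Neither member may carry an error: "submitted" resolves once the
  // transaction is submitted locally, "all_completed" once it has been
  // applied remotely as well.
  using rep_op_fut_t = std::pair<std::future<void>,   // submitted
                                 std::future<void>>;  // all_completed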
Signed-off-by: Samuel Just <sjust@redhat.com>
The idea here is that PG::do_osd_ops propagates an eagain after starting
a repair upon encountering an eio to indicate that the op should restart
from the top of ClientRequest::process_op.
However, InternalClientRequest's handler for this error simply ignores
it. ClientRequest's handling, while superficially reasonable, doesn't
actually work. Re-calling process_op would mean reentering previous
stages. This is problematic for at least a few reasons:
1. Reentering a prior stage with the same handler doesn't actually work
since the corresponding event entries will already be populated.
2. There might be other ops on the same object waiting on the process
stage. They'd need to be sent back as well in order to preserve
ordering.
Because this mechanism doesn't really seem to be fully baked, let's
remove it for now and try to reintroduce it later after
do_osd_ops[_execute] are a bit simpler.
Signed-off-by: Samuel Just <sjust@redhat.com>
Each of the two existing pipelines is shared across multiple
ops. Rather than defining them in a specific op or in
osd_operations/common/pg_pipeline.h, just declare them in
osd_operation.h.
Signed-off-by: Samuel Just <sjust@redhat.com>
f90af12d introduced check_already_complete_get_obc to replace get_obc,
but left get_obc and didn't update the other users.
Signed-off-by: Samuel Just <sjust@redhat.com>
* refs/pull/60301/head:
doc/governance: add new CSC members
Reviewed-by: Laura Flores <lflores@redhat.com>
Reviewed-by: Anthony D Atri <anthony.datri@gmail.com>
ea67f3dee2 switched to
asio::any_completion_handler<> for completions, but left some converting
overloads behind for compatibility. None of those overloads appear to be
used, so remove them.
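For illustration, the surviving pattern looks roughly like this (a
sketch assuming Boost.Asio 1.82+, where any_completion_handler converts
implicitly from any compatible callable):

  #include <boost/asio/any_completion_handler.hpp>
  #include <boost/system/error_code.hpp>
  #include <utility>

  using Completion =
    boost::asio::any_completion_handler<void(boost::system::error_code)>;

  // Any compatible callable converts to Completion at the call site,
  // so per-type converting overloads are unnecessary.
  void complete(Completion handler, boost::system::error_code ec) {
    std::move(handler)(ec);  // completion handlers are move-invoked
  }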
Signed-off-by: Casey Bodley <cbodley@redhat.com>
crimson/osd/pg: when deleting a PG, remove its snapmapper objects at the last moment, when its collections are eventually removed, just as with pg meta objects
Reviewed-by: Samuel Just <sjust@redhat.com>
doc: SubmittingPatches-backports - remove backports team
Reviewed-by: Anthony D'Atri <anthony.datri@gmail.com>
Reviewed-by: Laura Flores <lflores@redhat.com>
Fixes: https://tracker.ceph.com/issues/68355
The fix also includes adding the default zonegroup name to the sync policy details.
Signed-off-by: Naman Munet <namanmunet@li-ff83bccc-26af-11b2-a85c-a4b04bfb1003.ibm.com>
crimson/osd/backfill_state: push peer pg infos' last_backfills only when all objects before them are backfilled
Reviewed-by: Matan Breizman <mbreizma@redhat.com>
Fixes the compiler warning:
src/common/ceph_context.h: In member function ‘std::shared_ptr<std::vector<entity_addrvec_t> > ceph::common::CephContext::get_mon_addrs() const’:
src/common/ceph_context.h:288:36: warning: ‘std::shared_ptr<_Tp> std::atomic_load_explicit(const shared_ptr<_Tp>*, memory_order) [with _Tp = vector<entity_addrvec_t>]’ is deprecated: use 'std::atomic<std::shared_ptr<T>>' instead [-Wdeprecated-declarations]
288 | auto ptr = atomic_load_explicit(&_mon_addrs, std::memory_order_relaxed);
| ~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/c++/14/bits/shared_ptr_atomic.h:133:5: note: declared here
133 | atomic_load_explicit(const shared_ptr<_Tp>* __p, memory_order)
| ^~~~~~~~~~~~~~~~~~~~
The modernized version does not build with GCC 11, so this patch
contains both versions for now, switched by a `__GNUC__` preprocessor
check.
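A sketch of the resulting pattern (the GCC 12 cutoff and the member
shown here are assumptions based on the description above; see the
patch for the exact condition):

  #include <atomic>
  #include <memory>
  #include <vector>

  struct entity_addrvec_t {};  // stand-in for the real Ceph type

  #if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 12
  // GCC 11 has no usable std::atomic<std::shared_ptr<T>>; keep the
  // deprecated free functions there and accept the warning.
  std::shared_ptr<std::vector<entity_addrvec_t>> _mon_addrs;
  auto get_mon_addrs() {
    return std::atomic_load_explicit(&_mon_addrs, std::memory_order_relaxed);
  }
  #else
  // Modernized version for newer compilers.
  std::atomic<std::shared_ptr<std::vector<entity_addrvec_t>>> _mon_addrs;
  auto get_mon_addrs() {
    return _mon_addrs.load(std::memory_order_relaxed);
  }
  #endif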
Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
Listener deletion is broken due to passing the wrong gateway address.
Include `traddr` in the listener DELETE API to choose the correct
gateway address for deletion.
This is the same fix we made for the POST API in 287ff3b360.
Fixes: https://tracker.ceph.com/issues/68506
Signed-off-by: Afreen Misbah <afreen23.git@gmail.com>
It was possible to give multiple devices to ceph-bluestore-tool:
> ceph-bluestore-tool show-label --dev /dev/sda --dev /dev/sdb
But if any of the devices could not provide a valid label, nothing was
printed.
Now results are always printed; unreadable labels are output as empty
dictionaries. The exit code reflects the overall outcome (see the
sketch below):
- 0 if any label was properly read
- 1 if all labels failed
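A compact sketch of that policy (read_label_json is a hypothetical
stand-in for the tool's label reader, not the actual internals):

  #include <iostream>
  #include <string>
  #include <vector>

  // Hypothetical stand-in: returns false when the label cannot be read.
  bool read_label_json(const std::string& dev, std::string* out) {
    (void)dev; (void)out;
    return false;  // stub: the real code parses the on-disk label
  }

  int show_labels(const std::vector<std::string>& devs) {
    size_t ok = 0;
    for (const auto& dev : devs) {
      std::string json;
      if (read_label_json(dev, &json)) {
        ++ok;
      } else {
        json = "{}";  // unreadable label -> empty dict, still printed
      }
      std::cout << dev << ": " << json << "\n";
    }
    return ok > 0 ? 0 : 1;  // 0 if any label was read, 1 if all failed
  }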
Fixes: https://tracker.ceph.com/issues/68505
Signed-off-by: Adam Kupczyk <akupczyk@ibm.com>
There are two issues:
1. In cephadm, I was always using the first daemon to populate the group
   in all the services for the dashboard config.
2. In the API, if there is more than one gateway listed in the config,
   rather than choosing a random gateway from the group, raise an
   exception and warn the user to specify the gw_group parameter in the
   API request.
Fixes: https://tracker.ceph.com/issues/68463
Signed-off-by: Nizamudeen A <nia@redhat.com>