RGWDataSyncSingleEntryCR is the only caller of RGWRunBucketSourcesSyncCR.
it always provides a source_bs, and never a target_bs. so remove all the
complexity related to target_bs, along with the idea that we'd need to
sync several source bucket shards for a given target bucket
we now just have a single loop over the target buckets that use the
given bucket as a source
Signed-off-by: Casey Bodley <cbodley@redhat.com>
when data sync queries RGWOp_BILog_Info on an un-upgraded gateway, the
response doesn't include the oldest_gen/latest_gen fields. so initialize
those variables to 0 by default
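the defaulting behavior, as a self-contained sketch (the struct and the
toy decoder here are illustrative stand-ins, not the real Ceph
JSONDecoder path):

```cpp
#include <cstdint>
#include <map>
#include <string>

// hypothetical stand-in for the bilog-info reply; in ceph this is decoded
// from the RGWOp_BILog_Info json response
struct bilog_info {
  // default to generation 0 so replies from un-upgraded gateways, which
  // omit these fields entirely, still parse into a valid state
  uint64_t oldest_gen = 0;
  uint64_t latest_gen = 0;
};

// toy decoder: only overwrites fields that the remote actually sent
bilog_info decode_bilog_info(const std::map<std::string, uint64_t>& fields) {
  bilog_info info; // members start at 0
  if (auto i = fields.find("oldest_gen"); i != fields.end()) {
    info.oldest_gen = i->second;
  }
  if (auto i = fields.find("latest_gen"); i != fields.end()) {
    info.latest_gen = i->second;
  }
  return info;
}
```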
Signed-off-by: Casey Bodley <cbodley@redhat.com>
enable the background dynamic resharding thread based on
RGWSI_Zone::can_reshard(), which takes the zonegroup features into
account
Fixes: https://tracker.ceph.com/issues/52877
Signed-off-by: Casey Bodley <cbodley@redhat.com>
if the remote gives us more shards than we expect, just count those
shards as 'behind' and avoid out-of-bounds access of shard_status
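the bounds check, as a hedged sketch (function and variable names are
illustrative; the real logic lives in rgw's data sync status reporting):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// count how many shards are behind, given the local per-shard status and
// the shard markers reported by the remote
size_t count_shards_behind(const std::vector<std::string>& shard_status,
                           const std::vector<std::string>& remote_markers) {
  size_t behind = 0;
  for (size_t i = 0; i < remote_markers.size(); ++i) {
    if (i >= shard_status.size()) {
      // remote has more shards than we know about; count them as
      // 'behind' instead of indexing past the end of shard_status
      ++behind;
    } else if (shard_status[i] < remote_markers[i]) {
      ++behind;
    }
  }
  return behind;
}
```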
Signed-off-by: Casey Bodley <cbodley@redhat.com>
if the full sync status object is missing, it's possible that we just
haven't started syncing again since upgrading from the per-shard-only
status objects
in this case, as long as we have a log generation 0, assume that we just
haven't initialized the full status object and try to read the gen=0
per-shard incremental status for comparison
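the fallback decision, sketched with illustrative names and simplified
error plumbing (the real code runs inside rgw's sync coroutines):

```cpp
#include <cerrno>
#include <cstdint>
#include <optional>

// returns the generation whose per-shard incremental status should be
// read for comparison, or nullopt if sync status is genuinely absent
std::optional<uint64_t> gen_to_compare(int read_full_status_result,
                                       uint64_t full_status_gen,
                                       uint64_t oldest_log_gen) {
  if (read_full_status_result == 0) {
    return full_status_gen; // full status exists; trust its generation
  }
  if (read_full_status_result == -ENOENT && oldest_log_gen == 0) {
    // upgraded from per-shard-only status: assume the full status object
    // just hasn't been initialized, and compare against gen=0
    return 0;
  }
  return std::nullopt;
}
```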
Signed-off-by: Casey Bodley <cbodley@redhat.com>
all we need to construct the per-shard bucket sync status object names
is the bucket names themselves, which we already have from
rgw_sync_bucket_pipe
Signed-off-by: Casey Bodley <cbodley@redhat.com>
rgw_read_bucket_inc_sync_status() uses the size of this vector as the
'num_shards', so we need to resize it appropriately beforehand
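a minimal sketch of the caller-side fix, with a stand-in for
rgw_read_bucket_inc_sync_status() (types and names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct shard_status { uint64_t inc_marker = 0; };

// stand-in for rgw_read_bucket_inc_sync_status(): it derives num_shards
// from status.size(), so the caller must size the vector first
int read_inc_sync_status(std::vector<shard_status>& status) {
  const size_t num_shards = status.size();
  for (size_t i = 0; i < num_shards; ++i) {
    status[i].inc_marker = i; // pretend we read shard i's marker
  }
  return 0;
}

std::vector<shard_status> read_all(uint32_t num_shards) {
  std::vector<shard_status> status;
  status.resize(num_shards); // the fix: resize appropriately beforehand
  read_inc_sync_status(status);
  return status;
}
```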
Signed-off-by: Casey Bodley <cbodley@redhat.com>
the calls to rgw_read_bucket_inc_sync_status() depend on
sync_status.incremental_gen, which we need to read via
rgw_read_bucket_full_sync_status() regardless of whether
we're returning it to the client (version > 1)
Signed-off-by: Casey Bodley <cbodley@redhat.com>
As specified in rgw_bucket_index_marker_info, unless we're doing the
compatibility check, in which case we look at generation 0.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
and pass the correct generation and shard count when deleting the
per-shard status objects on disabling bucket sync during incremental sync
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
the sync_pair is used as input to RGWBucketPipeSyncStatusManager::status_oid()
to generate the per-shard sync status object names
this sync status tracks incremental bucket sync, which reads changes
from a source bucket's bilog shard, and copies objects from the remote
source bucket to the local destination bucket
this doesn't require sync to know anything about the destination bucket
shards, so rgw_bucket_sync_pair_info and status_oid() now only track
the destination's rgw_bucket instead of rgw_bucket_shard
Signed-off-by: Casey Bodley <cbodley@redhat.com>
From the REST interface and radosgw-admin. Assume generation 0 if none
is provided, and error if it doesn't exist.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Fetch the current generation from remote peers and trim the minimum
marker on the minimum generation.
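the trim-target selection, sketched with illustrative types (the real
code fetches each peer's position from the remote gateways):

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// per-peer view of the bilog: which generation it's on and its marker
// within that generation (names illustrative)
struct peer_position {
  uint64_t generation;
  std::string marker;
};

// choose a safe trim target: the minimum generation across peers, and the
// minimum marker among peers still on that generation. trimming anything
// newer could discard log entries a lagging peer hasn't consumed yet
peer_position min_trim_target(const std::vector<peer_position>& peers) {
  uint64_t min_gen = UINT64_MAX;
  for (const auto& p : peers) {
    min_gen = std::min(min_gen, p.generation);
  }
  std::string min_marker;
  bool first = true;
  for (const auto& p : peers) {
    if (p.generation != min_gen) continue;
    if (first || p.marker < min_marker) {
      min_marker = p.marker;
      first = false;
    }
  }
  return {min_gen, min_marker};
}
```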
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
Needed so we can get the incremental generation.
Guard this behind a version check and return the original output if the
requested version is less than 2.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
poll on rgw_read_bucket_full_sync_status() until
full_status.incremental_gen catches up to the latest_gen we got from
rgw_read_remote_bilog_info()
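the polling loop, as a sketch (the reader callback stands in for
rgw_read_bucket_full_sync_status(); a real caller would also sleep or
back off between retries):

```cpp
#include <cstdint>
#include <functional>

// poll a status reader until its reported incremental generation reaches
// the latest generation the remote advertised
bool wait_for_generation(uint64_t latest_gen,
                         int max_retries,
                         const std::function<uint64_t()>& read_inc_gen) {
  for (int i = 0; i < max_retries; ++i) {
    if (read_inc_gen() >= latest_gen) {
      return true; // caught up to the remote's latest_gen
    }
    // a real caller would sleep/backoff between polls here
  }
  return false;
}
```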
Signed-off-by: Casey Bodley <cbodley@redhat.com>
knock out a TODO that was causing this assertion failure in
RGWRados::get_bucket_stats() after a reshard:
ceph_assert(headers.size() == bucket_instance_ids.size());
Signed-off-by: Casey Bodley <cbodley@redhat.com>
this adds wrapper structs rgw_data_notify_v1_encoder and
rgw_data_notify_v1_decoder that can encode/decode the v1 json format
directly on the v2 data structure
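the adapter idea, sketched with a flat string standing in for the json
format (the real structs drive ceph's json encode/decode machinery;
everything here is simplified for illustration):

```cpp
#include <string>
#include <vector>

// simplified stand-in for the v2 notify payload
struct data_notify_v2 {
  std::vector<std::string> shard_keys;
};

// wrapper that emits the v1 wire format directly from the v2 structure,
// without materializing a separate v1 copy; mirrors the role of
// rgw_data_notify_v1_encoder over the real json formatter
struct data_notify_v1_encoder {
  const data_notify_v2& notify;
  std::string encode() const {
    std::string out;
    for (const auto& key : notify.shard_keys) {
      if (!out.empty()) out += ',';
      out += key; // "v1 format" here: comma-separated keys
    }
    return out;
  }
};
```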
Signed-off-by: Casey Bodley <cbodley@redhat.com>