The generation is taken from rgw_bucket_index_marker_info, unless we're
doing the compatibility check, in which case we look at generation 0.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
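A minimal sketch of that selection; MarkerInfo and choose_generation are
illustrative stand-ins, not the real Ceph names:

    // sketch only: hypothetical types, not rgw_bucket_index_marker_info
    #include <cstdint>

    struct MarkerInfo {
      uint64_t latest_gen = 0;  // as carried by the marker info response
    };

    uint64_t choose_generation(const MarkerInfo& info, bool compat_check) {
      // the compatibility check always inspects generation 0
      return compat_check ? 0 : info.latest_gen;
    }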
and pass the correct generation and number of shards when deleting the
per-shard status objects, when sync is disabled during incremental sync
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
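A hedged sketch of the shape of that delete path; the oid format and the
helper name are hypothetical, not what RGWBucketPipeSyncStatusManager
actually uses:

    // sketch only: illustrative oid scheme
    #include <cstdint>
    #include <string>
    #include <vector>

    std::vector<std::string> shard_status_oids(const std::string& prefix,
                                               uint64_t gen,
                                               uint32_t num_shards) {
      std::vector<std::string> oids;
      for (uint32_t shard = 0; shard < num_shards; ++shard) {
        // generation and shard id must match what was written, or the
        // delete silently misses the per-shard status objects
        oids.push_back(prefix + '.' + std::to_string(gen) + '.' +
                       std::to_string(shard));
      }
      return oids;
    }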
the sync_pair is used as input to RGWBucketPipeSyncStatusManager::status_oid()
to generate the per-shard sync status object names
this sync status tracks incremental bucket sync, which reads changes
from a source bucket's bilog shard, and copies objects from the remote
source bucket to the local destination bucket
this doesn't require sync to know anything about the destination bucket
shards, so rgw_bucket_sync_pair_info and status_oid() now only track the
destination's rgw_bucket instead of rgw_bucket_shard
Signed-off-by: Casey Bodley <cbodley@redhat.com>
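A sketch of the resulting asymmetry, with simplified stand-ins for
rgw_bucket_shard and rgw_bucket_sync_pair_info (the oid format shown is
illustrative, not the real one):

    // sketch only: the source side keeps a shard id, the dest does not
    #include <string>

    struct bucket_ish { std::string name; };   // dest: whole bucket
    struct bucket_shard_ish {                   // source: one bilog shard
      std::string name;
      int shard_id = 0;
    };

    struct sync_pair_ish {
      bucket_shard_ish source;  // incremental sync reads this bilog shard
      bucket_ish dest;          // objects land anywhere in the dest bucket
    };

    // per-shard status oid is keyed by the source shard only
    std::string status_oid_ish(const std::string& source_zone,
                               const sync_pair_ish& sp) {
      return "bucket.sync-status." + source_zone + ':' + sp.source.name +
             ':' + std::to_string(sp.source.shard_id) + ':' + sp.dest.name;
    }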
Accept the generation from the REST interface and radosgw-admin. Assume
generation 0 if none is provided, and return an error if the requested
generation doesn't exist.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
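A minimal sketch of the defaulting rule, assuming the caller has already
fetched the valid generation range (resolve_generation is a hypothetical
helper, not the actual REST/radosgw-admin plumbing):

    // sketch only
    #include <cerrno>
    #include <cstdint>
    #include <optional>

    int resolve_generation(std::optional<uint64_t> requested,
                           uint64_t oldest_gen, uint64_t latest_gen,
                           uint64_t* out) {
      const uint64_t gen = requested.value_or(0);  // default: generation 0
      if (gen < oldest_gen || gen > latest_gen) {
        return -ENOENT;  // requested generation doesn't exist
      }
      *out = gen;
      return 0;
    }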
Fetch the current generation from remote peers and trim the minimum
marker on the minimum generation.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
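A sketch of the minimum computation, assuming one (generation, marker)
pair per peer and lexicographically ordered markers:

    // sketch only: simplified peer responses; assumes peers is non-empty
    #include <cstdint>
    #include <string>
    #include <vector>

    struct peer_status { uint64_t gen; std::string marker; };

    // a peer still on an older generation holds everything back, so the
    // trim target is the minimum marker on the minimum generation
    peer_status min_trim_target(const std::vector<peer_status>& peers) {
      peer_status min = peers.front();
      for (const auto& p : peers) {
        if (p.gen < min.gen || (p.gen == min.gen && p.marker < min.marker)) {
          min = p;
        }
      }
      return min;
    }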
Needed so we can get the incremental generation.
Guard this behind a version check and return the original output if the
requested version is less than 2.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
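A sketch of the version gate, using a plain stream in place of the real
formatter API (field names are illustrative):

    // sketch only
    #include <cstdint>
    #include <ostream>

    void dump_bilog_info(std::ostream& out, uint64_t latest_gen,
                         unsigned requested_version) {
      out << "{\"markers\":[]";  // the original (pre-generation) output
      if (requested_version >= 2) {
        // only v2+ requests learn the incremental generation
        out << ",\"generation\":" << latest_gen;
      }
      out << '}';
    }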
poll on rgw_read_bucket_full_sync_status() until
full_status.incremental_gen catches up to the latest_gen we got from
rgw_read_remote_bilog_info()
Signed-off-by: Casey Bodley <cbodley@redhat.com>
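A synchronous sketch of that wait loop; read_status stands in for the
coroutine call to rgw_read_bucket_full_sync_status():

    // sketch only: blocking stand-in for the coroutine version
    #include <chrono>
    #include <cstdint>
    #include <functional>
    #include <thread>

    struct full_status_ish { uint64_t incremental_gen = 0; };

    void wait_for_gen(uint64_t latest_gen,
                      const std::function<full_status_ish()>& read_status) {
      // re-read until the status generation catches up to the latest_gen
      // reported by rgw_read_remote_bilog_info()
      while (read_status().incremental_gen < latest_gen) {
        std::this_thread::sleep_for(std::chrono::seconds(1));
      }
    }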
knock out a TODO that was causing this assertion failure in
RGWRados::get_bucket_stats() after a reshard:
    ceph_assert(headers.size() == bucket_instance_ids.size());
Signed-off-by: Casey Bodley <cbodley@redhat.com>
this adds wrapper structs rgw_data_notify_v1_encoder and
rgw_data_notify_v1_decoder that can encode/decode the v1 json format
directly on the v2 data structure
Signed-off-by: Casey Bodley <cbodley@redhat.com>
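A sketch of the wrapper pattern, with a simplified v2 type and a plain
stream in place of Ceph's JSON encoder; names with an _ish suffix are
hypothetical:

    // sketch only: the wrapper borrows the v2 data and gives it a v1
    // wire shape, so no intermediate v1 copy is materialized
    #include <map>
    #include <ostream>
    #include <set>
    #include <string>

    using notify_v2_ish = std::map<int, std::set<std::string>>;  // shard -> keys

    struct notify_v1_encoder_ish {
      const notify_v2_ish& v2;
    };

    std::ostream& operator<<(std::ostream& out, const notify_v1_encoder_ish& e) {
      out << '{';
      for (auto it = e.v2.begin(); it != e.v2.end(); ++it) {
        if (it != e.v2.begin()) out << ',';
        out << '"' << it->first << "\":[";
        for (auto k = it->second.begin(); k != it->second.end(); ++k) {
          if (k != it->second.begin()) out << ',';
          out << '"' << *k << '"';
        }
        out << ']';
      }
      return out << '}';
    }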
clear the bucket layout we get from the metadata master, and overwrite it
with our zone's defaults
without clearing the layout, init_default_bucket_layout() was adding another
log layout in addition to the one from the master. this caused the bilog
list API to provide a 'next_log' when only gen=0 exists
Signed-off-by: Casey Bodley <cbodley@redhat.com>
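A sketch of why the clear matters, with a hypothetical layout type:

    // sketch only: illustrative layout type
    #include <string>
    #include <vector>

    struct layout_ish {
      std::vector<std::string> logs;  // one entry per log generation
    };

    void init_default_layout(layout_ish& layout) {
      layout.logs.push_back("in-index, gen=0");
    }

    void apply_from_master(layout_ish& layout) {
      // reset before applying zone defaults; without this, the gen=0 log
      // from the master plus the local default stack up as two
      // generations, and bilog list reports a bogus 'next_log'
      layout = layout_ish{};
      init_default_layout(layout);
    }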
* make sure src/dest shard ids are the same in sync pair
* copy sync pair by value in coroutine loop (see the sketch below)
Signed-off-by: Yuval Lifshitz <ylifshit@redhat.com>
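A sketch of the capture pitfall the second bullet fixes, with lambdas
standing in for the spawned coroutines:

    // sketch only: illustrative types
    #include <functional>
    #include <vector>

    struct sync_pair_ish { int shard_id = 0; };

    std::vector<std::function<void()>> spawn_shards(int num_shards) {
      std::vector<std::function<void()>> crs;
      sync_pair_ish sync_pair;
      for (int i = 0; i < num_shards; ++i) {
        sync_pair.shard_id = i;  // same shard id on both src and dest
        // capture by value: each coroutine keeps its own pair; capturing
        // by reference would leave every coroutine seeing the final value
        crs.push_back([sync_pair] { /* sync sync_pair.shard_id here */ });
      }
      return crs;
    }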
if the old index is still referenced by an InIndex log layout, we can't
call clean_index() to remove the index objects yet. log trimming will do
that later, once the bilogs are no longer needed
Signed-off-by: Casey Bodley <cbodley@redhat.com>
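A sketch of that guard, with simplified stand-ins for the log layout
entries:

    // sketch only
    #include <cstdint>
    #include <vector>

    enum class log_type_ish { InIndex, Other };
    struct log_layout_ish { log_type_ish type; uint64_t gen; };

    bool can_clean_index(const std::vector<log_layout_ish>& logs,
                         uint64_t old_index_gen) {
      for (const auto& log : logs) {
        if (log.type == log_type_ish::InIndex && log.gen == old_index_gen) {
          // bilogs still live in the old index objects; leave them for
          // log trimming to remove once they're no longer needed
          return false;
        }
      }
      return true;
    }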
wait until we've read the bucket sync status and found that we're in
incremental sync before we start using incremental_gen for comparison
Signed-off-by: Casey Bodley <cbodley@redhat.com>
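A sketch of the guard, with hypothetical status fields:

    // sketch only: illustrative state enum and fields
    #include <cstdint>

    enum class state_ish { Init, FullSync, IncrementalSync };
    struct bucket_status_ish {
      state_ish state = state_ish::Init;
      uint64_t incremental_gen = 0;
    };

    bool caught_up(const bucket_status_ish& s, uint64_t latest_gen) {
      // incremental_gen is only meaningful once incremental sync has
      // begun; until then keep waiting instead of comparing
      return s.state == state_ish::IncrementalSync &&
             s.incremental_gen >= latest_gen;
    }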
Drop entries from past generations.
Send entries of future generations to the error repo for retry.
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
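A sketch of that dispatch, relative to the generation the bucket is
currently syncing (action names are illustrative):

    // sketch only
    #include <cstdint>

    enum class action_ish { Drop, Process, RetryLater };

    action_ish dispatch(uint64_t entry_gen, uint64_t current_gen) {
      if (entry_gen < current_gen) {
        return action_ish::Drop;        // past generation: already handled
      }
      if (entry_gen > current_gen) {
        return action_ish::RetryLater;  // future generation: park it in
                                        // the error repo until we catch up
      }
      return action_ish::Process;
    }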
entries with gen=0 remain decodable by old clients, but if gen>0, require
decoders to understand the v2 format. this way, old clients can't decode
entries with gen>0, so they won't be able to serve them to other zones
Signed-off-by: Casey Bodley <cbodley@redhat.com>
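A sketch of the compat rule, hand-rolled in place of Ceph's encoding
compat machinery:

    // sketch only: returns the minimum version a decoder must support
    #include <cstdint>

    struct entry_ish { uint64_t gen = 0; };

    uint8_t compat_for(const entry_ish& e) {
      // gen=0 stays decodable by old clients; gen>0 requires v2, so an
      // old client fails to decode it and can't re-serve it elsewhere
      return e.gen > 0 ? 2 : 1;
    }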