This avoids the problem of the dirfrag becoming incomplete before
ValidationContinuation::_dirfrags() gets called.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
the 'int ret' variable of the inner scope was shadowing an 'int ret'
variable in the outer scope, so we weren't propagating any of the error
codes
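The bug follows this generic shadowing pattern (hypothetical names, not the actual code): the inner declaration hides the outer `ret`, so the error code never reaches the return statement.

```cpp
// Hypothetical sketch of the shadowing bug and its fix.
int do_step(int i) { return i == 2 ? -5 : 0; }  // step 2 fails

int process_buggy() {
  int ret = 0;
  for (int i = 0; i < 4; ++i) {
    int ret = do_step(i);  // BUG: shadows the outer 'ret'
    if (ret < 0)
      break;               // outer 'ret' is still 0 here
  }
  return ret;              // always returns 0; error is lost
}

int process_fixed() {
  int ret = 0;
  for (int i = 0; i < 4; ++i) {
    ret = do_step(i);      // assigns to the outer 'ret'
    if (ret < 0)
      break;
  }
  return ret;              // propagates the -5 error code
}
```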
Signed-off-by: Casey Bodley <cbodley@redhat.com>
the InitSyncStatus coroutine records the position to start incremental
sync after finishing a full sync. this should be the master's marker
from the current period, rather than its oldest log period
this also adds a check to run_sync() that restarts a full sync if it
sees that our sync period is behind the master's oldest log period
Signed-off-by: Casey Bodley <cbodley@redhat.com>
RGWMetadataManager::get_log() will allocate a log and keep it in memory.
this opens a potential denial of service: a client could make requests
with lots of different period ids
RGWMetadataLog is effectively stateless (the only state is a set of
modified_shards, which none of the REST API calls touch), so
we can use a temporary instead of calling get_log()
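The general shape of the fix can be sketched like this (hypothetical types, not the RGW ones): a cache keyed by a client-supplied id grows without bound, while a stack temporary lives only for the request.

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical sketch: the log is stateless, so caching one per
// client-supplied period id only wastes memory.
struct Log {
  std::string period;
  explicit Log(const std::string& p) : period(p) {}
  int list_entries() const { return 0; }  // stateless operation
};

struct CachingManager {
  // one entry per distinct id, kept forever -> unbounded growth
  std::map<std::string, std::unique_ptr<Log>> logs;
  Log* get_log(const std::string& period) {
    auto& l = logs[period];
    if (!l)
      l.reset(new Log(period));
    return l.get();
  }
  size_t cached() const { return logs.size(); }
};

// Fixed pattern: construct a temporary per request instead.
int handle_request(const std::string& period) {
  Log log(period);  // lives only for the duration of this request
  return log.list_entries();
}
```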
Signed-off-by: Casey Bodley <cbodley@redhat.com>
now that the shards will be coming and going after startup, they need to
be reference-counted and protected by a mutex
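A minimal sketch of the pattern (hypothetical types, not the actual shard code): shards live in a map of `shared_ptr` guarded by a mutex, so a caller's handle keeps a shard alive even if it is removed concurrently.

```cpp
#include <map>
#include <memory>
#include <mutex>

// Hypothetical sketch: shards can be created and destroyed after
// startup, so lookups take a lock and return a counted handle.
struct Shard {
  explicit Shard(int id) : id(id) {}
  int id;
};

class ShardMap {
  std::mutex lock;
  std::map<int, std::shared_ptr<Shard>> shards;
 public:
  std::shared_ptr<Shard> get_or_create(int id) {
    std::lock_guard<std::mutex> l(lock);
    auto& s = shards[id];
    if (!s)
      s = std::make_shared<Shard>(id);
    return s;  // the caller's shared_ptr keeps the shard alive
  }
  void remove(int id) {
    std::lock_guard<std::mutex> l(lock);
    shards.erase(id);  // outstanding handles remain valid
  }
};
```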
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Prior to this commit, the tarball from "make dist" did not include the
ceph-detect-init(8) man page rST source.
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
rgw: adjust error code when bucket does not exist in copy operation
rgw: don't override error when initializing zonegroup
Fixes: #14975
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
rgw: indexless buckets (Yehuda Sadeh)
- can define a policy, for which buckets are indexless
- users can then create buckets under the specified placement target
- indexless buckets will not be synced across zones
- does not work with (s3) versioned buckets
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
When run without "--no-verify", all verification errors are noted,
but they are reported only to cerr and not propagated anywhere else,
which causes automated testing to ignore them. Make seq_read_bench and
rand_read_bench return -EIO on any verification error, which will,
in turn, be returned to the caller.
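The shape of the change can be sketched as follows (hypothetical function, not the actual bench code): keep printing each error to cerr, but count them so the caller sees a nonzero return.

```cpp
#include <cerrno>
#include <iostream>

// Hypothetical sketch: report verification errors to stderr as before,
// but also propagate a nonzero code to the caller.
int read_bench_sketch(bool object_corrupted) {
  int errors = 0;
  if (object_corrupted) {
    std::cerr << "verification failed" << std::endl;
    ++errors;  // previously the error stopped at cerr
  }
  return errors ? -EIO : 0;  // now visible to automated testing
}
```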
Fixes: #14971
Signed-off-by: Piotr Dałek <piotr.dalek@ts.fujitsu.com>
The "Pool <pool> has too few pgs" warning would be okay if it did not
take other pools into account. Since it does, it is confusing in the
following scenario:
1. Create two pools, one with small pg count and one with large
pg count
2. Put a whole lot of objects in the smaller pool, resulting in a "too
   few pgs" warning on that pool, which is expected behavior.
3. Put a whole lot of objects in the larger pool; the warning goes away.
   Suddenly the smaller pool has plenty of PGs?
The current message suggests adding more nodes (or PGs) to the pool,
when it is actually warning that there are significantly more objects
in that particular pool than in the other pools.
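The relative nature of the check can be sketched like this (a simplified model, not the monitor's actual implementation): a pool is flagged when its objects-per-PG ratio exceeds the cluster-wide average by some skew factor, so growing *other* pools raises the average and can clear the warning.

```cpp
#include <vector>

// Hypothetical model of the warning: compare a pool's objects-per-PG
// against the average across all pools, scaled by a skew factor.
struct Pool {
  long objects;
  int pgs;
};

bool too_few_pgs(const Pool& p, const std::vector<Pool>& all,
                 double skew = 10.0) {
  long total_objects = 0;
  long total_pgs = 0;
  for (const auto& q : all) {
    total_objects += q.objects;
    total_pgs += q.pgs;
  }
  if (total_pgs == 0 || p.pgs == 0)
    return false;
  double avg_per_pg = double(total_objects) / total_pgs;
  double pool_per_pg = double(p.objects) / p.pgs;
  return pool_per_pg > avg_per_pg * skew;
}
```

With this model, filling the larger pool raises the average and clears the smaller pool's warning even though the smaller pool itself is unchanged.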
Signed-off-by: Piotr Dałek <piotr.dalek@ts.fujitsu.com>