Mirror of https://github.com/ceph/ceph (synced 2025-01-19 17:41:39 +00:00)
Commit aa0071ce8b
With dynamic bucket index resharding, when the average number of objects per shard exceeds the configured value, the bucket is scheduled for resharding. The bucket may receive more new objects before the resharding takes place, so the existing code recalculates the number of new shards just prior to resharding rather than waste a resharding opportunity with too low a value. The same holds true for a user-scheduled resharding.

A user reported confusion that the number reported by `radosgw-admin reshard list` wasn't the number that the reshard operation ultimately used. This commit makes it clear that the new number of shards is "tentative", and test_rgw_reshard.py is updated to reflect the altered output.

Additionally, this commit adds some modernization and efficiency improvements to the "reshard list" subcommand.

Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
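The recalculation described above can be sketched as follows. This is a hypothetical Python illustration, not the actual RGW C++ code: the function name is invented, and the real logic applies additional rounding and prime-number policies to the shard count that are not shown here.

```python
import math

def tentative_new_num_shards(num_objects: int, max_objs_per_shard: int) -> int:
    """Hypothetical sketch: derive the shard count from the object count
    observed at reshard time, so that objects added after the bucket was
    scheduled for resharding are still accounted for."""
    return math.ceil(num_objects / max_objs_per_shard)
```

Because the object count is re-read just before the reshard executes, any shard count displayed earlier (for example, while the bucket sits in the reshard queue) is only tentative and may be lower than the value ultimately used.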