From f4673d2da1ca69c2ed671005fc6c1215bd131d19 Mon Sep 17 00:00:00 2001
From: mhackett
Date: Fri, 28 Aug 2020 10:54:43 -0400
Subject: [PATCH] doc: document tuning of radosgw lifecycle

Fixes: https://tracker.ceph.com/issues/47190

Signed-off-by: mhackett
---
 doc/radosgw/config-ref.rst | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
index 117567ed704..f842e149d7c 100644
--- a/doc/radosgw/config-ref.rst
+++ b/doc/radosgw/config-ref.rst
@@ -381,6 +381,59 @@ instances or all radosgw-admin commands can be put into the ``[global]`` or the
 :Type: Boolean
 :Default: ``true``
 
+Lifecycle Settings
+==================
+
+Bucket lifecycle configuration can be used to manage your objects so that they
+are stored effectively throughout their lifetime. In past releases, lifecycle
+processing was rate-limited by single-threaded processing. With the Nautilus
+release this has been addressed: the Ceph Object Gateway now allows for
+parallel thread processing of bucket lifecycles across additional Ceph Object
+Gateway instances, and replaces in-order index shard enumeration with a
+randomly ordered sequence.
+
+There are two options in particular to consider when increasing the
+aggressiveness of lifecycle processing:
+
+``rgw lc max worker``
+
+:Description: This option specifies the number of lifecycle worker threads
+              to run in parallel, thereby processing bucket and index
+              shards simultaneously.
+
+:Type: Integer
+:Default: ``3``
+
+``rgw lc max wp worker``
+
+:Description: This option specifies the number of threads in each lifecycle
+              worker's work pool. This option can help to accelerate the
+              processing of each bucket.
+
+:Type: Integer
+:Default: ``3``
+
+These values can be tuned based upon your specific workload to further
+increase the aggressiveness of lifecycle processing. For a workload with a
+larger number of buckets (thousands), consider increasing the
+``rgw lc max worker`` value from its default of 3, whereas for a workload
+with a smaller number of buckets but a higher number of objects per bucket
+(hundreds of thousands), consider raising ``rgw lc max wp worker`` from its
+default of 3.
+
+.. note:: When looking to tune either of these values, please validate
+          current cluster performance and Ceph Object Gateway utilization
+          before increasing them.
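+
+Once validated in your environment, both options can be raised at runtime
+with ``ceph config set``. The following is a minimal sketch; the values
+shown (and the ``client.rgw`` target) are illustrative assumptions, not
+recommendations::
+
+    # hypothetical tuning for a workload with thousands of buckets
+    ceph config set client.rgw rgw_lc_max_worker 5
+
+    # hypothetical tuning for buckets holding hundreds of thousands of objects
+    ceph config set client.rgw rgw_lc_max_wp_worker 5
 
 Garbage Collection Settings
 ===========================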