Merge pull request #59545 from anthonyeleven/improve-radosgw-configref
doc/radosgw: Improve config-ref.rst

Reviewed-by: Jiffin Tony Thottan <thottanjiffin@gmail.com>

commit a71318b738
@@ -264,18 +264,18 @@ QoS settings

 .. versionadded:: Nautilus

-The ``civetweb`` frontend has a threading model that uses a thread per
+The older and now non-default ``civetweb`` frontend has a threading model that uses a thread per
 connection and hence is automatically throttled by :confval:`rgw_thread_pool_size`
-configurable when it comes to accepting connections. The newer ``beast`` frontend is
-not restricted by the thread pool size when it comes to accepting new
-connections, so a scheduler abstraction is introduced in the Nautilus release
-to support future methods of scheduling requests.
+when accepting connections. The newer and default ``beast`` frontend is
+not limited by the thread pool size when it comes to accepting new
+connections, so a scheduler abstraction was introduced in the Nautilus release
+to support additional methods of scheduling requests.

-Currently the scheduler defaults to a throttler which throttles the active
-connections to a configured limit. QoS based on mClock is currently in an
-*experimental* phase and not recommended for production yet. Current
-implementation of *dmclock_client* op queue divides RGW ops on admin, auth
-(swift auth, sts) metadata & data requests.
+Currently the scheduler defaults to a throttler that limits active
+connections to a configured limit. QoS rate limiting based on mClock is currently
+in an *experimental* phase and not recommended for production. The current
+implementation of the *dmclock_client* op queue divides RGW ops into admin, auth
+(swift auth, sts), metadata, and data requests.

 .. confval:: rgw_max_concurrent_requests
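As a minimal sketch of tuning the default throttler: the active-connection limit is
governed by ``rgw_max_concurrent_requests`` and can be changed with ``ceph config set``;
the target ``client.rgw`` applies the option to all RGW daemons, and the value ``1024``
is illustrative only, not a recommendation::

    # Illustrative limit on concurrent requests handled by the default throttler
    ceph config set client.rgw rgw_max_concurrent_requests 1024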
@@ -305,9 +305,9 @@ D4N Settings
 ============

 D4N is a caching architecture that utilizes Redis to speed up S3 object storage
-operations by establishing shared databases between different RGW access points.
+operations by establishing shared databases among Ceph Object Gateway (RGW) daemons.

-Currently, the architecture can only function on one Redis instance at a time.
+The D4N architecture can only function on one Redis instance at a time.
 The address is configurable and can be changed by accessing the parameters
 below.
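A minimal sketch of pointing D4N at a non-default Redis endpoint; the option names
``rgw_d4n_host`` and ``rgw_d4n_port`` are assumptions here (confirm them against the
parameter list the section refers to), and the address ``10.0.0.5:6379`` is
illustrative only::

    # Assumed option names; direct all RGW daemons to an example Redis instance
    ceph config set client.rgw rgw_d4n_host 10.0.0.5
    ceph config set client.rgw rgw_d4n_port 6379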
@@ -324,18 +324,18 @@ below.

 Topic persistency settings
 ==========================

-Topic persistency will persistently push the notification until it succeeds.
+Topic persistency will repeatedly push notifications until they succeed.
 For more information, see `Bucket Notifications`_.

 The default behavior is to push indefinitely and as frequently as possible.
 With these settings you can control how long and how often to retry an
-unsuccessful notification. How long to persistently push can be controlled
-by providing maximum time of retention or maximum amount of retries.
-Frequency of persistent push retries can be controlled with the sleep duration
+unsuccessful notification by configuring the maximum retention time and/or
+maximum number of retries.
+The interval between push retries can be configured via the sleep duration
 parameter.

-All of these values have default value 0 (persistent retention is indefinite,
-and retried as frequently as possible).
+All of these options default to the value ``0``, which means that persistent
+retention is indefinite, and notifications are retried as frequently as possible.

 .. confval:: rgw_topic_persistency_time_to_live
 .. confval:: rgw_topic_persistency_max_retries