Remove "background", "getdata", and "putdata" from the list of LUA
context options. Passing these options throws the following error:
"ERROR: invalid script context: background. must be one of: preRequest,
postRequest".
Fixes: https://tracker.ceph.com/issues/64327
Signed-off-by: Zac Dover <zac.dover@proton.me>
This new cap allows users to run the admin API op
`get user info` without the S3 keys and Swift keys
in the response.
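A minimal sketch of granting the cap, assuming it is exposed as
"user-info-without-keys" (user ID illustrative); admin API requests to
`GET /admin/user` made with that user's credentials then omit the keys:
`radosgw-admin caps add --uid=adminuser --caps="user-info-without-keys=read"`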
Signed-off-by: Ali Maredia <amaredia@redhat.com>
Update the "Creating a Pool" section of doc/rados/operations/pools.rst
so that the documentation no longer insists that the user change the
values of "osd_pool_default_pg_num" and "osd_pool_default_pgp_num".
See also: https://github.com/ceph/ceph/pull/55419
Tracker: https://tracker.ceph.com/issues/64259
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
Update doc/rados/configuration/pool-pg-config-ref.rst to account for the
behavior of the autoscaler.
Previously, this file was last meaningfully altered in 2013, before the
autoscaler existed. A point of confusion was recently brought to my
attention on the Ceph Slack: a user attempted to alter the default values
of a Quincy cluster, as suggested in this documentation.
That alteration caused Ceph to throw the error "Error ERANGE: 'pgp_num'
must be greater than 0 and lower or equal than 'pg_num', which in this
case is one" and a related "rgw_init_ioctx ERROR" reading in part
"Numerical result out of range". The user removed the
"osd_pool_default_pgp_num" configuration line from ceph.conf and the
cluster worked as expected. I presume that this is because removing the
configuration line allowed the autoscaler to work as intended.
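For reference, this is the kind of ceph.conf stanza involved (values are
illustrative); removing the pgp_num line is what resolved the errors:
    [global]
    osd_pool_default_pg_num = 128
    osd_pool_default_pgp_num = 128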
Fixes: https://tracker.ceph.com/issues/64259
Co-authored-by: David Orman <ormandj@corenode.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
rgw/common: add rgw lifecycle specific debug log subsystem
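A sketch of turning the new subsystem up for debugging, assuming it is
exposed as "debug_rgw_lifecycle" (the daemon name is illustrative):
`ceph config set client.rgw.<instance> debug_rgw_lifecycle 20`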
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
Reviewed-by: Jiffin Tony Thottan <jthottan@redhat.com>
The rgw_op section of `counter dump/schema` becomes:
- rgw_op_global for the global op counters
- rgw_op_per_user for the user labeled counters
- rgw_op_per_bucket for the bucket labeled counters
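These labeled sections can be inspected through the admin socket (daemon
name illustrative):
`ceph daemon client.rgw.<instance> counter schema`
`ceph daemon client.rgw.<instance> counter dump`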
Signed-off-by: Ali Maredia <amaredia@redhat.com>
This commit adds documentation for the
'hardware inventory / monitoring' feature (the node-proxy agent).
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
Improve paragraphs under the heading "The Ceph Storage Cluster". Remove
a sentence that was pleonastic in its context.
Signed-off-by: Zac Dover <zac.dover@proton.me>
Read balancing may now be managed automatically via the balancer
manager module. Users may choose between two new modes: ``upmap-read``, which
offers upmap and read optimization simultaneously, or ``read``, which may be
used to optimize reads only. Existing balancer commands have also been updated
to include more information about read balancing.
Run the following commands to test the new automatic behavior:
`ceph balancer on` (on by default)
`ceph balancer mode <read|upmap-read>`
`ceph balancer status`
Run the following commands to test the new supervised behavior:
`ceph balancer off`
`ceph balancer mode <read|upmap-read>`
`ceph balancer eval` | `ceph balancer eval <pool-name>`
`ceph balancer eval-verbose` | `ceph balancer eval-verbose <pool-name>`
`ceph balancer optimize <plan-name>`
`ceph balancer show <plan-name>`
`ceph balancer eval <plan-name>`
`ceph balancer execute <plan-name>`
In the balancer module, there is also a new "self_test" function which tests
the module's basic functionality. This test can be triggered with the following
commands:
`ceph mgr module enable selftest`
`ceph mgr self-test module balancer`
Related Trello: https://trello.com/c/sWoKctzL/859-add-read-balancer-support-inside-the-balancer-module
Signed-off-by: Laura Flores <lflores@ibm.com>
Fix a tricky verb disagreement and rewrite a few sentences for what I
hope is greater clarity.
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>