Merge pull request #44394 from melissa-kun-li/enable-autotune

Enable autotune for osd_memory_target on bootstrap

Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Adam King 2022-01-13 12:06:46 -05:00 committed by GitHub
commit b064b1fb4f
4 changed files with 16 additions and 7 deletions


@ -110,6 +110,10 @@
by default. For more details, see:
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
* Cephadm: ``osd_memory_target_autotune`` will be enabled by default, which will set
``mgr/cephadm/autotune_memory_target_ratio`` to ``0.7`` of total RAM. This will be
unsuitable for hyperconverged infrastructures. For hyperconverged Ceph, please refer
to the documentation or set ``mgr/cephadm/autotune_memory_target_ratio`` to ``0.2``.
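The release note above can be acted on with two ``ceph config`` commands. This is a sketch using only the setting names and values stated in the note and in this PR's doc changes; adjust the ratio to your environment:

```shell
# For hyperconverged deployments, reserve a smaller share of host RAM for Ceph
# by lowering the autotune ratio from the 0.7 default (value from the note above):
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

# Alternatively, opt out of per-OSD memory autotuning entirely:
ceph config set osd osd_memory_target_autotune false
```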
>=16.0.0
--------


@ -362,11 +362,14 @@ See :ref:`cephadm-deploy-osds` for more detailed instructions.
Enabling OSD memory autotuning
------------------------------
It is recommended to enable ``osd_memory_target_autotune``
in order to maximise the performance of the cluster. See :ref:`osd_autotune`.
.. warning:: By default, cephadm enables ``osd_memory_target_autotune`` on bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``0.7`` of total host memory.
See :ref:`osd_autotune`.
To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: `Scenario: Deploy Hyperconverged Ceph <https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cephadm.html#scenario-deploy-hyperconverged-ceph>`_
In other cases where the cluster hardware is not exclusively used by Ceph (hyperconverged),
reduce the memory consumption of Ceph like so:
.. prompt:: bash #


@ -380,9 +380,7 @@ memory with other services, cephadm can automatically adjust the per-OSD
memory consumption based on the total amount of RAM and the number of deployed
OSDs.
This option is enabled globally with::

    ceph config set osd osd_memory_target_autotune true
.. warning:: Cephadm sets ``osd_memory_target_autotune`` to ``true`` by default which is unsuitable for hyperconverged infrastructures.
Cephadm will start with a fraction
(``mgr/cephadm/autotune_memory_target_ratio``, which defaults to
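The arithmetic described here can be sketched in a few lines. This is a simplified illustration, not cephadm's actual implementation: the real autotuner also subtracts memory consumed by non-autotuned daemons on the host before dividing the remainder among the OSDs.

```python
def autotune_osd_memory_target(total_ram_bytes: int, num_osds: int,
                               ratio: float = 0.7) -> int:
    """Simplified sketch of cephadm's per-OSD memory autotuning.

    cephadm reserves a fraction of host RAM
    (``mgr/cephadm/autotune_memory_target_ratio``, default 0.7) and
    spreads it across the OSDs deployed on that host.
    """
    if num_osds <= 0:
        return 0
    return int(total_ram_bytes * ratio) // num_osds

# e.g. a 64 GiB host running 4 OSDs with the default 0.7 ratio:
target = autotune_osd_memory_target(64 * 2**30, 4)
```

For a hyperconverged host you would pass ``ratio=0.2``, mirroring the recommendation above.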


@ -5134,6 +5134,10 @@ def command_bootstrap(ctx):
except Exception:
logger.info('\nApplying %s to cluster failed!\n' % ctx.apply_spec)
# enable autotune for osd_memory_target
logger.info('Enabling autotune for osd_memory_target')
cli(['config', 'set', 'osd', 'osd_memory_target_autotune', 'true'])
logger.info('You can access the Ceph CLI with:\n\n'
'\tsudo %s shell --fsid %s -c %s -k %s\n' % (
sys.argv[0],