doc: document bluestore compression settings

Signed-off-by: Kefu Chai <kchai@redhat.com>
Kefu Chai 2017-08-02 15:35:29 +08:00
parent f27d76400a
commit f273712e1b
6 changed files with 131 additions and 5 deletions

@ -0,0 +1,93 @@
==========================
BlueStore Config Reference
==========================
Inline Compression
==================
BlueStore supports inline compression using snappy, zlib, or LZ4. Please note
that the lz4 compression plugin is not distributed in the official release. A
sample ``ceph.conf`` snippet illustrating these settings follows the list of
options below.
``bluestore compression algorithm``
:Description: The default compressor to use (if any) if the per-pool property
``compression_algorithm`` is not set. Note that zstd is *not*
recommended for bluestore due to high CPU overhead when
compressing small amounts of data.
:Type: String
:Required: No
:Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``
:Default: ``snappy``
``bluestore compression mode``
:Description: The default policy for using compression if the per-pool property
``compression_mode`` is not set. ``none`` means never use
compression. ``passive`` means use compression when
`clients hint`_ that data is compressible. ``aggressive`` means
use compression unless clients hint that data is not compressible.
``force`` means use compression under all circumstances even if
the clients hint that the data is not compressible.
:Type: String
:Required: No
:Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``
:Default: ``none``
``bluestore compression min blob size``
:Description: Chunks smaller than this are never compressed.
The per-pool property ``compression_min_blob_size`` overrides
this setting.
:Type: Unsigned Integer
:Required: No
:Default: 0
``bluestore compression min blob size hdd``
:Description: Default value of ``bluestore compression min blob size``
for rotational media.
:Type: Unsigned Integer
:Required: No
:Default: 128K
``bluestore compression min blob size ssd``
:Description: Default value of ``bluestore compression min blob size``
for non-rotational (solid state) media.
:Type: Unsigned Integer
:Required: No
:Default: 8K
``bluestore compression max blob size``
:Description: Chunks larger than this are broken up into blobs of at most
``bluestore compression max blob size`` before being compressed.
The per-pool property ``compression_max_blob_size`` overrides
this setting.
:Type: Unsigned Integer
:Required: No
:Default: 0
``bluestore compression max blob size hdd``
:Description: Default value of ``bluestore compression max blob size``
for rotational media.
:Type: Unsigned Integer
:Required: No
:Default: 512K
``bluestore compression max blob size ssd``
:Description: Default value of ``bluestore compression max blob size``
for non-rotational (solid state) media.
:Type: Unsigned Integer
:Required: No
:Default: 64K
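For example, a purely illustrative ``ceph.conf`` snippet (the section and the
values below are assumptions chosen for demonstration, not recommendations)
might look like::

    [osd]
    # illustrative example only; omit any line to keep its default
    bluestore compression algorithm = snappy
    bluestore compression mode = aggressive
    bluestore compression min blob size hdd = 131072
    bluestore compression max blob size hdd = 524288

If the per-pool ``compression_*`` properties are set, they take precedence
over these global values.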
.. _clients hint: ../../api/librados/#rados_set_alloc_hint

@ -51,7 +51,8 @@ To optimize the performance of your cluster, refer to the following:
mon-lookup-dns
Heartbeat Settings <mon-osd-interaction>
OSD Settings <osd-config-ref>
Filestore Settings <filestore-config-ref>
BlueStore Settings <bluestore-config-ref>
FileStore Settings <filestore-config-ref>
Journal Settings <journal-ref>
Pool, PG & CRUSH Settings <pool-pg-config-ref.rst>
Messaging Settings <ms-ref>

@ -236,7 +236,7 @@ A pool can then be changed to use the new rule with::
Device classes are implemented by creating a "shadow" CRUSH hierarchy
for each device class in use that contains only devices of that class.
Rules can then distributed data over the shadow hierarchy. One nice
Rules can then distribute data over the shadow hierarchy. One nice
thing about this approach is that it is fully backward compatible with
old Ceph clients. You can view the CRUSH hierarchy with shadow items
with::

@ -220,7 +220,7 @@ or delete some existing data to reduce utilization.
Data health (pools & placement groups)
------------------------------
--------------------------------------
PG_AVAILABILITY
_______________

@ -275,6 +275,37 @@ To set a value to a pool, execute the following::
You may set values for the following keys:
.. _compression_algorithm:
``compression_algorithm``
:Description: Sets the inline compression algorithm to use for the underlying BlueStore.
This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression algorithm``.
:Type: String
:Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``
``compression_mode``
:Description: Sets the policy for inline compression in the underlying BlueStore.
This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression mode``.
:Type: String
:Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``
``compression_min_blob_size``
:Description: Chunks smaller than this are never compressed.
This setting overrides the `global setting <rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression min blob *``.
:Type: Unsigned Integer
``compression_max_blob_size``
:Description: Chunks larger than this are broken up into blobs of at most
``compression_max_blob_size`` before being compressed.
:Type: Unsigned Integer
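As an illustrative sketch (the pool name ``mypool`` and the values shown are
assumptions, not recommendations), these keys are set with the same
``ceph osd pool set`` command shown above::

    # "mypool" is a hypothetical pool name; values are examples only
    ceph osd pool set mypool compression_algorithm zlib
    ceph osd pool set mypool compression_mode aggressive
    ceph osd pool set mypool compression_min_blob_size 131072
    ceph osd pool set mypool compression_max_blob_size 524288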
.. _size:
``size``

@ -30,8 +30,9 @@ Major Changes from Kraken
and features. FIXME DOCS
* BlueStore supports *full data and metadata checksums* of all
data stored by Ceph.
* BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph also supports zstd for `RGW compression <../man/8/radosgw-admin/#options>`_
but zstd is not recommended for BlueStore for performance reasons.) FIXME DOCS
* BlueStore supports `inline compression <../rados/configuration/bluestore-config-ref/#inline-compression>`_
using zlib, snappy, or LZ4. (Ceph also supports zstd for `RGW compression <../man/8/radosgw-admin/#options>`_
but zstd is not recommended for BlueStore for performance reasons.)
* *Erasure coded* pools now have `full support for overwrites <../rados/operations/erasure-code/#erasure-coding-with-overwrites>`_,
allowing them to be used with RBD and CephFS.
* There is a new daemon, *ceph-mgr*, which is a required part of any