===========================
RBD Cache Config Settings
===========================

With the kernel rbd driver, the Linux page cache can be used to
improve performance. The userspace implementation, ``librbd``, cannot take
advantage of the page cache, so it includes its own in-memory caching,
called RBD caching.
RBD caching behaves just like well-behaved hard disk caching. When the OS sends
a barrier or a flush request, all dirty data is written to the OSDs. This means
that using write-back caching is just as safe as using a well-behaved physical
hard disk with a VM that properly sends flushes (i.e. Linux kernel >= 2.6.32).
The cache is LRU, and in write-back mode it can coalesce contiguous
requests for better throughput.

.. versionadded:: 0.46

Ceph supports write-back caching for RBD. To enable it, add ``rbd cache =
true`` to the ``[client]`` section of your ``ceph.conf`` file. By default
``librbd`` does not perform any caching. Writes and reads go directly to the
storage cluster, and writes return only when the data is on disk on all
replicas. With caching enabled, writes return immediately, unless there are more
than ``rbd cache max dirty`` unflushed bytes. In this case, the write triggers
writeback and blocks until enough bytes are flushed.
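
For example, a minimal ``ceph.conf`` fragment enabling write-back caching
might look like this (the size and dirty-limit lines are optional; they
simply restate the defaults documented below)::

    [client]
        rbd cache = true
        rbd cache size = 33554432        # 32 MiB (default)
        rbd cache max dirty = 25165824   # 24 MiB (default)
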
.. versionadded:: 0.47

Ceph supports write-through caching for RBD. You can set the size of
the cache, and you can set targets and limits to switch from
write-back caching to write-through caching. To enable write-through
mode, set ``rbd cache max dirty`` to 0. This means writes return only
when the data is on disk on all replicas, but reads may come from the
cache. The cache is in memory on the client, and each RBD image has
its own. Since the cache is local to the client, there's no coherency
if there are other clients accessing the image. Running GFS or OCFS on
top of RBD will not work with caching enabled.
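
A write-through configuration is then just a matter of zeroing the dirty
limit; a minimal sketch::

    [client]
        rbd cache = true
        rbd cache max dirty = 0    # write-through: never buffer dirty data
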
The ``ceph.conf`` file settings for RBD should be set in the ``[client]``
section of your configuration file. The settings include:


``rbd cache``

:Description: Enable caching for RADOS Block Device (RBD).
:Type: Boolean
:Required: No
:Default: ``false``


``rbd cache size``

:Description: The RBD cache size in bytes.
:Type: 64-bit Integer
:Required: No
:Default: ``32 MiB``


``rbd cache max dirty``

:Description: The ``dirty`` limit in bytes at which the cache triggers write-back. If ``0``, uses write-through caching.
:Type: 64-bit Integer
:Required: No
:Constraint: Must be less than ``rbd cache size``.
:Default: ``24 MiB``


``rbd cache target dirty``

:Description: The ``dirty target`` before the cache begins writing data to the data storage. Does not block writes to the cache.
:Type: 64-bit Integer
:Required: No
:Constraint: Must be less than ``rbd cache max dirty``.
:Default: ``16 MiB``


``rbd cache max dirty age``

:Description: The number of seconds dirty data is in the cache before writeback starts.
:Type: Float
:Required: No
:Default: ``1.0``
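
Putting these together, a hypothetical ``[client]`` section that doubles
the default limits while respecting the constraints above (``rbd cache
target dirty`` < ``rbd cache max dirty`` < ``rbd cache size``) might look
like this::

    [client]
        rbd cache = true
        rbd cache size = 67108864          # 64 MiB
        rbd cache max dirty = 50331648     # 48 MiB
        rbd cache target dirty = 33554432  # 32 MiB
        rbd cache max dirty age = 2.0      # start writeback after 2 seconds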