Merge pull request #16787 from liewegas/wip-bluestore-docs

doc/release-notes: fix bluestore links

Reviewed-by: Abhishek Lekshmanan <abhishek@suse.com>
Abhishek L 2017-08-04 17:37:47 +02:00 committed by GitHub
commit 0a5b0021b4
2 changed files with 18 additions and 15 deletions


@@ -12,12 +12,14 @@ In the simplest case, BlueStore consumes a single (primary) storage
 device. The storage device is normally partitioned into two parts:
 #. A small partition is formatted with XFS and contains basic metadata
-   for the OSD. This *data directory* includes information about the OSD
-   (its identifier, which cluster it belongs to, and its private keyring.
+   for the OSD. This *data directory* includes information about the
+   OSD (its identifier, which cluster it belongs to, and its private
+   keyring.
 #. The rest of the device is normally a large partition occupying the
-   rest of the device that is managed directly by BlueStore contains all
-   of the actual data. This *main device* is normally identifed by a
-   ``block`` symlink in data directory.
+   rest of the device that is managed directly by BlueStore contains
+   all of the actual data. This *primary device* is normally identifed
+   by a ``block`` symlink in data directory.
 It is also possible to deploy BlueStore across two additional devices:
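
A BlueStore OSD data directory matching this description looks roughly like the
sketch below. The mount point, OSD id ``0``, and device name ``/dev/sdb2`` are
invented for illustration; the ``block`` symlink, keyring, and identifier files
are the items the hunk above refers to::

    /var/lib/ceph/osd/ceph-0/       # small XFS *data directory*
        block -> /dev/sdb2          # symlink to the *primary device* managed directly by BlueStore
        ceph_fsid                   # which cluster this OSD belongs to
        fsid                        # this OSD's own identifier
        keyring                     # this OSD's private keyring
        whoami                      # this OSD's numeric id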


@@ -25,23 +25,24 @@ Major Changes from Kraken
 * *BlueStore*:
-  - The new *BlueStore* backend for *ceph-osd* is now stable and the new
-    default for newly created OSDs. BlueStore manages data stored by each OSD
-    by directly managing the physical HDDs or SSDs without the use of an
-    intervening file system like XFS. This provides greater performance
-    and features. FIXME DOCS
-  - BlueStore supports *full data and metadata checksums* of all
+  - The new *BlueStore* backend for *ceph-osd* is now stable and the
+    new default for newly created OSDs. BlueStore manages data
+    stored by each OSD by directly managing the physical HDDs or
+    SSDs without the use of an intervening file system like XFS.
+    This provides greater performance and features. See
+    :doc:`/rados/configuration/storage-devices` and
+    :doc:`/rados/configuration/bluestore-config-ref`.
+  - BlueStore supports `full data and metadata checksums
+    <../rados/configuration/bluestore-config-ref/#checksums>`_ of all
     data stored by Ceph.
-  - BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph
-    also supports zstd for RGW compression but zstd is not recommended for
-    BlueStore for performance reasons.) FIXME DOCS
+  - BlueStore supports `inline compression
+    <../rados/configuration/bluestore-config-ref/#inline-compression>`_ using
+    zlib, snappy, or LZ4. (Ceph also supports zstd for `RGW compression
+    <../man/8/radosgw-admin/#options>`_ but zstd is not recommended for
+    BlueStore for performance reasons.)
-* *Erasure coded* pools now have `full support for overwrites <../rados/operations/erasure-code/#erasure-coding-with-overwrites>`_,
+* *Erasure coded* pools now have `full support for overwrites
+  <../rados/operations/erasure-code/#erasure-coding-with-overwrites>`_,
   allowing them to be used with RBD and CephFS.
 * *ceph-mgr*:
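
The features these links point at are per-pool settings in practice. A minimal
sketch, assuming a Luminous-era cluster and invented pool names ``mypool`` and
``ecpool``; ``compression_algorithm``, ``compression_mode``, and
``allow_ec_overwrites`` are the pool options documented in the linked
bluestore-config-ref and erasure-code pages::

    # Enable inline compression on an existing pool (pool name is an example).
    ceph osd pool set mypool compression_algorithm snappy
    ceph osd pool set mypool compression_mode aggressive

    # Allow overwrites on an erasure coded pool so RBD/CephFS can use it.
    ceph osd pool set ecpool allow_ec_overwrites true

Checksumming needs no equivalent command here: it is enabled by default
(``crc32c``), as the linked checksums section notes.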