Merge pull request #16741 from liewegas/wip-more-doc-links

doc/release-notes: fix links, formatting; add crush device class docs

Reviewed-by: Kefu Chai <kchai@redhat.com>

commit 31cd44fd8c

@@ -206,6 +206,43 @@ You can view the contents of the rules with::

    ceph osd crush rule dump

Device classes
--------------

Each device can optionally have a *class* associated with it. By
default, OSDs automatically set their class on startup to either
`hdd`, `ssd`, or `nvme` based on the type of device they are backed
by.

The device class for one or more OSDs can be explicitly set with::

    ceph osd crush set-device-class <class> <osd-name> [...]
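
For example, to mark two OSDs (the IDs here are hypothetical) as SSD devices::

    ceph osd crush set-device-class ssd osd.0 osd.1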

Once a device class is set, it cannot be changed to another class
until the old class is unset with::

    ceph osd crush rm-device-class <osd-name> [...]
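
For example, to clear the class on the hypothetical OSDs above and then
reclassify them::

    ceph osd crush rm-device-class osd.0 osd.1
    ceph osd crush set-device-class nvme osd.0 osd.1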

This allows administrators to set device classes without the class
being changed on OSD restart or by some other script.

A placement rule that targets a specific device class can be created with::

    ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
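
For example, the following sketch (the rule name ``fast`` is made up) creates a
rule that places replicas across hosts, using only SSDs under the ``default``
root::

    ceph osd crush rule create-replicated fast default host ssd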

A pool can then be changed to use the new rule with::

    ceph osd pool set <pool-name> crush_rule <rule-name>
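
Continuing the sketch above, a pool named ``rbd`` (assuming such a pool exists)
could be switched to the hypothetical ``fast`` rule with::

    ceph osd pool set rbd crush_rule fast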

Device classes are implemented by creating a "shadow" CRUSH hierarchy
for each device class in use that contains only devices of that class.
Rules can then distribute data over the shadow hierarchy. One nice
thing about this approach is that it is fully backward compatible with
old Ceph clients. You can view the CRUSH hierarchy with shadow items
with::

    ceph osd crush tree --show-shadow
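
The shadow buckets appear with a ``~<class>`` suffix on their names (for
example, ``default~ssd``). A heavily abbreviated, illustrative sketch of the
kind of output to expect (names and weights are made up)::

    ID CLASS WEIGHT  TYPE NAME
    -2   ssd 2.00000 root default~ssd
    -4   ssd 2.00000     host node1~ssd
     0   ssd 1.00000         osd.0
     1   ssd 1.00000         osd.1
    -1       4.00000 root default
    ...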

Weight sets
-----------

@@ -18,62 +18,60 @@ Major Changes from Kraken

- *General*:

  * Ceph now has a simple, `built-in web-based dashboard
    <../mgr/dashboard>`_ for monitoring cluster status.

- *RADOS*:

  * The new *BlueStore* backend for *ceph-osd* is now stable and the new
    default for newly created OSDs. BlueStore manages data stored by each OSD
    by directly managing the physical HDDs or SSDs without the use of an
    intervening file system like XFS. This provides greater performance
    and features. FIXME DOCS
  * BlueStore supports *full data and metadata checksums* of all
    data stored by Ceph.
  * BlueStore supports inline compression using zlib, snappy, or LZ4. (Ceph
    also supports zstd for RGW compression but zstd is not recommended for
    BlueStore for performance reasons.) FIXME DOCS
  * *Erasure coded* pools now have `full support for overwrites
    <../rados/operations/erasure-code/#erasure-coding-with-overwrites>`_,
    allowing them to be used with RBD and CephFS.
  * There is a new daemon, *ceph-mgr*, which is a required part of any
    Ceph deployment. Although IO can continue when *ceph-mgr* is
    down, metrics will not refresh and some metrics-related calls
    (e.g., ``ceph df``) may block. We recommend deploying several instances of
    *ceph-mgr* for reliability. See the notes on `Upgrading`_ below.
  * The *ceph-mgr* daemon includes a `REST-based management API
    <../mgr/restful>`_. The API is still experimental and somewhat
    limited but will form the basis for API-based management of Ceph
    going forward.
  * *ceph-mgr* also includes a `Prometheus exporter <../mgr/prometheus>`_
    plugin, which can provide Ceph perfcounters to Prometheus.
  * The overall *scalability* of the cluster has improved. We have
    successfully tested clusters with up to 10,000 OSDs.
  * Each OSD can now have a `device class
    <../rados/operations/crush-map/#device-classes>`_ associated with
    it (e.g., `hdd` or `ssd`), allowing CRUSH rules to trivially map
    data to a subset of devices in the system. Manually writing CRUSH
    rules or manual editing of the CRUSH map is normally not required.
  * You can now *optimize CRUSH weights* to maintain a *near-perfect
    distribution of data* across OSDs. FIXME DOCS
  * There is also a new `upmap <../rados/operations/upmap>`_ exception
    mechanism that allows individual PGs to be moved around to achieve
    a *perfect distribution* (this requires luminous clients).
  * Each OSD now adjusts its default configuration based on whether the
    backing device is an HDD or SSD. Manual tuning is generally not required.
  * The prototype `mClock QoS queueing algorithm
    <../rados/configuration/osd-config-ref/#qos-based-on-mclock>`_ is now
    available.
  * There is now a *backoff* mechanism that prevents OSDs from being
    overloaded by requests to objects or PGs that are not currently able to
    process IO.
  * There is a simplified `OSD replacement process
    <../rados/operations/add-or-rm-osds/#replacing-an-osd>`_ that is more
    robust.
  * You can query the supported features and (apparent) releases of
    all connected daemons and clients with `ceph features
    <../man/8/ceph#features>`_.
  * You can configure the oldest Ceph client version you wish to allow to
    connect to the cluster via ``ceph osd set-require-min-compat-client`` and
    Ceph will prevent you from enabling features that will break compatibility

@@ -148,7 +146,8 @@ Major Changes from Kraken

- *Miscellaneous*:

  * Release packages are now being built for *Debian Stretch*. Note
    that QA is limited to CentOS and Ubuntu (xenial and trusty). The
    distributions we build for now include:

    - CentOS 7 (x86_64 and aarch64)

@@ -157,8 +156,6 @@ Major Changes from Kraken

    - Ubuntu 16.04 Xenial (x86_64 and aarch64)
    - Ubuntu 14.04 Trusty (x86_64)

  * *CLI changes*:

    - The ``ceph -s`` or ``ceph status`` command has a fresh look.