From 5e54676d994e9ba2bf2acfdabe1b1788f05e75c2 Mon Sep 17 00:00:00 2001
From: Sage Weil <sage@redhat.com>
Date: Thu, 27 Jul 2017 15:11:52 -0400
Subject: [PATCH] doc/release-notes: add various links to docs

Signed-off-by: Sage Weil <sage@redhat.com>
---
 doc/release-notes.rst | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/doc/release-notes.rst b/doc/release-notes.rst
index 7c6aa4346ee..c03e155be27 100644
--- a/doc/release-notes.rst
+++ b/doc/release-notes.rst
@@ -19,7 +19,7 @@ Major Changes from Kraken
 - *General*:

   * Ceph now has a simple, built-in web-based dashboard for monitoring
-    cluster status. FIXME DOCS.
+    cluster status. See :doc:`/mgr/dashboard/`.

 - *RADOS*:

@@ -48,21 +48,23 @@ Major Changes from Kraken
       *ceph-mgr* for reliability. See the notes on `Upgrading`_ below.
     - The *ceph-mgr* daemon includes a REST-based management API. The
       API is still experimental and somewhat limited but will form the basis
-      for API-based management of Ceph going forward. FIXME DOCS
+      for API-based management of Ceph going forward. See :doc:`/mgr/restful`.
     - *ceph-mgr* also includes a Prometheus exporter plugin, which can
-      provide Ceph perfcounters to Prometheus. See ceph-mgr docs.
+      provide Ceph perfcounters to Prometheus. See :doc:`/mgr/prometheus`.

   * The overall *scalability* of the cluster has improved. We have
     successfully tested clusters with up to 10,000 OSDs.
-  * Each OSD can now have a *device class* associated with it (e.g., `hdd` or
-    `ssd`), allowing CRUSH rules to trivially map data to a subset of devices
-    in the system. Manually writing CRUSH rules or manual editing of the CRUSH
-    is normally not required. FIXME DOCS
-  * You can now *optimize CRUSH weights* can now be optimized to
-    maintain a *near-perfect distribution of data* across OSDs. FIXME DOCS
+  * Each OSD can now have a *device class* associated with it (e.g.,
+    `hdd` or `ssd`), allowing CRUSH rules to trivially map data to a
+    subset of devices in the system. Manually writing CRUSH rules or
+    manual editing of the CRUSH map is normally not required. See
+    :doc:`/rados/operations/crush-map/#crush-structure`.
+  * You can now *optimize CRUSH weights* to maintain a *near-perfect
+    distribution of data* across OSDs. FIXME DOCS
   * There is also a new `upmap` exception mechanism that allows
     individual PGs to be moved around to achieve a *perfect
-    distribution* (this requires luminous clients). FIXME DOCS
+    distribution* (this requires luminous clients). See
+    :doc:`/rados/operations/upmap`.
   * Each OSD now adjusts its default configuration based on whether the
     backing device is an HDD or SSD. Manual tuning generally not required.
   * The prototype `mClock QoS queueing algorithm`_ is now available.
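
For quick reference, a minimal sketch of the CLI surface behind the docs
linked above, assuming a running Luminous cluster with an active ceph-mgr;
the rule name "fast-ssd", the PG id, and the OSD ids are illustrative
placeholders, not values taken from this patch:

    # Enable the built-in dashboard and the Prometheus exporter
    # (the mgr modules documented at /mgr/dashboard and /mgr/prometheus).
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus

    # Device classes: create a replicated CRUSH rule that maps data only
    # to OSDs of class "ssd", with no manual CRUSH map editing.
    ceph osd crush rule create-replicated fast-ssd default host ssd

    # upmap: all clients must be Luminous or newer; individual PGs can
    # then be remapped (here PG 1.7, moving a copy from osd.231 to osd.10).
    ceph osd set-require-min-compat-client luminous
    ceph osd pg-upmap-items 1.7 231 10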