ceph/PendingReleaseNotes

14.0.1
------
* The 'ceph osd rm' command has been deprecated. Users should use
'ceph osd destroy' or 'ceph osd purge' (after first confirming it is
safe to do so via the 'ceph osd safe-to-destroy' command).
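For example, a possible removal sequence (osd.3 is a placeholder ID)::

    ceph osd safe-to-destroy osd.3
    ceph osd purge osd.3 --yes-i-really-mean-it
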
* The MDS now supports dropping its cache for the purposes of benchmarking::

    ceph tell mds.* cache drop <timeout>

Note that the MDS cache is cooperatively managed by the clients. It is
necessary for clients to give up capabilities in order for the MDS to fully
drop its cache. This is accomplished by asking all clients to trim as many
caps as possible. The timeout argument to the `cache drop` command controls
how long the MDS waits for clients to complete trimming caps. Keep in mind
that clients may still retain caps to open files which will prevent the
metadata for those files from being dropped by both the client and the MDS.
(This is an equivalent scenario to dropping the Linux
page/buffer/inode/dentry caches with some processes pinning some
inodes/dentries/pages in cache.)
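For example, to ask all active MDS daemons to drop their caches and wait up
to 30 seconds for clients to trim their caps (the timeout value here is
arbitrary)::

    ceph tell mds.* cache drop 30
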
* The ``mon_health_preluminous_compat`` and
``mon_health_preluminous_compat_warning`` config options have been removed,
as the related functionality is more than two versions old. Any legacy
monitoring system expecting Jewel-style health output will need to be
updated to work with Nautilus.
* Nautilus is not supported on any distros still running upstart, so
upstart-specific files and references have been removed.
* The 'ceph pg <pgid> list_missing' command has been renamed to
'ceph pg <pgid> list_unfound' to better match its behaviour.
* The 'rbd-mirror' daemon can now retrieve remote peer cluster configuration
secrets from the monitor. To use this feature, the 'rbd-mirror' daemon
CephX user for the local cluster must use the 'profile rbd-mirror' mon cap.
The secrets can be set using the 'rbd mirror pool peer add' and
'rbd mirror pool peer set' actions.
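For example, a possible setup for a local cluster ``site-a`` mirroring from a
remote cluster ``site-b`` (the pool, client, and cluster names are
placeholders; the peer UUID is printed by the ``add`` action, and the
``mon_host``/``key`` attribute values must come from the remote cluster)::

    rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b
    rbd --cluster site-a mirror pool peer set image-pool <peer-uuid> mon_host <remote-mon-addrs>
    rbd --cluster site-a mirror pool peer set image-pool <peer-uuid> key <cephx-key>
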
* The ``ceph mds deactivate`` command is fully obsolete and references to it
in the docs have been removed or clarified.
* The libcephfs bindings have added the ``ceph_select_filesystem`` function
for use with multiple filesystems.
* The cephfs python bindings now include mount_root and filesystem_name
options in the mount() function.
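A minimal sketch using these options (the filesystem name and path are
placeholders, and error handling is omitted)::

    import cephfs

    fs = cephfs.LibCephFS()
    fs.conf_read_file()                     # read the default ceph.conf
    fs.mount(mount_root=b'/volumes',        # mount this subtree instead of '/'
             filesystem_name=b'cephfs_a')   # select a specific filesystem by name
    try:
        print(fs.statfs(b'/'))              # regular operations go here
    finally:
        fs.unmount()
        fs.shutdown()
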
* erasure-code: experimental *Coupled LAYer (CLAY)* erasure code support has
been added. It generates less network traffic and disk I/O when performing
recovery.
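For example, a CLAY profile and a pool using it might be created like this
(the profile and pool names, the k/m/d values, and the PG count are
illustrative)::

    ceph osd erasure-code-profile set CLAYprofile plugin=clay k=4 m=2 d=5 crush-failure-domain=host
    ceph osd pool create claypool 32 32 erasure CLAYprofile
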
* The 'cache drop' OSD command has been added to drop an OSD's caches:
- ``ceph tell osd.x cache drop``
* The 'cache status' OSD command has been added to get the cache stats of an
OSD:
- ``ceph tell osd.x cache status``
* libcephfs has added several functions that allow a restarted client to
destroy or reclaim state held by its previous incarnation. These functions
are intended for NFS servers.
>=13.1.0
--------
* The Telegraf module for the Manager allows sending statistics to a
Telegraf agent over TCP, UDP or a UNIX socket. Telegraf can then
send the statistics to databases like InfluxDB, Elasticsearch, Graphite
and many more.
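For example, a possible setup reporting every 10 seconds to a local UDP
socket listener (the address and interval are placeholders; see the Telegraf
module documentation for the supported address schemes)::

    ceph mgr module enable telegraf
    ceph telegraf config-set address udp://:8094
    ceph telegraf config-set interval 10
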
* The graylog fields naming the originator of a log event have
changed: the string-form name is now included (e.g., ``"name":
"mgr.foo"``), and the rank-form name is now in a nested section
(e.g., ``"rank": {"type": "mgr", "num": 43243}``).
* If the cluster log is directed at syslog, the entries are now
prefixed by both the string-form name and the rank-form name (e.g.,
``mgr.x mgr.12345 ...`` instead of just ``mgr.12345 ...``).
* The JSON output of the ``osd find`` command has replaced the ``ip``
field with an ``addrs`` section to reflect that OSDs may bind to
multiple addresses.
* CephFS clients without the 's' flag in their authentication capability
string will no longer be able to create/delete snapshots. To allow
``client.foo`` to create/delete snapshots in the ``bar`` directory of
filesystem ``cephfs_a``, use the following command:
- ``ceph auth caps client.foo mon 'allow r' osd 'allow rw tag cephfs data=cephfs_a' mds 'allow rw, allow rws path=/bar'``
* The ``osd_heartbeat_addr`` option has been removed as it served no
(good) purpose: the OSD should always check heartbeats on both the
public and cluster networks.
* The ``rados`` tool's ``mkpool`` and ``rmpool`` commands have been
removed because they are redundant; please use the ``ceph osd pool
create`` and ``ceph osd pool rm`` commands instead.
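For example (``foo`` is a placeholder pool name; deleting a pool additionally
requires the ``mon_allow_pool_delete`` option to be enabled)::

    ceph osd pool create foo 64
    ceph osd pool rm foo foo --yes-i-really-really-mean-it
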
* The ``auid`` property for cephx users and RADOS pools has been
removed. This was an undocumented and partially implemented
capability that allowed cephx users to map capabilities to RADOS
pools that they "owned". Because there are no users, we have removed
this support. If any cephx capabilities exist in the cluster that
restrict based on auid then they will no longer parse, and the
cluster will report a health warning like::

    AUTH_BAD_CAPS 1 auth entities have invalid capabilities
        client.bad osd capability parse failed, stopped at 'allow rwx auid 123' of 'allow rwx auid 123'

The capability can be adjusted with the ``ceph auth caps`` command. For example::

    ceph auth caps client.bad osd 'allow rwx pool foo'

* The ``ceph-kvstore-tool`` ``repair`` command has been renamed
``destructive-repair`` since we have discovered it can corrupt an
otherwise healthy rocksdb database. It should be used only as a last-ditch
attempt to recover data from an otherwise corrupted store.
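For example, a last-resort repair of a monitor's store (the path is a
placeholder; the daemon using the store must be stopped first)::

    ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-a/store.db destructive-repair
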
* The default memory utilization for the mons has been increased
somewhat. Rocksdb now uses 512 MB of RAM by default, which should
be sufficient for small to medium-sized clusters; large clusters
should tune this up. Also, the ``mon_osd_cache_size`` has been
increased from 10 OSDMaps to 500, which will translate to an
additional 500 MB to 1 GB of RAM for large clusters, and much less
for small clusters.
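If needed, the OSDMap cache can still be tuned explicitly, e.g. in ceph.conf
(the value shown below is simply the new default)::

    [mon]
    mon_osd_cache_size = 500
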
* The ``mgr/balancer/max_misplaced`` option has been replaced by a new
global ``target_max_misplaced_ratio`` option that throttles both
balancer activity and automated adjustments to ``pgp_num`` (normally as a
result of ``pg_num`` changes). If you have customized the balancer module
option, you will need to adjust your config to set the new global option
or revert to the default of .05 (5%).
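For example, to allow up to 7% of objects to be misplaced at a time (the
value is illustrative)::

    ceph config set mgr target_max_misplaced_ratio .07
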
Upgrading from Luminous
-----------------------
* During the upgrade from luminous to nautilus, it will not be possible to create
a new OSD using a luminous ceph-osd daemon after the monitors have been
upgraded to nautilus.