14.2.1
------

* The default value for ``mon_crush_min_required_version`` has been changed
  from ``firefly`` to ``hammer``, which means the cluster will issue a health
  warning if your CRUSH tunables are older than hammer. There is generally a
  small (but non-zero) amount of data that will move around when switching to
  the hammer tunables; for more information, see :ref:`crush-map-tunables`.

  If possible, we recommend that you set the oldest allowed client to hammer
  or later. You can tell what the current oldest allowed client is with::

    ceph osd dump | grep min_compat_client

  If the current value is older than hammer, you can tell whether it is safe
  to make this change by verifying that there are no clients older than
  hammer currently connected to the cluster with::

    ceph features

  The newer ``straw2`` CRUSH bucket type was introduced in hammer, and
  ensuring that all clients are hammer or newer allows new features only
  supported for ``straw2`` buckets to be used, including the ``crush-compat``
  mode for the :ref:`balancer`.
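  As a minimal sketch (assuming ``ceph features`` shows no pre-hammer clients
  connected), the tunables and the oldest allowed client could then be raised
  with::

    ceph osd crush tunables hammer
    ceph osd set-require-min-compat-client hammer

  Switching tunables is what triggers the small data movement mentioned
  above.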
>=15.0.0
--------

* The RGW ``num_rados_handles`` option has been removed.

* If you were using a value of ``num_rados_handles`` greater than 1, multiply
  your current ``objecter_inflight_ops`` and ``objecter_inflight_op_bytes``
  parameters by the old ``num_rados_handles`` to get the same throttle
  behavior. A sketch of this conversion appears at the end of this section.

* Ceph now packages python bindings for python3.6 instead of python3.4,
  because EPEL7 recently switched from python3.4 to python3.6 as the native
  python3. See the EPEL announcement for more details on the background of
  this change.

* librbd now uses a write-around cache policy by default, replacing the
  previous write-back cache policy default. This cache policy allows librbd
  to immediately complete write IOs while they are still in-flight to the
  OSDs. Subsequent flush requests will ensure all in-flight write IOs are
  completed prior to the flush completing. The librbd cache policy can be
  controlled via a new ``rbd_cache_policy`` configuration option; see the
  example near the end of this section.

* librbd now includes a simple IO scheduler which attempts to batch together
  multiple IOs against the same backing RBD data block object. The librbd IO
  scheduler policy can be controlled via a new ``rbd_io_scheduler``
  configuration option.

* RGW: radosgw-admin introduces two subcommands that allow managing
  expire-stale objects that might be left behind after a bucket reshard in
  earlier versions of RGW. One subcommand lists such objects and the other
  deletes them. Read the troubleshooting section of the dynamic resharding
  docs for details.

* In the Zabbix Mgr Module there was a typo in the key being sent to Zabbix
  for PGs in the ``backfill_wait`` state. The key that was sent was
  ``wait_backfill``, but the correct name is ``backfill_wait``. Update your
  Zabbix template accordingly so that it accepts the new key being sent to
  Zabbix.

* The Zabbix plugin for the ceph manager now includes OSD and pool
  discovery. An update of ``zabbix_template.xml`` is needed to receive
  per-pool (read/write throughput, disk space usage) and per-OSD (latency,
  status, PGs) statistics.

* The format of all date + time stamps has been modified to fully conform to
  ISO 8601. The old format (``YYYY-MM-DD HH:MM:SS.ssssss``) excluded the
  ``T`` separator between the date and time and was rendered using the local
  time zone without any explicit indication. The new format includes the
  separator as well as a ``+nnnn`` or ``-nnnn`` suffix to indicate the time
  zone, or a ``Z`` suffix if the time is UTC. For example,
  ``2019-04-26T18:40:06.225953+0100``.

  Any code or scripts that were previously parsing date and/or time values
  from the JSON or XML structured CLI output should be checked to ensure they
  can handle ISO 8601 conformant values. Any code parsing date or time values
  from the unstructured human-readable output should be modified to parse the
  structured output instead, as the human-readable output may change without
  notice. A parsing sketch appears at the end of this section.

* The ``osd_recovery_max_active`` option now has
  ``osd_recovery_max_active_hdd`` and ``osd_recovery_max_active_ssd``
  variants, each with different default values for HDD and SSD-backed OSDs,
  respectively. ``osd_recovery_max_active`` now defaults to zero, which means
  that the OSD will conditionally use the HDD or SSD option values.
  Administrators who have customized this value may want to consider whether
  they have set it to a value similar to the new defaults (3 for HDDs and 10
  for SSDs) and, if so, remove the option from their configuration entirely;
  see the example at the end of this section.

* Monitors now have a ``ceph osd info`` command that will provide information
  on all OSDs, or only on the OSDs specified, thus avoiding the need to parse
  ``osd dump`` for the same information.
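As a sketch of the ``num_rados_handles`` throttle conversion above (assuming
a hypothetical old setting of ``num_rados_handles = 4`` and the default
values of 1024 inflight ops and 100 MiB of inflight bytes)::

    # 4 handles x 1024 ops
    ceph config set client objecter_inflight_ops 4096
    # 4 handles x 104857600 bytes (100 MiB)
    ceph config set client objecter_inflight_op_bytes 419430400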
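For the librbd cache change, the previous behavior can be restored by
switching the policy back to write-back; ``rbd_cache_policy`` accepts
``writethrough``, ``writeback``, and ``writearound``. A sketch using the
central config database::

    ceph config set client rbd_cache_policy writeback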
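As a parsing sketch for the new timestamp format (assuming GNU ``date``;
other ``date`` implementations may not accept ISO 8601 input), a value taken
from the structured CLI output can be normalized to UTC or converted to a
Unix epoch::

    date -u -d '2019-04-26T18:40:06.225953+0100'       # render in UTC
    date -d '2019-04-26T18:40:06.225953+0100' '+%s'    # Unix epoch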
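For example, an administrator who had pinned ``osd_recovery_max_active`` to a
value matching the new defaults could drop the override from the central
config database (a sketch; adjust accordingly if the option is set in
``ceph.conf`` instead)::

    ceph config rm osd osd_recovery_max_active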
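The new ``ceph osd info`` command can be used as follows (the trailing ID
arguments are optional; the ``osd.0`` form shown here is an assumption and a
bare numeric ID may also be accepted)::

    ceph osd info           # all OSDs
    ceph osd info osd.0     # a single OSD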