14.2.1
------

* The default value for `mon_crush_min_required_version` has been
  changed from `firefly` to `hammer`, which means the cluster will
  issue a health warning if your CRUSH tunables are older than hammer.
  There is generally a small (but non-zero) amount of data that will
  move around when you switch to the hammer tunables; for more
  information, see :ref:`crush-map-tunables`.

  If possible, we recommend that you set the oldest allowed client to
  `hammer` or later. You can tell what the current oldest allowed
  client is with::

    ceph osd dump | grep min_compat_client

  If the current value is older than hammer, you can tell whether it
  is safe to make this change by verifying that there are no clients
  older than hammer currently connected to the cluster with::

    ceph features

  The newer `straw2` CRUSH bucket type was introduced in hammer, and
  ensuring that all clients are hammer or newer allows new features
  only supported for `straw2` buckets to be used, including the
  `crush-compat` mode for the :ref:`balancer`.
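
  Once ``ceph features`` confirms that no pre-hammer clients remain
  connected, the oldest allowed client can be raised; a minimal
  sketch::

    ceph osd set-require-min-compat-client hammer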

>=15.0.0
--------

* The RGW "num_rados_handles" has been removed.

  * If you were using a value of "num_rados_handles" greater than 1,
    multiply your current "objecter_inflight_ops" and
    "objecter_inflight_op_bytes" parameters by the old
    "num_rados_handles" to get the same throttle behavior.

* Ceph now packages python bindings for python3.6 instead of
  python3.4, because EPEL7 recently switched from python3.4 to
  python3.6 as the native python3. See the `announcement
  <https://lists.fedoraproject.org/archives/list/epel-announce@lists.fedoraproject.org/message/EGUMKAIMPK2UD5VSHXM53BH2MBDGDWMO/>`_
  for more details on the background of this change.

* librbd now uses a write-around cache policy by default,
  replacing the previous write-back cache policy default.
  This cache policy allows librbd to immediately complete
  write IOs while they are still in-flight to the OSDs.
  Subsequent flush requests will ensure all in-flight
  write IOs are completed prior to completing the flush. The
  librbd cache policy can be controlled via a new
  "rbd_cache_policy" configuration option.

* librbd now includes a simple IO scheduler which attempts to
  batch together multiple IOs against the same backing RBD
  data block object. The librbd IO scheduler policy can be
  controlled via a new "rbd_io_scheduler" configuration
  option.
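
  A minimal sketch, assuming ``none`` is the value that disables the
  scheduler for all clients::

    ceph config set client rbd_io_scheduler none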

* RGW: radosgw-admin introduces two subcommands for managing
  expire-stale objects that might be left behind after a bucket
  reshard in earlier versions of RGW. One subcommand lists such
  objects and the other deletes them. Read the troubleshooting section
  of the dynamic resharding docs for details.
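
  A sketch of the intended workflow, assuming the subcommand names
  ``objects expire-stale list`` and ``objects expire-stale rm`` from
  the troubleshooting docs and a bucket called ``mybucket``::

    radosgw-admin objects expire-stale list --bucket mybucket
    radosgw-admin objects expire-stale rm --bucket mybucket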

* RGW: Bucket naming restrictions have changed and are likely to cause
  InvalidBucketName errors. We recommend setting the
  ``rgw_relaxed_s3_bucket_names`` option to true as a workaround.
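
  A minimal sketch, assuming an RGW instance named
  ``client.rgw.gateway`` in ``ceph.conf``::

    [client.rgw.gateway]
    rgw_relaxed_s3_bucket_names = true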

* In the Zabbix Mgr Module there was a typo in the key being sent
  to Zabbix for PGs in the backfill_wait state. The key that was sent
  was 'wait_backfill' while the correct name is 'backfill_wait'.
  Update your Zabbix template accordingly so that it accepts the
  new key being sent to Zabbix.

* The Zabbix plugin for the Ceph Manager now includes OSD and pool
  discovery. An update of zabbix_template.xml is needed
  to receive per-pool (read/write throughput, diskspace usage)
  and per-OSD (latency, status, PGs) statistics.

* The format of all date + time stamps has been modified to fully
  conform to ISO 8601. The old format (``YYYY-MM-DD
  HH:MM:SS.ssssss``) excluded the ``T`` separator between the date and
  time and was rendered using the local time zone without any explicit
  indication. The new format includes the separator as well as a
  ``+nnnn`` or ``-nnnn`` suffix to indicate the time zone, or a ``Z``
  suffix if the time is UTC. For example,
  ``2019-04-26T18:40:06.225953+0100``.

  Any code or scripts that were previously parsing date and/or time
  values from the JSON or XML structured CLI output should be checked
  to ensure they can handle ISO 8601 conformant values. Any code
  parsing date or time values from the unstructured human-readable
  output should be modified to parse the structured output instead, as
  the human-readable output may change without notice.

* The ``bluestore_no_per_pool_stats_tolerance`` config option has been
  replaced with ``bluestore_fsck_error_on_no_per_pool_stats``
  (default: false). The overall default behavior has not changed:
  fsck will warn but not fail on legacy stores, and repair will
  convert to per-pool stats.
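
  To make fsck fail rather than warn on legacy (non-per-pool) stats,
  the new option can be turned on; a minimal sketch using the central
  config database::

    ceph config set osd bluestore_fsck_error_on_no_per_pool_stats true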

* The ``osd_recovery_max_active`` option now has
  ``osd_recovery_max_active_hdd`` and ``osd_recovery_max_active_ssd``
  variants, each with different default values for HDD and SSD-backed
  OSDs, respectively. ``osd_recovery_max_active`` now defaults to
  zero, which means that the OSD will conditionally use
  the HDD or SSD option values. Administrators who have customized
  this value may want to consider whether they have set it to a
  value similar to the new defaults (3 for HDDs and 10 for SSDs) and,
  if so, remove the option from their configuration entirely.
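
  If the value was customized through the central config database, it
  can be dropped with (a minimal sketch)::

    ceph config rm osd osd_recovery_max_active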

* Monitors now have a `ceph osd info` command that provides information
  on all OSDs, or on the specified OSDs, removing the need to
  parse `osd dump` for the same information.
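
  For example (the OSD id 0 here is only illustrative)::

    ceph osd info
    ceph osd info osd.0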

* The structured output of ``ceph status`` or ``ceph -s`` is now more
  concise, particularly the `mgrmap` and `monmap` sections, and the
  structure of the `osdmap` section has been cleaned up.
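
  The structured form can be inspected with, for example::

    ceph status --format=json-pretty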

* A health warning is now generated if the average OSD heartbeat ping
  time exceeds a configurable threshold for any of the intervals
  computed. The OSD computes 1 minute, 5 minute and 15 minute
  intervals with average, minimum and maximum values. The new configuration
  option ``mon_warn_on_slow_ping_ratio`` specifies a percentage of
  ``osd_heartbeat_grace`` to determine the threshold. A value of zero
  disables the warning. The new configuration option
  ``mon_warn_on_slow_ping_time``, specified in milliseconds, overrides
  the computed value and causes a warning
  when OSD heartbeat pings take longer than the specified amount.
  The new admin command ``ceph daemon mgr.# dump_osd_network [threshold]`` will
  list all connections with a ping time longer than the specified threshold or
  the value determined by the config options, for the average of any of the 3 intervals.
  The new admin command ``ceph daemon osd.# dump_osd_network [threshold]`` will
  do the same but only include heartbeats initiated by the specified OSD.
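
  For example, to list connections whose average ping time exceeds
  1000 ms, assuming a manager daemon named ``mgr.x`` and an OSD with
  id 0::

    ceph daemon mgr.x dump_osd_network 1000
    ceph daemon osd.0 dump_osd_network 1000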

* Inline data support for CephFS has been deprecated. When setting the flag,
  users will see a warning to that effect, and enabling it now requires the
  ``--yes-i-really-really-mean-it`` flag. If the MDS is started on a
  filesystem that has it enabled, a health warning is generated. Support for
  this feature will be removed in a future release.
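
  For reference, enabling the deprecated flag now looks like the
  following (a sketch, assuming a filesystem named ``cephfs``)::

    ceph fs set cephfs inline_data true --yes-i-really-really-mean-it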

* The following invalid settings are no longer tolerated by the
  command `ceph osd erasure-code-profile set xxx` (a valid profile is
  sketched after this list):

  * an invalid `m` for the "reed_sol_r6_op" erasure technique
  * an invalid `m` and an invalid `w` for the "liber8tion" erasure technique
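
  For reference, a profile using the "liber8tion" technique (which
  assumes m=2 and w=8 here) might look like::

    ceph osd erasure-code-profile set liber8profile \
        plugin=jerasure technique=liber8tion k=4 m=2 w=8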

* A new OSD daemon command, dump_recovery_reservations, reveals the
  recovery locks held (in_progress) and those waiting in priority queues.

* A new OSD daemon command, dump_scrub_reservations, reveals the
  scrub reservations that are held for local (primary) and remote (replica) PGs.
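
  Both commands are issued through the admin socket; a minimal sketch,
  assuming an OSD with id 0::

    ceph daemon osd.0 dump_recovery_reservations
    ceph daemon osd.0 dump_scrub_reservations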

* The ``pg_autoscale_mode`` is now set to ``on`` by default for newly
  created pools, which means that Ceph will automatically manage the
  number of PGs. To change this behavior, or to learn more about PG
  autoscaling, see :ref:`pg-autoscaler`. Note that existing pools in
  upgraded clusters will still be set to ``warn`` by default.
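
  For example, to keep the previous behavior for new pools, the default
  mode can be changed, and individual pools can be adjusted (a minimal
  sketch, assuming a pool named ``mypool``)::

    ceph config set global osd_pool_default_pg_autoscale_mode warn
    ceph osd pool set mypool pg_autoscale_mode off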