>=15.2.1
--------

* CVE-2020-10736: Fixes an authorization bypass in the monitor and manager
  daemons.

* Monitors now have the config option ``mon_allow_pool_size_one``, which is
  disabled by default. When it is enabled, users must pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` to confirm that
  they really intend to configure a pool with size 1.
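
  A minimal example, assuming a replicated pool named ``mypool`` (the pool
  name is illustrative)::

    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it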

* librbd now inherits the stripe unit and count from its parent image upon
  creation. This can be overridden by specifying different stripe settings
  during clone creation.
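
  For example, a sketch of overriding the inherited striping at clone time
  (the pool, image, and snapshot names are illustrative)::

    rbd clone --stripe-unit 64K --stripe-count 8 mypool/parent@snap1 mypool/child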

* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support
  luminous and newer clients by default. Existing clusters can enable upmap
  support by running ``ceph osd set-require-min-compat-client luminous``. It
  is still possible to turn the balancer off using the ``ceph balancer off``
  command. In earlier versions, the balancer was included in the
  ``always_on_modules`` list, but needed to be turned on explicitly using the
  ``ceph balancer on`` command.
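
  For example, to enable upmap balancing on an existing cluster::

    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer status

  The ``mode`` and ``status`` invocations are shown for completeness; on new
  clusters upmap mode is already the default.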

* Cephadm: There were many small usability improvements and bug fixes:

  * Grafana, when deployed by cephadm, now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name (see the example
    after this list).
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.
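
  For example, to list only the daemons belonging to a single service (the
  ``--service_name`` flag spelling is an assumption)::

    ceph orch ps --service_name mon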

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS
  with a service id of mynfs that will use the RADOS pool nfs-ganesha and
  the namespace nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns
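
  A sketch of verifying the deployment afterwards (filtering ``ceph orch ls``
  by the ``nfs`` service type is an assumption)::

    ceph orch ls nfs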

* Cephadm: ``ceph orch ls --export`` now returns all service specifications
  in a YAML representation that is consumable by ``ceph orch apply``. In
  addition, the commands ``orch ps`` and ``orch ls`` now support
  ``--format yaml`` and ``--format json-pretty``.
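
  For example, to round-trip all service specifications through a file (the
  file name is illustrative)::

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml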

* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints
  a preview of the OSD specification before deploying OSDs. This makes it
  possible to verify that the specification is correct before applying it.
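
  A sketch, assuming the flag is combined with a specification file passed
  via ``-i`` (the file name is illustrative)::

    ceph orch apply osd -i osd_spec.yaml --preview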

* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have not
  been actively maintained, and they store intermediate results on the
  cluster, which could fill a nearly-full cluster. They have been replaced
  by a tool, currently considered experimental, ``rgw-orphan-list``.
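
  For example, to scan an RGW data pool for possible orphans (the pool name
  shown is the common default and is an assumption)::

    rgw-orphan-list default.rgw.buckets.data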

* RBD: The name of the rbd pool object that is used to store the rbd trash
  purge schedule has changed from "rbd_trash_trash_purge_schedule" to
  "rbd_trash_purge_schedule". Users who have already started using the
  ``rbd trash purge schedule`` functionality and have per-pool or
  per-namespace schedules configured should copy the
  "rbd_trash_trash_purge_schedule" object to "rbd_trash_purge_schedule"
  before the upgrade and remove "rbd_trash_trash_purge_schedule" using the
  following commands in every RBD pool and namespace where a trash purge
  schedule was previously configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the upgrade.
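
  A sketch of verifying the restored schedules after the upgrade (the ``ls``
  subcommand spelling is an assumption)::

    rbd trash purge schedule ls --pool <pool-name>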