ceph/PendingReleaseNotes

>=15.2.1
--------
* CVE-2020-10736: Fixes an authorization bypass in monitor and manager daemons
* Monitors now have a config option, ``mon_allow_pool_size_one``, which is
  disabled by default. Even when it is enabled, users have to pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` if they are
  really sure they want to configure a pool with size 1.
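
  For example, a minimal sequence (the pool name ``mypool`` and setting the
  option at the ``global`` level are only illustrative) would be::

    ceph config set global mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it
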
* librbd now inherits the stripe unit and count from its parent image upon creation.
  This can be overridden by specifying different stripe settings during clone creation.
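
  For example (the pool, image, and snapshot names are only illustrative; see
  ``rbd help clone`` for the full option syntax), a clone with its own striping
  could be created with::

    rbd clone --stripe-unit 64K --stripe-count 8 mypool/parent@snap1 mypool/child
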
* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support luminous
  and newer clients by default. Existing clusters can enable upmap support by running
  ``ceph osd set-require-min-compat-client luminous``. It is still possible to turn
  the balancer off using the ``ceph balancer off`` command. In earlier versions,
  the balancer was included in the ``always_on_modules`` list, but needed to be
  turned on explicitly using the ``ceph balancer on`` command.
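
  For example, the balancer state can be inspected and, if desired, disabled
  with::

    ceph balancer status
    ceph balancer off
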
* Cephadm: There were a lot of small usability improvements and bug fixes:

  * Grafana, when deployed by Cephadm, now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name.
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.
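
  For example, host problems and host reachability can be checked with the two
  commands mentioned above::

    cephadm check-host
    ceph orch host ls
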
* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  yaml representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.
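
  For example (the file name is only illustrative), the current specifications
  can be exported and fed back in with::

    ceph orch ls --export > specs.yaml
    ceph orch apply -i specs.yaml
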
* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a preview of
  the OSD specification before deploying OSDs. This makes it possible to
  verify that the specification is correct before applying it.
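
  For example, assuming the built-in ``--all-available-devices`` specification
  is used::

    ceph orch apply osd --all-available-devices --preview
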
* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``,
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster. They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.
* RBD: The name of the rbd pool object that is used to store the
  rbd trash purge schedule is changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using the
  ``rbd trash purge schedule`` functionality and have per pool or namespace
  schedules configured should copy the "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.
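
  Alternatively, a schedule can simply be re-created after the upgrade; for
  example (the interval is only illustrative; see ``rbd help trash purge
  schedule add`` for the exact syntax)::

    rbd trash purge schedule add -p <pool-name> 1d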