>=15.2.1
--------

* CVE-2020-10736: Fixes an authorization bypass in monitor and manager daemons

* The behaviour of the ``-o`` argument to the rados tool has been reverted to
  its original behaviour of indicating an output file. This reverts it to a
  more consistent behaviour when compared to other tools. Specifying object
  size is now accomplished by using an upper case O (``-O``).

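  For example, a write benchmark with a 64 KiB object size (the pool name,
  the size, and the choice of sub-command here are only illustrative)::

    rados -p mypool bench 10 write -O 65536
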
* In certain rare cases, OSDs would self-classify as type 'nvme' instead of
  'hdd' or 'ssd'. This appears to be limited to cases where BlueStore was
  deployed with older versions of ceph-disk, or manually without ceph-volume
  and LVM. Going forward, the OSD will limit itself to only 'hdd' and 'ssd'
  (or whatever device class the user manually specifies).

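  If an OSD has already been mis-classified, its device class can be corrected
  manually, for example (the osd id and class here are illustrative)::

    ceph osd crush rm-device-class osd.3
    ceph osd crush set-device-class ssd osd.3
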
* RGW: a mismatch between the bucket notification documentation and the actual
  message format was fixed. This means that any endpoint receiving bucket
  notifications will now receive the same notifications inside a JSON array
  named 'Records'. Note that this does not affect pulling bucket notifications
  from a subscription in a 'pubsub' zone, as these are already wrapped inside
  that array.

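  Schematically, a pushed notification now has the following shape (fields
  abbreviated, values illustrative)::

    {"Records": [
        {"eventVersion": "2.1",
         "eventSource": "ceph:s3",
         "eventName": "ObjectCreated:Put",
         "s3": {"bucket": {"name": "mybucket"},
                "object": {"key": "myobject"}}}
    ]}
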
* The configuration value ``osd_calc_pg_upmaps_max_stddev`` used for upmap
  balancing has been removed. Instead use the mgr balancer config
  ``upmap_max_deviation``, which is now an integer number of PGs of deviation
  from the target PGs per OSD. This can be set with a command like
  ``ceph config set mgr mgr/balancer/upmap_max_deviation 2``. The default
  ``upmap_max_deviation`` is 1. There are situations where crush rules would
  never allow a pool to have completely balanced PGs, for example if crush
  requires 1 replica on each of 3 racks but there are fewer OSDs in 1 of the
  racks. In those cases, the configuration value can be increased.

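  For example, to relax the target to 2 PGs of deviation and then confirm the
  setting (the value is illustrative)::

    ceph config set mgr mgr/balancer/upmap_max_deviation 2
    ceph config get mgr mgr/balancer/upmap_max_deviation
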
* MDS daemons can now be assigned to manage a particular file system via the
  new ``mds_join_fs`` option. The monitors will prefer to use only MDS daemons
  whose ``mds_join_fs`` is equal to the file system name (strong affinity).
  Monitors may also deliberately fail over an active MDS to a standby when the
  cluster is otherwise healthy if the standby has stronger affinity.

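  For example, to give the daemon ``mds.a`` (name illustrative) a strong
  affinity for a file system named ``cephfs``::

    ceph config set mds.a mds_join_fs cephfs
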
* RGW Multisite: A new fine-grained bucket-granularity policy configuration
  system has been introduced and it supersedes the previous coarse zone sync
  configuration (specifically the ``sync_from`` and ``sync_from_all`` fields
  in the zonegroup configuration). The new configuration should only be
  applied after all relevant zones in the zonegroup have been upgraded.

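  As a rough sketch of the new sync policy commands (the group, flow, and pipe
  ids below are illustrative; see the multisite sync policy documentation for
  the full workflow)::

    radosgw-admin sync group create --group-id=group1 --status=enabled
    radosgw-admin sync group flow create --group-id=group1 \
        --flow-id=flow-mirror --flow-type=symmetrical --zones='*'
    radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
        --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
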
* RGW S3: Support has been added for the BlockPublicAccess set of APIs at
  bucket level; currently, blocking/ignoring public ACLs and policies is
  supported. User/account level APIs are planned to be added in the future.

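  For example, using the AWS CLI against an RGW endpoint (the endpoint and
  bucket name are illustrative)::

    aws --endpoint-url http://rgw.example.com:8000 s3api put-public-access-block \
        --bucket mybucket \
        --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true
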
* RGW: The default number of bucket index shards for new buckets was raised
  from 1 to 11 to increase the amount of write throughput for small buckets
  and delay the onset of dynamic resharding. This change only affects new
  deployments/zones. To change this default value on existing deployments,
  use ``radosgw-admin zonegroup modify --bucket-index-max-shards=11``.
  If the zonegroup is part of a realm, the change must be committed with
  ``radosgw-admin period update --commit`` - otherwise the change will take
  effect after radosgws are restarted.

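  For example::

    radosgw-admin zonegroup modify --bucket-index-max-shards=11
    radosgw-admin period update --commit   # only needed if the zonegroup is part of a realm
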
* Monitors now have the config option ``mon_allow_pool_size_one``, which is
  disabled by default. Even when it is enabled, users have to pass the
  ``--yes-i-really-mean-it`` flag to ``osd pool set size 1`` if they are
  really sure about configuring a pool size of 1.

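  A minimal sketch (the pool name is illustrative)::

    ceph config set mon mon_allow_pool_size_one true
    ceph osd pool set mypool size 1 --yes-i-really-mean-it
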
* librbd now inherits the stripe unit and count from its parent image upon
  creation. This can be overridden by specifying different stripe settings
  during clone creation.

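  For example, to override the inherited striping when cloning (the image,
  snapshot, and stripe settings are illustrative)::

    rbd clone --stripe-unit 64K --stripe-count 8 rbd/parent@snap1 rbd/child
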
* The balancer is now on by default in upmap mode. Since upmap mode requires
  ``require_min_compat_client`` luminous, new clusters will only support
  luminous and newer clients by default. Existing clusters can enable upmap
  support by running ``ceph osd set-require-min-compat-client luminous``. It
  is still possible to turn the balancer off using the ``ceph balancer off``
  command. In earlier versions, the balancer was included in the
  ``always_on_modules`` list, but needed to be turned on explicitly using the
  ``ceph balancer on`` command.

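  For example, to check the balancer state and turn it off if desired::

    ceph balancer status
    ceph balancer off
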
* Cephadm: There were a lot of small usability improvements and bug fixes:

  * Grafana when deployed by Cephadm now binds to all network interfaces.
  * ``cephadm check-host`` now prints all detected problems at once.
  * Cephadm now calls ``ceph dashboard set-grafana-api-ssl-verify false``
    when generating an SSL certificate for Grafana.
  * The Alertmanager is now correctly pointed to the Ceph Dashboard.
  * ``cephadm adopt`` now supports adopting an Alertmanager.
  * ``ceph orch ps`` now supports filtering by service name.
  * ``ceph orch host ls`` now marks hosts as offline if they are not
    accessible.

* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
  a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
  nfs-ns::

    ceph orch apply nfs mynfs nfs-ganesha nfs-ns

* Cephadm: ``ceph orch ls --export`` now returns all service specifications in
  yaml representation that is consumable by ``ceph orch apply``. In addition,
  the commands ``orch ps`` and ``orch ls`` now support ``--format yaml`` and
  ``--format json-pretty``.

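  For example, to export all specifications and apply them again later (the
  file name is illustrative)::

    ceph orch ls --export --format yaml > specs.yaml
    ceph orch apply -i specs.yaml
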
* Cephadm: ``ceph orch apply osd`` supports a ``--preview`` flag that prints a
  preview of the OSD specification before deploying OSDs. This makes it
  possible to verify that the specification is correct before applying it.

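  For example, with an OSD specification in a local file (the file name is
  illustrative)::

    ceph orch apply osd -i osd_spec.yml --preview
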
* RGW: The ``radosgw-admin`` sub-commands dealing with orphans --
  ``radosgw-admin orphans find``, ``radosgw-admin orphans finish``, and
  ``radosgw-admin orphans list-jobs`` -- have been deprecated. They have
  not been actively maintained and they store intermediate results on
  the cluster, which could fill a nearly-full cluster. They have been
  replaced by a tool, currently considered experimental,
  ``rgw-orphan-list``.

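  For example (the data pool name is illustrative and depends on the zone
  configuration)::

    rgw-orphan-list default.rgw.buckets.data
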
* RBD: The name of the rbd pool object that is used to store the
  rbd trash purge schedule has changed from "rbd_trash_trash_purge_schedule"
  to "rbd_trash_purge_schedule". Users that have already started using the
  ``rbd trash purge schedule`` functionality and have per-pool or per-namespace
  schedules configured should copy the "rbd_trash_trash_purge_schedule"
  object to "rbd_trash_purge_schedule" before the upgrade and remove
  "rbd_trash_trash_purge_schedule" using the following commands in every RBD
  pool and namespace where a trash purge schedule was previously
  configured::

    rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
    rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule

  or use any other convenient way to restore the schedule after the
  upgrade.