* build: Remove ceph-libboost\* packages in install-deps (`pr#52769 <https://github.com/ceph/ceph/pull/52769>`_, Adam Emerson)
* ceph-volume/cephadm: support lv devices in inventory (`pr#53286 <https://github.com/ceph/ceph/pull/53286>`_, Guillaume Abrioux)
* ceph-volume: add --osd-id option to raw prepare (`pr#52927 <https://github.com/ceph/ceph/pull/52927>`_, Guillaume Abrioux)
* ceph-volume: fix a regression in `raw list` (`pr#54521 <https://github.com/ceph/ceph/pull/54521>`_, Guillaume Abrioux)
* ceph-volume: fix mpath device support (`pr#53539 <https://github.com/ceph/ceph/pull/53539>`_, Guillaume Abrioux)
* ceph-volume: fix raw list for lvm devices (`pr#52619 <https://github.com/ceph/ceph/pull/52619>`_, Guillaume Abrioux)
* ceph-volume: fix raw list for lvm devices (`pr#52980 <https://github.com/ceph/ceph/pull/52980>`_, Guillaume Abrioux)
* ceph-volume: Revert "ceph-volume: fix raw list for lvm devices" (`pr#54429 <https://github.com/ceph/ceph/pull/54429>`_, Matthew Booth, Guillaume Abrioux)
* ceph: allow xlock state to be LOCK_PREXLOCK when putting it (`pr#53661 <https://github.com/ceph/ceph/pull/53661>`_, Xiubo Li)
* ceph_fs.h: add separate owner\_{u,g}id fields (`pr#53138 <https://github.com/ceph/ceph/pull/53138>`_, Alexander Mikhalitsyn)
* ceph_volume: support encrypted volumes for lvm new-db/new-wal/migrate commands (`pr#52875 <https://github.com/ceph/ceph/pull/52875>`_, Igor Fedotov)
* cephadm batch backport Aug 23 (`pr#53124 <https://github.com/ceph/ceph/pull/53124>`_, Adam King, Luis Domingues, John Mulligan, Redouane Kachach)
* cephadm: add a --dry-run option to cephadm shell (`pr#54220 <https://github.com/ceph/ceph/pull/54220>`_, John Mulligan)
* cephadm: add tcmu-runner to logrotate config (`pr#53122 <https://github.com/ceph/ceph/pull/53122>`_, Adam King)
* cephadm: Adding support to configure public_network cfg section (`pr#53110 <https://github.com/ceph/ceph/pull/53110>`_, Redouane Kachach)
* cephadm: delete /tmp/cephadm-<fsid> when removing the cluster (`pr#53109 <https://github.com/ceph/ceph/pull/53109>`_, Redouane Kachach)
* cephadm: Fix extra_container_args for iSCSI (`pr#53010 <https://github.com/ceph/ceph/pull/53010>`_, Raimund Sacherer)
* cephadm: fix haproxy version with certain containers (`pr#53751 <https://github.com/ceph/ceph/pull/53751>`_, Adam King)
* cephadm: make custom_configs work for tcmu-runner container (`pr#53404 <https://github.com/ceph/ceph/pull/53404>`_, Adam King)
* cephadm: run tcmu-runner through script to do restart on failure (`pr#53866 <https://github.com/ceph/ceph/pull/53866>`_, Adam King)
* cephadm: support for CA signed keys (`pr#53121 <https://github.com/ceph/ceph/pull/53121>`_, Adam King)
* cephfs-journal-tool: disambiguate usage of all keyword (in tool help) (`pr#53646 <https://github.com/ceph/ceph/pull/53646>`_, Manish M Yathnalli)
* cephfs-mirror: do not run concurrent C_RestartMirroring context (`issue#62072 <http://tracker.ceph.com/issues/62072>`_, `pr#53638 <https://github.com/ceph/ceph/pull/53638>`_, Venky Shankar)
* client: do not send metrics until the MDS rank is ready (`pr#52501 <https://github.com/ceph/ceph/pull/52501>`_, Xiubo Li)
* client: force sending cap revoke ack always (`pr#52507 <https://github.com/ceph/ceph/pull/52507>`_, Xiubo Li)
* client: issue a cap release immediately if no cap exists (`pr#52850 <https://github.com/ceph/ceph/pull/52850>`_, Xiubo Li)
* client: move the Inode to new auth mds session when changing auth cap (`pr#53666 <https://github.com/ceph/ceph/pull/53666>`_, Xiubo Li)
* client: trigger to flush the buffer when making snapshot (`pr#52497 <https://github.com/ceph/ceph/pull/52497>`_, Xiubo Li)
* client: wait rename to finish (`pr#52504 <https://github.com/ceph/ceph/pull/52504>`_, Xiubo Li)
* cmake: ensure fmtlib is at least 8.1.1 (`pr#52970 <https://github.com/ceph/ceph/pull/52970>`_, Abhishek Lekshmanan)
* Consider setting "bulk" autoscale pool flag when automatically creating a data pool for CephFS (`pr#52899 <https://github.com/ceph/ceph/pull/52899>`_, Leonid Usov)
* crimson/admin/admin_socket: remove path file if it exists (`pr#53964 <https://github.com/ceph/ceph/pull/53964>`_, Matan Breizman)
* crimson/ertr: assert on invocability of func provided to safe_then() (`pr#53958 <https://github.com/ceph/ceph/pull/53958>`_, Radosław Zarzyński)
* crimson/mgr: Fix config show command (`pr#53954 <https://github.com/ceph/ceph/pull/53954>`_, Aishwarya Mathuria)
* crimson/net: set TCP_NODELAY according to ms_tcp_nodelay (`pr#54063 <https://github.com/ceph/ceph/pull/54063>`_, Xuehan Xu)
* crimson/net: support connections in multiple shards (`pr#53949 <https://github.com/ceph/ceph/pull/53949>`_, Yingxin Cheng)
* crimson/os/object_data_handler: splitting right side doesn't mean splitting only one extent (`pr#54061 <https://github.com/ceph/ceph/pull/54061>`_, Xuehan Xu)
* crimson/os/seastore/omap_manager: fix the entry leak issue in BtreeOMapManager::omap_list() (`pr#53962 <https://github.com/ceph/ceph/pull/53962>`_, Xuehan Xu)
* crimson/os/seastore/onode_manager: populate value recorders of onodes to be erased (`pr#53966 <https://github.com/ceph/ceph/pull/53966>`_, Xuehan Xu)
* crimson/os/seastore/rbm: make rbm support multiple shards (`pr#53952 <https://github.com/ceph/ceph/pull/53952>`_, Myoungwon Oh)
* crimson/os/seastore/transaction_manager: data loss issues (`pr#53955 <https://github.com/ceph/ceph/pull/53955>`_, Xuehan Xu)
* crimson/os/seastore/transaction_manager: move intermediate_key by "remap_offset" when remapping the "back" half of the original pin (`pr#54140 <https://github.com/ceph/ceph/pull/54140>`_, Xuehan Xu)
* crimson/osd/object_context: consider clones found as long as they're in SnapSet::clones (`pr#53965 <https://github.com/ceph/ceph/pull/53965>`_, Xuehan Xu)
* crimson/osd/osd_operations: add pipeline to LogMissingRequest to sync it (`pr#53957 <https://github.com/ceph/ceph/pull/53957>`_, Xuehan Xu)
* doc: update test cluster commands in README.md (`pr#53349 <https://github.com/ceph/ceph/pull/53349>`_, Zac Dover)
* exporter: add ceph_daemon labels to labeled counters as well (`pr#53695 <https://github.com/ceph/ceph/pull/53695>`_, avanthakkar)
* exposed the open api and telemetry links in details card (`pr#53142 <https://github.com/ceph/ceph/pull/53142>`_, cloudbehl, dpandit)
* libcephsqlite: fill 0s in unread portion of buffer (`pr#53101 <https://github.com/ceph/ceph/pull/53101>`_, Patrick Donnelly)
* librbd: kick ExclusiveLock state machine on client being blocklisted when waiting for lock (`pr#53293 <https://github.com/ceph/ceph/pull/53293>`_, Ramana Raja)
* librbd: kick ExclusiveLock state machine stalled waiting for lock from reacquire_lock() (`pr#53919 <https://github.com/ceph/ceph/pull/53919>`_, Ramana Raja)
* librbd: make CreatePrimaryRequest remove any unlinked mirror snapshots (`pr#53276 <https://github.com/ceph/ceph/pull/53276>`_, Ilya Dryomov)
* MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_retry, ext_num_fwd, owner_uid, owner_gid (`pr#54407 <https://github.com/ceph/ceph/pull/54407>`_, Alexander Mikhalitsyn)
* MDS imported_inodes metric is not updated (`pr#51698 <https://github.com/ceph/ceph/pull/51698>`_, Yongseok Oh)
* mds/FSMap: allow upgrades if no up mds (`pr#53851 <https://github.com/ceph/ceph/pull/53851>`_, Patrick Donnelly)
* mds/Server: mark a cap acquisition throttle event in the request (`pr#53168 <https://github.com/ceph/ceph/pull/53168>`_, Leonid Usov)
* mds: acquire inode snaplock in open (`pr#53183 <https://github.com/ceph/ceph/pull/53183>`_, Patrick Donnelly)
* mds: add event for batching getattr/lookup (`pr#53558 <https://github.com/ceph/ceph/pull/53558>`_, Patrick Donnelly)
* mds: adjust pre_segments_size for MDLog when trimming segments for st… (`issue#59833 <http://tracker.ceph.com/issues/59833>`_, `pr#54035 <https://github.com/ceph/ceph/pull/54035>`_, Venky Shankar)
* mgr/dashboard: add e2e tests for cephfs management (`pr#53190 <https://github.com/ceph/ceph/pull/53190>`_, Nizamudeen A)
* mgr/dashboard: Add more decimals in latency graph (`pr#52727 <https://github.com/ceph/ceph/pull/52727>`_, Pedro Gonzalez Gomez)
* mgr/dashboard: add port and zone endpoints to import realm token form in rgw multisite (`pr#54118 <https://github.com/ceph/ceph/pull/54118>`_, Aashish Sharma)
* mgr/dashboard: add validator for size field in the forms (`pr#53378 <https://github.com/ceph/ceph/pull/53378>`_, Nizamudeen A)
* mgr/dashboard: align charts of landing page (`pr#53543 <https://github.com/ceph/ceph/pull/53543>`_, Pedro Gonzalez Gomez)
* mgr/dashboard: allow PUT in CORS (`pr#52705 <https://github.com/ceph/ceph/pull/52705>`_, Nizamudeen A)
* mgr/dashboard: allow tls 1.2 with a config option (`pr#53780 <https://github.com/ceph/ceph/pull/53780>`_, Nizamudeen A)
* mgr/dashboard: Block Ui fails in angular with target es2022 (`pr#54260 <https://github.com/ceph/ceph/pull/54260>`_, Aashish Sharma)
* mgr/dashboard: cephfs volume and subvolume management (`pr#53017 <https://github.com/ceph/ceph/pull/53017>`_, Pedro Gonzalez Gomez, Nizamudeen A, Pere Diaz Bou)
* mgr/dashboard: cephfs volume rm and rename (`pr#53026 <https://github.com/ceph/ceph/pull/53026>`_, avanthakkar)
* mgr/dashboard: cleanup rbd-mirror process in dashboard e2e (`pr#53220 <https://github.com/ceph/ceph/pull/53220>`_, Nizamudeen A)
* mgr/dashboard: Object gateway inventory card incorrect Buckets and user count (`pr#53382 <https://github.com/ceph/ceph/pull/53382>`_, Aashish Sharma)
* mgr/dashboard: Object gateway sync status cards keeps loading when multisite is not configured (`pr#53381 <https://github.com/ceph/ceph/pull/53381>`_, Aashish Sharma)
* mgr/dashboard: paginate hosts (`pr#52918 <https://github.com/ceph/ceph/pull/52918>`_, Pere Diaz Bou)
* mgr/dashboard: rbd image hide usage bar when disk usage is not provided (`pr#53810 <https://github.com/ceph/ceph/pull/53810>`_, Pedro Gonzalez Gomez)
* mgr/dashboard: remove empty popover when there are no health warns (`pr#53652 <https://github.com/ceph/ceph/pull/53652>`_, Nizamudeen A)
* mgr/dashboard: remove green tick on old password field (`pr#53386 <https://github.com/ceph/ceph/pull/53386>`_, Nizamudeen A)
* mgr/dashboard: remove used and total used columns in favor of usage bar (`pr#53304 <https://github.com/ceph/ceph/pull/53304>`_, Pedro Gonzalez Gomez)
* mgr/dashboard: replace sync progress bar with last synced timestamp in rgw multisite sync status card (`pr#53379 <https://github.com/ceph/ceph/pull/53379>`_, Aashish Sharma)
* mgr/dashboard: set CORS header for unauthorized access (`pr#53201 <https://github.com/ceph/ceph/pull/53201>`_, Nizamudeen A)
* mgr/dashboard: show a message to restart the rgw daemons after moving from single-site to multi-site (`pr#53805 <https://github.com/ceph/ceph/pull/53805>`_, Aashish Sharma)
* mgr/dashboard: subvolume rm with snapshots (`pr#53233 <https://github.com/ceph/ceph/pull/53233>`_, Pedro Gonzalez Gomez)
* mgr/dashboard: update rgw multisite import form helper info (`pr#54253 <https://github.com/ceph/ceph/pull/54253>`_, Aashish Sharma)
* mgr/dashboard: upgrade angular v14 and v15 (`pr#52662 <https://github.com/ceph/ceph/pull/52662>`_, Nizamudeen A)
* mgr/snap_schedule: allow retention spec 'n' to be user defined (`pr#52748 <https://github.com/ceph/ceph/pull/52748>`_, Milind Changire, Jakob Haufe)
* mgr/snap_schedule: make fs argument mandatory if more than one filesystem exists (`pr#54094 <https://github.com/ceph/ceph/pull/54094>`_, Milind Changire)
* mgr/volumes: Fix pending_subvolume_deletions in volume info (`pr#53572 <https://github.com/ceph/ceph/pull/53572>`_, Kotresh HR)
* mgr: register OSDs in ms_handle_accept (`pr#53187 <https://github.com/ceph/ceph/pull/53187>`_, Patrick Donnelly)
* mon, qa: issue pool application warning even if pool is empty (`pr#53041 <https://github.com/ceph/ceph/pull/53041>`_, Prashant D)
* os/bluestore: don't require bluestore_db_block_size when attaching new (`pr#52942 <https://github.com/ceph/ceph/pull/52942>`_, Igor Fedotov)
* os/bluestore: get rid off resulting lba alignment in allocators (`pr#54772 <https://github.com/ceph/ceph/pull/54772>`_, Igor Fedotov)
* osd/OpRequest: Add detail description for delayed op in osd log file (`pr#53688 <https://github.com/ceph/ceph/pull/53688>`_, Yite Gu)
* osd/OSDMap: Check for uneven weights & != 2 buckets post stretch mode (`pr#52457 <https://github.com/ceph/ceph/pull/52457>`_, Kamoltat)
* osd/scheduler/mClockScheduler: Use same profile and client ids for all clients to ensure allocated QoS limit consumption (`pr#53093 <https://github.com/ceph/ceph/pull/53093>`_, Sridhar Seshasayee)
* osd: fix logic in check_pg_upmaps (`pr#54276 <https://github.com/ceph/ceph/pull/54276>`_, Laura Flores)
* osd: fix read balancer logic to avoid redundant primary assignment (`pr#53820 <https://github.com/ceph/ceph/pull/53820>`_, Laura Flores)
* osd: fix use-after-move in build_incremental_map_msg() (`pr#54267 <https://github.com/ceph/ceph/pull/54267>`_, Ronen Friedman)
* osd: fix: slow scheduling when item_cost is large (`pr#53861 <https://github.com/ceph/ceph/pull/53861>`_, Jrchyang Yu)
* pybind/mgr/devicehealth: do not crash if db not ready (`pr#52213 <https://github.com/ceph/ceph/pull/52213>`_, Patrick Donnelly)
* pybind/mgr/pg_autoscaler: Cut back osdmap.get_pools calls (`pr#52767 <https://github.com/ceph/ceph/pull/52767>`_, Kamoltat)
* pybind/mgr/pg_autoscaler: fix warn when not too few pgs (`pr#53674 <https://github.com/ceph/ceph/pull/53674>`_, Kamoltat)
* pybind/mgr/pg_autoscaler: noautoscale flag retains individual pool configs (`pr#53658 <https://github.com/ceph/ceph/pull/53658>`_, Kamoltat)
* pybind/mgr/pg_autoscaler: Reorderd if statement for the func: _maybe_adjust (`pr#53429 <https://github.com/ceph/ceph/pull/53429>`_, Kamoltat)
* pybind/mgr/pg_autoscaler: Use bytes_used for actual_raw_used (`pr#53534 <https://github.com/ceph/ceph/pull/53534>`_, Kamoltat)
* pybind/mgr/volumes: log mutex locks to help debug deadlocks (`pr#53918 <https://github.com/ceph/ceph/pull/53918>`_, Kotresh HR)
* pybind/mgr: reopen database handle on blocklist (`pr#52460 <https://github.com/ceph/ceph/pull/52460>`_, Patrick Donnelly)
* pybind/rbd: don't produce info on errors in aio_mirror_image_get_info() (`pr#54055 <https://github.com/ceph/ceph/pull/54055>`_, Ilya Dryomov)
* python-common/drive_group: handle fields outside of 'spec' even when 'spec' is provided (`pr#53115 <https://github.com/ceph/ceph/pull/53115>`_, Adam King)
* python-common/drive_selection: lower log level of limit policy message (`pr#53114 <https://github.com/ceph/ceph/pull/53114>`_, Adam King)
* python-common: drive_selection: fix KeyError when osdspec_affinity is not set (`pr#53159 <https://github.com/ceph/ceph/pull/53159>`_, Guillaume Abrioux)
* qa/cephfs: switch to python3 for centos stream 9 (`pr#53624 <https://github.com/ceph/ceph/pull/53624>`_, Xiubo Li)
* qa/rgw: add new POOL_APP_NOT_ENABLED failures to log-ignorelist (`pr#53896 <https://github.com/ceph/ceph/pull/53896>`_, Casey Bodley)
* qa/smoke,orch,perf-basic: add POOL_APP_NOT_ENABLED to ignorelist (`pr#54376 <https://github.com/ceph/ceph/pull/54376>`_, Prashant D)
* qa/standalone/osd/divergent-prior.sh: Divergent test 3 with pg_autoscale_mode on pick divergent osd (`pr#52721 <https://github.com/ceph/ceph/pull/52721>`_, Nitzan Mordechai)
* rgw/multisite[archive zone]: fix storing of bucket instance info in the new bucket entrypoint (`pr#53466 <https://github.com/ceph/ceph/pull/53466>`_, Shilpa Jagannath)
* rgw/notification: pass in bytes_transferred to populate object_size in sync notification (`pr#53377 <https://github.com/ceph/ceph/pull/53377>`_, Juan Zhu)
* rgw/notification: remove non x-amz-meta-\* attributes from bucket notifications (`pr#53375 <https://github.com/ceph/ceph/pull/53375>`_, Juan Zhu)
* Users who have opted in to telemetry can also opt in to participate in a
  leaderboard in the telemetry public dashboards
  (https://telemetry-public.ceph.com/). In addition, users are now able to
  provide a description of their cluster that will appear publicly in the
  leaderboard. For more details, see:
  https://docs.ceph.com/en/reef/mgr/telemetry/#leaderboard. To see a sample
  report, run ``ceph telemetry preview``. To opt in to telemetry, run ``ceph
  telemetry on``. To opt in to the leaderboard, run ``ceph config set mgr
  mgr/telemetry/leaderboard true``. To add a leaderboard description, run
  ``ceph config set mgr mgr/telemetry/leaderboard_description 'Cluster
  description'`` (entering your own cluster description).
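
  Taken together, the opt-in sequence looks like this (the description string
  below is only an example; substitute your own):

  .. prompt:: bash #

     ceph telemetry preview
     ceph telemetry on
     ceph config set mgr mgr/telemetry/leaderboard true
     ceph config set mgr mgr/telemetry/leaderboard_description 'My lab cluster, 12 OSDs'
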
Upgrading from Pacific or Quincy
--------------------------------

Before starting, make sure your cluster is stable and healthy (no down or
recovering OSDs). You can disable the autoscaler for all pools during the
upgrade using the ``noautoscale`` flag; this is optional, but recommended.
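
For example (a sketch using the global ``noautoscale`` flag introduced in
Quincy):

.. prompt:: bash #

   ceph osd pool set noautoscale     # before starting the upgrade
   ceph osd pool unset noautoscale   # after the upgrade completes
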
.. note::

   You can monitor the progress of your upgrade at each stage with the
   ``ceph versions`` command, which will tell you what ceph version(s) are
   running for each type of daemon.
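
For illustration, a cluster midway through an upgrade might report something
like the following (hypothetical output; the exact version and build strings
will differ in your cluster):

.. prompt:: bash #

   ceph versions

::

   {
       "mon": {
           "ceph version 18.2.x (...) reef (stable)": 3
       },
       "mgr": {
           "ceph version 18.2.x (...) reef (stable)": 2
       },
       "osd": {
           "ceph version 17.2.x (...) quincy (stable)": 8
       },
       "overall": {
           "ceph version 17.2.x (...) quincy (stable)": 8,
           "ceph version 18.2.x (...) reef (stable)": 5
       }
   }
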
Upgrading cephadm clusters
~~~~~~~~~~~~~~~~~~~~~~~~~~

If your cluster is deployed with cephadm (first introduced in Octopus), then
the upgrade process is entirely automated. To initiate the upgrade, use the
``ceph orch upgrade start`` command, as sketched below.
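
A minimal sketch (``v18.2.z`` is a placeholder; substitute the image tag of
the Reef release you are upgrading to):

.. prompt:: bash #

   ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.z
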
Upgrade progress can also be monitored with ``ceph -s`` (which provides a
simple progress bar) or more verbosely with:

.. prompt:: bash #

   ceph -W cephadm

The upgrade can be paused or resumed with:

.. prompt:: bash #

   ceph orch upgrade pause   # to pause
   ceph orch upgrade resume  # to resume

or canceled with:

.. prompt:: bash #

   ceph orch upgrade stop

Note that canceling the upgrade simply stops the process; there is no ability to downgrade back to Pacific or Quincy.

Upgrading non-cephadm clusters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

   1. If your cluster is running Pacific (16.2.x) or later, you might choose
      to first convert it to use cephadm so that the upgrade to Reef is
      automated (see above). For more information, see
      https://docs.ceph.com/en/reef/cephadm/adoption/.

   2. If your cluster is running Pacific (16.2.x) or later, systemd unit file
      names have changed to include the cluster fsid. To find the correct
      systemd unit file name for your cluster, run the following command:

      .. prompt:: bash #

         systemctl -l | grep <daemon type>

      Example:

      .. prompt:: bash #

         systemctl -l | grep mon | grep active

      ::

         ceph-6ce0347c-314a-11ee-9b52-000af7995d6c@mon.f28-h21-000-r630.service  loaded active running  Ceph mon.f28-h21-000-r630 for 6ce0347c-314a-11ee-9b52-000af7995d6c

#. Set the ``noout`` flag for the duration of the upgrade. (Optional, but
   recommended.)

   .. prompt:: bash #

      ceph osd set noout

#. Upgrade monitors by installing the new packages and restarting the monitor
   daemons. For example, on each monitor host:

   .. prompt:: bash #

      systemctl restart ceph-mon.target

   Once all monitors are up, verify that the monitor upgrade is complete by
   looking for the ``reef`` string in the mon map. The command:

   .. prompt:: bash #

      ceph mon dump | grep min_mon_release

   should report::

      min_mon_release 18 (reef)

   If it does not, that implies that one or more monitors have not been
   upgraded and restarted, and/or that the quorum does not include all
   monitors.

#. Upgrade ``ceph-mgr`` daemons by installing the new packages and restarting
   all manager daemons. For example, on each manager host:

   .. prompt:: bash #

      systemctl restart ceph-mgr.target

   Verify the ``ceph-mgr`` daemons are running by checking ``ceph -s``:

   .. prompt:: bash #

      ceph -s

   ::

      ...
      services:
        mon: 3 daemons, quorum foo,bar,baz
        mgr: foo(active), standbys: bar, baz
      ...

#. Upgrade all OSDs by installing the new packages and restarting the
   ceph-osd daemons on all OSD hosts, as in the sketch below.
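
   A minimal sketch, assuming the same systemd target convention as the
   monitor and manager steps above:

   .. prompt:: bash #

      systemctl restart ceph-osd.target
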
#. If you set ``noout`` at the beginning, be sure to clear it with:

   .. prompt:: bash #

      ceph osd unset noout

#. Consider transitioning your cluster to use the cephadm deployment and orchestration framework to simplify cluster management and future upgrades. For more information on converting an existing cluster to cephadm, see https://docs.ceph.com/en/reef/cephadm/adoption/.

Post-upgrade
~~~~~~~~~~~~

#. Verify the cluster is healthy with ``ceph health``. If your cluster is
   running Filestore, and you are upgrading directly from Pacific to Reef, a
   deprecation warning is expected. This warning can be temporarily muted
   using the following command:

   .. prompt:: bash #

      ceph health mute OSD_FILESTORE

#. Consider enabling the `telemetry module <https://docs.ceph.com/en/reef/mgr/telemetry/>`_
   to send anonymized usage statistics and crash information to the Ceph
   upstream developers. To see what would be reported (without actually
   sending any information to anyone):

   .. prompt:: bash #

      ceph telemetry preview-all

   If you are comfortable with the data that is reported, you can opt in to
   automatically report the high-level cluster metadata with:
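
   .. prompt:: bash #

      ceph telemetry on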