ceph/systemd/ceph-osd@.service.in

[Unit]
# The OSD id (%i) is included in the description so journal messages identify
# the instance, e.g. "Failed to start Ceph object storage daemon osd.27."
# instead of an ambiguous "Failed to start Ceph object storage daemon."
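# A minimal usage sketch (osd id 27 is only an assumed example):
#   systemctl enable --now ceph-osd@27
#   journalctl -u ceph-osd@27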
Description=Ceph object storage daemon osd.%i
PartOf=ceph-osd.target
# Start after the network, local filesystems and time synchronisation are up,
# and order before remote-fs-pre.target so that, on shutdown, rbdmap.service
# can unmap all RBD images (once remote-fs.target has unmounted the _netdev
# filesystems) before any Ceph daemon under ceph.target is stopped. This
# prevents deadlocks during graceful reboot of AIO single- or multi-node
# clusters that use loopback-mounted OSDs. The effective ordering can be
# inspected as shown after the directives below.
# See https://github.com/ceph/ceph/pull/36776
# Fixes: https://tracker.ceph.com/issues/47528
After=network-online.target local-fs.target time-sync.target
Before=remote-fs-pre.target ceph-osd.target
Wants=network-online.target local-fs.target time-sync.target remote-fs-pre.target ceph-osd.target
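# To inspect the effective ordering on a running host (osd id 27 assumed):
#   systemctl show -p After -p Before -p Wants ceph-osd@27.service
#   systemd-analyze critical-chain ceph-osd@27.service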

[Service]
Environment=CLUSTER=ceph
EnvironmentFile=-@SYSTEMD_ENV_FILE@
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
ExecStartPre=@CMAKE_INSTALL_FULL_LIBEXECDIR@/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i
LimitNOFILE=1048576
LimitNPROC=1048576
LockPersonality=true
MemoryDenyWriteExecute=true
# The daemon needs to gain new privileges via `sudo smartctl`, so NoNewPrivileges stays off
NoNewPrivileges=false
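# For illustration only: the smartctl call is assumed to look roughly like
#   sudo smartctl -a --json /dev/sdX
# and sudo cannot elevate when NoNewPrivileges=true.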
PrivateTmp=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
# flushing filestore requires access to /proc/sys/vm/drop_caches
ProtectKernelTunables=false
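# i.e. the daemon writes to that file, roughly `echo 1 > /proc/sys/vm/drop_caches`
# (the exact value is an assumption here), which needs /proc/sys to stay writable.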
ProtectSystem=full
Restart=on-failure
RestartSec=10
RestrictSUIDSGID=true
StartLimitBurst=3
StartLimitInterval=30min
TasksMax=infinity

[Install]
WantedBy=ceph-osd.target
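# Note: because instances are WantedBy=ceph-osd.target, enabling ceph-osd@<id>
# lets `systemctl start ceph-osd.target` bring up every enabled OSD on the host.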