[Unit]
Description=Ceph object storage daemon osd.%i
After=network-online.target local-fs.target time-sync.target ceph-mon.target
Wants=network-online.target local-fs.target time-sync.target
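# Stop and restart requests on ceph-osd.target propagate to this unit
# (PartOf= does not propagate start requests).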
PartOf=ceph-osd.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
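# @SYSTEMD_ENV_FILE@ is substituted at build time; the leading "-" tells
# systemd not to treat a missing environment file as an error.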
EnvironmentFile=-@SYSTEMD_ENV_FILE@
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
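# Note: ExecStartPre= always runs before ExecStart=, regardless of the
# order the directives appear in this file.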
ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i
ExecReload=/bin/kill -HUP $MAINPID
LockPersonality=true
MemoryDenyWriteExecute=true
# ceph-osd invokes `sudo smartctl`, which requires acquiring new privileges
NoNewPrivileges=false
ProtectControlGroups=true
ProtectHome=true
ProtectKernelModules=true
# flushing filestore requires access to /proc/sys/vm/drop_caches
ProtectKernelTunables=false
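# "full" mounts /usr, /boot and /etc read-only for this service.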
ProtectSystem=full
PrivateTmp=true
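# ceph-osd is heavily threaded and every thread counts against the task
# limit, so do not cap the number of tasks for this unit.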
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
# Restart at most 3 times within 30 minutes, as fast as possible.
#
# We originally configured our init systems to restart an OSD only 3
# times in a 30-minute window, so that a permanently-slow OSD would stay
# dead and an OSD dying late in a long boot would not insist on
# rejoining the cluster for too long.
# 62084375fa8370ca3884327b4a4ad28e0281747e applied the same rules to
# systemd, but races between udev and ceph-disk under systemd kept OSDs
# from starting, so b3887379d6dde3b5a44f2e84cf917f4f0a0cb120 relaxed the
# limit to 30 restarts in 30 minutes, no more often than every 20
# seconds. With ceph-volume we no longer rely on udev and ceph-disk and
# are not susceptible to that race, so we can return to the old
# behavior: bad OSDs fail hard and fast.
#
# Partly-reverts: b3887379d6dde3b5a44f2e84cf917f4f0a0cb120
# Partly-fixes: http://tracker.ceph.com/issues/24368
StartLimitBurst=3
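# A unit that hits the start limit stays failed; `systemctl
# reset-failed ceph-osd@<id>` clears the counter so it can start again.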
[Install]
WantedBy=ceph-osd.target
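
# This is a template unit; instantiate it once per OSD id, e.g.:
#   systemctl enable --now ceph-osd@3
# where "3" replaces every %i in the unit above.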