Mirror of https://github.com/ceph/ceph (synced 2025-01-21 02:31:19 +00:00)

Commit 8a62cbc074
"ceph-disk trigger" invocation is currently performed in a mutually exclusive fashion, with each call first taking an flock on the path /var/lock/ceph-disk. On systems with a lot of osds, this leads to a large amount of lock contention during boot-up, and can cause some service instances to trip the 120 second timeout. Take an flock on a device specific path instead of /var/lock/ceph-disk, so that concurrent "ceph-disk trigger" invocations are permitted for independent osds. This greatly reduces lock contention and consequently the chance of service timeout. Per-device concurrency restrictions required for http://tracker.ceph.com/issues/13160 are maintained. Fixes: http://tracker.ceph.com/issues/18049 Signed-off-by: David Disseldorp <ddiss@suse.de> |
Directory listing:

- 50-ceph.preset
- ceph
- ceph-disk@.service
- ceph-mds.target
- ceph-mds@.service
- ceph-mgr.target
- ceph-mgr@.service
- ceph-mon.target
- ceph-mon@.service
- ceph-osd.target
- ceph-osd@.service
- ceph-radosgw.target
- ceph-radosgw@.service
- ceph-rbd-mirror.target
- ceph-rbd-mirror@.service
- ceph.target
- ceph.tmpfiles.d
- CMakeLists.txt
- rbdmap.service
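The locking change described in the commit message above amounts to deriving the flock path from the device name instead of taking a single shared lock file. The sketch below is a minimal illustration of that idea in Python, not the actual ceph-disk implementation; the lock directory, the `ceph-disk-` file-name prefix, and the `device_lock` helper are assumptions made for the example.

```python
import fcntl
import os
from contextlib import contextmanager

# Hypothetical lock directory; the real path used by ceph-disk may differ.
LOCK_DIR = '/var/lock'

@contextmanager
def device_lock(dev):
    """Hold an exclusive flock on a per-device lock file.

    Using a path derived from the device name (e.g. /var/lock/ceph-disk-sdb)
    means concurrent "ceph-disk trigger" calls for *different* devices do not
    block each other, while two calls for the *same* device stay serialized.
    """
    # Base name of the device node, e.g. 'sdb' for '/dev/sdb'.
    name = os.path.basename(dev)
    path = os.path.join(LOCK_DIR, 'ceph-disk-' + name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # blocks only on this device's lock
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

if __name__ == '__main__':
    # Example: triggers for different devices can proceed in parallel,
    # because each waits only on its own lock file.
    with device_lock('/dev/sdb'):
        pass  # activate the OSD on /dev/sdb here
```

With a per-device path, a boot-time storm of "ceph-disk trigger" jobs only serializes calls that target the same device, which preserves the per-device restriction referenced for issue 13160, while calls for unrelated devices run in parallel and are far less likely to hit the 120 second service timeout.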