This fixes restart when multiple instances are running.
Fixes: #12407
Tested-by: Pavan Rallabhandi <pavan.rallabhandi@sandisk.com>
Signed-off-by: Sage Weil <sage@redhat.com>
If installed on Ubuntu, where multipath does not activate properly, it
interferes with the other tests.
Signed-off-by: Loic Dachary <ldachary@redhat.com>
After preparing an OSD, wait for the corresponding OSD to be up
according to ceph osd dump before asserting the devices are in the
expected state. Otherwise the test races with ceph-disk activate, which
is run asynchronously via udev / upstart / systemd.
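A minimal sketch of such a wait, assuming a hypothetical test helper
polling the JSON output of ceph osd dump:

    import json
    import subprocess
    import time

    def wait_for_osd_up(osd_id, timeout=300, interval=5):
        # Hypothetical helper: poll "ceph osd dump" until the OSD is
        # reported up, so the test does not race with the asynchronous
        # ceph-disk activate.
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = json.loads(subprocess.check_output(
                ['ceph', 'osd', 'dump', '--format=json']))
            for osd in dump.get('osds', []):
                if osd['osd'] == osd_id and osd['up']:
                    return
            time.sleep(interval)
        raise RuntimeError('osd.%d still not up after %ds'
                           % (osd_id, timeout))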
Signed-off-by: Loic Dachary <ldachary@redhat.com>
It turns out it was not CentOS 7 specific. There is no excuse to skip
the tests anymore.
http://tracker.ceph.com/issues/12787
Refs: #12787
Signed-off-by: Loic Dachary <ldachary@redhat.com>
When calling partprobe, we make sure there is at least one udev add
event for each partition created when preparing a device. But there is
no guarantee that the udev add for the data partition will be last, and
the following scenario can happen:
- the udev add for the data partition fails because the journal
  partition is still owned by root
- the udev add for the journal partition chowns the journal partition
- no other udev add event is sent and the OSD does not activate
An additional, possibly redundant, udev add event is fired for the data
partition after partprobe is run and after udevadm settle completes, to
guarantee there is at least one udev add for the data partition after
the last udev add for the journal partition.
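A hedged sketch of firing that extra event with udevadm trigger (the
exact mechanism ceph-disk uses may differ):

    import subprocess

    def trigger_data_partition_add(partition):
        # 'partition' is a kernel device name such as 'sdb1'
        # (assumption). Fire a possibly redundant udev add event for
        # the data partition...
        subprocess.check_call(['udevadm', 'trigger', '--action=add',
                               '--sysname-match=' + partition])
        # ...and wait until it, and the rules it fires, are processed.
        subprocess.check_call(['udevadm', 'settle'])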
http://tracker.ceph.com/issues/12787
Fixes: #12787
Signed-off-by: Loic Dachary <ldachary@redhat.com>
The update_partition call in main_prepare happens immediately after
prepare_dev but only if the data argument is a block device. There is no
reason for this separation: it is more sensible to call it from within
prepare_dev.
An additional check in prepare_dev verifies that partprobe is not
called on a partition, where it would not make sense.
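A minimal sketch of that guard, with a simplified is_partition
stand-in (the real ceph-disk helpers differ):

    import os
    import subprocess

    def is_partition(dev):
        # Simplified stand-in (assumption): partitions, unlike whole
        # disks, expose a 'partition' attribute in sysfs.
        name = os.path.basename(os.path.realpath(dev))
        return os.path.exists('/sys/class/block/%s/partition' % name)

    def prepare_dev(data):
        # ... create and format the data partition here ...
        if is_partition(data):
            return  # partprobe on a partition would not make sense
        subprocess.check_call(['partprobe', data])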
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Call udevadm settle before and after partprobe.
A side effect of partprobe is to remove partitions and add them again.
The first udevadm settle waits for ongoing udev events to complete,
just in case one of them relies on an existing partition on dev.
The second udevadm settle guarantees to the caller that all udev events
related to the partition table change have been processed, i.e. that
the 95-ceph-osd.rules actions, mode changes, group changes, etc. are
complete.
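A sketch of the sequence, assuming a helper shaped like ceph-disk's
update_partition:

    import subprocess

    def update_partition(dev):
        # Wait for in-flight udev events, in case one of them relies
        # on an existing partition on dev.
        subprocess.check_call(['udevadm', 'settle'])
        # partprobe removes the partitions and adds them again,
        # emitting remove/add udev events.
        subprocess.check_call(['partprobe', dev])
        # Guarantee that all events from the partition table change,
        # including the 95-ceph-osd.rules actions, have completed.
        subprocess.check_call(['udevadm', 'settle'])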
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Set the LOG level as well as the channel level, otherwise the debug
messages are trimmed before they reach the channel. Also set the prefix
while we're at it.
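In Python's logging module a record must pass the logger's level
before the handler's, hence both must be set; a sketch (handler and
format names are illustrative):

    import logging

    LOG = logging.getLogger('ceph-disk')
    LOG.setLevel(logging.DEBUG)      # without this, DEBUG is trimmed
                                     # before reaching the channel
    channel = logging.StreamHandler()
    channel.setLevel(logging.DEBUG)  # the channel level
    channel.setFormatter(logging.Formatter(
        '%(name)s %(levelname)s %(message)s'))  # the prefix
    LOG.addHandler(channel)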
http://tracker.ceph.com/issues/13180
Fixes: #13180
Signed-off-by: Loic Dachary <ldachary@redhat.com>
blkid 2.23.2, which is the default on official CentOS 7 cloud images,
fails on journal devices. It would be better to use blkid because it
does not trigger udev events, but it is more important to get reliable
results.
http://tracker.ceph.com/issues/13153
Fixes: #13153
Signed-off-by: Loic Dachary <ldachary@redhat.com>
The ceph-disk list argument must be the device name without the leading
/dev/ prefix. This is error prone: given a full path, ceph-disk list
silently does nothing. Strip the /dev/ prefix of ceph-disk list
arguments so that it behaves as expected.
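A minimal sketch of the normalization (helper name is hypothetical):

    def strip_dev(path):
        # '/dev/sda' and 'sda' should both be accepted.
        return path[len('/dev/'):] if path.startswith('/dev/') else path

    assert strip_dev('/dev/sda') == 'sda'
    assert strip_dev('sda') == 'sda'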
http://tracker.ceph.com/issues/13154
Fixes: #13154
Signed-off-by: Loic Dachary <ldachary@redhat.com>
When running ceph-disk trigger /dev/dm-1 with systemd, the path name is
translated into /dev/dm/1 because of systemd escape rules, which turn -
into /. Explicitly translate - into \x2d so that systemd preserves
the -.
It would be better to use systemd-escape
http://www.freedesktop.org/software/systemd/man/systemd-escape.html
but it does not appear to be generally available on CentOS 7 and
probably other distributions.
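A sketch of the explicit escaping (helper name is hypothetical):

    def escape_dashes_for_systemd(path):
        # systemd unescapes '-' into '/' in instance names, so
        # /dev/dm-1 would come back as /dev/dm/1; a literal \x2d
        # survives as '-'.
        return path.replace('-', '\\x2d')

    assert escape_dashes_for_systemd('/dev/dm-1') == '/dev/dm\\x2d1'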
http://tracker.ceph.com/issues/13174
Fixes: #13174
Signed-off-by: Loic Dachary <ldachary@redhat.com>
When a data partition is removed but the journal partition is not,
ceph-disk list will not find the journal_for information and should
just ignore it.
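A hedged sketch, assuming partition metadata is held in a dict (names
are illustrative, not ceph-disk's actual structures):

    def journal_description(desc):
        # If the data partition was removed, there is no journal_for
        # entry: ignore it rather than failing.
        journal_for = desc.get('journal_for')
        if journal_for is None:
            return 'journal'
        return 'journal for %s' % journal_for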
http://tracker.ceph.com/issues/13157
Fixes: #13157
Signed-off-by: Loic Dachary <ldachary@redhat.com>
When activating a device, ceph-disk trigger restarts the ceph-disk
systemd service. Two consecutive udev add events on the same device
will restart the ceph-disk systemd service, and the second one may kill
the first, leaving the device half activated.
The ceph-disk systemd service is instructed not to kill an existing
process when restarting. The second run waits (via flock) for the first
one to complete before running, so that they do not overlap.
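A sketch of the flock serialization, assuming a hypothetical per-device
lock path (the real fix applies it around the systemd service run):

    import fcntl

    def run_exclusively(dev, activate):
        # One lock file per device. A second trigger blocks here until
        # the first activation completes, so the runs never overlap.
        lockpath = '/var/lock/ceph-disk.%s' % dev.replace('/', '_')
        with open(lockpath, 'w') as lockfile:
            fcntl.flock(lockfile, fcntl.LOCK_EX)
            activate(dev)
        # lock released when the file is closed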
http://tracker.ceph.com/issues/13160
Fixes: #13160
Signed-off-by: Loic Dachary <ldachary@redhat.com>
On udev change, the owner of the device switches back to the default.
If that happens on a journal while an OSD is being activated, the
activation will fail with permission denied.
Make sure all ceph device types are chowned to ceph on udev change.
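A minimal sketch of the chown applied on udev change (the helper name
and the ceph:ceph ownership target are assumptions from the text):

    import grp
    import os
    import pwd

    def chown_to_ceph(dev):
        # Restore ceph:ceph ownership so an OSD activating against
        # this journal does not hit permission denied.
        os.chown(dev,
                 pwd.getpwnam('ceph').pw_uid,
                 grp.getgrnam('ceph').gr_gid)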
http://tracker.ceph.com/issues/13000
Fixes: #13000
Signed-off-by: Loic Dachary <ldachary@redhat.com>