this fixes failures like:
/home/jenkins-build/build/workspace/ceph-pull-requests/qa/workunits/cephtool/test.sh:
line 32: ceph osd blacklist ls | grep 192.168.0.1: command not found
where the failure is not the "failure" we are expecting.
in our tests, the following command
expect_false "ceph osd blacklist ls | grep 192.168.0.1"
is designed to verify that "ceph osd blacklist ls | grep 192.168.0.1"
fails with a non-zero return code. but expect_false() evaluates the
command line using plain "$@", which runs the quoted string as-is
without re-parsing it, so the command name ($0) becomes the whole
string "ceph osd blacklist ls | grep 192.168.0.1", which is neither an
existing executable nor a shell built-in. so we need to check the grep
command instead.
for a command line with multiple pipes, use
expect_false sh <<< "echo foo | grep bar | grep baz"
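as a rough sketch of the problem and the fix (the real expect_false()
in qa/workunits/cephtool/test.sh may differ slightly in detail):

  expect_false() {
      # run the arguments as-is; succeed only if the command fails
      if "$@"; then return 1; else return 0; fi
  }

  # broken: the whole string, pipe included, is looked up as a single
  # command name, so this "passes" only via "command not found"
  expect_false "echo foo | grep bar"

  # fixed: a subshell parses the pipeline, so its real exit status is
  # what gets checked
  expect_false sh <<< "echo foo | grep bar"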
Signed-off-by: Kefu Chai <kchai@redhat.com>
Wrap partprobe with flock to stop udev from issuing BLKRRPART, because
the two rescans race and partprobe frequently fails with a message
like:
Error: Error informing the kernel about modifications to partition
/dev/vdc1 -- Device or resource busy. This means Linux won't know about
any changes you made to /dev/vdc1 until you reboot -- so you shouldn't
mount it or use it in any way before rebooting.
Opening a device (/dev/vdc for instance) in write mode indirectly
triggers a BLKRRPART ioctl from udev (starting version 214 and up)
when the device is closed (see below for the udev release note).
However, if udev fails to take its shared lock (it uses
flock(fd, LOCK_SH|LOCK_NB)) because another process already holds an
exclusive lock on the device, the BLKRRPART ioctl is not issued.
045e00cf16/src/udev/udevd.c (L1042)
Acquiring an exclusive lock before running the process that opens the
device in write mode is therefore an effective way to control this
behavior.
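as a sketch of the wrapping (the exact invocation used in the fix may
differ):

  # hold LOCK_EX on the whole-disk node while partprobe runs, so udev's
  # LOCK_SH|LOCK_NB attempt fails and it skips BLKRRPART for the disk
  # and its partitions
  flock /dev/vdc partprobe /dev/vdc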
git clone git://anonscm.debian.org/pkg-systemd/systemd.git
systemd/NEWS:
CHANGES WITH 214:
* As an experimental feature, udev now tries to lock the
disk device node (flock(LOCK_SH|LOCK_NB)) while it
executes events for the disk or any of its partitions.
Applications like partitioning programs can lock the
disk device node (flock(LOCK_EX)) and claim temporary
device ownership that way; udev will entirely skip all event
handling for this disk and its partitions. If the disk
was opened for writing, the close will trigger a partition
table rescan in udev's "watch" facility, and if needed
synthesize "change" events for the disk and all its partitions.
This is now unconditionally enabled, and if it turns out to
cause major problems, we might turn it on only for specific
devices, or might need to disable it entirely. Device Mapper
devices are excluded from this logic.
Fixes: http://tracker.ceph.com/issues/15176
Signed-off-by: Marius Vollmer <marius.vollmer@redhat.com>
Signed-off-by: Loic Dachary <loic@dachary.org>
* Distutils.cmake:
set --prefix=${CMAKE_INSTALL_PREFIX} for python packages installed using
setuptools. previously --prefix=/usr was passed only when $DESTDIR was
set, so if the user installs ceph with -DCMAKE_INSTALL_PREFIX, these
python packages still go to /usr, which is unexpected (see the sketch
after this list).
* ceph-disk/CMakeLists.txt:
install script into ${CMAKE_INSTALL_SBINDIR} instead of /usr/sbin
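a hypothetical sketch of the effective setup.py call driven by
Distutils.cmake (flags and paths here are assumptions, not the literal
install(CODE) body):

  # before: a hard-coded prefix, and only for staged installs ($DESTDIR
  # set), so cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph was effectively
  # ignored
  python setup.py install --root="$DESTDIR" --prefix=/usr

  # after: the prefix configured at cmake time (here /opt/ceph) is
  # always passed
  python setup.py install --root="$DESTDIR" --prefix=/opt/ceph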
Signed-off-by: Kefu Chai <kchai@redhat.com>
* doc/start/ceph.conf: it was installed as /etc/ceph.conf.example,
which is unexpected, and ceph.spec does not package it.
* vstart.sh: this is for development use; no need to package it.
Signed-off-by: Kefu Chai <kchai@redhat.com>
ceph-monstore-update-crush.sh is a user-facing script and is not used
internally; ceph.spec also expects it that way. technically, we should
not install arch-independent files into ${CMAKE_INSTALL_LIBDIR}/ceph,
but leave that for another changeset.
this partially reverts 37f53ec
Signed-off-by: Kefu Chai <kchai@redhat.com>
rh and suse distros follow the FHS and put 64-bit DSOs into lib64 on
amd64 machines, so let's use ${CMAKE_INSTALL_LIBDIR} instead of
hard-coding lib.
Signed-off-by: Kefu Chai <kchai@redhat.com>