The packages repository host fails in environments where networking must be
set up inside the VM. Use the same user-data as the build hosts so that the
same networking setup is applied.
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
7b27e1db7: openstack: support /etc/network/interfaces injection
2358562cf: ensure VMs always have /etc/hosts set up
4378a505d: always allow unsigned deb packages
50b2db521: openstack: encode instance name with the full IP
6e828a33b: openstack: add 8.8.8.8 as a last resort resolver
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
Split the sleep from the server creation, so we catch 'server create'
failures (e.g. due to quota):
> Quota exceeded for cores: Requested 16, but already used 10 of 20 cores
> (HTTP 403) (Request-ID: req-6467934e-db50-4479-995c-4d44dedf553a)
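A minimal sketch of the split, with hypothetical variable names; the point
is that the create and the wait are separate commands, so a quota failure
aborts the script instead of being hidden behind the sleep:

    # run the create on its own so its exit status is checked; a quota
    # error like the one above makes the script stop here
    openstack server create --image "$IMAGE" --flavor "$FLAVOR" "$NAME" || exit 1
    # only once the create succeeded, wait for the instance to come up
    sleep 30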
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
OpenStack may report multiple networks for a VM, and offers no guarantee
about the order of addresses within a network either (the old code failed
when the IPv4 address came first).
For now, take the first listed network and the first listed IPv4 address
therein. Comments in the code contain more detailed examples of possible
output from the openstack tool.
This also removes the need for jq to parse the output.
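A sketch of the parsing, assuming the addresses field has the form
net0=2001:db8::3, 10.1.2.3; net1=10.4.5.6 (names hypothetical):

    addresses=$(openstack server show -f value -c addresses "$NAME")
    # keep only the first listed network (everything before the first ';')
    first_network=${addresses%%;*}
    # take the first IPv4 address in it, skipping any IPv6 entries
    ip=$(echo "$first_network" | tr -d ',' | tr ' ' '\n' |
         grep -E -m1 '^[0-9]+(\.[0-9]+){3}$')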
Signed-off-by: Robin H. Johnson <robin.johnson@dreamhost.com>
The commit from which workunits are fetched must be retrieved
from --ceph-git-url via teuth_config.get_ceph_git_url() instead of
assuming it is available via git://git.ceph.com/ceph.git.
Using git://git.ceph.com/ceph.git is convenient because it supports git
archive. In the general case, some git servers such as GitHub do not support
git archive, and a full git clone must be done instead.
Although it would be possible to
git clone --branch=master --depth=1 --single-branch
to reduce the amount of data being retrieved, it would also require a
git fetch origin SHA1
and git versions >= 1.7 do not support fetching a single commit by SHA1.
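The fetch is therefore a plain clone followed by a checkout; a sketch, with
the URL taken from teuth_config.get_ceph_git_url() (variable names
hypothetical):

    git clone "$CEPH_GIT_URL" ceph   # a full clone works on any git server
    cd ceph
    git checkout "$SHA1"             # the commit the workunits are taken from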
Fixes: http://tracker.ceph.com/issues/13624
Signed-off-by: Loic Dachary <loic@dachary.org>
The sha1 for the workunit task is always set by the suite.py task. The tag
must be checked before the sha1, otherwise it cannot be used to override the
sha1.
Signed-off-by: Loic Dachary <loic@dachary.org>
Use the Mount.* wrappers for filesystem operations,
so that changes like making run_shell use sudo just work.
Signed-off-by: John Spray <john.spray@redhat.com>
This was causing permissions issues when
running inside teuthology, as run_python
was using sudo and run_shell wasn't.
Would be nice to get rid of all the rootishness,
but for the moment just make it more uniform.
This tests the forward scrub's ability to traverse
some metadata and tag it, and the corresponding
functionality in cephfs-data-scan to filter based
on tag and inject orphaned items.
Signed-off-by: John Spray <john.spray@redhat.com>
Since buildpackages runs before target provisioning, it is possible that the
desired image does not yet exist on a newly provisioned tenant (or region).
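A sketch of the guard, with hypothetical names: the image is created on
demand rather than assumed to already be there:

    if ! openstack image show "$IMAGE_NAME" > /dev/null 2>&1; then
        openstack image create --file "$IMAGE_FILE" \
            --disk-format qcow2 --container-format bare "$IMAGE_NAME"
    fi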
Fixes: http://tracker.ceph.com/issues/13910
Signed-off-by: Loic Dachary <loic@dachary.org>
Similar to what the teuthology install.py task does, add --force-yes to
the apt-get install so that unsigned packages are successfully
installed. It is needed when the buildpackages task is used to create
packages on the fly.
There is no need to do the same for rpm packages because signature
verification is controlled by the ceph-release package rather than from the
command line.
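For illustration (package name hypothetical), the install becomes:

    # --force-yes lets apt-get proceed although the locally built
    # packages are not signed by a known repository key
    sudo apt-get install --force-yes -y ceph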
Fixes: http://tracker.ceph.com/issues/13899
Signed-off-by: Loic Dachary <loic@dachary.org>
So that it can be used as follows:
teuthology-openstack ... --suite mysuite ... debug/openstack-15G.yaml
Signed-off-by: Loic Dachary <loic@dachary.org>
When the quotas are low, it matters to block until the build machine is
actually deleted. Otherwise target provisioning may fail because the targets
exceed the quota. For instance the default on OVH is 32 cores and the build
machine uses 16. The packages-repository machine uses two and the teuthology
cluster uses one, which leaves only 13 cores for the targets; that may be
too low when running jobs that require large instances.
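A sketch of the blocking delete (names hypothetical): poll until the server
is really gone, so its cores are released from the quota before the targets
are provisioned:

    openstack server delete "$BUILD_NAME"
    # 'server delete' returns before the instance is gone; wait until
    # the server no longer exists and its cores are freed
    while openstack server show "$BUILD_NAME" > /dev/null 2>&1; do
        sleep 5
    done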
Signed-off-by: Loic Dachary <loic@dachary.org>