The log_data and log_metadata settings are made configurable via the
YAML file and default to false (meaning neither data nor metadata
operations are logged).
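A minimal sketch of the defaulting behavior, assuming the flags live in
the task's config dict (the helper name and surrounding code are
illustrative, not from this change):

def zone_logging(config):
    # Both flags come from the task's yaml and default to False.
    return {
        'log_data': config.get('log_data', False),
        'log_metadata': config.get('log_metadata', False),
    }

# zone_logging({}) -> {'log_data': False, 'log_metadata': False}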
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
- Read ceph.conf from stored copy that includes overrides
- Get system users and keys from cluster instead of reading other
tasks' yaml, which may not be complete.
- Put zone info extraction from the cluster into utility functions,
since it'll be useful for other tests later.
- Work with more than one agent on a single host
- Accept more than one client to run, like almost every other task
- Rename target to dest for consistency with radosgw-agent
- Don't make everything one large function
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
This pulls access data out of the rgw task and off disk,
and then downloads, sets up, and runs an rgw sync agent
in test mode.
Signed-off-by: Greg Farnum <greg@inktank.com>
This makes --lock-many work when --machine-type vps is passed.
Before it wasn't handled correctly and guests were not created.
Now it creates the guests and gives the user back the targets list
for those guests.
teuthology-lock --lock-many 4 --machine-type vps --os-type centos
This fixes issue #5836
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Reviewed-by: Alfredo Deza <alfredo@deza.pe>
On Debian wheezy the mount output uses device-by-label, which breaks
our normal method of checking whether a device is mounted. Since VMs
always use vda as their boot device, we just remove it from devs if
it is present so we don't attempt to zap vda.
I also added a strip() to remove the last blank entry that was always
getting appended to the devs list on all machines. Example:
devs=['/dev/sda', '/dev/sdb', '/dev/sdc', '/dev/sdd', '']
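A self-contained sketch of that cleanup (the function name and input
format are illustrative, not the actual teuthology code):

def clean_devs(raw_output, is_vm):
    # strip() drops the trailing newline that otherwise becomes a '' entry
    devs = raw_output.strip().split('\n')
    # never try to zap the VM boot device
    if is_vm and '/dev/vda' in devs:
        devs.remove('/dev/vda')
    return devs

# clean_devs('/dev/vda\n/dev/sdb\n', is_vm=True) -> ['/dev/sdb']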
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Reviewed-by: Alfredo Deza <alfredo@deza.pe>
Fixes a bug where an rgw client without
a system user specified would cause teuthology
to error out.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
By separating out the user creation from
generating the region/zone info, we can generate
users for RGW tests that run against the default
pools.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
A 'user create' call was being passed to radosgw-admin with
'--secret-key' instead of the valid '--secret', which caused a random
secret to be generated and subsequent tests to fail.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
The fastcgi_sock dir needs to exist before radosgw starts, and an
apache-execed radosgw needs an explicit keyring argument.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Just a simple change to reconnect the SSH session after running
ceph-qa-chef so that things like ulimit changes take effect.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Error out when packages are missing (the old code skipped 'Nothing
to do' messages, but these cases are still errors).
Fixes #5803
Signed-off-by: Warren Usui <warren.usui@inktank.com>
Reviewed-by: Sandon Van Ness
Needed some more changes to allow for the case of creating VMs
manually with teuthology-lock instead of letting teuthology handle it
in internal.py with lock_machines(). Just some additional checks to
fall back to defaults when ctx.config does not exist (which otherwise
causes an AttributeError).
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Reviewed-by: Warren Usui <warren.usui@inktank.com>
Teuthology got updated to use --os-type and os_type in yaml instead
of --vm-type. I added this to teuthology but forgot to update
teuthology-lock as well for manually creating VMs.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Only radosgw needs this option, and each one will be different, so
remove it from the ceph.conf template.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
The clients are pretty regularly reporting busy on unmount when
samba runs above them. This will hopefully give us some info about why.
Signed-off-by: Greg Farnum <greg@inktank.com>
Since the OS type is looked up in multiple places, I made a function
for it and modified the existing code to use that function. I also
added tests for the function.
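A rough sketch of such a helper, assuming the job config hangs off ctx
and that ubuntu is the fallback os_type (both are assumptions, not
taken from this change):

def get_os_type(ctx):
    # Fall back gracefully when ctx has no config at all
    # (e.g. machines created manually with teuthology-lock).
    config = getattr(ctx, 'config', None) or {}
    return config.get('os_type', 'ubuntu')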
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Due to bug #5716, pools need to start with a '.' at present.
Updating the examples to follow this convention.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
The post-yield code in create_dirs needed to
be tweaked to correctly delete the {tdir}/apache
directory (if it exists) on each client.
Signed-off-by: Joe Buck <jbbuck@gmail.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
Take client<->zone/region and the associated pools from ceph.conf, so
we don't have to invent a new format to specify it.
General region info is added to a new configuration section in the rgw
task. Each client is assumed to be a different zone, and a system user
is created with the key specified in the yaml, so it can be passed to
later task configuration as well. This isn't strictly necessary, but
avoids having to look up this info in later tasks through something
like radosgw-admin.
Ports are allocated automatically because there's no obvious mapping
from host to client in the task configuration. Later tests can get the
endpoints desired by reading the region map.
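A hedged sketch of reading per-client assignments out of a
ceph.conf-style file; the 'rgw region' and 'rgw zone' option names and
the configparser approach are assumptions, not the actual task code:

import configparser

def client_zones(ceph_conf_path):
    # Map each client section to its (region, zone) assignment.
    conf = configparser.ConfigParser()
    conf.read(ceph_conf_path)
    zones = {}
    for section in conf.sections():
        if section.startswith('client.') and conf.has_option(section, 'rgw zone'):
            zones[section] = (conf.get(section, 'rgw region', fallback=None),
                              conf.get(section, 'rgw zone'))
    return zones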
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Six copies are replaced with one, with an added option to check status
automatically. This should probably be used in a few places where the
return code is ignored.
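The helper itself isn't shown in this message; a generic sketch of the
pattern (subprocess and the names here are illustrative):

import subprocess

def run(args, check_status=True):
    # One shared implementation instead of six copies; callers that
    # currently ignore the return code can opt in to the check later.
    proc = subprocess.run(args)
    if check_status and proc.returncode != 0:
        raise RuntimeError('%r exited with %d' % (args, proc.returncode))
    return proc.returncode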
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
In some cases tests fail or nuke fails and the guest is not properly
destroyed. This will look for an error saying the guest or its disks
already exist and, if so, will re-create the guest.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Just to allow the create to still work in case the OS volume is
fairly large (it takes a while to resize) and in case the host
machine is bogged down by disk I/O.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Use os_type instead of vm_type for more generic naming, for when we
start re-imaging bare metal. Also added an os_version dictionary of
default distro versions that we want, overriding the downburst
defaults.
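A sketch of the shape of such a dictionary; the version numbers below
are placeholders, not the actual defaults from this change:

# Default distro version per os_type, consulted when the yaml does not
# pin one (values are placeholders).
DEFAULT_OS_VERSION = {
    'ubuntu': '12.04',
    'centos': '6.4',
    'debian': '7.0',
}

def os_version_for(os_type, requested=None):
    return requested or DEFAULT_OS_VERSION.get(os_type)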
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
tasks:
...
- ceph.wait_for_mon_quorum: [a, b]
...
will block until the mon quorum consists of exactly [a, b]. This is
compared directly to the relevant field from 'ceph quorum_status'
which has the alphanumeric names only.
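A rough sketch of the wait loop, assuming the names live in the
'quorum_names' field of 'ceph quorum_status' output and that the ceph
CLI is invoked directly (both assumptions here):

import json
import subprocess
import time

def wait_for_mon_quorum(want):
    # Block until the quorum consists of exactly the requested mon names.
    while True:
        out = subprocess.check_output(['ceph', 'quorum_status'])
        if json.loads(out.decode()).get('quorum_names', []) == want:
            return
        time.sleep(1)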
Signed-off-by: Sage Weil <sage@inktank.com>
Often we want to build a test collection that substitutes different
sequences of tasks into a parallel/sequential construction. However, the
yaml combination that happens when generating jobs is not smart enough to
substitute some fragment into a deeply-nested piece of yaml.
Instead, make these sequences top-level entries in the config dict, and
reference them. For example:
tasks:
- install:
- ceph:
- parallel:
  - workload
  - upgrade-sequence
workload:
  workunit:
  - something
upgrade-sequence:
  install.restart: [osd.0, osd.1]
Signed-off-by: Sage Weil <sage@inktank.com>
Instead of relying on hardcoded values, obtain the max-skew default from
'ceph-mon --show-config-value mon_clock_drift_allowed' to match the mon's
expectation.
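A sketch of that lookup (the command comes from this message; the
parsing and function name are illustrative):

import subprocess

def default_max_skew():
    # Ask the monitor binary for its own default instead of hardcoding one.
    out = subprocess.check_output(
        ['ceph-mon', '--show-config-value', 'mon_clock_drift_allowed'])
    return float(out.decode().strip())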
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Sometimes the thing we're talking to is slow to start, or to register the
command we are running. Loop in that case, at least for a while.
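A generic sketch of that retry behavior (the timeout and interval
values are illustrative):

import time

def wait_until(check, timeout=60, interval=3):
    # Keep retrying until check() succeeds or we give up.
    end = time.time() + timeout
    while True:
        if check():
            return True
        if time.time() > end:
            return False
        time.sleep(interval)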
Signed-off-by: Sage Weil <sage@inktank.com>
If not defined, it defaults to 0.05; if 'max-skew' is defined,
however, it must override whatever is in the config.
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
teuthology-suite and teuthology-schedule will now take --worker
instead of --branch. The branch is set by setting teuthology_branch
in the yaml used to schedule the job.
The teuthology branches are assumed to be in ~/teuthology-$branch
of whatever user is running the workers.
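A minimal sketch of that lookup, with 'master' as an assumed fallback
branch:

import os

def teuthology_dir(job_config):
    # teuthology_branch comes from the job yaml; each branch is assumed
    # to be checked out at ~/teuthology-$branch of the worker's user.
    branch = job_config.get('teuthology_branch', 'master')
    return os.path.expanduser('~/teuthology-' + branch)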
This will make the CLI do every mon command twice and make sure they
both succeed. This catches problems with mon command idempotency
faster than waiting for random failures to trigger them.
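An illustrative sketch of the double-execution check (the real CLI
wrapper is not shown here):

def run_twice(run_mon_command, *args):
    # Run the same mon command twice; both runs must succeed, which
    # flushes out commands that are not idempotent.
    for attempt in (1, 2):
        returncode = run_mon_command(*args)
        assert returncode == 0, 'mon command failed on attempt %d' % attempt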
Added sequential task and parallel task.
Changed _run_one_task to run_one_task (now called by new tasks too).
Fix #4969
Signed-off-by: Warren Usui <warren.usui@inktank.com>
We already install btrfs-tools and xfsprogs with ceph-qa-chef.
Doing it here was just causing problems on non-Ubuntu distros, and I
see no point in keeping it now.
This is needed so we can set the ceph branch for ceph-deploy to use
via the main yaml, which is created by the suite scheduler.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Because of issues with package name differences, VPSes are set up to
use repo priorities, and our local repo (which has some ceph/librados
packages in it) gets high priority, so the ceph.repo created on the
machine by ceph-release basically gets ignored. This change makes
ceph.repo the same priority level as our local repo.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
In some rare cases (mainly centos/rhel), after creating the guest
with downburst it does not come up right: it gets a kernel panic at
boot. Usually just turning it off and then back on again is enough,
but to be on the safe side I figured it should be re-created instead.
This ensures you don't get hung jobs from a guest that didn't come up
correctly.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
For some reason lock_many() accepts a description but lock() does
not. Having one was useful while testing unlocking and re-locking VPS
machines in order to destroy them.
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Figuring out which machine's output is coming from when things are
being executed on multiple machines can be a huge pain. This prints
the IP in the logs so you can easily see where one machine stops and
another begins.
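A small sketch of the idea using the standard logging module; the real
change wires this into the remote-execution layer:

import logging
import socket

log = logging.getLogger(__name__)

def log_remote(hostname, command):
    # Include the resolved IP so interleaved output from several
    # machines is easy to attribute.
    ip = socket.gethostbyname(hostname)
    log.info('running %r on %s (%s)', command, hostname, ip)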
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
In order to make IP addresses less likely to change and to allow a
smaller DHCP pool to be used, I generated static MAC addresses for
all the vpm entries in the DB. I also put in the correct primary
(eth0) MAC addresses for all the other types of machines, to keep
things standardized and so there is another location where we have
this information.
Without this fix, going through a few tests would exhaust the DHCP
pool, which at the time was around 460 IP addresses for virtual
machines and has since been upped to ~690 IP addresses.
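The message doesn't say how the MACs were generated; a hypothetical
sketch of deriving a stable per-host address (the 52:54:00 prefix is
the conventional QEMU/KVM one, everything else here is illustrative):

import hashlib

def static_mac(hostname):
    # Derive a stable MAC from the hostname so the same guest always
    # gets the same address (and therefore, usually, the same IP).
    digest = hashlib.md5(hostname.encode('utf-8')).hexdigest()
    return '52:54:00:%s:%s:%s' % (digest[0:2], digest[2:4], digest[4:6])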
Signed-off-by: Sandon Van Ness <sandon@inktank.com>
Reviewed-by: Warren Usui <warren.usui@inktank.com>