Add a function dedicated to erasure coded pool tests, similar to
repair_test_1. Add a corrupter that removes the hinfo_key from the object.
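A minimal sketch of what such a corrupter might look like (the wrapper
signature and argument form are assumptions; see the objectstore_tool
wrapper described in the next message):

    def corrupt_hinfo(manager, pool, object_name):
        # Strip the hinfo_key attribute from the erasure coded object so
        # that a subsequent deep scrub flags the shard as inconsistent and
        # the repair path gets exercised.
        manager.objectstore_tool(pool=pool,
                                 object_name=object_name,
                                 args=['rm-attr', 'hinfo_key'])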
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Add the CephManager.objectstore_tool method to encapsulate a call to
ceph-objectstore-tool. The wrapper can convert an object name into the
PG id and figure out the primary OSD. The designated OSD is stopped
before running the command and restarted afterwards.
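Roughly, the wrapper does something along these lines (a sketch only;
apart from raw_cluster_cmd, kill_osd and revive_osd the helper names,
JSON fields and paths are assumptions, and json is assumed imported):

    def objectstore_tool(self, pool, object_name, args):
        # 'ceph osd map <pool> <obj>' reports the PG id and acting set;
        # take the primary OSD from it.
        mapping = json.loads(self.raw_cluster_cmd(
            'osd', 'map', pool, object_name, '--format=json'))
        pgid = mapping['pgid']
        primary = mapping.get('acting_primary', mapping['acting'][0])

        self.kill_osd(primary)            # stop the designated OSD
        try:
            remote = self.find_remote('osd', primary)
            osd_path = '/var/lib/ceph/osd/ceph-%d' % primary
            remote.run(args=[
                'sudo', 'ceph-objectstore-tool',
                '--data-path', osd_path,
                '--journal-path', osd_path + '/journal',
                '--pgid', pgid,
                object_name,
            ] + args)
        finally:
            self.revive_osd(primary)      # restart it afterwards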
Signed-off-by: Loic Dachary <ldachary@redhat.com>
The commit is large but does not introduce any semantic change; it
consists primarily of code being moved, re-indented or removed.
Replace functions that generate functions with plain functions, and
replace the sequential iteration over a list of generated functions
with direct, sequential calls to those functions.
Replace the setup/teardown with an equivalent using a with
statement and the ceph_manager.pool method.
Replace inline code with a call to ceph_manager.wait_for_all_up.
This makes it easier to modify the tests, for instance to create
erasure coded pools and add tests specific to them.
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Create a pool before running a code block and remove it afterwards:

    with manager.pool("mypool"):
        mytest..
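A sketch of how such a helper could be implemented on CephManager,
assuming the existing create_pool and remove_pool methods:

    import contextlib

    @contextlib.contextmanager
    def pool(self, pool_name, **kwargs):
        # Create the pool on entry, hand control to the enclosed block,
        # and remove the pool again even if the block raises.
        self.create_pool(pool_name, **kwargs)
        try:
            yield
        finally:
            self.remove_pool(pool_name)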
Signed-off-by: Loic Dachary <ldachary@redhat.com>
Require ceph-objectstore-tool to be available on all OSD nodes.
Log a message when the tool is not available.
Signed-off-by: David Zafman <dzafman@redhat.com>
The small segments and small segment limit were used to do a hacky
flush by issuing IO and waiting; now that the explicit
'flush journal' asok command is in use, we can just use a normal
journal configuration.
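For illustration, the explicit flush is a single admin socket call
rather than an IO-and-wait loop (the mds_asok helper name is an
assumption about the test plumbing):

    # Ask the active MDS to flush its journal to the backing metadata
    # pool explicitly, instead of forcing it with tiny segments and IO.
    fs.mds_asok(['flush', 'journal'])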
Signed-off-by: John Spray <john.spray@redhat.com>
This was only used in get_first_mon, which doesn't actually
need the parameter itself. Makes it easier to casually
use Filesystem from any place with a ctx to hand.
Signed-off-by: John Spray <john.spray@redhat.com>
When unused clients were mounted during an fs new,
they would end up in a state where they stalled
on subsequent attempts to umount them (ceph-fuse
stalls on exit if it can't terminate its mds_session)
Signed-off-by: John Spray <john.spray@redhat.com>
Instead of blocking the whole port range (which
might make OSDs running on that node collateral
damage), read the MDS's port out of the MDS map
and just block that.
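A rough sketch of the approach (the JSON field names, helper names and
iptables invocation are assumptions for illustration; json is assumed
imported):

    # Find the active MDS's address in the MDS map and reject traffic to
    # just that port, leaving any OSDs on the same node untouched.
    status = json.loads(manager.raw_cluster_cmd(
        'mds', 'dump', '--format=json'))
    active = [i for i in status['info'].values()
              if i['state'] == 'up:active'][0]
    port = int(active['addr'].split(':')[1].split('/')[0])  # "ip:port/nonce"

    mds_remote.run(args=[
        'sudo', 'iptables', '-A', 'INPUT',
        '-p', 'tcp', '--dport', str(port), '-j', 'REJECT',
    ])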
Signed-off-by: John Spray <john.spray@redhat.com>
...because this is the one that will store up
changes to roll back during teardown.
Doing this makes it easy to run lots of test cases
together in a single teuthology run, rather than
setting up/tearing down the ceph cluster for each
one.
Signed-off-by: John Spray <john.spray@redhat.com>
Now that we have more of these cases, there was lots
of duplication in setup and teardown. For some tests
the "reset everything" setup/teardown is overkill,
but it's harmless.
Signed-off-by: John Spray <john.spray@redhat.com>
Since the new 'tell' for the MDS was introduced, caps must include
the '*' to permit running remote administrative commands.
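For illustration only (the entity name and the non-MDS caps are
assumptions), the client caps would look something like:

    # Grant '*' on the mds capability so that 'ceph tell mds.*' style
    # remote administrative commands are permitted for this client.
    manager.raw_cluster_cmd(
        'auth', 'caps', 'client.0',
        'mds', 'allow *',
        'mon', 'allow r',
        'osd', 'allow rwx')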
Signed-off-by: John Spray <john.spray@redhat.com>
Now that #10387 is fixed in master, we can tighten
up this test to ensure that the expected deletions
are happening.
Signed-off-by: John Spray <john.spray@redhat.com>
The tiobench software has been abandoned upstream for years. Fedora and
Debian are no longer shipping the tiobench package, so we've had to
carry the package ourselves in the Ceph project, and we're trying to
slim down our dependencies where it makes sense to do so.
Nuke the tiobench tests.
http://tracker.ceph.com/issues/10152 Refs: #10152
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
This reverts commit 26a33c3a5a.
This is trying to create the archive dir on the remote host:
2014-12-29T12:15:30.213 INFO:teuthology.orchestra.run.plana31:Running: 'mkdir -p /var/lib/teuthworker/archive/sage-2014-12-29_11:40:52-rgw-next---basic-multi/683052'
2014-12-29T12:15:30.231 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 28, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/s3readwrite.py", line 241, in run_tests
    ctx.cluster.only(client).run(args=['mkdir', '-p', archive_dir])
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/cluster.py", line 64, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 128, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 368, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 106, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on plana31 with status 1: 'mkdir -p /var/lib/teuthworker/archive/sage-2014-12-29_11:40:52-rgw-next---basic-multi/683052'
...but it should only be on the local host.
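The direction the message points toward, as a sketch only (the
surrounding task code is not shown and archive_dir is whatever the
task computed): create the directory on the teuthology host itself
rather than on the remote client.

    import os

    # Local creation on the teuthology host, instead of
    # ctx.cluster.only(client).run(args=['mkdir', '-p', archive_dir]).
    if not os.path.isdir(archive_dir):
        os.makedirs(archive_dir)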
This tests:
* The new 'flush journal' asok command
* That the resulting on disk structures are as expected
* That cephfs-journal-tool is happy with the result
Fixes: #9881
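For the last check, something along these lines could be run against
the flushed journal (the remote handle and exact invocation are
assumptions; both subcommands exist in cephfs-journal-tool):

    # After 'flush journal' the on-disk journal should be trivially
    # valid, so cephfs-journal-tool is used as an independent check.
    mds_remote.run(args=['cephfs-journal-tool', 'journal', 'inspect'])
    mds_remote.run(args=['cephfs-journal-tool', 'header', 'get'])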
Signed-off-by: John Spray <john.spray@redhat.com>
Notes:
- very simple cluster configuration
- selects vps in the actual suite config files
- wheezy is currently disabled
Signed-off-by: Dan Mick <dan.mick@inktank.com>