Require ceph-objectstore-tool to be available on all OSD nodes
Log a message when the tool is not available
Signed-off-by: David Zafman <dzafman@redhat.com>
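A minimal sketch of what such a check could look like in a teuthology task; the helper name and log wording are illustrative, not the actual implementation:

    import logging
    from teuthology import misc as teuthology

    log = logging.getLogger(__name__)

    def require_objectstore_tool(ctx):
        # Probe each OSD node for ceph-objectstore-tool and log when
        # it is missing (illustrative helper, not the real task code).
        for remote, roles in ctx.cluster.only(
                teuthology.is_type('osd')).remotes.items():
            proc = remote.run(
                args=['which', 'ceph-objectstore-tool'],
                check_status=False,
            )
            if proc.exitstatus != 0:
                log.info('ceph-objectstore-tool is not available on %s',
                         remote.shortname)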
ice-tools needs a populated virtualenv in order to run and build
an iceball; add the commands to do that. Also remove the built
iceball when the task exits.
Fixes: #10523
Signed-off-by: Dan Mick <dan.mick@redhat.com>
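A hedged sketch of the kind of commands this adds; the directory, requirements file, and cleanup target are assumptions:

    from teuthology.orchestra import run

    # Illustrative: populate a virtualenv that ice-tools can run from...
    remote.run(args=[
        'cd', ice_tools_dir, run.Raw('&&'),
        'virtualenv', 'venv', run.Raw('&&'),
        './venv/bin/pip', 'install', '-r', 'requirements.txt',
    ])

    # ...and on task exit, remove the iceball that was built.
    remote.run(args=['rm', '-f', iceball_path])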
Previously, the task would search for the lexicographically-greatest
filename matching ICE-*.tar.gz; now it builds a specific name
ICE-{ice_version}-{ice_distro}.tar.gz
Fixes: #10521
Signed-off-by: Dan Mick <dan.mick@redhat.com>
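The new naming amounts to building the exact filename rather than globbing, e.g. (variable names illustrative):

    # Build the specific iceball name instead of matching ICE-*.tar.gz
    iceball = 'ICE-{ver}-{distro}.tar.gz'.format(
        ver=ice_version, distro=ice_distro)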
The small segments and small segment limit
were used when doing a hacky flush by doing
IO and waiting; now that the explicit
'flush journal' asok command is in use, we can
just use a normal journal configuration.
Signed-off-by: John Spray <john.spray@redhat.com>
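For illustration, the explicit flush replaces the IO-and-wait dance with a single admin socket call; the Filesystem helper shown follows ceph-qa-suite conventions but treat it as a sketch:

    # Before: shrink 'mds log max segments' etc., do IO, and wait for
    # the journal to trim. Now: ask the MDS to flush directly (sketch).
    self.fs.mds_asok(['flush', 'journal'])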
This was only used in get_first_mon, which doesn't actually
need the parameter itself. Makes it easier to casually
use Filesystem from any place with a ctx to hand.
Signed-off-by: John Spray <john.spray@redhat.com>
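After the change, construction needs nothing but a ctx (sketch):

    # The dropped argument was only threaded through to get_first_mon,
    # which never needed it, so this now suffices anywhere a ctx exists:
    fs = Filesystem(ctx)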
When unused clients were mounted during an fs new,
they would end up in a state where they stalled
on subsequent attempts to unmount them (ceph-fuse
stalls on exit if it can't terminate its mds_session).
Signed-off-by: John Spray <john.spray@redhat.com>
Instead of blocking the whole port range (which
might make OSDs running on that node collateral
damage), read the MDS's port out of the MDS map
and just block that.
Signed-off-by: John Spray <john.spray@redhat.com>
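A sketch of the approach; the remote handles (mon_remote, mds_remote) and the mdsmap field layout are assumptions:

    import json
    from StringIO import StringIO

    # Read the MDS's address out of the MDS map...
    proc = mon_remote.run(
        args=['ceph', 'mds', 'dump', '--format=json'],
        stdout=StringIO(),
    )
    mdsmap = json.loads(proc.stdout.getvalue())
    addr = mdsmap['info'].values()[0]['addr']  # e.g. "10.0.0.1:6804/1234"
    port = int(addr.split(':')[1].split('/')[0])

    # ...and block just that port instead of a whole range.
    mds_remote.run(args=[
        'sudo', 'iptables', '-A', 'INPUT', '-p', 'tcp',
        '--dport', str(port), '-j', 'REJECT',
    ])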
...because this is the one that will store up
changes to roll back during teardown.
Doing this makes it easy to run lots of test cases
together in a single teuthology run, rather than
setting up/tearing down the ceph cluster for each
one.
Signed-off-by: John Spray <john.spray@redhat.com>
Now that we have more of these cases, there was lots
of duplication in setup and teardown. For some tests
the "reset everything" setup/teardown is overkill,
but it's harmless.
Signed-off-by: John Spray <john.spray@redhat.com>
Since the new 'tell' for the MDS was introduced,
caps have to include '*' to permit running remote
administrative commands.
Signed-off-by: John Spray <john.spray@redhat.com>
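Concretely, an admin client's caps now need the wildcard form, e.g. (the remote handle and exact cap strings are illustrative and may differ across releases):

    # Grant '*' so remote administrative ('tell') commands are allowed.
    remote.run(args=[
        'sudo', 'ceph', 'auth', 'caps', 'client.admin',
        'mon', 'allow *', 'mds', 'allow *', 'osd', 'allow *',
    ])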
Now that #10387 is fixed in master, we can tighten
up this test to ensure that the expected deletions
are happening.
Signed-off-by: John Spray <john.spray@redhat.com>
The tiobench software has been abandoned upstream for years. Fedora and
Debian are no longer shipping the tiobench package, so we've had to
carry the package ourselves in the Ceph project, and we're trying to
slim down our dependencies where it makes sense to do so.
Nuke the tiobench tests.
Refs: #10152 (http://tracker.ceph.com/issues/10152)
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
This reverts commit 26a33c3a5aa2aedb52eb5ce140c76503f099b253.
This is trying to create the archive dir on the remote host:
2014-12-29T12:15:30.213 INFO:teuthology.orchestra.run.plana31:Running: 'mkdir -p /var/lib/teuthworker/archive/sage-2014-12-29_11:40:52-rgw-next---basic-multi/683052'
2014-12-29T12:15:30.231 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 28, in nested
    vars.append(enter())
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/s3readwrite.py", line 241, in run_tests
    ctx.cluster.only(client).run(args=['mkdir', '-p', archive_dir])
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/cluster.py", line 64, in run
    return [remote.run(**kwargs) for remote in remotes]
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/remote.py", line 128, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 368, in run
    r.wait()
  File "/home/teuthworker/src/teuthology_master/teuthology/orchestra/run.py", line 106, in wait
    exitstatus=status, node=self.hostname)
CommandFailedError: Command failed on plana31 with status 1: 'mkdir -p /var/lib/teuthworker/archive/sage-2014-12-29_11:40:52-rgw-next---basic-multi/683052'
...but it should only be on the local host.
(cherry picked from commit 3960530b7decc360c72d4670475806c04f218bfa)
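The distinction the revert restores, as a sketch rather than the actual fix: the archive dir lives on the teuthology host, so it must be created locally, not via ctx.cluster:

    import os

    # Wrong: runs mkdir on the remote test node.
    # ctx.cluster.only(client).run(args=['mkdir', '-p', archive_dir])

    # Right: the archive dir belongs to the local teuthology host.
    if not os.path.exists(archive_dir):
        os.makedirs(archive_dir)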
This tests:
* The new 'flush journal' asok command
* That the resulting on disk structures are as expected
* That cephfs-journal-tool is happy with the result
Fixes: #9881
Signed-off-by: John Spray <john.spray@redhat.com>
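A condensed sketch of the shape of such a test; the helper calls mirror ceph-qa-suite conventions but are illustrative:

    # Flush the journal explicitly, stop the MDS, then verify the
    # on-disk result with cephfs-journal-tool (sketch).
    self.fs.mds_asok(['flush', 'journal'])
    self.fs.mds_stop()
    self.fs.journal_tool(['journal', 'inspect'])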
Notes:
- very simple cluster configuration
- selects vps in the actual suite config files
- wheezy is currently disabled
Signed-off-by: Dan Mick <dan.mick@inktank.com>
The format of the output of --op list was changed to include the PG to
which the object belongs. This simplifies the loop in
ceph_objectstore_tool.py.
Fixes: #10376 (http://tracker.ceph.com/issues/10376)
Signed-off-by: Loic Dachary <ldachary@redhat.com>
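A sketch of consuming the new listing format, where each row now carries the pgid alongside the object; the proc variable, the process() handler, and the exact field layout are assumptions:

    import json

    # Each '--op list' line is now a (pgid, object) pair, e.g.
    # ["0.4", {"oid": "foo", ...}], so no per-PG outer loop is needed.
    for line in proc.stdout.getvalue().splitlines():
        pgid, obj = json.loads(line)
        process(pgid, obj)  # hypothetical per-object handler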