This makes the 'compression type' setting global to all gateways, and
makes the setting visible to other tasks in ctx.rgw.compression_type.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
The new fs setting standby_count_wanted is only available in luminous. Upgrade
tests were tripping on this.
Fixes: http://tracker.ceph.com/issues/19934
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Directory fragmentation generates extra OSD ops, which affects checks
in the test.
Fixes: http://tracker.ceph.com/issues/19892
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
This was sending lots of metadata ops to MDSs to persuade
them to migrate some subtrees, but that was flaky. Use
the shiny new rank pinning functionality instead.
Signed-off-by: John Spray <john.spray@redhat.com>
Don't assume that test_data_scan will be run on exactly 2 MDS nodes.
Fixes: http://tracker.ceph.com/issues/19893
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
When SELinux is enabled, the kernel client may release inodes (without
up-to-date xattrs) from a readdir reply immediately after processing the
reply. The reason is that linking the inode to a dentry can cause a
deadlock if the xattrs are not up to date.
We can use the stat(2) syscall to guarantee that the kernel client
caches an inode.
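A minimal sketch of the idea, assuming a CephFS mountpoint path; the helper
name and path are illustrative, not the actual test code:

    import os

    def force_inode_caching(dirpath):
        # stat(2) each entry after readdir so the kernel client
        # instantiates and caches the inodes, even when SELinux keeps
        # it from caching them straight from the readdir reply.
        for name in os.listdir(dirpath):
            os.stat(os.path.join(dirpath, name))

    force_inode_caching('/mnt/cephfs/testdir')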
Fixes: http://tracker.ceph.com/issues/19912
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Also catches a corner case found by Zheng where an unjournaled directory will
cause export pinning to fail because it cannot be made a subtree until its
parent is stable.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The idea here is that a pinned inode should not be exported when its parent is.
Setting the pinned inode's dirfrags to aux subtrees prevents them from being
merged with a parent subtree.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Pulling this out of the 'pg dump' heap is inefficient.
Also, pg dump data comes from the mgr and may be stale.
Signed-off-by: Sage Weil <sage@redhat.com>
Keep the pool flag around so we can distinguish between a pool that
should maintain hashes for each chunk (where a missing hash is a bug)
and an overwrites pool where we rely on BlueStore checksums to detect
corruption.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This allows the client/admin to pin a directory tree to a particular rank,
preventing its export by the dynamic balancer.
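For reference, export pins are driven through the ceph.dir.pin virtual xattr
on a client mount; a minimal sketch (the mountpoint path is illustrative):

    import os

    # Pin the directory tree to MDS rank 1.
    os.setxattr('/mnt/cephfs/pinned_dir', 'ceph.dir.pin', b'1')

    # A value of -1 removes the pin again.
    os.setxattr('/mnt/cephfs/pinned_dir', 'ceph.dir.pin', b'-1')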
Fixes: http://tracker.ceph.com/issues/17834
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
'remap' is too non-specific a name. In particular, it
sounds like it is related to the 'remapped' PG state,
but in reality it is not.
'upmap' or 'pg-upmap' is more specific: it maps a pgid
to the 'up' set value (or item).
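As a rough sketch of the renamed interface (the pgid and OSD ids are
illustrative, and the exact CLI verbs are an assumption here):

    import subprocess

    # Explicitly map PG 1.7 so that its 'up' set becomes OSDs 3, 4 and 5.
    subprocess.check_call(['ceph', 'osd', 'pg-upmap', '1.7', '3', '4', '5'])

    # Drop the explicit mapping again.
    subprocess.check_call(['ceph', 'osd', 'rm-pg-upmap', '1.7'])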
Signed-off-by: Sage Weil <sage@redhat.com>
Previously, errors stuck indelibly to the inode, which
meant that a close call would see an error even if the
user had already dutifully fsync()'d and handled it.
We should emit each error only once per file handle.
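From the application's side the expected pattern looks roughly like this
(handle_write_error() is a hypothetical application-level handler):

    import os

    fd = os.open('/mnt/cephfs/datafile', os.O_WRONLY | os.O_CREAT)
    os.write(fd, b'payload')
    try:
        os.fsync(fd)           # a write-back error is reported here, once
    except OSError as e:
        handle_write_error(e)  # hypothetical application-level handling
    finally:
        os.close(fd)           # should not re-report the error seen above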
Signed-off-by: John Spray <john.spray@redhat.com>
Added '--cluster' to all necessary commands
(e.g. radosgw-admin, rados, ceph), and made sure the
necessary checks were in place so that clients
can be read with or without a cluster_name
preceding them.
Made master_client defined in the config for the
radosgw-admin task.
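A hedged sketch of the explicit form (the cluster name is illustrative):

    import subprocess

    cluster = 'cluster1'   # illustrative non-default cluster name

    subprocess.check_call(['radosgw-admin', '--cluster', cluster, 'user', 'list'])
    subprocess.check_call(['rados', '--cluster', cluster, 'lspools'])
    subprocess.check_call(['ceph', '--cluster', cluster, 'status'])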
Signed-off-by: Ali Maredia <amaredia@redhat.com>
On slower machines (VPS, OVH) it takes time for the OSD to go down.
Fixes: http://tracker.ceph.com/issues/19556
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Otherwise the settings in "workunit" tasks are always overridden by the
settings in the template config. So we'd better follow the way the
"install" task updates itself with the "overrides" settings: it uses the
"overrides" as the *defaults*.
Fixes: http://tracker.ceph.com/issues/19429
Signed-off-by: Kefu Chai <kchai@redhat.com>
c1309fb failed to specify a branch when cloning with --depth=1, which
by default clones HEAD. And we cannot "git checkout" a specific
sha1 if it is not HEAD after cloning with '--depth=1', so in this
change we dispatch "tag", "branch", and "HEAD" using three Refspec classes.
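The underlying git behavior, as a hedged sketch: a --depth=1 clone only
fetches the named ref, so the branch or tag has to be given at clone time.

    import subprocess

    def shallow_clone(repo_url, dest, refspec=None):
        # With --depth=1 only the requested ref is fetched, so a later
        # "git checkout <sha1>" of anything else would fail; name the
        # branch or tag up front instead (omit it to get HEAD).
        cmd = ['git', 'clone', '--depth=1']
        if refspec:
            cmd += ['--branch', refspec]
        subprocess.check_call(cmd + [repo_url, dest])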
Signed-off-by: Kefu Chai <kchai@redhat.com>
Signed-off-by: Dan Mick <dan.mick@redhat.com>
"ps -xwwu<id>" is parsed as BSD, because -x is not a UNIX option.
"u" is a BSD option for user-oriented format, so the <id> ends up being
parsed as an old-style "select by pid". The only reason this command
doesn't dump other users' processes is that the BSD "only yourself"
restriction is in effect.
I'm not sure what's wrong with a simple "ps xww", but if we want to
select by euid, let's do it right.
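One way to select by effective UID with UNIX-style options, as a hedged
sketch:

    import os
    import subprocess

    # Select processes by effective user ID instead of relying on the
    # BSD "only yourself" fallback of "ps -xwwu<id>".
    out = subprocess.check_output(
        ['ps', '-ww', '-u', str(os.geteuid()), '-o', 'pid,args'])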
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Help avoid killing git.ceph.com. A depth 1 clone takes about
7 seconds, whereas a full one takes about 3:40 (much of it
waiting for the server to create a huge compressed pack).
Signed-off-by: Dan Mick <dan.mick@redhat.com>
We need this for CephFS, to verify that workloads
we expect to do a particular thing (like directory fragmentation
or metadata exports) are really doing it.
This is for giving us confidence in our coverage of these
features rather than testing them per se.
Fixes: http://tracker.ceph.com/issues/16523
Signed-off-by: John Spray <john.spray@redhat.com>
At the end of start_rgw() we wait until establishing HTTP connections
with RadosGW becomes possible. However, if RadosGW uses FastCGI,
the condition can't be fulfilled without spawning the HTTP server first.
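A minimal sketch of that readiness wait (host, port and timeout are
illustrative):

    import socket
    import time

    def wait_for_radosgw(host, port, timeout=60):
        # Poll until something accepts TCP connections on the RGW endpoint;
        # with FastCGI this only succeeds once the external HTTP server has
        # been spawned as well.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                socket.create_connection((host, port), timeout=5).close()
                return
            except socket.error:
                time.sleep(1)
        raise RuntimeError('radosgw endpoint %s:%d never came up' % (host, port))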
Signed-off-by: Radoslaw Zarzynski <rzarzynski@mirantis.com>
If we run an upgrade test where, for example, "jewel" is not in the
ceph-ci.git repo, we should check ceph.git to clone the workunits.
Signed-off-by: Kefu Chai <kchai@redhat.com>
as "workunits" reside in ceph/qa/workunits, it's more intuitive to
respect suite-repo option when cloning workunits.
Signed-off-by: Kefu Chai <kchai@redhat.com>
We should not update pools_to_fix_pgp_num if the pool is not expanded or
the pg_num is not increased due to pgs being created. Otherwise we are
prevented from fixing the pgp_num after we are done with thrashing: we
actually did nothing when fixing the pgp_num while thrashing, yet we
removed the pool from pools_to_fix_pgp_num after set_pool_pgpnum()
returned.
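A hypothetical sketch of the intended bookkeeping (the names follow the
commit message rather than the real ceph_manager.py code):

    def fix_pgp_num(self, pool):
        if pool not in self.pools_to_fix_pgp_num:
            return
        # Only drop the pool from the set if pgp_num was actually adjusted;
        # otherwise keep it around so it can be fixed after thrashing.
        if self.set_pool_pgpnum(pool):
            self.pools_to_fix_pgp_num.discard(pool)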
Signed-off-by: Kefu Chai <kchai@redhat.com>
as "workunits" reside in ceph/qa/workunits, it's more intuitive to
respect suite-repo option when cloning workunits.
Signed-off-by: Kefu Chai <kchai@redhat.com>
It should live in teuthology, not in Ceph. And it is currently broken:
there is no need to keep it around.
Fixes: http://tracker.ceph.com/issues/18846
Signed-off-by: Loic Dachary <loic@dachary.org>
There were some cases where we would leave a mountpoint
that would cause the teuthology teardown to get hung up
when it tried to look inside cephtest/.
Signed-off-by: John Spray <john.spray@redhat.com>
Thrashing MDS will often result in failures which do not stop the
test. The failure may also cause the test to stall, which will force the
machines to be needlessly locked until a timeout is reached. This
watchdog will unmount mounts and kill daemons when a failure is
detected.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
While the thrasher supports the behavior desired by issue 10792 [1], the
bugs uncovered due to deactivating MDSs (and sometimes killing
deactivating MDSs) are presently a distraction from addressing issues
during normal failures. So now thrashing max_mds is turned off by
default. I have added a TODO to deactivate ranks in order (configurably)
as random deactivation causes a lot of other problems.
This also fixes a bug: random.randrange(0.0, 1.0) always returns 0.
Oops.
[1] http://tracker.ceph.com/issues/10792
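The gist of the randrange bug, with the probability value and the action as
hypothetical stand-ins:

    import random

    # On the Python 2 used here, randrange(0.0, 1.0) steps over integers
    # and therefore always returns 0.
    always_zero = random.randrange(0.0, 1.0)

    # What was intended: a uniform float in [0, 1) for a probability check.
    should_thrash = random.random() < 0.5   # hypothetical thrash probability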
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Currently multimds is prone to many failures when killing an active or
stopping MDS when there are MDSs in the cluster which have been
deactivated (stopping). Have this turned off by default for now.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
The thrasher can enter an infinite loop waiting for an MDS to take a
certain rank when a replacement may not be possible, for example when
max_mds active daemons are already running.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
During the course of thrashing max_mds, the ranks assigned to MDSs may
develop holes. This causes the thrasher to wrongly try to deactivate
ranks that are not assigned.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
https://github.com/ceph/ceph/pull/13194 introduced a regression:
2017-02-06T16:14:23.162 INFO:tasks.thrashosds.thrasher:Traceback (most recent call last):
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 722, in wrapper
return func(self)
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 839, in do_thrash
self.choose_action()()
File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/ceph_manager.py", line 305, in kill_osd
output = proc.stderr.getvalue()
AttributeError: 'NoneType' object has no attribute 'getvalue'
This is because the original patch failed to pass "stderr=StringIO()" to run().
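A hedged sketch of the corrected call, assuming a teuthology Remote object
named remote (the command shown is illustrative):

    from StringIO import StringIO  # Python 2, as in the traceback above

    # Without stderr=StringIO(), proc.stderr is None and .getvalue() fails.
    proc = remote.run(args=['sudo', 'killall', '-9', 'ceph-osd'],
                      stderr=StringIO(),
                      check_status=False)
    output = proc.stderr.getvalue()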
Fixes: http://tracker.ceph.com/issues/16263
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Signed-off-by: Kefu Chai <kchai@redhat.com>
If Thrasher.__init__() spawns the do_thrash thread before initializing the
ceph_objectstore_tool property, do_thrash races with the rest
of Thrasher.__init__() and in some cases do_thrash can call kill_osd() before
Thrasher.__init__() progresses much further. This can lead to an exception
("AttributeError: Thrasher instance has no attribute 'ceph_objectstore_tool'")
being thrown in kill_osd().
This commit eliminates the race by making sure the ceph_objectstore_tool
attribute is initialized before the do_thrash thread is spawned.
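A simplified sketch of the ordering fix (not the actual ceph_manager.py
code):

    import gevent

    class Thrasher(object):
        def __init__(self, manager, config):
            self.ceph_manager = manager
            self.config = config or {}
            # Initialize every attribute that do_thrash()/kill_osd() may
            # read *before* the background greenlet starts...
            self.ceph_objectstore_tool = self.config.get('ceph_objectstore_tool', True)
            # ...and only then spawn do_thrash, so it can never observe a
            # half-constructed Thrasher.
            self.thread = gevent.spawn(self.do_thrash)

        def do_thrash(self):
            pass  # thrashing loop elided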
Fixes: http://tracker.ceph.com/issues/18799
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The umount process can get stuck, in which case
we want to fail the test rather than waiting around for it.
During teardown of the kclient task, catch this
timeout explicitly so that we will powercycle the node if
needed.
Signed-off-by: John Spray <john.spray@redhat.com>
Do the write after opening the file, so that we get good
behaviour wrt the change in Mount.open_background that uses
file existence to confirm that the open happened.
Signed-off-by: John Spray <john.spray@redhat.com>
Previously we could readily end up hanging on teardown
when something had gone wrong with umount. Forcing
is a big hammer (umount_wait will power cycle the node
if umount isn't working), so if we had to do that
then raise an exception to indicate that something
was wrong with the test.
Fixes: http://tracker.ceph.com/issues/18663
Signed-off-by: John Spray <john.spray@redhat.com>
Previously a later remote call could end up executing
before the remote python program in open_background
had actually got as far as opening the file.
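A hypothetical sketch of that synchronization, where mount.path_exists()
stands in for whatever existence check the mount object provides:

    import time

    def wait_for_open(mount, path, timeout=30):
        # Block until the background process has actually created/opened
        # the file, so later remote calls cannot run ahead of it.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if mount.path_exists(path):
                return
            time.sleep(1)
        raise RuntimeError('background open never created %s' % path)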
Fixes: http://tracker.ceph.com/issues/18661
Signed-off-by: John Spray <john.spray@redhat.com>
Convenient when you want to create a fresh cluster
each test run: just pass --create and you'll get
a cluster with the right number of daemons for
the tests you're running.
Signed-off-by: John Spray <john.spray@redhat.com>
Previously this could get hung up if we killed one
PID and then the daemon reappeared with a different
one (perhaps because we caught it during
daemonization?).
Signed-off-by: John Spray <john.spray@redhat.com>
If we checkout ceph-ci.git, and don't find a branch,
we'll try again from ceph.git. But the checkout will
already exist and the clone will fail, so we'll still
fail to find the branch.
The same can happen if a previous workunit task already
checked out the repo.
Fix by removing the repo before checkout (the first and
second times). Note that this may break if there are
multiple workunit tasks running in parallel on the same
role. That is already racy, so if it's happening, we'll
want to switch to using a truly unique clonedir for each
instantiation.
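A minimal sketch of the clone-with-cleanup, under the assumption that the
branch is known up front:

    import shutil
    import subprocess

    def fresh_clone(url, clonedir, branch):
        # Remove any leftover checkout first; otherwise the second attempt
        # (e.g. the ceph.git fallback) fails because the directory already
        # exists and still lacks the wanted branch.
        shutil.rmtree(clonedir, ignore_errors=True)
        subprocess.check_call(['git', 'clone', '--depth=1',
                               '--branch', branch, url, clonedir])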
Fixes: http://tracker.ceph.com/issues/18336
Signed-off-by: Sage Weil <sage@redhat.com>
...before sending a tell command. Otherwise osd.2 might
start without osd.1, the I/O unblocks, and the tell fails
because osd.1 is still down.
Fixes: http://tracker.ceph.com/issues/18303
Signed-off-by: Sage Weil <sage@redhat.com>