Another rados put of the same object will fail as follows:
$ touch /tmp/bar
$ ./rados -p rbd put existing_3 /tmp/bar
$ ./rados -p rbd put existing_3 /tmp/bar
WARNING: could not create object: existing_3
error putting rbd/existing_3: (17) File exists
It should be considered a bug in the rados command line, but that needs to
be addressed separately.
http://tracker.ceph.com/issues/9387
Fixes: #9387
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
Adding this so that we can modify the clients' conf file as needed when the backend is slow.
This can be achieved by:
overrides:
  s3tests:
    slow_backend: true
Signed-off-by: Yehuda Sadeh <yehuda@redhat.com>
(cherry picked from commit 61409179df)
May have been causing spurious failures when trying to read session
state after an MDS restart (the session list isn't populated until
recovery is complete).
Signed-off-by: John Spray <john.spray@redhat.com>
To make the logs clearer when trying to work out
if/when something went wrong, rather than always
having client logs start with some failures.
Signed-off-by: John Spray <john.spray@redhat.com>
'client_id' was ambiguous because in other places it
meant the '0' in client.0, whereas here it means
the runtime-generated global ID of the client.
Signed-off-by: John Spray <john.spray@redhat.com>
Some of this stuff could be even more general for embedding
unittest-style suites, but for the moment let's keep the cephfs
stuff in a walled garden.
Signed-off-by: John Spray <john.spray@redhat.com>
...so that there will at least be multiple segments
in the log during the rewrite.
Also make the test stricter by checking that
cephfs-journal-tool can happily read the resulting
journal.
Signed-off-by: John Spray <john.spray@redhat.com>
Previously this would fail because the cap waiter completed too soon,
without noticing that the reason it completed quickly was that it had
failed.
Signed-off-by: John Spray <john.spray@redhat.com>
Check for more than one OSD down and randomize on chance_move_pg (100%).
For now, only export from the older down OSD to the newly down OSD, to avoid a missing map.
Signed-off-by: David Zafman <david.zafman@inktank.com>
Based on ceph/src/test/ceph_objectstore_tool.py but only does
replicated pool testing and doesn't test argument validation.
Signed-off-by: David Zafman <david.zafman@inktank.com>
ceph.created_pool allows the user (via yaml lines) to add pools
that the ceph_manager knows about.
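A minimal sketch of the intended yaml usage, assuming the pool is created
beforehand (here with the exec task) and that ceph.created_pool accepts a
list of pool names; the pool name and the exact argument form are
illustrative assumptions:
tasks:
- ceph:
- exec:
    client.0:
      - ceph osd pool create newpool 4
- ceph.created_pool: [newpool]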
Fixes: #9091
Signed-off-by: Warren Usui <warren.usui@inktank.com>
This will enable using .yaml changes to switch this
guy over to use kcephfs client once the teuthology
code around it supports all the same hooks as I've added
for fuse.
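As a purely hypothetical illustration of such a yaml change, the fuse mount
task would simply be swapped for the kernel client task, e.g.:
tasks:
- ceph:
- kclient: [client.0]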
Signed-off-by: John Spray <john.spray@redhat.com>
This is for any test config that needs to run
some workunit with clients unmounted. It allows you to
toggle the mountedness of a client as you go up and down
the stack, like this:
- ceph-fuse:
    client.0:
      mounted: true
- workunit:
    clients:
      client.0:
        - fs/misc/trivial_sync.sh
- ceph-fuse:
    client.0:
      mounted: false
The initial use case for this is running the
cephfs_journal_tool_smoke.sh workunit, which
tests administrative operations that are meant
to be run on an unmounted filesystem.
Signed-off-by: John Spray <john.spray@redhat.com>
So that we can explicitly stop daemons on demand. Useful
for MDS tool tests that want the MDS daemons not to be running,
as this is more solid and explicit than doing e.g. "ceph mds stop"
from within workunits.
Signed-off-by: John Spray <john.spray@redhat.com>
But don't error out if it fails, as this would just mean that the monitors
are taking longer to form quorum. Go on and try the next block, which will
wait up to 15 minutes for a successful gatherkeys to happen (that only
works once the monitors have formed quorum).
Signed-off-by: Alfredo Deza <alfredo.deza@inktank.com>
If erasure_code_profile is present at the same level as ec-data-pool, it
is used to override the default hard-coded profile.
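A sketch of what such an override could look like; the profile keys shown
are the usual erasure-code profile parameters and the values are purely
illustrative:
ec-data-pool: true
erasure_code_profile:
  k: 2
  m: 1
  ruleset-failure-domain: osd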
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
Inside a conditional that affects only Apache 2.4, set User, Group, and the
module config to load mpm_event. This is normally done with the
default configuration files, but since this abbreviated conf bypasses
those, we must set them here.
Signed-off-by: Dan Mick <dan.mick@inktank.com>
instead of rados.py, because ceph.py is only run once whereas rados.py
could be run multiple times, leading to race conditions.
http://tracker.ceph.com/issues/9027
Fixes: #9027
Signed-off-by: Loic Dachary <loic@dachary.org>
mount_osd_data and make_admin_daemon_dir are only used by
ceph_manager.py although they are defined in ceph.py
Signed-off-by: Loic Dachary <loic@dachary.org>
Globally overriding the rgw idle_timeout is not possible because it
needs to be done on a per-client (client.0, client.1, etc.) basis. Add the
default_idle_timeout key to the rgw config: it defaults to the
previously hardcoded value (30) and can be changed via an override.
The existing tasks that were previously overriding the idle_timeout on a
per-client basis are changed to use default_idle_timeout instead, for
consistency and to allow a global override.
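For example, a single override can now cover all rgw clients (the timeout
value shown is illustrative):
overrides:
  rgw:
    default_idle_timeout: 1200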
Signed-off-by: Loic Dachary <loic@dachary.org>
gevent may hold the rados.py thread whenever it gets an opportunity. The
    if not hasattr(ctx, 'manager'):
check must therefore be placed immediately before the manager creation it
is supposed to protect. If any of the functions called as a side effect of
    first_mon = teuthology.get_first_mon(ctx, config)
    (mon,) = ctx.cluster.only(first_mon).remotes.iterkeys()
gives gevent an opportunity to hold the thread, a race condition is
created.
The other possibility would be to use a ctx lock to protect the code, but
this solution seems simpler.
http://tracker.ceph.com/issues/9027
Fixes: #9027
Signed-off-by: Loic Dachary <loic@dachary.org>