ceph.created_pool allows the user (via yaml lines) to add pools
that the ceph_manager knows about.
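For illustration, a minimal sketch of the idea; the function shape, the
yaml schema in the comment, and the add_pool method are assumptions, not
the actual ceph-qa-suite code:

    # Invoked from yaml roughly like (exact schema assumed):
    #   tasks:
    #   - ceph.created_pool: [newpool]
    def created_pool(ctx, config):
        # Make the ceph_manager aware of pools it did not create itself,
        # so later pool operations and checks can find them.
        for pool in config:
            ctx.manager.add_pool(pool)  # add_pool is an assumed method name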
Fixes: #9091
Signed-off-by: Warren Usui <warren.usui@inktank.com>
If erasure_code_profile is present at the same level as ec-data-pool, it
is used to override the default hard coded profile.
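Something along these lines (the key names come from the commit; the
default profile contents and the helper are assumptions):

    DEFAULT_ERASURE_CODE_PROFILE = {
        'name': 'teuthologyprofile',  # contents assumed for illustration
        'k': 2,
        'm': 1,
    }

    def erasure_code_profile_for(task_config):
        if not task_config.get('ec-data-pool'):
            return None
        # A profile supplied in the yaml at the same level as ec-data-pool
        # replaces the hard coded default above.
        return task_config.get('erasure_code_profile',
                               DEFAULT_ERASURE_CODE_PROFILE)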
Signed-off-by: Loic Dachary <loic-201408@dachary.org>
But don't error out if it fails, as this would just mean that the monitors
are taking longer to form quorum. Go on to the next block, which waits up
to 15 minutes for a successful gatherkeys (that only works once the
monitors have formed quorum).
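The intent, roughly (the helper, timings, and names below are
illustrative, not the actual task code):

    import time

    def gatherkeys_with_retry(run_gatherkeys, timeout=15 * 60, interval=10):
        # A first failure is tolerated: it may only mean the monitors have
        # not formed quorum yet.
        if run_gatherkeys():
            return
        deadline = time.time() + timeout
        while time.time() < deadline:
            if run_gatherkeys():
                return
            time.sleep(interval)
        raise RuntimeError('gatherkeys never succeeded; no monitor quorum')

where run_gatherkeys would be a callable returning True on success.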
Signed-off-by: Alfredo Deza <alfredo.deza@inktank.com>
Inside a conditional that applies only to Apache 2.4, set User, Group, and
the module config to load mpm_event. These are normally set by the
default configuration files, but since this abbreviated conf bypasses
those, we must set them here.
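Roughly the shape of the added fragment (the real template and
substitutions in the task differ; paths, user, and group here are
placeholders):

    # Only for Apache 2.4: the stock distro configuration normally loads
    # an MPM and sets User/Group, but the abbreviated conf skips those
    # files, so provide them explicitly.
    APACHE_24_FRAGMENT = """
    <IfVersion >= 2.4>
        LoadModule mpm_event_module {mod_path}/mod_mpm_event.so
        User {user}
        Group {group}
    </IfVersion>
    """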
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Because rgw.py iterates over it to run the rgw servers. If it is removed,
the rgw servers are not started and everything fails.
Signed-off-by: Loic Dachary <loic@dachary.org>
instead of rados.py, because ceph.py is only run once whereas rados.py
could be run multiple times, leading to race conditions.
http://tracker.ceph.com/issues/9027
Fixes: #9027
Signed-off-by: Loic Dachary <loic@dachary.org>
mount_osd_data and make_admin_daemon_dir are only used by
ceph_manager.py although they are defined in ceph.py
Signed-off-by: Loic Dachary <loic@dachary.org>
Globally overriding the rgw idle_timeout is not possible because it needs
to be done on a per-client (client.0, client.1, etc.) basis. Add the
default_idle_timeout key to the rgw config: it defaults to the previously
hardcoded value (30) and can be changed via the override.
The existing tasks that previously overrode the idle_timeout on a
per-client basis are changed to use default_idle_timeout instead, for
consistency and to allow a global override.
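A minimal sketch of the resulting lookup (default_idle_timeout comes from
the commit; the helper and the config shape are assumed):

    def idle_timeout_for(client_config, rgw_task_config):
        # Task-wide fallback replacing the previously hard coded 30; a
        # per-client idle_timeout, where still present, is assumed to win.
        default = rgw_task_config.get('default_idle_timeout', 30)
        return (client_config or {}).get('idle_timeout', default)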
Signed-off-by: Loic Dachary <loic@dachary.org>
gevent may suspend the rados.py thread whenever it gets an opportunity. The
    if not hasattr(ctx, 'manager'):
check must therefore be immediately before the manager creation it is
supposed to protect. If any of the functions called as a side effect of
    first_mon = teuthology.get_first_mon(ctx, config)
    (mon,) = ctx.cluster.only(first_mon).remotes.iterkeys()
gives gevent an opportunity to suspend the thread, it creates a race
condition.
The other possibility would be to use a ctx lock to protect the code, but
this solution seems simpler.
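In code, the safe ordering looks roughly like this (the CephManager
arguments and the log object are assumed, not taken from the patch):

    # A switch inside these two calls is now harmless: the hasattr()
    # check runs after them, immediately before the assignment it
    # protects, with no yield point in between.
    first_mon = teuthology.get_first_mon(ctx, config)
    (mon,) = ctx.cluster.only(first_mon).remotes.iterkeys()
    if not hasattr(ctx, 'manager'):
        ctx.manager = ceph_manager.CephManager(
            mon, ctx=ctx, logger=log.getChild('ceph_manager'))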
http://tracker.ceph.com/issues/9027
Fixes: #9027
Signed-off-by: Loic Dachary <loic@dachary.org>