[rgw]: Fix help message of 'radosgw-admin user info' when no uid is provided
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Lenz Grimmer <lgrimmer@suse.com>
Reviewed-by: Abhishek Lekshmanan <abhishek@suse.com>
Cython version 0.29 removed support for Python subinterpreters,
which completely breaks ceph-mgr functionality.
See cython repo commit:
7e27c7cd51
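As an illustration of the constraint, a hypothetical build-time guard (not the actual ceph build code; the names and version bound here are assumptions):

```python
# Hypothetical guard, for illustration only: ceph-mgr runs each module in its
# own Python subinterpreter, so a Cython release that dropped subinterpreter
# support (>= 0.29) cannot be used to build the bindings.
from distutils.version import LooseVersion

import Cython

if LooseVersion(Cython.__version__) >= LooseVersion("0.29"):
    raise RuntimeError(
        "Cython >= 0.29 removed subinterpreter support; "
        "pin Cython < 0.29 so ceph-mgr keeps working"
    )
```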
Fixes: http://tracker.ceph.com/issues/37472
Signed-off-by: Ricardo Dias <rdias@suse.com>
A Ceph Manager Orchestrator that uses an external REST API service to execute Ansible playbooks.
get_inventory implementation
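A purely illustrative sketch of the idea (the URL, port, endpoint path, and JSON shape are made-up placeholders, not the real service API):

```python
# Illustrative only: an orchestrator-style get_inventory that asks an
# external REST service which hosts (and devices) it knows about.
import json
import urllib.request

def get_inventory(base_url="http://ansible-service.example:5001"):
    # assumption: the service returns a JSON list of hosts with their
    # storage devices, which the module maps into its inventory format
    with urllib.request.urlopen(base_url + "/api/v1/hosts") as resp:
        return json.load(resp)
```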
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Document how to use the CLI through the Orchestrator CLI
Signed-off-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Some classes should still be imported directly from collections
(e.g. OrderedDict, a concrete container); of the names used in the
ceph codebase, only the abstract base classes Iterable and Callable
are found in collections.abc.
The current code works only due to the fallback support for Python 2.
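The compatible import idiom looks like this (a sketch of the general pattern, not a diff of the actual change):

```python
# OrderedDict is a concrete container: it always lives in collections.
from collections import OrderedDict

try:
    # Python 3.3+: the abstract base classes moved to collections.abc
    from collections.abc import Callable, Iterable
except ImportError:
    # Python 2 fallback: the ABCs are still exposed from collections
    from collections import Callable, Iterable
```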
Signed-off-by: James Page <james.page@ubuntu.com>
On FreeBSD coredumps are enabled by default, so the teardown()
checks in ceph-helpers.sh can fail because of unwarranted cores.
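The equivalent idea, sketched in Python rather than the shell helper that actually changed:

```python
# Sketch only: lower the core-file size limit to zero so processes that the
# tests deliberately crash don't leave core dumps behind for teardown() to
# trip over.
import resource

resource.setrlimit(resource.RLIMIT_CORE, (0, 0))
```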
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
Currently, the rados bench write test always creates new
objects for testing. The create operation incurs non-negligible
metadata overhead, which especially hurts small-write performance.
This patch allows objects to be reused for write testing.
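Conceptually (a Python sketch, not the actual C++ bench code), reuse means cycling over a bounded set of object names instead of minting a fresh name per write:

```python
# Illustrative sketch: with reuse, writes after the first pass land on
# existing objects, so the per-write create (and its metadata overhead)
# is only paid pool_size times; without reuse, every write creates a
# brand-new object.
def object_names(reuse, pool_size=16):
    i = 0
    while True:
        yield "benchmark_obj_%d" % (i % pool_size if reuse else i)
        i += 1
```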
Signed-off-by: Li Wang <laurence.liwang@gmail.com>
If process_pg_map_command() fails to fulfill the request, we should keep
odata intact and let the module take care of it.

Before this change, we always wrote to odata even if
process_pg_map_command() returned -EOPNOTSUPP, which left unnecessary
leftovers in the output, like pg_info and pg_ready.

After this change, we don't touch odata if process_pg_map_command()
returns -EOPNOTSUPP, and odata is filled with whatever the python
module returns.
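The intended control flow, sketched in Python for clarity (the real code is C++; the stand-in function below is hypothetical):

```python
import errno

def process_pg_map_command(cmd):
    # hypothetical Python stand-in for the C++ function of the same name
    return -errno.EOPNOTSUPP, ""

def handle_command(cmd, odata):
    ret, out = process_pg_map_command(cmd)
    if ret == -errno.EOPNOTSUPP:
        # not handled here: leave odata untouched, so the python module's
        # own output is what ends up in the response
        return ret
    # handled here: odata carries this function's output
    odata.append(out)
    return ret
```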
Fixes: http://tracker.ceph.com/issues/37444
Signed-off-by: Kefu Chai <kchai@redhat.com>
* add qa/releases/nautilus.yaml so it can be reused.
* use releases/nautilus.yaml in luminous-x upgrade test, so
test_librbd_python.sh is able to use the feature introduced in
nautilus.
Fixes: http://tracker.ceph.com/issues/37432
Signed-off-by: Kefu Chai <kchai@redhat.com>
Zone credentials are required to run 'period update --commit' when a
zone is removed via --rgw-zone.
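For example (a sketch with placeholder values; the exact invocation in the test may differ), the commit has to carry the zone's system-user credentials:

```python
# Sketch: committing the period on behalf of a zone requires that zone's
# credentials to be passed explicitly.
import subprocess

def commit_period(zone, access_key, secret):
    subprocess.check_call([
        "radosgw-admin", "period", "update", "--commit",
        "--rgw-zone", zone,
        "--access-key", access_key,  # zone system user's access key
        "--secret", secret,          # zone system user's secret key
    ])
```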
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Conflicts:
src/test/rgw/rgw_multi/tests.py
Sync of deletes uses an If-UnModified-Since precondition, but does not
handle the corresponding ERR_PRECONDITION_FAILED error. Treating this as
a failure means that we'll keep retrying a delete which can never
succeed. Break this loop by treating ERR_PRECONDITION_FAILED as a
success.
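The gist of the fix as a Python sketch (the error constant is an illustrative stand-in for rgw's internal error code):

```python
# Sketch: a delete whose If-UnModified-Since precondition fails means the
# object was written again after the delete was logged, so this delete is
# obsolete. Report it as done instead of retrying forever.
ERR_PRECONDITION_FAILED = 412  # illustrative stand-in, not rgw's real value

def finish_delete(ret):
    if ret == -ERR_PRECONDITION_FAILED:
        return 0  # success: nothing left for this delete to do
    return ret
```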
Fixes: http://tracker.ceph.com/issues/37448
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Yet another issue similar to 8d8e8a359c.

To reproduce, construct a cluster with 3 hosts, each containing a single osd only:
- cut off osd.1's cluster network, and wait for osd.1 to be marked down
- cut off both osd.2's and osd.3's cluster networks

It is possible that we'll now end up with __two__ down osds (e.g., both
osd.1 and osd.2 are down), and restoring osd.1's and osd.2's cluster
networks afterwards won't change anything.

The root cause is that by default we require active heartbeat connections
with at least 1/3 of all current __up__ osds before bringing a previously
dead (unhealthy) osd back to life. However, the __up__ set could be the
minority partition that has been cut off from the rest of the cluster
entirely, which causes the split-brain behaviour demonstrated above.

The simplest fix is to try to re-activate an unhealthy osd whenever it is
still safe to do so. Also keep in mind that frequent up-to-down transitions
will kill off the osd process entirely, which is why the
```osd_markdown_log``` related checking is needed here.
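To see why the old rule deadlocks, a conceptual Python sketch (the real logic lives in C++; the names here are illustrative):

```python
# Old rule, conceptually: an osd is only brought back up if it has healthy
# heartbeat connections to at least 1/3 of the current up set. If the up set
# is itself the cut-off minority, no down osd can ever satisfy this.
def can_mark_up_old(healthy_peers, up_osds):
    return len(healthy_peers & up_osds) >= len(up_osds) / 3.0

# Scenario from above: only osd.3 is still up, and it stays unreachable from
# osd.1 and osd.2 even after their networks are restored.
up_osds = {3}
print(can_mark_up_old(set(), up_osds))  # False, forever
```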
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>