The current solution fails on our CI system because some outputs can
contain more values and some parameters, such as 'w', can vary between
environments.
It only worked before because it had been tested solely in a vstart
cluster environment.
With this commit, only the attributes we know will be present are
tested.
Fixes: https://tracker.ceph.com/issues/37275
Signed-off-by: Stephan Müller <smueller@suse.com>
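A minimal sketch of the approach in pytest style; the helper and the
sample values are illustrative, not the actual dashboard test code:

    # Compare only the attributes we know will be present; extra keys
    # and environment-dependent values such as 'w' are ignored.
    def assert_known_attributes(actual, expected):
        for key, value in expected.items():
            assert actual[key] == value

    def test_ecp_known_attributes():
        actual = {'name': 'ecp', 'k': '2', 'm': '1', 'w': '8'}  # 'w' varies
        assert_known_attributes(actual, {'name': 'ecp', 'k': '2', 'm': '1'})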
* refs/pull/24940/head:
qa: add test for getfattr ceph.dir.pin
client: support getfattr ceph.dir.pin extended attribute
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
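For illustration, the new virtual xattr can be read from Python like
any other extended attribute; the mount path is an assumption and
os.getxattr is Linux-only:

    import os

    # Read the export pin of a directory on a CephFS mount (path assumed).
    # The value is reported as text, e.g. b'1' for rank 1.
    pin = os.getxattr('/mnt/cephfs/mydir', 'ceph.dir.pin')
    print(int(pin))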
mgr/dashboard: add profiles to set cluster's rebuild performance
Reviewed-by: Tiago Melo <tmelo@suse.com>
Reviewed-by: Sebastian Krah <skrah@suse.com>
Reviewed-by: Patrick Nawracay <pnawracay@suse.com>
The new default is bitmap, so we were testing bitmap twice. Instead,
explicitly call out stupid and bitmap cases so a future default change
won't break coverage.
Signed-off-by: Sage Weil <sage@redhat.com>
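Sketched in pytest rather than the actual harness, with a stub standing
in for running the suite under a given bluestore_allocator:

    import pytest

    def run_with_allocator(allocator):
        # Hypothetical stand-in for running the suite with
        # bluestore_allocator set to the given value.
        assert allocator in ('stupid', 'bitmap')

    # Enumerate both allocators explicitly so coverage no longer
    # depends on whatever the default happens to be.
    @pytest.mark.parametrize('allocator', ['stupid', 'bitmap'])
    def test_bluestore_allocator(allocator):
        run_with_allocator(allocator)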
The change in b3e69a9609 broke the test's assumption that the endpoint
wouldn't be readable by block-manager. It doesn't look as though that's
actually problematic for the ECP controller, so just update the test to
use rgw-manager instead.
Signed-off-by: Zack Cerza <zack@redhat.com>
* refs/pull/25308/head:
osd/OSD: OSD::mkfs asserts when reusing disk with existing superblock.
os/bluestore: add main device expand capability.
Reviewed-by: Sage Weil <sage@redhat.com>
Some classes, such as OrderedDict, should still be imported directly
from collections; only Iterable and Callable (in the context of the
ceph codebase) are found in collections.abc.
The current code works due to the fallback support for Python 2.
Signed-off-by: James Page <james.page@ubuntu.com>
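The resulting import pattern, as a sketch:

    # Concrete containers still live in collections...
    from collections import OrderedDict
    # ...while the abstract base classes moved to collections.abc.
    from collections.abc import Callable, Iterable

    assert isinstance(OrderedDict(), Iterable)
    assert isinstance(len, Callable)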
* add qa/releases/nautilus.yaml so it can be reused.
* use releases/nautilus.yaml in the luminous-x upgrade test, so that
  test_librbd_python.sh is able to use the feature introduced in
  nautilus.
Fixes: http://tracker.ceph.com/issues/37432
Signed-off-by: Kefu Chai <kchai@redhat.com>
This splits out the collection of health and log data from the
/api/dashboard/health controller into /api/health/{full,minimal} and
/api/logs/all.
/health/full contains all the data (minus logs) that /dashboard/health
did, whereas /health/minimal contains only what is needed for the health
component to function. /logs/all contains exactly what the logs portion
of /dashboard/health did.
By using /health/minimal, on a vstart cluster we pull ~1.4KB of data
every 5s, where we used to pull ~6KB; those numbers would get larger
with larger clusters. Once we split out log data, that will drop to
~0.4KB.
Fixes: http://tracker.ceph.com/issues/36675
Signed-off-by: Zack Cerza <zack@redhat.com>
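A hedged sketch of what a poller hits after the split; the base URL is
an assumption and the 5s interval mirrors the description above:

    import time
    import requests

    BASE = 'https://localhost:8443/api'  # assumed dashboard address

    while True:
        # The health component only needs the slimmed-down payload now;
        # log data comes from its own endpoint.
        health = requests.get(BASE + '/health/minimal', verify=False).json()
        logs = requests.get(BASE + '/logs/all', verify=False).json()
        print(len(str(health)), len(str(logs)))  # rough payload sizes
        time.sleep(5)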
* refs/pull/17526/head:
qa/tasks/ceph_manager: avoid test_map_discontinuity stall with too few up osds
Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
Some tests use m=2,k=2 and this will break them. Sometimes, even if we
have 5 up osds, we end up with 4 and CRUSH gets picky, so build in a
buffer and only do this if we have 6 up.
We don't have an easy way from here to see the minimum number of up
osds a healthy cluster needs... basically, this map discontinuity test
just sucks.
Signed-off-by: Sage Weil <sage@redhat.com>
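The guard amounts to something like the following; the function and
argument names are hypothetical, and the threshold comes from the
reasoning above:

    # Require a buffer above the 5 up OSDs the m=2,k=2 tests nominally
    # need, since 5 up can decay to 4 and then CRUSH gets picky.
    MIN_UP_OSDS = 6

    def maybe_test_map_discontinuity(num_up_osds, thrash):
        if num_up_osds < MIN_UP_OSDS:
            return  # skip: not enough headroom for this test
        thrash()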