Updated health controller & test to reflect changes introduced
in the 'df' payload.
Return 'total_used_raw_bytes' instead of 'total_used_bytes'
to match the used/avail summary of the CLI 'bin/rados df' in the
Landing Page (frontend component).
Do not return 'stats_by_class' to save bandwidth, as it is not
needed (right now) by the dashboard.
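The gist of the change, as a hedged sketch (the real controller code
differs; the key names follow the 'df' JSON):

    # Hypothetical sketch: trim the 'df' payload before handing it
    # to the frontend.
    def filter_df_report(df):
        report = dict(df)
        # drop per-class stats; the dashboard does not use them (yet)
        report.pop('stats_by_class', None)
        stats = dict(report.get('stats', {}))
        # keep 'total_used_raw_bytes' so the Landing Page matches
        # 'bin/rados df'; drop the non-raw counter
        stats.pop('total_used_bytes', None)
        report['stats'] = stats
        return report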
Fixes: https://tracker.ceph.com/issues/37717
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
* All pool controller methods now share the same default value
for the stats flag (see the sketch below).
* Stats are requested explicitly by the frontend service.
* Updated API tests accordingly.
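A rough sketch of the shared default (signatures assumed for
illustration; the backend lookup is elided):

    # Hypothetical sketch: every pool endpoint defaults to
    # stats=False, so the frontend has to opt in with ?stats=true.
    class Pool(object):
        def list(self, attrs=None, stats=False):
            return self._pools(attrs, stats)

        def get(self, pool_name, attrs=None, stats=False):
            return self._pools(attrs, stats, pool_name)

        def _pools(self, attrs, stats, pool_name=None):
            raise NotImplementedError  # backend lookup elided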
Fixes: https://tracker.ceph.com/issues/36740
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
1. Allow running the CLI without an external orchestrator.
2. Run the CLI in Teuthology.
Signed-off-by: Sebastian Wagner <sebastian.wagner@suse.com>
The current solution fails on our CI system, as some outputs can
contain additional values and some parameters like 'w' can vary
between environments.
It worked before only because it had been tested solely in a vstart
cluster environment.
With this commit, only the attributes we know to be present will be
tested.
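In other words, the assertions shrink to a known subset, along these
lines (key names invented for illustration):

    # Hypothetical sketch: check only attributes that are stable
    # across environments instead of comparing the whole output.
    EXPECTED_KEYS = ('pool', 'pool_id', 'size')  # assumed subset

    def assert_known_attributes(data):
        for key in EXPECTED_KEYS:
            assert key in data, 'missing attribute: %s' % key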
Fixes: https://tracker.ceph.com/issues/37275
Signed-off-by: Stephan Müller <smueller@suse.com>
The change in b3e69a9609 broke the test's assumption that the endpoint
wouldn't be readable by block-manager. It doesn't look as though that's
actually problematic for the ECP controller, so just update the test to
use rgw-manager instead.
Signed-off-by: Zack Cerza <zack@redhat.com>
Some classes should still be imported directly from collections
(e.g. OrderedDict); only the ABCs used in the ceph codebase,
Iterable and Callable, are found in collections.abc.
The current code works due to the fallback support for Python 2.
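The resulting pattern looks roughly like this (a sketch of the idea,
not the exact hunk):

    # OrderedDict still lives in collections; only the ABCs moved.
    from collections import OrderedDict
    try:
        from collections.abc import Iterable, Callable  # Python 3.3+
    except ImportError:
        from collections import Iterable, Callable  # Python 2 fallback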
Signed-off-by: James Page <james.page@ubuntu.com>
This splits out the collection of health and log data from the
/api/dashboard/health controller into /api/health/{full,minimal} and
/api/logs/all.
/health/full contains all the data (minus logs) that /dashboard/health
did, whereas /health/minimal contains only what is needed for the health
component to function. /logs/all contains exactly what the logs portion
of /dashboard/health did.
By using /health/minimal, on a vstart cluster we pull ~1.4KB of data
every 5s, where we used to pull ~6KB; those numbers would get larger
with larger clusters. Once we split out log data, that will drop to
~0.4KB.
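Schematically, the split looks like this (a hedged sketch; the key
list is assumed, not the real one):

    # Hypothetical sketch: one gatherer feeds both endpoints;
    # 'minimal' keeps only the keys the health component reads.
    MINIMAL_KEYS = ('health', 'mon_status', 'osd_map', 'pg_info')

    def full_health(gather):
        return gather()  # everything except the logs

    def minimal_health(gather):
        data = gather()
        return dict((k, data[k]) for k in MINIMAL_KEYS if k in data)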
Fixes: http://tracker.ceph.com/issues/36675
Signed-off-by: Zack Cerza <zack@redhat.com>
The behavior of `safe-to-destroy` has changed in
432f194355 (PR#24799) and the backend
needs to be adapted accordingly.
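For context, the command now reports JSON lists per category instead
of a plain answer; checking it could look roughly like this (the key
name matches current Ceph, but verify against the actual payload):

    # Hedged sketch: parse the mon command output and test
    # membership; 'send_mon_command' is an assumed callable.
    import json

    def is_safe_to_destroy(send_mon_command, osd_id):
        out = send_mon_command('osd safe-to-destroy',
                               ids=[str(osd_id)])
        report = json.loads(out)
        return osd_id in report.get('safe_to_destroy', [])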
Fixes: http://tracker.ceph.com/issues/37290
Signed-off-by: Patrick Nawracay <pnawracay@suse.com>
Separate the diskprediction local and cloud modes from the
diskprediction plugin. Devicehealth invokes the device prediction
function according to the global configuration option
"device_failure_prediction_mode".
Signed-off-by: Rick Chen <rick.chen@prophetstor.com>
The new info endpoint provides the frontend with the information it
needs to create new profiles.
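A possible shape for the response (field names and value sets are
illustrative assumptions, not the actual endpoint):

    # Hypothetical sketch: bundle what the profile creation form
    # needs into a single response.
    def ecp_info(mon_command):
        return {
            'names': mon_command('osd erasure-code-profile ls'),
            'plugins': ['jerasure', 'isa', 'lrc', 'shec'],  # assumed
            'failure_domains': ['host', 'osd', 'rack'],     # assumed
        }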
Fixes: https://tracker.ceph.com/issues/25156
Signed-off-by: Stephan Müller <smueller@suse.com>
- Fix bug in the Dashboard QA unit test framework. Don't set the application type header manually; this is done by the requests library if required.
- Enhance the QA unit test helper: Print the response of the API request if it fails. This should help to identify problems more easily.
- Fix bug in the OSD controller. A parameter needs to be converted to an integer (see the sketch below).
- Take care that the params of the request object are not modified.
The issue was introduced by PR https://github.com/ceph/ceph/pull/24475. The CherryPy json_in plugin exposed the erroneous unit test helper implementation.
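Two of the fixes, sketched with invented names:

    # Hypothetical sketches of two of the fixes above.
    def get_osd(svc_id, fetch):
        # route parameters arrive as strings; convert before lookup
        return fetch(int(svc_id))

    def prepare_params(params):
        # copy before modifying so the caller's dict stays untouched
        params = dict(params or {})
        params.setdefault('format', 'json')
        return params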
Fixes: https://tracker.ceph.com/issues/36708
Signed-off-by: Volker Theile <vtheile@suse.com>
Python 3.7 now shows a warning as below.
/usr/bin/ceph:128: DeprecationWarning: Using or importing the ABCs from
'collections' instead of from 'collections.abc' is deprecated, and in
3.8 it will stop working
import rados
This patch addresses that particular issue.
Signed-off-by: Ganesh Maharaj Mahalingam <ganesh.mahalingam@intel.com>
This is related to http://tracker.ceph.com/issues/36453. It is far from
a complete solution, but seems like a positive move.
I tested this change by first disabling my browser cache and then
using the /docs endpoint to query /api/dashboard/health. Before
compression:
Content-Length: 60748
Time: 615ms
After:
Content-Length: 7505
Time: 92ms
Then, I logged into the dashboard as normal and reloaded the page once I
was in. Some values for the reload operation before compression:
Total page load time: 58.48s
vendor.js Content-Length: 6486025
vendor.js time: 48.09s
After:
Total page load time: 14.55s
vendor.js Content-Length: 1143178
vendor.js time: 4.50s
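The server side of this boils down to CherryPy's built-in gzip tool;
a minimal sketch (the MIME list and mount point are assumptions):

    # Minimal sketch: enable response compression in a CherryPy app.
    import cherrypy

    class App(object):
        @cherrypy.expose
        def index(self):
            return 'x' * 4096  # compressible payload

    cherrypy.quickstart(App(), '/', config={'/': {
        'tools.gzip.on': True,
        'tools.gzip.mime_types': ['text/*', 'application/json',
                                  'application/javascript'],
    }})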
Signed-off-by: Zack Cerza <zack@redhat.com>
It is now commented out like it was before,
but I've added a comment describing what happened during this test with
the QA system. The problem was that even with an increase of only 1 PG
the QA cluster went into a warning state and did not recover in time.
The QA coverage timeout is 2 minutes.
I could not reproduce this behavior with a local cluster, but I've
added a loop that waits until the pgp and pg numbers are equal and the
cluster is healthy again. Locally this can take about 5 seconds.
The internal loop has a timeout of 3 minutes.
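The loop amounts to something like this (helper callables invented;
'pg_placement_num' is the osd dump name for pgp_num):

    # Hypothetical sketch: poll until pg_num == pgp_num and the
    # cluster is healthy again, or give up after 3 minutes.
    import time

    def wait_until_settled(get_pool, get_health, timeout=180):
        deadline = time.time() + timeout
        while time.time() < deadline:
            pool = get_pool()
            if (pool['pg_num'] == pool['pg_placement_num']
                    and get_health() == 'HEALTH_OK'):
                return
            time.sleep(2)
        raise AssertionError('cluster did not settle in time')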
Fixes: https://tracker.ceph.com/issues/36362
Signed-off-by: Stephan Müller <smueller@suse.com>
The dashboard backend can now unset all set compression arguments when
the compression mode is switched to 'unset'. In the case of 'unset',
Ceph itself only deletes the 'compression_mode' argument, not the other
compression arguments that were set. Those arguments are therefore
added to the update arguments so that all of them get removed.
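In outline (the option names are Ceph's pool compression options;
marking them with None for deletion is an assumption about the
update mechanics):

    # Sketch: when switching to 'unset', schedule the remaining
    # compression options for removal as well.
    COMPRESSION_ARGS = ('compression_algorithm',
                        'compression_min_blob_size',
                        'compression_max_blob_size',
                        'compression_required_ratio')

    def extend_update_args(mode, args):
        if mode == 'unset':
            for name in COMPRESSION_ARGS:
                args[name] = None  # None marks it for deletion
        return args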
Fixes: https://tracker.ceph.com/issues/36355
Signed-off-by: Stephan Müller <smueller@suse.com>
Refactor the '_get_mon_allow_pool_delete_config' method to be a bit
more general. The method can now be used to get the value of any
config option known to the cluster.
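In spirit, the refactored helper becomes a generic lookup (sketched
here with an invented name):

    # Hypothetical sketch: look up any cluster config option via
    # 'config dump' instead of hard-coding 'mon_allow_pool_delete'.
    def _get_config_option(mon_command, name):
        for option in mon_command('config dump'):
            if option['name'] == name:
                return option['value']
        return None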
Signed-off-by: Tatjana Dehler <tdehler@suse.com>