Without this patch, attempts to install the ceph-mgr-diskprediction-local RPM
fail on SUSE platforms with the following error:
```
can't install ceph-mgr-diskprediction-local-14.1.0.402+ga396e8bf3b-3742.1.noarch:
nothing provides numpy needed by ceph-mgr-diskprediction-local-14.1.0.402+ga396e8bf3b-3742.1.noarch
nothing provides scipy needed by ceph-mgr-diskprediction-local-14.1.0.402+ga396e8bf3b-3742.1.noarch
```
Also take into account package naming differences between Fedora and
RHEL/CentOS.
Signed-off-by: Kefu Chai <kchai@redhat.com>
Signed-off-by: Nathan Cutler <ncutler@suse.com>
Add a new 'ceph orchestrator nfs update' command that takes the
NFS cluster name and a new count as arguments. These get translated
into a StatelessServiceSpec and passed to update_stateless_service.
Also, add the necessary stubs to the test_orchestrator module and the CLI
QA test.
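A minimal sketch of the intended flow, with simplified Python stand-ins
for the spec and backend (the real interfaces live in the mgr
orchestrator module):
```
# Illustrative stand-ins only: the real StatelessServiceSpec and
# update_stateless_service are defined by the mgr orchestrator interface.
class StatelessServiceSpec(object):
    def __init__(self, name, count=1):
        self.name = name    # NFS cluster name
        self.count = count  # desired number of NFS server daemons


def handle_nfs_update(orchestrator, cluster_name, count):
    """Translate 'ceph orchestrator nfs update <name> <count>' into a
    spec and hand it to the active orchestrator backend."""
    spec = StatelessServiceSpec(cluster_name, count=count)
    return orchestrator.update_stateless_service('nfs', spec)
```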
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Allow Rook to handle scaling the NFS server count up and down in an NFS
cluster. We simply manifest these changes as a change to the
spec.server.active field in the CRD.
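For illustration, the CRD edit amounts to rewriting a single field; a
hedged sketch in which only the spec.server.active path comes from this
commit and the surrounding Kubernetes plumbing is omitted:
```
import copy

def scale_nfs_crd(crd, count):
    """Return a copy of a Rook CephNFS CRD (as a dict) with the active
    server count rewritten to `count`."""
    new_crd = copy.deepcopy(crd)
    new_crd['spec']['server']['active'] = count
    return new_crd

# Scale an NFS cluster from 1 to 3 active servers:
crd = {'spec': {'server': {'active': 1}}}
assert scale_nfs_crd(crd, 3)['spec']['server']['active'] == 3
```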
Signed-off-by: Jeff Layton <jlayton@redhat.com>
We currently have min_size/max_size values in here, but we don't have
any orchestrators that can take advantage of two separate values. Let's
just keep a simple count for now, until we do.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
maybe_clientreplay_done() does not correctly handle the case where a
replayed request is still in the finished_queue (i.e., has not yet been
dispatched).
Fixes: https://tracker.ceph.com/issues/38597
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
`handle_manifest_flush` is using the wrong **last_peering_reset**
to check whether a new peering procedure has been initiated in the
meantime.
Fix by giving the local copy of the pg-wide **last_peering_reset**
variable a distinct alias, which is less confusing and less
error-prone.
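The underlying pattern, sketched in Python for clarity (the actual code
is C++ in osd/): snapshot the peering-reset epoch when the async flush
starts, under a name that cannot be mistaken for the live member when
the completion fires:
```
class PGLike(object):
    """Toy model of the interval-guard pattern; not the real PG code."""
    def __init__(self):
        self.last_peering_reset = 0  # bumped on every new peering interval
        self.in_flight = []

    def start_manifest_flush(self):
        # Snapshot the epoch under a distinct name so the completion
        # can never accidentally read the live, possibly-updated member.
        flush_started_at = self.last_peering_reset
        self.in_flight.append(flush_started_at)

    def handle_manifest_flush(self, flush_started_at):
        if flush_started_at != self.last_peering_reset:
            return  # a new interval started; drop the stale completion
        # ... apply the flush result ...
```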
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
For large clusters, we use device classes to isolate storage pools.
The existing 'osd df' output turns out to be too noisy, say, if
you care about only a single storage pool whose OSDs possibly span
all hosts.
With this change you can now filter 'osd df' by class (or by pool,
if you simply use classes to separate different pools), or by a
specific crush bucket name you are interested in, which is much more
convenient.
Some examples:
```
$ bin/ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.05878 - 60 GiB 6.4 GiB 23 MiB 0 B 6 GiB 54 GiB 10.60 1.00 - root default
-3 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
4 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 58 up osd.4
5 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 60 up osd.5
-5 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph12
0 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 50 up osd.0
1 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 61 up osd.1
2 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 51 up osd.2
TOTAL 60 GiB 6.4 GiB 23 MiB 0 B 6 GiB 54 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
$ bin/ceph osd df tree class aaa
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 0.05878 - 20 GiB 2.1 GiB 7.8 MiB 0 B 2 GiB 18 GiB 10.60 1.00 - root default
-3 0.02939 - 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
-5 0.02939 - 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 - host ceph12
0 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 50 up osd.0
TOTAL 20 GiB 2.1 GiB 7.8 MiB 0 B 2 GiB 18 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
$ bin/ceph osd df tree name ceph11
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-3 0.02939 - 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60 1.00 - host ceph11
3 aaa 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 56 up osd.3
4 bbb 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 58 up osd.4
5 ccc 0.00980 1.00000 10 GiB 1.1 GiB 3.9 MiB 0 B 1 GiB 9.0 GiB 10.60 1.00 60 up osd.5
TOTAL 30 GiB 3.2 GiB 12 MiB 0 B 3 GiB 27 GiB 10.60
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
```
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Instead of generating three tests, each with bluestore-bitmap.yaml, it
generates four tests: one consisting of just bluestore-bitmap.yaml and
the other three without any trace of bluestore. This was introduced in
commit 711df71790fa ("qa: objectstore snippets for krbd").
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
* refs/pull/26742/head:
osd/PG: do not touch this->cct after PG is destroyed
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
* refs/pull/26460/head:
client: parameter "cap" is not used
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
This reverts commit 61b9432ef9a3847eceb96f8d5a854567c49bbf61.
If we are in the middle of replacing, we cannot queue any further
write events into the old center, because we may end up replacing the
existing connection's center with a new one and hence executing
the newly queued write events in the old thread.
See **transfer_existing** for a detailed description.
Also, the reverted patch does not make much sense for the original issue
it tried to resolve, because **send_keepalive** is a pure no-op if the
underlying connection is not ready, which is obviously true for the
case demonstrated in http://tracker.ceph.com/issues/38493.
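A toy Python model of the hazard described above; the EventCenter is
approximated by an executor and everything here is illustrative (the
real code is C++ in msg/async):
```
import threading
from concurrent.futures import ThreadPoolExecutor

class ConnLike(object):
    """Toy model: `center` stands in for the EventCenter and may be
    handed to another thread during replacement."""
    def __init__(self):
        self.center = ThreadPoolExecutor(max_workers=1)
        self.replacing = False
        self.lock = threading.Lock()

    def send_keepalive(self):
        with self.lock:
            if self.replacing:
                # Submitting to self.center here could execute the write
                # event on the old thread after the center is swapped.
                return
            self.center.submit(self._write_keepalive)

    def _write_keepalive(self):
        pass  # wire write elided
```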
Fixes: http://tracker.ceph.com/issues/38569
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
which should serve as a good complement to
the existing **set-device-class** and **rm-device-class**
command family.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
src/script: add run_mypy to run static type checking on Python code
Reviewed-by: Brad Hubbard <bhubbard@redhat.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
* refs/pull/26694/head:
rpm: drop use of $FIRST_ARG
Reviewed-by: Boris Ranto <branto@redhat.com>
Reviewed-by: Tim Serong <tserong@suse.com>
Reviewed-by: Ken Dreyer <kdreyer@redhat.com>
The number of ports the OSDs listen on depends on the version of Ceph
being used, so we need to test for that number accordingly.
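A hedged sketch of the version switch; the base messenger count of 4
(public, cluster, hb_back, hb_front) and the test wiring are
illustrative assumptions, not taken from this commit:
```
def expected_ports_per_osd(ceph_release):
    """Since Nautilus (v14) each messenger binds both a msgr v1 and a
    msgr v2 port, roughly doubling an OSD's listening ports."""
    base_messengers = 4  # assumed per-OSD messenger count
    return base_messengers * (2 if ceph_release >= 14 else 1)

# In a testinfra-style check one might then assert:
#   assert len(listening_ports) == expected_ports_per_osd(release)
```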
Signed-off-by: Andrew Schoen <aschoen@redhat.com>
mgr/dashboard: fix for using '::' on hosts without ipv6
Reviewed-by: Lenz Grimmer <lgrimmer@suse.com>
Reviewed-by: Ricardo Dias <rdias@suse.com>
Reviewed-by: Sebastian Wagner <sebastian.wagner@suse.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>