This commit adds documentation for the
'hardware inventory / monitoring' feature (node-proxy agent).
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
Improve paragraphs under the heading "The Ceph Storage Cluster". Remove
a sentence that was pleonastic in its context.
Signed-off-by: Zac Dover <zac.dover@proton.me>
As ScrubResources is no longer involved in remote reservations, some
of the data listed by 'dump_scrub_reservations' is now collected by
OsdScrub itself (prior to this change, OsdScrub just forwarded the
request to ScrubResources).
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
so that it can later be used by the dashboard to configure nvmeof
through the UI
and to create the rbd pool in the UI.
Fixes: https://tracker.ceph.com/issues/64201
Signed-off-by: Nizamudeen A <nia@redhat.com>
This score works for pools in which the read_ratio
value is set.
Current limitations:
- This mechanism ignores osd read affinity.
- There is a plan to add support for read affinity 0
  in the next version.
- This mechanism works only when all PGs are full.
- If read_ratio is not set, the existing mechanism (named
  fair score) is used.
Signed-off-by: Josh Salomon <41079547+JoshSalomon@users.noreply.github.com>
Test cases for the read balancer which takes osd sizes into account.
Also some refactoring and reorganization of balancing code that is used
in multiple tests.
Signed-off-by: Josh Salomon <41079547+JoshSalomon@users.noreply.github.com>
This commit adds calculation of the desired primary distribution which
takes the osd size into account. This way smaller OSDs can take more
read operations (by getting more primaries), the larger OSDs take fewer
primaries, and the load the cluster can handle increases. (Under some
conditions this feature slightly offsets the 'weakest link in the chain'
effect.) In order to calculate the loads correctly the read/write ratio
of the pool must be known, and this commit assumes the read_ratio
parameter is available for the pool.
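Below is a minimal sketch of the idea, not the actual implementation:
the load model, function and variable names are assumptions for
illustration. It assumes write load lands on an OSD in proportion to its
size (it holds proportionally more data), so smaller OSDs have spare IO
budget that can be filled with primary (read) load.

    # Illustrative sketch only; not the Ceph implementation.
    def desired_primary_share(osd_sizes, read_ratio):
        """osd_sizes: relative OSD sizes; read_ratio: percentage 0..100."""
        r = read_ratio / 100.0            # fraction of IOs that are reads
        w = 1.0 - r                       # fraction of IOs that are writes
        total_size = sum(osd_sizes)
        # Write load each OSD already carries, proportional to its size.
        write_load = [w * s / total_size for s in osd_sizes]
        per_osd_budget = 1.0 / len(osd_sizes)   # equalized total load
        # Primary (read) load that keeps every OSD at the same total load.
        read_load = [max(per_osd_budget - wl, 0.0) for wl in write_load]
        total_reads = sum(read_load) or 1.0
        return [rl / total_reads for rl in read_load]  # share of primaries

    # Example: OSDs of relative sizes 1, 1 and 2, pool doing 60% reads:
    # the two smaller OSDs each get ~39% of primaries, the big one ~22%.
    print(desired_primary_share([1, 1, 2], 60))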
Signed-off-by: Josh Salomon <41079547+JoshSalomon@users.noreply.github.com>
This parameter is used for better read balancing with non-identical
devices.
- This parameter is controlled using the 'ceph osd pool set/get'
  commands (see the example below).
- This parameter is applicable only to replicated pools.
- Valid values are integers in the range [0..100] and represent the
  percentage of read IOs out of all IOs in the pool.
- A value of 0 unsets this parameter, reverting it to the default value
  (this is the generic behavior of the command 'ceph osd pool set').
- The default value can be set by the config parameter
  `osd_pool_default_read_ratio`.
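For example (illustrative commands; 'mypool' is a hypothetical pool
name, and the config scope in the last command is a guess):

    ceph osd pool set mypool read_ratio 70   # pool serving ~70% reads
    ceph osd pool get mypool read_ratio
    ceph osd pool set mypool read_ratio 0    # unset, revert to the default
    ceph config set global osd_pool_default_read_ratio 50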
Signed-off-by: Josh Salomon <41079547+JoshSalomon@users.noreply.github.com>
data['cephx']['name'] will return something like:
node-proxy.hostname123
The prefix "node-proxy." has to be removed, otherwise there will be
a mismatch with what is actually expected.
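A minimal sketch of the fix (assuming Python >= 3.9 for
str.removeprefix; the variable name is illustrative):

    # data['cephx']['name'] comes back as e.g. 'node-proxy.hostname123';
    # strip the 'node-proxy.' prefix so it matches the expected hostname.
    hostname = data['cephx']['name'].removeprefix('node-proxy.')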
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
The recent migration to a separate daemon introduced
changes that broke these tests.
This commit fixes them.
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
This sets a default value (169.254.1.1), which is the default
address of the 'OS to iDRAC pass-through' interface.
Given that node-proxy reaches the Redfish API through this interface,
users can avoid passing that address when providing the host spec
at bootstrap time.
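A minimal sketch of the fallback (the spec field and variable names are
assumptions, not the actual spec handling):

    # 169.254.1.1 is the usual 'OS to iDRAC pass-through' address, so fall
    # back to it when the host spec does not provide one explicitly.
    IDRAC_PASSTHROUGH_ADDR = '169.254.1.1'
    addr = host_spec.get('addr') or IDRAC_PASSTHROUGH_ADDR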
Signed-off-by: Guillaume Abrioux <gabrioux@ibm.com>
Created a gRPC client by utilising the protobuf file available in the
nvmeof repo.
Copied the file to this repo and generated the client code from it.
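A minimal sketch of such a client (the generated module, stub and method
names below are assumptions based on typical protoc output, not the
actual generated code):

    import grpc
    # Hypothetical names; the real modules are generated from the nvmeof
    # .proto file with the protobuf/gRPC toolchain.
    import gateway_pb2
    import gateway_pb2_grpc

    channel = grpc.insecure_channel('192.168.0.10:5500')  # gateway addr:port
    stub = gateway_pb2_grpc.GatewayStub(channel)
    reply = stub.list_subsystems(gateway_pb2.list_subsystems_req())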
Fixes: https://tracker.ceph.com/issues/64201
Signed-off-by: Nizamudeen A <nia@redhat.com>
cephadm configures the nvmeof gateways and adds them to a config
store, which the dashboard will later fetch to make the gRPC calls.
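A minimal sketch of the idea using the mgr key/value store (the key
layout and the address are assumptions for illustration; set_store and
get_store are the MgrModule store helpers):

    # cephadm side: remember where a gateway can be reached.
    self.set_store('nvmeof_gateways/mypool', '192.168.0.10:5500')

    # dashboard side: look the address up before opening the gRPC channel.
    gateway_addr = self.get_store('nvmeof_gateways/mypool')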
Fixes: https://tracker.ceph.com/issues/64201
Signed-off-by: Nizamudeen A <nia@redhat.com>