mgr/rbd_support: cast pool_id from int to str when collecting LevelSpec
Reviewed-by: Mykola Golub <mgolub@suse.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Invoke "rbd mirror snapshot schedule ls -R" and "rbd mirror snapshot
schedule status" commands on all levels, consistently. In particular,
make sure that an image level schedule is listed for a recursive query
at the pool level both before and after the schedule kicks in:
$ rbd create --size 1G --mirror-image-mode snapshot -p foo bar
$ rbd mirror snapshot schedule add -p foo --image bar 1m
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
<wait for schedule to become visible in status>
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
Also, make sure that pool and image level status queries work:
$ rbd mirror snapshot schedule status -p foo
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
$ rbd mirror snapshot schedule status -p foo --image bar
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
Both of these issues are fixed by the previous commit.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
mon/ConfigMonitor: fix config get key with whitespace
Reviewed-by: Ronen Friedman <rfriedma@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Commit fea6fdff4c ("mgr/rbd_support: level_spec passed to some
commands is not optional") is wrong. While it is true that a valid
level_spec is needed to create a LevelSpec instance, an empty string
is very much a valid level spec -- it signifies "all levels".
This wasn't caught because within Ceph these commands are wrapped by
the rbd CLI, which injects an empty string in get_level_spec_args().
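
As a rough illustration (not the actual mgr/rbd_support code; the class
shape and helper names below are assumptions), an optional level_spec can
default to the empty string meaning "all levels", and integer pool IDs can
be cast to str so they compare against keys built from string level specs:

from typing import NamedTuple, Optional


class LevelSpec(NamedTuple):
    pool_id: Optional[str]    # kept as str even though RADOS reports pool IDs as int
    namespace: Optional[str]
    image_id: Optional[str]

    def is_global(self):
        # an empty level spec ("") selects all levels
        return self.pool_id is None


def make_level_spec(pool_id=None, namespace=None, image_id=None):
    # cast the integer pool ID to str so it matches keys collected from
    # "<pool>[/<namespace>[/<image>]]" strings
    return LevelSpec(str(pool_id) if pool_id is not None else None,
                     namespace, image_id)


def parse_level_spec(level_spec=""):
    # level_spec is optional: "" is valid and means "all levels"
    if level_spec == "":
        return LevelSpec(None, None, None)
    return make_level_spec(*level_spec.split("/"))


print(parse_level_spec(""))     # LevelSpec(None, None, None) -> all levels
print(parse_level_spec("foo"))  # pool-level spec for pool "foo"
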
Fixes: https://tracker.ceph.com/issues/54058
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
With dynamic bucket index resharding, when the average number of
objects per shard exceeds the configured value, that bucket is
scheduled for reshard. That bucket may receive more new objects before
the resharding takes place. As a result, the existing code
re-calculates the number of new shards just prior to resharding,
rather than waste a resharding opportunity with too low a value.
The same holds true for a user-scheduled resharding.
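
For illustration only (the constant and helper below are stand-ins, not
the actual RGW code), the shard count recorded at schedule time can differ
from the one recomputed just before resharding once more objects arrive:

import math

MAX_OBJS_PER_SHARD = 100000   # stand-in for the rgw_max_objs_per_shard setting


def shard_count(num_objects):
    # round up so the average objects-per-shard stays at or below the cap
    return max(1, math.ceil(num_objects / MAX_OBJS_PER_SHARD))


print(shard_count(750000))    # 8  -- the "tentative" value shown by reshard list
print(shard_count(1200000))   # 12 -- recalculated just prior to resharding,
                              #       after the bucket kept growing
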
A user reported confusion that the number reported in `radosgw-admin
reshard list` wasn't the number that the reshard operation ultimately
used. This commit makes it clear that the new number of shards is
"tentative". And test_rgw_reshard.py is updated to reflect this
altered output.
Additionally, this commit modernizes the "reshard list" subcommand and
makes it more efficient.
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
Update the workunits/mon/config.sh script to include set/get/rm commands with and without whitespace
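
A hedged sketch of the kind of round-trip the workunit exercises (the
option used here is an illustrative choice, not necessarily the one the
script tests):

import subprocess


def ceph(*args):
    return subprocess.check_output(("ceph",) + args, text=True).strip()


# set an option whose value contains whitespace, read it back, then remove it
value = "root=default host=myhost"
ceph("config", "set", "osd", "crush_location", value)
assert ceph("config", "get", "osd", "crush_location") == value
ceph("config", "rm", "osd", "crush_location")
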
Fixes: https://tracker.ceph.com/issues/44092
Signed-off-by: Nitzan Mordechai <nmordech@redhat.com>
* refs/pull/44054/head:
doc/rados/operations: document pg_num_max
mgr: set max of 32 pgs for .mgr pool
mgr/dashboard: expect pg_num_max property for pools
mon/OSDMonitor: add option --pg-num_max arg for create pool
mon/OSDMonitor: disallow setting pg_num < min or > max
mgr/pg_autoscaler: apply pg_num_max
mon: add pg_num_max pool property
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
I removed the `02-hosts-inventory.e2e` file because it duplicates one of
the tests in the `01-hosts.e2e` file, and I fixed the error in that file.
Also, in the inventory Identify test, we test for an element not being
visible. According to the latest Cypress docs, this should be `not.exist`
instead of `not.visible`, since the cd-modal will not even be present in
the DOM.
Fixes: https://tracker.ceph.com/issues/53499
Signed-off-by: Nizamudeen A <nia@redhat.com>
Set and unset the noautoscale flag and evaluate whether the results are
what we expect. Also evaluate whether the flag is correct when we create
new pools.
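
A minimal sketch of that flow, assuming the `osd pool set/unset/get
noautoscale` CLI spellings (the real test is a shell workunit and its
assertions differ):

import subprocess


def ceph(*args):
    return subprocess.check_output(("ceph",) + args, text=True).strip()


ceph("osd", "pool", "set", "noautoscale")         # disable autoscaling globally
print(ceph("osd", "pool", "get", "noautoscale"))  # flag should read back as set

# pools created while the flag is set should not be autoscaled either
ceph("osd", "pool", "create", "noautoscale_test_pool")
print(ceph("osd", "pool", "autoscale-status"))

ceph("osd", "pool", "unset", "noautoscale")       # restore the default
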
Signed-off-by: Kamoltat <ksirivad@redhat.com>
The pg_autoscaler module will now start out all pools with a scale-up
profile by default.
Added tests in workunits/mon/pg_autoscaler.sh to evaluate whether pools
are created with the scale-up profile by default.
Updated the documentation and release notes to reflect the change in the
default behavior of the pg_autoscaler profile.
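
As an illustration (assuming the `osd pool autoscale-status` output of
that release includes a PROFILE column; the real workunit's checks
differ), the default profile of a newly created pool can be verified like
this:

import subprocess


def ceph(*args):
    return subprocess.check_output(("ceph",) + args, text=True).strip()


ceph("osd", "pool", "create", "profile_test_pool")
status = ceph("osd", "pool", "autoscale-status")
row = next(line for line in status.splitlines() if "profile_test_pool" in line)
assert "scale-up" in row, f"expected scale-up profile by default, got: {row}"
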
Fixes: https://tracker.ceph.com/issues/53309
Signed-off-by: Kamoltat <ksirivad@redhat.com>
1. add a delay after pool creation
2. fix output checks to account for the changed output format
3. fix a wrong variable in the chunk-repair test
Signed-off-by: Myoungwon Oh <myoungwon.oh@samsung.com>
Currently,
# ceph orch ls -h
...
orch ls [<service_type>] [<service_name>] [--export] [-- List services known to orchestrator
format {plain|json|json-pretty|yaml}] [--refresh]
# ceph orch ls osd -h
... nothing ...
because the CLI is provided more arguments than the command prefix. Make
-h drop right-hand args until we get at least one prefix match. This
means we can have a partial command written with some args and add -h to
get a usage for that command.
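
Roughly, the idea looks like this (illustrative sketch only, not the
actual mon/mgr command dispatch):

def usage_candidates(args, known_prefixes):
    # drop right-hand args until at least one command prefix matches, so a
    # partial command plus trailing arguments still yields a usable usage
    words = list(args)
    while words:
        prefix = " ".join(words)
        matches = [p for p in known_prefixes if p.startswith(prefix)]
        if matches:
            return matches
        words.pop()   # discard the rightmost argument and retry
    return list(known_prefixes)


print(usage_candidates(["orch", "ls", "osd"], ["orch ls", "orch ps"]))
# -> ['orch ls'], so "ceph orch ls osd -h" prints the usage for "orch ls"
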
Signed-off-by: Sage Weil <sage@newdream.net>
mgr/dashboard: Move force maintenance test to the workflow test suite
Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
Reviewed-by: Pere Diaz Bou <pdiazbou@redhat.com>
Fixes a lexical error in one line of code added on 8/16/2021 in
90e9307ab0, the commit that removed the dependency on lsb_release.
Fixes: https://tracker.ceph.com/issues/52613
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
The lsb_release utility brings in a lot of other dependencies. Remove
it from the RGW workunit Perl scripts.
Signed-off-by: Ken Dreyer <kdreyer@redhat.com>
In the LoadRequest in the ImageMap class, add an initial cleanup to
remove stale entries. To clean up, the LoadRequest will query the mirror
image list and remove all image_map entries that are not in the list.
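
A sketch of the cleanup idea in Python (the actual LoadRequest is C++ in
rbd-mirror; the names below are illustrative, not the real interfaces):

def remove_stale_image_map_entries(image_map, mirror_image_ids):
    # drop image_map entries whose global image id no longer appears in
    # the mirror image list queried at load time
    for global_image_id in set(image_map) - set(mirror_image_ids):
        del image_map[global_image_id]
    return image_map


loaded = {"gid-1": {"instance": "a"}, "gid-2": {"instance": "b"}}
print(remove_stale_image_map_entries(loaded, {"gid-1"}))   # keeps only gid-1
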
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>
This makes sure that all images are deleted in the existing qa scripts
and checks whether all rbd-mirror metadata in OMAP is correctly deleted.
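
A hedged sketch of the kind of check added (the object names below are
assumptions about where rbd-mirror keeps OMAP metadata; the real script's
checks differ):

import subprocess


def listomapkeys(pool, obj):
    out = subprocess.run(["rados", "-p", pool, "listomapkeys", obj],
                         capture_output=True, text=True)
    return out.stdout.split() if out.returncode == 0 else []


for obj in ("rbd_mirroring", "rbd_mirror_leader"):   # assumed object names
    leftover = listomapkeys("mypool", obj)
    assert not leftover, f"stale rbd-mirror OMAP keys in {obj}: {leftover}"
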
Signed-off-by: Arthur Outhenin-Chalandre <arthur.outhenin-chalandre@cern.ch>