Modify test_activate_osd() to get the type of scheduler in use and then
verify the value of osd_max_backfills. This is needed because the mclock
scheduler overrides this option to 1000 upon OSD initialization.
The test used to pass earlier because the OSD daemon was killed but not
marked down, and upon being brought up the wait-for-OSD-up check passed
quickly even though the OSD still didn't have the latest config values.
Now, upon killing the OSD, the osd_fast_shutdown sequence notifies the
mon (see PR: https://github.com/ceph/ceph/pull/44807) and the OSD is
marked down and dead. Upon bringing it up, the wait-for-OSD-up check
takes longer, which is sufficient for the config values to be updated.
This results in the correct values being read from the config 'Values'
map.
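For illustration (not the actual test code; osd.0 and the values shown
are only examples), the check boils down to something like:
# Which scheduler is the OSD running with?
$ ceph daemon osd.0 config get osd_op_queue
{
    "osd_op_queue": "mclock_scheduler"
}
# With mclock, expect the overridden value of 1000; otherwise expect
# the configured/default osd_max_backfills value.
$ ceph daemon osd.0 config get osd_max_backfills
{
    "osd_max_backfills": "1000"
}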
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
The default pids-limit (4096 for docker, 2048 for podman) prevents some
customizations from working (e.g. raising the number of HTTP threads on
RGW) or limits the number of LUNs per iSCSI target.
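For context, this is the knob in question; -1 removes the limit on both
engines (0 on some podman versions), and the command lines below are
illustrative rather than what cephadm emits verbatim:
$ docker run --pids-limit=-1 <image>
$ podman run --pids-limit=-1 <image>
# The effective limit can be checked from inside a container (cgroup v2):
$ cat /sys/fs/cgroup/pids.max
max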
Fixes: https://tracker.ceph.com/issues/52898
Signed-off-by: Teoman ONAY <tonay@redhat.com>
Commit 08df6e0fd0 ("qa/workunits/rbd: expand LevelSpec parsing
coverage") didn't account for images with a separate data pool. This
was missed because of small-cache-pool.yaml breakage.
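For reference, such an image is created with --data-pool (pool and
image names below are made up, assuming both pools already exist and
are initialized for rbd):
$ rbd create --size 1G --data-pool datapool rbdpool/img1
$ rbd info rbdpool/img1 | grep data_pool
        data_pool: datapool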
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Add a test to verify that the NFS servers don't restart when the
access type of a CephFS NFS export is updated, and check that the NFS
servers are restarted when the pseudo path of a CephFS NFS export is
updated.
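Roughly, the scenario looks like this (cluster and export names are
made up, and the exact CLI syntax varies between releases):
# Create a CephFS export, then update only its access type (RW -> RO)
# via an export spec; the nfs-ganesha daemons should not be restarted.
$ ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs --fsname a
$ ceph nfs export apply mynfs -i export_rw_to_ro.json
# Updating the pseudo path of the export, however, should restart them.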
Signed-off-by: Ramana Raja <rraja@redhat.com>
Add the snaptrim duration to the json formatted output of the pg dump
stats. Define methods for a PG to set the snaptrim begin time and then to
calculate the total time spent to trim all the objects for the snaps in
the snap_trimq for the PG.
Tests:
- Librados C and C++ API tests to verify the time spent for a snaptrim
operation on a PG. These tests use the self-managed snaps APIs.
- Standalone tests to verify snaptrim duration using rados pool snaps.
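For example, the new field can be pulled from the dump like this (jq is
used for brevity; the exact JSON layout may differ between releases):
$ ceph pg dump --format=json 2>/dev/null | \
      jq '.pg_map.pg_stats[] | {pgid, snaptrim_duration}'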
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Add a new column, OBJECTS_TRIMMED, to the pg dump stats that shows the
number of objects trimmed when a snap is removed.
When a pg splits, the stats from the parent pg are copied to the child
pg. In such a case, reset objects_trimmed to 0 for the child pg
(see PeeringState::split_into()). Otherwise, this would result in
incorrect stats being shown for a child pg after the split operation.
Tests:
- Librados C and C++ API tests to verify the number of objects trimmed
during snaptrim operation. These tests use the self-managed snaps APIs.
- Standalone tests to verify objects trimmed using rados pool snaps.
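A rough way to exercise this with rados pool snaps and then look at the
counter (pool and object names are made up; the exact JSON layout may
differ between releases):
$ ceph osd pool create testpool
$ rados -p testpool put obj1 /etc/hosts
$ rados -p testpool mksnap snap1
$ rados -p testpool rm obj1        # the object now exists only in the snap
$ rados -p testpool rmsnap snap1   # removing the snap queues snaptrim
$ ceph pg dump --format=json 2>/dev/null | \
      jq '.pg_map.pg_stats[] | select(.objects_trimmed > 0) | {pgid, objects_trimmed}'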
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
mgr/rbd_support: cast pool_id from int to str when collecting LevelSpec
Reviewed-by: Mykola Golub <mgolub@suse.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Invoke "rbd mirror snapshot schedule ls -R" and "rbd mirror snapshot
schedule status" commands on all levels, consistently. In particular,
make sure that an image level schedule is listed for a recursive query
at the pool level both before and after the schedule kicks in:
$ rbd create --size 1G --mirror-image-mode snapshot -p foo bar
$ rbd mirror snapshot schedule add -p foo --image bar 1m
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
<wait for schedule to become visible in status>
$ rbd mirror snapshot schedule ls -p foo -R
POOL  NAMESPACE  IMAGE  SCHEDULE
foo              bar    every 1m
Also, make sure that pool and image level status queries work:
$ rbd mirror snapshot schedule status -p foo
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
$ rbd mirror snapshot schedule status -p foo --image bar
SCHEDULE TIME        IMAGE
2022-03-04 07:14:00  foo/bar
Both of these issues are fixed by the previous commit.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Test the commands:
`osd pool create <pool> --target_size_ratio <float>`
`osd pool set <pool> target_size_ratio <float>`
`osd pool get <pool> target_size_ratio`
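For example (pool name and ratio values are arbitrary):
$ ceph osd pool create foo --target_size_ratio 0.2
$ ceph osd pool get foo target_size_ratio
target_size_ratio: 0.2
$ ceph osd pool set foo target_size_ratio 0.5
$ ceph osd pool get foo target_size_ratio
target_size_ratio: 0.5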
Signed-off-by: Kamoltat <ksirivad@redhat.com>
Fix the expected log message to match the scrub code, by removing
the redundant part.
Fixes: https://tracker.ceph.com/issues/54458
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
Added mds daemons so that the test can create CephFS pools and set
options using `do_set_pool()` in FSCommands.cc, which lets us cover
corner cases like the one in
https://tracker.ceph.com/issues/54263
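For context, one CLI sequence that should exercise that pool-option
path when a filesystem is created looks roughly like (names are
arbitrary):
$ ceph osd pool create cephfs_metadata
$ ceph osd pool create cephfs_data
$ ceph fs new cephfs cephfs_metadata cephfs_data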
Signed-off-by: Kamoltat <ksirivad@redhat.com>
mon/ConfigMonitor: fix config get key with whitespace
Reviewed-by: Ronen Friedman <rfriedma@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
A new job that doesn't want ms_mode to be set underneath it is about to
be added. Rename rxbounce to ms_modeless to make this purpose obvious.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Test that `ceph fs perf stats` doesn't output stale metrics
after the rank0 MDS failover.
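A hedged sketch of the scenario (assuming a single filesystem whose
rank 0 can be failed with `ceph mds fail`):
$ ceph fs perf stats    # baseline metrics from the current rank0 MDS
$ ceph mds fail 0       # fail rank0 so a standby takes over
$ ceph fs perf stats    # must not keep reporting stale pre-failover metrics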
Fixes: https://tracker.ceph.com/issues/50033
Signed-off-by: Jos Collin <jcollin@redhat.com>