In 076a6abd80 I killed the 'class rename' command, thinking it was
totally useless, but I was wrong. Consider the following use case:
(1) randomly choose some OSDs (e.g., from different hosts) and reserve them for private use only,
say, by grouping them into 'pool1'
(2) ceph osd crush set-device-class pool1 'OSDs from (1)'
(3) ceph osd crush rule create-replicated rule_for_pool1 default host pool1
(4) ceph osd pool rename pool1 pool2
(5) ceph osd crush class rename pool1 pool2
As the above use case shows, we need a way to safely change a pool name
without worrying about any risk of data migration. That is why the
'osd crush class rename' command is still needed here.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
trigger_scrub() sets the last_scrub_stamp backwards to
force a scheduled scrub. In a small window this stamp could get propagated
to the mgr. A test failure occurred because wait_for_scrub() was confused
by seeing a backward-moving date.
The most critical change is having wait_for_scrub() make sure that the
date advances past the previous value.
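A minimal sketch of that check, assuming a callable that returns the
pg's current last_scrub_stamp as a comparable value; the names and
signature below are illustrative, not the actual helper's:

    import time

    def wait_for_scrub(get_last_scrub_stamp, previous_stamp, timeout=300):
        # Wait until the reported stamp moves strictly past the value
        # recorded before the scrub was triggered, so the backdated
        # stamp set by trigger_scrub() is never mistaken for a
        # completed scrub.
        deadline = time.time() + timeout
        while time.time() < deadline:
            stamp = get_last_scrub_stamp()
            if stamp > previous_stamp:
                return stamp
            time.sleep(1)
        raise RuntimeError("timed out waiting for scrub stamp to advance")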
A test failed because the random backoff kept delaying the triggered
scrub, so set osd_scrub_backoff throughout.
Signed-off-by: David Zafman <dzafman@redhat.com>
Add missing teardown to clean up the test directory
Fix pgid due to the elimination of the initial default pool
Testing could never fail because the return value of run_tests was ignored
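A minimal sketch of the return-value fix, with a stub run_tests
standing in for the real test runner:

    import sys

    def run_tests():
        # Stub standing in for the real test runner; returns True on
        # success, False on failure.
        return True

    if __name__ == "__main__":
        # Exit non-zero when the tests fail instead of ignoring the
        # result, so the job can actually report failure.
        sys.exit(0 if run_tests() else 1)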
Signed-off-by: David Zafman <dzafman@redhat.com>
We can't kill and restart osds because that will interfere with
the upgrade process. We can, however, thrash the layout by
tweaking osd weights and so on. This will exercise osd recovery
paths during the upgrade that aren't normally exercised (outside
of stress-split, which doesn't upgrade individual osds while they
are non-clean).
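A minimal sketch of the weight-thrashing idea, assuming a list of
numeric OSD ids; the helper name and the interval/weight ranges are
illustrative, not the actual task's values:

    import random
    import subprocess
    import time

    def thrash_osd_weights(osd_ids, duration=300, interval=30):
        # Nudge OSD weights up and down to force data movement and
        # exercise recovery paths, without killing any OSD while the
        # upgrade is in progress.
        end = time.time() + duration
        while time.time() < end:
            osd = random.choice(osd_ids)
            weight = random.uniform(0.5, 1.0)
            subprocess.check_call(
                ["ceph", "osd", "reweight", str(osd), "%.2f" % weight])
            time.sleep(interval)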
Signed-off-by: Sage Weil <sage@redhat.com>
The previous method to get the watcher admin socket was fragile
and had started to fail after the recent changes to vstart ceph.conf.
Fixes: http://tracker.ceph.com/issues/20954
Signed-off-by: Mykola Golub <mgolub@mirantis.com>
Rearrange logic to make it easier to measure accumulation.
Instrument the boto request/response loop to count bytes in and out
(a sketch of the counting idea follows below).
Accumulate byte counts in a usage-like structure.
Compare the actual usage reported by ceph against the locally measured usage.
Report and assert if there are any shortcomings.
Remove the zone placement rule that was newly added at the end: tests should be rerunnable.
Nit: the logic to wait for "delete_obj" is not quite right.
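A minimal sketch of the byte counting; the class and hook names below
(UsageCounter, count_request, count_response) are hypothetical
stand-ins, not the actual hooks patched into boto:

    class UsageCounter:
        # Usage-like accumulator for bytes sent and received.
        def __init__(self):
            self.bytes_sent = 0
            self.bytes_received = 0

        def count_request(self, body):
            # Called with each outgoing request body.
            self.bytes_sent += len(body or b"")

        def count_response(self, body):
            # Called with each incoming response body.
            self.bytes_received += len(body or b"")

    def assert_usage_covers(counter, reported_sent, reported_received):
        # The usage reported by ceph must account for at least the
        # bytes we measured locally; assert on any shortfall.
        assert reported_sent >= counter.bytes_sent
        assert reported_received >= counter.bytes_received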
Fixes: http://tracker.ceph.com/issues/19870
Signed-off-by: Marcus Watts <mwatts@redhat.com>
* refs/remotes/upstream/pull/16378/head:
doc: remove accidental additions to release notes
qa/cephfs: Fix race in test_volume_client
qa/cephfs: Test filtered df
PendingReleaseNotes: add note about df filtering
client: Support new, filtered MStatfs
objecter: Support new, filtered MStatfs
mon/PGMap stats: Support new, filtered MStatfs
messages: Add optional data pool to MStatfs
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
When the luminous flag is not yet set (legacy pgmonitor mode), the OSD
won't flush more pg/osd stats unless it sees IO.
Signed-off-by: Sage Weil <sage@redhat.com>
rbd-ggate spawns a process responsible for creating the ggate
device and forwarding I/O requests between the GEOM Gate kernel
subsystem and RADOS.
On FreeBSD it provides functionality similar to rbd-nbd on Linux.
Signed-off-by: Mykola Golub <mgolub@mirantis.com>