Run the same procedure as TestClusterFull, but
instead of limiting OSD memstore size, use a pool
quota on the data pool.
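As a rough illustration of the quota mechanism involved
(pool name and size are assumptions, not the test's
values), the data pool can be capped like this:

    import subprocess

    # Cap the data pool so writes hit the pool-full condition instead of
    # relying on a tiny OSD memstore.  Pool name and size are illustrative.
    subprocess.check_call(
        ["ceph", "osd", "pool", "set-quota",
         "cephfs_data", "max_bytes", str(64 * 1024 * 1024)])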
Signed-off-by: John Spray <john.spray@redhat.com>
... s/mon_remote/admin_remote/ and allow the caller to pass
in which remote they want to use for that. Enables use
with the ceph_deploy task, which does not give admin keys
to mons.
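Purely as an illustrative sketch (these are not the real
teuthology classes), the idea is that admin commands run on
whatever remote the caller supplies, rather than always on
a mon:

    class Remote(object):
        """Stand-in for a teuthology remote, only here so the sketch runs."""
        def __init__(self, name):
            self.name = name

        def run(self, args):
            print("%s$ %s" % (self.name, " ".join(args)))

    class FilesystemSketch(object):
        def __init__(self, admin_remote):
            # A ceph_deploy-style caller passes a node that actually holds
            # admin keys; previously this was hardwired to a mon.
            self.admin_remote = admin_remote

        def admin_command(self, *args):
            self.admin_remote.run(["ceph"] + list(args))

    fs = FilesystemSketch(admin_remote=Remote("client.0"))
    fs.admin_command("fs", "ls")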
Signed-off-by: John Spray <john.spray@redhat.com>
This tests the new purge file/ops throttling
in the MDS, via the new perf counters for
strays/purging.
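For reference, a minimal sketch of reading those counters
over the admin socket (the socket path and exact counter
names are assumptions):

    import json
    import subprocess

    def mds_perf_dump(asok="/var/run/ceph/ceph-mds.a.asok"):
        out = subprocess.check_output(
            ["ceph", "--admin-daemon", asok, "perf", "dump"])
        return json.loads(out)

    cache = mds_perf_dump().get("mds_cache", {})
    print("num_strays=%s strays_purged=%s"
          % (cache.get("num_strays"), cache.get("strays_purged")))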
Fixes: #10390
Signed-off-by: John Spray <john.spray@redhat.com>
...as long as only one is active, all the ops
that default to talking to a single MDS should
be happy to talk to the active MDS, even if there
happens to be a standby lying around too.
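A hedged sketch of picking that single active MDS out of
the map (the JSON layout assumed here is an assumption):

    import json
    import subprocess

    def get_active_mds():
        out = subprocess.check_output(["ceph", "mds", "dump", "--format=json"])
        info = json.loads(out).get("info", {})
        active = [i["name"] for i in info.values()
                  if i.get("state") == "up:active"]
        # These ops only make sense while exactly one MDS is active.
        assert len(active) == 1
        return active[0]

    print(get_active_mds())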
Signed-off-by: John Spray <john.spray@redhat.com>
Where multiple MDSs were on the same node, trying
to concurrently update their firewall state was
causing an exception because the iptables command
errors out if another instance is already running.
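One generic way to avoid that race is to serialize the
iptables calls; a sketch (not the task's actual fix):

    import subprocess
    import threading

    _iptables_lock = threading.Lock()

    def reject_port(port):
        # Only one iptables invocation at a time on this node; a concurrent
        # run would fail while another instance holds iptables' own lock.
        with _iptables_lock:
            subprocess.check_call(
                ["sudo", "iptables", "-A", "INPUT", "-p", "tcp",
                 "--dport", str(port), "-j", "REJECT"])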
Fixes: #10948
Signed-off-by: John Spray <john.spray@redhat.com>
This tests the new #9883 repair functionality
where we selectively scrape dentries out of
the journal while the MDS is offline.
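The offline flow being exercised looks roughly like this
(MDS daemons stopped beforehand; consult the tool's help
for the full event-filter syntax):

    import subprocess

    # With the MDS stopped, replay dentries from the journal back into the
    # backing objects, then summarise what was recovered.
    subprocess.check_call(
        ["cephfs-journal-tool", "event", "recover_dentries", "summary"])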
Signed-off-by: John Spray <john.spray@redhat.com>
This was only used in get_first_mon, which doesn't actually
need the parameter itself. This makes it easier to use
Filesystem casually from any place with a ctx to hand.
Signed-off-by: John Spray <john.spray@redhat.com>
Instead of blocking the whole port range (which
might make OSDs running on that node collateral
damage), read the MDS's port out of the MDS map
and just block that.
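A sketch of the idea (the JSON layout and the choice of
REJECT are assumptions):

    import json
    import subprocess

    def block_mds(mds_name):
        out = subprocess.check_output(["ceph", "mds", "dump", "--format=json"])
        info = json.loads(out).get("info", {})
        addr = next(i["addr"] for i in info.values() if i["name"] == mds_name)
        port = addr.split(":")[1].split("/")[0]   # e.g. "10.0.0.1:6810/1234"
        # Reject only the MDS's own port; other daemons on the node keep
        # their traffic.
        subprocess.check_call(
            ["sudo", "iptables", "-A", "INPUT", "-p", "tcp",
             "--dport", port, "-j", "REJECT"])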
Signed-off-by: John Spray <john.spray@redhat.com>
This tests:
* The new 'flush journal' asok command
* That the resulting on disk structures are as expected
* That cephfs-journal-tool is happy with the result
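A rough sketch of that sequence (the mds id and socket path
are assumptions):

    import subprocess

    asok = "/var/run/ceph/ceph-mds.a.asok"
    # The new asok command under test: write everything back out of the journal.
    subprocess.check_call(["ceph", "--admin-daemon", asok, "flush", "journal"])
    # The flushed journal should still satisfy the tool's consistency check.
    subprocess.check_call(["cephfs-journal-tool", "journal", "inspect"])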
Fixes: #9881
Signed-off-by: John Spray <john.spray@redhat.com>
Previously we were always using the default values of
things, so querying the mon instead of the appropriate
service worked fine. However, for settings we might want to
update on a per-test basis, we need to go ask the
correct service what the setting really is.
Needed for osd_mon_report_interval_max in the ENOSPC
testing.
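For example, asking the OSD itself over its admin socket
(the socket path is an assumption):

    import json
    import subprocess

    def osd_config_get(key, asok="/var/run/ceph/ceph-osd.0.asok"):
        out = subprocess.check_output(
            ["ceph", "--admin-daemon", asok, "config", "get", key])
        return json.loads(out)[key]

    print(osd_config_get("osd_mon_report_interval_max"))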
Signed-off-by: John Spray <john.spray@redhat.com>
Leave the legacy handling out in cephfs_setup, and move
the filesystem creation stuff into Filesystem. I
anticipate this being the right place for it if/when
we have tests that want to do 'fs rm'/'fs new'
cycles within themselves.
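A sketch of the kind of cycle such a test might drive
(names are assumptions; all MDS daemons must be down
before 'fs rm' succeeds):

    import subprocess

    def recreate_fs(name="cephfs",
                    metadata_pool="cephfs_metadata", data_pool="cephfs_data"):
        subprocess.check_call(
            ["ceph", "fs", "rm", name, "--yes-i-really-mean-it"])
        subprocess.check_call(
            ["ceph", "fs", "new", name, metadata_pool, data_pool])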
Signed-off-by: John Spray <john.spray@redhat.com>
This was tripping over the recent commit 42c85e80
in Ceph master, which tightens the limits on
acceptable PG counts per OSD, and was making
teuthology runs fail due to never going clean.
Rather than put in a new hardcoded count, infer
it from config. Move some code around so that
the ceph task can get at a Filesystem object
to use in FS setup (which already has conf-getting
methods).
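A sketch of the inference (the option name, socket path
and exact formula are assumptions):

    import json
    import subprocess

    def infer_pg_num(num_osds, num_pools,
                     asok="/var/run/ceph/ceph-mon.a.asok"):
        out = subprocess.check_output(
            ["ceph", "--admin-daemon", asok,
             "config", "get", "mon_pg_warn_max_per_osd"])
        max_per_osd = int(json.loads(out)["mon_pg_warn_max_per_osd"])
        # Stay comfortably below the per-OSD ceiling across all pools.
        return max(8, (max_per_osd * num_osds) // (num_pools * 2))

    print(infer_pg_num(num_osds=3, num_pools=2))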
Signed-off-by: John Spray <john.spray@redhat.com>