When multiple MDSs were on the same node, trying to
concurrently update their firewall state caused an
exception, because the iptables command errors out if
another instance is already running.
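A minimal sketch of the kind of serialization needed here,
assuming a single process drives both updates (the lock and
helper names are illustrative, not the actual teuthology code):

    import subprocess
    import threading

    # iptables refuses to run while another instance holds its
    # lock, so serialize our own invocations behind one mutex.
    _iptables_lock = threading.Lock()

    def block_port(port):
        # Illustrative helper; the real code runs this on the remote node.
        with _iptables_lock:
            subprocess.check_call([
                "sudo", "iptables", "-A", "INPUT",
                "-p", "tcp", "--dport", str(port), "-j", "REJECT",
            ])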
Fixes: #10948
Signed-off-by: John Spray <john.spray@redhat.com>
teuthology helpfully escapes arguments for us, so the
\; didn't need the backslash. The logic was also
falling over in some other cases.
Additionally, make the FUSE /sys/ abort operation
more surgical by working out the connection name
of our own mount during mount().
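A rough sketch of the "surgical" approach, assuming the usual
fusectl layout under /sys/fs/fuse/connections (the flow and
names here are illustrative):

    import os

    FUSE_CONN_DIR = "/sys/fs/fuse/connections"

    def list_connections():
        return set(os.listdir(FUSE_CONN_DIR))

    # Snapshot the connections before mounting; after ceph-fuse
    # comes up, the new entry is our own mount, so a later abort
    # only needs to poke that one connection's 'abort' file.
    pre = list_connections()
    # ... start ceph-fuse and wait for the mount here ...
    new = list_connections() - pre
    if len(new) == 1:
        abort_path = os.path.join(FUSE_CONN_DIR, new.pop(), "abort")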
Signed-off-by: John Spray <john.spray@redhat.com>
This tests the new #9883 repair functionality
where we selectively scrape dentries out of
the journal while the MDS is offline.
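Loosely, the offline repair step being exercised looks like
this (the exact cephfs-journal-tool invocation is illustrative,
not necessarily the test's literal command line):

    from subprocess import check_call

    # With the MDS stopped, pull recoverable dentries out of the
    # journal and write them back into the metadata pool.
    check_call(["cephfs-journal-tool", "event", "recover_dentries",
                "summary"])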
Signed-off-by: John Spray <john.spray@redhat.com>
This was only used in get_first_mon, which doesn't actually
need the parameter itself. Makes it easier to casually
use Filesystem from any place with a ctx to hand.
Signed-off-by: John Spray <john.spray@redhat.com>
When unused clients were left mounted during an fs new,
they would end up in a state where subsequent attempts
to umount them stalled (ceph-fuse stalls on exit if it
can't terminate its mds_session).
Signed-off-by: John Spray <john.spray@redhat.com>
Instead of blocking the whole port range (which
might make OSDs running on that node collateral
damage), read the MDS's port out of the MDS map
and just block that.
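Roughly the idea, sketched in Python; the map fields and
commands are from memory and may not match the test exactly:

    import json
    from subprocess import check_call, check_output

    # Find the MDS's advertised address in the MDS map and reject
    # only that port, leaving the OSDs on the node untouched.
    mdsmap = json.loads(check_output(
        ["ceph", "mds", "dump", "--format=json"]))
    addr = list(mdsmap["info"].values())[0]["addr"]  # "1.2.3.4:6804/123"
    port = int(addr.split(":")[1].split("/")[0])
    check_call(["sudo", "iptables", "-A", "INPUT", "-p", "tcp",
                "--dport", str(port), "-j", "REJECT"])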
Signed-off-by: John Spray <john.spray@redhat.com>
Now that we have more of these cases, there was lots
of duplication in setup and teardown. For some tests
the "reset everything" setup/teardown is overkill,
but it's harmless.
Signed-off-by: John Spray <john.spray@redhat.com>
This tests:
* The new 'flush journal' asok command
* That the resulting on-disk structures are as expected
* That cephfs-journal-tool is happy with the result
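A sketch of the shape of that sequence (daemon name and
commands here are examples, not the test's literal steps):

    from subprocess import check_call, check_output

    # Flush the journal through the admin socket, then let
    # cephfs-journal-tool sanity-check what landed on disk.
    check_output(["ceph", "daemon", "mds.a", "flush", "journal"])
    check_call(["cephfs-journal-tool", "journal", "inspect"])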
Fixes: #9881
Signed-off-by: John Spray <john.spray@redhat.com>
Previously we were always using the default values, so
querying the mon instead of the appropriate service
worked fine. However, for settings we might want to
update on a per-test basis, we need to ask the correct
service what the value really is.
Needed for osd_mon_report_interval_max in the ENOSPC
testing.
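For example, asking the daemon that actually owns the setting
over its admin socket (the daemon name is a placeholder):

    import json
    from subprocess import check_output

    out = check_output(["ceph", "daemon", "osd.0", "config", "get",
                        "osd_mon_report_interval_max"])
    value = json.loads(out)["osd_mon_report_interval_max"]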
Signed-off-by: John Spray <john.spray@redhat.com>
Leave the legacy handling in cephfs_setup, and move the
filesystem creation stuff into Filesystem. I anticipate
this being the right place for it if/when we have tests
that want to do 'fs rm'/'fs new' cycles within
themselves.
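A loose sketch of the kind of cycle such a test might drive
(filesystem and pool names are placeholders, and 'fs rm'
requires the MDSs to be stopped first):

    from subprocess import check_call

    check_call(["ceph", "fs", "rm", "cephfs", "--yes-i-really-mean-it"])
    check_call(["ceph", "fs", "new", "cephfs", "metadata", "data"])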
Signed-off-by: John Spray <john.spray@redhat.com>
This was tripping over the recent commit 42c85e80
in Ceph master, which tightens the limits on
acceptable PG counts per OSD, and was making
teuthology runs fail due to never going clean.
Rather than put in a new hardcoded count, infer
it from config. Move some code around so that
the ceph task can get at a Filesystem object
to use in FS setup (this already has conf-getting
methods).
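Something like the following, as a hypothetical illustration;
the helper name and formula are assumptions rather than the
actual change:

    # Derive a pool's pg_num from the per-OSD warning threshold
    # instead of hardcoding a count.
    def choose_pg_num(max_pgs_per_osd, num_osds, num_pools,
                      replica_count=3):
        per_pool = (max_pgs_per_osd * num_osds) // (num_pools * replica_count)
        return max(8, per_pool)

    pg_num = choose_pg_num(max_pgs_per_osd=300, num_osds=3, num_pools=2)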
Signed-off-by: John Spray <john.spray@redhat.com>
New CephFS tests for the MDS's auto-repair functions. (So far
the only test case is verifying/repairing the backtrace when
fetching a dirfrag.)
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Old versions of libfuse treat both flock and POSIX lock
requests as POSIX lock requests. This is a workaround for
that bug.
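For reference, the two kinds of locks that old libfuse
conflates (plain Python, unrelated to the test code itself):

    import fcntl

    with open("/tmp/lock_demo", "w") as f:
        # BSD-style flock: one lock per open file description.
        fcntl.flock(f, fcntl.LOCK_EX)
        fcntl.flock(f, fcntl.LOCK_UN)

        # POSIX record lock: byte-range lock owned by the process.
        fcntl.lockf(f, fcntl.LOCK_EX, 0)
        fcntl.lockf(f, fcntl.LOCK_UN, 0)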
Fixes: #9995
Signed-off-by: Yan, Zheng <zyan@redhat.com>
'client_id' was ambiguous because in other places it
meant the '0' in client.0, whereas here it means
the runtime-generated global ID of the client.
Signed-off-by: John Spray <john.spray@redhat.com>
Some of this stuff could be even more general for embedding
unittest-style suites, but for the moment let's keep the cephfs
stuff in a walled garden.
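For context, running unittest-style cases programmatically
looks roughly like this generic Python (the test class is a
placeholder, not part of the cephfs code):

    import unittest

    class TestExample(unittest.TestCase):
        def test_something(self):
            self.assertTrue(True)

    suite = unittest.TestLoader().loadTestsFromTestCase(TestExample)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    if not result.wasSuccessful():
        raise RuntimeError("suite failed")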
Signed-off-by: John Spray <john.spray@redhat.com>