* refs/pull/21885/head:
qa: update cluster log health warning message
qa: add tests for client features
mds: evict clients that lack required features
mds: cleanup MDSRank::evict_client
mds: infer client version by client metadata and connection's features
mds: introduce "ceph fs set <fs_name> min_compat_client <release_name>"
mds: tell client why it's rejected
mds: introduce cephfs' own feature bits
mds: make Server::prepare_force_open_sessions() update client metadata
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
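For reference, the new command introduced above can be invoked like so (the fs name and release are illustrative, not taken from the commits):

    ceph fs set cephfs min_compat_client luminous   # sessions from clients older than luminous are rejected/evicted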
* refs/pull/22740/head:
qa: create common conf for all cephfs suites
qa: remove wrongly created random distro conf
Reviewed-by: Zheng Yan <zyan@redhat.com>
This will be followed by removing the common CephFS configurations from
ceph.conf.template in teuthology.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This utilizes the recent feature in teuthology [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory in
a position-independent way, so that copying a suite to another location does not
break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
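As an illustrative sketch (these paths are hypothetical, not taken from this change), suite fragments can then reach the top-level qa directory through hidden symlinks, which the job-matrix builder now skips:

    qa/suites/.qa -> ..                                    # hidden link toward the top-level qa dir
    qa/suites/fs/.qa -> ../.qa                             # each level chains upward
    qa/suites/fs/basic/frag.yaml -> .qa/cephfs/frag.yaml   # still resolves after the suite is copied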
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21712/head:
qa/tasks/cephfs: add test for renewing stale session
client: invalidate caps and leases when session becomes stale
client: fix race in concurrent readdir
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
The snapshot hierarchy it leaves behind can't be cleaned up by `rm -rf`, which
breaks workunit cleanup. So don't run this as part of the normal snaps test.
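For context, a hedged sketch (mount path and snapshot name are hypothetical): CephFS snapshots live under a hidden .snap directory and must be removed explicitly with rmdir, which is why `rm -rf` alone can't clean them up:

    rmdir mnt/dir/.snap/mysnap   # delete the snapshot first
    rm -rf mnt/dir               # only then does regular cleanup succeed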
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
use ubuntu_latest instead of ubuntu 14.04, since we want to drop support
for this release.
Signed-off-by: Kefu Chai <kchai@redhat.com>
(cherry picked from commit 88311be439)
* instead of using ubuntu 14.04, use ubuntu_latest, since we want
to drop support for this release.
* refactor this test to use the ubuntu_latest.yaml facet.
Signed-off-by: Kefu Chai <kchai@redhat.com>
(cherry picked from commit aa89bb2f93)
* refs/pull/18192/head:
qa/cephfs: test ec data pool
qa/suites/fs/basic_functional/clusters: more osds
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17676/head:
qa/tasks/cephfs: Whitelist POOL_APP_NOT_ENABLED for test_misc
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea here is to specify a
reservation of memory (5% by default) for operations and the MDS tries to
always maintain that reservation. So, the MDS will recall caps from clients
when it begins dipping into its reservation of memory.
mds_cache_size still limits the cache by inode count but is now 0 by default
(i.e. unlimited). The new preferred way of specifying cache limits is by memory
size; the default is 1GB.
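A minimal ceph.conf sketch using the new knobs (the values shown are the defaults described above):

    [mds]
    mds cache memory limit = 1073741824   # 1GB soft limit on cache memory
    mds cache reservation = 0.05          # keep 5% of that memory free for operations
    mds cache size = 0                    # inode-count limit disabled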
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
test_misc verifies that ceph fs new will not create a filesystem
on a pool that already contains objects. As part of the test, it
inserts a dummy object into a pool and then attempts to use it for
CephFS. This triggers POOL_APP_NOT_ENABLED. Setting the application
metadata for the pool (and having ceph fs new fail because of the
existing metadata) would then exercise a different failure case.
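The scenario under test looks roughly like this (a sketch with hypothetical pool and fs names):

    ceph osd pool create meta 8
    ceph osd pool create data 8
    rados -p data put dummy /etc/hosts   # the pool now contains an object
    ceph fs new myfs meta data           # refused: pool already contains objects
    # the dummy object also raises the POOL_APP_NOT_ENABLED warning,
    # which the test now whitelists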
Signed-off-by: Douglas Fuller <dfuller@redhat.com>