These have bit-rotted and no longer work. No cycles are available from
interested parties to fix them.
Fixes: https://tracker.ceph.com/issues/38487
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/25977/head:
qa/suites: exclude new packages when installing old versions
rpm: add dependency on python-kubernetes module to ceph-mgr-rook package
rpm,deb: add rbd_support module to ceph-mgr
packaging: split ceph-mgr diskprediction and rook plugins into own packages
Reviewed-by: Tim Serong <tserong@suse.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
As with trimming, use DecayCounters to throttle the number of caps we recall,
both globally and per-session.
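For illustration only, a minimal Python sketch of the throttling idea; the real code is the C++ DecayCounter in the MDS, and the half-life and thresholds below are made-up values, not the actual defaults:

    import time

    class DecayCounter:
        """Exponentially decaying counter: recent hits weigh more than old ones."""
        def __init__(self, half_life=30.0):
            self.half_life = half_life
            self.value = 0.0
            self.last = time.monotonic()

        def _decay(self):
            now = time.monotonic()
            # halve the accumulated value once per half_life of elapsed time
            self.value *= 0.5 ** ((now - self.last) / self.half_life)
            self.last = now

        def hit(self, n=1.0):
            self._decay()
            self.value += n

        def get(self):
            self._decay()
            return self.value

    # Hypothetical throttle: stop recalling caps from a session once the recent
    # recall rate, per-session or globally, exceeds a threshold.
    global_recalls = DecayCounter()

    def maybe_recall(session_counter, n_caps,
                     session_max=5000.0, global_max=50000.0):
        if session_counter.get() > session_max or global_recalls.get() > global_max:
            return False   # throttled; retry on a later tick
        session_counter.hit(n_caps)
        global_recalls.hit(n_caps)
        return True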
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
We use the "testnodes.yml" playbook defined by ceph-cm-ansible to initialize
test nodes, and testnodes.yml uses the "testnode" role. "testnode" requires
the "qemu-system-x86" or "qemu-kvm" package to be installed, and qemu in turn
depends on librbd1 and librados2.
Before librados3 was introduced this worked fine, because qa/packages/packages.yaml
in the ceph repo defines the default set of packages the "install" task should
install, and librados2 was listed in that file. The package management system
would therefore overwrite the librados2 installed by the ansible playbook with
the version specified by the "install" task: apt/yum treats it as an explicit
user request, so installing a different version of librados2 is allowed.
After librados3 was introduced, librados2 was removed from
qa/packages/packages.yaml, because by default we need to install librados3
instead of librados2 to prepare a nautilus cluster. The problem is that this
package list also applies to "install" tasks installing releases before
nautilus, where we still need to replace the librados2 installed by ansible.
To address this issue, "librados2" is added to the "extra_packages" of the
"install" tasks of tests installing old releases, so that librados2 is
installed explicitly rather than as a dependency of other ceph packages like
librbd1.
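As a rough illustration, here is what such an override might look like once teuthology has parsed the suite YAML into Python; the branch name and default package list below are assumptions, not the real fragment:

    # Hypothetical parsed form of a suite fragment that installs an old release:
    # librados2 is requested explicitly via extra_packages, so apt/yum is willing
    # to replace the copy that the ansible playbook already installed.
    install_task = {
        "install": {
            "branch": "luminous",              # some pre-nautilus release (assumption)
            "extra_packages": ["librados2"],
        }
    }

    # The install task then combines the default package list (from
    # qa/packages/packages.yaml, which no longer lists librados2) with the
    # explicit extras above.
    def packages_to_install(default_packages, task_config):
        extras = task_config["install"].get("extra_packages", [])
        return list(default_packages) + list(extras)

    print(packages_to_install(["ceph", "librbd1"], install_task))
    # ['ceph', 'librbd1', 'librados2']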
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/21885/head:
qa: update cluster log health warning message
qa: add tests for client features
mds: evict clients that lack required features
mds: cleanup MDSRank::evict_client
mds: infer client version by client metadata and connection's features
mds: introduce "ceph fs set <fs_name> min_compat_client <release_name>"
mds: tell client why it's rejected
mds: introduce cephfs' own feature bits
mds: make Server::prepare_force_open_sessions() update client metadata
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/22740/head:
qa: create common conf for all cephfs suites
qa: remove wrongly created random distro conf
Reviewed-by: Zheng Yan <zyan@redhat.com>
This will be followed by removing common CephFS configurations in the
ceph.conf.template in teuthology.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This utilizes the recent feature in teuthology [1] to skip hidden files in
suites when building the job matrix.
The idea of this change is to enable referring to the top-level qa directory in
a position-independent way, so that copying a suite to another location does not
break any symlinks.
[1] https://github.com/ceph/teuthology/pull/1185
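Roughly, the trick is a chain of hidden symlinks that always leads back to the top-level qa directory, so suite symlinks can be expressed relative to it; a hedged Python sketch of the layout (directory names invented for illustration):

    import os

    def ln(target, link):
        if not os.path.lexists(link):
            os.symlink(target, link)

    # Toy qa tree: every directory gets a hidden ".qa" link chaining to its
    # parent, and the top-level ".qa" points at the qa directory itself.
    os.makedirs("qa/suites/fs/basic", exist_ok=True)
    os.makedirs("qa/distros", exist_ok=True)
    ln(".", "qa/.qa")
    ln("../.qa", "qa/suites/.qa")
    ln("../.qa", "qa/suites/fs/.qa")
    ln("../.qa", "qa/suites/fs/basic/.qa")

    # A suite fragment can now reference shared bits position-independently:
    # ".qa/distros" resolves to the top-level qa/distros from any depth (as long
    # as the destination keeps the same ".qa" chain), and teuthology's matrix
    # builder skips the hidden ".qa" entries themselves.
    ln(".qa/distros", "qa/suites/fs/basic/distros")
    print(os.path.realpath("qa/suites/fs/basic/distros"))   # .../qa/distros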
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/pull/21712/head:
qa/tasks/cephfs: add test for renewing stale session
client: invalidate caps and leases when session becomes stale
client: fix race in concurrent readdir
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
The snapshot hierarchy it leaves behind can't be cleaned up by `rm -rf`, which
breaks workunit cleanup. So don't run this as part of the normal snaps test.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
instead of using ubuntu 14.04, since we want to drop support for this
release.
Signed-off-by: Kefu Chai <kchai@redhat.com>
(cherry picked from commit 88311be4393586ae7f92862edebad907ee3a133f)
* use ubuntu_latest instead of ubuntu 14.04, since we want to drop support
for this release.
* refactor this test to use the ubuntu_latest.yaml facet.
Signed-off-by: Kefu Chai <kchai@redhat.com>
(cherry picked from commit aa89bb2f93a0ee7b26dff3972f09c64529054744)
* refs/pull/18192/head:
qa/cephfs: test ec data pool
qa/suites/fs/basic_functional/clusters: more osds
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17676/head:
qa/tasks/cephfs: Whitelist POOL_APP_NOT_ENABLED for test_misc
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea here is to specify a
reservation of memory (5% by default) for operations and the MDS tries to
always maintain that reservation. So, the MDS will recall caps from clients
when it begins dipping into its reservation of memory.
mds_cache_size still limits the cache by inode count but is now 0 by default
(i.e. unlimited). The new preferred way of specifying cache limits is by memory
size. The default is 1GB.
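For a rough sense of the arithmetic, a hedged sketch; the actual trigger logic in the MDS may differ in detail from this:

    def should_recall_caps(cache_bytes_used,
                           mds_cache_memory_limit=1 * 1024**3,   # 1GB default
                           mds_cache_reservation=0.05):          # 5% default
        # Start recalling once usage eats into the reserved headroom,
        # i.e. exceeds limit * (1 - reservation), ~972 MiB with the defaults.
        return cache_bytes_used > mds_cache_memory_limit * (1.0 - mds_cache_reservation)

    print(should_recall_caps(900 * 1024**2))    # False: still under the reservation line
    print(should_recall_caps(1000 * 1024**2))   # True: dipping into the 5% reservation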
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
test_misc verifies that ceph fs new will not create a filesystem
on a pool that already contains objects. As part of the test, it
inserts a dummy object into a pool and then attempts to use it for
CephFS. This triggers POOL_APP_NOT_ENABLED. Setting the application
metadata for the pool (and having ceph fs new fail because of the
existing metadata) would then exercise a different failure case.
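For context, the sequence the test exercises looks roughly like this (a hedged sketch driving the plain CLI via subprocess; pool and filesystem names are made up):

    import subprocess

    # Put a dummy object into an otherwise-unused pool.
    subprocess.run(["rados", "-p", "metapool", "put", "dummy", "/etc/hosts"],
                   check=True)

    # Using that pool for a new filesystem is expected to fail because the pool
    # already contains objects; meanwhile the cluster flags POOL_APP_NOT_ENABLED
    # for the pool, which is the warning the test has to whitelist.
    result = subprocess.run(["ceph", "fs", "new", "testfs", "metapool", "datapool"])
    assert result.returncode != 0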
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
so we can avoid warnings like
grep: Unmatched ( or \(
which happen because we pass the whitelisted string to `egrep -v "$1"` directly.
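The underlying problem is just regex metacharacters in the whitelist entry; the same idea in a hedged Python illustration (the real fix lives in the shell helper, not Python):

    import re

    # A whitelist entry containing a bare "(" (made-up example) is what makes
    # egrep complain "Unmatched ( or \(" when used as a pattern verbatim.
    whitelist = "wrongly marked me down ("

    # Escaping regex metacharacters first lets it match literally, warning-free.
    escaped = re.escape(whitelist)
    log_line = "cluster [WRN] map e42 wrongly marked me down (osd.3)"
    print(bool(re.search(escaped, log_line)))   # True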
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/remotes/upstream/pull/15979/head:
Ignore unmatched rstat errors from MDS during rebuild testing
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Valgrind runs itself on forked children, and does its cleanup when they
complete, and this is slow... slow enough that it frequently makes the
test time out.
Valgrind lets you ignore child *processes* that you exec, but I can't
find a way to skip forked children in the same address space.
Work around this by skipping this validation when running under valgrind.
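In practice the workaround amounts to gating the check on whether valgrind is in play, roughly like this (names are illustrative, not the actual task code):

    def maybe_validate_forked_children(config):
        # Hypothetical guard: under valgrind, forked children are instrumented
        # too and their cleanup is slow enough to blow the test timeout, so the
        # validation is simply skipped in that case.
        if config.get("valgrind"):
            return "skipped under valgrind"
        return "validated"

    print(maybe_validate_forked_children({"valgrind": {"mds.a": ["--tool=memcheck"]}}))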
Fixes: http://tracker.ceph.com/issues/20602
Signed-off-by: Sage Weil <sage@redhat.com>
This reverts 693bd23851, which was
added in response to http://tracker.ceph.com/issues/18126. But
we updated the Ubuntu packages in sepia so it should be good to go.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
This change happened a while back, but it got rolled back
when the generic objectstore/ dir had its filestore
entry split out into xfs and btrfs in 208675af.
Signed-off-by: John Spray <john.spray@redhat.com>
The "recovery" sub suite was originally tests for
client/mds recovery in certain failure cases, it has
since grown to include lots of unit testing of
various features using CephFSTestCase.
The "basic" suite is now specifically just running workloads
now that I've moved out the smaller functional tests.
Signed-off-by: John Spray <john.spray@redhat.com>
Most of what's in basic/ is "workload" type testing
(i.e. a simple cluster configuration and then
running a script inside the client), which gets
permuted in various ways. Move the simpler
functional tests out to join the others like them.
Signed-off-by: John Spray <john.spray@redhat.com>
These are unit tests for specific CephFS functionality;
it is gratuitous to repeat them with different underlying
RADOS object stores.
We retain coverage of XFS vs. bluestore in the workload tests.
Signed-off-by: John Spray <john.spray@redhat.com>