* refs/remotes/upstream/pull/17694/head:
qa/cephfs: kill mount if it gets evicted by mds
qa/cephfs: fix test_evict_client
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17657/head:
mds: optimize MDCache::rejoin_scour_survivor_replicas()
mds: fix MDSCacheObject::clear_replica_map
mds: support limiting cache by memory
common: refactor of lru
mds: resolve unsigned coercion compiler warning
common: use safer uint64_t for list size
common: add bytes2str pretty print function
mds: check if waiting is allocated before use
mds: go back to compact_map for replicas
mds: use mempool for cache objects
mds: cleanup replica_map access
common: add alloc_ptr smart pointer
common: add warning on base class use of mempool
common: use atomic uint64_t for counter
Reviewed-by: Zheng Yan <zyan@redhat.com>
ceph df already accounts for pool size, so there is no need to account for it in the test.
Fixes: http://tracker.ceph.com/issues/21381
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea is to reserve a
fraction of memory (5% by default) for operations, and the MDS tries to
always maintain that reservation. So, the MDS recalls caps from clients
as soon as it begins dipping into its reservation of memory.
mds_cache_size still limits the cache by inode count but now defaults to 0
(i.e. unlimited). The preferred way of specifying cache limits is now by
memory size; the default is 1GB.
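For illustration, these limits might be set in ceph.conf roughly as follows
(the [mds] section is standard; the values shown are examples, not
recommendations):

    [mds]
        # soft limit on cache memory, in bytes (example: 4 GiB)
        mds cache memory limit = 4294967296
        # keep 5% of that limit free for operations (the default)
        mds cache reservation = 0.05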
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Add support for testing recovery of CephFS metadata into an alternate
RADOS pool, useful as a disaster recovery mechanism that avoids
modifying the metadata in-place.
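As a rough sketch only -- the module path, class name, and test flow below
are assumptions for illustration, not the contents of this commit -- a
standalone qa test for this could be shaped along these lines:

    # qa/tasks/cephfs/test_recovery_pool.py (hypothetical module name)
    from tasks.cephfs.cephfs_test_case import CephFSTestCase

    class TestRecoveryPool(CephFSTestCase):
        MDSS_REQUIRED = 1
        CLIENTS_REQUIRED = 1

        def test_recover_into_alternate_pool(self):
            # Outline: write a small workload, take the MDS offline, run the
            # offline recovery tools against a freshly created metadata pool,
            # and verify the result without modifying the original metadata
            # objects in place.
            pass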
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Remove the alternate pool recovery test from test_data_scan. Newer
commits will place the test in its own file.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
* refs/remotes/upstream/pull/16378/head:
doc: remove accidental additions to release notes
qa/cephfs: Fix race in test_volume_client
qa/cephfs: Test filtered df
PendingReleaseNotes: add note about df filtering
client: Support new, filtered MStatfs
objecter: Support new, filtered MStatfs
mon/PGMap stats: Support new, filtered MStatfs
messages: Add optional data pool to MStatfs
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Note: unmounting the client is not necessary for purging snapshots.
Fixes: http://tracker.ceph.com/issues/20072
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Previously this relied on being run in a special cluster configuration
that set up standby-replay daemons. This change allows it to live
alongside all the 'normal' functional tests.
Signed-off-by: John Spray <john.spray@redhat.com>
Previously, calling mds_stop without mds_fail meant that if filesystem
creation was not quick, those daemons would go laggy. That now triggers
test failures, because we emit cluster log messages when a daemon is
failed out due to being laggy.
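A minimal sketch of the intended pattern, assuming the mds_stop/mds_fail
helpers on the qa Filesystem object (the surrounding test code is not shown):

    # Stop the MDS daemons and also mark them failed, so the monitors do not
    # later report them as laggy while the new filesystem is being created.
    fs.mds_stop()
    fs.mds_fail()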
Signed-off-by: John Spray <john.spray@redhat.com>