* refs/remotes/upstream/pull/17694/head:
qa/cephfs: kill mount if it gets evicted by mds
qa/cephfs: fix test_evict_client
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17657/head:
mds: optimize MDCache::rejoin_scour_survivor_replicas()
mds: fix MDSCacheObject::clear_replica_map
mds: support limiting cache by memory
common: refactor of lru
mds: resolve unsigned coercion compiler warning
common: use safer uint64_t for list size
common: add bytes2str pretty print function
mds: check if waiting is allocated before use
mds: go back to compact_map for replicas
mds: use mempool for cache objects
mds: cleanup replica_map access
common: add alloc_ptr smart pointer
common: add warning on base class use of mempool
common: use atomic uint64_t for counter
Reviewed-by: Zheng Yan <zyan@redhat.com>
ceph df accounts for pool size, so there is no need to account for it in the test.
Fixes: http://tracker.ceph.com/issues/21381
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. The idea is to specify a
reservation of memory (5% by default) for operations, and the MDS tries to
always maintain that reservation. So the MDS will recall caps from clients
when it begins dipping into its reservation of memory.
mds_cache_size still limits the cache by inode count but is now 0 by default
(i.e. unlimited). The new preferred way of specifying cache limits is by
memory size. The default is 1GB.
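For illustration, a ceph.conf excerpt that simply restates the defaults
described above (a sketch, assuming the options are placed in the [mds]
section and that the reservation is given as a fraction of the memory limit):

    [mds]
      mds cache memory limit = 1073741824   # 1GB soft target for cache memory
      mds cache reservation = .05           # keep 5% of the limit free for operations
      mds cache size = 0                    # inode-count limit disabled (unlimited)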
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
* refs/remotes/upstream/pull/17679/head:
qa: get asok path from ceph.conf
qa: use config_path property instead of literal
Reviewed-by: John Spray <john.spray@redhat.com>
are mapped and use the new mapped role for upgrades during later stages.
e.g. mon.a is mapped to mon.mira002 during install; store this mapping,
and during upgrade map it back to the appropriate name to find the hostname
with that role.
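A minimal, self-contained sketch of that bookkeeping (illustrative Python
only; the names and structure are not the actual teuthology code):

    # Remember the generic-role -> host-role mapping created at install time,
    # then resolve it again at upgrade time.
    role_map = {}

    def record_mapping(generic_role, mapped_role):
        # e.g. record_mapping('mon.a', 'mon.mira002') during install
        role_map[generic_role] = mapped_role

    def resolve_for_upgrade(generic_role):
        # During upgrade, map the generic name back to the stored role so the
        # host actually carrying that role can be found.
        return role_map[generic_role]

    record_mapping('mon.a', 'mon.mira002')
    assert resolve_for_upgrade('mon.a') == 'mon.mira002'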
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
jewel needs neither filestore nor bluestore as an option, so provide neither
when running with the jewel branch.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
This is to test customer-like upgrade scenarios and to find
any issues that may be related to systemd, packaging, etc.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
The rbd pool should exist for many rbd tests to work properly, so create
the pool right after the install is successful.
Signed-off-by: Vasu Kulkarni <vasu@redhat.com>
We assume below that rerrosd is up, but it may not be up when we exit the
loop.
Fixes: http://tracker.ceph.com/issues/21206
Signed-off-by: Sage Weil <sage@redhat.com>
Add support for testing recovery of CephFS metadata into an alternate
RADOS pool, useful as a disaster recovery mechanism that avoids
modifying the metadata in-place.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Remove the alternate pool recovery test from test_data_scan. Newer
commits will place the test in its own file.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Different filesystems (and further, different configurations of the
same filesystem) need different exclude lists. Hard coding the list in
a wrapper script is inflexible.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
If we start the osd process first and then mark it in, the pg state may
still be all active+clean when the wait_for_clean() check runs, which may
cause the subsequent osd_scrub_pgs() to fail.
So speed up the pg state change by marking the osd in first.
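A sketch of the reordering described above (hypothetical helper names; the
real change lives in the qa ceph task):

    # Hypothetical placeholders standing in for the real teuthology calls.
    def mark_osd_in(osd_id):
        print('ceph osd in %d' % osd_id)         # placeholder for the real command

    def start_osd_daemon(osd_id):
        print('starting ceph-osd.%d' % osd_id)   # placeholder for the daemon start

    def bring_up_osds(osd_ids):
        for osd_id in osd_ids:
            # Mark the osd in before starting its process so the pg state
            # begins changing right away; a later wait_for_clean() then sees
            # the transition instead of a stale all-active+clean snapshot.
            mark_osd_in(osd_id)
            start_osd_daemon(osd_id)

    bring_up_osds([0, 1, 2])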
Signed-off-by: huangjun <huangjun@xsky.com>