* refs/remotes/upstream/pull/16036/head:
mds: improve cap min/max ratio descriptions
mds: fix whitespace
mds: cap client recall to min caps per client
mds: fix conf types
mds: fix whitespace
doc/cephfs: add client min cache and max cache ratio description
mds: add tunable features for caps_per_client
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Zheng Yan <zyan@redhat.com>
* refs/remotes/upstream/pull/17657/head:
mds: optimize MDCache::rejoin_scour_survivor_replicas()
mds: fix MDSCacheObject::clear_replica_map
mds: support limiting cache by memory
common: refactor of lru
mds: resolve unsigned coercion compiler warning
common: use safer uint64_t for list size
common: add bytes2str pretty print function
mds: check if waiting is allocated before use
mds: go back to compact_map for replicas
mds: use mempool for cache objects
mds: cleanup replica_map access
common: add alloc_ptr smart pointer
common: add warning on base class use of mempool
common: use atomic uint64_t for counter
Reviewed-by: Zheng Yan <zyan@redhat.com>
* refs/remotes/upstream/pull/17608/head:
doc/cephfs/posix: put posix notes in perspective
Reviewed-by: John Spray <john.spray@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This introduces two config parameters:
mds_cache_memory_limit: Sets the soft maximum of the cache to the given
byte count. (Like mds_cache_size, this doesn't actually limit the maximum
size of the cache. It just dictates the steady-state size.)
mds_cache_reservation: This replaces mds_health_cache_threshold everywhere
except the Beacon heartbeat sent to the mons. It specifies a reservation of
memory (5% by default) kept free for operations, and the MDS tries to always
maintain that reservation, recalling caps from clients when it begins
dipping into the reserved memory.
mds_cache_size still limits the cache by inode count but is now 0 by default
(i.e. unlimited). The new preferred way of specifying cache limits is by
memory size; the default is 1GB. (See the example below.)
Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
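For illustration, a minimal ceph.conf sketch applying the settings described
above (the values shown are the defaults noted in this message, not tuning
recommendations):
    [mds]
        # Soft memory target for the cache, in bytes (1GB here).
        # Like mds_cache_size, this is a steady-state target, not a hard cap.
        mds cache memory limit = 1073741824
        # Fraction of the memory limit reserved for operations (5% default);
        # the MDS recalls client caps when it dips into this reservation.
        mds cache reservation = .05
        # Legacy inode-count limit; 0 (the new default) means unlimited.
        mds cache size = 0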
Current CephFS supports seekdir efficiently; the divergence was fixed by
https://github.com/ceph/ceph/pull/14317 (see the sketch below).
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
By default, this will create one pool per rack, with 3x replication and a
host failure domain. Those parameters can be customized via mgr config-key
options (see the sketch below).
Signed-off-by: Sage Weil <sage@redhat.com>
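A sketch of what that customization might look like; the module name and
key names here (a "localpool"-style module) are assumptions, since the
message doesn't spell them out:
    # Hypothetical keys; the actual names depend on the module's implementation.
    ceph config-key set mgr/localpool/num_rep 2
    ceph config-key set mgr/localpool/failure_domain osd
    ceph config-key set mgr/localpool/subtree rack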
It works with setuptools and is now compatible with py3. The py3 branch was
created to track the upstream master branch.
Signed-off-by: Kefu Chai <kchai@redhat.com>