Mirror of https://github.com/ceph/ceph
Commit 06c94de584

This introduces two config parameters:

mds_cache_memory_limit: Sets the soft maximum of the cache to the given byte count. (Like mds_cache_size, this doesn't actually limit the maximum size of the cache; it just dictates the steady-state size.)

mds_cache_reservation: This replaces mds_health_cache_threshold everywhere except the Beacon heartbeat sent to the mons. The idea is to specify a reservation of memory (5% by default) for operations, which the MDS tries to maintain at all times. The MDS will recall caps from clients when it begins dipping into its reservation.

mds_cache_size still limits the cache by inode count but now defaults to 0 (i.e. unlimited). The new preferred way of specifying cache limits is by memory size; the default is 1GB.

Fixes: http://tracker.ceph.com/issues/20594
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1464976
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
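A minimal ceph.conf sketch of how these options fit together; the option names and defaults come from the commit message above, but the 4 GiB override shown here is purely illustrative:

    [mds]
        # Soft target for MDS cache memory; per this commit the default is 1GB.
        # Overridden here to 4 GiB (in bytes) as an example, not a recommendation.
        mds cache memory limit = 4294967296
        # Fraction of the memory limit reserved for operations (default 5%).
        # The MDS recalls caps from clients once it dips into this reservation.
        mds cache reservation = 0.05
        # Inode-count limit; now 0 by default, i.e. unlimited. Limiting by
        # memory size is the preferred approach going forward.
        mds cache size = 0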
alternate-pool.yaml
asok_dump_tree.yaml
auto-repair.yaml
backtrace.yaml
cap-flush.yaml
cephfs_scrub_tests.yaml
cfuse_workunit_quota.yaml
client-limits.yaml
client-readahad.yaml
client-recovery.yaml
config-commands.yaml
damage.yaml
data-scan.yaml
forward-scrub.yaml
fragment.yaml
journal-repair.yaml
libcephfs_java.yaml
libcephfs_python.yaml
mds_creation_retry.yaml
mds-flush.yaml
mds-full.yaml
pool-perm.yaml
quota.yaml
sessionmap.yaml
strays.yaml
test_journal_migration.yaml
volume-client.yaml