diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index 84fb744f3db..0a0bde3c995 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -216,6 +216,10 @@
 * The format of MDSs in `ceph fs dump` has changed.

+* The ``mds_cache_size`` config option is completely removed. Since Luminous,
+  the ``mds_cache_memory_limit`` config option has been the preferred way to
+  configure the MDS's cache limits.
+
 * The ``pg_autoscale_mode`` is now set to ``on`` by default for newly
   created pools, which means that Ceph will automatically manage the
   number of PGs. To change this behavior, or to learn more about PG
diff --git a/doc/cephfs/app-best-practices.rst b/doc/cephfs/app-best-practices.rst
index f55f46724c6..50bd3b689b5 100644
--- a/doc/cephfs/app-best-practices.rst
+++ b/doc/cephfs/app-best-practices.rst
@@ -69,10 +69,9 @@ performance is very different for workloads whose metadata fits within that
 cache.

 If your workload has more files than fit in your cache (configured using
-``mds_cache_memory_limit`` or ``mds_cache_size`` settings), then
-make sure you test it appropriately: don't test your system with a small
-number of files and then expect equivalent performance when you move
-to a much larger number of files.
+the ``mds_cache_memory_limit`` setting), then make sure you test it
+appropriately: don't test your system with a small number of files and then
+expect equivalent performance when you move to a much larger number of files.

 Do you need a file system?
 --------------------------
diff --git a/doc/cephfs/cache-size-limits.rst b/doc/cephfs/cache-size-limits.rst
index 4ea41443bcd..1f6f5d93b9f 100644
--- a/doc/cephfs/cache-size-limits.rst
+++ b/doc/cephfs/cache-size-limits.rst
@@ -5,10 +5,9 @@ This section describes ways to limit MDS cache size.

 You can limit the size of the Metadata Server (MDS) cache by:

-* *A memory limit*: A new behavior introduced in the Luminous release. Use the `mds_cache_memory_limit` parameters. We recommend to use memory limits instead of inode count limits.
-* *Inode count*: Use the `mds_cache_size` parameter. By default, limiting the MDS cache by inode count is disabled.
+* *A memory limit*: A new behavior introduced in the Luminous release. Use the `mds_cache_memory_limit` parameter.

-In addition, you can specify a cache reservation by using the `mds_cache_reservation` parameter for MDS operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients in order to drop unused metadata in its cache.
+In addition, you can specify a cache reservation by using the `mds_cache_reservation` parameter for MDS operations. The cache reservation is expressed as a percentage of the memory limit and is set to 5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for its cache for new metadata operations to use. As a consequence, the MDS should in general operate below its memory limit because it will recall old state from clients in order to drop unused metadata in its cache.

 The `mds_cache_reservation` parameter replaces the `mds_health_cache_threshold` in all situations except when MDS nodes sends a health alert to the Monitors indicating the cache is too large. By default, `mds_health_cache_threshold` is 150% of the maximum cache size.
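To make the arithmetic of the remaining knobs concrete, here is a minimal sketch of how the three documented defaults interact. This is plain Python for illustration only; the variable names mirror the config option names, but nothing below is a Ceph API.

```python
# Illustrative sketch, assuming the defaults documented above.
mds_cache_memory_limit = 1 * 1024**3   # default 1G, in bytes
mds_cache_reservation = 0.05           # default 5% reserve
mds_health_cache_threshold = 1.5       # default 150%

# The MDS recalls client state to keep the reservation free, so in steady
# state it aims to hold its cache below this fraction of the limit.
target_bytes = int(mds_cache_memory_limit * (1 - mds_cache_reservation))

# Above this size, the MDS reports MDS_HEALTH_CACHE_OVERSIZED to the
# monitors (see the health-messages change below).
warn_bytes = int(mds_cache_memory_limit * mds_health_cache_threshold)

print(f"steady-state target: {target_bytes / 2**20:.0f} MiB")  # ~973 MiB
print(f"health warning at:   {warn_bytes / 2**20:.0f} MiB")    # 1536 MiB
```

So with an untouched 1G limit, the MDS tries to stay near 973 MiB of cache and the cluster warns once the cache exceeds 1536 MiB.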
diff --git a/doc/cephfs/health-messages.rst b/doc/cephfs/health-messages.rst
index a0f460da23a..2e79c7bfa15 100644
--- a/doc/cephfs/health-messages.rst
+++ b/doc/cephfs/health-messages.rst
@@ -75,11 +75,11 @@ Message: "Client *name* failing to respond to cache pressure"
 Code: MDS_HEALTH_CLIENT_RECALL, MDS_HEALTH_CLIENT_RECALL_MANY
 Description: Clients maintain a metadata cache. Items (such as inodes) in the
 client cache are also pinned in the MDS cache, so when the MDS needs to shrink
-its cache (to stay within ``mds_cache_size`` or ``mds_cache_memory_limit``), it
-sends messages to clients to shrink their caches too. If the client is
-unresponsive or buggy, this can prevent the MDS from properly staying within
-its cache limits and it may eventually run out of memory and crash. This
-message appears if a client has failed to release more than
+its cache (to stay within ``mds_cache_memory_limit``), it sends messages to
+clients to shrink their caches too. If the client is unresponsive or buggy,
+this can prevent the MDS from properly staying within its cache limits and it
+may eventually run out of memory and crash. This message appears if a client
+has failed to release more than
 ``mds_recall_warning_threshold`` capabilities (decaying with a half-life of
 ``mds_recall_max_decay_rate``) within the last
 ``mds_recall_warning_decay_rate`` second.
@@ -126,6 +126,6 @@ Code: MDS_HEALTH_CACHE_OVERSIZED
 Description: The MDS is not succeeding in trimming its cache to comply with the
 limit set by the administrator. If the MDS cache becomes too large, the daemon
 may exhaust available memory and crash. By default, this message appears if
-the actual cache size (in inodes or memory) is at least 50% greater than
-``mds_cache_size`` (default 100000) or ``mds_cache_memory_limit`` (default
-1GB). Modify ``mds_health_cache_threshold`` to set the warning ratio.
+the actual cache size (in memory) is at least 50% greater than
+``mds_cache_memory_limit`` (default 1GB). Modify ``mds_health_cache_threshold``
+to set the warning ratio.
diff --git a/doc/cephfs/mds-config-ref.rst b/doc/cephfs/mds-config-ref.rst
index b91a44245b4..248368c1735 100644
--- a/doc/cephfs/mds-config-ref.rst
+++ b/doc/cephfs/mds-config-ref.rst
@@ -5,9 +5,8 @@
 ``mds cache memory limit``

 :Description: The memory limit the MDS should enforce for its cache.
-              Administrators should use this instead of ``mds cache size``.
 :Type: 64-bit Integer Unsigned
-:Default: ``1073741824``
+:Default: ``1G``

 ``mds cache reservation``

@@ -18,14 +17,6 @@
 :Type: Float
 :Default: ``0.05``

-``mds cache size``
-
-:Description: The number of inodes to cache. A value of 0 indicates an
-              unlimited number. It is recommended to use
-              ``mds_cache_memory_limit`` to limit the amount of memory the MDS
-              cache uses.
-:Type: 32-bit Integer
-:Default: ``0``

 ``mds cache mid``

diff --git a/doc/cephfs/troubleshooting.rst b/doc/cephfs/troubleshooting.rst
index 1ff6cd0bea1..dcc3f84ab28 100644
--- a/doc/cephfs/troubleshooting.rst
+++ b/doc/cephfs/troubleshooting.rst
@@ -38,9 +38,9 @@ specific clients as misbehaving, you should investigate why they are doing so.

 Generally it will be the result of

-#. Overloading the system (if you have extra RAM, increase the "mds cache size"
-   config from its default 100000; having a larger active file set than your MDS
-   cache is the #1 cause of this!).
+#. Overloading the system (if you have extra RAM, increase the
+   "mds cache memory limit" config from its default 1GiB; having a larger active
+   file set than your MDS cache is the #1 cause of this!).

 #. Running an older (misbehaving) client.

diff --git a/doc/rados/configuration/ceph-conf.rst b/doc/rados/configuration/ceph-conf.rst
index 0bbd243af8a..2c326bee216 100644
--- a/doc/rados/configuration/ceph-conf.rst
+++ b/doc/rados/configuration/ceph-conf.rst
@@ -144,7 +144,7 @@
           the Ceph Storage Cluster, and override the same setting in
           ``global``.

-:Example: ``mds_cache_size = 10G``
+:Example: ``mds_cache_memory_limit = 10G``

 ``client``

diff --git a/qa/tasks/cephfs/test_client_limits.py b/qa/tasks/cephfs/test_client_limits.py
index cd9a9a6635a..7b496d751e3 100644
--- a/qa/tasks/cephfs/test_client_limits.py
+++ b/qa/tasks/cephfs/test_client_limits.py
@@ -38,15 +38,18 @@ class TestClientLimits(CephFSTestCase):
         :param use_subdir: whether to put test files in a subdir or use root
         """

-        cache_size = open_files/2
+        # Set the MDS cache memory limit to a value low enough that the MDS
+        # will ask the client to trim its caps.
+        cache_memory_limit = "1K"

-        self.set_conf('mds', 'mds cache size', cache_size)
+        self.set_conf('mds', 'mds_cache_memory_limit', cache_memory_limit)
         self.set_conf('mds', 'mds_recall_max_caps', open_files/2)
         self.set_conf('mds', 'mds_recall_warning_threshold', open_files)
         self.fs.mds_fail_restart()
         self.fs.wait_for_daemons()

         mds_min_caps_per_client = int(self.fs.get_config("mds_min_caps_per_client"))
+        mds_max_caps_per_client = int(self.fs.get_config("mds_max_caps_per_client"))
         mds_recall_warning_decay_rate = self.fs.get_config("mds_recall_warning_decay_rate")
         self.assertTrue(open_files >= mds_min_caps_per_client)

@@ -87,7 +90,7 @@
             num_caps = self.get_session(mount_a_client_id)['num_caps']
             if num_caps <= mds_min_caps_per_client:
                 return True
-            elif num_caps < cache_size:
+            elif num_caps <= mds_max_caps_per_client:
                 return True
             else:
                 return False
diff --git a/src/common/options.cc b/src/common/options.cc
index d552b4345a3..e4e4c7dd21c 100644
--- a/src/common/options.cc
+++ b/src/common/options.cc
@@ -7557,11 +7557,6 @@ std::vector
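The `test_client_limits.py` change above also swaps the test's acceptance check from the removed inode-count bound (`cache_size`) to the per-client cap ceiling. Stripped of the test harness, the predicate the test now polls reduces to roughly the following; this is a sketch for clarity, not code from the tree.

```python
# Sketch of the updated trim check: a client counts as having responded to
# cache pressure once its cap count falls to either per-client bound.
def client_caps_trimmed(num_caps: int,
                        mds_min_caps_per_client: int,
                        mds_max_caps_per_client: int) -> bool:
    # The MDS never recalls caps below mds_min_caps_per_client, so reaching
    # that floor always counts as success.
    if num_caps <= mds_min_caps_per_client:
        return True
    # With mds_cache_size gone, the per-client ceiling
    # mds_max_caps_per_client replaces the old `cache_size` bound.
    return num_caps <= mds_max_caps_per_client
```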