From e29a3c7c2dc399b081c6c9eea133fff54d80b3fb Mon Sep 17 00:00:00 2001
From: Liu Lan
Date: Tue, 8 Dec 2020 19:44:53 +0800
Subject: [PATCH] doc: fix a couple of typos

Signed-off-by: Liu Lan
---
 doc/cephfs/cache-configuration.rst | 12 ++++++------
 doc/dev/dashboard/ui_goals.rst     | 16 ++++++++--------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/doc/cephfs/cache-configuration.rst b/doc/cephfs/cache-configuration.rst
index 480bb562dfd..94c970ee396 100644
--- a/doc/cephfs/cache-configuration.rst
+++ b/doc/cephfs/cache-configuration.rst
@@ -10,13 +10,13 @@ may cache and what manipulations clients may perform (e.g. writing to a file).
 
 The MDS and clients both try to enforce a cache size. The mechanism for
 specifying the MDS cache size is described below. Note that the MDS cache size
-is a not a hard limit. The MDS always allows clients to lookup new metadata
-which is loaded into the cache. This is an essential policy as its avoids
+is not a hard limit. The MDS always allows clients to lookup new metadata
+which is loaded into the cache. This is an essential policy as it avoids
 deadlock in client requests (some requests may rely on held capabilities before
 capabilities are released).
 
 When the MDS cache is too large, the MDS will **recall** client state so cache
-items become unpinned and eligble to be dropped. The MDS can only drop cache
+items become unpinned and eligible to be dropped. The MDS can only drop cache
 state when no clients refer to the metadata to be dropped. Also described below
 is how to configure the MDS recall settings for your workload's needs. This is
 necessary if the internal throttles on the MDS recall can not keep up with the
@@ -77,7 +77,7 @@ life for the counter. If the MDS is continually removing items from its cache,
 it will reach a steady state of ``-ln(0.5)/rate*threshold`` items removed per
 second.
 
-The defaults are conservative and may need changed for production MDS with
+The defaults are conservative and may need to be changed for production MDS with
 large cache sizes.
 
 
@@ -138,7 +138,7 @@ Session Liveness
 
 The MDS also keeps track of whether sessions are quiescent. If a client session
 is not utilizing its capabilities or is otherwise quiet, the MDS will begin
-recalling state from the session even if its not under cache pressure. This
+recalling state from the session even if it's not under cache pressure. This
 helps the MDS avoid future work when the cluster workload is hot and cache
 pressure is forcing the MDS to recall state. The expectation is that a client
 not utilizing its capabilities is unlikely to use those capabilities anytime
@@ -156,7 +156,7 @@ and::
 The configuration ``mds_session_cache_liveness_decay_rate`` indicates the
 half-life for the decay counter tracking the use of capabilities by the client.
 Each time a client manipulates or acquires a capability, the MDS will increment
-the counter. This is a rough but effective way to monitor utilization of the
+the counter. This is a rough but effective way to monitor the utilization of the
 client cache.
 
 The ``mds_session_cache_liveness_magnitude`` is a base-2 magnitude difference
diff --git a/doc/dev/dashboard/ui_goals.rst b/doc/dev/dashboard/ui_goals.rst
index 92ff572dfcb..4e68ec1f54e 100644
--- a/doc/dev/dashboard/ui_goals.rst
+++ b/doc/dev/dashboard/ui_goals.rst
@@ -2,13 +2,13 @@
 Ceph Dashboard Design Goals
 ===========================
 
-.. note:: this document is intended to provide a focal point for discussing the overall design
+.. note:: This document is intended to provide a focal point for discussing the overall design
    principles for mgr/dashboard
 
 Introduction
 ============
 
-Most distributed storage architectures are inherently complex, and can present a management challenge
+Most distributed storage architectures are inherently complex and can present a management challenge
 to Operations teams who are typically stretched across multiple product and platform disciplines. In
 general terms, the complexity of any solution can have a direct bearing on the operational costs
 incurred to manage it. The answer is simple...make it simple :)
@@ -26,7 +26,7 @@ Understanding the Persona of the Target User
 
 Ceph has historically been administered from the CLI. The CLI has always and will always offer the
 richest, most flexible way to install and manage a Ceph cluster. Administrators who require and
-demand this level of control are unlikely to adopt a UI for anything more than a technical curiousity.
+demand this level of control are unlikely to adopt a UI for anything more than a technical curiosity.
 
 The relevance of the UI is therefore more critical for a new SysAdmin, where it can help technology
 adoption and reduce the operational friction that is normally experienced when implementing a new
@@ -47,13 +47,13 @@ ______________
    different views
 #. **Data timeliness**. Data displayed in the UI must be timely. State information **must** be
    reasonably recent for it to be relevant and acted upon with confidence. In addition, the age of the data should
-   be shown as an age (e.g. 20s ago) rather than UTC timestamps to make it more immediately consumable by
+   be shown as age (e.g. 20s ago) rather than UTC timestamps to make it more immediately consumable by
    the Administrator.
 #. **Automate through workflows**. If the admin has to follow a 'recipe' to perform a task, the goal of
    the dashboard UI should be to implement the flow.
 #. **Provide a natural next step**. The UI **is** the *expert system*, so instead of expecting the user
-   to know where to they go next, the UI should lead them. This means linking components together to
-   establish a flow, and deeper integration between the alertmanager implementation and the dashboard
+   to know where they go next, the UI should lead them. This means linking components together to
+   establish a flow and deeper integration between the alertmanager implementation and the dashboard
    elements enabling an Admin to efficiently step from alert to affected component.
 #. **Platform visibility**. The platform (OS and hardware configuration) is a fundamental component of
    the solution, so providing platform level insights can help deliver a more holistic view of the Ceph cluster.
@@ -74,5 +74,5 @@ _______________
 Focus On User Experience
 ========================
 Ultimately, the goal must be to move away from pushing complexity onto the GUI user through multi-step
-workflows like iSCSI configuration, or setting specific cluster flags in defined sequences. Simplicity,
-should be the goal for the UI...let's leave complexity to the CLI.
+workflows like iSCSI configuration or setting specific cluster flags in defined sequences. Simplicity
+should be the goal for the UI...let's leave the complexity to the CLI.
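For readers of the ``cache-configuration.rst`` hunks above, a minimal sketch of the quoted
``-ln(0.5)/rate*threshold`` steady-state expression follows, kept outside the patch so the diff
still applies as-is. The function name and the example ``rate``/``threshold`` values are
illustrative placeholders, not Ceph configuration names or defaults; only the formula itself comes
from the patched text.

.. code-block:: python

    import math

    def steady_state_removals_per_sec(rate: float, threshold: float) -> float:
        """Evaluate the steady-state removal rate quoted in cache-configuration.rst.

        ``rate`` is the decay counter half-life in seconds and ``threshold`` is the
        corresponding trim/recall threshold in items. Both example values below are
        placeholders, not Ceph defaults.
        """
        return -math.log(0.5) / rate * threshold

    # Illustrative numbers only: a 1.0 s half-life with a 64Ki-item threshold gives
    # roughly ln(2) * 65536, i.e. about 45426 items removed per second at steady state.
    print(steady_state_removals_per_sec(1.0, 64 * 1024))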