.. _cephfs-top:

==================
CephFS Top Utility
==================

CephFS provides a `top(1)`-like utility to display various Ceph Filesystem metrics
in real time. `cephfs-top` is a curses-based Python script that uses the `stats`
plugin in Ceph Manager to fetch (and display) metrics.

Manager Plugin
--------------

Ceph Filesystem clients periodically forward various metrics to Ceph Metadata Servers (MDS),
which in turn get forwarded to the Ceph Manager by MDS rank zero. Each active MDS forwards its
respective set of metrics to MDS rank zero, where they are aggregated and forwarded to the
Ceph Manager.

Metrics are divided into two categories: global and per-MDS. Global metrics represent a
set of metrics for the filesystem as a whole (e.g., client read latency), whereas per-MDS
metrics are for a particular MDS rank (e.g., number of subtrees handled by an MDS).

.. note:: Currently, only global metrics are tracked.

The `stats` plugin is disabled by default and should be enabled via::

    $ ceph mgr module enable stats
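
To verify that the module is now active, it should show up in the manager's module
list (a quick sanity check; the exact output format varies across releases)::

    $ ceph mgr module ls | grep stats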

Once enabled, Ceph Filesystem metrics can be fetched via::

    $ ceph fs perf stats
    {
      "version": 1,
      "global_counters": ["cap_hit", "read_latency", "write_latency", "metadata_latency", "dentry_lease"],
      "counters": [],
      "client_metadata": {
        "client.614146": {
          "IP": "10.1.1.100",
          "hostname": "ceph-host1",
          "root": "/",
          "mount_point": "/mnt/cephfs",
          "valid_metrics": ["cap_hit", "read_latency", "write_latency", "metadata_latency", "dentry_lease"]
        }
      },
      "global_metrics": {
        "client.614146": [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]
      },
      "metrics": {
        "delayed_ranks": [],
        "mds.0": {"client.614146": []}
      }
    }

Details of the JSON command output are as follows (see the parsing sketch after the
note below):

- `version`: Version of stats output
- `global_counters`: List of global performance metrics
- `counters`: List of per-MDS performance metrics
- `client_metadata`: Ceph Filesystem client metadata
- `global_metrics`: Global performance counters
- `metrics`: Per-MDS performance counters (currently empty) and delayed ranks

.. note:: `delayed_ranks` is the set of active MDS ranks that are reporting stale metrics.
          This can happen in cases such as a (temporary) network issue between MDS rank zero
          and other active MDSs.
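
The fields above make the output straightforward to consume from scripts. As a minimal
sketch (assuming the `ceph` CLI is in PATH and the `stats` module is enabled; this
helper is illustrative, not part of `cephfs-top`), the global counter names can be
paired with each client's values::

    # Sketch: fetch `ceph fs perf stats` and pair each global counter name
    # with its per-client values. Assumes the caller has the capabilities
    # needed to run the command.
    import json
    import subprocess

    raw = subprocess.check_output(["ceph", "fs", "perf", "stats"])
    stats = json.loads(raw)

    # `global_counters` names the columns; `global_metrics` holds one row of
    # two-element values per client (their interpretation depends on the counter).
    counters = stats["global_counters"]
    for client, values in stats["global_metrics"].items():
        named = dict(zip(counters, values))
        print(client, named)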

Metrics can be fetched for a particular client and/or for a set of active MDSs. To fetch metrics
for a particular client (e.g., for client-id: 1234)::

    $ ceph fs perf stats --client_id=1234

To fetch metrics only for a subset of active MDSs (e.g., MDS rank 1 and 2)::

    $ ceph fs perf stats --mds_rank=1,2
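
The two filters can also be combined (the "and/or" above). Assuming both options are
accepted in a single invocation, metrics for client-id 1234 restricted to MDS ranks
1 and 2 would be fetched via::

    $ ceph fs perf stats --mds_rank=1,2 --client_id=1234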

`cephfs-top`
------------

The `cephfs-top` utility relies on the `stats` plugin to fetch performance metrics and
display them in a `top(1)`-like format. `cephfs-top` is available as part of the
`cephfs-top` package.

By default, `cephfs-top` uses the `client.fstop` user to connect to a Ceph cluster::

    $ ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
    $ cephfs-top

To use a non-default user (other than `client.fstop`), use::

    $ cephfs-top --id <name>

By default, `cephfs-top` connects to the cluster named `ceph`. To use a non-default cluster name::

    $ cephfs-top --cluster <cluster>

`cephfs-top` refreshes stats every second by default. To choose a different refresh interval, use::

    $ cephfs-top -d <seconds>

The interval should be greater than or equal to 0.5 seconds; fractional seconds are honoured.
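
For example, to refresh every half second (the minimum allowed interval)::

    $ cephfs-top -d 0.5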

Sample screenshot running `cephfs-top` with 2 clients:

.. image:: cephfs-top.png

.. note:: As of now, `cephfs-top` does not reliably work with multiple Ceph Filesystems.