Merge PR #38108 into master

* refs/pull/38108/head:
	doc, man: man page for `cephfs-top` utility
	doc: document `cephfs-top` utility
	test: selftest for `cephfs-top` utility
	spec, deb: package cephfs-top utility
	cephfs-top: top(1) like utility for Ceph Filesystem
	mgr/stats: include kernel version (for kclients) in `perf stats` command output
	mgr/stats: include version with `perf stats` output

Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Patrick Donnelly 2021-01-11 08:38:52 -08:00
commit 318d3f4d80
32 changed files with 571 additions and 0 deletions


@ -718,6 +718,14 @@ storage system. This package provides a REST gateway to the
object store that aims to implement a superset of Amazon's S3
service as well as the OpenStack Object Storage ("Swift") API.
%package -n cephfs-top
Summary: top(1) like utility for Ceph Filesystem
BuildArch: noarch
Requires: python%{python3_pkgversion}-rados
%description -n cephfs-top
This package provides a top(1) like utility to display Ceph Filesystem metrics
in realtime.

%if %{with ocf}
%package resource-agents
Summary: OCF-compliant resource agents for Ceph daemons
@ -2175,6 +2183,11 @@ fi
%{_bindir}/cephfs-shell
%endif

%files -n cephfs-top
%{python3_sitelib}/cephfs_top-*.egg-info
%{_bindir}/cephfs-top
%{_mandir}/man8/cephfs-top.8*
%if 0%{with ceph_test_package}
%files -n ceph-test
%{_bindir}/ceph-client-debug

debian/cephfs-top.install (vendored, new file, +2)

@ -0,0 +1,2 @@
usr/bin/cephfs-top
usr/lib/python3*/dist-packages/cephfs_top-*.egg-info

debian/control (vendored, +9)

@ -1184,6 +1184,15 @@ Description: interactive shell for the Ceph distributed file system
 .
 This package contains a CLI for interacting with the CephFS.

Package: cephfs-top
Architecture: all
Depends: ${misc:Depends},
         ${python3:Depends},
Description: top(1) like utility to display Ceph Filesystem metrics in realtime
 .
 This package contains a utility for displaying various filesystem metrics
 in realtime.

Package: ceph-grafana-dashboards
Architecture: all
Description: grafana dashboards for the ceph dashboard

debian/rules (vendored, +1)

@ -126,6 +126,7 @@ override_dh_python3:
	dh_python3 -p python3-ceph-argparse
	dh_python3 -p python3-ceph-common
	dh_python3 -p cephfs-shell
	dh_python3 -p cephfs-top
	dh_python3 -p cephadm
# do not run tests

doc/cephfs/cephfs-top.png (new binary file, 18 KiB)

doc/cephfs/cephfs-top.rst (new file, +77)

@ -0,0 +1,77 @@
==================
CephFS Top Utility
==================

CephFS provides a `top(1)` like utility to display various Ceph Filesystem metrics
in realtime. `cephfs-top` is a curses-based Python script that uses the `stats`
plugin in Ceph Manager to fetch (and display) the metrics.

Manager Plugin
--------------

Ceph Filesystem clients periodically forward various metrics to Ceph Metadata
Servers (MDS). Each active MDS forwards its respective set of metrics to MDS
rank zero, where the metrics are aggregated and forwarded to Ceph Manager.

Metrics are divided into two categories - global and per-mds. Global metrics
represent the set of metrics for the filesystem as a whole (e.g., client read
latency) whereas per-mds metrics are for a particular MDS rank (e.g., the number
of subtrees handled by an MDS).

.. note:: Currently, only global metrics are tracked.

The `stats` plugin is disabled by default and should be enabled via::

  $ ceph mgr module enable stats

Once enabled, Ceph Filesystem metrics can be fetched via::

  $ ceph fs perf stats
  {
    "version": 1,
    "global_counters": ["cap_hit", "read_latency", "write_latency", "metadata_latency", "dentry_lease"],
    "counters": [],
    "client_metadata": {
      "client.614146": {
        "IP": "10.1.1.100",
        "hostname": "ceph-host1",
        "root": "/",
        "mount_point": "/mnt/cephfs",
        "valid_metrics": ["cap_hit", "read_latency", "write_latency", "metadata_latency", "dentry_lease"]
      }
    },
    "global_metrics": {
      "client.614146": [[0, 0], [0, 0], [0, 0], [0, 0], [0, 0]]
    },
    "metrics": {
      "delayed_ranks": [],
      "mds.0": {
        "client.614146": []
      }
    }
  }

Details of the JSON command output are as follows:

- `version`: Version of the stats output
- `global_counters`: List of global performance metrics
- `counters`: List of per-mds performance metrics
- `client_metadata`: Ceph Filesystem client metadata
- `global_metrics`: Global performance counters
- `metrics`: Per-MDS performance counters (currently empty) and delayed ranks

.. note:: `delayed_ranks` is the set of active MDS ranks that are reporting stale metrics.
          This can happen in cases such as a (temporary) network issue between MDS rank
          zero and the other active MDSs.
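
Since the command emits JSON, the output is easy to consume from scripts. The
following is a minimal sketch (not part of this change) of how a script might
fetch the same data that `cephfs-top` consumes. It assumes the `rados` Python
bindings are installed and that a client with manager access is available; the
`admin` identity below is a placeholder, so adjust the client and cluster names
to taste::

  import json

  import rados

  # Connect using the default ceph.conf search path and a hypothetical
  # 'client.admin' identity (any client with sufficient caps works).
  cluster = rados.Rados(rados_id='admin', clustername='ceph')
  cluster.conf_read_file()
  cluster.connect()
  try:
      # the same manager command that cephfs-top issues internally
      cmd = {'prefix': 'fs perf stats', 'format': 'json'}
      ret, buf, out = cluster.mgr_command(json.dumps(cmd), b'')
      if ret != 0:
          raise RuntimeError(f"'fs perf stats' failed: {out}")
      stats = json.loads(buf.decode('utf-8'))
      print(json.dumps(stats, indent=2))
  finally:
      cluster.shutdown()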

Metrics can be fetched for a particular client and/or for a set of active MDSs. To
fetch metrics for a particular client (e.g., for client-id: 1234)::

  $ ceph fs perf stats --client_id=1234

To fetch metrics only for a subset of active MDSs (e.g., MDS rank 1 and 2)::

  $ ceph fs perf stats --mds_rank=1,2
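
Since the filters are accepted independently ("and/or" above), the two can
presumably be combined to narrow the output further, e.g.::

  $ ceph fs perf stats --client_id=1234 --mds_rank=1,2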

`cephfs-top`
------------

The `cephfs-top` utility relies on the `stats` plugin to fetch performance metrics
and display them in a `top(1)` like format. `cephfs-top` is available as part of the
`cephfs-top` package.

By default, `cephfs-top` uses the `client.fstop` user to connect to a Ceph cluster::

  $ ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
  $ cephfs-top

To use a non-default user (other than `client.fstop`), use::

  $ cephfs-top --id <name>

By default, `cephfs-top` connects to the cluster named `ceph`. To use a non-default
cluster name::

  $ cephfs-top --cluster <cluster>

A sample screenshot of `cephfs-top` running with 2 clients:

.. image:: cephfs-top.png

.. note:: As of now, `cephfs-top` does not reliably work with multiple Ceph Filesystems.


@ -90,6 +90,7 @@ Administration
   CephFS Quotas <quota>
   Health messages <health-messages>
   Upgrading old file systems <upgrading>
   CephFS Top Utility <cephfs-top>
.. raw:: html


@ -38,6 +38,7 @@ list(APPEND man_srcs
  ${osd_srcs}
  ${mon_srcs}
  ceph-mds.rst
  cephfs-top.rst
  librados-config.rst
  cephadm.rst)

doc/man/8/cephfs-top.rst (new file, +50)

@ -0,0 +1,50 @@
:orphan:
==========================================
cephfs-top -- Ceph Filesystem Top Utility
==========================================
.. program:: cephfs-top
Synopsis
========
| **cephfs-top** [flags]
Description
===========
**cephfs-top** provides top(1) like functionality for Ceph Filesystem.
Various client metrics are displayed and updated in realtime.
Ceph Metadata Servers periodically send client metrics to Ceph Manager.
``Stats`` plugin in Ceph Manager provides an interface to fetch these metrics.
Options
=======
.. option:: --cluster
Cluster: Ceph cluster to connect. Defaults to ``ceph``.
.. option:: --id
Id: Client used to connect to Ceph cluster. Defaults to ``fstop``.
.. option:: --selftest
Perform a selftest. This mode performs a sanity check of ``stats`` module.
Availability
============
**cephfs-top** is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at
http://ceph.com/ for more information.
See also
========
:doc:`ceph <ceph>`\(8),
:doc:`ceph-mds <ceph-mds>`\(8)


@ -23,6 +23,7 @@
   man/8/ceph-syn
   man/8/ceph
   man/8/cephadm
   man/8/cephfs-top
   man/8/crushtool
   man/8/librados-config
   man/8/monmaptool


@ -6,9 +6,11 @@ tasks:
    extra_packages:
      rpm:
        - python3-cephfs
        - cephfs-top
      deb:
        - python3-cephfs
        - cephfs-shell
        - cephfs-top
    # For kernel_untar_build workunit
    extra_system_packages:
      deb:

qa/suites/fs/top/% (new empty file)

qa/suites/fs/top/.qa (new symbolic link)

@ -0,0 +1 @@
../.qa

qa/suites/fs/top/begin.yaml (new symbolic link)

@ -0,0 +1 @@
.qa/cephfs/begin.yaml



@ -0,0 +1,10 @@
meta:
- desc: 1 ceph cluster with 1 mon, 1 mgr, 3 osds, 1 mds
roles:
- - mon.a
  - mgr.x
  - mds.a
  - osd.0
  - osd.1
  - osd.2
  - client.0

qa/suites/fs/top/mount/.qa (new symbolic link)

@ -0,0 +1 @@
../.qa


@ -0,0 +1 @@
.qa/cephfs/mount/fuse.yaml


@ -0,0 +1 @@
../.qa


@ -0,0 +1 @@
.qa/objectstore/bluestore-bitmap.yaml


@ -0,0 +1 @@
../.qa


@ -0,0 +1 @@
./.qa/cephfs/overrides/whitelist_health.yaml


@ -0,0 +1 @@
.qa/distros/supported-random-distro$

qa/suites/fs/top/tasks/.qa (new symbolic link)

@ -0,0 +1 @@
../.qa


@ -0,0 +1,4 @@
tasks:
- cephfs_test_runner:
    modules:
      - tasks.cephfs.test_fstop


@ -0,0 +1,27 @@
import logging
from tasks.cephfs.cephfs_test_case import CephFSTestCase
from teuthology.exceptions import CommandFailedError
log = logging.getLogger(__name__)
class TestFSTop(CephFSTestCase):
def test_fstop_non_existent_cluster(self):
self.mgr_cluster.mon_manager.raw_cluster_cmd("mgr", "module", "enable", "stats")
try:
self.mount_a.run_shell(['cephfs-top',
'--cluster=hpec',
'--id=admin',
'--selftest'])
except CommandFailedError:
pass
else:
raise RuntimeError('expected cephfs-top command to fail.')
self.mgr_cluster.mon_manager.raw_cluster_cmd("mgr", "module", "disable", "stats")
def test_fstop(self):
self.mgr_cluster.mon_manager.raw_cluster_cmd("mgr", "module", "enable", "stats")
self.mount_a.run_shell(['cephfs-top',
'--id=admin',
'--selftest'])
self.mgr_cluster.mon_manager.raw_cluster_cmd("mgr", "module", "disable", "stats")


@ -12,6 +12,8 @@ from mgr_module import CommandResult
from datetime import datetime, timedelta
from threading import Lock, Condition, Thread

PERF_STATS_VERSION = 1

QUERY_IDS = "query_ids"
GLOBAL_QUERY_ID = "global_query_id"
QUERY_LAST_REQUEST = "last_time_stamp"
@ -139,6 +141,9 @@ class FSPerfStats(object):
        metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
        supported_metrics = [metric for metric, bit in MDS_PERF_QUERY_COUNTERS_MAP.items() if metric_features & (1 << bit)]
        self.set_client_metadata(client_id, "valid_metrics", supported_metrics)
        kver = metadata[CLIENT_METADATA_KEY].get("kernel_version", None)
        if kver:
            self.set_client_metadata(client_id, "kernel_version", kver)
        # when all async requests are done, purge clients metadata if any.
        if not self.client_metadata['in_progress']:
            for client in self.client_metadata['to_purge']:
@ -391,6 +396,7 @@ class FSPerfStats(object):
    def generate_report(self, user_query):
        result = {}  # type: Dict
        # start with counter info -- metrics that are global and per mds
        result["version"] = PERF_STATS_VERSION
        result["global_counters"] = MDS_GLOBAL_PERF_QUERY_COUNTERS
        result["counters"] = MDS_PERF_QUERY_COUNTERS


@ -56,3 +56,8 @@ if(WITH_CEPHFS_SHELL)
    add_tox_test(cephfs-shell)
  endif()
endif()

option(WITH_CEPHFS_TOP "install cephfs-top utility" ON)
if(WITH_CEPHFS_TOP)
  add_subdirectory(top)
endif()
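
Since `WITH_CEPHFS_TOP` is a standard CMake option, the utility can presumably be
excluded from a build by turning the flag off at configure time, e.g.::

  $ cmake -DWITH_CEPHFS_TOP=OFF <other configure args>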


@ -0,0 +1,7 @@
include(Distutils)
distutils_install_module(cephfs-top)

if(WITH_TESTS)
  include(AddCephTest)
  add_tox_test(cephfs-top)
endif()

src/tools/cephfs/top/cephfs-top (new executable file, +313)

@ -0,0 +1,313 @@
#!/usr/bin/python3

import argparse
import sys
import curses
import errno
import json
import signal
import time

from collections import OrderedDict
from datetime import datetime
from enum import Enum, unique

import rados


class FSTopException(Exception):
    def __init__(self, msg=''):
        self.error_msg = msg

    def get_error_msg(self):
        return self.error_msg


@unique
class MetricType(Enum):
    METRIC_TYPE_NONE = 0
    METRIC_TYPE_PERCENTAGE = 1
    METRIC_TYPE_LATENCY = 2


FS_TOP_PROG_STR = 'cephfs-top'

# version match b/w fstop and stats emitted by mgr/stats
FS_TOP_SUPPORTED_VER = 1

ITEMS_PAD_LEN = 1
ITEMS_PAD = " " * ITEMS_PAD_LEN

# metadata provided by mgr/stats
FS_TOP_MAIN_WINDOW_COL_CLIENT_ID = "CLIENT_ID"
FS_TOP_MAIN_WINDOW_COL_MNT_ROOT = "MOUNT_ROOT"
FS_TOP_MAIN_WINDOW_COL_MNTPT_HOST_ADDR = "MOUNT_POINT@HOST/ADDR"

MAIN_WINDOW_TOP_LINE_ITEMS_START = [ITEMS_PAD,
                                    FS_TOP_MAIN_WINDOW_COL_CLIENT_ID,
                                    FS_TOP_MAIN_WINDOW_COL_MNT_ROOT]
MAIN_WINDOW_TOP_LINE_ITEMS_END = [FS_TOP_MAIN_WINDOW_COL_MNTPT_HOST_ADDR]

# adjust this map according to stats version and maintain order
# as emitted by mgr/stats
MAIN_WINDOW_TOP_LINE_METRICS = OrderedDict([
    ("CAP_HIT", MetricType.METRIC_TYPE_PERCENTAGE),
    ("READ_LATENCY", MetricType.METRIC_TYPE_LATENCY),
    ("WRITE_LATENCY", MetricType.METRIC_TYPE_LATENCY),
    ("METADATA_LATENCY", MetricType.METRIC_TYPE_LATENCY),
    ("DENTRY_LEASE", MetricType.METRIC_TYPE_PERCENTAGE),
])
MGR_STATS_COUNTERS = list(MAIN_WINDOW_TOP_LINE_METRICS.keys())

FS_TOP_VERSION_HEADER_FMT = '{prog_name} - {now}'
FS_TOP_CLIENT_HEADER_FMT = 'Client(s): {num_clients} - {num_mounts} FUSE, '\
                           '{num_kclients} kclient, {num_libs} libcephfs'

CLIENT_METADATA_KEY = "client_metadata"
CLIENT_METADATA_MOUNT_POINT_KEY = "mount_point"
CLIENT_METADATA_MOUNT_ROOT_KEY = "root"
CLIENT_METADATA_IP_KEY = "IP"
CLIENT_METADATA_HOSTNAME_KEY = "hostname"

GLOBAL_METRICS_KEY = "global_metrics"
GLOBAL_COUNTERS_KEY = "global_counters"


def calc_perc(c):
    # c is a (hit, miss) counter pair
    if c[0] == 0 and c[1] == 0:
        return 0.0
    return round((c[0] / (c[0] + c[1])) * 100, 2)


def calc_lat(c):
    # c is a (seconds, nanoseconds) pair
    return round(c[0] + c[1] / 1000000000, 2)


def wrap(s, sl):
    """return a '+' suffixed wrapped string"""
    if len(s) < sl:
        return s
    return f'{s[0:sl-1]}+'
class FSTop(object):
    def __init__(self, args):
        self.rados = None
        self.stop = False
        self.stdscr = None  # curses instance
        self.client_name = args.id
        self.cluster_name = args.cluster
        self.conffile = args.conffile

    def handle_signal(self, signum, _):
        self.stop = True

    def init(self):
        try:
            if self.conffile:
                r_rados = rados.Rados(rados_id=self.client_name, clustername=self.cluster_name,
                                      conffile=self.conffile)
            else:
                r_rados = rados.Rados(rados_id=self.client_name, clustername=self.cluster_name)
                r_rados.conf_read_file()
            r_rados.connect()
            self.rados = r_rados
        except rados.Error as e:
            if e.errno == errno.ENOENT:
                raise FSTopException(f'cluster {self.cluster_name} does not exist')
            else:
                raise FSTopException(f'error connecting to cluster: {e}')
        self.verify_perf_stats_support()
        signal.signal(signal.SIGTERM, self.handle_signal)
        signal.signal(signal.SIGINT, self.handle_signal)

    def fini(self):
        if self.rados:
            self.rados.shutdown()
            self.rados = None

    def selftest(self):
        stats_json = self.perf_stats_query()
        if stats_json['version'] != FS_TOP_SUPPORTED_VER:
            raise FSTopException('perf stats version mismatch!')

    def setup_curses(self):
        self.stdscr = curses.initscr()

        # coordinate constants for windowing -- (height, width, y, x)
        # NOTE: requires initscr() call before accessing COLS, LINES.
        HEADER_WINDOW_COORD = (2, curses.COLS - 1, 0, 0)
        TOPLINE_WINDOW_COORD = (1, curses.COLS - 1, 3, 0)
        MAIN_WINDOW_COORD = (curses.LINES - 4, curses.COLS - 1, 4, 0)

        self.header = curses.newwin(*HEADER_WINDOW_COORD)
        self.topl = curses.newwin(*TOPLINE_WINDOW_COORD)
        self.mainw = curses.newwin(*MAIN_WINDOW_COORD)
        curses.wrapper(self.display)

    def verify_perf_stats_support(self):
        mon_cmd = {'prefix': 'mgr module ls', 'format': 'json'}
        try:
            ret, buf, out = self.rados.mon_command(json.dumps(mon_cmd), b'')
        except Exception as e:
            raise FSTopException(f'error checking \'stats\' module: {e}')
        if ret != 0:
            raise FSTopException(f'error checking \'stats\' module: {out}')
        if 'stats' not in json.loads(buf.decode('utf-8'))['enabled_modules']:
            raise FSTopException('\'stats\' module not enabled. Use \'ceph mgr module '
                                 'enable stats\' to enable')

    def perf_stats_query(self):
        mgr_cmd = {'prefix': 'fs perf stats', 'format': 'json'}
        try:
            ret, buf, out = self.rados.mgr_command(json.dumps(mgr_cmd), b'')
        except Exception as e:
            raise FSTopException(f'error in \'perf stats\' query: {e}')
        if ret != 0:
            raise FSTopException(f'error in \'perf stats\' query: {out}')
        return json.loads(buf.decode('utf-8'))

    def mtype(self, typ):
        if typ == MetricType.METRIC_TYPE_PERCENTAGE:
            return "(%)"
        elif typ == MetricType.METRIC_TYPE_LATENCY:
            return "(s)"
        else:
            return ''

    def refresh_top_line_and_build_coord(self):
        xp = 0
        x_coord_map = {}

        heading = []
        for item in MAIN_WINDOW_TOP_LINE_ITEMS_START:
            heading.append(item)
            nlen = len(item) + len(ITEMS_PAD)
            x_coord_map[item] = (xp, nlen)
            xp += nlen

        for item, typ in MAIN_WINDOW_TOP_LINE_METRICS.items():
            it = f'{item}{self.mtype(typ)}'
            heading.append(it)
            nlen = len(it) + len(ITEMS_PAD)
            x_coord_map[item] = (xp, nlen)
            xp += nlen

        for item in MAIN_WINDOW_TOP_LINE_ITEMS_END:
            heading.append(item)
            nlen = len(item) + len(ITEMS_PAD)
            x_coord_map[item] = (xp, nlen)
            xp += nlen
        self.topl.addstr(0, 0, ITEMS_PAD.join(heading), curses.A_STANDOUT | curses.A_BOLD)
        return x_coord_map

    def refresh_client(self, client_id, metrics, counters, client_meta, x_coord_map, y_coord):
        for item in MAIN_WINDOW_TOP_LINE_ITEMS_END:
            coord = x_coord_map[item]
            if item == FS_TOP_MAIN_WINDOW_COL_MNTPT_HOST_ADDR:
                self.mainw.addstr(y_coord, coord[0],
                                  f'{client_meta[CLIENT_METADATA_MOUNT_POINT_KEY]}@'
                                  f'{client_meta[CLIENT_METADATA_HOSTNAME_KEY]}/'
                                  f'{client_meta[CLIENT_METADATA_IP_KEY]}')
        for item in MAIN_WINDOW_TOP_LINE_ITEMS_START:
            coord = x_coord_map[item]
            hlen = coord[1] - len(ITEMS_PAD)
            if item == FS_TOP_MAIN_WINDOW_COL_CLIENT_ID:
                self.mainw.addstr(y_coord, coord[0],
                                  wrap(client_id.split('.')[1], hlen))
            elif item == FS_TOP_MAIN_WINDOW_COL_MNT_ROOT:
                self.mainw.addstr(y_coord, coord[0],
                                  wrap(client_meta[CLIENT_METADATA_MOUNT_ROOT_KEY], hlen))

        cidx = 0
        for item in counters:
            coord = x_coord_map[item]
            m = metrics[cidx]
            typ = MAIN_WINDOW_TOP_LINE_METRICS[MGR_STATS_COUNTERS[cidx]]
            if item.lower() in client_meta['valid_metrics']:
                if typ == MetricType.METRIC_TYPE_PERCENTAGE:
                    self.mainw.addstr(y_coord, coord[0], f'{calc_perc(m)}')
                elif typ == MetricType.METRIC_TYPE_LATENCY:
                    self.mainw.addstr(y_coord, coord[0], f'{calc_lat(m)}')
            else:
                self.mainw.addstr(y_coord, coord[0], "N/A")
            cidx += 1

    def refresh_clients(self, x_coord_map, stats_json):
        counters = [m.upper() for m in stats_json[GLOBAL_COUNTERS_KEY]]
        y_coord = 0
        for client_id, metrics in stats_json[GLOBAL_METRICS_KEY].items():
            self.refresh_client(client_id,
                                metrics,
                                counters,
                                stats_json[CLIENT_METADATA_KEY][client_id],
                                x_coord_map,
                                y_coord)
            y_coord += 1

    def refresh_main_window(self, x_coord_map, stats_json):
        self.refresh_clients(x_coord_map, stats_json)

    def refresh_header(self, stats_json):
        if stats_json['version'] != FS_TOP_SUPPORTED_VER:
            self.header.addstr(0, 0, 'perf stats version mismatch!')
            return False
        client_metadata = stats_json[CLIENT_METADATA_KEY]
        num_clients = len(client_metadata)
        # FUSE mounts report a real mount point, kernel clients report a
        # kernel version; the remainder are libcephfs users.
        num_mounts = len([client for client, metadata in client_metadata.items()
                          if metadata[CLIENT_METADATA_MOUNT_POINT_KEY] != 'N/A'])
        num_kclients = len([client for client, metadata in client_metadata.items()
                            if "kernel_version" in metadata])
        num_libs = num_clients - (num_mounts + num_kclients)
        now = datetime.now().ctime()
        self.header.addstr(0, 0,
                           FS_TOP_VERSION_HEADER_FMT.format(prog_name=FS_TOP_PROG_STR, now=now),
                           curses.A_STANDOUT | curses.A_BOLD)
        self.header.addstr(1, 0, FS_TOP_CLIENT_HEADER_FMT.format(num_clients=num_clients,
                                                                 num_mounts=num_mounts,
                                                                 num_kclients=num_kclients,
                                                                 num_libs=num_libs))
        return True

    def display(self, _):
        x_coord_map = self.refresh_top_line_and_build_coord()
        self.topl.refresh()
        while not self.stop:
            stats_json = self.perf_stats_query()
            self.header.clear()
            self.mainw.clear()
            if self.refresh_header(stats_json):
                self.refresh_main_window(x_coord_map, stats_json)
            self.header.refresh()
            self.mainw.refresh()
            time.sleep(1)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Ceph Filesystem top utility')
    parser.add_argument('--cluster', nargs='?', const='ceph', default='ceph',
                        help='Ceph cluster to connect (default: ceph)')
    parser.add_argument('--id', nargs='?', const='fstop', default='fstop',
                        help='Ceph user to use for connection (default: fstop)')
    parser.add_argument('--conffile', nargs='?', default=None,
                        help='Path to cluster configuration file')
    parser.add_argument('--selftest', dest='selftest', action='store_true',
                        help='run in selftest mode')
    args = parser.parse_args()

    err = False
    ft = FSTop(args)
    try:
        ft.init()
        if args.selftest:
            ft.selftest()
            sys.stdout.write("selftest ok\n")
        else:
            ft.setup_curses()
    except FSTopException as fst:
        err = True
        sys.stderr.write(f'{fst.get_error_msg()}\n')
    except Exception as e:
        err = True
        sys.stderr.write(f'exception: {e}\n')
    finally:
        ft.fini()
    sys.exit(0 if not err else -1)


@ -0,0 +1,25 @@
# -*- coding: utf-8 -*-
from setuptools import setup

__version__ = '0.0.1'

setup(
    name='cephfs-top',
    version=__version__,
    description='top(1) like utility for Ceph Filesystem',
    keywords='cephfs, top',
    scripts=['cephfs-top'],
    install_requires=[
        'rados',
    ],
    classifiers=[
        'Development Status :: 3 - Alpha',
        'Environment :: Console',
        'Intended Audience :: System Administrators',
        'License :: OSI Approved :: GNU Lesser General Public License v2 or later (LGPLv2+)',
        'Operating System :: POSIX :: Linux',
        'Programming Language :: Python :: 3'
    ],
    license='LGPLv2+',
)


@ -0,0 +1,7 @@
[tox]
envlist = py3
skipsdist = true

[testenv:py3]
deps = flake8
commands = flake8 --ignore=W503 --max-line-length=100 cephfs-top
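
The lint check configured above can presumably be run locally with::

  $ tox -e py3

from the directory containing this ``tox.ini``.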