The ceph_erasure_code_benchmark output is converted into a JSON series
suitable for display in HTML with the http://www.flotcharts.org/
library. A self-contained copy of the HTML, JS and CSS files is
included for durability and can be used from the source tree with:
CEPH_ERASURE_CODE_BENCHMARK=src/ceph_erasure_code_benchmark \
PLUGIN_DIRECTORY=src/.libs \
qa/workunits/erasure-code/bench.sh fplot jerasure |
tee qa/workunits/erasure-code/bench.js
and displayed with:
firefox qa/workunits/erasure-code/bench.html
Signed-off-by: Loic Dachary <loic@dachary.org>
Expand the default suite to enumerate all cases that are relevant to the
current code base, so that the output is easier to consume. Namely, that
means:
* iterating over object sizes of 4KB (what is used by default) and
  1MB (what was previously benchmarked)
* grouping results into series that make sense to plot, showing the
  behavior of a given technique for a series of K/M values and all
  possible erasures.
Instead of specifying the number of iterations to run, set the total
size of the data set to be exercised and compute the iterations by
dividing it by the object size. Since the object size varies,
presetting the number of iterations would not yield meaningful results.
The PARAMETERS environment variable is added to enable the caller to
inject --parameter jerasure-variant=generic, for instance.
The packet size is calculated from the other parameters. The options
are limited when objects are small (4KB) and giving control over it
would not make a real difference. The packet size is capped at a
maximum of 3100 bytes, which is roughly the value found to be optimal
for large objects (1MB).
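Illustratively, a run could be driven like this (TOTAL_SIZE is a
hypothetical variable name, not necessarily the one used by bench.sh;
PARAMETERS and the fplot invocation are taken from the script itself):

  # sketch only: the data set size is chosen, the iterations are derived
  TOTAL_SIZE=$((1024 * 1024 * 1024)) \
  PARAMETERS='--parameter jerasure-variant=generic' \
    qa/workunits/erasure-code/bench.sh fplot jerasure

  # inside the script, roughly: ITERATIONS=$((TOTAL_SIZE / OBJECT_SIZE))
  # with OBJECT_SIZE iterating over 4KB and 1MB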
Signed-off-by: Loic Dachary <loic@dachary.org>
Previously this test assumed no pre-existing filesystem and no MDS
running. Generalize it to nuke any existing filesystems found before
running, so that it can be used inside a vstart cluster that was
started with MDS > 0.
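A rough sketch of what the nuking amounts to (the actual workunit may
differ):

  # tear down any filesystem that already exists before the test starts
  ceph mds fail 0 || true        # assumed: stop an active MDS, if any
  for fs in $(ceph fs ls | sed -n 's/^name: \([^,]*\),.*/\1/p'); do
      ceph fs rm "$fs" --yes-i-really-mean-it
  done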
Signed-off-by: John Spray <john.spray@redhat.com>
starting hadoop-wordcount test

A sample command to run the test on hadoop 2.x is:

  TESTDIR=/home/test HADOOP_HOME=/usr/lib/hadoop \
  HADOOP_MR_HOME=/usr/lib/hadoop-mapreduce \
  sh workunits/hadoop-wordcount/test.sh
Signed-off-by: rootfs <hchen@redhat.com>
`cephfs set_layout` was broken and is now deprecated
in favour of using xattrs for layout. Retire the
kclient-specific test.
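For reference, the xattr interface that replaces it looks roughly like
this (path and value are illustrative):

  # set and read a file layout through the ceph.file.layout virtual xattrs
  setfattr -n ceph.file.layout.stripe_unit -v 1048576 /mnt/cephfs/file
  getfattr -n ceph.file.layout /mnt/cephfs/file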
Fixes: #8773
Signed-off-by: John Spray <john.spray@redhat.com>
Make sure gets and sets of tiering-specific variables succeed on tier
pools and fail on non-tier pools.
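For example (pool names illustrative; target_max_bytes stands in for
any tiering-specific variable):

  ceph osd pool set cachepool target_max_bytes 1000000   # tier pool: must succeed
  ceph osd pool get cachepool target_max_bytes           # tier pool: must succeed
  ! ceph osd pool set datapool target_max_bytes 1000000  # non-tier pool: must fail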
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
This reverts commit 29c33f0c05.
We don't need the debugging any more, and having two separate fsx runners
already caused one update-in-the-wrong-place issue.
Signed-off-by: Greg Farnum <greg@inktank.com>
If the test is run against a cluster started with vstart.sh (which is
the case for make check), the --asok-does-not-need-root option disables
the use of sudo and allows the test to run without requiring privileged
user permissions.
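A minimal sketch of the intended effect (the actual handling in the
test may differ):

  # sketch: drop sudo for admin socket (asok) commands when the flag is given
  SUDO=sudo
  [ "$1" = "--asok-does-not-need-root" ] && SUDO=""
  $SUDO ceph daemon osd.0 version   # works unprivileged against a vstart.sh cluster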
Signed-off-by: Loic Dachary <loic@dachary.org>
(cherry picked from commit 522174b066)
mon: OSDMonitor: 'osd pool' - if we can set it, we must be able to get it
Reviewed-by: Loic Dachary <loic@dachary.org>
Reviewed-by: Sage Weil <sage@redhat.com>
Add support to get the values for the following variables:
- target_max_objects
- target_max_bytes
- cache_target_dirty_ratio
- cache_target_full_ratio
- cache_min_flush_age
- cache_min_evict_age
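For example (pool name illustrative):

  ceph osd pool get hot-storage target_max_bytes
  ceph osd pool get hot-storage cache_target_dirty_ratio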
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Avoid the possibility that we create multiple OSDs due to retries by
passing in the optional uuid arg. (A stray osd id will make the 'osd
tell' tests a few lines down fail.)
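For example (illustrative):

  # with a uuid, 'ceph osd create' is idempotent: a retry returns the same id
  uuid=$(uuidgen)
  id=$(ceph osd create $uuid)
  id2=$(ceph osd create $uuid)
  test "$id" = "$id2"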
Fixes: #8728
Signed-off-by: Sage Weil <sage@inktank.com>
... that after a fs new on fresh pools, crash_replay_interval
is set to the default on the data pool.
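Roughly (pool and filesystem names illustrative):

  ceph fs new cephfs fs_metadata fs_data            # fs_* are freshly created pools
  ceph osd pool get fs_data crash_replay_interval   # expect the default value, not 0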
Signed-off-by: John Spray <john.spray@redhat.com>
If the test is run against a cluster started with vstart.sh (which is
the case for make check), the --asok-does-not-need-root option disables
the use of sudo and allows the test to run without requiring privileged
user permissions.
Signed-off-by: Loic Dachary <loic@dachary.org>
Accommodate changes:
* data and metadata pools no longer exist by default
* filesystem-using tests must use `fs new` to create
the filesystem first.
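For example, a filesystem-using test now has to create its pools and
filesystem explicitly, roughly like this (names illustrative):

  ceph osd pool create cephfs_data 8
  ceph osd pool create cephfs_metadata 8
  ceph fs new cephfs cephfs_metadata cephfs_data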
Signed-off-by: John Spray <john.spray@inktank.com>
Fail if 'rbd rm' fails - most probably it'd fail with "image still has
watchers" and in that case it's a bug in the kernel client which we do
want to notice. Also nuke the trap-based error handling - cleanup() is
half-baked and not really necessary here.
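In other words, something along these lines (sketch only; IMAGE is
illustrative):

  set -e            # assumed: any failing command aborts the test
  rbd rm "$IMAGE"   # "image still has watchers" now fails loudly instead of being ignored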
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Take advantage of the fact that 'rbd map' will now talk to udev and
output the device that got assigned by the kernel to the newly created
mapping. Drop the get_id() cruft, udevadm settle and chown calls.
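So a test can now do something like (IMAGE is illustrative):

  # 'rbd map' prints the assigned device node, e.g. /dev/rbd0
  DEV=$(sudo rbd map "$IMAGE")
  sudo mkfs.ext4 "$DEV"   # no get_id(), udevadm settle or chown needed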
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
Commit 7dc93a9651 fixed an incorrect behavior of the OSD's 'osd bench'
value hard caps, but unfortunately the test was not updated to match.
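For example (numbers illustrative; the exact limits come from the
osd_bench_* settings):

  ceph tell osd.0 bench $((1024 * 1024 * 1024)) $((4 * 1024 * 1024))      # within the caps
  ! ceph tell osd.0 bench $((1024 * 1024 * 1024)) $((256 * 1024 * 1024))  # block size over the cap: rejected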
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>