* since focal and centos both have yaml-cpp 0.6 available, which dropped
  boost as its dependency, moving to 0.6 seems a good upgrade.
* cmake: delete Buildyaml; since the distro supplies v0.6, it is not needed
This fixes the build failure, as jaegertracing requires yaml-cpp v0.6+
```
Could NOT find yaml-cpp: Found unsuitable version "", but required is at
least "0.5.1" (found yaml-cpp_LIBRARY-NOTFOUND)
```
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
better maintainability this way, and drop the unsupported options:
- journaler batch interval
- journaler batch max
Signed-off-by: Kefu Chai <kchai@redhat.com>
to silence mypy warnings like:
```
tasks/vstart_runner.py:691: error: Definition of "_run_python" in base class "LocalCephFSMount" is incompatible with definition in base class "CephFSMount"
tasks/vstart_runner.py:705: error: Definition of "_run_python" in base class "LocalCephFSMount" is incompatible with definition in base class "CephFSMount"
```
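A minimal sketch of what triggers this error (the signatures and the derived class name below are illustrative, not the real ones from qa/tasks): when a class inherits from two bases whose definitions of the same method disagree, mypy reports the incompatibility at the inheriting class.
```
class CephFSMount:
    def _run_python(self, pyscript: str, py_version: str = 'python3') -> None:
        ...

class LocalCephFSMount:
    # incompatible override: the parameter list differs from CephFSMount's
    def _run_python(self, pyscript: str) -> None:
        ...

# mypy flags this class: Definition of "_run_python" in base class
# "LocalCephFSMount" is incompatible with definition in base class
# "CephFSMount". Making the two signatures match silences it.
class LocalKernelMount(LocalCephFSMount, CephFSMount):
    pass
```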
Signed-off-by: Kefu Chai <kchai@redhat.com>
cephadm: haproxy 2.4 defaults to a different container user.
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Dimitri Savineau <dsavinea@redhat.com>
doc/dev/perf_counters: update docs to include more context about perf counter usage
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
as per
https://www.sphinx-doc.org/en/master/usage/restructuredtext/domains.html
> Like py:currentmodule, this directive produces no output. Instead, it
> serves to notify Sphinx that all following option directives document
> options for the program called name.
> ...
> The program name may contain spaces (in case you want to document
> subcommands like svn add and svn commit separately).
and to avoid warnings like:
```
doc/man/8/ceph-volume.rst:424: WARNING: Duplicate explicit target name:
"cmdoption-ceph-volume-h".
```
we should specify a different "program" for each set of options.
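For example (the subcommand split below is illustrative, not a quote of the actual man page), scoping each set of options to its own program gives each option a distinct target:
```
.. program:: ceph-volume lvm

.. option:: -h, --help

   Show the help message for the ``lvm`` subcommand.

.. program:: ceph-volume simple

.. option:: -h, --help

   Show the help message for the ``simple`` subcommand.
```
Sphinx then generates "cmdoption-ceph-volume-lvm-h" and
"cmdoption-ceph-volume-simple-h" instead of two copies of
"cmdoption-ceph-volume-h".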
Signed-off-by: Kefu Chai <kchai@redhat.com>
otherwise we have the following warning in the health report:
```
{"status":"HEALTH_WARN","checks":{"RECENT_MGR_MODULE_CRASH":{"severity":"HEALTH_WARN","summary":{"message":"1 mgr modules have recently crashed","count":1},"muted":false}},"mutes":[]}
```
and it does not disappear after the test waits for 30 seconds.
and the tasks.mgr.test_module_selftest.TestModuleSelftest test
fails like:
```
2021-07-21T09:59:52.560 INFO:tasks.cephfs_test_runner:======================================================================
2021-07-21T09:59:52.561 INFO:tasks.cephfs_test_runner:ERROR: test_module_commands (tasks.mgr.test_module_selftest.TestModuleSelftest)
2021-07-21T09:59:52.561 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2021-07-21T09:59:52.561 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2021-07-21T09:59:52.562 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_6a5d5abc027f706687dec92f92ff6fc6f074d2ae/qa/tasks/mgr/test_module_selftest.py", line 201, in test_module_commands
2021-07-21T09:59:52.562 INFO:tasks.cephfs_test_runner:    self.wait_for_health_clear(timeout=30)
2021-07-21T09:59:52.562 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_6a5d5abc027f706687dec92f92ff6fc6f074d2ae/qa/tasks/ceph_test_case.py", line 172, in wait_for_health_clear
2021-07-21T09:59:52.563 INFO:tasks.cephfs_test_runner:    self.wait_until_true(is_clear, timeout)
2021-07-21T09:59:52.563 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_6a5d5abc027f706687dec92f92ff6fc6f074d2ae/qa/tasks/ceph_test_case.py", line 209, in wait_until_true
2021-07-21T09:59:52.563 INFO:tasks.cephfs_test_runner:    raise TestTimeoutError("Timed out after {0}s and {1} retries".format(elapsed, retry_count))
2021-07-21T09:59:52.564 INFO:tasks.cephfs_test_runner:tasks.ceph_test_case.TestTimeoutError: Timed out after 30s and 0 retries
```
in this change, the crash reports are nuked right after
we see the warning, so that we can have a clean health
report.
Fixes: https://tracker.ceph.com/issues/51743
Signed-off-by: Kefu Chai <kchai@redhat.com>
qa/standalone: fixing the timings when waiting for deep-scrub to start
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Sridhar Seshasayee <sseshasa@redhat.com>
This is useful in perf testing and benchmarking, as comparing against
`CyanStore` allows judging the penalty of `AlienStore`.
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
The goals here are:
1. Make deprecation of `FileStore` easier, as its creational
   dependencies are segmented into a variant of `create()`
   that could be cut off altogether with `FileStore`.
2. Allow crimson to adapt `create()` without burdening it with
   `FileStore`'s dependencies.
3. Simplify the implementation, as a bunch of preprocessor
   directives have accumulated there over time.
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
This boils down to 2 things:
1. building `MemStore` for crimson,
2. implementing the crimson-specific variant of `get_omap_values()`.
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
crimson/tools/store_nbd: pass app.alien() down to FSDriver
Reviewed-by: Chunmei Liu <chunmei.liu@intel.com>
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
qa/*/test_envlibrados_for_rocksdb.sh: remove OS specific configuration
Reviewed-by: David Galloway <dgallowa@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
this change partially reverts 81305b0da9; otherwise we have the
following errors:
```
tasks/vstart_runner.py:691: error: Definition of "_run_python" in base class "LocalCephFSMount" is incompatible with definition in base class "CephFSMount"
tasks/vstart_runner.py:705: error: Definition of "_run_python" in base class "LocalCephFSMount" is incompatible with definition in base class "CephFSMount"
```
Signed-off-by: Kefu Chai <kchai@redhat.com>
Using only the exit status of `ceph-bluestore-tool show-label` to
determine if a device is a bluestore OSD could report a false negative
if there is a system error when `ceph-bluestore-tool` opens the device.
A better check is to open the device and read the bluestore device
label (the first 22 bytes of the device) to look for the bluestore
device signature ("bluestore block device"). If ceph-volume fails to
open the device due to a system error, it is safest to assume the device
is BlueStore so that an existing OSD isn't overwritten.
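A sketch of that check (the function name and logging are ours, not
necessarily the exact ceph-volume code; the 22-byte signature is as
described above):
```
import logging

logger = logging.getLogger(__name__)

# "bluestore block device" is exactly 22 bytes long
BLUESTORE_LABEL = b'bluestore block device'

def is_bluestore_device(device_path):
    """Return True if the device carries the bluestore label, or if the
    device cannot be read at all (the safest assumption, so an existing
    OSD is never overwritten)."""
    try:
        with open(device_path, 'rb') as fd:
            return fd.read(len(BLUESTORE_LABEL)) == BLUESTORE_LABEL
    except OSError as e:
        logger.error('failed to read device %s: %s', device_path, e)
        return True
```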
Signed-off-by: Blaine Gardner <blaine.gardner@redhat.com>
This seems like a reasonable default value based on testing results here:
https://docs.google.com/spreadsheets/d/1dwKcxFKpAOWzDPekgojrJhfiCtPgiIf8CGGMG1rboRU/edit?usp=sharing
Eventually we may want to rethink how the throttles, and even how flow
control, work, but this at least gives us some basic limits now (a little
higher than the old value of 100 that we used for many years).
Signed-off-by: Mark Nelson <mnelson@redhat.com>
Another alternative would be to investigate a different setup
leveraging `--sysctl net.ipv4.ip_unprivileged_port_start=0`,
but that would be a larger PR.
Fixes: https://tracker.ceph.com/issues/51355
Signed-off-by: Sebastian Wagner <sewagner@redhat.com>