per https://www.debian.org/doc/debian-policy/ch-relationships.html
> Recommends
> This declares a strong, but not absolute, dependency.
>
> The Recommends field should list packages that would be found together
> with this one in all but unusual installations.
ceph-mgr-modules-core provides a set of ceph-mgr modules which are
always enabled. The rook module, on the other hand, enables ceph-mgr to
install and configure a Ceph cluster using Rook. This module is very
useful, but it does not have such a strong connection with
ceph-mgr-modules-core: we can always install it separately when better
integration with Rook is wanted.
See-also: https://tracker.ceph.com/issues/45574
Signed-off-by: Kefu Chai <kchai@redhat.com>
This PR intends to add a separate frontend component for the API docs, which are currently rendered from the Python code.
Fixes: https://tracker.ceph.com/issues/50955
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
* initialize the cct used by the test, otherwise g_ceph_context is
  not set at all.
* instead of using g_ceph_context, use the static member variable cct:
  less dependency on the global instance.
* set up and tear down the cct for the test suite, because global_init()
  initializes g_ceph_context, which cannot be set multiple times.
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/41636/head:
mgr/cephadm/inventory: do not try to resolve current mgr host
pybind/mgr/mgr_module: make get_mgr_ip() return mgr's IP from mgrmap
mgr/restful: use get_mgr_ip() instead of hostname
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
The path_prefix in prometheus.yml was specifying an
endpoint prefix, which was invalid. This resulted in 404
errors when trying to send alerts to alertmanager and
blocked alerts from being sent on to the ceph-dashboard API
receiver. This fix removes the prefix.
Fixes: https://tracker.ceph.com/issues/51073
Signed-off-by: Paul Cuzner <pcuzner@redhat.com>
* refs/pull/41654/head:
mds: do not infinitely recursively print a metric
Reviewed-by: Xiubo Li <xiubli@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
* refs/pull/39910/head:
test: Add test for mgr hang when osd is full
mgr: Set client_check_pool_perm to false
mds: Add full caps to avoid osd full check
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
The CNI configuration may set up a private network for the container, which
is mapped to the hostname in /etc/hosts. For example, my test box sets
up 10.88.0.0/24 because I was using crio + kubeadm on this host earlier
(at least I think that's why):
$ sudo podman run --rm --name test123 --entrypoint /bin/bash -it quay.ceph.io/ceph-ci/ceph:master -c "cat /etc/hosts"
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.8 f9e91bf2478f test123
In any case, we should never trust a lookup of our own hostname from inside
a container!
This isn't quite sufficient, though: if this is a single-host cluster, then
we fall back to using get_mgr_ip(). That value may be distorted by the
public_network option on the mgr, but we don't have any other good
options here, and single-node clusters are unlikely to have complex
network configs.
Refactor a bit to avoid the try/except nesting.
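A rough sketch of that fallback order (purely illustrative; the helper
names and the known_host_ips mapping are assumptions, not the actual
cephadm inventory code):

    # pick an address to advertise without ever resolving our own hostname,
    # since inside a container /etc/hosts may map it to a private CNI
    # address such as 10.88.0.8
    def pick_mgr_address(hostname, known_host_ips, get_mgr_ip):
        ip = known_host_ips.get(hostname)  # addresses already recorded for cluster hosts
        if ip:
            return ip
        # single-host cluster: fall back to the mgr's own idea of its IP,
        # which may be skewed by public_network but is the best we have
        return get_mgr_ip()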
Signed-off-by: Sage Weil <sage@newdream.net>
The previous approach was convoluted: we tried to do a DNS lookup on the
hostname, which would fail if /etc/hosts had an entry. Which, with podman,
it does. And the IP it has will vary in all sorts of weird ways. For
example, CNI on my host means that I get a dynamic address in 10.88.0.0/24.
Avoid all of that nonsense and use the IP that is in the mgrmap. There
may be multiple IPs (v2 + v1, or maybe even IPv4 + IPv6 in the future); in
that case, use the first one.
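For illustration, "use the first one" amounts to something like the sketch
below, assuming an addrvec-style string such as
'[v2:10.88.0.8:6800/42,v1:10.88.0.8:6801/42]' (the function name and the
parsing details are illustrative, not the exact mgr_module code):

    def first_ip_from_addrvec(active_addr):
        # '[v2:10.88.0.8:6800/42,v1:10.88.0.8:6801/42]' -> '10.88.0.8'
        first = active_addr.strip('[]').split(',')[0]
        if first.split(':', 1)[0] in ('v1', 'v2'):
            first = first.split(':', 1)[1]  # drop the 'v2:'/'v1:' prefix
        return first.rsplit(':', 1)[0]      # drop ':port/nonce'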
Signed-off-by: Sage Weil <sage@newdream.net>
osd: Run osd bench test to override default max osd capacity for mclock
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
rgw: parse tenant name out of rgwx-bucket-instance
Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: Shilpa Jagannath <smanjara@redhat.com>
The multipart upload meta object is deleted when the multipart upload
is completed. When the bucket is versioned, the meta object needs to be
deleted from the bucket index directly rather than going through the
versioned delete process, which would add a delete marker to the bucket
index instead of simply removing the entry.
Signed-off-by: J. Eric Ivancich <ivancich@redhat.com>
based on recent observation, quite a few C++ source files take more
than 3.0 GiB of memory to compile; for instance, test_mock_HttpClient.cc
can take up to 6270 MiB. so increase MAX_{LINK,COMPILE}_MEM accordingly.
Signed-off-by: Kefu Chai <kchai@redhat.com>
lower the number of jobs: we are experiencing build failures on a
builder with 48c96t and 193 GiB of free memory. the failures were caused
by the OOM killer, which kills the C++ compiler:
[498376.128969] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/jenkins.service,task=cc1plus,pid=1387895,uid=1110
[498376.145288] Out of memory: Killed process 1387895 (cc1plus) total-vm:3323312kB, anon-rss:3164568kB, file-rss:0kB, shmem-rss:0kB, UID:1110
[498376.315185] oom_reaper: reaped process 1387895 (cc1plus), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[498377.882072] cc1plus invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
before this change, we used the total memory to calculate the number
of jobs, and assumed that each job takes at most 2.5 GiB of memory; in
the case above, the number of jobs is 96. after this change, we use the
free memory and increase the memory per job to 3.0 GiB; in the case
above, the number of jobs would be 85.
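roughly, the sizing rule looks like the following sketch (illustrative
only, not the actual build script; it assumes MemAvailable from
/proc/meminfo as "free memory"):

    import os

    GIB_PER_JOB = 3.0  # assumed per-job budget, per the change above

    def compile_jobs():
        # read MemAvailable (in KiB) from /proc/meminfo
        with open('/proc/meminfo') as f:
            meminfo = dict(line.split(':', 1) for line in f)
        free_gib = int(meminfo['MemAvailable'].split()[0]) / 1024 / 1024
        # never exceed the number of hardware threads
        return max(1, min(os.cpu_count() or 1, int(free_gib / GIB_PER_JOB)))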
Signed-off-by: Kefu Chai <kchai@redhat.com>
for better readability, and so it's easier to make this step optional
if the developer is not interested in using the restful mgr module.
Signed-off-by: Kefu Chai <kchai@redhat.com>
- UI: The object locking retention period must be expressed in days;
  years values coming from the API (which keeps parameter parity with
  the RGW API) are converted to days.
- API: add more validation and increase test coverage.
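For illustration, the years-to-days conversion amounts to something like
the sketch below (the function name and the 365-day year are assumptions,
not the dashboard's exact code):

    def retention_period_in_days(days=0, years=0):
        # the API accepts either days or years, mirroring the RGW API;
        # the UI only shows days, so years are converted (365-day year assumed)
        if days and years:
            raise ValueError('specify either days or years, not both')
        return days or years * 365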
Fixes: https://tracker.ceph.com/issues/49885
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
Add data-allocate-pct to ceph-volume lvm batch to allow allocating only
a certain percentage of a drive, e.g. to reduce wear-leveling impact
on SSDs.
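In effect, the allocated size per device would be computed along the
lines of this sketch (names are illustrative, not ceph-volume's actual
code):

    def allocation_bytes(device_size_bytes, data_allocate_pct=100):
        # allocate only the requested percentage of the device, leaving
        # the rest unprovisioned (e.g. to reduce SSD wear-leveling impact)
        if not 0 < data_allocate_pct <= 100:
            raise ValueError('data-allocate-pct must be in (0, 100]')
        return device_size_bytes * data_allocate_pct // 100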
Signed-off-by: Jonas Pfefferle <pepperjo@japf.ch>