* refs/pull/29493/head:
qa/tasks/mgr/mgr_test_case: get mgrmap from 'mgr dump', not status
qa/tasks/ceph_manager: no newlines in 'ceph -s' output
mon: make mon summary more concise in 'ceph -s'
mon/MgrStatMonitor: set initial service_map 'modified' to cluster mkfs
mon: remove double-nesting of "osdmap" for ceph status
mon/MgrMap: make print_summary (used by 'ceph -s') more concise
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
* refs/pull/29511/head:
common/config: respect POD_MEMORY_REQUEST *and* POD_MEMORY_LIMIT env vars
common/config: let diff show non-build defaults
common/config: do not include multiple 'default' values
Reviewed-by: Mark Nelson <mnelson@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
That way we don't destroy it immediately after binding a reference to
part of it.
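The pitfall being avoided is the classic dangling reference; a minimal
sketch (hypothetical code, not the actual RGW change):

    #include <map>
    #include <string>

    // Hypothetical factory returning a container by value.
    std::map<std::string, std::string> make_attrs();

    void broken() {
      // The map returned by make_attrs() is a temporary, destroyed at
      // the end of this full expression, so 'val' dangles immediately.
      const std::string& val = make_attrs()["acl"];
      // ...any later use of 'val' is undefined behavior
    }

    void fixed() {
      // Keep the owning object alive as long as the reference is needed.
      auto attrs = make_attrs();
      const std::string& val = attrs["acl"];  // valid while 'attrs' lives
    }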
Fixes: https://tracker.ceph.com/issues/41172
Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
We use a pair<pg_notify_t,PastIntervals> everywhere a pg_notify_t is used.
This is silly; just make PastIntervals a member of pg_notify_t instead.
Include some minor compat cruft so we can still speak to pre-octopus OSDs.
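Schematically, the refactor looks like this (a sketch reusing the
existing Ceph types; the fields other than past_intervals are
illustrative, not the exact struct layout):

    // Before: callers carried std::pair<pg_notify_t, PastIntervals>.
    // After: the past intervals ride along inside the notify type itself.
    struct pg_notify_t {
      epoch_t query_epoch;
      epoch_t epoch_sent;
      pg_info_t info;
      PastIntervals past_intervals;  // formerly the .second of the pair
      // encode()/decode() keep a compat path so pre-octopus OSDs, which
      // still expect the old wire format, can be understood.
    };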
Signed-off-by: Sage Weil <sage@redhat.com>
mgr/dashboard: Prevent clone when layering not enabled on parent image
Reviewed-by: Ricardo Marques <rimarques@suse.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
rgw: Bucket mv, bucket chown and user rename utilities
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
std::shared_mutex expects a call to unlock_shared() after lock_shared().
Use the std::shared_lock guard to make this pairing obviously correct.
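For reference, the RAII form looks like this (a minimal standalone
sketch, not the actual code being changed):

    #include <shared_mutex>

    std::shared_mutex m;
    int shared_state = 0;

    int read_value() {
      // shared_lock calls m.lock_shared() here and guarantees a matching
      // m.unlock_shared() on every exit path, including exceptions.
      std::shared_lock lock{m};
      return shared_state;
    }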
Signed-off-by: Casey Bodley <cbodley@redhat.com>
* use primitive types instead of `JLeaf(the_type)`, as they are
  equivalent in this context
* remove fields which are added only if certain channels are
  activated
* allow unknown fields, since we include a variety of data in the
  report, for instance osdmap, usage, and crash info
Signed-off-by: Kefu Chai <kchai@redhat.com>
Fix the regression introduced by 8c50be5df6 so that ceph-mgr's Python
modules are able to import python-common.
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/29116/head:
osd: move heartbeat connection cleanup to helper
osd: break con<->session cycle when removing heartbeat peers
osd: mark down heartbeat connections on shutdown
crimson/: move get_mnow() to ShardServices, pass to heartbeat
crimson/osd: stubs for get_mnow, get_hbstamps
crimson/osd/heartbeat: adapt to new MOSDPing fields
crimson/osdmap_service: add get_mnow(), get_up_epoch()
osd/PeeringState: take HeartbeatStamps refs for current interval
osd: track clock delta between peer OSDs
osd: add get_mnow() interface to OSDService, PG, PeeringState
osd: record startup_time
osd: some minor refactoring/cleanup in handle_osd_ping
Reviewed-by: Samuel Just <sjust@redhat.com>
If a Kubernetes pod spec specifies only a memory limit of X, then the pod
gets both the limits.memory and requests.memory resource fields set, and
Rook passes those along as the POD_MEMORY_LIMIT and POD_MEMORY_REQUEST
environment variables.
This is a problem if only the limit is set, because we will end up
setting our osd_memory_target (and, in the future, other *_memory_targets)
to the hard limit, and the daemon will inevitably reach that threshold
and get killed.
Fix this by also looking at the POD_MEMORY_LIMIT value: apply the ratio
(default: .8) to it, and set our actual target to the min of that and
POD_MEMORY_REQUEST. Also set the "default" target to ratio*limit, so
that the limit still governs when no request is specified.
When both the request and the limit are set to the same value (here
1000000000000), we then see

    "osd_memory_target": {
        "default": "800000000000",
        "env": "800000000000",
        "final": "800000000000"
    },

In a more "normal" situation, where the limit is 1000000000000 and the
request is 500000000000, we get

    "osd_memory_target": {
        "default": "800000000000",
        "env": "500000000000",
        "final": "500000000000"
    },

If only the limit is specified (1000000000000), we get

    "osd_memory_target": {
        "default": "800000000000",
        "final": "800000000000"
    },
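The resulting selection policy boils down to the following sketch
(illustrative only, not the actual common/config code; the function
name and the simplified parsing are assumptions):

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>

    // Sketch: derive a memory target from the Rook-provided env vars.
    // Returns 0 when neither variable is present (keep the build default).
    uint64_t choose_memory_target(double ratio = 0.8) {
      const char* lim = std::getenv("POD_MEMORY_LIMIT");
      const char* req = std::getenv("POD_MEMORY_REQUEST");
      uint64_t target = 0;
      if (lim) {
        // ratio * limit becomes the new "default" target, so it applies
        // even when no request is specified.
        target = static_cast<uint64_t>(
            ratio * std::strtoull(lim, nullptr, 10));
      }
      if (req) {
        // With both variables set, take the min of the request and
        // ratio * limit.
        uint64_t r = std::strtoull(req, nullptr, 10);
        target = target ? std::min(target, r) : r;
      }
      return target;
    }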
Fixes: https://tracker.ceph.com/issues/41037
Signed-off-by: Sage Weil <sage@redhat.com>
This helps avoid the case where new tasks were not being scheduled when
an image name was re-used after a task had been created under the same
name.
Fixes: https://tracker.ceph.com/issues/41032
Signed-off-by: Jason Dillaman <dillaman@redhat.com>