helper function to remove the user:buckets object.
rgw_remove_uid_index() now omits the object version tracker argument to
avoid reading the user info
Signed-off-by: Casey Bodley <cbodley@redhat.com>
rgw: Bucket mv, bucket chown and user rename utilities
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>
std::shared_mutex expects a call to unlock_shared() after lock_shared().
use the std::shared_lock guard to make it more obviously correct
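A minimal sketch of the pattern (hypothetical names, not the actual call
site changed here):

    #include <shared_mutex>

    struct cache {
      std::shared_mutex lock;   // hypothetical mutex guarding 'value'
      int value = 0;

      int read() {
        // RAII guard: lock_shared() on construction, unlock_shared() on
        // destruction, even on early return or exception
        std::shared_lock l{lock};
        return value;
      }
    };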
Signed-off-by: Casey Bodley <cbodley@redhat.com>
to address https://github.com/sphinx-doc/sphinx/issues/3620, we need to
use sphinx with its fix at
e049f86b2d.
in other words, we need to use sphinx v2.0.0 and up. but sphinx 2.0
requires python >= 3.5, so we have to use python3 for building the
documents.
in this change:
* doc-requirements.txt: install python3 packages on debian derivatives
* build-doc: install python3.6 packages from EPEL7, and use a python3
  venv to run sphinx2
* doc-requirements.txt: bump all python packages to the latest
  stable versions.
Signed-off-by: Kefu Chai <kchai@redhat.com>
* use primitive types instead of `JLeaf(the_type)` as they are
equivalent in this context
* remove fields which are added only if certain channels are
activated.
* allow unknown fields, as we are including various stuff
in the report, for instance, osdmap, usage, crash info, etc.
Signed-off-by: Kefu Chai <kchai@redhat.com>
fix the regression introduced by 8c50be5df6, so ceph-mgr's python
modules are able to import python-common.
Signed-off-by: Kefu Chai <kchai@redhat.com>
* refs/pull/29116/head:
osd: move heartbeat connection cleanup to helper
osd: break con<->session cycle when removing heartbeat peers
osd: mark down heartbeat connections on shutdown
crimson/: move get_mnow() to ShardServices, pass to heartbeat
crimson/osd: stubs for get_mnow, get_hbstamps
crimson/osd/heartbeat: adapt to new MOSDPing fields
crimson/osdmap_service: add get_mnow(), get_up_epoch()
osd/PeeringState: take HeartbeatStamps refs for current interval
osd: track clock delta between peer OSDs
osd: add get_mnow() interface to OSDService, PG, PeeringState
osd: record startup_time
osd: some minor refactoring/cleanup in handle_osd_ping
Reviewed-by: Samuel Just <sjust@redhat.com>
If a kubernetes pod spec specifies a memory limit of X but no request,
then the pod gets both the limits.memory and requests.memory resource
fields set to X, and rook passes those as the POD_MEMORY_LIMIT and
POD_MEMORY_REQUEST environment variables.
This is a problem if only the limit is set, because we will end up
setting our osd_memory_target (and, in the future, other *_memory_targets)
to the hard limit, and the daemon will inevitably reach that threshold
and get killed.
Fix this by also looking at the POD_MEMORY_LIMIT value, applying the
ratio (default: .8) to it, and setting our actual target to the min of
that and POD_MEMORY_REQUEST.
Also, set the "default" target to ratio*limit, so that it will apply in
general when no request is specified.
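Roughly, the resulting selection logic is sketched below (illustrative
only; the names and the config plumbing are simplified, not the actual
implementation):

    #include <algorithm>
    #include <cstdint>
    #include <cstdlib>

    // sketch: derive the effective memory target from the pod env vars;
    // 'ratio' is the (default .8) ratio applied to the hard limit
    uint64_t effective_memory_target(double ratio)
    {
      const char* limit = std::getenv("POD_MEMORY_LIMIT");
      const char* request = std::getenv("POD_MEMORY_REQUEST");
      uint64_t target = 0;
      if (limit) {
        // the "default" target: ratio * limit
        target = static_cast<uint64_t>(
          ratio * std::strtoull(limit, nullptr, 10));
      }
      if (request) {
        // never exceed an explicitly requested amount
        uint64_t req = std::strtoull(request, nullptr, 10);
        target = target ? std::min(target, req) : req;
      }
      return target;  // 0 means no pod-derived target
    }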
When both request and limit are 1T, we then see
"osd_memory_target": {
"default": "800000000000",
"env": "800000000000",
"final": "800000000000"
},
In a more "normal" situation where limit is 1T and request is 500G, we get
"osd_memory_target": {
"default": "800000000000",
"env": "500000000000",
"final": "500000000000"
},
If only limit is specified (to 1T), we get
"osd_memory_target": {
"default": "800000000000",
"final": "800000000000"
},
Fixes: https://tracker.ceph.com/issues/41037
Signed-off-by: Sage Weil <sage@redhat.com>
This helps to avoid the case where new tasks were not being scheduled
when an image name was re-used after a task had previously been created
under the same name.
Fixes: https://tracker.ceph.com/issues/41032
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Move to ondisk format v3. This means that per-pool omap keys may exist,
but does not imply that *all* objects use the new form until the
per_pool_omap=1 super key is also set.
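A rough sketch of the gating this implies (hypothetical names, not the
real BlueStore code):

    #include <cstdint>

    // with ondisk format >= 3 an individual object may carry per-pool
    // omap keys, but only the per_pool_omap=1 super key lets us assume
    // that every object has been converted
    bool object_uses_per_pool_omap(uint32_t ondisk_format,
                                   bool per_pool_omap_superkey,
                                   bool object_flagged_per_pool)
    {
      if (ondisk_format < 3) {
        return false;                  // old format: legacy omap keys only
      }
      if (per_pool_omap_superkey) {
        return true;                   // store-wide guarantee: all converted
      }
      return object_flagged_per_pool;  // mixed store: decide per object
    }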
Signed-off-by: Sage Weil <sage@redhat.com>
The get_user_bytes() helper is a bit weird because it uses the
raw_used_rate (replication/EC factor) so that it can work *backwards*
from raw usage to normalized user usage. However, the legacy case that
works from PG stats does not use this factor... and the stored_raw value
(in the JSON output only) was computed by incorrectly passing in a
factor of 1.0, which meant that for legacy mode it was a bogus value.
Fix by calculating stored_raw as stored_normalized * raw_used_rate.
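As a rough illustration of the relationship (hypothetical helper names,
not the actual code):

    // raw_used_rate is the replication/EC expansion factor, e.g. 3.0 for
    // 3x replication or 1.5 for a 4+2 EC pool
    double normalized_from_raw(double stored_raw, double raw_used_rate) {
      return stored_raw / raw_used_rate;   // what get_user_bytes() reports
    }

    // the fix: derive stored_raw by multiplying the normalized value back
    // up by the factor, instead of calling get_user_bytes() with a factor
    // of 1.0 (which is bogus in the legacy PG-stats path)
    double raw_from_normalized(double stored_normalized, double raw_used_rate) {
      return stored_normalized * raw_used_rate;
    }

For example, with 3x replication 1 GiB of user data corresponds to
3 GiB of stored_raw.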
Signed-off-by: Sage Weil <sage@redhat.com>
This is a minimal change: we aren't separately reporting data vs omap
usage (like we do in 'osd df' output for individual osds).
Signed-off-by: Sage Weil <sage@redhat.com>