* refs/pull/25360/head:
qa/workunits/mon/pg_autoscaler: clean up pools afterwards
qa/suites/rados/singleton/all/pg-autoscaler: whitelist health warnings
qa/tasks/ceph: wait for splits/merges before final scrub
mon/OSDMonitor: be tidy with target_size_ratio and pre-nautilus code
mgr/pg_autoscaler: simplify conditions
qa/suites/rados: add simple pg-autoscaler test
qa/workunits/cephtool/test.sh: pg_autoscale_mode=off while testing pg_num etc
doc/rados/operations: document autoscaler and its health warnings
mgr/pg_autoscaler: add pg autoscaler module
pybind/mgr/mgr_util: move format_ helpers out of status module
mon/OSDMonitor: accept optional target_size_{bytes,ratio} to 'osd pool create'
mon/OSDMonitor: remove max_split_count configurable
osd/osd_types: pool_opts_t: int -> int64_t
osd/osd_types: pool_opts: fix whitespace
osd/osd_types: pool_opts_t: make encoding feature-dependent
mgr/devicehealth: pg_num_min 1 for device_health_metrics pool
mon/OSDMonitor: accept optional pg_num_min to 'osd pool create'
mon/OSDMonitor: apply osd_pool_default_pg_autoscale_mode to new pools
pybind/mgr/mgr_module: some accessors
mon/MgrMonitor: enable progress module by default
osd/osd_types: add pool pg_autoscale_mode, pg_num_min, target_size_{bytes,ratio} properties
osdc/Objecter: revise get_latest_version locking
os/memstore: ignore OP_COLL_SET_BITS
qa: generalise REQUIRE_MEMSTORE
mgr: drop GIL in get_config
mon: add 'size' arg to `osd pool create`
mon: use pg_num_target for checks during creation
mgr: revise locking in getter paths
common/options: add `mon_target_pg_per_osd`
mgr: expose OSDMap.pool_raw_used_rate
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
This avoids a huge pg merge from 100s to 4, which takes a long time and
makes the teuthology scrub cleanup time out.
Signed-off-by: Sage Weil <sage@redhat.com>
Add explanatory information on:
* "rgw swift account in url" (including the Swift account in the Swift
API url and Keystone endpoint)
* "rgw swift versioning enabled" (enabling Swift object versioning)
* "rgw s3 auth use keystone" (enabling S3 authentication against
Keystone)
* "rgw keystone implicit tenants" (multi-tenancy via Keystone, including
its implications for the Swift and S3 APIs)
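For illustration, the four options might be set together in ceph.conf like
this (a sketch only: the [client.rgw.gateway] section name and the values
are examples, not recommendations):

    [client.rgw.gateway]
    rgw swift account in url = true
    rgw swift versioning enabled = true
    rgw s3 auth use keystone = true
    rgw keystone implicit tenants = true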
Fixes: http://tracker.ceph.com/issues/36765
Signed-off-by: Florian Haas <florian@citynetwork.eu>
I see this error when using "ceph orchestrator service ls":
Error EINVAL: Traceback (most recent call last):
  File "/usr/lib64/ceph/mgr/orchestrator_cli/module.py", line 318, in handle_command
    return self._handle_command(inbuf, cmd)
  File "/usr/lib64/ceph/mgr/orchestrator_cli/module.py", line 330, in _handle_command
    return self._list_services(cmd)
  File "/usr/lib64/ceph/mgr/orchestrator_cli/module.py", line 165, in _list_services
    s.config_location))
AttributeError: 'ServiceDescription' object has no attribute 'config_location'
The config_location field should be rados_config_location.
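A paraphrase of the corresponding one-liner in _list_services (not the
verbatim patch; the surrounding table/row code is elided):

    # orchestrator_cli/module.py, _list_services() -- sketch, not verbatim:
    s.rados_config_location))   # was: s.config_location))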
Signed-off-by: Jeff Layton <jlayton@redhat.com>
The centos-sclo-rh-source repo leads to a 404 at the moment, and we are not
using the source repo for building Ceph, so we can just skip any
unavailable repo.
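One way to express "skip if unavailable" (a sketch; the repo file path and
section name here are assumptions, not the actual patch) is in the repo's
yum configuration:

    # /etc/yum.repos.d/CentOS-SCLo-scl-rh.repo (assumed path)
    [centos-sclo-rh-source]
    ...existing settings...
    skip_if_unavailable=1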
Fixes: http://tracker.ceph.com/issues/37707
Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
Suggest or apply changes to each pool's pg_num based on either its current
utilization or the expected utilization given by an administrator-provided
target_size_{bytes,ratio}.
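The core sizing idea can be sketched in Python like so (a simplification,
not the module's exact code; the function name and defaults are
illustrative):

    import math

    def suggested_pg_num(capacity_ratio, num_osds, pool_size,
                         target_pg_per_osd=100, pg_num_min=4):
        # capacity_ratio: fraction of total capacity this pool uses (or the
        # administrator-provided target ratio); pool_size: replica count
        raw = capacity_ratio * num_osds * target_pg_per_osd / pool_size
        raw = max(raw, pg_num_min)
        # round to the nearest power of two
        return 2 ** int(round(math.log(raw, 2)))

    # e.g. a pool expected to use 80% of a 10-OSD, 3x-replicated cluster:
    # suggested_pg_num(0.8, 10, 3) -> 256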
Signed-off-by: Sage Weil <sage@redhat.com>
This isn't really relevant or useful now that the mgr is throttling the
actual pg_num adjustment based on pg_num_target, % misplaced, etc.
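Roughly, the mgr-side throttle works like this (an illustrative sketch,
not the actual mgr code):

    def next_pg_num(pg_num, pg_num_target, misplaced_ratio,
                    max_misplaced=0.05):
        # hold position until recovery catches up
        if misplaced_ratio >= max_misplaced:
            return pg_num
        # otherwise take one step toward the target
        if pg_num < pg_num_target:
            return pg_num + 1
        if pg_num > pg_num_target:
            return pg_num - 1
        return pg_num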
Signed-off-by: Sage Weil <sage@redhat.com>
Move it up into CephTestCase so that mgr tests can
use it too, and pick it up in vstart_runner.py so
that these tests will work neatly there.
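The shared helper might look roughly like this (the class attribute name
matches the commit, but the config accessor and skip message are
illustrative, not the exact qa code):

    import unittest

    class CephTestCase(unittest.TestCase):
        # subclasses set this to skip unless OSDs run memstore
        REQUIRE_MEMSTORE = False

        def setUp(self):
            if self.REQUIRE_MEMSTORE:
                objectstore = self.ceph_cluster.get_config("osd_objectstore")
                if objectstore != "memstore":
                    self.skipTest("osd_objectstore != memstore")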
Signed-off-by: John Spray <john.spray@redhat.com>
Take advantage of keyword arguments to extend
what we can do in a single command during pool creation.
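For example, via the librados Python binding one could now issue a single
creation command carrying the extra keyword (here, cluster is an assumed,
already-connected rados.Rados handle, and the exact argument set shown is
illustrative):

    import json

    cmd = {
        "prefix": "osd pool create",
        "pool": "mypool",
        "pg_num": 32,
        "size": 2,   # the new keyword argument
    }
    ret, out, err = cluster.mon_command(json.dumps(cmd), b'')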
Signed-off-by: John Spray <john.spray@redhat.com>
This way, someone creating pools can proceed
as long as they've decreased the pg_num_target
of other pools, even if the adjustment hasn't
fully completed yet.
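A sketch of the check (simplified; the helper name and fields are
illustrative, not the monitor's exact code):

    def can_create_pool(pools, new_pg_num, new_size, num_osds,
                        mon_max_pg_per_osd=250):
        # count where each pool is heading (pg_num_target), not where
        # its in-progress adjustment currently is (pg_num)
        projected = sum(p['pg_num_target'] * p['size'] for p in pools)
        projected += new_pg_num * new_size
        return projected <= mon_max_pg_per_osd * num_osds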
Signed-off-by: John Spray <john.spray@redhat.com>
This is the partner to mon_max_pg_per_osd: it is a more conservative
target for PG auto adjustment, leaving some breathing room for situations
where we might temporarily exceed our target PG count (but do not want to
exceed our maximum PG count).
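For intuition, a small worked example (the numbers are illustrative;
defaults vary by release):

    mon_target_pg_per_osd = 100   # soft target the autoscaler aims for
    mon_max_pg_per_osd = 250      # hard cap enforced on pg_num changes
    num_osds = 10

    # per-OSD PG replicas the cluster aims for vs. will tolerate:
    target_pg_replicas = mon_target_pg_per_osd * num_osds   # 1000
    max_pg_replicas = mon_max_pg_per_osd * num_osds         # 2500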
Signed-off-by: John Spray <john.spray@redhat.com>
* refs/pull/25190/head:
mgr/prometheus: adjust to new 'df' fields
mon/Monitor: fix newline between df section
doc: update docs for new ceph df output
mon/PGMap: break down RAW usage by device class
mon/PGMap: tweak df headers
mon/PGMap: GLOBAL -> RAW STORAGE in 'df' output
mon/PGMap: dump_fs_stats -> dump_cluster_stats
Reviewed-by: Kefu Chai <kchai@redhat.com>
Commit 7b17da691f added a 'bucket' field to this op without bumping the
encode version, which is causing failures on upgrade.
Fixes: http://tracker.ceph.com/issues/37703
Signed-off-by: Casey Bodley <cbodley@redhat.com>