rados/cephadm/smoke* does not use the install task, and the adjust-ulimits
dependency is normally satisfied as part of that task. create_rbd_pool needs
adjust-ulimits, so for now we will disable create_rbd_pool by default and only
set it to true for the upgrade suite.
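A minimal sketch of how such a gated default might look in a teuthology-style
task; the flag name matches this change, but run_ceph and the surrounding
structure are illustrative, not the actual qa/tasks code:

    # Sketch: gate RBD pool creation on a task config flag.
    import subprocess

    def run_ceph(*args):
        # adjust-ulimits is only available when the install task ran,
        # which rados/cephadm/smoke* does not do.
        subprocess.check_call(['adjust-ulimits', 'ceph'] + list(args))

    def maybe_create_rbd_pool(config):
        # Default to False so suites without the install task keep
        # working; the upgrade suite overrides this to true.
        if config.get('create_rbd_pool', False):
            run_ceph('osd', 'pool', 'create', 'rbd', '8')
            run_ceph('osd', 'pool', 'application', 'enable', 'rbd', 'rbd')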
Signed-off-by: Neha Ojha <nojha@redhat.com>
This prevents the first mgr from being shut down due to lack of
appropriate placements.
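As a hypothetical illustration (not necessarily the exact change made here),
pinning an explicit mgr placement before the upgrade avoids that shutdown:

    # Hypothetical sketch: keep two mgrs placed so the first mgr can
    # fail over rather than being shut down when placements shrink.
    import subprocess

    # 'ceph orch apply mgr 2' asks the orchestrator to maintain two
    # mgr daemons.
    subprocess.check_call(['ceph', 'orch', 'apply', 'mgr', '2'])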
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
Signed-off-by: Yuri Weinstein <yweinste@redhat.com>
Nautilus monitors do not note client sessions in the mgrmap, so the warnings
are unavoidable while we upgrade.
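Such expected warnings are typically tolerated via the qa log whitelist; a
hypothetical sketch of the override (the placeholder pattern below is not the
actual string from this change):

    # Hypothetical sketch: tolerate a transient health warning during
    # upgrade tests. The real whitelist entry lives in the suite yaml.
    overrides = {
        'ceph': {
            'log-whitelist': [
                'EXPECTED_WARNING_REGEX',  # placeholder pattern
            ],
        },
    }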
Fixes: https://tracker.ceph.com/issues/47689
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This happens because mgrs may start before mons, or before mons have
published a new mgrmap that disables orchestrator_cli.
Signed-off-by: Sage Weil <sage@redhat.com>
Create a pool that generates hit sets before the upgrade, and ensure that
they (continue to) trim after the upgrade.
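A sketch of the kind of setup this implies; the subcommands are real ceph CLI,
but the pool name and parameter values are illustrative:

    # Sketch: create a pool that accumulates hit sets pre-upgrade.
    # hit_set_count bounds how many sets are kept per PG, so after the
    # upgrade we can verify that older hit sets still get trimmed.
    import subprocess

    def run_ceph(*args):
        subprocess.check_call(['ceph'] + list(args))

    run_ceph('osd', 'pool', 'create', 'hitset_test', '8')
    run_ceph('osd', 'pool', 'set', 'hitset_test', 'hit_set_type', 'bloom')
    run_ceph('osd', 'pool', 'set', 'hitset_test', 'hit_set_count', '8')
    run_ceph('osd', 'pool', 'set', 'hitset_test', 'hit_set_period', '5')
    # ...write objects so hit sets are generated, upgrade, then check
    # that no more than hit_set_count sets remain.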
Signed-off-by: Sage Weil <sage@redhat.com>
These are new packages, so they won't be installed just by upgrading the old
packages, and they are needed for some of the tests.
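For illustration, teuthology's install task can pull in additional packages
via extra_packages; a hypothetical override (the package name is a
placeholder, not the one this commit adds):

    # Hypothetical sketch: explicitly install packages that did not
    # exist in the old release, since upgrading existing packages
    # alone will never bring them in.
    overrides = {
        'install': {
            'ceph': {
                'extra_packages': ['NEW-PACKAGE-NAME'],  # placeholder
            },
        },
    }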
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/32232/head:
qa: no need to exclude ceph-mgr-diskprediction-cloud from package list to be installed
qa/packages: do not install ceph-mgr-diskprediction-cloud by default
ceph.spec.in: add runtime deps for mgr-diskprediction-cloud
Reviewed-by: Sage Weil <sage@redhat.com>
- This is an ancient Swift version
- The Tempest tests are newer and should provide similar coverage
- It somehow broke with the py3 transition
Signed-off-by: Sage Weil <sage@redhat.com>
We cannot do a traditional upgrade (install old package, start cluster,
install new package, ...) because nautilus is el7-only and octopus is
el8-only.
So, do these tests on ubuntu.
Signed-off-by: Sage Weil <sage@redhat.com>
* refs/pull/29292/head:
os/bluestore: warn on no per-pool omap
os/bluestore: fsck: warning (not error) by default on no per-pool omap
os/bluestore: fsck: int64_t for error count
os/bluestore: default size of 1 TB for testing
os/bluestore: behave if we *do* set PGMETA and PERPOOL flags
os/bluestore: do not set both PGMETA_OMAP and PERPOOL_OMAP
os/bluestore: fsck: only generate 1 error per omap_head
os/bluestore: make fsck repair convert to per-pool omap
os/bluestore: teach fsck to tolerate per-pool omap
os/bluestore: ondisk format change to 3 for per-pool omap
mon/PGMap: add data/omap breakouts for 'df detail' view
osd/osd_types: separate get_{user,allocated}_bytes() into data and omap variants
mon/PGMap: fix stored_raw calculation
mon/PGMap: add in actual omap usage into per-pool stats
osd: report per-pool omap support via store_statfs_t
os/bluestore: set per_pool_omap key on mkfs
osd/osd_types: count per-pool omap capable OSDs
os/bluestore: report omap_allocated per-pool
os/bluestore: add pool prefix to omap keys
kv/KeyValueDB: take key_prefix for estimate_prefix_size()
os/bluestore: fix manual omap key manipulation to use Onode::get_omap_key()
os/bluestore: make omap key helpers Onode methods
os/bluestore: add Onode::get_omap_prefix() helper
os/bluestore: change _do_omap_clear() args
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
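The core idea behind "add pool prefix to omap keys" above, as a schematic
Python sketch (the real implementation is C++ in os/bluestore, and the
encodings here are illustrative, not the on-disk format):

    # Schematic sketch of per-pool omap keys: a fixed-width big-endian
    # pool-id prefix makes all keys for one pool sort together, which
    # is what lets fsck/statfs attribute omap usage per pool and lets
    # repair rewrite legacy keys into the per-pool namespace.
    import struct

    def omap_key(pool_id, onode_id, user_key):
        # Big-endian fixed-width prefixes keep lexicographic ordering
        # aligned with (pool, onode) ordering in the KV store.
        return (struct.pack('>q', pool_id) +
                struct.pack('>Q', onode_id) +
                user_key)

    # Keys for pool 3 group together and sort before pool 4.
    assert omap_key(3, 12, b'snapset') < omap_key(4, 12, b'snapset')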