SKIP_IF_CRIMSON won't work here since we try to create EC pools
prior to the test being run.
Skip the entire test instead by separating out the EC tests.
Signed-off-by: Matan Breizman <mbreizma@redhat.com>
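A minimal sketch of the pattern this implies, assuming the skip is driven by a flag checked at the top of the standalone test script (the actual SKIP_IF_CRIMSON mechanics may differ):
```
#!/usr/bin/env bash
# Hypothetical guard: bail out of the whole EC test script before any EC
# pools get created, instead of relying on a per-test skip that runs too late.
if [ "${SKIP_IF_CRIMSON:-0}" -eq 1 ]; then
    echo "crimson run detected, skipping EC tests entirely"
    exit 0
fi
# ... EC pool creation and the actual tests follow here ...
```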
qa/suites/rados/thrash: modify selection of max-scrubs configuration values
Reviewed-by: Matan Breizman <mbreizma@redhat.com>
Reviewed-by: Samuel Just <sjust@redhat.com>
As the osd-max-scrubs default was increased from 1 to (currently) 3, the
original set of optional values under rados/thrash/3-scrub-overrides is
no longer useful. This commit changes the set of optional values to
reflect the current default.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
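For illustration, a 3-scrub-overrides fragment typically just overrides osd max scrubs for all OSDs; the file name and value below are examples, not necessarily the ones added by this commit:
```
# Illustrative only: a scrub-overrides fragment pinning osd max scrubs.
cat > qa/suites/rados/thrash/3-scrub-overrides/max-simultaneous-scrubs-3.yaml <<'EOF'
overrides:
  ceph:
    conf:
      osd:
        osd max scrubs: 3
EOF
```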
* refs/pull/53999/head:
PendingReleaseNotes: support for subvolumes and subvolume groups in snap_schedule
snap_schedule/tests: fix db upgrade issue
qa: add yaml for on demand subvol version testing
qa: add test cases for testing --subvol and --group arguments
mgr/volumes: conditionalize subvolume upgrade
mgr/volumes: ensure correct init of v1 subvol
mgr/snap_schedule: add subvol and subvol group arguments to cli
mds/snap_schedule: add subvolume group column management
mgr/volumes: add remote helper methods to fetch subvolume info
Reviewed-by: Venky Shankar <vshankar@redhat.com>
I believe this check was originally added because
the 2->3 migration migrated some nfs-related bits. Since
then we've had to update the migration value this checks
for every time we bump the max migration. This change
instead just has it check for a
migration > 2 so we don't have to keep updating it.
Signed-off-by: Adam King <adking@redhat.com>
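Roughly, the check becomes a lower-bound comparison instead of an equality against the latest value; a hedged sketch (the real test code and the exact config-key name may differ):
```
# Sketch only: cephadm records its migration progress in the config-key
# store; requiring "> 2" (the nfs-related migration) means this test no
# longer needs touching every time the max migration is bumped.
migration_current=$(ceph config-key get mgr/cephadm/migration_current)
if [ "${migration_current}" -gt 2 ]; then
    echo "cephadm migration is past the nfs-related step"
fi
```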
The compiled zipapp cephadm introduced in reef needs
to be pulled differently than the old single-file python script
cephadm from earlier releases. This commit updates the reef-x
upgrade suite to pull cephadm in this new way.
Signed-off-by: Adam King <adking@redhat.com>
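For reference, the old method fetched the single-file script straight from the git tree, while reef's compiled zipapp is fetched from a published build; a sketch, with the release version as a placeholder rather than what the suite actually pins:
```
# Old style (pre-reef): grab the plain python script from the branch.
curl --silent --remote-name --location \
    https://raw.githubusercontent.com/ceph/ceph/quincy/src/cephadm/cephadm

# New style (reef onward): fetch the compiled zipapp from a published
# location; CEPH_RELEASE is a placeholder for the release being tested.
CEPH_RELEASE=18.2.0
curl --silent --remote-name --location \
    "https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm"
chmod +x cephadm
```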
Adds a test that will set the default cephadm command
timeout and then force a timeout to occur by holding
the cephadm lock and triggering a device refresh.
This works because cephadm ceph-volume commands
require the cephadm lock to run, so the command will
time out waiting for the lock to become available.
Signed-off-by: Adam King <adking@redhat.com>
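The shape of the test, approximately; the option and command names below are my best guess at the relevant knobs, not the exact task code:
```
# Shrink the default cephadm command timeout so a blocked ceph-volume call
# trips it quickly (option name assumed from the cephadm mgr module).
ceph config set mgr mgr/cephadm/default_cephadm_command_timeout 120

# While something else holds the cephadm lock on the host, ask for a device
# refresh; the ceph-volume inventory call should time out waiting for the lock.
ceph orch device ls --refresh

# A health warning about the failed/timed-out refresh is then expected.
ceph health detail
```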
* refs/pull/52196/head:
qa: configure balancer for multi-mds workloads
qa: create qa subvolumes in named subvolumegroup
qa: do not rely on default max_mds value
qa: add automate_balance to dashboard qa schema
doc/cephfs: add docs for balance_automate
doc/cephfs: use bash prompt for shell code
mds: add balance_automate fs setting
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
* refs/pull/54726/head:
PendingReleaseNotes: announce cephfs-shell avail. on rhel9
qa: test fs:shell on all distros
qa: add cephfs-shell to installed rpm packages
ceph.spec.in: enable support for cephfs-shell by default via EPEL9
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Dhairya Parmar <dparmar@redhat.com>
It's common during cluster setup for there to be periods with
degraded/recovering PGs. Ignore those errors.
Signed-off-by: Samuel Just <sjust@redhat.com>
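In teuthology yaml terms this usually means extending the log-ignorelist; an illustrative fragment (the exact entries may differ from this change):
```
# Illustrative only: whitelist the transient PG health warnings that show up
# while the cluster is still being set up.
cat >> overrides.yaml <<'EOF'
overrides:
  ceph:
    log-ignorelist:
      - \(PG_AVAILABILITY\)
      - \(PG_DEGRADED\)
EOF
```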
Based on quincy-x.
```
$ cp -R qa/suites/upgrade/quincy-x/ qa/suites/upgrade/reef-x
$ git add qa/suites/upgrade/reef-x
$ git mv qa/suites/upgrade/reef-x/filestore-remove-check/1-ceph-install/quincy.yaml qa/suites/upgrade/reef-x/filestore-remove-check/1-ceph-install/reef.yaml
$ find qa/suites/upgrade/reef-x/ -type f -exec sed -i 's/quincy/reef/g' {} +
```
A note from rebase: changes from 05e24270a2
have been pulled in.
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
- remove upgrades from octopus
- stubs for completing upgrade to reef
Still missing the quincy-x upgrade tests.
Signed-off-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
A basic test for ceph-nvmeof[1] where
an nvmeof initiator is created.
It requires use of a new task "nvmeof_gateway_cfg"
under cephadm which shares config information
between two remote hosts.
[1] https://github.com/ceph/ceph-nvmeof/
Signed-off-by: Vallari Agrawal <val.agl002@gmail.com>
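A purely hypothetical sketch of how such a task might be wired into the job yaml; the task name comes from the commit, but the argument names are placeholders:
```
# Hypothetical usage only: share the gateway's config from one remote host
# with the host that will run the initiator.
cat >> nvmeof-job.yaml <<'EOF'
tasks:
- nvmeof_gateway_cfg:
    source: host.a      # placeholder: host running the nvmeof gateway
    target: host.b      # placeholder: host that creates the initiator
EOF
```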
This is a simple sub-suite that has one job. Always schedule on all supported distros.
Fixes: https://tracker.ceph.com/issues/43393
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
We need more debug logs from bluestore to know exactly what
has happened to the extent map.
URL: https://tracker.ceph.com/issues/63586
Signed-off-by: Xiubo Li <xiubli@redhat.com>
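Typically that just means raising the bluestore debug level in the job's ceph conf overrides; an illustrative fragment (the level actually chosen may differ):
```
# Illustrative only: crank up bluestore logging on the OSDs.
cat >> overrides.yaml <<'EOF'
overrides:
  ceph:
    conf:
      osd:
        debug bluestore: 20
EOF
```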
* start testing new_ops and stress_tests with both drivers (i.e. fuse and kclient);
therefore moved 0-clients/ from tasks/3-workload/new_ops/ to tasks/ and renamed it to
2-clients/
* since new_ops/ and stress_tests/ now share the common upgrade yaml, moved the
test yamls (in stress_tests/1-tests) directly under 3-workload/stress_tests/
* renamed 1-client-sanity.yaml in new_ops/ to newops.yaml
Fixes: https://tracker.ceph.com/issues/62953
Signed-off-by: Dhairya Parmar <dparmar@redhat.com>
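Expressed as git moves relative to the suite root (paths reconstructed from the description above, so treat them as approximate):
```
# Approximate reconstruction of the moves described above.
git mv tasks/3-workload/new_ops/0-clients tasks/2-clients
git mv tasks/3-workload/stress_tests/1-tests/*.yaml tasks/3-workload/stress_tests/
git mv tasks/3-workload/new_ops/1-client-sanity.yaml tasks/3-workload/new_ops/newops.yaml
```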
qa/cephadm: basic test for monitoring stack
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Juan Miguel Olmo Martínez <jolmomar@redhat.com>
Reviewed-by: Redouane Kachach <rkachach@redhat.com>
Since we're adding a warning when a host being removed is listed
explicitly in the placement of any service,
we need to adjust the host drain test that removes a host
without the --force flag so that the mon service placement
no longer names that host explicitly.
Signed-off-by: Adam King <adking@redhat.com>
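Conceptually the test change looks like this (hostnames and counts are placeholders, not the test's actual values):
```
# Use a count-based mon placement so no host is named explicitly ...
ceph orch apply mon 3
# ... then draining a host without --force should no longer hit the new
# "host listed explicitly in a placement" warning for the mon service.
ceph orch host drain host3
```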