Update allocation file when we expand-device
Add the expanded space to the allocator and then force an update to the allocation file.
There is also a new standalone test case for expand.
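For reference, a hedged example of one common way an expand is triggered (the OSD path is illustrative):

    # expand BlueStore onto a grown underlying device; with this change
    # the allocator's updated view is persisted back to the allocation file
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-0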
Fixes: https://tracker.ceph.com/issues/53699
Signed-off-by: Gabriel Benhanokh <gbenhano@redhat.com>
osd/scrub (& qa/standalone): test for scrub behavior when no-scrub is set but no-deep-scrub is not
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Matan Breizman <Matan.Brz@gmail.com>
Following PR#43244, 'ceph tell pg deep_scrub' now sets both the
deep-scrub and the "regular" scrub timestamps. This necessitated a modification
to TEST_scrub_warning, as more PGs in this test are now late for their regular scrub.
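A hedged illustration of the new behavior (the pgid 2.0 is made up):

    ceph tell 2.0 deep_scrub   # now refreshes both stamps
    ceph pg dump pgs           # check the SCRUB_STAMP and DEEP_SCRUB_STAMP columns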
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
A bug (https://tracker.ceph.com/issues/52901 - now fixed) resulted in
this combination of conditions leaving the PG in "scrubbing" state
forever. That bug was fixed by PR#43521. The patch here adds a
test to detect the (now fixed) wrong behavior.
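A minimal sketch of the flag combination the test exercises:

    ceph osd set noscrub
    # nodeep-scrub is deliberately left unset; a deep scrub must still be
    # able to run and finish, and the PG must not stay in 'scrubbing' forever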
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
Make the osd-scrub-dump test ignore a 'scrubbing' state that may be late to
disappear from the modified (PR #43403) 'pg dump' output.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
PR #43239 modified ECBackend::get_hash_info() behavior.
Modify the standalone scrub test to match.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
This commit has been causing scheduled jobs to request e.g. aarch64
smithi machines, which don't exist. The dispatcher then tries to find them
forever and must be killed and restarted; the queue sits idle until someone
notices the problem.
Signed-off-by: Zack Cerza <zack@redhat.com>
modified: qa/standalone/erasure-code/test-erasure-code-plugins.sh
new file: qa/suites/rados/thrash-erasure-code-isa/arch/aarch64.yaml
Signed-off-by: Dai Zhiwei <daizhiwei3@huawei.com>
Add a new column, SCRUB_DURATION, to the pg stats; it stores the time taken for a PG scrub.
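A hedged way to see the new column:

    ceph pg dump pgs | head -2   # SCRUB_DURATION should appear in the header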
Fixes: https://tracker.ceph.com/issues/52605
Signed-off-by: Aishwarya Mathuria <amathuri@redhat.com>
Add a standalone test - test_activate_osd_skip_benchmark() in ceph-helpers.sh
that exercises the osd-mclock-skip-benchmark option.
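An illustrative invocation (option spelling per this change; the helper
signature follows the usual ceph-helpers.sh conventions):

    run_osd $dir 0 --osd-op-queue=mclock_scheduler \
        --osd-mclock-skip-benchmark=true || return 1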
Fixes: https://tracker.ceph.com/issues/52025
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Modified test cases:
1. ver-health.sh:
a. TEST_check_version_health_1():
To avoid intermittent timeouts observed in wait_for_health_string(),
increase the wait time to 20 secs.
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
The tests listed below use the "osd_scrub_sleep" option to introduce
delays during scrubbing, which helps determine scrubbing states, validate
reservations during scrubbing, etc. This works when using the "wpq" scheduler.
But when the "mclock_scheduler" is enabled, "osd_scrub_sleep" is
disabled and overridden to 0, so that scheduling of the background scrubs
is delegated to the "mclock_scheduler" based on the configured QoS
parameters. As a result, the checks that verify scrub states,
reservations, etc. fail, since scrubs complete very quickly and the
window to observe them is very short. This affects a small subset of the
scrub tests mentioned below:
1. osd-scrub-dump.sh -> TEST_recover_unexpected()
2. osd-scrub-repair.sh -> TEST_auto_repair_bluestore_tag()
3. osd-scrub-test.sh -> TEST_scrub_abort(), TEST_deep_scrub_abort()
For the above tests only, the "osd_op_queue" config option is set to
"wpq" until there is a reliable way to query scrub states with
"--osd-scrub-sleep" set to 0 (the override is sketched below).
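A sketch of the override, assuming the usual ceph-helpers.sh startup path:

    # only for the affected tests: fall back to wpq so osd_scrub_sleep
    # is honored and scrub states remain observable
    run_osd $dir $id --osd-op-queue=wpq || return 1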
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Modified test cases:
1. test-erasure-eio.sh:
a. Test_ec_backfill_unfound():
- Set osd_mclock_profile to high_recovery_ops profile.
- Increase the wait for backfill_unfound timeout to 240 secs.
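For illustration, the profile switch looks like:

    ceph config set osd osd_mclock_profile high_recovery_ops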
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Modified test cases:
1. osd-backfill-prio.sh:
Set osd_op_queue = wpq for all tests, since mclock doesn't
consider recovery priority as part of its scheduling algorithm.
2. osd-backfill-space.sh:
Set osd_mclock_profile to high_recovery_ops and increase the wait
for backfills timeout to 1200 secs for the following tests:
- TEST_backfill_test_simple()
- TEST_backfill_test_multi()
- TEST_backfill_test_sametarget()
- TEST_backfill_multi_partial()
- TEST_ec_backfill_simple()
- TEST_ec_backfill_multi()
- SKIP_TEST_ec_backfill_multi_partial()
3. osd-backfill-stats.sh:
- TEST_backfill_ec_down_all_out():
Set osd_mclock_profile to high_recovery_ops and increase the wait
for recovery timeout to 240 secs.
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
Modified test cases:
1. osd-recovery-prio.sh:
Set osd_op_queue = wpq for all tests since mclock
doesn't consider recovery priority as part of its
scheduling algorithm.
2. osd-recovery-stats.sh:
a. TEST_recovery_undersized():
- Set osd_mclock_profile to high_recovery_ops profile.
- Increase wait for recovery timeout to 300 secs.
3. osd-rep-recov-eio.sh:
a. TEST_rep_backfill_unfound():
- Set osd_mclock_profile to high_recovery_ops profile.
- Increase wait for backfill_unfound to 360 secs.
4. repeer-on-acting-back.sh:
a. TEST_repeer_on_down_act():
- Set osd_mclock_profile to high_recovery_ops profile.
(to shorten the test duration)
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
List of changes:
1. Remove the enforcement to use osd_op_queue=wpq when an osd is brought
up in the following functions:
- run_osd()
- run_osd_filestore() and
- activate_osd()
2. New functions:
- get_op_scheduler() - Get the current osd_op_queue for an osd (sketched after this list).
3. Modified test cases:
- test_run_osd() - Add a check for the osd_max_backfills count.
The mclock scheduler overrides the count to 1000.
4. New test cases:
- test_activate_osd_after_mark_down()
- test_get_op_scheduler()
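A minimal sketch of the new helper (the real implementation in
ceph-helpers.sh may differ):

    function get_op_scheduler() {
        local id=$1
        # read the running daemon's value over the admin socket
        ceph daemon osd.$id config get osd_op_queue | jq -r .osd_op_queue
    }
    # usage: test "$(get_op_scheduler 0)" = "wpq" || return 1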
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
qa/standalone: fixing the timings when waiting for deep-scrub to start
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Sridhar Seshasayee <sseshasa@redhat.com>
initiate_and_fetch_state() initiates a scrub, then polls the published
PG state looking for 'scrubbing'. Calling flush_pg_stats() as part of
the polling process might cause the scrub and the following recovery to
be missed altogether.
Note: this polling mechanism is definitely not robust. Will be
redesigned in the future.
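An illustrative version of the polling, without flushing in between:

    # poll the published PG state for 'scrubbing'; do NOT call
    # flush_pg_stats() here, or the scrub window may be missed
    for ((i = 0; i < 30; i++)); do
        ceph pg dump pgs 2>/dev/null | grep -q scrubbing && break
        sleep 1
    done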
Fixes: https://tracker.ceph.com/issues/51581
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
* refs/pull/42041/head:
mgr/restful: ignore min/max_size
test/crush: drop min/max_size refs
qa/workunits/mon/pool_ops: remove test for min/max_size check
qa: scrub a few remaining mentions of ruleset
qa/standalone/mon/osd-*: fix tests
PendingReleaseNotes: note min/max_size removal
mgr/dashboard: remove max/min_size and ruleset
mon/OSDMonitor: fix calls to CrushTester
crush: eliminate min_size and max_size
test/cli/crushtool: renumber rulesets in test maps
crushtool: require min/max or num-rep for --test
crush: remove last traces of 'ruleset'
test/cli/crushtool: use 'id' instead of 'ruleset' in crush inputs
crushtool: take --min-rep and --max-rep explicitly
crush/CrushTester: drop --ruleset
doc: scrub 'ruleset' from docs
src/erasure-code: rule, not ruleset
mon/OSDMonitor: remove check_crush_rule() callers
mon/OSDMonitor: rule, not ruleset
crushtool: remove check for overlapped rules
crush/CrushWrapper: get_osd_pool_default_crush_replicated_ruleset -> rule
crush: remove find_rule()
mon/OSDMonitor: use pool's crush rule directly
osd/OSDMap: drop checks for ruleset == ruleid
osd/OSDMap: use pool's crush rule_id directly
mon/PGMap: use pool's crush_rule directly
mon/OSDMonitor: remove crush ruleset->rule rewrite
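Per the crushtool entries above, replication bounds must now be given
explicitly when testing a map; an illustrative invocation (map file name
is made up):

    crushtool -i crushmap.bin --test --min-rep 1 --max-rep 4 --show-statistics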
Reviewed-by: Ernesto Puerta <epuertat@redhat.com>
Reviewed-by: Avan Thakkar <athakkar@redhat.com>
Tests identified with missing teardown within osd-scrub-repair.sh:
1. TEST_periodic_scrub_replicated()
2. TEST_scrub_warning()
3. TEST_request_scrub_priority()
Centralize setup and teardown within the run() function for all the tests.
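A sketch of the centralized pattern (following the usual ceph-helpers.sh
structure; details may differ):

    function run() {
        local dir=$1
        shift
        local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
        for func in $funcs; do
            setup $dir || return 1      # fresh cluster for every test
            $func $dir || return 1
            teardown $dir || return 1   # always clean up afterwards
        done
    }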
Fixes: https://tracker.ceph.com/issues/51580
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
The following files and tests in them did not teardown the
cluster after a test completed.
1. osd/osd-force-create.sh
2. osd/osd-reuse-id.sh
3. osd/pg-split-merge.sh
This wouldn't cause issues if the tests are run individually. But when
running all the tests in the files mentioned above, it could introduce
unexpected test failures down the line. For example, multiple tests may
create pools with the same name, and if these are not cleaned up properly,
a subsequent test could fail unexpectedly.
Fixes: https://tracker.ceph.com/issues/51580
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
This is mostly for testing: a lot of tests assume that there are no
existing pools. These tests relied on a config to turn off creating the
"device_health_metrics" pool, which generally exists on any new Ceph
cluster. It would be better to make these tests tolerant of the new .mgr
pool, but there are clearly a lot of them. So just convert the config to
make it work.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
This change is a follow-up to commit
b6e9c0903d5ad9a699b675f9fa7739e9cce9a5f3, which set the scheduler to wpq in
run_osd() and run_osd_filestore(). activate_osd() also has to set the
scheduler type to 'wpq' in order to be consistent and avoid test
failures.
The above is a temporary measure until all the standalone tests are
modified to run well with the mclock_scheduler.
Fixes: https://tracker.ceph.com/issues/51074
Signed-off-by: Sridhar Seshasayee <sseshasa@redhat.com>
A new test, auto_repair_bluestore_tag, based on auto_repair_bluestore_basic:
it sets auto-repair, starts a periodic deep-scrub, then verifies that the PG
state while scrubbing is 'scrubbing+deep' and not 'scrubbing+deep+repair'.
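A hedged sketch of the core check (the pgid and the use of jq are
illustrative):

    ceph tell 'osd.*' config set osd_scrub_auto_repair true
    # ... wait for the periodic deep-scrub to start, then:
    ceph pg 1.0 query | jq -r .state   # expect 'scrubbing+deep', no '+repair'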
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
mclock_scheduler is now the default and some of these tests need to be modified
to run well with it. Continue using wpq until
https://tracker.ceph.com/issues/50574 is addressed.
Signed-off-by: Neha Ojha <nojha@redhat.com>
There already is a test verifying that mempool sharding works, in the sense
that it uses at least half of the counter variables available for tracking
the number of allocated objects and their total size. This new test verifies
that, with sharding, object counting is at least twice as fast as without
sharding. It also collects cacheline contention data with the perf c2c tool;
manual analysis of this data shows the optimization gain is indeed related
to cacheline contention.
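An illustrative perf c2c workflow (the test binary name here is hypothetical):

    perf c2c record -- ./unittest_mempool   # hypothetical binary name
    perf c2c report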
Fixes: https://tracker.ceph.com/issues/49896
Signed-off-by: Loïc Dachary <loic@dachary.org>
Sync up with master up to commit 3d8e73b266 ("Merge pull request
#40731 from tchaikov/wip-yamlize-options"). Specifically, bring in
src/common/options.cc yamlization and move new auth-related options
into src/common/options/global.yaml.in.
Conflicts:
src/common/options.cc
src/common/options/global.yaml.in
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Change TEST_recovery_scrub_2 to create more objects and use
osd_recovery_sleep to prevent recovery from finishing before
we start to scrub. Verify that at least one scrub was started
while the pg was recovering.
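For illustration, the throttle can be applied at runtime:

    # slow recovery down so a scrub can be started while the PG recovers
    ceph tell 'osd.*' config set osd_recovery_sleep 1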
Fixes: https://tracker.ceph.com/issues/49779
Signed-off-by: David Zafman <dzafman@redhat.com>
This reverts commit 1323bdb839.
The test needs to scrub while recovery is in progress, so catching
recovery from the logs after the fact isn't the proper setup.
We can use the osd_recovery_sleep config instead.
Signed-off-by: David Zafman <dzafman@redhat.com>
Given an initial (set of) OSD(s), provide up to N OSDs that can be
stopped together without making PGs unavailable.
This can be used to quickly identify large(r) batches of OSDs that can be
stopped together, for example to upgrade.
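An illustrative query (the --max flag name is my reading of this change):

    ceph osd ok-to-stop 0 --max 16   # report up to 16 OSDs safe to stop together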
Signed-off-by: Sage Weil <sage@newdream.net>
In beb62c029a, FEATURE_QUINCY was added to
ceph::features::mon::get_persistent(), so update the test accordingly.
Signed-off-by: Kefu Chai <kchai@redhat.com>