ceph/qa/standalone/osd
Latest commit: 023524a26d osd/PeeringState: restart peering on any previous down acting member coming back (xie xingguo)
One of our customers wants to verify the data safety of Ceph while scaling the
cluster up, and the test case looks like:
- keep checking the status of a specified pg, whose up set is [1, 2, 3]
- add more osds: up [1, 2, 3] -> up [1, 4, 5], acting = [1, 2, 3], backfill_targets = [4, 5],
  pg is remapped
- stop osd.2: up [1, 4, 5], acting = [1, 3], backfill_targets = [4, 5], pg is undersized
- restart osd.2: the acting set stays unchanged, as osd.2 now belongs to neither the
  current up nor the acting set, leaving the pg stuck undersized for a long time until
  all backfill targets complete (the sequence is sketched below)
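
A minimal sketch of that sequence, written in the style of the standalone tests in
this directory (the committed test is repeer-on-acting-back.sh; the pool name, osd
ids, object count and mon port below are illustrative assumptions, not the actual
test):

    #!/usr/bin/env bash
    #
    # Sketch only: the real test is repeer-on-acting-back.sh. Pool name,
    # osd ids, object count and the mon port are arbitrary choices here.
    #
    source $CEPH_ROOT/qa/standalone/ceph-helpers.sh

    function run() {
        local dir=$1
        shift
        export CEPH_MON="127.0.0.1:7144" # arbitrary, assumed-free port
        export CEPH_ARGS
        CEPH_ARGS+="--fsid=$(uuidgen) --auth-supported=none "
        CEPH_ARGS+="--mon-host=$CEPH_MON "
        local funcs=${@:-$(set | sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p')}
        for func in $funcs ; do
            setup $dir || return 1
            $func $dir || return 1
            teardown $dir || return 1
        done
    }

    function TEST_repeer_on_acting_back() {
        local dir=$1
        local poolname=test

        run_mon $dir a || return 1
        run_mgr $dir x || return 1
        # three osds, so the pool's single pg maps to e.g. up/acting [0,1,2]
        for id in 0 1 2 ; do
            run_osd $dir $id || return 1
        done
        create_pool $poolname 1 1
        wait_for_clean || return 1

        # write enough objects that the later backfill takes a while
        dd if=/dev/urandom of=$dir/data bs=1M count=1
        for i in $(seq 1 32) ; do
            rados -p $poolname put obj$i $dir/data || return 1
        done

        # scale up: the new osds become backfill targets and the pg is
        # remapped (which osds land in the new up set is up to CRUSH)
        run_osd $dir 3 || return 1
        run_osd $dir 4 || return 1

        # stop one of the original acting members: the pg goes undersized
        kill_daemons $dir TERM osd.2 || return 1

        # bring it back; before the fix the acting set stayed unchanged,
        # so the pg stayed undersized until backfill completed
        activate_osd $dir 2 || return 1
        wait_for_clean || return 1
    }

    main repeer-on-acting-back "$@"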

It does not pose any critical problem -- we'll eventually get that pg back to
active+clean. But the long-lived DEGRADED warnings keep bothering our customer, who
cares about data safety more than anything else.

The right way to achieve the above goal is for:

	boost::statechart::result PeeringState::Active::react(const MNotifyRec& notevt)

to check whether the newly booted OSD could be validly chosen for the acting set and,
if so, request a new temp mapping. The new temp mapping then triggers a real interval
change that gets rid of the DEGRADED warning.
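
Assuming the fix is in place, the effect is observable from the CLI: bringing the
previously-down acting member back should start a new peering interval. A rough
sketch (the pg id, restart command and settle time are illustrative;
same_interval_since is read from the pg query output):

    # pgid is illustrative: use the pg being watched in the scenario above
    pgid=1.0

    # info.history.same_interval_since records the OSDMap epoch in which
    # the pg's current peering interval began
    before=$(ceph pg $pgid query | jq .info.history.same_interval_since)

    sudo systemctl restart ceph-osd@2    # the previously-down acting member

    sleep 10    # illustrative settle time
    after=$(ceph pg $pgid query | jq .info.history.same_interval_since)

    # with the fix, the primary requests a new pg_temp mapping, forcing a
    # real interval change, so the epoch should have advanced
    test "$after" -gt "$before" && echo "pg $pgid repeered"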

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Signed-off-by: Yan Jun <yan.jun8@zte.com.cn>
2020-02-21 17:52:52 +08:00
divergent-priors.sh
ec-error-rollforward.sh
osd-backfill-prio.sh
osd-backfill-recovery-log.sh qa/standalone/osd/osd-backfill-recovery-log.sh: fix TEST_backfill_log_2 2020-01-24 22:42:04 +00:00
osd-backfill-space.sh test: Fix wait_for_state() to wait for a PG to get into a state 2020-01-13 18:39:38 -08:00
osd-backfill-stats.sh
osd-bench.sh qa/standalone/osd/osd-bench: debug bluestore 2020-01-29 07:43:41 -06:00
osd-bluefs-volume-ops.sh
osd-config.sh
osd-copy-from.sh
osd-dup.sh
osd-fast-mark-down.sh
osd-force-create-pg.sh
osd-markdown.sh
osd-reactivate.sh
osd-recovery-prio.sh
osd-recovery-space.sh test: Fix wait_for_state() to wait for a PG to get into a state 2020-01-13 18:39:38 -08:00
osd-recovery-stats.sh
osd-rep-recov-eio.sh
osd-reuse-id.sh
pg-split-merge.sh
repeer-on-acting-back.sh osd/PeeringState: restart peering on any previous down acting member coming back 2020-02-21 17:52:52 +08:00
repro_long_log.sh test: Update pg log test for new trimming behavior 2020-01-28 15:23:45 -08:00