mirror of https://github.com/ceph/ceph
commit 023524a26d

One of our customers wants to verify the data safety of Ceph while scaling the cluster up, and the test case looks like this:

- keep checking the status of a specified pg, whose up set is [1, 2, 3]
- add more osds: up [1, 2, 3] -> up [1, 4, 5], acting = [1, 2, 3], backfill_targets = [4, 5], pg is remapped
- stop osd.2: up [1, 4, 5], acting = [1, 3], backfill_targets = [4, 5], pg is undersized
- restart osd.2: acting stays unchanged, as 2 belongs to neither the current up set nor the acting set, leaving the corresponding pg stuck undersized for a long time until all backfill targets complete

This does not pose any critical problem -- we will end up getting that pg back to active+clean -- except that the long-lived DEGRADED warnings keep bothering our customer, who cares about data safety more than anything else.

The right way to achieve the above goal is for:

  boost::statechart::result PeeringState::Active::react(const MNotifyRec& notevt)

to check whether the newly booted node could be validly chosen for the acting set and, if so, request a new temp mapping. The new temp mapping would then trigger a real interval change that gets rid of the DEGRADED warning.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Signed-off-by: Yan Jun <yan.jun8@zte.com.cn>
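To make the scenario above concrete, here is a minimal shell sketch of the reproduction using the plain ceph CLI. It is an assumption-laden illustration, not the commit's own test: the pg id, osd ids, and systemd unit names are placeholders, and how OSDs are added, stopped, and started depends on the deployment. The standalone test scripts listed below (including repeer-on-acting-back.sh, presumably the test accompanying this change) drive the same scenario through the qa helpers instead.

```sh
#!/usr/bin/env bash
# Hedged sketch of the reproduction sequence described in the commit message,
# not the commit's actual test. Assumes a running test cluster; the pg id
# (1.0) and osd ids are placeholders, and the systemd unit names depend on
# how the cluster is deployed.
PGID=1.0

# Step 1: keep checking the up/acting sets of the specified pg
# (run in a separate terminal, or poll in a loop).
ceph pg map "$PGID"

# Step 2: add more OSDs so the pg gets remapped:
#   up [1,2,3] -> up [1,4,5], acting = [1,2,3], backfill_targets = [4,5].
ceph pg "$PGID" query | grep backfill_targets

# Step 3: stop osd.2; acting shrinks to [1,3] and the pg goes undersized.
systemctl stop ceph-osd@2
ceph pg map "$PGID"

# Step 4: restart osd.2. Before the fix, acting stays [1,3] because osd.2
# is in neither the up nor the acting set, so the pg stays undersized
# (DEGRADED) until backfill to [4,5] completes.
systemctl start ceph-osd@2
ceph health detail | grep -i -e degraded -e undersized

# Conceptually, the fix makes the primary request a new pg_temp mapping that
# brings osd.2 back into the acting set, triggering an interval change and
# re-peering. A roughly similar effect can be forced by hand (availability
# may vary by release):
#   ceph osd pg-temp "$PGID" 1 2 3
```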
Directory listing:

divergent-priors.sh
ec-error-rollforward.sh
osd-backfill-prio.sh
osd-backfill-recovery-log.sh
osd-backfill-space.sh
osd-backfill-stats.sh
osd-bench.sh
osd-bluefs-volume-ops.sh
osd-config.sh
osd-copy-from.sh
osd-dup.sh
osd-fast-mark-down.sh
osd-force-create-pg.sh
osd-markdown.sh
osd-reactivate.sh
osd-recovery-prio.sh
osd-recovery-space.sh
osd-recovery-stats.sh
osd-rep-recov-eio.sh
osd-reuse-id.sh
pg-split-merge.sh
repeer-on-acting-back.sh
repro_long_log.sh