Commit Graph

376 Commits

Author SHA1 Message Date
Sage Weil
9352fc94ab qa/standalone/mon/health-mute.sh: s/kill daemons/kill_daemons/
Signed-off-by: Sage Weil <sage@redhat.com>
2019-08-19 09:27:51 -05:00
Kefu Chai
fc55a51a87
Merge pull request #29579 from liewegas/wip-big-vs-bluestore
osd: scrub error on big objects; make bluestore refuse to start on big objects

Reviewed-by: David Zafman <dzafman@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-08-16 20:24:43 +08:00
Sage Weil
710fef96ea qa/standalone/mon/health-mutes: add tests
Make sure mute and unmute work.  Make sure sticky is sticky. Make sure
counts can go down, but if they go up the mute clears.

Signed-off-by: Sage Weil <sage@redhat.com>
2019-08-14 20:40:08 -05:00
David Zafman
5928fe8ca0 osd/PG: scrub error when objects are larger than osd_max_object_size
Signed-off-by: David Zafman <dzafman@redhat.com>
2019-08-14 20:25:12 -05:00
Kefu Chai
f13c7c83d9
Merge pull request #29342 from Jeegn-Chen/wip-scrub-extended-sleep
osd: support osd_scrub_extended_sleep

Reviewed-by: David Zafman <dzafman@redhat.com>
2019-08-13 09:09:52 +08:00
Jeegn Chen
3bfb5c2621 osd: support osd_scrub_extended_sleep
1. always take osd_scrub_sleep for manually initiated
   scrubs
2. when scrub_time_permit() returns true for scheduled
   ones, the existing osd_scrub_sleep is used
3. when scrub_time_permit() returns false for scheduled
   ones, there may be 2 scenarios
   3.1 if osd_scrub_extended_sleep <= osd_scrub_sleep,
       let's take osd_scrub_sleep
   3.2 otherwise, let's take osd_scrub_extended_sleep

Fixes: http://tracker.ceph.com/issues/40955
Signed-off-by: Jeegn Chen <jeegnchen@tencent.com>
2019-08-12 16:54:36 +08:00
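A minimal sketch of the sleep-selection rule described in the commit above; the function and parameter names are illustrative assumptions, not the actual OSD code.

```cpp
#include <algorithm>

// Pick the scrub sleep interval per the three cases in the commit message.
// All names here are assumptions for illustration only.
double pick_scrub_sleep(bool manually_initiated,
                        bool time_permit,  // result of scrub_time_permit()
                        double osd_scrub_sleep,
                        double osd_scrub_extended_sleep)
{
  // cases 1 and 2: manual scrubs, or scheduled scrubs inside the
  // permitted time window, use the ordinary sleep
  if (manually_initiated || time_permit)
    return osd_scrub_sleep;
  // case 3: outside the window, take whichever sleep is longer
  return std::max(osd_scrub_sleep, osd_scrub_extended_sleep);
}
```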
David Zafman
b1c14b7f6e
Merge pull request #29494 from dzafman/wip-scrub-test
test: Bump sleep time for slower machines

Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-08-07 18:30:31 -07:00
David Zafman
74d294d70b test: Bump sleep time for slower machines
Signed-off-by: David Zafman <dzafman@redhat.com>
2019-08-05 07:40:09 -07:00
Changcheng Liu
43ad4bf0dc ceph-objectstore-tool: set log date format
Set the datefmt parameter to add date information to the log:
%F is equivalent to %Y-%m-%d
%T is equivalent to %H:%M:%S

Signed-off-by: Robert Church <robert.church@windriver.com>
Reviewed-by: Changcheng Liu <changcheng.liu@aliyun.com>
2019-07-25 09:39:19 +08:00
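A standalone illustration of the strftime equivalences noted in the commit above; this is plain C++, not the ceph-objectstore-tool code itself.

```cpp
#include <cstdio>
#include <ctime>

int main() {
  std::time_t now = std::time(nullptr);
  char buf[32];
  // "%F %T" is shorthand for "%Y-%m-%d %H:%M:%S",
  // e.g. "2019-07-25 09:39:19"
  std::strftime(buf, sizeof(buf), "%F %T", std::localtime(&now));
  std::puts(buf);
  return 0;
}
```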
Sage Weil
1b46267cf7 Merge PR #28839 into master
* refs/pull/28839/head:
	osd: support osd_repair_during_recovery

Reviewed-by: David Zafman <dzafman@redhat.com>
2019-07-16 10:07:53 -05:00
Sage Weil
ff7813aa14 qa/standalone/scrub/osd-scrub-snaps.sh: adjust expected output
SnapSet now dumps just seq, not a (fake) SnapContext.

Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-12 09:55:06 -05:00
Sage Weil
03b9c66080 ceph-objectstore-tool: fix use of SnapSet::snaps
Instead, use clone_snaps to identify clones.

Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-12 09:55:06 -05:00
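A simplified stand-in showing what "use clone_snaps to identify clones" means; the real SnapSet lives in src/osd/osd_types.h, and this struct only mimics its shape.

```cpp
#include <cstdint>
#include <map>
#include <vector>

using snapid_t = uint64_t;

// Toy stand-in for Ceph's SnapSet, reduced to the field in question.
struct SnapSet {
  // maps each clone object to the snaps it serves
  std::map<snapid_t, std::vector<snapid_t>> clone_snaps;
};

// Enumerate clones from clone_snaps rather than the legacy snaps vector.
std::vector<snapid_t> list_clones(const SnapSet& ss) {
  std::vector<snapid_t> clones;
  for (const auto& kv : ss.clone_snaps)
    clones.push_back(kv.first);
  return clones;
}
```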
Sage Weil
23eaf7c498 qa/standalone/scrub/osd-scrub-snaps: fix kv grep
SnapMapper keys are now SNA_, not MAP_.

Fixes: http://tracker.ceph.com/issues/40725
Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-12 08:11:21 -05:00
Sage Weil
b2eb5232de Merge PR #28901 into master
* refs/pull/28901/head:
	qa/standalone/scrub/osd-scrub-repair: fix 'scrub ok' grep
	osd/osd_types: remove 'snap_context' from SnapSet::dump()

Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
2019-07-08 08:36:05 -05:00
Jeegn Chen
80f4e1f677 osd: support osd_repair_during_recovery
osd_repair_during_recovery=true allows an explicitly requested repair
to be scheduled on OSDs that are actively recovering.

Fixes: http://tracker.ceph.com/issues/40620
Signed-off-by: Jeegn Chen <jeegnchen@tencent.com>
2019-07-08 09:26:27 +08:00
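A hedged sketch of the gating logic the commit above describes; the function name and flag plumbing are assumptions made for illustration.

```cpp
// A repair may run on an OSD that is actively recovering only when it
// was explicitly requested and osd_repair_during_recovery is enabled.
bool allow_scrub_or_repair(bool explicitly_requested_repair,
                           bool osd_is_recovering,
                           bool osd_repair_during_recovery)
{
  if (!osd_is_recovering)
    return true;  // no recovery in progress: nothing to gate
  return explicitly_requested_repair && osd_repair_during_recovery;
}
```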
Sage Weil
a960f2faa7 qa/standalone/scrub/osd-scrub-repair: fix 'scrub ok' grep
The log now also has a 'purged_snaps scrub ok' message that (generally)
precedes the first scrubbed PG.

Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-04 18:27:37 -05:00
Sage Weil
70ad54a0b3 osd/osd_types: remove 'snap_context' from SnapSet::dump()
We no longer have a snaps field with real values, so dumping this as a
"snap_context" is silly.  Instead, just dump the seq.

Adjust qa/standalone/scrub/osd-scrub-repair.sh accordingly.

Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-04 18:24:41 -05:00
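A toy before/after of the dump change; the JSON shapes in the comments are assumptions about the format, not captured output.

```cpp
#include <cstdio>

// before (assumed): {"snap_context": {"seq": 5, "snaps": []}, ...}
// after:            {"seq": 5, ...}
struct SnapSetLite { unsigned long seq = 5; };

void dump(const SnapSetLite& ss) {
  std::printf("{\"seq\": %lu}\n", ss.seq);  // just the seq, no snap_context
}

int main() { dump(SnapSetLite{}); return 0; }
```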
Sage Weil
71e5cba00b Merge PR #28867 into master
* refs/pull/28867/head:
	qa/standalone/ceph-helpers: more osd debug

Reviewed-by: David Zafman <dzafman@redhat.com>
2019-07-03 21:27:20 -05:00
David Zafman
fe3b693d0f
Merge pull request #28334 from dzafman/wip-40073
osd: Fix the way that auto repair triggers after regular scrub

Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
2019-07-03 15:27:27 -07:00
Sage Weil
0d0759531a qa/standalone/ceph-helpers: more osd debug
debug_ms=1
debug_monc=20

Hunting down http://tracker.ceph.com/issues/40666

Signed-off-by: Sage Weil <sage@redhat.com>
2019-07-03 16:53:00 -05:00
David Zafman
27918bb906 osd: Handle scrub interval changes
Global changes reschedule all PG scrubs
Pool changes reschedule pool PG scrubs

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-06-27 14:20:54 -07:00
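A rough sketch of the rescheduling rule in the commit above (a global change reschedules every PG, a pool-level change only that pool's PGs); all types here are illustrative.

```cpp
#include <vector>

struct PG {
  int pool_id;
  void reschedule_scrub() { /* recompute the next scheduled scrub stamp */ }
};

void on_scrub_interval_change(std::vector<PG>& pgs,
                              int changed_pool /* -1 means global */)
{
  for (auto& pg : pgs) {
    if (changed_pool < 0 || pg.pool_id == changed_pool)
      pg.reschedule_scrub();
  }
}
```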
Neha Ojha
bd15824567
Merge pull request #28204 from dzafman/wip-39555
mon: Improve health status for backfill_toofull and recovery_toofull

Reviewed-by: Joao Eduardo Luis <joao@suse.de>
Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-06-20 11:12:10 -07:00
David Zafman
fa698e18e1 mon: Improve health status for backfill_toofull and recovery_toofull
Treat backfill_toofull as a warning condition because it can resolve itself.
Includes test case for PG_BACKFILL_FULL
Includes test case for recovery_toofull / PG_RECOVERY_FULL

Fixes: https://tracker.ceph.com/issues/39555

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-06-20 02:22:01 +00:00
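A sketch of the severity mapping implied by the commit above. Only the downgrade of backfill_toofull to a warning is stated in the message; treating recovery_toofull as an error is an assumption here.

```cpp
enum Severity { HEALTH_OK, HEALTH_WARN, HEALTH_ERR };

Severity fullness_severity(bool recovery_toofull, bool backfill_toofull) {
  if (recovery_toofull)
    return HEALTH_ERR;   // assumed: recovery blocked on space is severe
  if (backfill_toofull)
    return HEALTH_WARN;  // per the commit: can resolve itself
  return HEALTH_OK;
}
```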
xie xingguo
ec27a162de mgr, osd: 'ceph osd df' by pool
Our test admin has been asking for this for the past few years:-)
Besides, this is also useful for operating large Ceph clusters with
multiple storage pools possibly spanning all osds.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
2019-06-18 20:29:40 +08:00
David Zafman
590b4138ae
Merge pull request #28302 from dzafman/wip-40078
test: Make sure that extra scheduled scrubs don't confuse test

Reviewed-by: Josh Durgin <jdurgin@redhat.com>
2019-06-05 14:43:30 -07:00
Kefu Chai
cdba0f1420 qa/standalone/ceph-helpers: resurrect all OSD before waiting for health
address the regression introduced by e62cfceb:
in e62cfceb, we wanted to test the newly introduced TOO_FEW_OSDS
warning, so we increased the number of OSDs to the pool size, so that
if the number of OSDs is less than the pool size, the monitor will
send a warning message.

but we need to bring all OSDs back if we are expecting a healthy
cluster. in this change, all OSDs are resurrected before
`wait_for_health_ok`.

Signed-off-by: Kefu Chai <kchai@redhat.com>
2019-05-30 23:52:36 +08:00
Kefu Chai
f6b022bdbe
Merge pull request #27806 from ashitakasam/add-osd-alarm
osd: Better error message when OSD count is less than osd_pool_default_size

Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-05-30 21:28:54 +08:00
David Zafman
893d227c82 test: Make sure that extra scheduled scrubs don't confuse test
Fixes: http://tracker.ceph.com/issues/40078

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-05-29 14:03:57 -07:00
David Zafman
7959159e83 test: Adding standalone test of log copy handling
Signed-off-by: David Zafman <dzafman@redhat.com>
2019-05-10 15:31:51 -07:00
zjh
e62cfceb95 qa/standalone: remove osd_pool_default_size in test_wait_for_health_ok
Signed-off-by: zjh <jhzeng93@foxmail.com>
2019-05-06 14:35:54 +08:00
Samuel Just
5ea5c47152 test-erasure-eio: first eio may be fixed during recovery
The changes to the way EC/ReplicatedBackend communicate read
errors had a side effect of making the first eio on the object in
TEST_rados_get_subread_eio_shard_[01] repair itself depending
on the timing of the killed osd recovering.  The test should
be improved to actually test that behavior at some point.

Signed-off-by: Samuel Just <sjust@redhat.com>
2019-05-01 11:22:28 -07:00
sjust@redhat.com
252d5c20cf osd/: move stat updates and publishing to PeeringState
Signed-off-by: Samuel Just <sjust@redhat.com>
2019-05-01 11:22:24 -07:00
David Zafman
66b041fa4a
Merge pull request #27769 from dzafman/wip-39333
osd-backfill-space.sh test failed in TEST_backfill_multi_partial()

Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-04-26 11:55:04 -07:00
David Zafman
9931023457 test: osd-backfill-space.sh doesn't matter which PG wins the race
Fixes: http://tracker.ceph.com/issues/39333

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-26 10:11:00 -07:00
David Zafman
39cc14bdc1
Merge pull request #27503 from dzafman/wip-39099
osd: Give recovery for inactive PGs a higher priority

Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-04-25 15:06:56 -07:00
David Zafman
71d254647a test: osd-recovery-scrub.sh ignore error from kill_daemons()
Another work around for http://tracker.ceph.com/issues/38195

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-25 13:53:27 -07:00
David Zafman
71d82dbeb9 test: Add tests for pool recovery priority conversion
Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-25 13:53:27 -07:00
David Zafman
444aa9f9fe osd, mon: New pool recovery priority range -10 to 10
Use the OSD_POOL_PRIORITY_MAX and OSD_POOL_PRIORITY_MIN constants
Scale legacy priorities if they exceed the maximum

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-25 13:53:27 -07:00
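A hedged sketch of the clamp-and-scale behavior the commit above names. Only the -10..10 range and the constant names come from the message; the legacy upper bound used for scaling is an assumption.

```cpp
#include <algorithm>

constexpr int OSD_POOL_PRIORITY_MIN = -10;
constexpr int OSD_POOL_PRIORITY_MAX = 10;

int normalize_recovery_priority(int legacy, int assumed_legacy_max = 255) {
  if (legacy > OSD_POOL_PRIORITY_MAX) {
    // scale an oversized legacy priority proportionally into the new range
    legacy = legacy * OSD_POOL_PRIORITY_MAX / assumed_legacy_max;
  }
  return std::clamp(legacy, OSD_POOL_PRIORITY_MIN, OSD_POOL_PRIORITY_MAX);
}
```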
David Zafman
3a234164d0
Merge pull request #27279 from dzafman/wip-divergent
Improvements to standalone tests

Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-04-24 10:58:11 -07:00
Sage Weil
a3a4af3454 Merge PR #27656 into master
* refs/pull/27656/head:
	doc/dev/erasure-coded-pool: update
	doc/rados/operations/erasure-code*: update default ec profile references
	common/options: change default erasure-code-profile to k=2 m=2

Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-04-24 08:14:55 -05:00
David Zafman
7e77898001 test: Divergent testing of _merge_object_divergent_entries() cases
Case 1: A more recent update exists
Case 2: The first entry in the divergent sequence is a create
Case 3: NOT TESTED - Object currently missing
Case 4: We can rollback all of the entries
Case 5: We cannot rollback at least 1 of the entries

Support starting OSDs even when "noup" is set (don't wait for up).
Move create_ec_pool() to ceph-helpers.sh

Fixes: https://tracker.ceph.com/issues/39162

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-22 18:50:24 -07:00
Sage Weil
755e8c4ef2 Merge PR #27595 into master
* refs/pull/27595/head:
	osd: add 'ceph osd stop <osd.nnn>' command

Reviewed-by: Sage Weil <sage@redhat.com>
2019-04-20 08:52:01 -05:00
Sage Weil
3e86be7d50 common/options: change default erasure-code-profile to k=2 m=2
Signed-off-by: Sage Weil <sage@redhat.com>
2019-04-19 16:47:57 -05:00
xie xingguo
5dbae13ce0 osd: add 'ceph osd stop <osd.nnn>' command
The stop command can be used to force-stop a specified osd daemon, e.g.,
you don't have to figure out in advance where it is located.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
2019-04-18 13:55:02 +08:00
David Zafman
96861a8116 ceph-objectstore-tool: Rename dump-import to dump-export
If the user specifies dump-import it will still work, but it is no
longer listed that way in the usage.

Fixes: http://tracker.ceph.com/issues/39284

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-12 13:17:45 -07:00
Sage Weil
dc97651cbd Merge PR #27499 into master
* refs/pull/27499/head:
	qa/standalone/osd/osd-markdown: fix dup command disabling

Reviewed-by: Neha Ojha <nojha@redhat.com>
2019-04-12 06:54:58 -05:00
Sage Weil
f7216d0b2c qa/standalone/osd/osd-markdown: fix dup command disabling
The ceph cli tool checks for the presence of the variable, not its value.

Fixes: http://tracker.ceph.com/issues/38359
Signed-off-by: Sage Weil <sage@redhat.com>
2019-04-10 16:44:38 -05:00
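An illustration of the presence-vs-value pitfall the commit above points out; the variable name is hypothetical, as the actual one used by the test is not shown in the message.

```cpp
#include <cstdlib>

// A presence check: VAR=0, VAR="", and VAR=1 all count as "set".
// To disable the behavior, the variable must be unset entirely.
bool feature_enabled() {
  return std::getenv("HYPOTHETICAL_TEST_FLAG") != nullptr;
}
```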
David Zafman
69fa515c95 test: Make most tests use default objectstore bluestore
Change run_osd() to default to the bluestore objectstore
Use run_osd_filestore() to use the non-default objectstore
Fix inject_eio to handle any objectstore if the config is prefixed with the type

Remaining tests using filestore:
	osd-pool-create.sh TEST_pool_create_rep_expected_num_objects
		Test filestore directory creation
	qa/standalone/osd/osd-dup.sh TEST_filestore_to_bluestore
		Obvious
	qa/standalone/osd/osd-rep-recov-eio.sh TEST_rep_read_unfound
		Requires data digest in object info
	qa/standalone/scrub/osd-scrub-repair.sh multiple tests
		Erasure code pools append mode for filestore is tested
	qa/standalone/special/ceph_objectstore_tool.py
		Test code verifies COT by directly examining filestore contents

Fixes: https://tracker.ceph.com/issues/39162

Signed-off-by: David Zafman <dzafman@redhat.com>
2019-04-10 08:55:04 -07:00
Kefu Chai
3805935ae0
Merge pull request #26806 from xiexingguo/wip-repair-eio-rep
osd: automatically repair replicated replica on pulling error

Reviewed-by: David Zafman <dzafman@redhat.com>
2019-04-08 19:46:36 +08:00
xie xingguo
6a8aedc107 qa: add new test case for pulling error
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
2019-04-04 11:04:43 +08:00