Commit Graph

511 Commits

Author SHA1 Message Date
Dan van der Ster
b550112dba qa/standalone/osd: add bad-inc-map.sh
Test that the osd doesn't crash when it gets a bad incremental osdmap.

Related-to: https://tracker.ceph.com/issues/46443
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
2020-07-28 23:15:42 +02:00
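The test itself isn't shown here, but a minimal sketch of what such a standalone test might look like, following the qa/standalone ceph-helpers.sh conventions (run_mon, run_osd, wait_for_clean, main); using the osd_inject_bad_map_crc_probability option as the injection mechanism is an assumption for illustration, not the literal contents of bad-inc-map.sh:

```
#!/usr/bin/env bash
source $CEPH_ROOT/qa/standalone/ceph-helpers.sh

function TEST_bad_inc_map() {
    local dir=$1

    run_mon $dir a || return 1
    run_mgr $dir x || return 1
    run_osd $dir 0 || return 1
    run_osd $dir 1 || return 1

    # ask osd.0 to corrupt the CRC of the incremental maps it handles
    ceph tell osd.0 injectargs '--osd-inject-bad-map-crc-probability 1' || return 1

    # force a few new osdmap epochs, then confirm osd.0 is still alive
    ceph osd pool create test 8 || return 1
    wait_for_clean || return 1
    ceph tell osd.0 version || return 1   # would fail if osd.0 had crashed
}

main bad-inc-map "$@"
```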
David Zafman
365e48d6ec test: Check for interruption of scrubs with noscrub/nodeep-scrub
Signed-off-by: David Zafman <dzafman@redhat.com>
2020-07-24 11:41:20 -07:00
David Zafman
f272768802 test: mon-last-epoch-clean.sh fixed to avoid shell globbing
Signed-off-by: David Zafman <dzafman@redhat.com>
2020-07-24 11:40:24 -07:00
Kefu Chai
0ac787be2a qa/standalone: drop py2 support
Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-07-05 10:58:28 +08:00
Kefu Chai
48f0e02d76 qa/standalone: flake8 fixes
Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-06-23 23:01:27 +08:00
Neha Ojha
64bcd436cc
Merge pull request #35632 from dzafman/wip-46064
tools: Add statfs operation to ceph-objectstore-tool

Reviewed-by: Neha Ojha <nojha@redhat.com>
2020-06-18 16:25:04 -07:00
David Zafman
19054ceb43 tools: Add statfs operation to ceph-objectstore-tool
Fixes: https://tracker.ceph.com/issues/46064

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-06-18 10:07:38 -07:00
David Zafman
41322eaa62 test: flush_pg_stats() ignore OSDs that don't respond to getting sequence
This eliminates bogus errors in the logs and bogus error returns from flush_pg_stats()

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-06-16 17:45:26 -07:00
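A hedged sketch of the idea, loosely modeled on the ceph-helpers.sh flush_pg_stats helper (the exact helper code is not shown here): if an OSD returns no sequence number from the flush_pg_stats tell command, skip it instead of waiting for a stat sequence it will never report:

```
function flush_pg_stats() {
    local timeout=${1:-$TIMEOUT}
    local seqs='' osd seq s

    for osd in $(ceph osd ls); do
        # a down or restarting osd returns nothing here; skip it instead of
        # later waiting forever for a stat sequence it will never report
        seq=$(ceph tell osd.$osd flush_pg_stats)
        test -z "$seq" && continue
        seqs="$seqs $osd-$seq"
    done

    for s in $seqs; do
        osd=${s%-*}
        seq=${s#*-}
        local start=$SECONDS
        while test $(ceph osd last-stat-seq osd.$osd) -lt $seq; do
            test $((SECONDS - start)) -gt $timeout && return 1
            sleep 1
        done
    done
}
```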
David Zafman
661996d434 mgr: Warn when too many reads are repaired on an OSD
Include test case
Configurable by setting mon_osd_warn_num_repaired (default 10)
Ignore new health warning with random eio injection test

Fixes: https://tracker.ceph.com/issues/41564

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-06-16 17:45:27 -07:00
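A rough illustration of how a test might exercise the new warning; the config scope (global) and the OSD_TOO_MANY_REPAIRS health code name are assumptions here rather than details taken from the commit:

```
# lower the threshold so the warning is easy to trigger in a test
ceph config set global mon_osd_warn_num_repaired 2

# ...inject read errors and read the objects back so repairs accumulate...

# the warning should now show up in the health report
ceph health detail | grep OSD_TOO_MANY_REPAIRS

# tests that inject random eio errors would instead tolerate this warning
# rather than fail on it
```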
David Zafman
1efa5ca0a6
Merge pull request #35425 from dzafman/wip-44314
test: osd-backfill-stats.sh use nobackfill to avoid races in remainin…

Reviewed-by: Neha Ojha <nojha@redhat.com>
2020-06-09 17:15:52 -07:00
David Zafman
92f970cbed test: osd-backfill-stats.sh use nobackfill to avoid races in remaining test
Fixes: https://tracker.ceph.com/issues/44314

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-06-05 17:48:10 -07:00
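An illustrative sketch of the nobackfill pattern the commit describes: freeze backfill while the assertions run, then let the cluster settle:

```
# pause backfill so the pg stats about to be inspected cannot change
# underneath the assertions
ceph osd set nobackfill

# ...check misplaced/degraded counts, acting/up sets, primaries, etc...
ceph pg dump pgs

# resume and let the remaining backfill finish once the checks are done
ceph osd unset nobackfill
wait_for_clean || return 1
```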
Yuri Weinstein
b8f632327f
Merge pull request #35279 from badone/wip-py2-fix-osd-scrub-repair.sh
qa/*/osd-scrub-repair.sh: Convert to python3 print syntax

Reviewed-by: Kefu Chai <kchai@redhat.com>
2020-06-03 11:12:21 -07:00
Neha Ojha
3a06af5af5 qa/standalone/scrub/osd-scrub-snaps.sh: fix grep pattern
The error looks like this:

2020-05-28T20:56:30.214+0000 7f66cdecf700 -1 log_channel(cluster) log [ERR] : scrub 1.0 1:ab946124:::obj15:head : can't decode 'snapset' attr void SnapSet::decode(ceph::buffer::v15_2_0::list::const_iterator&) no longer understand old encoding version 3 < 97: Malformed input

Fixes: https://tracker.ceph.com/issues/45760
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-05-28 22:41:38 +00:00
Neha Ojha
f72b19d09c qa/standalone/scrub/osd-scrub-repair.sh: fix grep pattern to match decode exception
We fail because the error message in the log looks like:

2020-05-27T21:02:48.447+0000 7fbfc4e60700 -1 log_channel(cluster) log [ERR] : scrub 3.0 3:5c7b2c47:::ROBJ16:head : can't decode 'snapset' attr void SnapSet::decode(ceph::buffer::v15_2_0::list::const_iterator&) no longer understand old encoding version 3 < 97: Malformed input

Fixes: https://tracker.ceph.com/issues/45660
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-05-28 00:38:17 +00:00
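Both of these commits adjust the pattern the test greps for in the OSD log; a hedged illustration of matching the stable part of such a line (the exact pattern used by osd-scrub-snaps.sh and osd-scrub-repair.sh may differ):

```
# match the stable part of the error rather than the volatile prefix
# (timestamp, thread id, pg/object ids) or the versioned decoder signature
grep "can't decode 'snapset' attr .* no longer understand old encoding version" \
    $dir/osd.*.log || return 1
```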
Brad Hubbard
80e7b7c19b qa/*/osd-scrub-repair.sh: Convert to python3 print syntax
Fixes: https://tracker.ceph.com/issues/45733

Signed-off-by: Brad Hubbard <bhubbard@redhat.com>
2020-05-28 08:32:54 +10:00
Neha Ojha
7c8b627eaa qa/*/osd-scrub-repair.sh: don't fail if PG is in active+clean+wait
a0b453ad33 added the wait state, which can
make PGs stay in active+clean+wait for a while instead of going into
active+clean directly. As far as TEST_auto_repair_bluestore_failed is
concerned, we only care about the repair state being cleared.

Fixes: https://tracker.ceph.com/issues/45075
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-04-23 20:24:28 +00:00
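A sketch of the more tolerant check, assuming the test already knows the pg id in $pgid; the jq-based state extraction is illustrative, not the exact code in osd-scrub-repair.sh:

```
# pg state for the pg under test, e.g. "active+clean" or "active+clean+wait"
state=$(ceph pg $pgid query | jq -r '.state')

case "$state" in
    active+clean|active+clean+wait) ;;   # both are acceptable here
    *) return 1 ;;
esac

# what TEST_auto_repair_bluestore_failed really cares about:
# the repair state has been cleared
echo "$state" | grep -q repair && return 1
```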
Neha Ojha
4f82ebf41b qa/standalone/scrub/osd-scrub-repair.sh: fix race in TEST_auto_repair_bluestore_failed
We need to flush_pg_stats before checking for active+clean.

Fixes: https://tracker.ceph.com/issues/45075
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-04-20 18:29:51 +00:00
Neha Ojha
61ad12e6ad
Merge pull request #34541 from neha-ojha/wip-balancer-on
mgr: turn on balancer in upmap mode by default

Reviewed-by: Josh Durgin <jdurgin@redhat.com>
2020-04-15 15:03:28 -07:00
Kefu Chai
eff9d0fc9a
Merge pull request #19076 from jecluis/wip-mon-fix-osdmap-lec-trim
mon/OSDMonitor: allow trimming maps even if osds are down

Reviewed-by: Kefu Chai <kchai@redhat.com>
2020-04-15 08:02:51 +08:00
Neha Ojha
ec85af5b19 qa/standalone/mon/osd-pool-df.sh: flush_pg_stats explicitly
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-04-14 19:09:45 +00:00
Neha Ojha
321faa9c6b qa/standalone/mon/osd-pool-df.sh: fix test to check for the right values
Though the test passed, we weren't checking for the correct values:

.../qa/standalone/mon/osd-pool-df.sh:62: TEST_ceph_df:  ceph df -f json
.../qa/standalone/mon/osd-pool-df.sh:62: TEST_ceph_df:  jq .stats.total_avail_bytes
../qa/standalone/mon/osd-pool-df.sh:62: TEST_ceph_df:  local global_avail=0
.../qa/standalone/mon/osd-pool-df.sh:63: TEST_ceph_df:  ceph df -f json
.../qa/standalone/mon/osd-pool-df.sh:63: TEST_ceph_df:  jq '.pools | map(select(.name == "$rep_poolname"))[0].stats.max_avail'
../qa/standalone/mon/osd-pool-df.sh:63: TEST_ceph_df:  local rep_avail=null
.../qa/standalone/mon/osd-pool-df.sh:64: TEST_ceph_df:  ceph df -f json
.../qa/standalone/mon/osd-pool-df.sh:64: TEST_ceph_df:  jq '.pools | map(select(.name == "$ec_poolname"))[0].stats.max_avail'
../qa/standalone/mon/osd-pool-df.sh:64: TEST_ceph_df:  local ec_avail=null
../qa/standalone/mon/osd-pool-df.sh:66: TEST_ceph_df:  echo '0 >= null*3'
../qa/standalone/mon/osd-pool-df.sh:66: TEST_ceph_df:  bc
1
../qa/standalone/mon/osd-pool-df.sh:67: TEST_ceph_df:  echo '0 >= null*1.5'
../qa/standalone/mon/osd-pool-df.sh:67: TEST_ceph_df:  bc
1

Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-04-14 00:05:02 +00:00
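The trace above shows the underlying bug: the pool name sits inside single quotes, so jq compares against the literal string "$rep_poolname" and max_avail comes back as null. A hedged sketch of the corrected pattern (the pool name here is hypothetical):

```
rep_poolname=testpool   # hypothetical pool name for illustration

# broken: single quotes keep the shell from expanding $rep_poolname,
# so select() never matches and max_avail is null
rep_avail=$(ceph df -f json | \
    jq '.pools | map(select(.name == "$rep_poolname"))[0].stats.max_avail')

# fixed: pass the name in as a jq variable so the comparison uses its value
rep_avail=$(ceph df -f json | \
    jq --arg pool "$rep_poolname" \
       '.pools | map(select(.name == $pool))[0].stats.max_avail')
```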
Neha Ojha
480afa61b6 qa/standalone/mgr/balancer.sh: adapt test
Now that the balancer is on by default the test needs these changes.

Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-04-14 00:05:02 +00:00
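With the balancer on by default, a standalone test can either assert the new default or switch the balancer off before relying on static mappings; a rough sketch (the JSON shape of `ceph balancer status` is abridged and assumed here):

```
# the balancer is now expected to be active, in upmap mode, out of the box
ceph balancer status
# e.g. { "active": true, "mode": "upmap", ... }   (abridged)

# a test that needs stable pg mappings can simply switch it off first
ceph balancer off
```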
Sage Weil
731e508bbe qa/standalone/mon/msgr-v2-transition: remove test
v2 was introduced in nautilus, and we don't support mimic -> pacific
upgrades (only mimic -> octopus).  This test can be removed!

Signed-off-by: Sage Weil <sage@redhat.com>
2020-04-08 08:10:32 -05:00
Sage Weil
279c437994 qa/standalone/mon/misc: update TEST_mon_features
Signed-off-by: Sage Weil <sage@redhat.com>
2020-04-08 08:10:32 -05:00
Kefu Chai
b1738cd1ef qa/standalone/scrub: s/$(pgid)/${pgid}/
to address the test failures like
```
2020-04-07T15:44:58.693 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh:498: TEST_auto_repair_bluestore_failed:  ceph pg dump pgs
2020-04-07T15:44:58.694 INFO:tasks.workunit.client.0.smithi049.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh:498: TEST_auto_repair_bluestore_failed:  pgid
2020-04-07T15:44:58.694 INFO:tasks.workunit.client.0.smithi049.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/scrub/osd-scrub-repair.sh: line 498: pgid: command not found
```

Signed-off-by: Kefu Chai <kchai@redhat.com>
2020-04-08 00:54:46 +08:00
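The failure is a plain bash mix-up: `$(pgid)` is command substitution (run a program called `pgid`), while `${pgid}` expands the variable. A minimal illustration:

```
pgid=1.0

# command substitution: tries to run a program named "pgid"
# -> "pgid: command not found", and an empty string is substituted
ceph pg dump pgs | grep "^$(pgid) "    # broken

# parameter expansion: what the test actually meant
ceph pg dump pgs | grep "^${pgid} "    # fixed
```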
Sage Weil
04e0b9c2f8 Merge PR #34126 into master
* refs/pull/34126/head:
	qa/*/osd-backfill-recovery-log.sh: flush_pg_stats before checking log length

Reviewed-by: Sage Weil <sage@redhat.com>
2020-03-23 13:55:16 -05:00
Neha
cfebec1b12 qa/*/osd-backfill-recovery-log.sh: flush_pg_stats before checking log length
It is possible for the pg dump to not be the latest when we check for newprimary
in _common_test(). This is because mgr_stats_period is 5 seconds, and we may not
have fetched the latest stats just yet. This causes the test to look at the same
stats before and after wait_for_clean.

Fixes: https://tracker.ceph.com/issues/43807 (2)
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-03-23 15:37:12 +00:00
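A short sketch of the pattern this commit applies (flush_pg_stats is the ceph-helpers.sh helper):

```
# without this, the pg dump below may still reflect stats from before
# recovery finished (mgr_stats_period defaults to 5 seconds), so checks
# for newprimary or log length can race
flush_pg_stats || return 1

ceph pg dump pgs   # now safe to derive the new primary / log length from this
```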
Joao Eduardo Luis
3d682c21f6 qa/standalone: exercise osdmon's last epoch clean
Signed-off-by: Joao Eduardo Luis <joao@suse.de>
2020-03-23 14:58:59 +00:00
Kefu Chai
b0dca75a59
Merge pull request #34056 from xiexingguo/wip-44662
qa/*/osd-markdown.sh: propagate map to osd before testing its reaction

Reviewed-by: Neha Ojha <nojha@redhat.com>
2020-03-21 14:27:51 +08:00
xie xingguo
afdff0cd3f qa/*/osd-markdown.sh: propagate map to osd before testing its reaction
Mon might fail to share the newest map with any of the up osds, e.g.,
due to an injected broken pipe. Since we don't have any client
activity during the osd-markdown tests, osds might be unaware of
the map changes made through the CLI. Make sure osds have pulled the
newest map down before we test their reaction.

Fixes: https://tracker.ceph.com/issues/44662
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
2020-03-19 18:17:28 +08:00
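A hedged sketch of forcing map propagation before the assertions, using the `get_latest_osdmap` tell command that the next commit also relies on:

```
# make sure every up osd has caught up to the monitor's current epoch
# before asserting on how it reacts to the change just made via the CLI
for osd in $(ceph osd ls); do
    ceph tell osd.$osd get_latest_osdmap || return 1
done
```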
Neha
6edd1cb686 qa/standalone/osd/osd-backfill-stats.sh: get_latest_osdmap to propagate map change
Fixes: https://tracker.ceph.com/issues/44518
Signed-off-by: Neha Ojha <nojha@redhat.com>
2020-03-18 22:57:41 +00:00
Sage Weil
603383605f Merge PR #33885 into master
* refs/pull/33885/head:
	Merge pull request #33848 from mchangir/octopus-tests-remove-suprious-whitespace
	Merge PR #33746 into octopus
	Merge PR #33830 into octopus
	Merge PR #33732 into octopus
	Merge PR #33620 into octopus
	Merge pull request #33876 from tchaikov/octopus-cephadm-mypy
	cephadm: add "assert foo is not None" for mypy check
	Merge pull request #33067 from tspmelo/wip-rbd-delete-with-snapshot
	cephadm: add grafana adopt
	Merge PR #33771 into octopus
	Merge PR #33850 into octopus
	Merge PR #33853 into octopus
	Merge PR #33857 into octopus
	Merge PR #32990 into octopus
	Merge PR #33713 into octopus
	Merge PR #33838 into octopus
	qa/tasks/cephadm: no default mon|mgr|crash service specs
	qa/suites/rados/cephadm/upgrade: upgrade start point that supports the no-spec option
	Merge PR #33832 into octopus
	cephadm: bootstrap: wait for mgr to restart after enabling a module
	mgr: add 'mgr_status' tell command
	Merge pull request #33839 from rhcs-dashboard/44538-fix-rgw-grafana-get-put-latencies
	Merge pull request #33743 from votdev/issue_43869_fix_qa_test
	cephadm: create initial mon and mgr service specs too
	cephadm: no need to pregenerate a crash key for the bootstrap host
	mgr/cephadm: do not complain when we don't have enough hosts
	mgr/cephadm: remove orphan daemons
	mgr/cephadm: report size=0 for fabricated ServiceDescription
	mgr/cephadm: safety check to prevent removing all mon|mgr daemons
	mgr/cephadm: prevent scaling mon|mgr below count=1
	mgr/cephadm: do not remove daemons from remove_service
	Merge pull request #33805 from tchaikov/wip-44500
	spec: Podman (temporarily) requires apparmor-abstractions on suse
	mgr/cephadm: Make sure we don't co-locate the same daemon
	monitoring: fix RGW grafana chart 'Average GET/PUT Latencies'
	tests: remove spurious whitespace
	mgr/cephadm: fix service list filtering
	Merge PR #33825 into octopus
	Merge PR #33811 into octopus
	Revert "Merge pull request #33673 from cbodley/wip-denc-enum"
	mgr/cephadm: fix upgrade order
	Merge PR #33801 into octopus
	Merge PR #33822 into octopus
	cephadm: bootstrap: tolerate error return from -h
	Merge PR #33809 into octopus
	Merge PR #32678 into octopus
	cephadm: use `sh` instead of `bash` during enter
	ceph.in: only shut down rados on clean exit
	common/ceph_timer: Pass reference to waited time on stack
	common/ceph_timer: Add test
	common/ceph_timer: Use unique_function, allowing noncopyable events
	common/ceph_timer: Couple cleanups
	common/ceph_timer: Fix namespaces
	common/ceph_timer: Add missing includes
	common/ceph_timer.h: Don't indent contents of a namespace
	mgr/dashboard: Crush rule modal
	mgr/dashboard: Preserve rule selection on pool type change
	mgr/dashboard: Crush rule is only send during replicated pool creation
	mgr/dashboard: Explicit returns in pool form
	mgr/dashboard: Removes fork join in pool form
	mgr/dashboard: Hide ECP actions during ec pool edit
	mgr/dashboard: Pool form erasure/replicated boolean
	mgr/dashboard: Change pool info API endpoint
	mgr/dashboard: Moves ECP info endpoint to UI-API
	mgr/cephadm: add _remove_osds_bg back to main loop
	mgr/cephadm/osd: update removal report immediately
	qa/tasks/ceph_manager: use StringIO for capturing COT output
	qa/standalone/scrub/osd-scrub-repair: force osdmap prop to osds
	qa/standalone/scrub/osd-scrub-test: wait longer for update
	qa/tasks/ceph_manager: capture stderr for COT
	qa/suites/rados/ceph: drop opensuse for now
	mon/MonClient: send logs to mon on separate schedule than pings
	mgr/dashboard: Fix missing ImageSpec usage
	mgr/dashboard: Allow removing RBD with snapshots
	mgr/dashboard: Refactor and cleanup tasks.mgr.dashboard.test_user
	mgr/dashboard: support multiple DriveGroups when creating OSDs
	mon/MonClient: send logs to mon even if we have no keepalive2
	cephadm: flag dashboard user to change password

Reviewed-by: Sebastian Wagner <swagner@suse.com>
2020-03-11 17:38:59 -05:00
Neha Ojha
6117a0d4db
Merge pull request #33281 from ideepika/wip-set-osd-pool-size-extra-param-check
mon/OSDMonitor: add flag `--yes-i-really-mean-it` for setting pool size 1

Reviewed-by: Greg Farnum <gfarnum@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
2020-03-09 19:14:50 -07:00
Sage Weil
3212932ba1 Merge PR #33809 into octopus
* refs/pull/33809/head:
	qa/standalone/scrub/osd-scrub-repair: force osdmap prop to osds
	qa/standalone/scrub/osd-scrub-test: wait longer for update

Reviewed-by: David Zafman <dzafman@redhat.com>
2020-03-09 15:28:19 -05:00
Deepika Upadhyay
21508bd9dd mon/OSDMonitor: add flag --yes-i-really-mean-it for setting pool size 1
Adds the option `mon_allow_pool_size_one`, disabled by default, to ensure
pools are not configured without replicas.
If the user still wants to use pool size 1, they will have to set
`mon_allow_pool_size_one` to true and then pass the flag
`--yes-i-really-mean-it` to the CLI command:

Example:
`ceph osd pool set test size 1 --yes-i-really-mean-it`

Fixes: https://tracker.ceph.com/issues/44025
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
2020-03-09 23:27:36 +05:30
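A sketch of the full opt-in sequence as described above; whether the option is set at the `mon` or `global` config scope is an assumption here:

```
# size-1 pools are refused unless the operator opts in twice:
ceph config set mon mon_allow_pool_size_one true
ceph osd pool set test size 1 --yes-i-really-mean-it
```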
Sage Weil
0447ed0ff9 qa/standalone/scrub/osd-scrub-repair: force osdmap prop to osds
flush_pg_stats isn't sufficient to ensure that OSDs have the latest
OSDMap.

Signed-off-by: Sage Weil <sage@redhat.com>
2020-03-08 14:52:10 -05:00
Sage Weil
ac9befd450 qa/standalone/scrub/osd-scrub-test: wait longer for update
Fixes: https://tracker.ceph.com/issues/43865
Signed-off-by: Sage Weil <sage@redhat.com>
2020-03-08 14:45:00 -05:00
David Zafman
e509b7c7d0 test: Add flush_pg_stats to avoid race with getting num_shards_repaired
Fixes: https://tracker.ceph.com/issues/44439

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-03-06 04:25:37 +00:00
Kefu Chai
c6088bdd26
Merge pull request #33593 from dzafman/wip-cot-fix
test: Fix failing ceph_objectstore_tool.py test

Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
2020-03-02 18:58:19 +08:00
Kefu Chai
7b0e18c09e
Merge pull request #33566 from dzafman/wip-44296
test: Expect being off by up to 2 and make sure all PGs are active+clean

Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
2020-02-28 11:42:47 +08:00
David Zafman
08f7e7980f test: Fix failing ceph_objectstore_tool.py test
The -N option to vstart.sh was removed; use -k.

The old hinfo_key binary data happened to be utf-8 decodable, but now it
throws an exception when trying to decode it. Use the new
ceph-objectstore-tool option to treat stdout as a terminal
and convert binary data to base64.

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-02-27 18:14:36 -08:00
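A hedged illustration of the base64 idea using the existing `get-attr` operation of ceph-objectstore-tool; the new option the commit refers to is not named here, and `$json_obj` is a placeholder for the object spec the test already has:

```
# hinfo_key holds binary data; decoding it as utf-8 under python3 raises,
# so base64-encode the attribute before comparing or printing it
ceph-objectstore-tool --data-path $dir/0 --pgid $pgid \
    "$json_obj" get-attr hinfo_key | base64
```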
David Zafman
49d9c7d664 test: Expect being off by up to 2 and make sure all PGs are active+clean
Fixes: https://tracker.ceph.com/issues/44296

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-02-27 18:12:25 -08:00
David Zafman
587cd64207
Merge pull request #32342 from dzafman/wip-43126
mon: Improvements to slow heartbeat health messages

Reviewed-by: Sage Weil <sage@redhat.com>
2020-02-25 17:42:00 -08:00
Sage Weil
4d42b4c5a0 common/TextTable: default to 2 spaces separating columns
This is what other projects and libraries default to, and it is more
legible.

Signed-off-by: Sage Weil <sage@redhat.com>
2020-02-23 15:46:30 -06:00
Sage Weil
5afec0fbfb Merge PR #33091 into master
* refs/pull/33091/head:
	qa/suites/rados: disable device scraping
	qa/standalone/ceph-helpers: disable device monitoring
	qa/tasks/ceph.py: add pre-mgr-commands option for ceph task
	mgr/devicehealth: set default monitoring to 'on'

Reviewed-by: Sage Weil <sage@redhat.com>
2020-02-22 12:05:55 -06:00
xie xingguo
023524a26d osd/PeeringState: restart peering on any previous down acting member coming back
One of our customers wants to verify the data safety of Ceph while scaling
the cluster up, and the test case looks like:
- keep checking the status of a specified pg, whose up set is [1, 2, 3]
- add more osds: up [1, 2, 3] -> up [1, 4, 5], acting = [1, 2, 3], backfill_targets = [4, 5],
  pg is remapped
- stop osd.2: up [1, 4, 5], acting = [1, 3], backfill_targets = [4, 5], pg is undersized
- restart osd.2: acting stays unchanged since 2 belongs to neither the current up nor acting set,
  hence leaving the corresponding pg pinned undersized for a long time until all backfill
  targets complete

It does not pose any critical problem -- we'll end up getting that pg back into active + clean,
except that the long-lived DEGRADED warnings keep bothering our customer, who cares about data
safety more than anything else.

The right way to achieve the above goal is for:

	boost::statechart::result PeeringState::Active::react(const MNotifyRec& notevt)

to check whether the newly booted node could be validly chosen for the acting set and
request a new temp mapping. The new temp mapping would then trigger a real interval change
that will get rid of the DEGRADED warning.

Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Signed-off-by: Yan Jun <yan.jun8@zte.com.cn>
2020-02-21 17:52:52 +08:00
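A rough bash sketch of the customer-style check described above, watching one pg's up/acting sets while osd.2 bounces (the pg id is illustrative):

```
pgid=1.0   # the pg being watched (illustrative)

# inspect the sets and state before/after stopping and restarting osd.2
ceph pg $pgid query | jq '{up: .up, acting: .acting, state: .state}'

# with this change, a returning osd.2 that is a valid acting-set candidate
# triggers a new interval (via a pg_temp request) instead of leaving the pg
# undersized/degraded until backfill to osd.4 and osd.5 completes
```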
Sage Weil
455cdcf89a qa/standalone/ceph-helpers: disable device monitoring
Signed-off-by: Sage Weil <sage@redhat.com>
2020-02-19 15:31:26 -06:00
Sage Weil
f10cc22c60 Merge PR #32961 into master
* refs/pull/32961/head:
	qa/standalone/osd/osd-bench: debug bluestore

Reviewed-by: Neha Ojha <nojha@redhat.com>
2020-01-30 10:42:17 -06:00
Sage Weil
b99e506a3f qa/standalone/osd/osd-bench: debug bluestore
Looking for https://tracker.ceph.com/issues/43888

Signed-off-by: Sage Weil <sage@redhat.com>
2020-01-29 07:43:41 -06:00
David Zafman
e18519ad09 test: Update pg log test for new trimming behavior
Fixes: https://tracker.ceph.com/issues/43864

Signed-off-by: David Zafman <dzafman@redhat.com>
2020-01-28 15:23:45 -08:00