The CRUSH rule creation is busted (rules and buckets out of order), but
after I fix that it doesn't seem to run right anyway. Remove it.
We get the mon thrasher coverage from rados/monthrash already; I don't
think this is adding meaningful coverage for the amount of effort it takes
to maintain.
Signed-off-by: Sage Weil <sage@redhat.com>
When a collection is split, this needs to be persisted again. Normally
it is only persisted when the missing set is rebuilt during a new
interval in which it previously did not include deletes; during a split,
however, we keep the in-memory missing set's may_include_deletes flag
but do not rebuild the missing set.
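A minimal sketch of the fix, with hypothetical names standing in for the
actual Ceph store and PG types (only the omap key name comes from the
surrounding commits):

    #include <map>
    #include <string>

    // Stand-in for a PG's per-collection metadata store; not the real
    // ObjectStore API.
    struct PGMetaStore {
      std::map<std::string, std::string> omap;
    };

    // On split the child inherits the parent's in-memory
    // may_include_deletes flag but gets a fresh collection, so the omap
    // marker must be written again; otherwise a restart would rebuild
    // the child's missing set without delete tracking.
    void persist_flags_on_split(PGMetaStore& child, bool may_include_deletes) {
      if (may_include_deletes)
        child.omap["may_include_deletes_in_missing"] = "1";
    }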
Fixes: http://tracker.ceph.com/issues/20704
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This prevents us from importing a missing set without also setting the
may_include_deletes_in_missing omap value if appropriate.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This was dropped in bf49385679 but should
not have been. Since we are advertising the addr and not the bind
addr there is no reason to disable this check.
Signed-off-by: Sage Weil <sage@redhat.com>
Immediately after we bind to a port, but before we have set up our
auth infrastructure, we may get incoming connections. Deny them. Since
we are not yet advertising ourselves, these are peers trying to connect
to old instances of daemons, not us.
This triggers now because of bf49385679.
Previously, the peer would see we were a different addr and drop the
connection. Now, it continues.
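As a sketch of the shape of that guard (names are hypothetical, not the
real messenger code):

    #include <atomic>

    // Flipped once the daemon starts advertising its address.
    std::atomic<bool> advertised{false};

    // Accept-path check: while we are bound but not yet advertising, any
    // incoming peer must be looking for an old instance of a daemon that
    // used this port, so refuse the connection.
    bool should_accept_incoming() {
      return advertised.load();
    }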
Fixes: http://tracker.ceph.com/issues/20667
Signed-off-by: Sage Weil <sage@redhat.com>
messages/: always set header.version in encode_payload()
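A minimal sketch of the pattern this enforces, using stand-in types
rather than the real Message hierarchy:

    #include <cstdint>
    #include <vector>

    struct Header { uint16_t version = 0; };

    struct MExample {
      static constexpr uint16_t HEAD_VERSION = 3;  // per-message encoding version
      Header header;
      std::vector<uint8_t> payload;

      void encode_payload(uint64_t /*features*/) {
        // Stamp the version on every encode instead of relying on the
        // constructor, so a re-encode cannot leave a stale header.version.
        header.version = HEAD_VERSION;
        // ... encode fields into payload ...
      }
    };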
Reviewed-by: Haomai Wang <haomai@xsky.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
cephtool/test.sh: Only delete a test pool when no longer needed.
Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
Reviewed-by: xie xingguo <xie.xingguo@zte.com.cn>
We only allow alphanumeric and underscore characters in tenant names,
according to the validation in `RGWHandler_REST::validate_tenant_name`.
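A hedged sketch of that check (the function below is illustrative, not
the exact RGW source):

    #include <cctype>
    #include <string>

    // Accept only [A-Za-z0-9_], per the rule described above.
    static bool valid_tenant_name(const std::string& name) {
      for (char c : name) {
        if (!std::isalnum(static_cast<unsigned char>(c)) && c != '_')
          return false;
      }
      return true;
    }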
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
The pool_getset pool is deleted before all tests on it are complete:
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1990: test_mon_osd_pool_set: ceph osd pool delete pool_getset pool_getset --yes-i-really-really-mean-it
4: pool 'pool_getset' removed
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: ceph osd pool get rbd crush_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1992: test_mon_osd_pool_set: grep 'crush_rule: '
4: crush_rule: replicated_rule
4: /home/jenkins/workspace/ceph-master/qa/workunits/cephtool/test.sh:1994: test_mon_osd_pool_set: ceph -f json osd pool get pool_getset compression_mode
4: Error ENOENT: unrecognized pool 'pool_getset'
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
This randomly issues pg force-recovery/force-backfill and
pg cancel-force-recovery/cancel-force-backfill during QA
testing. Disabled for upgrades from hammer, jewel and kraken.
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
Documentation for the new pg force-recovery, pg force-backfill,
pg cancel-force-recovery and pg cancel-force-backfill commands.
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
This commit implements the MOSDForceRecovery handler along with
all required code to have PGs processed in the desired order
(PGs with force_recovery/force_backfill first).
Obviously this does not work cluster-wide: OSDs that are not targeted
by the force request, but that share work with the affected OSDs, may
cut into the PG recovery queue and cause PGs with force_* flags to be
recovered or backfilled later than expected, though still far earlier
than without forcing.
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
The optional arg ("front") was meant to control whether a PG was put
at the front or the back (the default) of awaiting_throttle. For some
reason it wasn't used at all, so this commit removes it and replaces
it with logic that checks whether the PG has forced backfill or
recovery set, and puts it at the front only in that case.
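A minimal sketch of the replacement logic, assuming a deque-backed
awaiting_throttle and a simplified forced flag:

    #include <deque>

    struct PG {
      bool forced = false;  // stand-in for forced backfill/recovery state
    };

    std::deque<PG*> awaiting_throttle;

    void queue_for_recovery(PG* pg) {
      if (pg->forced)
        awaiting_throttle.push_front(pg);  // forced PGs jump the queue
      else
        awaiting_throttle.push_back(pg);   // everyone else waits in line
    }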
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
Implement commands "pg force-recovery", "pg force-backfill", "pg
cancel-force-recovery" and "pg cancel-force-backfill" that accept
one or more PG IDs and cause these PGs to be recovered or
backfilled first. The "cancel-*" commands can be used to revert the
effect of the "pg force-*" commands.
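For example (the PG IDs here are placeholders):

    ceph pg force-backfill 2.3 2.4
    ceph pg cancel-force-backfill 2.3 2.4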
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
Introduce new message type (MOSDForceRecovery) that will be used to
force (or cancel forcing) PG recovery/backfill.
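Sketched shape of the message (field names and flag layout are
assumptions, not quoted from the source):

    #include <cstdint>
    #include <vector>

    struct pg_t { uint64_t pool; uint32_t seed; };  // simplified PG id

    struct MOSDForceRecovery {
      std::vector<pg_t> forced_pgs;  // PGs to force or un-force
      uint8_t options = 0;           // bit flags: recovery vs. backfill,
                                     // force vs. cancel
    };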
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>
Reduce the maximum automatically calculated recovery/backfill priority
to 254 and reserve 255 for forced backfill/recovery, so recovery/backfill
of user-designated PGs can be requested ahead of other PGs currently
being backfilled and/or recovered. Clear PG_STATE_FORCED_BACKFILL and
PG_STATE_FORCED_RECOVERY once recovery/backfill is done.
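As a numeric sketch (only the constants 254 and 255 come from the text
above; the helper is hypothetical):

    #include <algorithm>

    constexpr unsigned MAX_AUTO_PRIORITY = 254;  // cap for computed priorities
    constexpr unsigned FORCED_PRIORITY   = 255;  // reserved for pg force-*

    unsigned effective_priority(unsigned computed, bool forced) {
      if (forced)
        return FORCED_PRIORITY;                      // forced PGs sort first
      return std::min(computed, MAX_AUTO_PRIORITY);  // never collide with 255
    }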
Signed-off-by: Piotr Dałek <piotr.dalek@corp.ovh.com>