- OSDMap encode and decode translate between the flag and int
  representations (see the sketch after this list).
- OSDMap::Incremental only does the decode side; we never expect to encode
  an incremental osdmap that sets any of these flags for an old osd.
- the 'osd set' command still lets you set the jewel and kraken flags,
but not luminous.
- OSDMap::apply_incremental handles converting the legacy require flags
  to the new field, in case the jewel or kraken flags are set before the
  osd upgrade starts.
- clear out the legacy flags only when we make the luminous transition;
  until then we keep using the old flags in the encoded and decoded
  versions (although the require_osd_release field will be accurate in
  memory in all cases).
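
Roughly, the translation amounts to something like the sketch below. This is
illustrative only; the flag names, release values, and helpers are
placeholders rather than the actual OSDMap code:

    // Sketch only: placeholder names and values, not the real OSDMap code.
    #include <cstdint>

    constexpr uint64_t FLAG_REQUIRE_JEWEL    = 1ull << 0;
    constexpr uint64_t FLAG_REQUIRE_KRAKEN   = 1ull << 1;
    constexpr uint64_t FLAG_REQUIRE_LUMINOUS = 1ull << 2;
    constexpr uint8_t RELEASE_JEWEL = 10, RELEASE_KRAKEN = 11,
                      RELEASE_LUMINOUS = 12;

    // Encode: fold the int field back into the legacy flags so that
    // pre-luminous daemons still see a requirement they understand.
    uint64_t flags_for_encode(uint64_t flags, uint8_t require_osd_release) {
      if (require_osd_release >= RELEASE_JEWEL)
        flags |= FLAG_REQUIRE_JEWEL;
      if (require_osd_release >= RELEASE_KRAKEN)
        flags |= FLAG_REQUIRE_KRAKEN;
      return flags;
    }

    // Decode: derive the int field from whichever legacy flag is set,
    // so require_osd_release is always accurate in memory.
    uint8_t release_from_flags(uint64_t flags, uint8_t current) {
      if (flags & FLAG_REQUIRE_LUMINOUS)
        return RELEASE_LUMINOUS;
      if (flags & FLAG_REQUIRE_KRAKEN)
        return RELEASE_KRAKEN;
      if (flags & FLAG_REQUIRE_JEWEL)
        return RELEASE_JEWEL;
      return current;
    }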
Signed-off-by: Sage Weil <sage@redhat.com>
This parsed out as:

    tasks:
    - install: null
    - ceph:
        conf:
          osd: osd max object name len = 400 osd max object namespace len = 64
    - workunit:
        clients:
          all:
          - rados/test_health_warnings.sh
which is clearly not correct: the two osd config options were collapsed into
a single value for the 'osd' key instead of being parsed as separate settings.
Signed-off-by: Sage Weil <sage@redhat.com>
The scrub_pgs command also waits for the cluster to become healthy for a
while, but fails silently if it times out, which means the subsequent scrubs
will also fail to clean up.
This forces an earlier failure that does not obscure the root cause.
Signed-off-by: Sage Weil <sage@redhat.com>
We don't want to do the at-end.yaml scrubbing business with this test.
Move it into a separate collection until after luminous.
I have a todo item on the post-luminous cleanup list to avoid forgetting
to move this back.
Fixes: http://tracker.ceph.com/issues/19935
Signed-off-by: Sage Weil <sage@redhat.com>
The OSDs must have a map reflecting the require_luminous flag in order
for the legacy conversion to happen. A quick rados bench should ensure
that.
Signed-off-by: Sage Weil <sage@redhat.com>
This lets us run multiple cleanup steps right before ceph
teardown.
Note that we drop the facet from multimon/ because it
doesn't factor out cluster creation before this step
properly. That's fine because the require_luminous
cleanup shouldn't be related to the multimon tests.
Signed-off-by: Sage Weil <sage@redhat.com>
Otherwise the bluestore tests will fail with errors like
qa/workunits/cephtool/test.sh:1343: test_mon_osd_pool: ceph osd pool set ec_test allow_ec_overwrites true
Error EINVAL: pool must only be stored on bluestore for scrubbing to work: osd.1 uses filestore
qa/workunits/cephtool/test.sh:1343: test_mon_osd_pool: return 1
Signed-off-by: Kefu Chai <kchai@redhat.com>
cephtool.yaml is bluestore-only, yet it was in singleton/, which runs against
a generalized objectstore matrix.
Fixes: http://tracker.ceph.com/issues/19797
Signed-off-by: Nathan Cutler <ncutler@suse.com>
The tests that exercise mgr failover do not necessarily
leave a happy, working mgr daemon in place, and since
pg dump moved into the mgr, they should not try to
call "pg dump" to validate PG state on shutdown.
Signed-off-by: John Spray <john.spray@redhat.com>
Keep the pool flag around so we can distinguish between a pool that
should maintain hashes for each chunk (where a missing hash is a bug)
and an overwrites pool where we rely on bluestore checksums for
detecting corruption.
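
In sketch form the distinction is something like the following; the flag name
and helper here are hypothetical, not the actual OSD scrub code:

    // Sketch only: hypothetical names, not the real OSD scrub code.
    #include <cstdint>

    constexpr uint64_t POOL_FLAG_EC_OVERWRITES = 1ull << 0;

    // A missing per-chunk hash is only an error when the pool is expected
    // to maintain those hashes; an overwrites pool relies on bluestore
    // checksums instead.
    bool missing_chunk_hash_is_error(uint64_t pool_flags, bool has_chunk_hash) {
      if (has_chunk_hash)
        return false;
      if (pool_flags & POOL_FLAG_EC_OVERWRITES)
        return false;
      return true;
    }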
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
'remap' is too non-specific a name. In particular, it
sounds like it is related to the 'remapped' PG state,
but in reality it is not.
'upmap' or 'pg-upmap' is more specific: it maps a pgid
to the 'up' set value (or item).
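
In sketch form, with std containers standing in for the actual OSDMap
members, the two mappings look roughly like:

    // Sketch only: std types standing in for the OSDMap members.
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    using pg_id = uint64_t;  // stand-in for pg_t

    // pg-upmap: pin a pgid to an explicit 'up' set (the "value" case).
    std::map<pg_id, std::vector<int32_t>> pg_upmap;

    // pg-upmap-items: remap individual members of the 'up' set as
    // (from_osd, to_osd) pairs (the "item" case).
    std::map<pg_id, std::vector<std::pair<int32_t, int32_t>>> pg_upmap_items;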
Signed-off-by: Sage Weil <sage@redhat.com>
qa/suites: drop 'fs' facet, and add 'objectstore' facet where missing
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Reviewed-by: Jason Dillaman <dillaman@redhat.com>