When a pool is created with ceph osd pool create, the auid is not
inferred from the session auid and is set to zero. Add the
ceph osd pool set <pool> auid <int>
command to set it after it is created, and the matching get:
ceph osd pool get <pool> auid
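For example (the pool name and auid value here are purely illustrative):
    ceph osd pool create foo 8
    ceph osd pool set foo auid 123
    ceph osd pool get foo auid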
Signed-off-by: Loic Dachary <loic@dachary.org>
Modified the qemu-iotests workunit script to check for versions that use
the latest qemu (currently only Trusty), and limited the tests to those
that are applicable to rbd.
Fixes: 7882
Signed-off-by: Warren Usui <warren.usui@inktank.com>
Listing objects isn't reliable with cache pools; skip that part of the
test if we see that rbd has tiering enabled.
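A rough sketch of that skip, assuming the rbd pool line in 'ceph osd dump'
grows tier-related fields once a cache tier is attached (the actual test
may detect this differently):
    if ceph osd dump | grep "'rbd'" | grep -q tier; then
        echo "rbd has tiering enabled, skipping object listing checks"
        exit 0
    fi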
Signed-off-by: Sage Weil <sage@inktank.com>
There are several perils when splitting a cache pool:
- split invalidates pg stats, which disables the agent
- a scrub must be manually triggered post-split to rebuild stats
- the pool may fill the OSDs during that period
- or, the pool may end up beyond the 'full' mark, and once the scrub does
complete and the agent activates, we may block IO for a long time while
we catch up with flush/evict
Make it a bit harder for users to shoot themselves in the foot.
Fixes: #8043
Signed-off-by: Sage Weil <sage@inktank.com>
Create a custom profile with ruleset-failure-domain=osd. (The default
ruleset-failure-domain=host won't do, because this script assumes, and
works only when, all osds are on the same host.) While at it, set k and m
explicitly to avoid troubles in the future.
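For example (the profile name and k/m values are illustrative):
    ceph osd erasure-code-profile set myprofile k=2 m=1 ruleset-failure-domain=osd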
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
OSDs that for some reason get behind on processing their op queue break
expect_alloc_hint_eq(), as it pokes the FS and not the journal. Fix it
by flushing the journal before proceeding with anything else.
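A sketch of that flush, assuming the flush_journal admin socket command is
reachable for each OSD (the workunit may enumerate the OSD ids and invoke
it differently):
    for id in 0 1 2; do
        ceph daemon osd.$id flush_journal
    done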
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
The qa and functional tests are adapted to the new command prototype
requiring a profile instead of a list of properties. When possible the
implicit ruleset creation is used to simplify the test setup.
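For instance, instead of listing erasure-code properties on the command
line, a test now does something along these lines (names and values are
illustrative), letting the ruleset be created implicitly from the profile:
    ceph osd erasure-code-profile set testprofile k=2 m=1
    ceph osd pool create ecpool 12 12 erasure testprofile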
Signed-off-by: Loic Dachary <loic@dachary.org>
A new module, s3_utilities.pm has been created. It contains
subroutines common to at least two of the workunits in this
directory. Code was moved here from the other pl files, and
some minor changes (parameter and scope changes) were needed.
Fixes: 7472
Signed-off-by: Warren Usui <warren.usui@inktank.com>
If I have to touch this again I will remove it. Ugh. This time,
ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-03-11_02:30:01-rados-firefly-distro-basic-plana/125922
hit ENXIO a few lines down because one of the OSDs was still down.
Signed-off-by: Sage Weil <sage@inktank.com>
Added the port (currently a fixed value in teuthology) to the hostname.
Fixes: 7374
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Signed-off-by: Warren Usui <warren.usui@inktank.com>
(cherry picked from commit 8200b8a025)
- fix the wait check for osds to come back up
- make sure they get marked back in, too
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
'rados cppool' copies the contents, but that doesn't make the destination
pool an unmanaged-snaps pool. Therefore, we should get ENOTSUP when we try
to remove an unmanaged snap from a pool that is not an unmanaged-snaps pool.
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
This wreaks havoc on our QA because it marks osds up and down, and then
immediately afterwards we try to scrub while some osds are still down.
Adjust the CLI test to wait for all OSDs to come back up after thrashing.
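A rough sketch of such a wait loop (the OSD count and the exact output
matched here are illustrative; the real test may key off different state):
    until ceph osd stat | grep -q '3 up, 3 in'; do
        sleep 1
    done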
Signed-off-by: Sage Weil <sage@inktank.com>
Prevent creation of buckets of type 0 ('osd', 'device', etc.), as they
will confuse the mapping algorithm.
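For example (bucket names are illustrative), the first command is now
rejected while the second keeps working:
    ceph osd crush add-bucket fake-osd osd
    ceph osd crush add-bucket newrack rack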
Signed-off-by: Sage Weil <sage@inktank.com>
The cache pools will throttle when they reach the target max size, so it
is important to make the administrator aware when they approach that point.
Unfortunately it is not particularly easy to efficiently keep track of
which PGs have hit their limit and use that for reporting. However, it
is easy to raise a flag when we start to approach the target for the
entire pool, and that sort of early warning is arguably more useful
anyway.
Trigger the warning based on the target full ratio: not when we hit the
target, but when we are 2/3 of the way between it and completely full.
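For example, with cache_target_full_ratio = 0.8, the warning would fire
around 0.8 + (1.0 - 0.8) * 2/3 ≈ 0.93 of target_max_bytes (illustrative
arithmetic based on the description above; the exact threshold in the code
may be computed slightly differently).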
Implements: #7442
Signed-off-by: Sage Weil <sage@inktank.com>