Create a custom profile with ruleset-failure-domain=osd. (The default
ruleset-failure-domain=host won't do, because this script assumes, and
only works when, all OSDs are on the same host.) While we're at it, set
k and m explicitly to avoid trouble in the future.
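A minimal sketch of what that setup looks like (the profile name and
the k/m values here are illustrative):

    # k/m pinned explicitly; chunks spread across OSDs rather than hosts
    ceph osd erasure-code-profile set myprofile \
        k=2 m=1 ruleset-failure-domain=osd
    # confirm what the profile ended up with
    ceph osd erasure-code-profile get myprofile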
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
OSDs that for some reason get behind on processing their op queue break
expect_alloc_hint_eq(), as it pokes the FS and not the journal. Fix it
by flushing the journal before proceeding with anything else.
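A minimal sketch of the fix, assuming a local cluster with OSDs 0-2 and
the flush_journal admin socket command:

    # make sure everything the OSDs have acked is applied to the FS
    # before expect_alloc_hint_eq() goes poking at it
    for osd in 0 1 2; do
        ceph daemon osd.$osd flush_journal
    done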
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
The qa and functional tests are adapted to the new command prototype,
which requires a profile instead of a list of properties. Where
possible, implicit ruleset creation is used to simplify the test setup.
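Roughly the shape the updated tests take (names and PG counts are
illustrative):

    # new prototype: reference a named profile instead of listing
    # erasure-code properties on the command line
    ceph osd erasure-code-profile set testprofile \
        k=2 m=1 ruleset-failure-domain=osd
    # creating the pool from the profile implicitly creates a matching
    # CRUSH ruleset, so the tests no longer create one by hand
    ceph osd pool create ecpool 12 12 erasure testprofile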
Signed-off-by: Loic Dachary <loic@dachary.org>
A new module, s3_utilities.pm, has been created. It contains
subroutines common to at least two of the workunits in this directory.
Code was moved there from the other .pl files, and some minor changes
(parameter and scope changes) were needed.
Fixes: 7472
Signed-off-by: Warren Usui <warren.usui@inktank.com>
If I have to touch this again I will remove it. Ugh. This time,
ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-03-11_02:30:01-rados-firefly-distro-basic-plana/125922
hit ENXIO a few lines down because one of the OSDs was still down.
Signed-off-by: Sage Weil <sage@inktank.com>
Added the port (a fixed value in teuthology for now) to the hostname.
Fixes: 7374
Reviewed-by: Yehuda Sadeh <yehuda@inktank.com>
Signed-off-by: Warren Usui <warren.usui@inktank.com>
(cherry picked from commit 8200b8a025)
- fix the wait check for OSDs to come back up
- make sure they get marked back in, too (see the sketch below)
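A sketch of the intended sequence, assuming a three-OSD cluster
(ids 0-2):

    # wait for the thrashed OSDs to be reported up again ...
    until ceph osd stat | grep -q '3 up'; do
        sleep 1
    done
    # ... and put back in any that were left marked out
    for osd in 0 1 2; do
        ceph osd in $osd
    done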
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
'rados cppool' copies the contents, but that does not make the
destination pool an unmanaged-snaps pool. Therefore, we should expect
ENOTSUP when we try to remove an unmanaged snap from a pool that is not
in unmanaged-snaps mode.
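For illustration (pool names are made up; in the test the snap removal
itself goes through librados, which is where the ENOTSUP surfaces):

    ceph osd pool create copy-pool 8
    rados cppool base-pool copy-pool
    # copy-pool now holds the objects, but it is not in unmanaged-snaps
    # mode, so a selfmanaged_snap_remove() against it is expected to
    # fail with ENOTSUP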
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
This wreaks havoc on our QA because it marks OSDs up and down, and then
immediately afterwards we try to scrub while some OSDs are still down.
Adjust the CLI test to wait for all OSDs to come back up after
thrashing.
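In the CLI test that boils down to polling before the scrub commands,
along these lines (the OSD count is illustrative):

    # thrashing may have left OSDs down; do not scrub until 'osd stat'
    # shows them all up again
    until ceph osd stat | grep -q '3 osds: 3 up'; do
        sleep 1
    done
    ceph osd scrub 0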
Signed-off-by: Sage Weil <sage@inktank.com>
Prevent creation of buckets of type 0 ('osd', 'device', etc.), as they
will confuse the mapping algorithm.
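For illustration (the bucket name is made up):

    # 'osd' is type 0 in the default CRUSH hierarchy; creating a bucket
    # of that type is now rejected
    ceph osd crush add-bucket badbucket osd   # expected to fail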
Signed-off-by: Sage Weil <sage@inktank.com>
The cache pools will throttle when they reach the target max size, so it
is important to make the administrator aware when they approach that point.
Unfortunately it is not particularly easy to efficiently keep track of
which PGs have hit their limit and use that for reporting. However, it
is easy to raise a flag when we start to approach the target for the
entire pool, and that sort of early warning is arguably more useful
anyway.
Trigger the warning based on the target full ratio: not when we hit the
target, but when we are 2/3 of the way between it and completely full.
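A worked illustration of the thresholds (the pool name and numbers are
made up; the 2/3 factor is the warn fraction described above):

    ceph osd pool set hot-pool target_max_bytes 1000000000
    ceph osd pool set hot-pool cache_target_full_ratio 0.8
    # the "target" full point is cache_target_full_ratio * target_max_bytes
    #   = 0.8 * 1000 MB = 800 MB
    # the warning fires 2/3 of the way from there to completely full:
    #   (0.8 + (1.0 - 0.8) * 2/3) * 1000 MB ~= 933 MB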
Implements: #7442
Signed-off-by: Sage Weil <sage@inktank.com>
This is a friendlier interface for setting up a cache tier with some
reasonable defaults (defined via config options). This will simplify
the user experience and documentation.
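The intended usage, assuming the new command is 'osd tier add-cache'
(pool names and the size are illustrative):

    ceph osd pool create base-pool 64
    ceph osd pool create cache-pool 64
    # one step instead of 'osd tier add' followed by setting hit_set
    # and target_max values by hand; the defaults come from config
    ceph osd tier add-cache base-pool cache-pool 1000000000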
Signed-off-by: Sage Weil <sage@inktank.com>
In general, users should not use non-empty pools as new tiers or else
things can behave strangely:
- if the data sets are unrelated, behavior will be... strange.
- if the cache pool is not "new" and does not do the OMAP flag, the OSD
will not know not to flush omap objects to an EC base tier
- probably other random stuff I'm forgetting
Allow a user to shoot themselves in the foot with --force-nonempty.
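For illustration (pool names are made up):

    # refused if cache-pool already contains objects ...
    ceph osd tier add base-pool cache-pool
    # ... unless the user explicitly opts in to the foot-gun
    ceph osd tier add base-pool cache-pool --force-nonempty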
Implements: #7457
Signed-off-by: Sage Weil <sage@inktank.com>
We would like to get the hit set parameters (hit_set_type,
hit_set_period, hit_set_count, hit_set_fpp) via the OSDMonitor.
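i.e., something along the lines of (the pool name is illustrative):

    ceph osd pool get hot-pool hit_set_type
    ceph osd pool get hot-pool hit_set_period
    ceph osd pool get hot-pool hit_set_count
    ceph osd pool get hot-pool hit_set_fpp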
Signed-off-by: Kai Zhang <zakir.exe@gmail.com>