We want to avoid a situation where the agent flaps on and off as the
system hovers around a utilization threshold. Particularly for trim,
the system can expend a lot of energy doing a minimal amount of work when
the effort level is low. To avoid this, enable the agent only once we are
some amount above the threshold, and do not turn it off until we are the
same amount below the target.
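A minimal sketch of that hysteresis, with hypothetical names (threshold,
slop, the utilization inputs) standing in for the real agent state:

    // Illustrative only: turn on above (threshold + slop), turn off
    // below (threshold - slop), and otherwise keep the current state.
    bool update_agent_active(bool active, double util,
                             double threshold, double slop)
    {
      if (!active && util >= threshold + slop)
        return true;   // well above the target: start working
      if (active && util <= threshold - slop)
        return false;  // the same margin below: safe to stop
      return active;   // inside the band: no change
    }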
Signed-off-by: Sage Weil <sage@inktank.com>
When the agent starts, begin scanning at a random offset to ensure we give
a more uniform distribution of attention to all objects in the PG.
Otherwise, we will disproportionately examine objects at the "beginning"
of the PG whenever we are interrupted by peering, restarts, or other
activity.
Note that if the agent_state is preserved, we do not forget our position,
which is also nice.
We *could* persist this position in the pg_info_t somewhere, but I am not
sure it is worth the effort.
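A rough illustration of picking the starting offset (illustrative only;
the real agent walks object positions, but the idea is the same):

    #include <cstdint>
    #include <random>

    // Pick a uniformly random starting point in the PG's 32-bit hash
    // space; the scan then proceeds from here and wraps around once.
    uint32_t random_scan_start()
    {
      static std::mt19937 rng{std::random_device{}()};
      return std::uniform_int_distribution<uint32_t>{}(rng);
    }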
Signed-off-by: Sage Weil <sage@inktank.com>
This is very basic flush and evict functionality for the tiering agent.
The flush policy is very simple: if we are above the threshold and the
object is dirty and not super young, flush it. This is not too braindead
a policy (although we could clearly do something smarter).
The evict policy is also simple: evict the object if it is clean and
we are over our full threshold. If we are in the middle mode, estimate
how cold the object is based on an accumulated histogram of the objects
we have examined so far, and decide to evict based on our position in
that histogram relative to our "effort" level (see the sketch after the
caveats below).
Caveats:
* the histograms are not refreshed
* we aren't taking temperature into consideration yet, although some of
the infrastructure is there.
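An illustrative-only sketch of the decision logic described above; every
name here (thresholds, histogram, flush/evict) is hypothetical shorthand
for agent state, not the actual code:

    void agent_consider(const Object& obj)
    {
      // flush policy: over the dirty threshold, object is dirty and
      // not too young
      if (dirty_ratio > dirty_threshold &&
          obj.is_dirty() && obj.age() > min_flush_age)
        flush(obj);

      if (obj.is_dirty())
        return;                      // only clean objects are evictable
      if (full_ratio > full_threshold) {
        evict(obj);                  // over the full threshold: evict
      } else if (full_ratio > evict_target) {
        // middle mode: evict only objects falling in the coldest
        // 'evict_effort' fraction of the atime histogram accumulated
        // from objects examined so far
        if (atime_histogram.rank(obj.atime()) <= evict_effort)
          evict(obj);
      }
    }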
Signed-off-by: Sage Weil <sage@inktank.com>
Move the check for clones into a helper so that we can use it in other
places where we need to evict.
Signed-off-by: Sage Weil <sage@inktank.com>
Add a callback hook for whenever an OpContext completes or cancels. We
are pretty sloppy here about the return values because our initial user
will not care, and it is unclear if future users will.
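One plausible shape for such a hook (hypothetical, not the actual
interface):

    // A user registers one of these on the OpContext; exactly one of
    // the two methods eventually fires, and the caller deliberately
    // ignores anything the callback might want to report back.
    struct OpFinisher {
      virtual ~OpFinisher() {}
      virtual void complete(int r) = 0;  // the op applied/committed
      virtual void cancel() = 0;         // the op was aborted
    };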
Signed-off-by: Sage Weil <sage@inktank.com>
A PG does not always hold an equally sized fraction of the total pool due
to the use of ceph_stable_mod. Add a helper to return the fraction
(as a denominator) of a given PG based on the current pg_num value.
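A sketch of the arithmetic such a helper needs (hypothetical name, not
the committed code). With ceph_stable_mod, hashes are reduced modulo p,
the smallest power of two >= pg_num, and residues >= pg_num fold down
onto the low half, so some PGs receive two residues:

    unsigned pg_num_divisor(unsigned ps, unsigned pg_num)
    {
      unsigned p = 1;
      while (p < pg_num)
        p <<= 1;                 // smallest power of two >= pg_num
      if (p == pg_num)
        return pg_num;           // power of two: all PGs hold 1/pg_num
      if (ps >= pg_num - p / 2 && ps < p / 2)
        return p / 2;            // folded-onto PGs hold a double share
      return p;                  // everyone else holds 1/p
    }

For example, with pg_num = 12 we get p = 16: PGs 4-7 hold 1/8 of the
pool each, and the remaining eight PGs hold 1/16 each.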
Signed-off-by: Sage Weil <sage@inktank.com>
Comment out erasure-pool-related tests when an OSD is involved because
they do not work yet. See http://tracker.ceph.com/issues/7360.
Signed-off-by: Loic Dachary <loic@dachary.org>
By default, disallow adjustment of primary affinity unless the user has
opted in by adjusting their monitor config. This avoids some user
pain, since inadvertently setting the affinity will prevent older clients
from connecting to and using the cluster.
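For illustration, opting in would look something like this (verify the
exact option name for your version):

    # ceph.conf, monitor section
    [mon]
        mon osd allow primary affinity = true

after which "ceph osd primary-affinity <osd-id> <weight>" is accepted.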
Signed-off-by: Sage Weil <sage@inktank.com>
The behavior is a bit different for replicated and indep/erasure modes.
In the replicated case, we rearrange the result; in the indep/erasure
case, we can simply set the primary argument to the right value.
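Roughly (illustrative only; 'osds' stands for the CRUSH result and
'primary' for the output parameter):

    #include <utility>
    #include <vector>

    void apply_primary(bool replicated, std::vector<int>& osds,
                       size_t preferred, int& primary)
    {
      if (replicated) {
        // a replicated result's first entry is the primary, so move
        // the chosen OSD to the front
        std::swap(osds[0], osds[preferred]);
        primary = osds[0];
      } else {
        // indep/erasure: positions carry meaning and must not move,
        // so just point 'primary' at the chosen member
        primary = osds[preferred];
      }
    }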
Signed-off-by: Sage Weil <sage@inktank.com>
Currently, if an election times out, we call a new election. Instead,
if we have never joined a quorum, bootstrap. This is heavier weight,
but it captures the case where, during bootstrap:
- a and b have learned each other's addresses
- everybody calls an election
- a and b form a quorum
- c loops trying to call an election, but is ignored
  because a and b don't see its address in the monmap
See logs:
ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-02-14_13:50:04-ceph-deploy-wip-7212-sage-b-testing-basic-plana/83194
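Roughly, the change amounts to (hypothetical names):

    void on_election_timeout()
    {
      if (has_ever_joined_quorum)
        call_election();   // cheap retry within the agreed monmap
      else
        bootstrap();       // re-probe peers, so a mon in c's position
                           // can be learned into the monmap
    }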
Signed-off-by: Sage Weil <sage@inktank.com>
It is important in the bootstrap case that the very first paxos round
also codify the contents of the monmap itself in order to avoid any manner
of confusing scenarios where subsequent elections are called and people
try to recover and modify paxos without agreeing on who the quorum
participants are.
Signed-off-by: Sage Weil <sage@inktank.com>
It is only safe to dynamically update the address for a peer mon in our
monmap if we are in the midst of the initial quorum formation (i.e.,
monmap.epoch == 0). If it is a later epoch, we have formed our initial
quorum and any and all monmap changes need to be agreed upon by the quorum
and committed via paxos.
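In other words, a guard along these lines (illustrative, not the exact
code):

    if (monmap.epoch == 0) {
      // still forming the initial quorum: safe to learn the peer's
      // address in place
      monmap.set_addr(peer_name, peer_addr);
    } else {
      // initial quorum already formed: the change must be agreed
      // upon by the quorum and committed via paxos
    }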
Fixes: #7212
Signed-off-by: Sage Weil <sage@inktank.com>
We've had some trouble with not clearing out subscription requests and
overloading the monitors (though only because of other bugs). Write a
helper for handling subscription requests that we can use to centralize
safety logic. Clear out the subscription whenever we get a map that covers
it; if there are more maps available than we received, we will issue another
subscription request based on "m->newest_map" at the end of handle_osd_map().
Note that the helper no longer requests old maps that we already have,
and that, unless forced, it will not dispatch multiple subscribe requests
to a single monitor.
Skipping old maps is safe:
1) we only trim old maps when the monitor tells us to,
2) we do not send messages to our peers until we have updated our maps
from the monitor.
That means only old and broken OSDs will send us messages based on maps
in our past, and we can (and should) ignore any directives from them anyway.
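A sketch of the safety logic such a helper centralizes (all names
hypothetical, not the committed interface):

    // Illustrative only.
    void osdmap_subscribe(epoch_t want, bool force_request)
    {
      if (want <= current_osdmap_epoch)
        return;                      // never re-request maps we have
      if (want <= requested_epoch && !force_request)
        return;                      // one outstanding request at a time
      requested_epoch = want;
      send_mon_subscribe("osdmap", want);   // hypothetical sender
    }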
Signed-off-by: Greg Farnum <greg@inktank.com>
--test-map-pgs mode allows mapping all PGs from either all pools or just
one pool. Mention this in the usage output.
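For example (assuming the existing --pool option selects the single
pool; check the tool's help output):

    osdmaptool --test-map-pgs osdmap.bin             # all pools
    osdmaptool --test-map-pgs --pool 3 osdmap.bin    # just pool 3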
Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>