We don't want to dump the cache every time an item is trimmed and the
mount_cond gets signaled; this can make umount crazy-slow when logging is
turned up.
Instead, only dump if we wait 5 seconds without making any progress on
shrinking the cache.
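A minimal sketch of that timed-wait logic, assuming a simplified client
where mount_cond (named above) guards a cache-size counter; everything
else here is illustrative, not the actual Client code:

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>

    std::mutex client_lock;
    std::condition_variable mount_cond;  // signaled whenever an item is trimmed
    size_t cache_size = 1;               // decremented elsewhere as items drop

    void dump_cache() { std::puts("dumping cache contents"); }

    void wait_for_cache_to_drain() {
      std::unique_lock<std::mutex> l(client_lock);
      while (cache_size > 0) {
        size_t before = cache_size;
        auto st = mount_cond.wait_for(l, std::chrono::seconds(5));
        // Dump only on a full 5s timeout with no shrinkage, not on
        // every trim signal.
        if (st == std::cv_status::timeout && cache_size >= before)
          dump_cache();
      }
    }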
Signed-off-by: Sage Weil <sage@inktank.com>
If the mds revokes our cache cap and we follow
the _read_sync() path, the osd returns ENOENT
for a zero-byte file. We need to replace that
ENOENT with a return of 0 in this case.
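A minimal sketch of the substitution, as a hypothetical helper rather
than the actual Client code:

    #include <cerrno>

    // A zero-byte file has no backing object, so the osd read comes back
    // -ENOENT; report a successful 0-byte read instead of an error.
    int normalize_sync_read_result(int r) {
      if (r == -ENOENT)
        return 0;
      return r;
    }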
Signed-off-by: Sam Lang <sam.lang@inktank.com>
This tests a bug (#3490) in the Client::_read_sync
codepath, and should be run with conf->client_read_sync_always
set to true.
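A sketch of the scenario the test exercises, via libcephfs (the flow is
an assumption; only the option name comes from the text above):

    #include <cassert>
    #include <fcntl.h>
    #include <cephfs/libcephfs.h>

    int main() {
      struct ceph_mount_info *cmount;
      ceph_create(&cmount, NULL);
      ceph_conf_read_file(cmount, NULL);
      ceph_conf_set(cmount, "client_read_sync_always", "true");
      ceph_mount(cmount, "/");

      // Create an empty file, then read it through the _read_sync()
      // path: the read must return 0 bytes, not an error.
      int fd = ceph_open(cmount, "/empty", O_CREAT | O_RDWR, 0644);
      char buf[16];
      int r = ceph_read(cmount, fd, buf, sizeof(buf), 0);
      assert(r == 0);

      ceph_close(cmount, fd);
      ceph_unmount(cmount);
      ceph_shutdown(cmount);
      return 0;
    }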
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Let scrubber.end be (foo, HEAD, 10), where the oid is foo, HEAD is the
snap, and 10 is the hash; let scrubber.begin similarly be (bar, 5, 1).
After choosing to scan [(bar, 5, 1), (foo, HEAD, 10)), we block writes
on that interval.
1) A write might then come in for foo (which isn't blocked), creating a
new clone (foo, 400, 10) that happens to fall in the interval. This will
result in a crash in _scrub() when it attempts to compare clones, since
it will get (foo, 400, 10) but not the head object (foo, HEAD, 10).
2) Alternatively, the write from 1) has already happened. When we scan
the log, we find 34'10 and 34'11 are the clone operation creating
(foo, 400, 10) and the modify on (foo, HEAD, 10) respectively. Both
primary and replica will wait for last_update_applied to be 34'10
before scanning, but last_update_applied will in fact skip to 34'11
since 34'10 and 34'11 happened in the same transaction. This can
result in IO hanging on the scrubber interval.
Instead, we ensure that scrubber.end is exactly a hash boundary
(the min hobject_t with the specified hash). No such object can
exist since we don't create objects with empty oids, so no writes
can occur on that object.
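A minimal sketch of the boundary trick, using a simplified stand-in for
hobject_t ordering (the real type and comparator live in the Ceph tree;
this just illustrates why the empty oid sorts first):

    #include <cstdint>
    #include <string>
    #include <tuple>

    struct HObj {
      uint32_t hash;
      std::string oid;
      uint64_t snap;
      bool operator<(const HObj &o) const {
        return std::tie(hash, oid, snap) < std::tie(o.hash, o.oid, o.snap);
      }
    };

    // The minimum object with a given hash has an empty oid; since no
    // real object ever has an empty oid, nothing can be written at the
    // boundary itself.
    HObj hash_boundary(uint32_t hash) {
      return HObj{hash, "", 0};
    }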
Signed-off-by: Samuel Just <sam.just@inktank.com>
history.last_epoch_started marks a lower bound on the last epoch at
which the pg went active. As with info.last_epoch_started, it should be
0 prior to the first activation.
Signed-off-by: Samuel Just <sam.just@inktank.com>
In order to proceed with peering, we need an osd with a log including
the last commit sent to a client. This translates to the oldest
last_update from the infos of the most recent acting set to go active.
history.last_epoch_started gives us a lower bound on the last time the
entire acting set persisted authoritative logs/infos. However, it
doesn't indicate anything about the info/log on the osd which sent it.
Thus, we will maintain an osd-local info.last_epoch_started to determine
which osds were actually active (and thus have the required log
entries). The max info.last_epoch_started in the prior set gives us an
upper bound on the last interval during which writes occurred. The min
last_update among the infos with that last_epoch_started must therefore
be an upper bound on the oldest operation which clients consider
committed. Any osd with an info.last_update past that version must be
sufficient.
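A minimal sketch of that bound, using simplified stand-ins for
eversion_t and pg_info_t (hypothetical code, not the actual peering
implementation):

    #include <cstdint>
    #include <vector>

    struct eversion_t {
      uint64_t epoch = 0, version = 0;
      bool operator<(const eversion_t &o) const {
        return epoch < o.epoch || (epoch == o.epoch && version < o.version);
      }
    };

    struct pg_info_t {
      uint64_t last_epoch_started = 0;
      eversion_t last_update;
    };

    // Max last_epoch_started bounds the last interval with writes; the
    // min last_update among infos at that epoch bounds the oldest op a
    // client could consider committed.
    eversion_t min_last_update_acceptable(const std::vector<pg_info_t> &infos) {
      uint64_t max_les = 0;
      for (const auto &i : infos)
        if (i.last_epoch_started > max_les)
          max_les = i.last_epoch_started;
      eversion_t min_lu{UINT64_MAX, UINT64_MAX};
      for (const auto &i : infos)
        if (i.last_epoch_started == max_les && i.last_update < min_lu)
          min_lu = i.last_update;
      return min_lu;
    }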
The observed bug was an empty pg info with a last_epoch_started at the
most recent interval, which pushed min_last_update_acceptable to
eversion_t(). There were two down osds, but peering proceeded since the
backfill peer did survive. However, its info was later disregarded
because it was incomplete. An empty osd was then chosen as the
best_info since its last_update was equal to min_last_update_acceptable.
This caused the contents of the pg to be lost.
Signed-off-by: Samuel Just <sam.just@inktank.com>
This will make them much more noticeable and reduce the odds of something
writing data which assumes the previous op succeeded.
Signed-off-by: Greg Farnum <greg@inktank.com>
These functions are like the non-safe versions, but assert that
there were no disk errors and have void return types. Change a
bunch of callers that weren't checking the return code to use
these variants instead.
(Unfortunately we can't make them safe by default because several of
the callers depend on getting back the length, and are perfectly happy
with ENOENT producing a 0 return value.)
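A hypothetical illustration of the pattern (stub names, not the real
store API): the underlying call returns a length on success or a
negative errno on failure, and the _safe variant asserts success so a
caller that ignores the result can't silently swallow a disk error:

    #include <cassert>

    // Stub standing in for an existing storage call.
    int getattr(const char *name, void *buf, int len) { return 0; }

    void getattr_safe(const char *name, void *buf, int len) {
      int r = getattr(name, buf, len);
      assert(r >= 0);  // fail loudly on any disk error
    }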
Signed-off-by: Greg Farnum <greg@inktank.com>
This adds a bash script that creates an rbd image, then repeatedly
maps and unmaps it for a specified duration (5 minutes by default).
Signed-off-by: Alex Elder <elder@inktank.com>
A new option that goes through the indexed objects, verifies
their actual state, and updates the index accordingly.
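A simplified sketch of the check pass (the types and helpers are
illustrative, not the rgw code): walk each index entry, stat the
backing object, and bring the index in line with what actually exists:

    #include <cerrno>
    #include <cstdint>
    #include <map>
    #include <string>

    struct Entry { uint64_t size = 0; };

    // Stub standing in for a rados stat of the backing object.
    int stat_object(const std::string &name, uint64_t *size) {
      *size = 0;
      return 0;  // or -ENOENT if the object is gone
    }

    void check_index(std::map<std::string, Entry> &index) {
      for (auto it = index.begin(); it != index.end(); ) {
        uint64_t actual = 0;
        if (stat_object(it->first, &actual) == -ENOENT) {
          it = index.erase(it);      // object vanished: drop the stale entry
        } else {
          it->second.size = actual;  // correct any recorded-size drift
          ++it;
        }
      }
    }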
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
Besides suggesting changes to the object's index, we also need
to remove the parts that build the object. This only applies to
parts of multipart objects.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
This can sometimes return errors since it's a storage access, and
we're pretty sure ignoring those errors is the cause of a broken store
we've seen.
Signed-off-by: Greg Farnum <greg@inktank.com>
Make import work; do I/O in the image's native block size.
Note: creating sparse images is not currently attempted; we could
scan for runs of zeros and write discontiguous chunks to the image.
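A minimal sketch of the zero-run detection such a sparse-import pass
could use (hypothetical; sparse import is explicitly not attempted
here):

    #include <algorithm>
    #include <cstddef>

    // An all-zero block can be skipped entirely, leaving a hole in the
    // image instead of written zeros.
    bool is_zero_block(const char *buf, size_t len) {
      return std::all_of(buf, buf + len, [](char c) { return c == 0; });
    }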
Fixes: #3503
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c99d9c3ae7)