C_AioRead::finish needs to add each chunk of a partial read
request to the 'partial' map in the AioCompletion's state
(in the destriper, of type StripedReadResult). That map is shared
and must be protected from simultaneous access. Use the
AioCompletion lock; a separate lock could be added if contention
becomes an issue.
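A minimal sketch of the locking pattern, with std types and made-up
member names standing in for the actual AioCompletion/StripedReadResult
fields:

    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <string>
    #include <utility>

    struct StripedReadResultSketch {
      // offset -> (data, expected length) for each completed chunk
      std::map<uint64_t, std::pair<std::string, uint64_t>> partial;
    };

    struct AioCompletionSketch {
      std::mutex lock;                        // the AioCompletion lock
      StripedReadResultSketch destriper;

      void add_partial_result(uint64_t off, std::string data, uint64_t len) {
        std::lock_guard<std::mutex> l(lock);  // serialize access to 'partial'
        destriper.partial[off] = {std::move(data), len};
      }
    };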
Fixes: #3567
Signed-off-by: Dan Mick <dan.mick@inktank.com>
(cherry picked from commit a55700cc0a)
During a test_librbd_fsx run including flatten, ImageCtx->parent
was being dereferenced while null. Between the time the parent
overlap is calculated and the time the guard+write completes
with ENOENT and submits the copyup+write, the parent image
could have changed (by resize) or been made irrelevant (by
child flatten) such that the parent overlap is now incorrect.
Handle "no parent" by just sending the copyup+write; the copyup
part will be a no-op. Move to WRITE_FLAT state in this case
because there's no more child to deal with.
Handle "overlap changed" by recalculating overlap before
reading parent data; if none is left, don't read, but rather
just clear m_object_image_extents, in which case the copyup
will again be a no-op because it will be of zero length.
However we still have a parent, so stay in WRITE_COPYUP state
and come back through as usual.
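A rough control-flow sketch of the two cases, using hypothetical,
heavily simplified types (the real librbd state machine has more
states and locking):

    #include <cstdint>
    #include <vector>

    enum class State { WRITE_COPYUP, WRITE_FLAT };

    struct ExtentSketch { uint64_t off, len; };

    struct AioWriteSketch {
      bool has_parent = true;
      uint64_t parent_overlap = 0;            // recomputed on the ENOENT path
      std::vector<ExtentSketch> m_object_image_extents;
      State state = State::WRITE_COPYUP;

      void handle_guarded_write_enoent() {
        if (!has_parent) {
          // "no parent": the copyup part is a no-op; move to WRITE_FLAT
          state = State::WRITE_FLAT;
          send_copyup_write();
          return;
        }
        // "overlap changed": recalculate before reading parent data
        recalculate_parent_overlap();
        if (parent_overlap == 0) {
          // nothing left to read: a zero-length copyup is a no-op, but
          // we still have a parent, so stay in WRITE_COPYUP
          m_object_image_extents.clear();
          send_copyup_write();
        } else {
          read_parent_data_then_copyup();
        }
      }

      void recalculate_parent_overlap() {}
      void read_parent_data_then_copyup() {}
      void send_copyup_write() {}
    };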
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Fixes: #3524
(cherry picked from commit 41e16a3b40)
After replay, we don't know if the dentry removal has already been
committed. Use a sloppy removal so that we succeed even if we are
repeating the operation.
Conveniently, the previous implementation (pre v0.55) silently ignored
tmap op codes it did not understand, which means this new RMSLOPPY will
be interpreted the same as an actual RMSLOPPY. That means a v0.55
mds can run against an older osd (say, argonaut) without problems.
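A toy model of the op semantics, with made-up op-code values rather
than the real tmap encoding:

    #include <map>
    #include <stdexcept>
    #include <string>

    constexpr int TMAP_RM = 1;        // strict: fails if the key is missing
    constexpr int TMAP_RMSLOPPY = 2;  // sloppy: succeeds either way

    void apply_rm_op(std::map<std::string, std::string>& tmap,
                     int op, const std::string& key) {
      auto it = tmap.find(key);
      if (it == tmap.end()) {
        if (op == TMAP_RM)
          throw std::runtime_error("ENOENT");  // strict removal fails
        return;  // RMSLOPPY: already gone, so replaying it is harmless
      }
      tmap.erase(it);
    }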
Signed-off-by: Sage Weil <sage@inktank.com>
This reverts 29fae494d0 and fixes the
alternate implementation added by 8e91d00b52.
librbd relies on the ENOENT return value.
Reported-by: Dan Mick <dan.mick@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
If we apply or commit a RepModify from a previous peering interval,
we need to free it.
This fixes 'slow request' messages emitted when client requests are
not in fact delayed, and plugs the related memory leak.
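The shape of the fix, sketched with hypothetical types:

    #include <memory>

    struct RepModifySketch { unsigned epoch = 0; };

    // On apply/commit, a RepModify from an older interval is dropped
    // rather than processed; the unique_ptr frees it (the leak).
    void on_applied(std::unique_ptr<RepModifySketch> rm,
                    unsigned interval_start_epoch) {
      if (rm->epoch < interval_start_epoch)
        return;            // stale: freed when rm goes out of scope
      // ... normal completion handling for the current interval ...
    }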
Signed-off-by: Sage Weil <sage@inktank.com>
We only queue the _applied_recovered_object callback on the primary for the
final push. It is this callback which decrements active_pushes. It's ok to
not increment active_pushes for the intermediate pushes since these
only affect a temp file.
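The accounting rule, sketched with made-up names:

    struct RecoverySketch {
      int active_pushes = 0;

      void on_push(bool is_primary, bool is_final_push) {
        if (is_primary && is_final_push) {
          ++active_pushes;  // paired with the decrement performed by
                            // the _applied_recovered_object callback
        }
        // intermediate pushes only affect a temp file: no accounting
      }
    };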
Signed-off-by: Samuel Just <sam.just@inktank.com>
If the mds revokes our cache cap and we follow
the _read_sync() path, the osd returns ENOENT
for a zero-byte file. We need to replace ENOENT
with a return of 0 in this case.
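The mapping, as a hypothetical helper rather than the actual
Client::_read_sync code:

    #include <cerrno>

    int normalize_sync_read_result(int r) {
      if (r == -ENOENT)
        return 0;  // zero-byte file: report 0 bytes read, not an error
      return r;
    }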
Signed-off-by: Sam Lang <sam.lang@inktank.com>
This tests a bug (#3490) in the Client::_read_sync
codepath, and should be run with conf->client_read_sync_always
set to true.
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Let scrubber.end be (foo, HEAD, 10), where the oid is foo, HEAD is
the snap, and 10 is the hash; let scrubber.begin similarly be
(bar, 5, 1).
After choosing to scan [(bar, 5, 1), (foo, HEAD, 10)), we block writes
on that interval.
1) A write might then come in for foo (which isn't blocked) that
creates a new snap (foo, 400, 10), which happens to fall in the
interval. This will result in a crash in _scrub() when it attempts
to compare clones, since it will get (foo, 400, 10) but not the
head object (foo, HEAD, 10).
2) Alternately, the write from 1) has already happened. When we scan
the log, we find 34'10 and 34'11 are the clone operation creating
(foo, 400, 10) and the modify on (foo, HEAD, 10) respectively. Both
primary and replica will wait for last_update_applied to be 34'10
before scanning, but last_update_applied will in fact skip to 34'11
since 34'10 and 34'11 happened in the same transaction. This can
result in IO hanging on the scrubber interval.
Instead, we ensure that scrubber.end is exactly a hash boundary
(the min hobject_t with the specified hash). No such object can
exist since we don't create objects with empty oids, so no writes
can occur on that object.
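A toy model of why the boundary works; the real hobject_t sort order
is more involved, but the idea carries over:

    #include <cstdint>
    #include <string>
    #include <tuple>

    struct HObjectSketch {
      uint32_t hash;
      std::string oid;   // real objects never have an empty oid
      uint64_t snap;

      bool operator<(const HObjectSketch& o) const {
        return std::tie(hash, oid, snap) <
               std::tie(o.hash, o.oid, o.snap);
      }
    };

    // The minimum hobject with a given hash: every real object with a
    // smaller hash sorts below it, and no real object can equal it.
    HObjectSketch hash_boundary(uint32_t hash) {
      return HObjectSketch{hash, "", 0};
    }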
Signed-off-by: Samuel Just <sam.just@inktank.com>
history.last_epoch_started marks a lower bound on the last epoch at
which the pg went active. As with info.last_epoch_started, it should be
0 prior to the first activation.
Signed-off-by: Samuel Just <sam.just@inktank.com>
In order to proceed with peering, we need an osd with a log including
the last commit sent to a client. This translates to the oldest
last_update from the infos of the most recent acting set to go active.
history.last_epoch_started gives us a lower bound on the last time the
entire acting set persisted authoritative logs/infos. However, it
doesn't indicate anything about the info/log on the osd which sent it.
Thus, we will maintain an osd local info.last_epoch_started to determine
which osds were actually active (and thus have the required log
entries). The max info.last_epoch_started in the prior set gives us an
upper bound on the last interval during which writes occurred. The min
last_update among the infos with that last_epoch_started must therefore
be an upper bound on the oldest operation which clients consider
committed. Any osd with an info.last_update at or past that version
is therefore sufficient.
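The bound above, sketched with simplified types and plain integers
standing in for epochs and versions:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct PgInfoSketch {
      uint32_t last_epoch_started;
      uint64_t last_update;
    };

    uint64_t min_last_update_acceptable(const std::vector<PgInfoSketch>& infos) {
      uint32_t max_les = 0;
      for (const auto& i : infos)
        max_les = std::max(max_les, i.last_epoch_started);

      uint64_t min_lu = UINT64_MAX;
      for (const auto& i : infos)
        if (i.last_epoch_started == max_les)
          min_lu = std::min(min_lu, i.last_update);
      return min_lu;  // an osd with last_update >= this is sufficient
    }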
The observed bug was an empty pg info with a last_epoch_started
at the most recent interval, which pushed min_last_update_acceptable
to eversion_t(). There were two down osds,
but peering proceeded since the backfill peer did survive. However,
its info was later disregarded because it was incomplete. An empty
osd was then chosen as the best_info since its last_update was equal
to min_last_update_acceptable. This caused the contents of the pg to
be lost.
Signed-off-by: Samuel Just <sam.just@inktank.com>
This will make them much more noticeable and reduce the odds of something
writing data which assumes the previous op succeeded.
Signed-off-by: Greg Farnum <greg@inktank.com>
These functions are like the non-safe versions, but assert that
there were no disk errors and have void return types. Change a
bunch of callers that weren't checking the return code to use
these variants instead.
(Unfortunately we can't make them default safe because several of
the callers depend on getting back the length, and are perfectly happy
with ENOENT producing a 0 return value.)
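The pattern, as a generic sketch rather than the actual API:

    #include <cassert>
    #include <cerrno>

    // stand-in for an existing accessor that returns a length or -errno
    int stat_op(bool fail) { return fail ? -EIO : 0; }

    // the _safe variant: void return, asserts there was no disk error,
    // so the result can't be silently dropped by a careless caller
    void stat_op_safe(bool fail) {
      int r = stat_op(fail);
      assert(r >= 0);
    }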
Signed-off-by: Greg Farnum <greg@inktank.com>
This adds a bash script that creates an rbd image, then repeatedly
maps and unmaps it for a specified duration (5 minutes by default).
Signed-off-by: Alex Elder <elder@inktank.com>
This call can sometimes return errors since it's a storage access,
and we're pretty sure ignoring them is the cause of a broken store
we've seen.
Signed-off-by: Greg Farnum <greg@inktank.com>
Make import work; do I/O in the image's native block size.
Note: creating sparse images is not currently attempted; we could
scan for runs of zeros and write discontiguous chunks to the image.
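A sketch of the zero-run scan suggested above (hypothetical helper):

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    // Returns (offset, length) pairs for the nonzero spans in buf;
    // only these spans would be written, leaving the zero runs sparse.
    std::vector<std::pair<std::size_t, std::size_t>>
    nonzero_extents(const uint8_t* buf, std::size_t len) {
      std::vector<std::pair<std::size_t, std::size_t>> out;
      std::size_t i = 0;
      while (i < len) {
        while (i < len && buf[i] == 0) ++i;  // skip a run of zeros
        std::size_t start = i;
        while (i < len && buf[i] != 0) ++i;  // consume nonzero data
        if (i > start) out.emplace_back(start, i - start);
      }
      return out;
    }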
Fixes: #3503
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c99d9c3ae7)
Detect a misordered ondisk tmap... if we are already decoding it. We still
leave the trailing bits unchecked.
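The ordering check, sketched over an already-decoded key list (the
real check runs while decoding):

    #include <cstddef>
    #include <string>
    #include <vector>

    bool tmap_keys_ordered(const std::vector<std::string>& keys) {
      for (std::size_t i = 1; i < keys.size(); ++i) {
        if (keys[i] <= keys[i - 1])
          return false;  // misordered (or duplicate) key
      }
      return true;
    }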
Signed-off-by: Sage Weil <sage@inktank.com>