Rename applied_seq to max_applied_seq, since it is a bound; there may be
seqs < max_applied_seq that are not applied. This aligns the naming with
max_applying_seq.
Signed-off-by: Sage Weil <sage@inktank.com>
We can have a large number of operations in the op_wq waiting to be applied
to the fs. Currently, when we want to commit, we wait for them *all* to
apply. This can take a very long time (the default queue length is 500
operations!).
Instead, mark an Op as started ("applying") when the thread pool actually
starts to apply it. At that point, only wait for applying ops to complete.
We let any threads with an op seq < max_applying_seq begin as well so that
we have a proper ordering/barrier. When those flush, applied_seq will ==
max_applying_seq, and that becomes the committing_seq value.
Note that 'applied_seq' is still maintained, but serves no real purpose
except to populate our asserts with sanity checks. max_applying_seq serves
the purpose applied_seq used to.
This removes one unnecessary source of latency associated with fs
commits.
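
As a rough standalone sketch of the barrier (simplified locking and
stand-in types only; the real FileStore code differs):

    // Simplified illustration of the commit barrier described above.
    #include <cassert>
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    struct ApplyState {
      std::mutex lock;
      std::condition_variable cond;
      uint64_t max_applying_seq = 0;  // highest seq a worker has started
      uint64_t applied_seq = 0;       // sanity-check bound, per the note above
      int open_ops = 0;               // ops currently being applied

      // Worker thread: mark the op "applying" when it actually starts.
      void op_apply_start(uint64_t seq) {
        std::lock_guard<std::mutex> l(lock);
        if (seq > max_applying_seq)
          max_applying_seq = seq;
        ++open_ops;
      }

      void op_apply_finish(uint64_t seq) {
        std::lock_guard<std::mutex> l(lock);
        if (seq > applied_seq)
          applied_seq = seq;
        if (--open_ops == 0)
          cond.notify_all();
      }

      // Commit path: wait only for the ops that already started applying,
      // not the whole queue.  (Blocking of new submissions is omitted.)
      uint64_t commit_start() {
        std::unique_lock<std::mutex> l(lock);
        cond.wait(l, [this] { return open_ops == 0; });
        assert(applied_seq == max_applying_seq);
        return max_applying_seq;  // becomes the committing seq
      }
    };
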
Signed-off-by: Sage Weil <sage@inktank.com>
If we apply or commit a RepModify from a previous peering interval, we need
to free it.
This fixes 'slow request' messages when in fact client requests are not
delayed, and plugs the related memory leak.
Signed-off-by: Sage Weil <sage@inktank.com>
We only queue the _applied_recovered_object callback on the primary for the
final push. It is this callback which decrements active_pushes. It's ok to
not increment active_pushes for the intermediate pushes since these only affect
a temp file.
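
A minimal illustration of the accounting (placeholder names, not the
actual ReplicatedPG interface):

    #include <functional>
    #include <vector>

    struct RecoveryCounters {
      int active_pushes = 0;
      std::vector<std::function<void()>> on_applied;  // run once the txn applies

      // Only the final push for an object is counted: intermediate pushes
      // land in a temp file and never queue the decrementing callback.
      void queue_push(bool is_final_push) {
        if (!is_final_push)
          return;
        ++active_pushes;
        on_applied.push_back([this] { --active_pushes; });  // analogue of _applied_recovered_object
      }
    };
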
Signed-off-by: Samuel Just <sam.just@inktank.com>
If the mds revokes our cache cap and we follow
the _read_sync() path, the osd returns ENOENT
for a zero-byte file. We need to replace that
ENOENT with a return of 0 in this case.
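
A minimal sketch of the translation (hypothetical helper, not the actual
Client::_read_sync code):

    #include <cerrno>

    // For a zero-byte file the object may simply not exist yet, so an
    // ENOENT from the osd means "nothing to read", not an error.
    int sync_read_result(int osd_result, unsigned long long file_size) {
      if (osd_result == -ENOENT && file_size == 0)
        return 0;
      return osd_result;
    }
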
Signed-off-by: Sam Lang <sam.lang@inktank.com>
This tests a bug (#3490) in the Client::_read_sync
codepath, and should be run with conf->client_read_sync_always
set to true.
Signed-off-by: Sam Lang <sam.lang@inktank.com>
Let scrubber.end be (foo, HEAD, 10), where the oid is foo, HEAD is the
snap, and 10 is the hash; similarly, let scrubber.begin be (bar, 5, 1).
After choosing to scan [(bar, 5, 1), (foo, HEAD, 10)), we block writes
on that interval.
1) A write might then come in for foo (which isn't blocked) that
creates a new snap (foo, 400, 10), which happens to fall in the interval.
This will result in a crash in _scrub() when it attempts to compare
clones since it will get (foo, 400, 10) but not the head object
(foo, HEAD, 10).
2) Alternately, the write from 1) has already happened. When we scan
the log, we find 34'10 and 34'11 are the clone operation creating
(foo, 400, 10) and the modify on (foo, HEAD, 10) respectively. Both
primary and replica will wait for last_update_applied to be 34'10
before scanning, but last_update_applied will in fact skip to 34'11
since 34'10 and 34'11 happened in the same transaction. This can
result in IO hanging on the scrubber interval.
Instead, we ensure that scrubber.end is exactly a hash boundary
(the min hobject_t with the specified hash). No such object can
exist since we don't create objects with empty oids, so no writes
can occur on that object.
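
As a toy model of the boundary (a stripped-down stand-in for hobject_t;
the real ordering in src/osd sorts on more fields):

    #include <string>
    #include <tuple>

    struct SimpleHObject {
      unsigned int hash = 0;   // major sort key for the scrub interval
      std::string oid;         // empty only for the boundary sentinel
      unsigned long long snap = 0;

      bool operator<(const SimpleHObject &o) const {
        return std::tie(hash, oid, snap) < std::tie(o.hash, o.oid, o.snap);
      }
    };

    // The min object with a given hash: real objects always have a
    // non-empty oid, so none can compare equal to this sentinel, and an
    // object's head and clones (which share its hash) all sort on the
    // same side of the boundary, so they stay in or out of the scanned
    // interval together.
    SimpleHObject hash_boundary(unsigned int hash) {
      return SimpleHObject{hash, "", 0};
    }
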
Signed-off-by: Samuel Just <sam.just@inktank.com>
history.last_epoch_started marks a lower bound on the last epoch at
which the pg went active. As with info.last_epoch_started, it should be
0 prior to the first activation.
Signed-off-by: Samuel Just <sam.just@inktank.com>
In order to proceed with peering, we need an osd with a log including
the last commit sent to a client. This translates to the oldest
last_update from the infos of the most recent acting set to go active.
history.last_epoch_started gives us a lower bound on the last time the
entire acting set persisted authoritative logs/infos. However, it
doesn't indicate anything about the info/log on the osd which sent it.
Thus, we will maintain an osd local info.last_epoch_started to determine
which osds were actually active (and thus have the required log
entries). The max info.last_epoch_started in the prior set gives us an
upper bound on the last interval during which writes occurred. The min
last_update among the infos with that last_epoch_started must therefore
be an upper bound on the oldest operation which clients consider
committed. Any osd with an info.last_update past that version must be
sufficient.
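
A rough sketch of that bound with stand-in types (the real code operates
on pg_info_t/eversion_t):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct SimpleInfo {
      uint64_t last_epoch_started = 0;  // osd-local, set on activation
      uint64_t last_update = 0;         // stand-in for eversion_t
    };

    // Max last_epoch_started bounds the last interval with writes; the min
    // last_update among infos reporting that epoch bounds the oldest op a
    // client may consider committed.
    uint64_t min_last_update_acceptable(const std::vector<SimpleInfo> &infos) {
      uint64_t max_les = 0;
      for (const auto &i : infos)
        max_les = std::max(max_les, i.last_epoch_started);

      uint64_t bound = UINT64_MAX;
      for (const auto &i : infos)
        if (i.last_epoch_started == max_les)
          bound = std::min(bound, i.last_update);
      return bound;  // infos reaching this version suffice for peering
    }
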
The observed bug was that there was an empty pg info with a
last_epoch_started at the most recent interval which pushed
min_last_update_acceptable to eversion_t(). There were two down osds,
but peering proceeded since the backfill peer did survive. However,
its info was later disregarded because it was incomplete. An empty osd
was then chosen as the best_info since its last_update was equal to
min_last_update_acceptable. This caused the contents of the pg to be
lost.
Signed-off-by: Samuel Just <sam.just@inktank.com>
This will make them much more noticeable and reduce the odds of something
writing data which assumes the previous op succeeded.
Signed-off-by: Greg Farnum <greg@inktank.com>
These functions are like the non-safe versions, but assert that
there were no disk errors and have void return types. Change a
bunch of callers who weren't checking the return code to use
these variants instead.
(Unfortunately we can't make them safe by default because several of
the callers depend on getting back the length, and are perfectly happy
with ENOENT producing a 0 return value.)
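
The shape of the pattern, sketched with hypothetical names (not the
exact ObjectStore API):

    #include <cassert>
    #include <cerrno>

    // Stand-in for one of the fallible getters; returns a length or -errno.
    static int store_getattr(const char *name, char *buf, int len) {
      (void)name; (void)buf; (void)len;
      return -EIO;  // pretend the disk failed
    }

    // The "_safe" variant: same arguments, void return, and an assert so a
    // disk error aborts loudly instead of being ignored by the caller.
    static void store_getattr_safe(const char *name, char *buf, int len) {
      int r = store_getattr(name, buf, len);
      assert(r >= 0);
    }
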
Signed-off-by: Greg Farnum <greg@inktank.com>
This adds a bash script that creates an rbd image, then repeatedly
maps and unmaps it for a specified duration (5 minutes by default).
Signed-off-by: Alex Elder <elder@inktank.com>
This can sometimes return errors since it's a storage access, and
we're pretty sure ignoring them is the cause of a broken store we've seen.
Signed-off-by: Greg Farnum <greg@inktank.com>
Make import work; do I/O in the image's native block size.
Note: creating sparse images is not currently attempted; we could
scan for runs of zeros and write discontiguous chunks to the image.
Fixes: #3503
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c99d9c3ae7)
Detect a misordered ondisk tmap... if we are already decoding it. We still
leave the trailing bits unchecked.
Signed-off-by: Sage Weil <sage@inktank.com>
The MDS may include RM ops in a tmap update for items that were already
removed: after restarting and replaying the journal, it doesn't know
which dentries were previously committed and which were not.
No other (known) users care about the error code.
Signed-off-by: Sage Weil <sage@inktank.com>
The previous tmap implementation requires that the update stream be
sorted or else it will behave erratically (by placing new keys in the
map out of order). This can cause very strange failures: reads may
appear to return the correct result initially, but once intervening
keys are removed they will not... depending on how read is implemented
on the client side.
Fix this by doing the optimized updates initially, but falling back to
a slow implementation if an unsorted update is detected. It is slow,
but such updates are rare.
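
A rough illustration of the fallback, with sorted vectors of key/value
pairs standing in for the encoded tmap buffers (not the actual OSD code):

    #include <cstddef>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    using KV = std::pair<std::string, std::string>;  // empty value == remove

    // Slow but always-correct path: materialize a map, apply, re-emit sorted.
    static std::vector<KV> apply_slow(const std::vector<KV> &ondisk,
                                      const std::vector<KV> &updates) {
      std::map<std::string, std::string> m(ondisk.begin(), ondisk.end());
      for (const auto &u : updates) {
        if (u.second.empty())
          m.erase(u.first);
        else
          m[u.first] = u.second;
      }
      return std::vector<KV>(m.begin(), m.end());
    }

    // Fast path: one streaming merge, valid only while the update stream is
    // sorted.  On the first out-of-order key, start over on the slow path.
    std::vector<KV> apply_updates(const std::vector<KV> &ondisk,
                                  const std::vector<KV> &updates) {
      std::vector<KV> out;
      size_t i = 0;
      for (size_t u = 0; u < updates.size(); ++u) {
        if (u > 0 && !(updates[u - 1].first < updates[u].first))
          return apply_slow(ondisk, updates);  // unsorted updates are rare
        while (i < ondisk.size() && ondisk[i].first < updates[u].first)
          out.push_back(ondisk[i++]);
        if (i < ondisk.size() && ondisk[i].first == updates[u].first)
          ++i;  // existing entry is replaced or removed
        if (!updates[u].second.empty())
          out.push_back(updates[u]);
      }
      while (i < ondisk.size())
        out.push_back(ondisk[i++]);
      return out;
    }
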
Signed-off-by: Sage Weil <sage@inktank.com>
Validate change to not assume dest pool == src pool
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit 39180430b9)
import allows specifying one image, implicitly or explicitly, as the
"source" image, even though it's really the destination. Fix up
the reassignment of 'source' to 'dest', and check for and complain
about specifying two different pools or images for import.
Signed-off-by: Dan Mick <dan.mick@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c219698149)