When finishing exporting a subtree, the exporter MDS drops locks and
sends an MExportDirFinish message to the importer MDS. The bounds of
the subtree can get fragmented by a third party before the importer
MDS receives the MExportDirFinish message, so the importer MDS can
record inaccurate bounds in the EImportFinish event.
The fix is to find approximate bounds when finishing ambiguous imports.
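A minimal sketch of the idea, with hypothetical helper names: resolve
each recorded bound to whatever dirfrags currently cover it, since a
third party may have refragmented the exact frag that was recorded:

    // gather approximate bounds for one recorded bound 'df'
    void MDCache::get_approx_bounds(dirfrag_t df, set<CDir*>& resolved)
    {
      CInode *diri = get_inode(df.ino);
      // the recorded frag may no longer exist; take the current
      // leaves under it as the approximate bound instead
      list<frag_t> leaves;
      diri->dirfragtree.get_leaves_under(df.frag, leaves);
      for (list<frag_t>::iterator p = leaves.begin(); p != leaves.end(); ++p) {
        CDir *dir = diri->get_dirfrag(*p);
        if (dir)
          resolved.insert(dir);
      }
    }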
SSE4 is not available on older CPUs. Although the compiler could
probably generate the code, there is no point in doing so. The SSE4.1,
SSE4.2 and PCLMUL CPU features are only tested for if the target CPU is
AMD64 or x86_64.
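A minimal sketch of the guarded probe; the bit positions are the
standard CPUID leaf-1 ECX feature bits (PCLMUL=1, SSE4.1=19,
SSE4.2=20):

    #include <cpuid.h>

    #if defined(__x86_64__) || defined(__amd64__)
    static void probe_sse4(bool *sse41, bool *sse42, bool *pclmul)
    {
      unsigned int eax, ebx, ecx, edx;
      if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        *sse41  = ecx & (1u << 19);
        *sse42  = ecx & (1u << 20);
        *pclmul = ecx & (1u << 1);
      }
    }
    #endif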
Signed-off-by: Loic Dachary <loic@dachary.org>
Commit 6e013cd6 (properly set COMPLETE flag when merging dirfrags)
tries to solve the issue that a new dirfrag's COMPLETE flag gets lost
if the MDS splits the new dirfrag and the fragment operation then gets
rolled back. It records the original dirfrag's COMPLETE flag when the
EFragment PREPARE event is encountered. If the fragment operation
needs to be rolled back, the COMPLETE flag is journaled in the
corresponding EFragment ROLLBACK event. This is problematic when the
ROLLBACK event and the "mkdir" event belong to different log segments:
after the log segment that contains the "mkdir" event is trimmed, the
dirfrag can no longer be considered complete.
The fix is to commit the new dirfrag before splitting it. After the
dirfrag is committed to the object store, losing the COMPLETE flag is
not a big deal.
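A minimal sketch of the new ordering; the retry context and function
names here are hypothetical, the point is only that the split is
deferred until the dirfrag has been committed at least once:

    void MDCache::maybe_split_dir(CDir *dir, int bits)
    {
      if (dir->get_committed_version() == 0) {
        // never committed: commit first and retry the split, so a
        // rollback can simply refetch the dirfrag from the object
        // store instead of relying on a journaled COMPLETE flag
        dir->commit(0, new C_RetrySplit(this, dir, bits));
        return;
      }
      split_dir(dir, bits);
    }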
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
On machines that host both a MON and OSDs, the OSDs are started on boot
shortly after the MON, but the MON needs time to become operational, so
the OSDs fail to start: the timeout is too short for them to establish
communication with the cluster. This is even more likely to happen when
other monitors are down, which is not unusual when servers are
rebooting after a power failure.
Increasing the timeout significantly improves the chances of a
successful OSD start.
Signed-off-by: Dmitry Smirnov <onlyjob@member.fsf.org>
Verify that the mon is responding by checking the keepalive2 reply
timestamp. We cannot rely solely on TCP timing out and returning an
error.
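A minimal sketch of the liveness check, with hypothetical accessor and
option names, as it would run from the mon client's periodic tick:

    utime_t now = ceph_clock_now(cct);
    utime_t last = con->get_last_keepalive_ack();
    if (now - last > cct->_conf->mon_keepalive_timeout) {
      // the mon has not acked a keepalive recently; treat the session
      // as dead and reconnect instead of waiting for TCP to error out
      _reopen_session();
    }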
Fixes: #7888
Signed-off-by: Sage Weil <sage@inktank.com>
This is similar to KEEPALIVE, except a timestamp is also exchanged. It is
sent with the KEEPALIVE, and then returned with the ACK. The last
received stamp is stored in the Connection so that it can be queried for
liveness. Since all of the users of keepalive are already regularly
triggering a keepalive, they can check the liveness at the same time.
See #7888.
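A sketch of the wire exchange; the msgr tag names match the convention
in this tree, and the handler below is illustrative:

    // sender:   CEPH_MSGR_TAG_KEEPALIVE2     + ceph_timespec(now)
    // receiver: CEPH_MSGR_TAG_KEEPALIVE2_ACK + the echoed ceph_timespec
    //
    // on ACK, record the stamp on the Connection for liveness queries:
    void handle_keepalive2_ack(Connection *con, utime_t stamp)
    {
      con->set_last_keepalive_ack(stamp);
    }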
Signed-off-by: Sage Weil <sage@inktank.com>
We call these in on_activate and on_pool_change. In the former, we are
necessarily active. In the latter, we only want to do anything if we are
active (otherwise, it will be taken care of when we eventually do become
active).
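A minimal sketch of the guard in on_pool_change (the agent setup call
is named hypothetically):

    void ReplicatedPG::on_pool_change()
    {
      if (!is_active())
        return;  // handled later, when we actually become active
      agent_setup();
    }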
Fixes: #7904
Signed-off-by: Samuel Just <sam.just@inktank.com>
Make sure the Inode does not go away while a readahead is in progress. In
particular:
- read_async
- start a readahead
- get actual read from cache, return
- close/release
- call ObjectCacher::release_set() and get unclean > 0, assert
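A minimal sketch of the fix, with hypothetical pin/unpin calls: take a
reference on the Inode before starting the readahead and drop it in
the completion, so close/release cannot release the cached buffers
while the readahead is still in flight:

    struct C_Readahead : public Context {
      Client *client;
      Inode *inode;
      C_Readahead(Client *c, Inode *in) : client(c), inode(in) {
        inode->get();               // pin for the whole readahead
      }
      void finish(int r) override {
        client->put_inode(inode);   // readahead done; safe to release
      }
    };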
Fixes: #7867
Backport: emperor, dumpling
Signed-off-by: Sage Weil <sage@inktank.com>
If we do not assemble a target bl, we still want to return a valid
return code with the number of bytes read ahead so that the C_RetryRead
completion will see this as a finish and call the caller's provided
Context.
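A sketch of the return path; the byte accumulator name is hypothetical:

    // no target bl to fill: still report how much was read ahead, so
    // C_RetryRead treats this as a completed read and fires the
    // caller's Context instead of retrying
    if (!bl)
      return total_read_ahead;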
Signed-off-by: Sage Weil <sage@inktank.com>
RGWRados::initialize() is not called when doing
RGWRados::get_raw_storage_provider(). This was the culprit for the issue.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
We aren't actually active between activate() and all_activated_committed().
We'd have to suspend agent_work during that period which seems like too much
complexity for too little work saved.
Fixes: #7904
Signed-off-by: Samuel Just <sam.just@inktank.com>
Fixes: #7903
Since we didn't prefetch the data, we can't rely on the data actually
existing there. In that case just move on and read the object.
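A sketch of the fallback, with hypothetical names: only trust the
prefetch buffer when it actually holds data, otherwise issue a real
read:

    if (prefetched_data.length() == 0) {
      // nothing was prefetched; read the object directly
      ret = read_obj(obj, ofs, len, &bl);
      if (ret < 0)
        return ret;
    }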
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
If we decide to revert back to up, we need to
1- return false, so that we go into the NeedActingChange state, and
2- actually ask for that change.
It's too fugly to try to jump down to the existing queue_want_pg_temp
call 100+ lines down in this function, so just do it here. We already
know that we are requesting to clear the pg_temp.
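A minimal sketch of both steps together; the placement inside the
acting-choice path is illustrative:

    if (want == up) {
      // 2: ask the mon to clear pg_temp for this pg
      vector<int> empty;
      osd->queue_want_pg_temp(info.pgid, empty);
      // 1: report no usable acting set so we enter NeedActingChange
      return false;
    }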
Fixes: #7902
Backport: emperor, dumpling
Signed-off-by: Sage Weil <sage@inktank.com>
When splitting a dirfrag, the delta dirstat is always added to the
first new dirfrag. Before the delta dirstat is propagated to the inode,
unlinking files from the remaining dirfrags can cause a negative inode
dirstat.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Commit bc3325b37 fixes a stack overflow bug that happens when replaying
client requests. A similar stack overflow can happen when processing
finished contexts.
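The usual cure, sketched here with an illustrative queue name: drain
finished contexts iteratively from a queue instead of completing them
recursively, which bounds the stack depth:

    while (!finished_queue.empty()) {
      list<Context*> ls;
      ls.swap(finished_queue);
      // completing these may push new entries onto finished_queue;
      // the loop picks them up without growing the stack
      finish_contexts(cct, ls, 0);
    }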
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
The auth MDS has received dirty scatterlock state, but it hasn't
journaled the dirty state yet. The log segment that marked the
scatterlock dirty needs to be preserved. Therefore, we can't clear
the scatterlock's dirty flag.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Fragmenting a non-auth dirfrag results in several smaller dirfrags.
Some of the resulting dirfrags can be empty, and those are not needed
to connect to the auth subtree.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
If dirfrags are subtree roots, mark the dirfragtreelock as scattered
dirty; otherwise, journal the dirfragtree change.
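A sketch of the branch; the journal helper is named hypothetically:

    if (dir->is_subtree_root()) {
      // subtree boundary: scatter the dirty dirfragtree state
      mds->locker->mark_updated_scatterlock(&diri->dirfragtreelock);
    } else {
      // journal the dirfragtree change directly
      journal_dirfragtree_update(diri);
    }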
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
MDCache::handle_cache_expire() ignores mismatched dirfrags. This is
OK during normal operation because the MDS doesn't trim a replica
inode whose dirfrags are likely being fragmented (see commit 22535340).
During recovery, the recovering MDS can receive a survivor MDS' cache
expire message before it sends cache rejoin acks. In this case,
there can still be mismatched dirfrags, but nothing prevents the
survivor MDS from trimming the inode of these mismatched dirfrags. So
there can be unconnected dirfrags when the recovering MDS sends cache
rejoin acks.
The fix is: when a mismatched dirfrag is encountered during recovery,
check if the inode of the dirfrag is still replicated to the sender
MDS. If the inode is not replicated, remove the sender MDS from the
replica maps of all child dirfrags.
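A minimal sketch of that check, with hypothetical helper names:

    CInode *diri = get_inode(df.ino);
    if (diri && !diri->is_replica(from)) {
      // the sender no longer replicates the inode, so it cannot hold
      // replicas of any of the inode's dirfrags either
      list<CDir*> ls;
      diri->get_dirfrags(ls);
      for (list<CDir*>::iterator p = ls.begin(); p != ls.end(); ++p)
        (*p)->remove_replica(from);
    }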
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
For slave rename and rmdir events, the MDS needs to preserve the
non-auth dirfrag in which the renamed inode originally lived until the
slave commit event is encountered. The current way to handle this is
to use MDCache::uncommitted_slave_rename_olddir to track any non-auth
dirfrag that needs to be preserved. This method does not work well if
any preserved dirfrag gets fragmented by a log event (such as
ESubtreeMap) between the slave prepare event and the slave commit
event.
The fix is to track the inode of the dirfrag instead of tracking the
dirfrag that needs to be preserved directly.
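A sketch of the data-structure change; the exact container types are
illustrative. The inode pointer is stable across refragmentation,
while a CDir* for a specific frag is not:

    // before: track the preserved dirfrags themselves
    //   map<metareqid_t, list<CDir*> > uncommitted_slave_rename_olddir;
    // after: track the inodes of those dirfrags and resolve back to
    // whatever dirfrags exist at slave-commit time
    //   map<metareqid_t, list<CInode*> > uncommitted_slave_rename_olddir;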
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>