Prior to split, this did not matter. With split, however, it's
crucial that a pg go through advance_pg() for the map causing
the split. During operation, a PG lags the OSD superblock
epoch. If the OSD dies after the OSD epoch passes the split
but before the pg epoch passes the split, the PG will be
reloaded at the OSD epoch and won't see the split operation.
After that point, the PG collection might contain stray objects
which should have been split off into a child.
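As an illustration only (invented types; the function below merely
echoes the name of OSD::advance_pg), here is a minimal sketch of the
invariant: a reloaded PG must replay every map from its own epoch up
to the superblock epoch, otherwise the split map is skipped entirely.

  // Minimal sketch, not OSD code: skipping intermediate maps skips the split.
  #include <cassert>
  #include <cstdint>

  struct FakePG {
    uint32_t epoch;      // last map this PG has processed
    bool split_done;     // did we act on the split map?
  };

  // Stand-in for OSD::advance_pg(): walk the PG through each map in order.
  void advance_pg(FakePG &pg, uint32_t osd_epoch, uint32_t split_epoch) {
    for (uint32_t e = pg.epoch + 1; e <= osd_epoch; ++e) {
      if (e == split_epoch)
        pg.split_done = true;   // hand the child's objects off here
      pg.epoch = e;
    }
  }

  int main() {
    FakePG pg{10, false};
    advance_pg(pg, /*osd_epoch=*/15, /*split_epoch=*/12);
    assert(pg.split_done);      // reloading directly at epoch 15 would miss it
    return 0;
  }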
Signed-off-by: Samuel Just <sam.just@inktank.com>
PGs are split after updating to the map on which they split.
OSD::activate_map populates the set of currently "splitting"
pgs. Messages for those pgs are delayed until the split
is complete. We add the newly split children to pg_map
once the transaction populating their on-disk state completes.
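A rough sketch of the queueing idea described above, with invented
stand-in types (the real code keys off pg_t and the op messages):
ops aimed at a pg in the splitting set are parked and replayed once
the split transaction completes and the child is in pg_map.

  #include <iostream>
  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  using pgid_t = std::string;   // stand-in for pg_t
  using Message = std::string;  // stand-in for an op message

  std::set<pgid_t> splitting;   // filled in by activate_map() per the text
  std::map<pgid_t, std::vector<Message>> waiting_for_split;

  void handle_op(const pgid_t &pgid, const Message &m) {
    if (splitting.count(pgid)) {
      waiting_for_split[pgid].push_back(m);  // defer until split completes
      return;
    }
    std::cout << "dispatch " << m << " to " << pgid << "\n";
  }

  void on_split_complete(const pgid_t &child) {
    splitting.erase(child);                  // child is now in pg_map
    for (const auto &m : waiting_for_split[child])
      handle_op(child, m);                   // replay deferred messages
    waiting_for_split.erase(child);
  }

  int main() {
    splitting.insert("1.4");
    handle_op("1.4", "osd_op(write)");       // deferred
    handle_op("1.0", "osd_op(read)");        // dispatched immediately
    on_split_complete("1.4");                // deferred op goes through now
  }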
Signed-off-by: Samuel Just <sam.just@inktank.com>
Splits will be handled when the map update effecting the split is
processed for the splitting pg on each OSD. This will mesh
with the pg history which will place the new pg at the current
positions of the splitting pg.
Signed-off-by: Samuel Just <sam.just@inktank.com>
If an unlink is interrupted between removing the file
and updating the subdir attribute, the attribute will
overestimate the number of files in the directory. This
is by design; at worst we will merge the collection later
than intended, but closing the gap would require a second
subdir xattr update. However, this can in extreme cases
result in a collection with subdirectories but no objects.
FileStore::_destroy_collection would therefore see an
erroneous -ENOTEMPTY.
prep_delete allows the CollectionIndex implementation to
clean up state prior to removal.
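For illustration, a hedged sketch of what a prep_delete()-style hook
can do (the helper and layout below are invented, not the FileStore
on-disk format): prune subdirectories that hold no objects so the
final collection removal does not see a spurious -ENOTEMPTY.

  #include <filesystem>
  #include <iostream>

  namespace fs = std::filesystem;

  // Returns true if 'dir' ends up empty and can itself be removed.
  bool prune_empty_subdirs(const fs::path &dir) {
    bool empty = true;
    for (const auto &ent : fs::directory_iterator(dir)) {
      if (ent.is_directory()) {
        if (prune_empty_subdirs(ent.path()))
          fs::remove(ent.path());   // empty subdir left behind by a crash
        else
          empty = false;
      } else {
        empty = false;              // a real object still lives here
      }
    }
    return empty;
  }

  int main(int argc, char **argv) {
    if (argc < 2) {
      std::cerr << "usage: prune <collection-dir>\n";
      return 1;
    }
    prune_empty_subdirs(argv[1]);   // analogous in spirit to prep_delete()
    return 0;
  }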
Signed-off-by: Samuel Just <sam.just@inktank.com>
Several pieces of HashIndex involve multi-step operations
which are sensitive to OSD crashes. This patch introduces
failure injection, exercised by store_test.cc, to force
retries from various points in the LFNIndex helper methods.
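The hook names below are made up for the example (they are not the
LFNIndex API), but they show the shape of counter-driven failure
injection: fail before a chosen step, then retry the whole operation
the way a restarted OSD would.

  #include <iostream>
  #include <stdexcept>

  struct InjectedFailure : std::runtime_error {
    InjectedFailure() : std::runtime_error("injected failure") {}
  };

  int failure_at = -1;  // which step to fail before; -1 disables injection
  int step = 0;

  void maybe_fail() {
    if (step++ == failure_at)
      throw InjectedFailure();
  }

  // A multi-step operation with an injection point before each step.
  void multi_step_op() {
    maybe_fail(); std::cout << "step 1: write temp link\n";
    maybe_fail(); std::cout << "step 2: set xattr\n";
    maybe_fail(); std::cout << "step 3: rename into place\n";
  }

  int main() {
    failure_at = 1;                 // "crash" before step 2
    try {
      multi_step_op();
    } catch (const InjectedFailure &) {
      std::cout << "-- injected crash, retrying from the top\n";
      failure_at = -1;              // the retry must run to completion
      step = 0;
      multi_step_op();
    }
    return 0;
  }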
Signed-off-by: Samuel Just <sam.just@inktank.com>
Server::_rename_prepare() adds the remote inode's parent instead of
its projected parent to the journal. So during journal replay, the
journal entry for the rename operation will wrongly revert the
remote inode's projected rename. This issue can be reproduced by:
touch file1
ln file1 file2
rm file1
mv file2 file3
After journal replay, file1 reappears and directory's fragstat
gets corrupted.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Creating a bloom filter for an incomplete dir that was added by log
replay will confuse subsequent dir lookups and can create a null
dentry for an existing file. The erroneous null dentry confuses the
fragstat accounting and causes an undeletable empty directory.
The fix is to check if the dir is complete before creating the bloom
filter. For the MDCache::trim_non_auth{,_subtree} cases, just do
not call CDir::add_to_bloom because the bloom filter is useless for
replicas.
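A minimal model of the fix (invented struct, with a std::set standing
in for the real bloom filter): only a dir known to be complete may
feed the filter, so an incomplete dir from log replay can never
produce a false "not here" and a bogus null dentry.

  #include <iostream>
  #include <optional>
  #include <set>
  #include <string>

  struct Dir {
    bool complete = false;           // do we hold every dentry of this dir?
    std::set<std::string> dentries;
    std::optional<std::set<std::string>> bloom;  // stand-in for the filter

    void add_to_bloom() {
      if (!complete)                 // the fix: never build from a partial dir
        return;
      bloom = dentries;
    }

    bool maybe_contains(const std::string &name) const {
      if (!bloom)
        return true;                 // unknown: go to disk, no null dentry
      return bloom->count(name) > 0;
    }
  };

  int main() {
    Dir d;
    d.dentries = {"a"};              // "b" exists on disk, not replayed yet
    d.add_to_bloom();                // no-op: d.complete is false
    std::cout << std::boolalpha
              << "might contain b: " << d.maybe_contains("b") << "\n"; // true
  }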
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
These asserts are valid for a uniform cluster, but they won't hold
for a replica running a version without the info.last_epoch_started
patch.
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Greg Farnum <greg@inktank.com>
Avoid anything on stdout that will generate cron emails for people.
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Sage Weil <sage@inktank.com>
If a subdirectory is specified to ceph_mount, the
root inode does not have an ino of CEPH_INO_ROOT, so
cwd will never find root and eventually hits an
assertion in in->get_first_parent(). This fix uses
the inode stored in the root member instead, ensuring
that we stop wherever the mount is rooted.
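A hedged sketch of the idea (toy inode type, not the Client
internals): build the cwd path by walking parents until we reach the
mount's root inode itself, instead of looking for CEPH_INO_ROOT,
which is never reached when a subdirectory was mounted.

  #include <cstdint>
  #include <iostream>
  #include <memory>
  #include <string>

  struct Inode {
    uint64_t ino;
    std::string name;                 // this inode's name within its parent
    std::shared_ptr<Inode> parent;
  };

  std::string getcwd_from(std::shared_ptr<Inode> in,
                          const std::shared_ptr<Inode> &root) {
    std::string path;
    while (in != root) {              // stop at the mount root, not at ino 1
      path = "/" + in->name + path;
      in = in->parent;
    }
    return path.empty() ? "/" : path;
  }

  int main() {
    // Mount rooted at /a: 'a' is the root member even though its ino != 1.
    auto a = std::make_shared<Inode>(Inode{2, "a", nullptr});
    auto b = std::make_shared<Inode>(Inode{3, "b", a});
    std::cout << getcwd_from(b, a) << "\n";   // prints "/b"
  }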
Signed-off-by: Sam Lang <sam.lang@inktank.com>
After replay, we don't know if the dentry removal has already been
committed. Use a sloppy removal so that we succeed even if we are
repeating the operation.
Conveniently, the previous implementation (pre v0.55) silently ignored
tmap op codes it did not understand, which means this new RMSLOPPY will
be interpreted the same as an actual RMSLOPPY. That means a v0.55
mds can run against an older osd (say, argonaut) without problems.
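Purely as an illustration (the op codes and decoder below are
invented, not the real tmap encoding), this shows why a sloppy
remove is idempotent and therefore safe to replay, and how a decoder
that ignores op codes it does not understand keeps on decoding:

  #include <iostream>
  #include <map>
  #include <string>
  #include <utility>
  #include <vector>

  enum Op { OP_SET = 1, OP_RM = 2, OP_RMSLOPPY = 3 };

  int apply(std::map<std::string, std::string> &tmap,
            const std::vector<std::pair<Op, std::string>> &ops) {
    for (const auto &[op, key] : ops) {
      switch (op) {
      case OP_SET:
        tmap[key] = "";
        break;
      case OP_RM:
        if (!tmap.erase(key))
          return -2;                 // strict remove: missing key is an error
        break;
      case OP_RMSLOPPY:
        tmap.erase(key);             // sloppy remove: fine if already gone
        break;
      default:
        break;                       // unknown op codes are silently ignored
      }
    }
    return 0;
  }

  int main() {
    std::map<std::string, std::string> tmap{{"dentry", ""}};
    // Replayed twice, as can happen after journal replay: both succeed.
    std::cout << apply(tmap, {{OP_RMSLOPPY, "dentry"}}) << "\n";  // 0
    std::cout << apply(tmap, {{OP_RMSLOPPY, "dentry"}}) << "\n";  // 0
  }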
Signed-off-by: Sage Weil <sage@inktank.com>
This reverts 29fae494d0 and fixes the
alternate implementation added by 8e91d00b52.
librbd relies on the ENOENT return value.
Reported-by: Dan Mick <dan.mick@inktank.com>
Signed-off-by: Sage Weil <sage@inktank.com>
Rename applied_seq to max_applied_seq, since it is a bound; there may be
seqs < max_applied_seq that are not applied. This aligns the naming with
max_applying_seq.
Signed-off-by: Sage Weil <sage@inktank.com>
We can have a large number of operations in the op_wq waiting to be applied
to the fs. Currently, when we want to commit, we wait for them *all* to
apply. This can take a very long time (the default queue length is 500
operations!).
Instead, mark an Op as started ("applying") when the thread pool actually
starts to apply it. At that point, only wait for applying ops to complete.
We let any threads with an op seq < max_applying_seq begin as well so that
we have a proper ordering/barrier. When those flush, applied_seq will ==
max_applying_seq, and that becomes the committing_seq value.
Note that 'applied_seq' is still maintained, but serves no real purpose
except to populate our asserts with sanity checks. max_applying_seq serves
the purpose applied_seq used to.
This removes one unnecessary source of latency associated with fs
commits.
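A simplified model of the new barrier (the sequence names follow the
text, but the class and methods are invented): commit waits only for
ops that a worker has actually started applying at or below
max_applying_seq; ops still sitting in the queue do not hold it up.

  #include <condition_variable>
  #include <cstdint>
  #include <iostream>
  #include <mutex>
  #include <set>

  struct ApplyState {
    std::mutex m;
    std::condition_variable cv;
    std::set<uint64_t> applying;     // seqs started but not yet applied
    uint64_t max_applying_seq = 0;   // highest seq any worker has started

    void op_apply_start(uint64_t seq) {
      std::lock_guard<std::mutex> l(m);
      applying.insert(seq);
      if (seq > max_applying_seq)
        max_applying_seq = seq;
    }
    void op_apply_finish(uint64_t seq) {
      std::lock_guard<std::mutex> l(m);
      applying.erase(seq);
      cv.notify_all();
    }
    // Commit barrier: wait only for already-applying ops, not the whole queue.
    uint64_t flush_for_commit() {
      std::unique_lock<std::mutex> l(m);
      uint64_t barrier = max_applying_seq;
      cv.wait(l, [&] {
        return applying.empty() || *applying.begin() > barrier;
      });
      return barrier;                // this becomes committing_seq
    }
  };

  int main() {
    ApplyState s;
    s.op_apply_start(1);
    s.op_apply_start(2);
    s.op_apply_finish(1);
    s.op_apply_finish(2);
    std::cout << "committing_seq = " << s.flush_for_commit() << "\n";  // 2
  }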
Signed-off-by: Sage Weil <sage@inktank.com>