Per Greg's recommendation, change the name of this function to better
indicate what it does now that we always request a journal flush on
the last cap flush.
Also, add a comment above the function to better explain why we do this.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Ensure that we ask the MDS to flush the journal on the last cap flush
from sync_fs and umount codepaths.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
The kernel client lags the userland code a bit, and feature support for
addr2 is not quite ready. Still, we want to allow the client to set the
new flags field in a cap request before then so it can get better fsync
performance.
When we go to update the cap fields, grab the features from the peer,
and verify that the appropriate feature bits are set before we apply
updates to the btime and change_attr.
Also, just have the function return early if dirty is 0, since it's
a no-op in that case, and turn the comment above the function into
an assertion.
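A minimal sketch of the idea; the feature-bit and field names below are
assumptions for illustration, not the actual MDS code:

    // Illustrative only; names are assumptions.
    #include <cstdint>

    constexpr uint64_t FEATURE_FS_BTIME       = 1ull << 0;  // hypothetical bit
    constexpr uint64_t FEATURE_FS_CHANGE_ATTR = 1ull << 1;  // hypothetical bit

    struct InodeFields {
      uint64_t change_attr = 0;
      uint64_t btime = 0;
    };

    void update_cap_fields(InodeFields& in, int dirty, uint64_t peer_features,
                           uint64_t new_change_attr, uint64_t new_btime)
    {
      if (dirty == 0)
        return;  // nothing dirty, so this is a no-op; bail out early

      // Only touch the newer fields if the peer actually understands them.
      if (peer_features & FEATURE_FS_CHANGE_ATTR)
        in.change_attr = new_change_attr;
      if (peer_features & FEATURE_FS_BTIME)
        in.btime = new_btime;
    }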
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Ensure that the client will request an immediate journal flush from the
MDS when we'll end up waiting on the flush response. This patch should
fix the fsync codepath, but we may need something similar for syncfs.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
In a later patch, we'll want to have the client set the sync flag in
the cap flush, to hint to the MDS that it should process it immediately.
We could add a second bool, but let's instead do what the kernel client
does, which is to use a flags field. With that, the existing no_delay
bool becomes CHECK_CAPS_NODELAY.
We'll add other flags in subsequent patches.
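A sketch of the shape of that change; flag names other than
CHECK_CAPS_NODELAY are assumptions for illustration:

    // Illustrative only: a bool parameter replaced by an or-able flags field.
    enum {
      CHECK_CAPS_NODELAY     = 0x1,  // what the old no_delay bool expressed
      CHECK_CAPS_SYNCHRONOUS = 0x2,  // hypothetical flag for the later sync hint
    };

    void check_caps(int flags)
    {
      if (flags & CHECK_CAPS_NODELAY) {
        // flush caps without the usual delay
      }
      if (flags & CHECK_CAPS_SYNCHRONOUS) {
        // mark the cap flush so the MDS processes it immediately
      }
    }

    // Callers then combine flags instead of passing multiple bools:
    //   check_caps(CHECK_CAPS_NODELAY | CHECK_CAPS_SYNCHRONOUS);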
Signed-off-by: Jeff Layton <jlayton@redhat.com>
If the client has set the sync flag in a cap update, then it
is indicating that it's waiting on the reply. Ensure that we flush
the journal in that case.
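Roughly, on the MDS side; the flag and type names here are assumptions,
not the actual handler code:

    // Illustrative only; names are assumptions.
    struct Journal { void flush() { /* ask the journal to commit now */ } };

    constexpr unsigned CLIENT_CAPS_SYNC = 0x1;  // hypothetical flag name

    void handle_client_cap_flush(Journal& mdlog, unsigned flags)
    {
      // ... record the cap flush in the journal as usual ...

      // The client is blocked waiting for our ack, so don't let the flush
      // sit until the next periodic journal commit; kick it immediately.
      if (flags & CLIENT_CAPS_SYNC)
        mdlog.flush();
    }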
Signed-off-by: Jeff Layton <jlayton@redhat.com>
The osd_scrub_during_recovery config option controls whether the OSD
will schedule a new scrub while recovery is active. When set to false,
no new scrubs will be initiated by the OSD while there are recovery
threads active on that OSD.
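For example, to keep new scrubs from starting while recovery is running,
the option can be set in ceph.conf:

    [osd]
        osd scrub during recovery = false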
Signed-off-by: Wido den Hollander <wido@42on.com>
Duplicating the definition of the same lambda in multiple places is not
good. Also switch ExtentMap::rm() to use the new disposer, to keep it
consistent with the others.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
Fix the following compiler warning:
/home/jenkins-build/build/workspace/ceph-pull-requests/build/boost/include/boost/intrusive/pointer_plus_bits.hpp: In member function ‘bool BlueStore::ExtentMap::encode_some(uint32_t, uint32_t, ceph::bufferlist&, unsigned int*)’:
/home/jenkins-build/build/workspace/ceph-pull-requests/build/boost/include/boost/intrusive/pointer_plus_bits.hpp:76:7: warning: ‘dummy’ is used uninitialized in this function [-Wuninitialized]
n = pointer(uintptr_t(p) | (uintptr_t(n) & Mask));
^
/home/jenkins-build/build/workspace/ceph-pull-requests/src/os/bluestore/BlueStore.cc:1779:10: note: ‘dummy’ was declared here
Extent dummy(offset);
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
tests: fix tests vs pool deletion
The default changed to disallow pool delete as of #11665; the tests assume it's allowed.
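For reference, tests that need to delete pools can re-enable it explicitly
(assuming the option in question is mon_allow_pool_delete), e.g. in ceph.conf:

    [mon]
        mon allow pool delete = true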
Reviewed-by: Dan Mick <dmick@redhat.com>
This is intended to remove an apparent race. The two effects are:
1. replace top-level command callouts with file builtins
2. do them in the src/rgw sub-cmake
This is cleaner, and ideally avoids the race.
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
If we set *next to max, then the caller (a few lines up) doesn't terminate
the loop and will keep trying to list objects in every following hash
dir until it reaches the end of the collection. In fact, if we have an
end bound we will never do an efficient listing unless we hit the max
first.
For one user, this was causing OSD suicides when scrub ran because it
wasn't able to list all objects before the timeout. In general, this would
cause scrub to stall a PG for a long time and slow down requests.
Broken by refactor in 921c4586f1.
Fixes: http://tracker.ceph.com/issues/17859
Signed-off-by: Sage Weil <sage@redhat.com>