A cache rejoin ack message may fragment a dirfrag; we should set the
'replay' parameter of adjust_dir_fragments() to false in this case.
This makes sure that CDir::merge/split wakes up any dentry waiters
in the fragmented dirfrag.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
When gathering rstat for a directory inode that is fragmented into
several dirfrags, the inode's rstat may temporarily become negative.
This is because, when splitting a dirfrag, the delta rstat is always
added to the first new dirfrag.
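A minimal sketch of the arithmetic (illustrative numbers, not Ceph code): summing per-dirfrag contributions one at a time can dip below zero when the split pushed the whole outstanding delta onto the first fragment, even though the final total is correct.

```python
def gather_rstat(per_frag_deltas):
    """Apply per-dirfrag rstat contributions in order; return the
    running totals seen while gathering."""
    total = 0
    running = []
    for delta in per_frag_deltas:
        total += delta
        running.append(total)
    return running

# A dirfrag is split two ways while 3 unlinks are still unpropagated;
# the whole -3 delta lands on the first new frag (2 files - 3 = -1),
# while the second frag contributes its 2 files:
print(gather_rstat([-1, 2]))  # [-1, 1]: negative mid-gather, correct at the end
```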
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
MDentryLink and MMDSFragmentNotify push replica inodes/dirfrags
to other MDSes. Both are racy because, by the time the target MDS
receives them, it may have already expired the replicated
inode's/dirfrags' ancestors. The race creates unconnected replica
inodes/dirfrags. Unconnected replicas are problematic for subtree
migration, because the migrator sends MExportDirNotify according to
the subtree dirfrag's replica list; an MDS that contains unconnected
replicas may not receive the MExportDirNotify.
The fix is, for MDentryLink and MMDSFragmentNotify messages that
may be received later, avoid trimming their parent replica objects.
If a null replica dentry is not readable, we may receive an
MDentryLink message later. If a replica inode's dirfragtreelock is
not readable, it's likely that some dirfrags of the inode are being
fragmented, and we may receive an MMDSFragmentNotify message later.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
If an unstable scatter lock is encountered when handling a weak cache
rejoin, don't remove the recovering MDS from the scatter lock's
gather list. The reason is that the recovering MDS may hold a
rejoined wrlock on the scatter lock. (Rejoined wrlocks are created
when handling strong cache rejoins from survivor MDSes.)
When composing the cache rejoin ack, if the recovering MDS is in the
lock's gather list, set the lock state for the recovering MDS to a
compatible unstable state.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
This patch contains 3 changes:
- limit the number of in-progress fragmenting operations.
- reduce the probability of splitting small dirfrags.
- process the merge_queue when thrash_fragments is enabled.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
For a rename operation, the null dentry is replayed first; it
detaches the inode from the FS hierarchy. Then the primary dentry is
replayed; it updates the inode and re-attaches it to the FS
hierarchy. We may call CInode::force_dirfrag() when updating the
inode, but CInode::force_dirfrag() doesn't work well while the inode
is detached from the FS hierarchy, because adjusting fragments may
also adjust the subtree map. The fix is to not detach the inode when
replaying the null dentry.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Introduce a new flag, DIRTYDFT, to CDir and EMetaBlob::dirlump. The
flag indicates that the dirfrag is newly fragmented and the
corresponding dirfragtree change hasn't been propagated to the
directory inode. After fragmenting subtree dirfrags, make sure the
DIRTYDFT flag is set on the EMetaBlob::dirlumps that correspond to
the resulting dirfrags. The journal replay code uses the DIRTYDFT
flag to decide whether the dirfragtree is scattered dirty.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
We can't wait for an object to become auth pinnable after freezing a
dirfrag/subtree, because that can cause deadlock. The current
fragmenting code checks if the directory inode is auth pinnable,
then calls Locker::acquire_locks(). That avoids the deadlock, but
also forbids fragmenting subtree dirfrags. We can get rid of this
limitation by using the 'nonblocking auth pin' mode of
Locker::acquire_locks().
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Freezing a dir and freezing a tree have the same deadlock cases.
This patch adds freeze-dir deadlock detection, which imitates
commit ab93aa59 (mds: freeze tree deadlock detection).
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
The current code uses the start time of freezing a tree to detect
deadlock. It is better to check how long the auth pin count of the
freezing tree has stayed unchanged to decide whether there is a
potential deadlock.
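A sketch of the idea with an assumed interface (the actual MDS code differs): reset a timer whenever the freezing tree's auth pin count changes, and only flag a potential deadlock once the count has been stuck longer than the timeout.

```python
import time

class FreezeProgressTracker:
    """Flag a potential deadlock when a freezing tree's auth pin
    count has stayed unchanged for longer than `timeout` seconds."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_count = None
        self.last_change = None

    def update(self, auth_pin_count, now=None):
        """Record the current auth pin count; return True if it has
        been stuck at the same value for longer than the timeout."""
        if now is None:
            now = time.monotonic()
        if auth_pin_count != self.last_count:
            self.last_count = auth_pin_count   # progress: reset the clock
            self.last_change = now
            return False
        return (now - self.last_change) > self.timeout

# The old scheme (elapsed time since freeze start) would flag a slow
# but progressing freeze; this one only fires while the count is stuck:
t = FreezeProgressTracker(timeout=5.0)
t.update(3, now=0.0)          # first observation
t.update(2, now=4.0)          # count dropped: progress, clock resets
print(t.update(2, now=8.0))   # False (stuck only 4s)
print(t.update(2, now=10.0))  # True  (stuck 6s > 5s)
```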
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
When sending MMDSFragmentNotify to peers, also replicate the new
dirfrags. This guarantees that peers get new replica nonces for the
new dirfrags, so it's safe to ignore mismatched/old dirfrags in
cache expire messages.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
An undef inode may contain an undef dirfrag (*). When the undef
inode is opened, we should force fragment the undef dirfrag (*),
because we may open other dirfrags later and the undef dirfrag (*)
would overlap with them.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Don't keep the COMPLETE flag when merging dirfrags during journal
replay, because it's inconvenient to check whether all dirfrags
under the 'basefrag' are in the cache and complete. One special case
is a newly created dirfrag that gets fragmented and then has the
fragment operation rolled back. The COMPLETE flag should be
preserved in this case because the dirfrag still doesn't exist in
the object store.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Process subtree dirfrags first, then process nested dirfrags,
because the code that processes nested dirfrags treats unprocessed
subtree dirfrags as child directories' dirfrags.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Don't force dir fragments according to the subtree bounds in the
resolve message. The resolve message was not sent by the auth MDS of
these subtree-bound dirfrags.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
When handling a discover dirfrag message, choose an approximate frag
if the requested dirfrag doesn't exist. When handling a discover
dirfrag reply, wake up the appropriate waiters if the reply is for a
different dirfrag than the one requested.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
MDCache::discover_ino() doesn't work well for directories that are
fragmented into several dirfrags, because MDCache::handle_discover()
doesn't know which dirfrag the inode lives in when the sender has
outdated frag information.
This patch replaces all uses of MDCache::discover_ino() with
MDCache::discover_path().
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
The current discover dirfrag code only allows discovering one
dirfrag at a time. This can cause deadlock if there are directories
that are fragmented into several dirfrags. For example:
mds.0 mds.1
-----------------------------------------------------------------
freeze subtree (1.*) with bound (2.1*)
discover (2.0*) ->
handle discover (2.0*), frozen tree, wait
<- export subtree (1.*) with bound (2.1*)
discover (2.1*), wait
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Commit 15a5d37a (mds: fix race between scatter gather and dirfrag
export) is incomplete; it doesn't handle the race where no
fragstat/neststat is gathered. The previous commit prevents scatter
gather while a dir is being exported, which eliminates races of this
type.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
If the auth MDS of the subtree root inode is neither the exporter
MDS nor the importer MDS, and it gathers the subtree root's
fragstat/neststat while the subtree is being exported, it's possible
that the exporter MDS and the importer MDS are both auth for the
subtree root, or that neither is, at the time they receive the lock
messages. So the auth MDS of the subtree root inode may get no
fragstat/neststat for the subtree root dirfrag, or duplicates.
The fix is that, during a subtree export, both the exporter MDS and
the importer MDS hold locks on the scatter locks of the subtree root
inode. The importer MDS tries acquiring locks on the scatter locks
when handling the MExportDirPrep message. If it fails to acquire all
the locks, it sends a NACK to the exporter MDS, which cancels the
export when it receives the NACK.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Start an internal MDS request to acquire the locks required for
exporting a dir. This is more reliable than using
Locker::rdlock_take_set(), and it also allows acquiring locks other
than rdlocks.
Only use Locker::acquire_locks() in the first stage of exporting a
dir (before freezing the subtree). After the subtree is frozen,
still use 'try lock' to re-acquire the locks, to minimize the time
the tree stays frozen.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Add a parameter to Locker::acquire_locks() to enable nonblocking
auth pins. If nonblocking mode is enabled and an object that can't
be auth pinned is encountered, Locker::acquire_locks() aborts the
MDRequest instead of waiting.
The nonblocking mode is required in cases where we want to acquire
locks after auth pinning a directory or freezing a dirfrag/subtree.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
A rename may move a file from one dirfrag to another dirfrag of the
same directory inode. If the two dirfrags belong to different auth
MDSes, both MDSes should hold wrlocks on the filelock/nestlock of
the directory inode.
If a lock is in both the wrlocks list and the remote_wrlocks list,
the current Locker::acquire_locks() only acquires the local wrlock.
The auth MDS of the source dirfrag doesn't have the wrlock, so the
slave request of the operation may modify the dirfrag after its
fragstat/neststat have already been gathered. This corrupts the
dirstat/neststat accounting.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
The OSD should return no data if the read size is trimmed to zero
by the truncate_seq/truncate_size check. We can't rely on
ObjectStore::read() to do that, because it reads the entire object
when the 'len' parameter is zero.
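A sketch of the clamping described above (hypothetical helper names, not the OSD code): compute the trimmed length explicitly and return an empty result when it is zero, instead of passing len == 0 down to a store read that treats zero as "whole object".

```python
def trimmed_read(store_read, offset, length, truncate_size):
    """Clamp a read extent against truncate_size; never call the
    store with a zero length (which would mean 'read everything')."""
    end = min(offset + length, truncate_size)
    trimmed = max(end - offset, 0)
    if trimmed == 0:
        return b""  # read starts at or past the truncate point: no data
    return store_read(offset, trimmed)

store = lambda off, ln: b"x" * ln  # stand-in for ObjectStore::read()
print(trimmed_read(store, 0, 5, truncate_size=8))   # b'xxxxx'
print(trimmed_read(store, 10, 5, truncate_size=8))  # b'' (fully trimmed)
```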
Fixes: #7371
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Each ceph-osd process's Objecter instance has a sequence of tids
that starts at 1. To ensure these are unique across all time, set
the client incarnation to the OSDMap epoch in which we booted.
Note that the MDS does something similar (except the
incarnation is actually the restart count for the MDS
rank, since the MDSMap tracks that explicitly).
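A sketch of why this works (illustrative values, not the Objecter API): tids restart at 1 on every boot, but pairing them with a boot-epoch incarnation keeps the combined identifiers unique across restarts.

```python
def op_id(incarnation, tid):
    """Identify an op by (client incarnation, per-session tid)."""
    return (incarnation, tid)

# Two boots of the same ceph-osd both start their tids at 1, but they
# boot in different OSDMap epochs, so the pairs never collide:
first_boot  = {op_id(100, t) for t in (1, 2, 3)}
second_boot = {op_id(107, t) for t in (1, 2, 3)}
print(first_boot & second_boot)  # set(): no overlap
```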
Backport: emperor
Signed-off-by: Sage Weil <sage@inktank.com>
We need to focus agent attention on the PGs that most need it. For
starters, full PGs need immediate attention so that we can unblock
IO. More generally, fuller PGs give us the best payoff in terms of
evicted data versus effort expended finding candidate objects.
Restructure the agent queue with priorities. Quantize evict_effort
so that PGs do not jump between priorities too frequently.
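A sketch of the quantization (the bucket count is illustrative, not the OSD's constant): map the continuous evict_effort onto coarse buckets so that small fluctuations keep a PG at the same priority.

```python
def effort_bucket(effort, buckets=10):
    """Map evict_effort in [0.0, 1.0] onto one of `buckets` coarse
    priority steps."""
    effort = min(max(effort, 0.0), 1.0)
    return min(int(effort * buckets), buckets - 1)

# Small fluctuations stay in one bucket, so the PG's queue priority
# is stable; a big change still moves it:
print(effort_bucket(0.34), effort_bucket(0.37))  # 3 3
print(effort_bucket(0.95))                       # 9
```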
Signed-off-by: Sage Weil <sage@inktank.com>
If we are full and get a write request to a new object, put the op on a
wait list. Wake up when the agent frees up some space.
Note that we do not block writes to existing objects. That would be a
more aggressive strategy, but it is difficult to know up front whether we
will increase the size of the object or not, so we just leave it be. I
suspect this strategy is "good enough".
Also note that we do not yet prioritize agent attention to PGs that most
need eviction (e.g., those that are full).
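The policy above fits in one predicate (a sketch with assumed names, not the OSD code): only block when the pool is full and the target object does not yet exist.

```python
def should_wait_for_agent(pool_full, obj_exists):
    """Queue the op on the wait list only for writes that would
    create a new object while the pool is full; writes to existing
    objects proceed, since their size delta is unknown up front."""
    return pool_full and not obj_exists

print(should_wait_for_agent(True, False))   # True: new object, pool full
print(should_wait_for_agent(True, True))    # False: existing object
print(should_wait_for_agent(False, False))  # False: pool has space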
Signed-off-by: Sage Weil <sage@inktank.com>
If the cache pool is full, we are processing a read op, and we would
otherwise promote, redirect instead. This lets us continue to process the
op without blocking or making the cache pool any more full than it is.
Signed-off-by: Sage Weil <sage@inktank.com>
The EC pool does not support omap content. If the caching/tiering agent
encounters such an object, just skip it. Use the OMAP object_info_t flag
for this.
Although legacy pools will have objects with omap that do not have this
flag set, no *cache* pools yet exist, so we do not need to worry about the
agent running across legacy content.
Signed-off-by: Sage Weil <sage@inktank.com>
Set a flag if we ever set or update OMAP content on an object. This gives
us an easy indicator for the cache agent (without actually querying the
ObjectStore) so that we can avoid trying to flush omap to EC pools.
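A sketch of the flag bookkeeping (the bit value and names are illustrative, not the object_info_t definition): set the flag on the first omap write and let the agent test it without touching the ObjectStore.

```python
FLAG_OMAP = 1 << 0  # illustrative bit; the real object_info_t differs

class ObjInfo:
    def __init__(self):
        self.flags = 0

def on_omap_write(info):
    """Record that the object now carries omap content."""
    info.flags |= FLAG_OMAP

def flushable_to_ec_pool(info):
    """EC pools can't store omap, so skip flagged objects."""
    return not (info.flags & FLAG_OMAP)

info = ObjInfo()
print(flushable_to_ec_pool(info))  # True: no omap yet
on_omap_write(info)
print(flushable_to_ec_pool(info))  # False: agent skips this object
```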
Signed-off-by: Sage Weil <sage@inktank.com>
The agent initiates flush ops that don't have an OpRequest associated
with them. Make reply_ctx skip the actual reply message instead of
crashing if the flush request gets canceled (e.g., due to a race with
a write).
Signed-off-by: Sage Weil <sage@inktank.com>
If the target is > 1.0 for some reason (bad configuration, or high slop
value), and we are not yet full, we should be in IDLE mode--not SOME.
Signed-off-by: Sage Weil <sage@inktank.com>