Commit Graph

31717 Commits

Yan, Zheng
c54b3ceaac mds: fix slave rename rollback
use rollback bufferlist to decide if the inode is being exported.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
9e8dbf9e3f mds: remove failed MDS from export bystanders list
make sure the importer does not wait for MExportDirNotifyAck from
the failed MDS

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
21d209d024 mds: wake up dentry waiters when handling cache rejoin ack
Cache rejoin ack messages may fragment dirfrags; in this case we
should set the 'replay' parameter of adjust_dir_fragments() to false.
This makes sure that CDir::merge/split wakes up any dentry waiters in
the fragmented dirfrag.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
74ef370b02 mds: fix negative rstat assertion
when gathering rstat for a directory inode that is fragmented into
several dirfrags, the inode's rstat may temporarily become negative.
This is because, when splitting a dirfrag, the delta rstat is always
added to the first new dirfrag.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
22535340b4 mds: avoid race between cache expire and pushing replicas
MDentryLink and MMDSFragmentNotify push replica inodes/dirfrags to
other MDSes. Both are racy because, by the time the target MDS
receives them, it may have already expired the replicated
inode/dirfrags' ancestors. The race creates unconnected replica
inodes/dirfrags, which are problematic for subtree migration because
the migrator sends MExportDirNotify according to the subtree dirfrag's
replica list. An MDS that contains unconnected replicas may not
receive the MExportDirNotify.

The fix is, for MDentryLink and MMDSFragmentNotify messages that may
be received later, to avoid trimming their parent replica objects. If
a null replica dentry is not readable, we may receive an MDentryLink
message later. If a replica inode's dirfragtreelock is not readable,
it's likely some dirfrags of the inode are being fragmented, and we
may receive an MMDSFragmentNotify message later.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
cc77ef2d52 mds: fix scattered wrlock rejoin
If an unstable scatter lock is encountered when handling a weak cache
rejoin, don't remove the recovering MDS from the scatter lock's
gather list. The reason is that the recovering MDS may hold rejoined
wrlocks on the scatter lock. (Rejoined wrlocks are created when
handling strong cache rejoins from survivor MDSes.)

When composing the cache rejoin ack, if the recovering MDS is in a
lock's gather list, set the lock state sent to the recovering MDS to
a compatible unstable state.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
3b90c78540 mds: fixes for thrash fragment
This patch contains 3 changes:
- limit the number of in-progress fragmenting processes.
- reduce the probability of splitting small dirfrags.
- process the merge_queue when thrash_fragments is enabled.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
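The second change above (splitting small dirfrags less often) could be sketched roughly as below; the function name, threshold, and base probability are invented for illustration and are not the actual Ceph thrasher code:

```cpp
#include <cassert>

// Hypothetical sketch: when randomly picking dirfrags to split while
// thrashing, weight the choice by dirfrag size so that tiny dirfrags
// are split far less often than well-populated ones.
double split_probability(int num_dentries, double base_prob = 0.5,
                         int small_threshold = 100) {
  if (num_dentries >= small_threshold)
    return base_prob;                             // normal-sized dirfrag
  // scale the probability down linearly for small dirfrags
  return base_prob * num_dentries / small_threshold;
}
```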
Yan, Zheng
5b1de69ac7 mds: force fragment subtree bounds when replaying ESubtreeMap
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
4d5ceba531 mds: fix 'force dirfrags' during journal replay
For a rename operation, the null dentry is replayed first, which
detaches the inode from the FS hierarchy. Then the primary dentry is
replayed, which updates the inode and re-attaches it to the FS
hierarchy. We may call CInode::force_dirfrag() when updating the
inode, but CInode::force_dirfrag() doesn't work well when the inode
is detached from the FS hierarchy, because adjusting fragments may
also adjust the subtree map. The fix is to not detach the inode when
replaying the null dentry.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
f3666ededc mds: journal dirfragtree change
Introduce a new flag, DIRTYDFT, to CDir and EMetaBlob::dirlump. The
new flag indicates that the dirfrag is newly fragmented and the
corresponding dirfragtree change hasn't been propagated to the
directory inode yet.

After fragmenting subtree dirfrags, make sure the DIRTYDFT flag is
set on the EMetaBlob::dirlumps that correspond to the resulting
dirfrags. The journal replay code uses the DIRTYDFT flag to decide if
the dirfragtree is scattered dirty.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
ee7ac6fc66 mds: allow fragmenting subtree dirfrags
We can't wait until an object becomes auth pinnable after freezing a
dirfrag/subtree, because that can cause deadlock. The current dirfrag
fragmenting code checks if the directory inode is auth pinnable, then
calls Locker::acquire_locks(). This avoids deadlock, but it also
forbids fragmenting subtree dirfrags. We can get rid of this
limitation by using the 'nonblocking auth pin' mode of
Locker::acquire_locks().

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
98105b2448 mds: preserve dir_auth when splitting/merging dirfrags
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
3dc51dea31 mds: minor cleanup for EFragment and MMDSFragmentNotify
pass dirfrag_t to their constructors, instead of passing inode_t
and frag_t separately.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
9df6861b31 mds: freeze dir deadlock detection
freezing a dir and freezing a tree have the same deadlock cases.
This patch adds freeze-dir deadlock detection, which imitates
commit ab93aa59 (mds: freeze tree deadlock detection).

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
9a47913d20 mds: improve freeze tree deadlock detection
The current code uses the start time of freezing the tree to detect
deadlock. It is better to check how long the auth pin count of the
freezing tree stays unchanged to decide if there is a potential
deadlock.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
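The stagnation-based check described above can be sketched as follows; the struct and method names are invented for illustration and are not the actual MDS API:

```cpp
#include <cassert>

// Instead of timing from when freezing started, remember the last
// time the auth pin count changed and suspect a deadlock only when
// the count has stayed unchanged for longer than a timeout.
struct FreezeTracker {
  int last_num_auth_pins = -1;
  double last_change_time = 0.0;
  double timeout;

  explicit FreezeTracker(double timeout_s) : timeout(timeout_s) {}

  // Called periodically with the current auth pin count and clock.
  // Returns true if the count has been stuck long enough to suspect
  // a deadlock.
  bool maybe_deadlocked(int num_auth_pins, double now) {
    if (num_auth_pins != last_num_auth_pins) {
      last_num_auth_pins = num_auth_pins;  // progress: reset the clock
      last_change_time = now;
      return false;
    }
    return (now - last_change_time) > timeout;
  }
};
```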
Yan, Zheng
8079939534 mds: handle frag mismatch for cache expire
When sending MMDSFragmentNotify to peers, also replicate the new
dirfrags. This guarantees that peers get new replica nonces for the
new dirfrags, so it's safe to ignore mismatched/old dirfrags in the
cache expire message.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
305d16f3f6 mds: handle frag mismatch for cache rejoin weak
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
0eb311d39c mds: fix open undef dirfrags
An undef inode may contain an undef dirfrag (*). When the undef inode
is opened, we should force fragment the undef dirfrag (*), because we
may open other dirfrags later and the undef dirfrag (*) would overlap
with them.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
6e013cd675 mds: properly set COMPLETE flag when merging dirfrags
don't keep the COMPLETE flag when merging dirfrags during journal
replay, because it's inconvenient to check whether all dirfrags under
the 'basefrag' are in the cache and complete. One special case is
when a newly created dirfrag gets fragmented and the fragment
operation then gets rolled back. The COMPLETE flag should be
preserved in this case because the dirfrag still doesn't exist on the
object store.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:51 +08:00
Yan, Zheng
ee0ab2b733 mds: fix CInode::get_dirfrags_under()
make the function work when opened dirfrags don't match the
dirfragtree.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
1080fa4571 mds: fix MDCache::adjust_subtree_after_rename()
process subtree dirfrags first, then process nested dirfrags, because
the code that processes nested dirfrags treats unprocessed subtree
dirfrags as child directories' dirfrags.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
e0e25658d6 mds: fix MDCache::get_force_dirfrag_bound_set()
don't force dir fragments according to the subtree bounds in the
resolve message; the resolve message was not sent by the auth MDS of
these subtree bound dirfrags.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
ffe7151600 mds: handle frag mismatch for discover
When handling a discover dirfrag message, choose an approximate frag
if the requested dirfrag doesn't exist. When handling a discover
dirfrag reply, wake up the appropriate waiters if the reply is for a
different dirfrag than the one requested.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
b88034ee53 mds: use discover_path to open remote inode
MDCache::discover_ino() doesn't work well for directories that are
fragmented into several dirfrags, because MDCache::handle_discover()
doesn't know which dirfrag the inode lives in when the sender has
outdated frag information.

This patch replaces all use of MDCache::discover_ino() with
MDCache::discover_path().

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
1ff776669b mds: introduce fine-grained discover dirfrag wait queue
The current discover dirfrag code only allows discovering one dirfrag
at a time. This can cause deadlock when there are directories that
are fragmented into several dirfrags. For example:

mds.0                        mds.1
-----------------------------------------------------------------
                             freeze subtree (1.*) with bound (2.1*)
discover (2.0*) ->
                             handle discover (2.0*), frozen tree, wait
                          &lt;- export subtree (1.*) with bound (2.1*)
discover (2.1*), wait

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
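The difference between a single in-flight discover slot and the fine-grained queue above can be sketched as follows; the types and keying scheme are invented for illustration, not the real MDCache structures:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Waiters are keyed by the dirfrag being discovered ("ino.frag"
// strings here), so a discover of 2.1* can proceed while 2.0* is
// still blocked on a frozen tree, avoiding the deadlock shown above.
struct DiscoverQueue {
  std::map<std::string, std::vector<std::function<void()>>> waiting;

  bool in_flight(const std::string& frag) const {
    return waiting.count(frag) > 0;
  }
  // Start (or join) a discover of this dirfrag.
  void discover(const std::string& frag, std::function<void()> waiter) {
    waiting[frag].push_back(std::move(waiter));
  }
  // A reply for one dirfrag wakes only that dirfrag's waiters.
  void handle_reply(const std::string& frag) {
    for (auto& w : waiting[frag]) w();
    waiting.erase(frag);
  }
};
```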
Yan, Zheng
2c909cda0e mds: revert commit 15a5d37a
commit 15a5d37a (mds: fix race between scatter gather and dirfrag
export) is incomplete: it doesn't handle the race where no
fragstat/neststat is gathered. The previous commit prevents scatter
gathers while a dir is being exported, which eliminates races of this
type.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
5faa3134a1 mds: acquire scatter locks when exporting dir
If the auth MDS of the subtree root inode is neither the exporter MDS
nor the importer MDS, and it gathers the subtree root's
fragstat/neststat while the subtree is being exported, it's possible
that the exporter MDS and the importer MDS are both auth MDS of the
subtree root, or both are not, at the time they receive the lock
messages. So the auth MDS of the subtree root inode may get no
fragstat/neststat, or duplicated ones, for the subtree root dirfrag.

The fix is that, while exporting a subtree, both the exporter MDS and
the importer MDS hold locks on the scatter locks of the subtree root
inode. The importer MDS tries acquiring the scatter locks when
handling the MExportDirPrep message. If it fails to acquire all the
locks, it sends a NACK to the exporter MDS, and the exporter MDS
cancels the export when it receives the NACK.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
3154ee84fa mds: acquire locks required by exporting dir
Start an internal MDS request to acquire the locks required for
exporting a dir. This is more reliable than using
Locker::rdlock_take_set(), and it also allows acquiring lock types
other than rdlock.

Only use Locker::acquire_locks() in the first stage of exporting the
dir (before freezing the subtree). After the subtree is frozen, to
minimize the time the tree stays frozen, still use 'try lock' to
re-acquire the locks.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Yan, Zheng
3fb408ee01 mds: introduce nonblocking auth pin
Add a parameter to Locker::acquire_locks() to enable nonblocking auth
pinning. If nonblocking mode is enabled and an object that can't be
auth pinned is encountered, Locker::acquire_locks() aborts the
MDRequest instead of waiting.

The nonblocking mode is required in the cases where we want to
acquire locks after auth pinning a directory or freezing a
dirfrag/subtree.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
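A minimal sketch of the nonblocking behavior described above; the enum, struct, and function are invented for illustration and do not match Locker's real signatures:

```cpp
#include <cassert>
#include <vector>

enum class AcquireResult { Acquired, Wait, Abort };

struct Obj { bool can_auth_pin; };

// Blocking mode queues the request as a waiter when an object cannot
// be auth pinned; nonblocking mode aborts the (hypothetical)
// MDRequest instead, so the caller can retry from scratch without
// risking deadlock against a frozen dirfrag/subtree.
AcquireResult acquire_locks(const std::vector<Obj>& objs,
                            bool nonblocking) {
  for (const auto& o : objs) {
    if (!o.can_auth_pin)
      return nonblocking ? AcquireResult::Abort : AcquireResult::Wait;
  }
  return AcquireResult::Acquired;
}
```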
Yan, Zheng
d0df8413fb mds: allow acquiring wrlock and remote wrlock at the same time
A rename may move a file from one dirfrag to another dirfrag of the
same directory inode. If the two dirfrags belong to different auth
MDSes, both MDSes should hold wrlocks on the filelock/nestlock of the
directory inode. If a lock is in both the wrlocks list and the
remote_wrlocks list, the current Locker::acquire_locks() only
acquires the local wrlock. The auth MDS of the source dirfrag then
doesn't hold the wrlock, so the slave request of the operation may
modify the dirfrag after its fragstat/neststat have already been
gathered, corrupting the dirstat/neststat accounting.

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-17 09:37:50 +08:00
Sage Weil
75675bc879 Merge pull request #1251 from ceph/wip-7371
ReplicatedPG: return no data if read size is trimmed to zero

Backport: emperor, dumpling
Reviewed-by: Sage Weil <sage@inktank.com>
2014-02-16 08:26:33 -08:00
Yan, Zheng
1dae27c505 ReplicatedPG: return no data if read size is trimmed to zero
The OSD should return no data if the read size is trimmed to zero by
the truncate_seq/truncate_size check. We can't rely on
ObjectStore::read() to do that, because it reads the entire object
when the 'len' parameter is zero.

Fixes: #7371
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
2014-02-16 22:36:09 +08:00
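The trimming logic described above can be sketched as below; the helper name is invented, and the point is only that a length trimmed to zero must be returned as "no data" rather than passed down as len == 0 (which the object store would interpret as "read the whole object"):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Clip a read [off, off+len) against the object's logical size as
// implied by the truncate_size check; a result of 0 means the caller
// should return an empty reply instead of issuing the read.
uint64_t trim_read_len(uint64_t off, uint64_t len,
                       uint64_t truncate_size) {
  if (off >= truncate_size)
    return 0;                                  // entirely past EOF
  return std::min(len, truncate_size - off);   // clip to logical size
}
```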
Sage Weil
dcb6d02c52 Merge pull request #1223 from ceph/wip-7395
Improve OSD subscription handling

Reviewed-by: Sage Weil <sage@inktank.com>
2014-02-15 22:36:23 -08:00
Sage Weil
8547f43eba Merge pull request #1234 from dachary/wip-format
mon: remove format argument from osd crush dump

Reviewed-by: Sage Weil <sage@inktank.com>
2014-02-15 22:21:57 -08:00
Sage Weil
774125c7a8 osd: set client incarnation for Objecter instance
Each ceph-osd process's Objecter instance has a sequence
of tid's that start at 1.  To ensure these are unique
across all time, set the client incarnation to the
OSDMap epoch in which we booted.

Note that the MDS does something similar (except the
incarnation is actually the restart count for the MDS
rank, since the MDSMap tracks that explicitly).

Backport: emperor
Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:40 -08:00
Sage Weil
0dd1e07194 osd: schedule agent from a priority queue
We need to focus agent attention on those PGs that most need it.  For
starters, full PGs need immediate attention so that we can unblock IO.
More generally, fuller ones will give us the best payoff in terms of
evicted data vs effort expended finding candidate objects.

Restructure the agent queue with priorities.  Quantize evict_effort so that
PGs do not jump between priorities too frequently.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:40 -08:00
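The quantization idea above can be sketched as follows; the function name and step count are invented for illustration, not the actual agent constants:

```cpp
#include <cassert>

// Map the continuous evict_effort in [0, 1] to a small number of
// discrete steps, so a PG does not bounce between queue priorities
// every time its fullness wobbles slightly.
int effort_bucket(double evict_effort, int num_steps = 10) {
  if (evict_effort <= 0.0) return 0;
  if (evict_effort >= 1.0) return num_steps;
  return static_cast<int>(evict_effort * num_steps);
}
```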
Sage Weil
a8129829ce osd/ReplicatedPG: simplify agent_choose_mode
Use a temp variable.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:40 -08:00
Sage Weil
905df2e729 osd/ReplicatedPG: block requests to cache PGs when they are full
If we are full and get a write request to a new object, put the op on a
wait list.  Wake up when the agent frees up some space.

Note that we do not block writes to existing objects.  That would be a
more aggressive strategy, but it is difficult to know up front whether we
will increase the size of the object or not, so we just leave it be.  I
suspect this strategy is "good enough".

Also note that we do not yet prioritize agent attention to PGs that most
need eviction (e.g., those that are full).

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:40 -08:00
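The wait-list behavior described above (block only writes that would create new objects, wake them when the agent frees space) can be sketched roughly as below; the struct and member names are invented, not the real ReplicatedPG fields:

```cpp
#include <cassert>
#include <deque>
#include <string>
#include <utility>

struct FullWaitQueue {
  bool full = false;
  std::deque<std::string> waiters;

  // Returns true if the op was queued (caller should not process it).
  // Writes to existing objects pass through even when full, since we
  // can't easily tell up front whether they grow the object.
  bool maybe_block_write(const std::string& op, bool creates_object) {
    if (full && creates_object) {
      waiters.push_back(op);
      return true;
    }
    return false;
  }
  // Called after the agent evicts some space: requeue all waiters.
  std::deque<std::string> on_space_freed() {
    full = false;
    return std::exchange(waiters, {});
  }
};
```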
Sage Weil
85e06f9d05 osd/ReplicatedPG: redirect reads instead of promoting when full
If the cache pool is full, we are processing a read op, and we would
otherwise promote, redirect instead.  This lets us continue to process the
op without blocking or making the cache pool any more full than it is.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
b92c79d200 osd/ReplicatedPG: use reply_ctx in a few cases
Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
7f854211d3 osd/ReplicatedPG: do not flush omap objects to an EC base pool
The EC pool does not support omap content.  If the caching/tiering agent
encounters such an object, just skip it.  Use the OMAP object_info_t flag
for this.

Although legacy pools will have objects with omap that do not have this
flag set, no *cache* pools yet exist, so we do not need to worry about the
agent running across legacy content.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
8c7bc2e873 osd/ReplicatedPG: do not activate agent unless base pool exists
Paranoia.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
11e4695a77 osd: add OMAP flag to object_info_t
Set a flag if we ever set or update OMAP content on an object.  This gives
us an easy indicator for the cache agent (without actually querying the
ObjectStore) so that we can avoid trying to flush omap to EC pools.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
6581ce9cf8 osd/ReplicatedPG: ignore starvation potential when taking write lock during promote
Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
f617eba034 osd/ReplicatedPG: do not choke on op-less flush OpContexts (from flush)
The agent initiates flush ops that don't have an OpRequest associated
with them.  Make reply_ctx skip the actual reply message instead of
crashing if the flush request gets canceled (e.g., due to a race with
a write).

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
dd3814f3fe osd/ReplicatedPG: do not flush|evict degraded objects
The repop won't work right; we still repair the object before making
any update.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
90457b17e9 ceph_test_rados_api_tier: fix osd pool set json syntax
String, not int.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
230aad7baa osd: clear agent state when PG becomes a replica
Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
c2d16d7247 osd/ReplicatedPG: do not flush or evict hitsets
Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
Sage Weil
e07f987d9f osd/ReplicatedPG: fix evict mode selection for large target
If the target is > 1.0 for some reason (bad configuration, or high slop
value), and we are not yet full, we should be in IDLE mode--not SOME.

Signed-off-by: Sage Weil <sage@inktank.com>
2014-02-15 22:09:39 -08:00
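The mode-selection fix above can be sketched as follows; the enum values mirror the IDLE/SOME/FULL wording of the commit, but the function and its ratio arguments are invented for illustration:

```cpp
#include <cassert>

enum class EvictMode { Idle, Some, Full };

// Compare fullness against the (possibly misconfigured) target
// before deriving any effort: if we are under target, the agent
// idles, even when the target ratio is above 1.0.
EvictMode choose_evict_mode(double dirty_ratio, double target_ratio) {
  if (dirty_ratio >= 1.0) return EvictMode::Full;   // genuinely full
  if (dirty_ratio <= target_ratio) return EvictMode::Idle;
  return EvictMode::Some;
}
```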