If mds_wanted in add_cap is more than we actually want (for
example, on a getattr that races with a cap wanted release),
requeue a cap check. We don't want to release immediately if we
can help it, because something like readdir would prematurely
release caps we're holding on to for good measure.
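A minimal userspace sketch of the idea (not the actual fs/ceph
code; fake_inode, queue_cap_check and handle_add_cap are made-up
names): if the MDS-side wanted mask covers more than we actually
want, queue a delayed cap check rather than releasing on the spot.

    /* Illustrative only: all names are hypothetical, not fs/ceph's. */
    #include <stdbool.h>
    #include <stdio.h>

    #define CAP_FILE_RD 0x1
    #define CAP_FILE_WR 0x2

    struct fake_inode {
        unsigned wanted;      /* caps we actually want right now */
        bool check_queued;    /* delayed cap check pending? */
    };

    static void queue_cap_check(struct fake_inode *ci)
    {
        ci->check_queued = true;   /* stand-in for queueing delayed work */
    }

    static void handle_add_cap(struct fake_inode *ci, unsigned mds_wanted)
    {
        if (mds_wanted & ~ci->wanted) {
            /*
             * The MDS thinks we want more than we do (e.g. a getattr
             * raced with a wanted release).  Don't release anything
             * immediately; just requeue a check for later.
             */
            queue_cap_check(ci);
        }
    }

    int main(void)
    {
        struct fake_inode ci = { .wanted = CAP_FILE_RD };

        handle_add_cap(&ci, CAP_FILE_RD | CAP_FILE_WR);
        printf("delayed check queued: %d\n", ci.check_queued);
        return 0;
    }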
Track flushing caps globally.
We still need to add a mechanism to reflush if the mds session
drops, or if the cap gets migrated. We will know the
i_flushing_caps are ours if we are (or were) the i_auth_cap.
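A rough model of the global flush tracking, with invented names and
structures rather than the real kernel ones: each in-flight flush
remembers which session it was sent to, so a session drop can find
and resend the affected flushes.

    /* Illustrative only; names and structures are invented. */
    #include <stdio.h>

    #define MAX_FLUSHING 16

    struct fake_inode {
        unsigned flushing_caps;   /* dirty cap state in flight */
        int flushing_mds;         /* session the flush was sent to */
    };

    /* global list of everything with a cap flush in flight */
    static struct fake_inode *flushing[MAX_FLUSHING];
    static int nr_flushing;

    static void start_flush(struct fake_inode *ci, unsigned caps, int mds)
    {
        if (nr_flushing == MAX_FLUSHING)
            return;
        ci->flushing_caps = caps;
        ci->flushing_mds = mds;
        flushing[nr_flushing++] = ci;
    }

    /* if a session drops, resend any flushes that were aimed at it */
    static void reflush_after_session_drop(int dead_mds)
    {
        for (int i = 0; i < nr_flushing; i++)
            if (flushing[i]->flushing_mds == dead_mds)
                printf("reflush caps %#x\n", flushing[i]->flushing_caps);
    }

    int main(void)
    {
        struct fake_inode a = { 0 }, b = { 0 };

        start_flush(&a, 0x3, 0);
        start_flush(&b, 0x1, 1);
        reflush_after_session_drop(0);
        return 0;
    }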
Maintain dirty and sync inode lists. When an inode is dirtied,
make sure it is on the VFS dirty lists (so the VFS will write it
out for us). When we flush caps, add the inode to the sync list.
On sync_fs, wait for the sync list to drain. (The lists are kept
separate to avoid starving a sync with newly dirtied inodes.)
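A toy model of the two lists (real code would use the kernel's list
and locking primitives; everything here is simplified): dirtied
inodes sit on one list, in-flight flushes on the other, and sync_fs
only waits on the latter, so later dirtiers cannot starve it.

    /* Toy model of the separate dirty and sync lists. */
    #include <stdio.h>

    #define MAX 16

    static int dirty_list[MAX], nr_dirty;  /* dirtied, flush not sent */
    static int sync_list[MAX], nr_sync;    /* flush sent, no ack yet */

    static void mark_dirty(int ino)
    {
        if (nr_dirty < MAX)
            dirty_list[nr_dirty++] = ino;
    }

    static void flush_caps(void)
    {
        /* everything dirty becomes an in-flight flush */
        while (nr_dirty)
            sync_list[nr_sync++] = dirty_list[--nr_dirty];
    }

    static void flush_ack(void)
    {
        if (nr_sync)
            nr_sync--;
    }

    static void sync_fs(void)
    {
        flush_caps();
        /*
         * Wait only for the sync list to drain; anything dirtied
         * after this point goes on dirty_list and cannot starve
         * this sync.
         */
        while (nr_sync)
            flush_ack();     /* stand-in for waiting on MDS acks */
        printf("sync complete\n");
    }

    int main(void)
    {
        mark_dirty(1);
        mark_dirty(2);
        sync_fs();
        return 0;
    }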
Remove old unmount cap check cruft: once the sb goes read-only,
we know all the dirty cap data has been flushed.
Create .ceph/mds%d/{stray/,journal}. Restructure mds creation,
startup, and shutdown procedures to create, import (and populate),
and export the per-mds directory.
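For illustration only, the per-mds paths being created look like
this (a plain snprintf sketch, not the MDS code):

    #include <stdio.h>

    /* Build the per-mds paths named above; purely illustrative. */
    static void per_mds_paths(int whoami)
    {
        char stray[64], journal[64];

        snprintf(stray, sizeof(stray), ".ceph/mds%d/stray", whoami);
        snprintf(journal, sizeof(journal), ".ceph/mds%d/journal", whoami);
        printf("%s\n%s\n", stray, journal);
    }

    int main(void)
    {
        per_mds_paths(0);
        return 0;
    }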
If the client reasserts caps from replayed requests, the inodes don't yet
exist during the reconnect (or even rejoin) stage. So, if we don't find
the inode, keep the reconnect info around. When processing a replayed
request in CInode::encode_inodestat, set wanted/issued appropriately.
This is incomplete. We really need something to ensure we deal with
replayed requests before new requests are handled, and on a cluster-wide
basis, since requests may involve slave requests to other mds's.
We should also clean up reconnects left unclaimed after all replays are
complete, and somehow inform the client when the cap is officially
nonexistent so it gets EBADF.
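A small sketch of the stashing scheme described above, with made-up
names (pending_reconnect, remember_reconnect, claim_reconnect) rather
than the real MDS structures: unclaimed reconnect cap state is kept
keyed by ino and consumed once a replayed request instantiates the
inode.

    /* Illustrative only; names do not match the real MDS code. */
    #include <stdio.h>

    #define MAX_RECONNECTS 16

    /* cap state a client claimed for an inode we haven't seen yet */
    struct pending_reconnect {
        unsigned long ino;
        unsigned wanted, issued;
        int used;
    };

    static struct pending_reconnect pending[MAX_RECONNECTS];

    static void remember_reconnect(unsigned long ino, unsigned wanted,
                                   unsigned issued)
    {
        for (int i = 0; i < MAX_RECONNECTS; i++)
            if (!pending[i].used) {
                pending[i].ino = ino;
                pending[i].wanted = wanted;
                pending[i].issued = issued;
                pending[i].used = 1;
                return;
            }
    }

    /*
     * Called once a replayed request finally instantiates the inode;
     * returns 1 and hands back the caps if a claim was stashed.
     */
    static int claim_reconnect(unsigned long ino, unsigned *wanted,
                               unsigned *issued)
    {
        for (int i = 0; i < MAX_RECONNECTS; i++)
            if (pending[i].used && pending[i].ino == ino) {
                *wanted = pending[i].wanted;
                *issued = pending[i].issued;
                pending[i].used = 0;
                return 1;
            }
        return 0;
    }

    int main(void)
    {
        unsigned want, have;

        remember_reconnect(0x1000, 0x3, 0x1);
        if (claim_reconnect(0x1000, &want, &have))
            printf("claimed: wanted %#x issued %#x\n", want, have);
        return 0;
    }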