blech:
- EMetaBlob should return 'expired' if the items have higher versions
  (and are thus described by a newer journal entry)

mds
- journal+recovery
  - EImportMap
  - EMetaBlob replay
- import/export
  - how to keep other MDS nodes from goofing up the import/export notify stuff
  - recovery vs import/export
- idempotent ops
  - unlink
  - open+create
  - file capabilities i/o
  - link
  - rename
- mds failure
  - mdsmon map updates, mds states
    - active, down, recovering, stopping
  -
- should auth_pins really go to the root?
  - FIXME: auth_pins on importer versus import beneath an authpinned region?

journaler
- fix up for large events (e.g. imports)

- paxos for monitor
- lnet?
- crush
  - xml import/export?
  - crush tools

== todo

1- pipelining writes?
2- intervening reads?

inode ops
  utime -- no concurrency issues
  chown/chmod -- should lock
  truncate -- should lock
1-> no. multiple-process concurrency on a single inode is not important.
2-> maybe... intervening stats? probably not important.

directory ops: parent inode mtime + dirent xlocks?
  mknod
  open+create
  symlink
  unlink
  rmdir
  rename
1-> yes. but mtime updates are independent (mtime is monotonically increasing), so it's easy.
2-> yes.

--> so, let's make the file/hard wrlock exclusive.

locks
  namespace
    path pins -- read lock
    dentry xlock -- write lock
  inode
    hard/file rd start/stop -- read lock
    hard/file wr start/stop -- write lock

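A minimal sketch of this lock taxonomy, using std::shared_mutex as a stand-in (the real MDS lock classes are richer; all names here are illustrative):

  // Illustrative only: the lock taxonomy above mapped onto
  // std::shared_mutex; the real MDS lock classes are richer than this.
  #include <shared_mutex>

  // namespace: path pins are shared (read), dentry xlock is exclusive
  struct DentryLock {
    std::shared_mutex m;
    void path_pin()   { m.lock_shared(); }
    void path_unpin() { m.unlock_shared(); }
    void xlock()      { m.lock(); }
    void xunlock()    { m.unlock(); }
  };

  // inode: one of these each for the hard and file lock sets; rd
  // start/stop take it shared, wr start/stop exclusive (per the
  // conclusion above that the file/hard wrlock be exclusive)
  struct InodeLock {
    std::shared_mutex m;
    void rd_start() { m.lock_shared(); }
    void rd_stop()  { m.unlock_shared(); }
    void wr_start() { m.lock(); }
    void wr_stop()  { m.unlock(); }
  };
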
- integrate revisions into ObjectCacher
- clean up oid.rev vs op.rev in osd+osdc

rados+ebofs
- purge replicated writes from cache. (with exception of partial tail blocks.)

rados paper todo
- better experiments
- berkeleydb objectstore?
- flush log only in response to subsequent read or write?
- better-behaved recovery
- justify use of splay.
- dynamic replication
- snapshots

rados snapshots
- attr.crev is the rev we were created in.
- oid.rev=0 is "live". defined for attr.crev <= rev.
- otherwise, defined for attr.crev <= rev < oid.rev (i.e. oid.rev is the upper bound, non-inclusive.)

- write|delete is tagged with op.rev
  - if attr.crev < op.rev
    - we clone to oid.rev=rev (clone keeps the old crev)
    - change live attr.crev=rev.
  - apply update
- read is tagged with op.rev
  - if 0, we read from 0 (if it exists).
  - otherwise we choose the object rev based on op.rev vs oid.rev, and then verify attr.crev <= op.rev.

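A minimal sketch of the rev bookkeeping above, under the stated rules (names like attr_crev/oid_rev/op_rev are illustrative, not the actual osd/osdc types):

  // Illustrative only: the rev rules above, with hypothetical names
  // rather than the actual osd/osdc types.
  #include <cassert>
  #include <cstdint>

  struct ObjectVersion {
    uint64_t attr_crev;  // rev this object (or clone) was created in
    uint64_t oid_rev;    // 0 = live; else non-inclusive upper bound
  };

  // a clone with oid_rev=r is defined for [attr_crev, r);
  // the live object (oid_rev=0) is defined for [attr_crev, inf)
  inline bool defined_for(const ObjectVersion &o, uint64_t rev) {
    if (o.oid_rev == 0) return o.attr_crev <= rev;
    return o.attr_crev <= rev && rev < o.oid_rev;
  }

  // write|delete tagged with op_rev: clone first if the live object
  // predates op_rev; the clone keeps the old crev, live takes the new one
  inline bool write_needs_clone(ObjectVersion &live, uint64_t op_rev) {
    assert(live.oid_rev == 0);
    if (live.attr_crev < op_rev) {
      // caller clones the live object to oid_rev=op_rev, then:
      live.attr_crev = op_rev;
      return true;
    }
    return false;  // already created in this rev; update in place
  }
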
- how to get usage feedback to monitor?

- change messenger entity_inst_t
  - no more rank! make it a uniquish nonce?

- clean up mds caps release in exporter
- figure out client failure modes
- clean up messenger failure modes.
  - add connection retry.

mds recovery
- multiple passes?
  1- establish import/export map
  ?-
  2- replay inode, dir, dentry updates
- single pass
  - each event needs to embed inode for trace up to the import
- second stage will reconcile cached items with other active mds nodes
  - cached items will be shared with the primary to repopulate its non-dirty cache
- query clients for their state too?
  - mds must journal list of clients with whom we share state?

journaler
- should we pad with zeros to avoid splitting individual entries?
  - make it a g_conf flag?
  - have to fix reader to skip over zeros (either <4 bytes for size, or zeroed sizes)
- need to truncate at detected (valid) write_pos to clear out any other partial trailing writes

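A sketch of how the reader's zero-skip might look, assuming [4-byte size][payload] framing and zero padding up to a fixed boundary (names are illustrative, not the real Journaler code):

  // Illustrative only: skipping zero padding on replay, assuming entries
  // are framed as [4-byte size][payload] and writers pad to 'boundary'
  // with zeros.
  #include <cstdint>
  #include <cstring>
  #include <vector>

  // returns the offset of the next entry header at or after 'pos':
  // if fewer than 4 bytes remain before the boundary, or the 4-byte
  // size field reads as zero, jump to the next boundary
  uint64_t skip_zero_padding(const std::vector<uint8_t> &buf,
                             uint64_t pos, uint64_t boundary) {
    uint64_t to_boundary = boundary - (pos % boundary);
    if (to_boundary < 4)
      return pos + to_boundary;  // <4 bytes left: must be padding
    uint32_t size;
    std::memcpy(&size, &buf[pos], 4);
    if (size == 0)
      return pos + to_boundary;  // zeroed size: rest is padding
    return pos;                  // a real entry starts here
  }
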
monitor
?- monitor user lib that handles resending, redirection of mon requests.
- elector
/- organize monitor store

osdmon
- distribute
- recovery: store elector epochs with maps..
- monitor needs to monitor some osds...
- monitor pgs, notify on out
- watch osd utilization; adjust overload in cluster map

mdsmon

osd/rados
- efficiently replicate clone() objects
- pg_num instead of pg_bits
- flag missing log entries on crash recovery --> WRNOOP? or WRLOST?
- consider implications of nvram writeahead logs
- fix heartbeat wrt new replication
- mark residual pgs obsolete ???
- rdlocks
- optimize remove wrt recovery pushes
- pg_bit/pg_num changes
- report crashed pgs?

messenger
/- share same tcp socket for sender and receiver
/- graceful connection teardown
- close idle connections
- generalize out a transport layer?
  - eg reliable tcp for most things, connectionless unreliable datagrams for monitors?
  - or, aggressive connection closing on monitors? or just max_connections and an lru?
- osds: forget idle client addrs

objecter

objectcacher
- ocacher caps transitions vs locks
- test read locks

reliability
- heartbeat vs ping
- osdmonitor, filter

ebofs
- verify proper behavior of conflicting/overlapping reads of clones
- test(fix) sync()
- combine inodes and/or cnodes into same blocks
- allow btree sets instead of maps
- eliminate nodepools
- nonblocking write on missing onodes?
- fix bug in node rotation on insert (and reenable)
- fix NEAR_LAST_FWD (?)
- journaling? in NVRAM?
- metadata in nvram? flash?

bugs/stability
- figure out weird 40ms latency with double log entries

remaining hard problems
- how to cope with file size changes and read/write sharing
- mds STOGITH...

crush
- more efficient failure when all/too many osds are down
- allow forcefeed for more complicated rule structures. (e.g. make force_stack a list< set<int> >)

mds
- distributed client management
- anchormgr
  - 2pc
  - independent journal?
    - distributed?
- link count management
  - also 2pc
- chdir (directory opens!)
- rewrite logstream
  - clean up
  - be smart about rados ack vs reread
  - log locking? root log object
  - trimming, rotation

- efficient stat for single writers
- lstat vs stat
- add FILE_CAP_EXTEND capability bit
- only share osdmap updates with clients holding capabilities
- delayed replica caps release... we need to set a timer event? (and cancel it when appropriate?)
- finish hard links!
  - reclaim danglers from inode file on discover...
- fix rename wrt hard links
- interactive hash/unhash interface
- test hashed readdir
- make logstream.flush align itself to stripes

- carefully define/document frozen wrt dir_auth vs hashing

client
- mixed lazy and non-lazy io will clobber each other's caps in the buffer cache

- test client caps with meta exports
- some heuristic behavior to consolidate caps to inode auth
- client will re-tx anything it needed to say upon rx of new mds notification (?)

MDS TODO
- fix hashed readdir: should (optionally) do a lock on dir namespace?
- fix hard links
  - they mostly work, but they're fragile
- sync clients on stat
  - will need to ditch 10s client metadata caching before this is useful
- implement truncate
- implement hashed directories
- statfs?
- rewrite journal + recovery
- figure out online failure recovery
- more distributed fh management?
- btree directories (for efficient large directories)
- consistency points/snapshots

- fix MExportAck and others to use dir+dentry, not inode
  (otherwise this all breaks with hard links.. altho it probably needs reworking already?)

why qsync could be wrong (for very strict POSIX): varying mds -> client message transit or processing times.
- mds -> 1,2 : qsync
- client1 writes at byte 100
- client1 -> mds : qsync reply (size=100)
- client1 writes at byte 300
- client1 -> client2 (outside channel)
- client2 writes at byte 200
- client2 -> mds : qsync reply (size=200)
-> stat results in size 200, even though the size was already 300; the reported value was never the true maximum at any single point in time.
-> for a correct result, we need to _stop_ client writers while gathering metadata.

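The shape of that fix, as an illustrative sketch only (revoke_and_wait and regrant are hypothetical stand-ins, not the real MDS-client protocol):

  // Illustrative only: a stop-the-writers size gather.
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Writer { uint64_t size_seen; };

  uint64_t gather_max_size(std::vector<Writer> &writers) {
    // 1) revoke write caps; each client acks once in-flight writes drain
    // revoke_and_wait(writers);   // hypothetical barrier
    // 2) with writers quiesced, per-client sizes form a consistent cut
    uint64_t max_size = 0;
    for (const auto &w : writers)
      max_size = std::max(max_size, w.size_seen);
    // 3) re-grant write caps
    // regrant(writers);           // hypothetical
    return max_size;
  }
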
SAGE:

- string table?

- hard links
  - fix MExportAck and others to use dir+dentry, not inode
    (otherwise this all breaks with hard links.. altho it probably needs reworking already!)

- do real permission checks?

CLIENT TODO

- statfs

ISSUES

- discover
  - soft: authority selectively replicates, or sets a 'forward' flag in reply
  - hard: authority always replicates (eg. discover for export)
  - forward flag (see soft)
  - error flag (if file not found, etc.)
  - [what was i talking about?] make sure waiters are properly triggered, either upon dir_rep update, or (empty!) discover reply

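A hypothetical sketch of those discover reply flags (the real reply message differs):

  // Hypothetical sketch of the discover reply flags described above.
  #include <cstdint>

  struct DiscoverReply {
    enum : uint32_t {
      FLAG_FORWARD = 1 << 0,  // soft: authority didn't replicate; retry
                              // at the node it points us to
      FLAG_ERROR   = 1 << 1,  // e.g. file not found
    };
    uint32_t flags = 0;
    // ... replicated inode/dir/dentry payload when neither flag is set
    bool is_forward() const { return flags & FLAG_FORWARD; }
    bool is_error()   const { return flags & FLAG_ERROR; }
  };
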
DOCUMENT
- cache, distributed cache structure and invariants
- export process
- hash/unhash process

TEST
- hashing
  - test hash/unhash operation
  - hash+export: encode list of replicated dir inodes so they can be discovered before the import is processed.
  - test nauthitems (wrt hashing?)

IMPLEMENT

- smarter balancing
  - popularity calculation and management is inconsistent/wrong.
  - does it work?

- dump active config in run output somewhere