big items
- quotas
  - accounting
  - enforcement
- rados cow/snapshot infrastructure
  - mds snapshots
- mds security enforcement
  - client, user authentication
- cas
- meta vs data crush rules
- use libuuid

userspace client
- handle session STALE
- rm -rf on fragmented directory
- time out caps, wake up waiters on renewal
- link caps with mds session
- validate dn leases
  - fix lease validation to check session ttl
- clean up ll_ interface, now that we have leases!
- clean up client mds session vs mdsmap behavior?

kernel client
- flush caps on sync, fsync, etc.
  - do we need to block?
- timeout mds session close on umount
- deal with CAP_RDCACHE properly: invalidate cache pages?
- procfs/debugfs
  - adjust granular debug levels too
  - should we be using debugfs?
  - a dir for each client instance (client###)?
  - hooks to get mds, osd, monmap epoch #s
- clean up messenger vs ktcp
  - hook into sysfs?
- vfs
  - can we use dentry_path(), if it gets merged into mainline?
- io / osd client
  - osd ack vs commit handling. hmm!

client
- clean up client mds session vs mdsmap behavior?

osdmon
- monitor needs to monitor some osds...

crush
- more efficient failure when all/too many osds are down
- allow forcefeed for more complicated rule structures. (e.g. make force_stack a list< set<int> >)
- "knob" bucket

pgmon
- monitor pg states, notify on out?
- watch osd utilization; adjust overload in cluster map

mon
- paxos needs to clean up old states.
- some sort of tester for PaxosService...
- osdmon needs to lower-bound old osdmap versions it keeps around?

mds
- dir frags
  - fix replay (don't want dir frozen, pins, etc.?)
  - fix accounting
- proper handling of cache expire messages during rejoin phase?
  -> i think cache expires are fine; the rejoin_ack handler just has to behave if rejoining items go missing
- try_remove_unlinked_dn thing
- rerun destro trace against latest, with various journal lengths
- lease length heuristics
  - mds lock last_change stamp?
- handle slow client reconnect (i.e. after mds has gone active)
- fix reconnect/rejoin open file weirdness
- get rid of C*Discover objects for replicate_to .. encode to bufferlists directly?
- can we get rid of the dirlock remote auth_pin weirdness on subtree roots?
- anchor_destroy needs to xlock linklock.. which means it needs a Mutation wrapper?
  - ... when it gets a caller.. someday..
- make truncate faster with a trunc_seq, attached to objects as attributes?
- osd needs a set_floor_and_read op for safe failover/STOGITH-like semantics.
- could mark dir complete in EMetaBlob by counting how many dentries are dirtied in the current log epoch in CDir...
- FIXME how to journal/store root and stray inode content?
  - in particular, i care about dirfragtree.. get it on rejoin?
  - and dir sizes, if i add that... also on rejoin?
- efficient stat for single writers
- add FILE_CAP_EXTEND capability bit

journaler
- fix up for large events (e.g. imports)
- use set_floor_and_read for safe takeover from possibly-not-quite-dead otherguy.
- should we pad with zeros to avoid splitting individual entries?
  - make it a g_conf flag?
  - have to fix reader to skip over zeros (either <4 bytes for size, or zeroed sizes; see the sketch below)
  - need to truncate at detected (valid) write_pos to clear out any other partial trailing writes

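A minimal sketch of that zero-skipping read loop, under assumptions the notes
above don't pin down: each entry is a 4-byte length followed by its payload,
and padding only runs up to a fixed boundary ("period" below).  read_entry and
its arguments are illustrative, not the real Journaler interface.

#include <stdint.h>
#include <string.h>
#include <string>

// Sketch: scan 'buf' from 'pos', skipping zero padding, returning one entry.
// Returns false at a partial trailing write (caller truncates write_pos there).
bool read_entry(const std::string& buf, uint64_t& pos, uint64_t period,
                std::string& entry)
{
  while (pos + 4 <= buf.size()) {
    if (pos % period + 4 > period) {
      // <4 bytes left before the boundary: the writer padded with zeros; skip ahead.
      pos = (pos / period + 1) * period;
      continue;
    }
    uint32_t len;
    memcpy(&len, buf.data() + pos, 4);
    if (len == 0) {
      // zeroed size: also padding; skip to the next boundary.
      pos = (pos / period + 1) * period;
      continue;
    }
    if (pos + 4 + len > buf.size())
      return false;              // partial trailing write
    entry.assign(buf, pos + 4, len);
    pos += 4 + len;
    return true;
  }
  return false;
}
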
fsck
- fsck.ebofs
- online mds fsck?
  - object backpointer attrs to hint catastrophic reconstruction?

objecter
- maybe_request_map should set a timer event to periodically re-request.
- transaction prepare/commit?
- read+floor_lockout

osd/rados
- how does an admin intervene when a pg needs a dead osd to repeer?
- a more general fencing mechanism? per-object granularity isn't usually a good match.
- consider implications of nvram writeahead logs
- flag missing log entries on crash recovery --> WRNOOP? or WRLOST?
- efficiently replicate clone() objects
- fix heartbeat wrt new replication
- mark residual pgs obsolete ???
- rdlocks
- optimize remove wrt recovery pushes
- report crashed pgs?

messenger
- fix messenger shutdown.. we shouldn't delete messenger, since the caller may be referencing it, etc.

simplemessenger
- close idle connections

objectcacher
- merge clean bh's
- ocacher caps transitions vs locks
- test read locks

reliability
- heartbeat vs ping?
- osdmonitor, filter

ebofs
- btrees
  - checksums
  - dups
  - sets
- optionally scrub deallocated extents
- clone()
- map ObjectStore
- verify proper behavior of conflicting/overlapping reads of clones
- combine inodes and/or cnodes into same blocks
- fix bug in node rotation on insert (and reenable)
- fix NEAR_LAST_FWD (?)
- awareness of underlying software/hardware raid in allocator so that we write full stripes _only_.
  - hmm, that's basically just a large block size.

- rewrite the btree code!
  - multithreaded
  - eliminate nodepools
  - allow btree sets
  - allow arbitrary embedded data?
  - allow arbitrary btrees
  - allow root node(s?) to be embedded in onode, or wherever.
  - keys and values can be uniform (fixed-size) or non-uniform. (rough header sketch below)
    - fixed size (if any) is a value in the btree struct.
      - negative indicates bytes of length value? (1 -> 255 bytes, 2 -> 65535 bytes, etc.?)
    - non-uniform records preceded by length.
  - keys sorted via a comparator defined in btree root.
    - lexicographically, by default.
  - goal
    - object btree key->value payload, not just a data blob payload.
    - better threading behavior.
    - with transactional goodness!
  - onode
    - object attributes.. as a btree?
    - blob stream
    - map stream.
      - allow blob values.

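One way the sizing/comparator rules above could be laid out, purely as a hedged
illustration; none of these names (btree_header, key_size, comparator) exist in
ebofs.

#include <stdint.h>

// Illustration of the proposed btree root/header.  A positive key_size or
// val_size means fixed-size records; a negative value -n could mean records
// are preceded by an n-byte length field (-1 -> up to 255 bytes, -2 -> up to
// 65535 bytes, etc.).
struct btree_header {
  int16_t  key_size;      // fixed key size, or negative per the encoding above
  int16_t  val_size;      // fixed value size, or negative per the encoding above
  uint16_t comparator;    // which comparator orders the keys; 0 = lexicographic
  uint64_t root;          // root node address; could instead live in the onode
} __attribute__((packed));
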
remaining hard problems
- how to cope with file size changes and read/write sharing


snapshot notes --

typedef __u64 snapid_t;
#define MAXSNAP ((snapid_t)(-2))
#define NOSNAP  ((snapid_t)(-1))

mds
- break mds hierarchy into snaprealms
  - keep per-realm inode xlists, so that breaking a realm is O(size(realm))

struct snap {
  snapid_t snapid;
  string name;
  utime_t ctime;
};

struct snaprealm {
  map<snapid_t, snap> snaps;
  snaprealm *parent;
  list<snaprealm> children;
  xlist<CInode*> inodes_with_caps;   // used for efficient realm splits
};

- link client caps to realm, so that snapshot creation is O(num_child_realms*num_clients) (see the sketch below)
- keep per-realm, per-client record with cap refcount, to avoid traversing realm inode lists looking for caps

struct CapabilityGroup {
  int client;
  xlist<Capability*> caps;
  snaprealm *realm;
};

in snaprealm,
  map<int, CapabilityGroup*> client_cap_groups;  // used to identify clients who need snap notifications

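A hedged sketch of why the per-realm cap groups matter: creating a snapshot
then only touches each (realm, client) pair, not every inode with caps.
notify_client and the recursion shape are illustrative, building on the
structs above rather than on existing MDS code.

// Sketch: walk this realm and its children, sending one notification per
// client that holds caps in each realm; O(num_child_realms * num_clients),
// independent of how many inodes actually have caps.
void notify_snap_created(snaprealm *realm, snapid_t snapid)
{
  for (map<int, CapabilityGroup*>::iterator p = realm->client_cap_groups.begin();
       p != realm->client_cap_groups.end();
       ++p)
    notify_client(p->first, realm, snapid);       // hypothetical helper

  for (list<snaprealm>::iterator q = realm->children.begin();
       q != realm->children.end();
       ++q)
    notify_snap_created(&*q, snapid);
}
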
- what is snapid?
  - can we get away with it _not_ being ordered?
    - for osds.. yes.
    - for mds.. may make the cdentry range info tricky!
  - so.. let's assign it via the monitor state machine, for now. simple and safe, though a bit slow. (rough allocator sketch below)
    - for now, state is simply last_snapid

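A minimal sketch of that monitor-side allocator, assuming only that the
replicated state is last_snapid; SnapidAllocator and its methods are
hypothetical and stand in for whatever PaxosService plumbing would really be
used, using the snapid_t typedef above.

// Hypothetical allocator: the only state that needs to be agreed on is
// last_snapid.  Handing out an id means proposing last_snapid+1 and only
// replying once that value has been committed by the monitor cluster.
class SnapidAllocator {
  snapid_t last_snapid;                 // committed state
public:
  SnapidAllocator() : last_snapid(0) {}

  snapid_t allocate() {
    snapid_t next = last_snapid + 1;
    // ... propose and commit 'next' via the monitor's paxos here ...
    last_snapid = next;                 // becomes the new committed state
    return next;                        // safe to hand to the requester
  }
};
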
metadata
- fix up inode_map to key off vinodeno.. or have a second map for non-zero snapids..
  - no, just key off vinodeno_t, and make it (keying sketch below)

      CInode *get_inode(inodeno_t ino, snapid_t sn=NOSNAP);

struct vinodeno_t {
  inodeno_t ino;
  snapid_t snapid;
};

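A hedged sketch of that keying, building on the vinodeno_t struct above; the
ordering operator and map layout are assumptions, not existing MDCache code.

// Sketch: order vinodeno_t by (ino, snapid) so it can key the inode map.
// NOSNAP is the largest snapid_t value, so the live inode sorts last for a
// given ino.
inline bool operator<(const vinodeno_t& a, const vinodeno_t& b) {
  return a.ino < b.ino || (a.ino == b.ino && a.snapid < b.snapid);
}

map<vinodeno_t, CInode*> inode_map;

CInode *get_inode(inodeno_t ino, snapid_t sn = NOSNAP) {
  vinodeno_t key = { ino, sn };
  map<vinodeno_t, CInode*>::iterator p = inode_map.find(key);
  return p == inode_map.end() ? 0 : p->second;
}
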
- dentry: replace dname -> ino, rino+rtype with
      (dname, csnap, dsnap) -> vino, vino+rtype   (where the valid range is [csnap, dsnap))
  - live dentries have dsnap = NOSNAP. kept in a separate map:
      map<string, CDentry*> items;
      map<pair<string,snapid_t>, CDentry*> vitems;
    - or?
      CDentry *lookup(string dname, snapid_t sn=NOSNAP);   // rough sketch below
  - track vitem count in fragstat.
  - when vitem count gets large, add pointer in fnode indicating vitem range stored in separate dir object.

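A hedged sketch of how that lookup could consult both maps, assuming vitems is
keyed by (dname, dsnap), each snapshotted dentry records its csnap, and ranges
don't overlap; the CDir members and CDentry::csnap follow the notes above, not
existing code.

// Sketch: live dentries sit in 'items'; snapshotted dentries sit in 'vitems'
// keyed by (dname, dsnap), each valid over [csnap, dsnap).
CDentry *CDir::lookup(string dname, snapid_t sn)
{
  if (sn == NOSNAP) {
    map<string, CDentry*>::iterator q = items.find(dname);
    return q == items.end() ? 0 : q->second;
  }

  // the first vitem with this dname and dsnap > sn is the only one that can cover sn
  map<pair<string,snapid_t>, CDentry*>::iterator p =
    vitems.upper_bound(make_pair(dname, sn));
  if (p != vitems.end() && p->first.first == dname && p->second->csnap <= sn)
    return p->second;

  // otherwise fall back to the live dentry, which covers [csnap, NOSNAP)
  map<string, CDentry*>::iterator q = items.find(dname);
  if (q != items.end() && q->second->csnap <= sn)
    return q->second;
  return 0;
}
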
client
- also keep caps linked into snaprealm list
- current snapid (lineage) for each snaprealm
- attach snapid (lineage) to each dirty page
  - can we cow the page if it's dirty but from a different realm?
    ...hmm probably not, but we can flush it in write_begin, just like when we do a read to make it clean

osd
- pass realm lineage with osd op/capability
- tag each non-live object with the set of realms it is defined over
- osdmap has sparse map of extant snapids. incrementals are simple rmsnapid, and max_snapid increase

rados snapshots
- integrate revisions into ObjectCacher?
- clean up oid.rev vs op.rev in osd+osdc

- attr.crev is the rev we were created in.
- oid.rev=0 is "live".  defined for attr.crev <= rev.
- otherwise, defined for attr.crev <= rev < oid.rev (i.e. oid.rev is the deletion time; upper bound, non-inclusive.)

- write|delete is tagged with op.rev
  - if attr.crev != op.rev
    - we clone to oid.rev=rev (clone keeps old crev)
    - tag clone with the list of revs it is defined over
    - change live attr.crev=rev.
    - apply update
- read is tagged with op.rev
  - if 0, we read from 0 (if it exists).
  - otherwise we choose the object rev based on op.rev vs oid.rev, and then verify attr.crev <= op.rev.
    - walk backwards through snap lineage? i.e. if lineage = 1, 5, 30, 77, 100(now), and op.rev = 30, try 100, 77. (see the read sketch below)
    - or, tag the live (0) object with an attr listing which revs exist (and keep it around at size 0 if it doesn't logically exist)
      - no, the dir lookup on old revs will be in a cached btrfs btree dir node (no inode needed until we have a hit)

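A hedged sketch of that read-side rule, assuming we can enumerate the existing
object revs and their crev attrs; choose_read_rev and the map argument are
illustrative, not osd code.

#include <stdint.h>
#include <map>

typedef uint64_t rev_t;

// Sketch: 'objects' maps each existing object rev (oid.rev, with the live
// object listed under the current rev) to its attr.crev.  Walk backwards from
// the newest until one covers op_rev, i.e. crev <= op_rev < oid.rev.
bool choose_read_rev(const std::map<rev_t, rev_t>& objects, rev_t op_rev,
                     rev_t& read_rev)
{
  for (std::map<rev_t, rev_t>::const_reverse_iterator p = objects.rbegin();
       p != objects.rend(); ++p) {
    if (p->first <= op_rev)
      break;                     // this object (and all older ones) ends before op_rev
    if (p->second <= op_rev) {   // crev <= op_rev: this object covers it
      read_rev = p->first;       // e.g. lineage 1,5,30,77,100 with op_rev=30:
      return true;               // try 100, then 77
    }
  }
  return false;                  // nothing covers op_rev
}
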
btrfs rev de-duping
- i.e. when sub_op_push gets an object
  - query checksums
  - userland will read+verify ranges are actually a match?
  - punch hole (?)
  - clone file range (not entire file)

interface

$ ls -al .snapshot      # list snaps. show both symbolic names, and timestamp names? (symbolic -> timestamp symlinks, maybe)
$ mkdir .snapshot/blah  # create snap
$ rmdir .snapshot/blah  # remove it