Mirror of https://github.com/ceph/ceph, synced 2025-01-19 01:21:49 +00:00
Ceph is a distributed object, block, and file storage platform
Latest commit: e74250d82c

Wido saw a pg go active, but an activate log+info update crossed paths with a
pg_notify info, and the primary overwrote its updated shiny new info with the
stale old info from the replica.  Don't do that.  It causes problems down the
line.  In this case, we got

  osd/OSD.cc: In function 'void OSD::generate_backlog(PG*)':
  osd/OSD.cc:3863: FAILED assert(!pg->is_active())
   1: (ThreadPool::worker()+0x28f) [0x5b08ff]
   2: (ThreadPool::WorkThread::entry()+0xd) [0x4edb8d]
   3: (Thread::_entry_func(void*)+0xa) [0x46892a]
   4: (()+0x69ca) [0x7f889ff249ca]
   5: (clone()+0x6d) [0x7f889f1446cd]

on the replica because it was active but the primary was restarting peering
due to the bad info.
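The race described here follows a common pattern: an asynchronous notification
carrying an older snapshot of a peer's state must not overwrite state the
receiver has since advanced.  The sketch below is not Ceph's actual code;
PGInfo, handle_notify, and the epoch/last_update fields are simplified,
hypothetical stand-ins, meant only to illustrate the "reject stale info" guard
the fix implies.

  #include <cstdint>
  #include <iostream>

  // Hypothetical, simplified stand-ins for the real pg info / notify
  // structures -- illustration only, not Ceph's actual types.
  struct PGInfo {
    uint64_t epoch;        // peering interval this info was generated in
    uint64_t last_update;  // version of the most recent log entry
  };

  struct PG {
    PGInfo info;

    // Handle an info update arriving asynchronously from a replica.
    // The guard is the point: never let an older snapshot clobber the
    // primary's newer state.
    void handle_notify(const PGInfo &incoming) {
      bool stale = incoming.epoch < info.epoch ||
                   (incoming.epoch == info.epoch &&
                    incoming.last_update <= info.last_update);
      if (stale) {
        std::cout << "ignoring stale notify (epoch " << incoming.epoch << ")\n";
        return;            // keep the newer local info
      }
      info = incoming;     // genuinely newer: accept it
    }
  };

  int main() {
    PG primary;
    primary.info = {12, 105};               // freshly activated, updated info
    primary.handle_notify(PGInfo{11, 98});  // stale replica notify: ignored
    primary.handle_notify(PGInfo{12, 110}); // newer info: accepted
    std::cout << "epoch=" << primary.info.epoch
              << " last_update=" << primary.info.last_update << "\n";
    return 0;
  }

In the scenario from the commit message, the stale path is what happened: the
replica's pg_notify, generated before activation, arrived after the primary
had already updated its own info, and accepting it made the primary restart
peering while the replica stayed active.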
Top-level files and directories:

debian
fusetrace
man
qa
src
web
wireshark
.gitignore
AUTHORS
autogen.sh
builddebs.sh
ceph.spec.in
ChangeLog
configure.ac
COPYING
INSTALL
Makefile.am
NEWS
publish.sh
pull.sh
push.sh
README
RELEASE_CHECKLIST
release.sh
sign.sh
Ceph - a scalable distributed file system
-----------------------------------------

Please see http://ceph.newdream.net/ for current info.

----

To build the server daemons and FUSE client,

 $ ./autogen.sh
 $ ./configure
 $ make

or

 $ cd src
 $ make

(Note that the FUSE client will only be built if libfuse is present.)

----

A quick summary of binaries that will be built in src/

daemons:
 cmon -- monitor daemon.  handles cluster state and configuration information.
 cosd -- storage daemon.  stores objects on a given block device.
 cmds -- metadata daemon.  handles file system namespace.
 ceph -- send management commands to the monitor cluster.

userland clients:
 cfuse -- fuse client.
 csyn -- synthetic workload generator client.

tools:
 mkmonfs -- create a fresh monfs (for a new filesystem)
 monmaptool -- create/edit mon map
 osdmaptool -- create/edit osd map
 crushtool -- create/edit crush map

scripts:
 mkcephfs -- cluster mkfs tool
 init-ceph -- init.d start/stop script