Drop this useless helper and call cct->put() directly. The comment that
this can't be used after global_init is no longer relevant as long as
nobody puts a reference they don't own... and nobody owns
g_ceph_context.
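For illustration, a minimal sketch of the resulting call pattern (the
surrounding caller is assumed, not part of this change):

  // Sketch only: a caller releasing a reference it actually owns.
  #include "common/ceph_context.h"

  void release_context(CephContext *cct)
  {
    if (cct)
      cct->put();   // never put() a reference you didn't take --
                    // nobody owns g_ceph_context
  }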
Signed-off-by: Sage Weil <sage@inktank.com>
This was creating a new cluster connection/session per iteration, and
with it several service threads, sockets, and so forth.
Unfortunately, librados leaks like a sieve, starting with CephContext
and ceph::crypto::init(). See #845 and #2067.
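For context, a rough sketch of the intended reuse (run() and the loop
body are made up for illustration): build the librados handle once and
share it across iterations instead of constructing a fresh session each
time.

  // Sketch with the public librados C++ API; the loop body is hypothetical.
  #include <rados/librados.hpp>

  int run(int iterations)
  {
    librados::Rados cluster;
    int r = cluster.init(NULL);        // one CephContext, one set of threads
    if (r < 0)
      return r;
    cluster.conf_read_file(NULL);
    r = cluster.connect();
    if (r < 0)
      return r;
    for (int i = 0; i < iterations; i++) {
      // ... do one iteration against the shared cluster handle ...
    }
    cluster.shutdown();                // tear down once, at the end
    return 0;
  }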
Signed-off-by: Sage Weil <sage@inktank.com>
pinfo.stats might be wrong if we did log-based recovery on the
backfilled portion in addition to continuing backfill.
bug #2750
Signed-off-by: Samuel Just <sam.just@inktank.com>
After a reconnect, the client replays its outstanding ops. The OSD
then immediately responds with success if the op has already
committed (version < ReplicatedPG::get_first_in_progress).
Otherwise, we stick it in waiting_for_ondisk to be replied to when
eval_repop concludes that waitfor_disk is empty.
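Roughly, the replay path now behaves like the sketch below (simplified;
the types and helpers other than get_first_in_progress and
waiting_for_ondisk are illustrative, not the actual ReplicatedPG code):

  #include <cstdint>
  #include <list>
  #include <map>

  using version_t = uint64_t;
  struct Op;                                       // stand-in for the op type

  std::map<version_t, std::list<Op*>> waiting_for_ondisk;

  version_t get_first_in_progress();               // hypothetical accessor
  void reply_with_success(Op *op);                 // hypothetical reply helper

  void handle_replayed_op(Op *op, version_t version)
  {
    if (version < get_first_in_progress()) {
      reply_with_success(op);                      // already committed: ack now
    } else {
      waiting_for_ondisk[version].push_back(op);   // reply when waitfor_disk drains
    }
  }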
Fixes #2508
Signed-off-by: Samuel Just <sam.just@inktank.com>
Try treating the image as new-format if it's not in the old-style
directory (removal from that directory is the last step of old-style
removal). Then if the
image is not found in the new-style directory, -ENOENT will be
returned, preserving the semantics that existed prior to
6f096b6cdc.
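The intended ordering is roughly the sketch below (the two helpers are
hypothetical stand-ins for the old- and new-format removal paths):

  #include <cerrno>
  #include <string>

  // Hypothetical helpers; both return -ENOENT when the image isn't listed.
  int remove_old_format(const std::string &name);
  int remove_new_format(const std::string &name);

  int remove_image(const std::string &name)
  {
    int r = remove_old_format(name);
    if (r != -ENOENT)
      return r;                       // old-style image found (or a real error)
    // Not in the old-style directory: either a new-format image, or an
    // old-style removal that already finished its final step.
    return remove_new_format(name);   // -ENOENT here keeps the old semantics
  }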
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Wait for all replicas to construct the base scrub map before finalizing
the scrub and locking out writes.
Signed-off-by: Mike Ryan <mike.ryan@inktank.com>
When we are signaling the cond to indicate that a notify is complete,
take the appropriate lock. This removes the possibility of a race
that loses our signal. (Such a race would be very hard to hit given the
network round trips involved, but this makes the lock/cond usage
"correct.")
Signed-off-by: Sage Weil <sage@inktank.com>
Break kick() into wake() and _wake() methods, depending on whether the
lock is already held. (The rename ensures that we audit/fix all
callers.)
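The split follows the usual convention that an underscore-prefixed
method assumes the lock is already held; a minimal sketch (not the real
class):

  #include <condition_variable>
  #include <mutex>

  struct Waiter {
    std::mutex lock;
    std::condition_variable cond;
    bool ready = false;

    void _wake() {                          // caller must already hold `lock`
      ready = true;
      cond.notify_all();
    }
    void wake() {                           // takes `lock` itself, then signals
      std::lock_guard<std::mutex> l(lock);
      _wake();
    }
  };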
Signed-off-by: Sage Weil <sage@inktank.com>
Try to verify that we are holding the same mutex that the waiter is
waiting on. Specifically:
* only wait on a single mutex for this cond
* remember which mutex that is
* if we signal and someone has waited, try to make sure we are holding
the mutex as well. (Mutex::is_locked() is insufficient here; it doesn't
ensure that *our* thread took the mutex. It is necessary, though!)
Introduce a sloppy_signal() method that can be used if we actually mean
to signal the cond without holding the proper lock (and, presumably,
don't care about losing a signal).
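A simplified sketch of that bookkeeping, with standard types standing in
for Mutex/Cond:

  #include <cassert>
  #include <condition_variable>
  #include <mutex>

  struct CheckedCond {
    std::condition_variable cond;
    std::mutex *waiter_mutex = nullptr;     // the single mutex this cond is tied to

    void wait(std::unique_lock<std::mutex> &l) {
      assert(!waiter_mutex || waiter_mutex == l.mutex());  // only ever one mutex
      waiter_mutex = l.mutex();                            // remember which one
      cond.wait(l);
    }
    void signal() {
      // Knowing the mutex is locked is necessary but not sufficient to prove
      // *our* thread holds it; requiring a recorded waiter mutex is still worth it.
      assert(waiter_mutex != nullptr);
      cond.notify_all();
    }
    void sloppy_signal() {                  // deliberate: no lock held, and we
      cond.notify_all();                    // accept possibly losing the signal
    }
  };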
Signed-off-by: Sage Weil <sage@inktank.com>
Issue #2701. This info wasn't really used anywhere and we weren't
removing it. It was also sharing the same pool namespace as the
info indexed by bucket name, which is bad.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
New rados command: rados cp <src-obj> [dest-obj]
Requires specifying the source pool; the target pool and locator can also be specified.
The new command preserves object xattrs and omap data.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
We need at least one non-pure virtual method to tell gcc where the
vtable goes. The destructor wins!
libosd.a(libosd_a-ReplicatedPG.o): In function `~PG':
/home/sage/src/ceph/src/osd/PG.h:1367: undefined reference to `vtable for PG'
libosd.a(libosd_a-ReplicatedPG.o):(.rodata._ZTI12ReplicatedPG[typeinfo for ReplicatedPG]+0x10): undefined reference to `typeinfo for PG'
libosd.a(libosd_a-PG.o): In function `PG':
/home/sage/src/ceph/src/osd/PG.cc:85: undefined reference to `vtable for PG'
...
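The fix is the standard "key function" trick: declare at least one
virtual member in the header but define it out of line, so the compiler
emits the vtable and typeinfo in exactly one object file. A minimal
illustration (not the real PG class):

  // pg_like.h -- at least one non-pure virtual, not defined in the header
  struct PGLike {
    virtual ~PGLike();              // the "key function"
    virtual void on_change() = 0;
  };

  // pg_like.cc -- the out-of-line definition anchors the vtable here
  PGLike::~PGLike() {}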
Signed-off-by: Sage Weil <sage@inktank.com>
on_removal is now in ReplicatedPG in order to handle watcher state
and repop state. Additionally, workqueue dequeues are already handled
in OSD::_remove_pg.
Signed-off-by: Samuel Just <sam.just@inktank.com>