Since this is often looked up by snap_id anyway, snap_lock is a natural fit for protecting it.
This lets us avoid taking md_lock in many places.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
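A minimal sketch of the idea, assuming a simplified, hypothetical ImageCtxSketch with std::shared_mutex standing in for Ceph's RWLock: snapshot metadata keyed by snap_id is read under snap_lock, so the lookup no longer needs md_lock.

```cpp
// Hypothetical sketch, not librbd's actual ImageCtx: snapshot metadata keyed
// by snap_id is read under snap_lock instead of md_lock.
#include <cerrno>
#include <cstdint>
#include <map>
#include <shared_mutex>
#include <string>

struct SnapInfo {
  std::string name;
  uint64_t size = 0;
};

struct ImageCtxSketch {
  mutable std::shared_mutex snap_lock;      // stands in for RWLock snap_lock
  std::map<uint64_t, SnapInfo> snap_info;   // keyed by snap_id

  // Look up a snapshot's size by snap_id while holding snap_lock for read.
  int get_snap_size(uint64_t snap_id, uint64_t *out_size) const {
    std::shared_lock<std::shared_mutex> l(snap_lock);
    auto it = snap_info.find(snap_id);
    if (it == snap_info.end())
      return -ENOENT;
    *out_size = it->second.size;
    return 0;
  }
};
```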
There's no need to explicitly close the ioctx. Doing so may cause
problems when the Images using it are destroyed afterwards. Just let
normal cleanup at the end of the block take care of it in the correct
order.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This simplifies locking by obviating the NULL checks. We no longer
need md_lock to protect these accesses. We can use object_map_lock
instead, to make sure no one reads an object map while it's being
updated.
Keep track of whether the object map is enabled for a given snapshot
internally. In each public method, check this state, and automatically
set it correctly when refreshing the object map. During snapshot
removal, unconditionally try to remove the object map object, to guard
against bugs that leak objects and to stay consistent with image
removal.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
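A rough sketch of the locking scheme described above (hypothetical ObjectMapSketch, with std::shared_mutex in place of Ceph's RWLock; not the real librbd ObjectMap): readers take object_map_lock shared, refresh takes it exclusive and records whether the map is enabled.

```cpp
// Hypothetical sketch: readers take object_map_lock shared, updates take it
// exclusive, and an internal "enabled" flag replaces the NULL checks that
// used to be guarded by md_lock.
#include <cstdint>
#include <mutex>
#include <shared_mutex>
#include <utility>
#include <vector>

class ObjectMapSketch {
 public:
  // Refresh decides whether the map is enabled for the given snapshot.
  void refresh(bool object_map_feature_enabled, std::vector<uint8_t> state) {
    std::unique_lock<std::shared_mutex> l(object_map_lock);
    enabled = object_map_feature_enabled;
    object_map = std::move(state);
  }

  // Each public method checks the enabled state before touching the map.
  bool object_may_exist(uint64_t object_no) const {
    std::shared_lock<std::shared_mutex> l(object_map_lock);
    if (!enabled || object_no >= object_map.size())
      return true;            // fall back to assuming the object exists
    return object_map[object_no] != 0;
  }

 private:
  mutable std::shared_mutex object_map_lock;
  bool enabled = false;
  std::vector<uint8_t> object_map;
};
```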
Detect the case of a crashed lock owner by waiting for up to 30 seconds
for an async request progress message from the leader. If a progress
message isn't received, restart the request (and possibly take ownership
of the lock).
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
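A hedged sketch of the timeout bookkeeping (hypothetical AsyncRequestSketch; the real ImageWatcher plumbing is more involved): remember when the last progress notification arrived and restart the request if none has been seen for 30 seconds.

```cpp
// Hypothetical sketch: track the last progress notification and decide
// whether the async request should be restarted.
#include <chrono>

class AsyncRequestSketch {
 public:
  void handle_progress() { last_progress = Clock::now(); }

  // True once no progress has been reported for 30 seconds, e.g. because the
  // lock owner crashed before completing the request.
  bool should_restart() const {
    return Clock::now() - last_progress > std::chrono::seconds(30);
  }

 private:
  using Clock = std::chrono::steady_clock;
  Clock::time_point last_progress = Clock::now();
};
```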
Replace the two Context threading classes used within
ImageWatcher with a facade to orchestrate the scheduling
and canceling of Context task callbacks.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
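A minimal sketch of such a facade (hypothetical TaskFacade, with std::function standing in for Context; not the actual librbd class): callers schedule a keyed callback and can later cancel or replace it without dealing with the threading classes directly.

```cpp
// Hypothetical facade sketch: schedule, replace, and cancel keyed callbacks
// behind one interface instead of juggling separate threading classes.
#include <functional>
#include <map>
#include <mutex>
#include <string>
#include <utility>

class TaskFacade {
 public:
  using Task = std::string;

  // Schedule (or replace) the callback associated with a task key.
  void queue(const Task &task, std::function<void()> callback) {
    std::lock_guard<std::mutex> l(lock);
    tasks[task] = std::move(callback);
  }

  // Cancel a pending task; returns true if it had not yet run.
  bool cancel(const Task &task) {
    std::lock_guard<std::mutex> l(lock);
    return tasks.erase(task) > 0;
  }

  // Run and clear all pending callbacks (a real facade would drive this from
  // a finisher/timer thread).
  void run_pending() {
    std::map<Task, std::function<void()>> to_run;
    {
      std::lock_guard<std::mutex> l(lock);
      to_run.swap(tasks);
    }
    for (auto &p : to_run)
      p.second();
  }

 private:
  std::mutex lock;
  std::map<Task, std::function<void()>> tasks;
};
```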
Ensure that no in-flight maintenance operations (resize, flatten) are
still running when the exclusive lock is released. The lock will be
released when transitioning to a snapshot, closing the image, or
cooperatively when another client requests the lock.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
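One way to picture this, as a hedged sketch (hypothetical AsyncOpTrackerSketch, not librbd's actual tracking): count in-flight operations and have the lock-release path block until the count drains to zero.

```cpp
// Hypothetical sketch: an in-flight operation counter that the lock-release
// path waits on before giving up the exclusive lock.
#include <condition_variable>
#include <mutex>

class AsyncOpTrackerSketch {
 public:
  void start_op() {
    std::lock_guard<std::mutex> l(lock);
    ++in_flight;
  }

  void finish_op() {
    std::lock_guard<std::mutex> l(lock);
    if (--in_flight == 0)
      cond.notify_all();
  }

  // Called before releasing the exclusive lock (snapshot transition, image
  // close, or cooperative handoff to another client).
  void wait_for_ops() {
    std::unique_lock<std::mutex> l(lock);
    cond.wait(l, [this] { return in_flight == 0; });
  }

 private:
  std::mutex lock;
  std::condition_variable cond;
  int in_flight = 0;
};
```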
If the async operation associated with a flush request completes,
only complete the flush contexts if no previous operations are
still in flight. Otherwise, move the flush contexts to an older
in-flight async operation.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
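A hedged sketch of the bookkeeping (hypothetical FlushTrackerSketch with std::function callbacks; the real code ties this to AIO completions): when an operation finishes, its flush contexts either run, if it was the oldest in-flight operation, or migrate to the next older one.

```cpp
// Hypothetical sketch: each in-flight async op carries flush callbacks; on
// completion they either run (no older ops outstanding) or move to the
// operation immediately preceding it.
#include <functional>
#include <iterator>
#include <list>
#include <utility>
#include <vector>

struct AsyncOpSketch {
  std::vector<std::function<void()>> flush_contexts;
};

class FlushTrackerSketch {
 public:
  // Operations are tracked oldest-first.
  std::list<AsyncOpSketch> in_flight;

  void finish_op(std::list<AsyncOpSketch>::iterator op) {
    auto contexts = std::move(op->flush_contexts);
    if (op != in_flight.begin()) {
      // A previous (older) operation is still in flight: hand the flush
      // contexts to it so they only complete once it has finished too.
      auto prev = std::prev(op);
      prev->flush_contexts.insert(prev->flush_contexts.end(),
                                  contexts.begin(), contexts.end());
      in_flight.erase(op);
      return;
    }
    in_flight.erase(op);
    for (auto &c : contexts)
      c();  // no older operations remain; complete the flush contexts now
  }
};
```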
add_snap() updates the ImageCtx snapshot metadata in memory and also
reads the flags as part of the object map snapshot. Both of these
operations require holding snap_lock.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This is another step towards eliminating md_lock from the writeback
path. Almost all the places that use ImageCtx->flags already use
snap_lock, so there's no need to create a new lock. For the rest,
add a helper, test_flags(), that acquires the lock, similar to
test_features().
This also makes sure we look up the flags of the snapshot we're
operating on, instead of those for head.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
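A minimal sketch of a test_flags()-style helper (hypothetical ImageFlagsSketch; simplified types, with std::shared_mutex instead of Ceph's RWLock): it takes snap_lock itself and checks the flags of the snapshot in use rather than head.

```cpp
// Hypothetical sketch: a flags-test helper that acquires snap_lock and reads
// the flags of the currently open snapshot, falling back to head.
#include <cstdint>
#include <map>
#include <shared_mutex>

struct ImageFlagsSketch {
  mutable std::shared_mutex snap_lock;
  uint64_t flags = 0;                       // flags for head
  std::map<uint64_t, uint64_t> snap_flags;  // flags per snap_id
  uint64_t snap_id = 0;                     // currently open snapshot (0 = head)

  // Returns true if all bits in 'mask' are set for the snapshot in use.
  bool test_flags(uint64_t mask) const {
    std::shared_lock<std::shared_mutex> l(snap_lock);
    uint64_t f = flags;
    auto it = snap_flags.find(snap_id);
    if (it != snap_flags.end())
      f = it->second;
    return (f & mask) == mask;
  }
};
```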
A number of these assertions used to be here, but were removed when
converting to RWLocks, before RWLock had is_[w]locked() methods.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This gets the appropriate locks, and checks the currently open
snapshot instead of head. Looking up features by snap_id prepares us
for the future addition or removal of features (e.g. an object map)
over the life of an image.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
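A small sketch of the per-snapshot feature lookup idea (hypothetical SnapFeaturesSketch; the caller is assumed to already hold snap_lock): each snapshot records its own feature bits, so features such as an object map can come and go over the image's life without head and snapshots disagreeing.

```cpp
// Hypothetical sketch: features recorded per snapshot, looked up by snap_id.
// Caller is assumed to hold snap_lock for read.
#include <cstdint>
#include <map>

struct SnapFeaturesSketch {
  uint64_t head_features = 0;
  std::map<uint64_t, uint64_t> snap_features;  // snap_id -> features

  uint64_t get_features(uint64_t snap_id) const {
    auto it = snap_features.find(snap_id);
    return it != snap_features.end() ? it->second : head_features;
  }
};
```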
This was being protected by md_lock, but that has become too coarse
since it is used to prevent writes from proceeding while flushing
caches for a snapshot. With the addition of ObjectMap and
ImageWatcher, writeback could try to acquire md_lock again, leading to
a deadlock.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
We were passing in a NULL data structure, probably in an attempt to
let things clean up -- but our implementation just returns early when
the passed-in value is NULL, so drop it for clarity.
Signed-off-by: Greg Farnum <gfarnum@redhat.com>
- move argparse to ceph-common
- split out rados, rbd, and cephfs bindings into their own packages
- keep python-ceph as a metapackage
Signed-off-by: Sage Weil <sage@redhat.com>
python-ceph contains various header files/bindings for several
libraries. This patch creates *-devel packages for all the
libraries separately and provides the compatibility layer for
the split.
Signed-off-by: Boris Ranto <branto@redhat.com>
The RBD large_write test case was taking multiple minutes to
run under a Fedora 21 VM. Replaced the million+ random number
generator calls with a single call to os.urandom. The test
now completes within seconds.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The tests were sending invalid responses back to ImageWatchers
(missing the result code), which had the potential to allow the
lock to be acquired sooner than the test was expecting, since
ImageWatcher would assume the lack of a response code meant no
clients owned the exclusive lock and would retry as fast as
possible.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The C_DoWatchError context did not verify whether or not the
watch was cancelled prior to invoking the callback. This
resulted in sporadic crashes when reconnect errors bubbled
up to destroyed objects.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
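A hedged sketch of the fix's shape (hypothetical WatchHandleSketch, not the actual librados code path): the error callback rechecks, under the watcher's lock, whether the watch was already cancelled before calling into it.

```cpp
// Hypothetical sketch: recheck the cancelled flag before invoking the error
// handler, so a reconnect error cannot reach an already-destroyed watcher.
#include <functional>
#include <mutex>

struct WatchHandleSketch {
  std::mutex lock;
  bool cancelled = false;
  std::function<void(int)> error_handler;

  void do_watch_error(int err) {
    std::lock_guard<std::mutex> l(lock);
    if (cancelled)
      return;                 // watcher already torn down; don't call into it
    error_handler(err);
  }
};
```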
We are using the connection features to populate the features field in the
OSDMap, but this is the *intersection* of mon and osd features, not the
osd features. Fix this by explicitly specifying the features in
MOSDBoot.
Fixes: #10911
Backport: giant, firefly
Signed-off-by: Sage Weil <sage@redhat.com>
Specifically, the object_copy_data_t encoding changed such that the reply
encoding is dependent on features; if we proxy such a read to an old
OSD it will use *our* features to encode instead of the original OSD's.
This effectively conditionally reverts 8e145e08ed
when the cluster features aren't all present.
Fixes: #10788
Signed-off-by: Sage Weil <sage@redhat.com>
This method is O(n) and called from a few places for each IO operation.
Cache the value since it does not change over the lifetime of a single
epoch. Invalidate on apply_incremental() and decode.
Signed-off-by: Sage Weil <sage@redhat.com>
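A hedged sketch of the caching pattern (hypothetical MapCacheSketch; method names are illustrative, not the actual OSDMap API): compute the O(n) value lazily, reuse it for each IO, and invalidate it wherever the map changes.

```cpp
// Hypothetical sketch: cache an O(n) aggregate for the lifetime of an epoch
// and invalidate it whenever the map is rebuilt or incrementally updated.
#include <cstdint>
#include <vector>

class MapCacheSketch {
 public:
  uint64_t get_up_osd_features() {
    if (!cached_valid) {
      cached_value = compute();   // O(n) walk over all OSDs
      cached_valid = true;
    }
    return cached_value;
  }

  void apply_incremental(/* ... */) { cached_valid = false; /* ... */ }
  void decode(/* ... */)            { cached_valid = false; /* ... */ }

 private:
  uint64_t compute() const {
    uint64_t features = ~0ull;
    for (uint64_t f : osd_features)
      features &= f;
    return features;
  }

  std::vector<uint64_t> osd_features;
  bool cached_valid = false;
  uint64_t cached_value = 0;
};
```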
Since the post-snap create header update runs asynchronously
in a finalizer callback, it's possible that the snapshot
is not immediately visible. Also, if a proxied snap create
message is replayed, it's possible for the client to receive
an EEXIST error.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>