To check whether a device is mounted, we need to compare both the
realpath and the plain form of the device against the realpath and plain
forms of the mounted paths. With LVM, two different paths may refer to
the same device.
Signed-off-by: Alfredo Deza <adeza@redhat.com>
* refs/pull/19957/head:
client: fixup parallel calls to ceph_ll_lookup_inode() in NFS FSAL
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
* refs/pull/20373/head:
client: clamp I/O sizes to INT_MAX when we can't return larger values
test: new testcase for ceph_ll_readv and ceph_ll_writev
client: hook up ceph_ll_readv and ceph_ll_writev
client: type safety cleanup for _read and _write codepaths
Reviewed-by: Gregory Farnum <gfarnum@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Greg Farnum <gfarnum@redhat.com>
After removing the last snapshot linked to a parent image,
don't clear the CLONE_CHILD op feature bit if the image HEAD
is still linked to the parent.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
The onreadable completions go through a finisher; add a final event
in that stream that keeps the PG alive while prior events flush.
flush() isn't quite sufficient, since it only waits for the actual
apply to have happened, not for the finisher events to flush as well.
Signed-off-by: Sage Weil <sage@redhat.com>
Note that we don't need to worry about the internal get_omap_iterator
callers (e.g., omap_rmkeyrange) because the apply thread does these
ops sequentially and in order.
Signed-off-by: Sage Weil <sage@redhat.com>
- keep mapper around for duration of import
- flush in-flight requests before tearing it down. This is necessary
because the mapper still uses onreadable.
Signed-off-by: Sage Weil <sage@redhat.com>
This removes a ton of tracking for ReplicatedBackend. ECBackend needs
to keep most of it so that it can track in-flight applies on legacy
peer OSDs. We can remove this post-Nautilus.
Signed-off-by: Sage Weil <sage@redhat.com>
PrimaryLogPG calls it synchronously, on its own, after
submit_transaction. That means the backends no longer need to
track it or call back to it.
Signed-off-by: Sage Weil <sage@redhat.com>
This is no longer needed. FileStore was the only backend doing async
applies, and it now blocks until the apply completes on its own.
Signed-off-by: Sage Weil <sage@redhat.com>
bluestore and memstore are the only backends to implement
open_collection, and both of them can issue a handle immediately
after queue_transaction. Do that!
Signed-off-by: Sage Weil <sage@redhat.com>
Prevent a collection delete + recreate sequence from allowing two
conflicting OpSequencers for the same collection to exist as this
can lead to racing async apply threads.
Signed-off-by: Sage Weil <sage@redhat.com>
Note that this is *slight* overkill in that a *source* object of a clone
will also appear in the applying map, even though it is not being
modified. Given that those clone operations are normally coupled with
another transaction that does write (which is why we are cloning in the
first place) this should not make any difference.
Signed-off-by: Sage Weil <sage@redhat.com>
mon,osd: do not use crush_device_class file to initialize class for new osds
Reviewed-by: Alfredo Deza <adeza@redhat.com>
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Andrew Schoen <aschoen@redhat.com>