close() was never called for the passed-in IoCtx, which
could result in an IoCtx leak if the original IoCtx was a
valid pool context allocated earlier. It is better to close
it here rather than leave the destruction to the caller,
since that keeps the common case cleaner.
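For reference, the intended lifecycle with the public librados C++
API looks roughly like this (the pool name and call site below are
illustrative, not the code touched by this change):

    #include <rados/librados.hpp>

    // Illustrative only: open an IoCtx for a pool and make sure close()
    // is called once it is no longer needed, so the pool context is not
    // leaked.
    int use_pool(librados::Rados &cluster)
    {
      librados::IoCtx ioctx;
      int r = cluster.ioctx_create("rbd", ioctx);  // "rbd" is just an example pool
      if (r < 0)
        return r;
      // ... use ioctx ...
      ioctx.close();  // without this, a valid pool context is leaked
      return 0;
    }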
Signed-off-by: Venky Shankar <vshankar@redhat.com>
For vstart.sh-powered tests, save 9 characters in the path name
by replacing testdir/test- with td/t-. The path length budget is:
60 characters imposed by jenkins
9 characters for src/test
5 characters for td/t-
33 characters left (instead of 24) for the test to create an asok
such as out/client.admin.25327.asok
Moving these files outside of the build directory is a bad idea because
tests should only create/use files within the builddir and not write
outside of this directory. Doing so would make cleanup more complicated
if a test fails and would create other problems as a consequence
(filling up disk space, conflicting directories between runs, etc.).
For ceph-helpers.sh tests, replace testdir with td, saving 5 characters.
This is not strictly necessary but keeps the directory names consistent:
if the developer wants to get rid of all the test leftovers, it is
enough to remove a single directory: td.
Fixes: http://tracker.ceph.com/issues/16014
Signed-off-by: Loic Dachary <loic@dachary.org>
While it is being worked on, because it frequently fails.
Refs: http://tracker.ceph.com/issues/17830
Signed-off-by: Dan Mick <dan.mick@redhat.com>
Signed-off-by: Loic Dachary <loic@dachary.org>
As described in http://tracker.ceph.com/issues/17937, a client with
restricted pool access can still delete files unless a corresponding
MDS path restriction is also in place.
Signed-off-by: David Disseldorp <ddiss@suse.de>
The recent change to do this logic with a file copy (and in src/rgw)
resolved the build problem, but updates to the civetweb submodule were
then no longer reflected in the build.
Move the copy into a custom target which will always source the
current submodule version at build time.
Avoid using the BYPRODUCTS option, as it is not supported in many
older cmake versions (e.g., CentOS 7).
Signed-off-by: Matt Benjamin <mbenjamin@redhat.com>
Log entries don't get added to the log for ECBackend until reads are
done, yet we still want any other requests with the same id to wait.
ReplicatedPG::update_range should consider the projected log as well.
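To illustrate the idea only (hypothetical types, not the actual
ReplicatedPG/ECBackend code): a range check that consults only entries
already written to the log misses updates that are still in flight, so
the projected entries must be scanned too.

    #include <cstdint>
    #include <list>
    #include <string>

    // Hypothetical, simplified stand-ins for pg log entries.
    struct Entry {
      std::string oid;   // object touched by the entry
      uint64_t version;  // increases monotonically
    };

    struct SimpleLog {
      std::list<Entry> log;            // entries already added to the log
      std::list<Entry> projected_log;  // submitted, but not yet logged
                                       // (e.g. EC reads still outstanding)

      // Was this object modified at or after 'since'?  Checking only
      // 'log' would miss in-flight updates, so scan both lists.
      bool modified_since(const std::string &oid, uint64_t since) const {
        for (const auto &e : log)
          if (e.oid == oid && e.version >= since)
            return true;
        for (const auto &e : projected_log)
          if (e.oid == oid && e.version >= since)
            return true;
        return false;
      }
    };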
Signed-off-by: Samuel Just <sjust@redhat.com>
With the PGBackend changes, it's not necessarily the case that
calling simple_opc_submit synchronously updates the SnapMapper.
Thus, we can't rely on being able to just ask the snap mapper
for the next object immediately (we could well loop on the same
one if ECBackend is flushing the pipeline). Instead, update
SnapMapper and the SnapTrimmer to grab N objects at a time.
Additionally, we need to make sure we don't try this again until
all of the previously submitted repops are flushed (a good idea
anyway). To that end, this patch also refactors the SnapTrimmer
machine to be fully explicit about why it's blocked so we can be
sure that we don't queue an async work item unless we really
want to.
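The batching idea, sketched with hypothetical names (this is not the
real SnapMapper interface): fetch up to N objects in one call instead
of re-querying for the next object after every repop.

    #include <cerrno>
    #include <deque>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a snap-to-object index.
    struct ToySnapMapper {
      std::deque<std::string> objs;  // objects still carrying the trimmed snap

      // Return up to 'max' objects at once, so the caller does not have to
      // re-query (and possibly see the same object again) while earlier
      // repops are still flushing through the backend pipeline.
      int get_next_objects(unsigned max, std::vector<std::string> *out) {
        while (out->size() < max && !objs.empty()) {
          out->push_back(objs.front());
          objs.pop_front();
        }
        return out->empty() ? -ENOENT : 0;
      }
    };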
Signed-off-by: Samuel Just <sjust@redhat.com>
Previously, we used an empty transaction to indicate when we
were sending the op to a backfill peer which needs the logs,
but can't run the transaction. I'd like to be able to send
an empty transaction for the rollforward side effect without
it causing the peer to think it missed a backfill op, so
instead, use an explicit flag. Compatibility is handled by
interpreting an old-version encoding with an empty transaction
as having the backfill field filled.
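The compatibility rule, sketched with toy types (not Ceph's actual
encoding): new encodings carry the flag explicitly, while decoding an
older encoding falls back to treating an empty transaction as the
backfill case.

    #include <string>

    // Toy op with a transaction payload and the new explicit flag.
    struct ToyRepOp {
      std::string txn;        // empty txn used to mean "backfill peer"
      bool backfill = false;  // new explicit flag

      void decode(unsigned encoded_version, const std::string &txn_in,
                  bool flag_in) {
        txn = txn_in;
        if (encoded_version >= 2)   // version 2 is an arbitrary example
          backfill = flag_in;       // new encodings carry the flag
        else
          backfill = txn.empty();   // old encodings: infer it from the txn
      }
    };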
Signed-off-by: Samuel Just <sjust@redhat.com>
If the read can be completed immediately, objects_read_async will call
the callback synchronously, which will result in ctx being cleaned up.
Clear pending_async_reads before the call.
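The pattern in general terms (a generic sketch, not the actual
ECBackend/OpContext code): take ownership of the pending list before
issuing the reads, so a synchronous completion that cleans up the
context cannot invalidate state we still need.

    #include <functional>
    #include <list>

    struct ToyCtx {
      std::list<std::function<void()>> pending_async_reads;  // illustrative

      void start_reads() {
        // Move the requests into a local first: if a read completes
        // immediately and its callback cleans up this context, we no
        // longer depend on the member afterwards.
        std::list<std::function<void()>> to_issue;
        to_issue.swap(pending_async_reads);
        for (auto &read : to_issue)
          read();  // may complete (and clean up the ctx) synchronously
      }
    };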
Signed-off-by: Samuel Just <sjust@redhat.com>
Without this change, we might submit new log entries for marking objects
unfound in a way that causes replicas to process them out of order with
respect to pending writes that have lower version numbers. That would be
bad. Instead, add an interface that allows an arbitrary callback to be
called after all previously submitted transactions have committed, but
before any subsequently submitted operations commit.
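Conceptually (a simplified sketch with hypothetical names, not the real
PGBackend interface), this is an ordering barrier keyed on a commit
sequence number:

    #include <cstdint>
    #include <functional>
    #include <map>

    // Toy barrier: a callback registered after submitting sequence S fires
    // once everything with sequence <= S has committed, and (given in-order,
    // one-at-a-time commit notifications) before anything submitted later
    // is reported as committed.
    struct CommitBarrier {
      uint64_t submitted_seq = 0;   // last sequence handed out at submit time
      uint64_t committed_seq = 0;   // highest sequence known to be committed
      std::multimap<uint64_t, std::function<void()>> waiters;

      uint64_t submit() { return ++submitted_seq; }

      void call_when_ordered(std::function<void()> cb) {
        if (committed_seq == submitted_seq)
          cb();                                    // nothing outstanding
        else
          waiters.emplace(submitted_seq, std::move(cb));
      }

      void on_commit(uint64_t seq) {               // called in submit order
        committed_seq = seq;
        auto end = waiters.upper_bound(seq);
        for (auto it = waiters.begin(); it != end; ++it)
          it->second();
        waiters.erase(waiters.begin(), end);
      }
    };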
Signed-off-by: Samuel Just <sjust@redhat.com>
The RMW pipeline means that we don't start committing an update
immediately, so we can't update the log synchronously with
submit_transaction. Thus, in order to pipeline writes, PG/ReplicatedPG
will need to project last_update and abstain from updating info
directly (updating info.stats was the only offender).
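A sketch of the projection idea only (hypothetical, heavily
simplified): advance a projected version at submit time so later ops
can be planned, and move the authoritative last_update only when the
log entries are actually written.

    #include <cstdint>

    // Toy version tracking: 'last_update' moves only when log entries are
    // written, while 'projected_last_update' moves at submit time so ops
    // further down the pipeline can be assigned versions without waiting.
    struct ToyPGInfo {
      uint64_t last_update = 0;            // reflects the logged state
      uint64_t projected_last_update = 0;  // includes in-flight submissions

      uint64_t next_version() { return ++projected_last_update; }

      void log_written(uint64_t v) { last_update = v; }
    };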
Signed-off-by: Samuel Just <sjust@redhat.com>
Implements the RMW pipeline and integrates the cache.
HashInfo now maintains a projected size for use during the planning
phase of the pipeline.
(This doesn't build without the subsequent patches; it's not worth
stubbing out the interfaces.)
Signed-off-by: Samuel Just <sjust@redhat.com>
It was hard to reason about the validity of the IndexedLog internal
pointers and iterators during updates, so this patch cleans that up
a bunch. It also moves responsibility for doing rollbacks into
PGBackend. Finally, it adds support for the new log entry format.
Signed-off-by: Samuel Just <sjust@redhat.com>