Otherwise importing into another pool when the default pool, rbd,
doesn't exist results in an error trying to open the rbd pool.
Reported-by: Sébastien Han <han.sebastien@gmail.com>
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
If the configured osd journal size is > the block device size, warn, but
do not generate an error and abort startup. This makes it safe to have
a default 'osd journal size' value of, say, 1 GB without fear of breaking
existing clusters with smaller journal block devices.
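A rough sketch of the intended behavior, with hypothetical names rather than
the actual journal code:

    // Hypothetical sketch: warn and fall back to the device size instead of
    // aborting when the configured journal size exceeds the block device.
    #include <cstdint>
    #include <iostream>

    uint64_t effective_journal_size(uint64_t configured, uint64_t device) {
      if (configured > device) {
        std::cerr << "WARNING: configured osd journal size " << configured
                  << " exceeds block device size " << device
                  << "; using the device size instead" << std::endl;
        return device;  // warn, but do not abort startup
      }
      return configured;
    }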
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Tommi Virtanen <tv@inktank.com>
RGWRados::delete_obj() was updated in commit
93218aeab7, but we
failed to update the corresponding RGWCache api.
This commit fixes it.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
create_pool should only create the pool. A pool is not a bucket,
so we don't need to attach any attrs to it. Also, no reason
to make it exclusive.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
We now have a cluster root pool that should hold the
cluster params. The cluster params are now read from
this object on startup; if the object does not exist, we
set its defaults and write it.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
This is required so that we handle both src and dest atomically. We
also set the prefetch flag on the src object, so that we read the
first chunk along with its attrs.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
For objects with a manifest that has a tail, we
copy only the head and the manifest, and increase
the reference count on the tail objects.
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>
There's no need to set the default pool in set_pool_image_name - this
is done later, in a way that doesn't ignore --pool if --dest-pool
is not specified.
This means --pool and --image can be used with import, just like
the rest of the commands. Without this change, --dest and --dest-pool
had to be used, and --pool would be silently ignored for rbd import.
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
The permission check examines the PG::pool struct. Instead of adding
additional locking there, just push the check into the op thread. This
makes life a bit simpler for the dispatch thread, which is particularly
hot.
Signed-off-by: Sage Weil <sage@inktank.com>
We perform the same check in PG::do_request(), and it is no longer safe to
do this at enqueue_op() time because we aren't holding PG::_lock (only
PG::_qlock).
Signed-off-by: Sage Weil <sage@inktank.com>
Taking the PG::_lock when queuing each op for the worker threads can introduce
long delays that hold up subsequent operations on other PGs. Use a separate
lock to protect the queuing.
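As an illustration only (hypothetical names, with std::mutex standing in for
the actual PG locking), the split looks roughly like this:

    // Sketch: a lightweight _qlock protects only the op queue, so enqueueing
    // an op never has to wait for the heavyweight per-PG _lock.
    #include <list>
    #include <mutex>

    struct OpRequest;

    struct PGQueueSketch {
      std::mutex _lock;               // protects PG state, held while ops are processed
      std::mutex _qlock;              // protects only op_queue
      std::list<OpRequest*> op_queue;

      void enqueue_op(OpRequest *op) {
        std::lock_guard<std::mutex> q(_qlock);  // cheap, never blocks behind _lock
        op_queue.push_back(op);
      }
    };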
Signed-off-by: Andreas Bluemle <andreas.bluemle@itxperts.de>
Reviewed-by: Sage Weil <sage@inktank.com>
The check 'p->second.last_tx > cutoff' should always be false
since last_tx is periodically updated by OSD::heartbeat().
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Sage Weil <sage@inktank.com>
We need to set truncate_seq when redirecting the newop to CEPH_OSD_OP_WRITE;
otherwise the code that handles CEPH_OSD_OP_WRITE may quietly drop the data.
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Sage Weil <sage@inktank.com>
If two clients created a snapshot at the same time, the one with the
higher snapshot id might be created first, so the lower snapshot id
would be added to the snapshot context and the snapshot seq would be
set to the lower one.
Instead of allowing this to happen, return -ESTALE if the snapshot id
is lower than the currently stored snapshot sequence number. On the
client side, get a new id and retry if this error is encountered.
Backport: argonaut
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
These are all tasks, and expected to exit somewhat quickly,
but e.g. ceph-create-keys has a loop where it waits for mon
to reach quorum, so it might still be in that loop when the
machine is shut down.
Now the weight is only set when adding the OSD to the CRUSH map for
the first time. Once it's there, it's only moved, and the weight is
left untouched.
Change the ceph.conf option for the initial weight from
osd_crush_weight to osd_crush_initial_weight, to reflect this.
If you don't want new OSDs to store data automatically (to minimize
balancing and keep a human in the control loop), you can now
set osd_crush_initial_weight=0.
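Roughly, with hypothetical names rather than the actual OSDMonitor/CrushWrapper
code:

    // Sketch: osd_crush_initial_weight is applied only when the OSD is not
    // yet in the CRUSH map; on later boots only its location is updated.
    #include <map>
    #include <string>

    struct CrushSketch {
      std::map<int, double> weight;          // osd id -> crush weight
      std::map<int, std::string> location;   // osd id -> crush location

      void add_or_move(int osd, double initial_weight, const std::string &loc) {
        if (weight.count(osd) == 0)
          weight[osd] = initial_weight;      // first add: use the initial weight
        location[osd] = loc;                 // existing OSD: weight left untouched
      }
    };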
Closes: #3101
Signed-off-by: Tommi Virtanen <tv@inktank.com>
This blindly tries the Subdomain calling format if the ordinary method
fails. In particular, this works around buckets that present a
PermanentRedirect message.
See bug #3128.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Matthew Wodrich <matthew.wodrich@dreamhost.com>
Should fix bug #2761.
If we are already pushing soid, recovery_ops will only be decremented once for
all current pushes, so only increment recovery_ops if we are not currently
pushing it.
This bug causes us to leak a recovery op and get stuck in backfill.
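A simplified sketch of the accounting (hypothetical types, not the actual
ReplicatedPG code):

    // recovery_ops is decremented once per object when all of its pushes
    // complete, so it must only be incremented on the first push of that object.
    #include <cstdint>
    #include <map>
    #include <set>

    struct RecoverySketch {
      std::map<uint64_t, std::set<int>> pushing;  // soid -> peers being pushed to
      int recovery_ops = 0;

      void start_push(uint64_t soid, int peer) {
        if (pushing.count(soid) == 0)
          recovery_ops++;          // count the object once, not once per peer
        pushing[soid].insert(peer);
      }

      void finish_pushes(uint64_t soid) {
        pushing.erase(soid);
        recovery_ops--;            // single decrement for all pushes of soid
      }
    };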
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reorder the snapdir logic and ctx->at_version adjustments prior to filling
in the object_info_t and user_versions and all that stuff. Adjust
at_version after appending the log entry (so that it points to the next
position/version we will write at.. culminating in the actual user
event).
The user log entry contains the request id, which will be used
by replay ops to put themselves in the correct place in the
waiting_for_commit/ack maps. Thus, the repop needs to be tagged
with the same version as the log entry with the request id.
Thus, the request id bearing log entry should be the last in
the log entry vector.
This should fix #3072, wherein a replay which should wait on
the repop tagged as version '36 will instead wait on '35.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>