We need to set truncate_seq when redirecting the newop to CEPH_OSD_OP_WRITE;
otherwise the code that handles CEPH_OSD_OP_WRITE may quietly drop the data.
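A minimal sketch of the idea (names and types are illustrative stand-ins, not
the actual OSD structures): the redirected write must carry the original op's
truncate_seq forward instead of leaving it at zero, since a write whose
truncate_seq is older than the object's can have its data silently dropped.

  #include <cstdint>

  struct ExtentOp {
    uint64_t offset = 0;
    uint64_t length = 0;
    uint64_t truncate_seq = 0;   // a stale truncate_seq lets the write path drop data
  };

  // Build the redirected write-style op from the original one.
  ExtentOp redirect_to_write(const ExtentOp &orig, uint64_t new_len) {
    ExtentOp newop;
    newop.offset = 0;
    newop.length = new_len;
    newop.truncate_seq = orig.truncate_seq;  // the fix: don't lose the seq
    return newop;
  }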
Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Signed-off-by: Sage Weil <sage@inktank.com>
If two clients created a snapshot at the same time, the one with the
higher snapshot id might be created first, so the lower snapshot id
would be added to the snapshot context and the snapshot seq would be
set to the lower one.
Instead of allowing this to happen, return -ESTALE if the snapshot id
is lower than the currently stored snapshot sequence number. On the
client side, get a new id and retry if this error is encountered.
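A minimal sketch of the server-side check (names are illustrative, not the
actual class method): snapshot ids below the stored sequence number are
rejected, so a racing creator has to retry with a fresh id.

  #include <cerrno>
  #include <cstdint>

  struct SnapHeader {
    uint64_t snap_seq = 0;   // highest snapshot id accepted so far
  };

  int snap_add(SnapHeader &header, uint64_t snap_id) {
    if (snap_id < header.snap_seq)
      return -ESTALE;        // client allocates a new id and retries
    header.snap_seq = snap_id;
    return 0;
  }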
Backport: argonaut
Signed-off-by: Josh Durgin <josh.durgin@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
These are all tasks, and they are expected to exit fairly quickly, but
e.g. ceph-create-keys has a loop where it waits for the mon to reach
quorum, so it might still be in that loop when the machine is shut down.
Now the weight is only set when adding the OSD to the CRUSH map for
the first time. Once it's there, it's only moved, and the weight is
left untouched.
Change the ceph.conf option for the initial weight from
osd_crush_weight to osd_crush_initial_weight, to reflect this.
If you don't want new OSDs to store data automatically (to minimize
balancing and keep a human in the control loop), you can now
set osd_crush_initial_weight=0.
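For example, assuming a standard ceph.conf layout, the option would go in the
[osd] section:

  [osd]
      osd crush initial weight = 0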
Closes: #3101
Signed-off-by: Tommi Virtanen <tv@inktank.com>
This blindly tries the Subdomain calling format if the ordinary method
fails. In particular, this works around buckets that present a
PermanentRedirect message.
See bug #3128.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Matthew Wodrich <matthew.wodrich@dreamhost.com>
Should fix bug #2761.
If we are already pushing soid, recovery_ops will only be decremented once for
all current pushes, so only increment recovery_ops if we are not currently
pushing it.
This bug causes us to leak a recovery op and get stuck in backfill.
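A minimal sketch of the accounting rule (names are illustrative, not the
actual recovery code): the counter is only bumped when a push starts for an
object that has no push in flight, matching the single decrement at the end.

  #include <set>
  #include <string>

  struct RecoveryState {
    std::set<std::string> pushing;   // objects with an in-flight push
    int recovery_ops = 0;
  };

  void start_push(RecoveryState &rs, const std::string &soid) {
    if (rs.pushing.count(soid) == 0)  // not already pushing this object
      rs.recovery_ops++;
    rs.pushing.insert(soid);
    // ... issue the actual push ...
  }

  void on_pushes_done(RecoveryState &rs, const std::string &soid) {
    rs.pushing.erase(soid);
    rs.recovery_ops--;                // decremented once for all pushes of soid
  }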
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Reorder the snapdir logic and ctx->at_version adjustments so they happen
before we fill in the object_info_t, user_versions, and related state.
Adjust at_version after appending each log entry, so that it points to the
next position/version we will write at, culminating in the actual user
event.
The user log entry contains the request id, which replay ops will use to
put themselves in the correct place in the waiting_for_commit/ack maps.
Thus, the repop needs to be tagged with the same version as the log entry
carrying the request id, which in turn means the request-id-bearing log
entry should be the last one in the log entry vector.
This should fix #3072, wherein a replay that should wait on
the repop tagged as version '36 would instead wait on '35.
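A minimal sketch of the ordering (all names and version numbers are
illustrative, not the actual PG code): each log entry is appended at the
current at_version, at_version is bumped afterwards, and the repop is tagged
with the version of the request-id-bearing entry appended last.

  #include <cstdint>
  #include <string>
  #include <vector>

  struct LogEntry {
    uint64_t version;
    std::string reqid;   // empty for internal entries (e.g. snapdir removal)
  };

  struct OpContext {
    uint64_t at_version;              // next version we will write at
    std::vector<LogEntry> log;
  };

  uint64_t append_log_entry(OpContext &ctx, const std::string &reqid) {
    ctx.log.push_back({ctx.at_version, reqid});
    return ctx.at_version++;          // at_version again names the next slot
  }

  int main() {
    OpContext ctx{36, {}};
    append_log_entry(ctx, "");                        // internal entry at '36
    uint64_t v = append_log_entry(ctx, "client:123"); // user event last, at '37
    (void)v;  // tag the repop with v so a replayed op waits on this version
  }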
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
Instead of 'osd crush set NNN osd.NNN weight loc...', make the second
osd.NNN argument optional, and allow either NNN or osd.NNN to specify the
osd id. This makes the usage much more sane, but maintains backward
compatibility.
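For example, with this change both of the following forms are accepted (the
weight and location arguments shown are just placeholders):

  ceph osd crush set 0 1.0 host=node1
  ceph osd crush set osd.0 1.0 host=node1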
Signed-off-by: Sage Weil <sage@inktank.com>
Create an item in the tree with the given weight, or move it (without
touching the weight) if it is already present.
Closes: #3101
Signed-off-by: Sage Weil <sage@inktank.com>
Create an item if it doesn't exist, with the specified weight. If it is
already in the tree, move it, but do not adjust the weight.
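A minimal sketch of the create-or-move semantics (illustrative, not the
actual CrushWrapper interface): a new item gets the supplied weight, while an
existing item is only relocated and keeps whatever weight it already has.

  #include <map>
  #include <string>

  struct TreeSketch {
    std::map<int, double> weights;          // item id -> weight
    std::map<int, std::string> locations;   // item id -> containing bucket

    void create_or_move_item(int id, double weight, const std::string &loc) {
      if (weights.count(id) == 0)
        weights[id] = weight;               // first insertion: use given weight
      locations[id] = loc;                  // otherwise only the location moves
    }
  };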
Signed-off-by: Sage Weil <sage@inktank.com>
Apparently we weren't setting header_changed to true in the case where we
handled CEPH_RGW_UPDATE and cur_disk.exists was false. In practice this
meant that when an object was created but the index complete call failed
(or timed out), calling rgw_dir_suggest_changes() fixed the entry, but we
did not account for the new entry. This would lead to negative stats on
the bucket index.
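A minimal sketch of the missing accounting (names are illustrative, not the
actual cls_rgw code): when the suggested update refers to an entry that does
not exist on disk, the entry is added to the stats and the header is marked
changed so the updated stats get written back.

  #include <cstdint>

  struct BucketStats {
    int64_t num_entries = 0;
    int64_t total_size = 0;
  };

  void account_new_entry(BucketStats &stats, int64_t entry_size,
                         bool &header_changed) {
    stats.num_entries += 1;
    stats.total_size += entry_size;
    header_changed = true;   // previously left false in this branch
  }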
Signed-off-by: Yehuda Sadeh <yehuda@inktank.com>