Since the pool was introduced only in Luminous dev and RC releases, we
can probably upgrade without needing to bump the struct version
numbers. This needs a pending release notes entry.
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Specify that we don't have access to the reshard pool when encountering
EACCES.
TODO: get rgw's name and add it to the log message
Fixes: http://tracker.ceph.com/issues/20289
Signed-off-by: Karol Mroz <kmroz@suse.de>
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Deleted objects may still be on-disk after merging a log that includes
deletes, so adjust the asserts accordingly.
A case like:
980'1192 (972'1186) modify foo
--- osd restart ---
999'1196 (980'1192) delete foo
1003'1199 (0'0) modify foo
1015'1208 (1003'1199) delete foo
Would trigger the assert(miter->second.have == oi.info) since the
'have' version would be reset to 0'0.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This appears now that deletes are not processed inline from the PG log
- a clone that is missing only on a peer (due to being deleted) would
not stop rollback from promoting the clone, resulting in an assert
failure when the promotion tried to write to the missing object on the
replica.
This only affects cache tiering due to the dependence on the
MAP_SNAP_CLONE flag in find_object_context() - missing_oid was not
being checked for being recovered, unlike the target oid for the op
(in do_op()).
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
Deletes are the same for EC and replicated pools, so add logic for
handling MOSDPGRecoveryDelete[Reply] to the base PGBackend class.
Within PrimaryLogPG, add parallel paths for starting deletes,
recover_missing() and prep_object_replica_deletes(), and update the
local and global recovery callbacks to cope with the lack of an
ObjectContext after a delete has been performed.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
There's no source needed for deleting an object, so don't keep track
of this. Update is_readable_with_acting/is_unfound, and add an
is_deleted() method to be used later.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
This will track deletes that were in the pg log and still need to be
performed during recovery. Note that with these deleted objects we may
not have an accurate 'have' version, since the object may have already
been deleted locally, so tolerate this when examining divergent entries.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
The existing BackfillRemove message has no reply, and PushOps have too
much logic that would need changing to accommodate deletions.
Signed-off-by: Josh Durgin <jdurgin@redhat.com>
For early clusters, if there isn't an active manager, we eventually
want to trigger a health warning by rolling over the mgrmap epoch. We
don't want to do that if we still have no active/available manager
after that. Fix by checking ever_had_active_mgr here.
Signed-off-by: Sage Weil <sage@redhat.com>
According to AWS S3 in this document[1], an ACL can have up to 100
grants.
If the number of grants is larger than 100, S3 returns the following:
400
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>MalformedACLError</Code><Message>The XML you provided was not well-formed or did not validate against our published schema</Message><RequestId>10EC67824572C378</RequestId><HostId>AWL3NnQChs/HCfOTu5MtyEc9uzRuxpYMhmvXQry2CovCcuxO2/tMqY1zGoWOur86ipQt3v/WEiA=</HostId></Error>
Now if the number of ACL grants requested is larger than the maximum
allowed, rgw returns the following:
400
<?xml version="1.0" encoding="UTF-8"?><Error><Code>MalformedACLError</Code><Message>The request is rejected, because the acl grants number you requested is larger than the maximum 101 grants allowed in an acl.</Message><BucketName>222</BucketName><RequestId>tx000000000000000000017-00596b5fad-101a-default</RequestId><HostId>101a-default-default</HostId></Error>
The maximum number of ACL grants can be configured with the
configuration option:
rgw_acl_grants_max_num
[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
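For illustration, a minimal ceph.conf sketch; the [client.rgw] section
name and the value of 100 (matching the AWS limit above) are
assumptions, not part of this change:

  [client.rgw]
  # Hypothetical example: cap ACLs at 100 grants, as AWS S3 does.
  rgw_acl_grants_max_num = 100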
Signed-off-by: Enming Zhang <enming.zhang@umcloud.com>
Now rgw returns the following:
400
<?xml version="1.0" encoding="UTF-8"?><Error><Code>MalformedXML</Code><Message>The XML you provided was larger than the maximum 2048 bytes allowed.</Message><BucketName>333</BucketName><RequestId>tx000000000000000000009-00596a1331-101a-default</RequestId><HostId>101a-default-default</HostId></Error>
Signed-off-by: Enming Zhang <enming.zhang@umcloud.com>
Use communicate() instead of wait() to avoid a possible deadlock.
Quoting the Popen.wait() documentation:
> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data. Use communicate() to
> avoid that.
Also print the stdout and stderr using LOG.warn() if the command
fails.
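A minimal sketch of this pattern, assuming an illustrative run_cmd()
helper and a pre-configured LOG (neither name is from the actual
change):

  import logging
  import subprocess

  LOG = logging.getLogger(__name__)

  def run_cmd(args):
      # communicate() drains stdout/stderr while waiting, so the child
      # cannot block on a full OS pipe buffer the way it can when only
      # wait() is called.
      proc = subprocess.Popen(args,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.PIPE)
      stdout, stderr = proc.communicate()
      if proc.returncode != 0:
          LOG.warn('%r failed: stdout=%r, stderr=%r', args, stdout, stderr)
      return proc.returncode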
Signed-off-by: Kefu Chai <kchai@redhat.com>