Mirror of https://github.com/ceph/ceph
Synced 2025-01-01 08:32:24 +00:00

Merge pull request #28813 from smanjara/wip-user-rename-working

rgw: bucket mv, bucket chown and user rename utilities

Reviewed-by: Casey Bodley <cbodley@redhat.com>
Reviewed-by: Matt Benjamin <mbenjamin@redhat.com>

Commit 17fc695047
@@ -35,6 +35,9 @@ which are as follows:

  Display information of a user, and any potentially available
  subusers and keys.

+:command:`user rename`
+  Renames a user.
+
:command:`user rm`
  Remove a user.
@@ -89,6 +92,10 @@ which are as follows:

:command:`bucket unlink`
  Unlink bucket from specified user.

+:command:`bucket chown`
+  Link bucket to specified user and update object ACLs.
+  Use --marker to resume if command gets interrupted.
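The marker-based resumption that ``bucket chown`` relies on can be sketched as follows. This is only an illustrative model with hypothetical names, not radosgw code: each processed object key becomes the new marker, and a restarted run skips every key at or before the last reported marker.

```python
def chown_objects(keys, set_owner, marker=None):
    """Apply set_owner to each key in sorted order, skipping keys up to
    and including the marker from an earlier, interrupted run; yield each
    key as the new resume marker once it has been processed."""
    for key in sorted(keys):
        if marker is not None and key <= marker:
            continue  # already chowned before the interruption
        set_owner(key)
        yield key

keys = ['a', 'b', 'c', 'd']
done = []
it = chown_objects(keys, done.append)
marker = next(it)
marker = next(it)   # the run is "interrupted" after two objects
resumed = []
list(chown_objects(keys, resumed.append, marker=marker))
```

Restarting with the last marker touches only the remaining objects, so an interrupted chown never re-processes work it already finished.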
+
:command:`bucket stats`
  Returns bucket statistics.
@@ -463,6 +470,10 @@ Options

   The radosgw user ID.

+.. option:: --new-uid=uid
+
+   ID of the new user. Used with 'user rename' command.
+
.. option:: --subuser=<name>

   Name of the subuser.
@@ -517,9 +528,10 @@ Options

   Set the system flag on the user.

-.. option:: --bucket=bucket
+.. option:: --bucket=[tenant-id/]bucket

-   Specify the bucket name.
+   Specify the bucket name. If tenant-id is not specified, the tenant-id
+   of the user (--uid) is used.
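The ``[tenant-id/]bucket`` form above can be modeled with a small parser. This is a sketch with a hypothetical helper name, not actual radosgw-admin code; note that an explicit leading slash, as in the ``--bucket=/foo`` example later in this document, selects the empty (global) tenant rather than the user's tenant.

```python
def resolve_bucket(spec, user_tenant):
    """Split an "[tenant-id/]bucket" argument into (tenant, bucket),
    falling back to the tenant of the --uid user when none is given."""
    if '/' in spec:
        tenant, bucket = spec.split('/', 1)  # explicit tenant, may be empty
    else:
        tenant, bucket = user_tenant, spec   # inherit the user's tenant
    return tenant, bucket
```

So ``foo`` resolves inside the caller's tenant, ``tnt/foo`` inside ``tnt``, and ``/foo`` inside the legacy global tenant.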

.. option:: --pool=<pool>
@@ -546,6 +558,12 @@ Options

   Specify the bucket id.

+.. option:: --bucket-new-name=[tenant-id/]<bucket>
+
+   Optional for `bucket link`; use to rename a bucket.
+   While tenant-id/ can be specified, this is never
+   necessary for normal operation.
+
.. option:: --shard-id=<shard-id>

   Optional for mdlog list, bi list, data sync status. Required for ``mdlog trim``.
@@ -884,6 +902,10 @@ Generate a new user::

Remove a user::

  $ radosgw-admin user rm --uid=johnny

+Rename a user::
+
+  $ radosgw-admin user rename --uid=johny --new-uid=joe
+
Remove a user and all associated buckets with their contents::

@@ -901,6 +923,18 @@ Unlink bucket from specified user::

  $ radosgw-admin bucket unlink --bucket=foo --uid=johnny

+Rename a bucket::
+
+  $ radosgw-admin bucket link --bucket=foo --bucket-new-name=bar --uid=johnny
+
+Move a bucket from the old global tenant space to a specified tenant::
+
+  $ radosgw-admin bucket link --bucket=/foo --uid=12345678$12345678
+
+Link bucket to specified user and change object ACLs::
+
+  $ radosgw-admin bucket chown --bucket=/foo --uid=12345678$12345678
+
Show the logs of a bucket from April 1st, 2012::

  $ radosgw-admin log show --bucket=foo --date=2012-04-01-01 --bucket-id=default.14193.1
@@ -1527,7 +1527,7 @@ Request Parameters

:Description: The bucket id to unlink.
:Type: String
:Example: ``dev.6607669.420``
-:Required: Yes
+:Required: No

``uid``
@@ -46,6 +46,13 @@ For a v3 version of the OpenStack Identity API you should replace

    rgw keystone admin domain = {keystone admin domain name}
    rgw keystone admin project = {keystone admin project name}

+For compatibility with previous versions of ceph, it is also
+possible to set ``rgw keystone implicit tenants`` to either
+``s3`` or ``swift``. This has the effect of splitting
+the identity space such that the indicated protocol will
+only use implicit tenants, and the other protocol will
+never use implicit tenants. Some older versions of ceph
+only supported implicit tenants with swift.
+
Ocata (and later)
-----------------
@@ -154,6 +154,13 @@ are two or more different tenants all creating a container named
``foo``, radosgw is able to transparently discern them by their tenant
prefix.

+It is also possible to limit the effects of implicit tenants
+to only apply to swift or s3, by setting ``rgw keystone implicit tenants``
+to either ``s3`` or ``swift``. This will likely primarily
+be of use to users who had previously used implicit tenants
+with older versions of ceph, where implicit tenants
+only applied to the swift protocol.
+
Notes and known issues
----------------------
@@ -167,9 +167,9 @@ class usage_acc:
				r.append("malformed summary looking for user " + e['user']
					+ " " + str(ex))
				break
-		if s2 == None:
-			r.append("missing summary for user " + e['user'] + " " + str(ex))
-			continue
+			if s2 == None:
+				r.append("missing summary for user " + e['user'] + " " + str(ex))
+				continue
		try:
			c2 = s2['categories']
		except Exception as ex:
@@ -284,16 +284,21 @@ def task(ctx, config):
    ##
    user1='foo'
    user2='fud'
+    user3='bar'
+    user4='bud'
    subuser1='foo:foo1'
    subuser2='foo:foo2'
    display_name1='Foo'
    display_name2='Fud'
+    display_name3='Bar'
    email='foo@foo.com'
    email2='bar@bar.com'
    access_key='9te6NH5mcdcq0Tc5i8i1'
    secret_key='Ny4IOauQoL18Gp2zM7lC1vLmoawgqcYP/YGcWfXu'
    access_key2='p5YnriCv1nAtykxBrupQ'
    secret_key2='Q8Tk6Q/27hfbFSYdSkPtUqhqx1GgzvpXa4WARozh'
+    access_key3='NX5QOQKC6BH2IDN8HC7A'
+    secret_key3='LnEsqNNqZIpkzauboDcLXLcYaWwLQ3Kop0zAnKIn'
    swift_secret1='gpS2G9RREMrnbqlp29PP2D36kgPR1tm72n5fPYfL'
    swift_secret2='ri2VJQcKSYATOY6uaDUX7pxgkW+W1YmC6OCxPHwy'
@@ -317,11 +322,20 @@ def task(ctx, config):
        host=endpoint.hostname,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )
+    connection3 = boto.s3.connection.S3Connection(
+        aws_access_key_id=access_key3,
+        aws_secret_access_key=secret_key3,
+        is_secure=False,
+        port=endpoint.port,
+        host=endpoint.hostname,
+        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
+        )

    acc = usage_acc()
    rl = requestlog_queue(acc.generate_make_entry())
    connection.set_request_hook(rl)
    connection2.set_request_hook(rl)
+    connection3.set_request_hook(rl)

    # legend (test cases can be easily grep-ed out)
    # TESTCASE 'testname','object','method','operation','assertion'
@@ -859,6 +873,90 @@ def task(ctx, config):
    assert entry['category'] == cat
    assert entry['successful_ops'] > 0

+    # TESTCASE 'user-rename', 'user', 'rename', 'existing user', 'new user', 'succeeds'
+    # create a new user user3
+    (err, out) = rgwadmin(ctx, client, [
+        'user', 'create',
+        '--uid', user3,
+        '--display-name', display_name3,
+        '--access-key', access_key3,
+        '--secret', secret_key3,
+        '--max-buckets', '4'
+        ],
+        check_status=True)
+
+    # create a bucket
+    bucket = connection3.create_bucket(bucket_name + '6')
+
+    rl.log_and_clear("create_bucket", bucket_name + '6', user3)
+
+    # create object
+    object_name1 = 'thirteen'
+    key1 = boto.s3.key.Key(bucket, object_name1)
+    key1.set_contents_from_string(object_name1)
+    rl.log_and_clear("put_obj", bucket_name + '6', user3)
+
+    # rename user3
+    (err, out) = rgwadmin(ctx, client, ['user', 'rename', '--uid', user3, '--new-uid', user4], check_status=True)
+    assert out['user_id'] == user4
+    assert out['keys'][0]['access_key'] == access_key3
+    assert out['keys'][0]['secret_key'] == secret_key3
+
+    time.sleep(5)
+
+    # get bucket and object to test if user keys are preserved
+    bucket = connection3.get_bucket(bucket_name + '6')
+    s = key1.get_contents_as_string()
+    rl.log_and_clear("get_obj", bucket_name + '6', user4)
+    assert s == object_name1
+
+    # TESTCASE 'user-rename', 'user', 'rename', 'existing user', 'another existing user', 'fails'
+    # create a new user user2
+    (err, out) = rgwadmin(ctx, client, [
+        'user', 'create',
+        '--uid', user2,
+        '--display-name', display_name2,
+        '--access-key', access_key2,
+        '--secret', secret_key2,
+        '--max-buckets', '4'
+        ],
+        check_status=True)
+
+    # create a bucket
+    bucket = connection2.create_bucket(bucket_name + '7')
+
+    rl.log_and_clear("create_bucket", bucket_name + '7', user2)
+
+    # create object
+    object_name2 = 'fourteen'
+    key2 = boto.s3.key.Key(bucket, object_name2)
+    key2.set_contents_from_string(object_name2)
+    rl.log_and_clear("put_obj", bucket_name + '7', user2)
+
+    (err, out) = rgwadmin(ctx, client, ['user', 'rename', '--uid', user4, '--new-uid', user2])
+    assert err
+
+    # test if user2 and user4 can still access their buckets and objects after the rename fails
+    bucket = connection3.get_bucket(bucket_name + '6')
+    s = key1.get_contents_as_string()
+    rl.log_and_clear("get_obj", bucket_name + '6', user4)
+    assert s == object_name1
+
+    bucket = connection2.get_bucket(bucket_name + '7')
+    s = key2.get_contents_as_string()
+    rl.log_and_clear("get_obj", bucket_name + '7', user2)
+    assert s == object_name2
+
+    (err, out) = rgwadmin(ctx, client,
+        ['user', 'rm', '--uid', user4, '--purge-data' ],
+        check_status=True)
+
+    (err, out) = rgwadmin(ctx, client,
+        ['user', 'rm', '--uid', user2, '--purge-data' ],
+        check_status=True)
+
+    time.sleep(5)
+
    # should be all through with connection. (anything using connection
    # should be BEFORE the usage stuff above.)
    rl.log_and_clear("(before-close)", '-', '-', ignore_this_entry)
@@ -47,6 +47,7 @@ struct obj_version {

  void dump(Formatter *f) const;
+  void decode_json(JSONObj *obj);
  static void generate_test_instances(list<obj_version*>& o);
};
WRITE_CLASS_ENCODER(obj_version)
@@ -1332,7 +1332,6 @@ OPTION(rgw_keystone_accepted_roles, OPT_STR) // roles required to serve request
OPTION(rgw_keystone_accepted_admin_roles, OPT_STR) // list of roles allowing an user to gain admin privileges
OPTION(rgw_keystone_token_cache_size, OPT_INT) // max number of entries in keystone token cache
OPTION(rgw_keystone_verify_ssl, OPT_BOOL) // should we try to verify keystone's ssl
-OPTION(rgw_keystone_implicit_tenants, OPT_BOOL) // create new users in their own tenants of the same name
OPTION(rgw_cross_domain_policy, OPT_STR)
OPTION(rgw_healthcheck_disabling_path, OPT_STR) // path that existence causes the healthcheck to respond 503
OPTION(rgw_s3_auth_use_rados, OPT_BOOL) // should we try to use the internal credentials for s3?
@@ -5898,12 +5898,13 @@ std::vector<Option> get_rgw_options() {
    .set_default(true)
    .set_description("Should RGW verify the Keystone server SSL certificate."),

-    Option("rgw_keystone_implicit_tenants", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
-    .set_default(false)
+    Option("rgw_keystone_implicit_tenants", Option::TYPE_STR, Option::LEVEL_ADVANCED)
+    .set_default("false")
+    .set_enum_allowed( { "false", "true", "swift", "s3", "both", "0", "1", "none" } )
    .set_description("RGW Keystone implicit tenants creation")
    .set_long_description(
        "Implicitly create new users in their own tenant with the same name when "
-        "authenticating via Keystone."),
+        "authenticating via Keystone. Can be limited to s3 or swift only."),

    Option("rgw_cross_domain_policy", Option::TYPE_STR, Option::LEVEL_ADVANCED)
    .set_default("<allow-access-from domain=\"*\" secure=\"false\" />")
@@ -52,6 +52,20 @@ void RGWAccessControlList::add_grant(ACLGrant *grant)
  _add_grant(grant);
}

+void RGWAccessControlList::remove_canon_user_grant(rgw_user& user_id)
+{
+  auto multi_map_iter = grant_map.find(user_id.to_str());
+  if(multi_map_iter != grant_map.end()) {
+    auto grants = grant_map.equal_range(user_id.to_str());
+    grant_map.erase(grants.first, grants.second);
+  }
+
+  auto map_iter = acl_user_map.find(user_id.to_str());
+  if (map_iter != acl_user_map.end()){
+    acl_user_map.erase(map_iter);
+  }
+}
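The two erasures above (the whole `equal_range` of the grant multimap, then the cached entry in `acl_user_map`) can be mirrored in a few lines of Python, with a dict of lists standing in for the C++ multimap. The names and sample data here are stand-ins for illustration only.

```python
def remove_canon_user_grant(grant_map, acl_user_map, user_id):
    """Drop every ACL grant held by user_id: erase the full range of
    grants keyed by the user, then the cached permission entry."""
    grant_map.pop(user_id, None)     # equal_range + erase in the C++ code
    acl_user_map.pop(user_id, None)  # acl_user_map.erase(map_iter)

grants = {'alice': ['FULL_CONTROL'], 'bob': ['READ', 'WRITE']}
perms = {'alice': 15, 'bob': 3}
remove_canon_user_grant(grants, perms, 'bob')
```

After the call, no trace of the old owner's grants remains, which is what lets `bucket chown` re-grant the objects to the new owner cleanly.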
+
uint32_t RGWAccessControlList::get_perm(const DoutPrefixProvider* dpp,
                                        const rgw::auth::Identity& auth_identity,
                                        const uint32_t perm_mask)
|
@ -344,6 +344,7 @@ public:
|
||||
static void generate_test_instances(list<RGWAccessControlList*>& o);
|
||||
|
||||
void add_grant(ACLGrant *grant);
|
||||
void remove_canon_user_grant(rgw_user& user_id);
|
||||
|
||||
multimap<string, ACLGrant>& get_grant_map() { return grant_map; }
|
||||
const multimap<string, ACLGrant>& get_grant_map() const { return grant_map; }
|
||||
|
@@ -84,6 +84,7 @@ void usage()
  cout << "  user create                create a new user\n" ;
  cout << "  user modify                modify user\n";
  cout << "  user info                  get user info\n";
+  cout << "  user rename                rename user\n";
  cout << "  user rm                    remove user\n";
  cout << "  user suspend               suspend a user\n";
  cout << "  user enable                re-enable user after suspension\n";

@@ -105,6 +106,7 @@ void usage()
  cout << "  bucket stats               returns bucket statistics\n";
  cout << "  bucket rm                  remove bucket\n";
  cout << "  bucket check               check bucket index\n";
+  cout << "  bucket chown               link bucket to specified user and update its object ACLs\n";
  cout << "  bucket reshard             reshard bucket\n";
  cout << "  bucket rewrite             rewrite all objects in the specified bucket\n";
  cout << "  bucket sync disable        disable bucket sync\n";
@@ -242,6 +244,7 @@ void usage()
  cout << "options:\n";
  cout << "   --tenant=<tenant>         tenant name\n";
  cout << "   --uid=<id>                user id\n";
+  cout << "   --new-uid=<id>            new user id\n";
  cout << "   --subuser=<name>          subuser name\n";
  cout << "   --access-key=<key>        S3 access key\n";
  cout << "   --email=<email>           user's email address\n";

@@ -265,6 +268,8 @@ void usage()
  cout << "   --start-date=<date>       start date in the format yyyy-mm-dd\n";
  cout << "   --end-date=<date>         end date in the format yyyy-mm-dd\n";
  cout << "   --bucket-id=<bucket-id>   bucket id\n";
+  cout << "   --bucket-new-name=<bucket>\n";
+  cout << "                             for bucket link: optional new name\n";
  cout << "   --shard-id=<shard-id>     optional for: \n";
  cout << "                               mdlog list\n";
  cout << "                               data sync status\n";
@@ -387,6 +392,7 @@ enum {
  OPT_USER_CREATE,
  OPT_USER_INFO,
  OPT_USER_MODIFY,
+  OPT_USER_RENAME,
  OPT_USER_RM,
  OPT_USER_SUSPEND,
  OPT_USER_ENABLE,

@@ -413,6 +419,7 @@ enum {
  OPT_BUCKET_RM,
  OPT_BUCKET_REWRITE,
  OPT_BUCKET_RESHARD,
+  OPT_BUCKET_CHOWN,
  OPT_POLICY,
  OPT_POOL_ADD,
  OPT_POOL_RM,
@@ -640,6 +647,8 @@ static int get_cmd(const char *cmd, const char *prev_cmd, const char *prev_prev_
    return OPT_USER_INFO;
  if (strcmp(cmd, "modify") == 0)
    return OPT_USER_MODIFY;
+  if (strcmp(cmd, "rename") == 0)
+    return OPT_USER_RENAME;
  if (strcmp(cmd, "rm") == 0)
    return OPT_USER_RM;
  if (strcmp(cmd, "suspend") == 0)

@@ -678,6 +687,8 @@ static int get_cmd(const char *cmd, const char *prev_cmd, const char *prev_prev_
    return OPT_BUCKET_STATS;
  if (strcmp(cmd, "rm") == 0)
    return OPT_BUCKET_RM;
+  if (strcmp(cmd, "chown") == 0)
+    return OPT_BUCKET_CHOWN;
  if (strcmp(cmd, "rewrite") == 0)
    return OPT_BUCKET_REWRITE;
  if (strcmp(cmd, "reshard") == 0)
@@ -2742,6 +2753,7 @@ int main(int argc, const char **argv)

  rgw_user user_id;
  string tenant;
+  rgw_user new_user_id;
  std::string access_key, secret_key, user_email, display_name;
  std::string bucket_name, pool_name, object;
  rgw_pool pool;

@@ -2784,6 +2796,7 @@ int main(int argc, const char **argv)
  bool set_temp_url_key = false;
  map<int, string> temp_url_keys;
  string bucket_id;
+  string new_bucket_name;
  Formatter *formatter = NULL;
  int purge_data = false;
  int pretty_format = false;
@@ -2900,6 +2913,8 @@ int main(int argc, const char **argv)
      break;
    } else if (ceph_argparse_witharg(args, i, &val, "-i", "--uid", (char*)NULL)) {
      user_id.from_str(val);
+    } else if (ceph_argparse_witharg(args, i, &val, "--new-uid", (char*)NULL)) {
+      new_user_id.from_str(val);
    } else if (ceph_argparse_witharg(args, i, &val, "--tenant", (char*)NULL)) {
      tenant = val;
    } else if (ceph_argparse_witharg(args, i, &val, "--access-key", (char*)NULL)) {

@@ -3044,6 +3059,8 @@ int main(int argc, const char **argv)
      cerr << "bad bucket-id" << std::endl;
      exit(1);
    }
+    } else if (ceph_argparse_witharg(args, i, &val, "--bucket-new-name", (char*)NULL)) {
+      new_bucket_name = val;
    } else if (ceph_argparse_witharg(args, i, &val, "--format", (char*)NULL)) {
      format = val;
    } else if (ceph_argparse_witharg(args, i, &val, "--categories", (char*)NULL)) {
@@ -3304,6 +3321,11 @@ int main(int argc, const char **argv)
    }
    user_id.tenant = tenant;
  }

+  if (!new_user_id.empty() && !tenant.empty()) {
+    new_user_id.tenant = tenant;
+  }
+
  /* check key parameter conflict */
  if ((!access_key.empty()) && gen_access_key) {
    cerr << "ERROR: key parameter conflict, --access-key & --gen-access-key" << std::endl;

@@ -4924,6 +4946,10 @@ int main(int argc, const char **argv)
  if (!user_email.empty())
    user_op.set_user_email(user_email);

+  if (!new_user_id.empty()) {
+    user_op.set_new_user_id(new_user_id);
+  }
+
  if (!access_key.empty())
    user_op.set_access_key(access_key);
@@ -5045,6 +5071,14 @@ int main(int argc, const char **argv)
  }

  output_user_info = false;
  break;
+  case OPT_USER_RENAME:
+    ret = user.rename(user_op, &err_msg);
+    if (ret < 0) {
+      cerr << "could not rename user: " << err_msg << std::endl;
+      return -ret;
+    }
+
+    break;
  case OPT_USER_ENABLE:
  case OPT_USER_SUSPEND:
@@ -5529,6 +5563,7 @@ int main(int argc, const char **argv)

  if (opt_cmd == OPT_BUCKET_LINK) {
    bucket_op.set_bucket_id(bucket_id);
+    bucket_op.set_new_bucket_name(new_bucket_name);
    string err;
    int r = RGWBucketAdminOp::link(store, bucket_op, &err);
    if (r < 0) {

@@ -5545,6 +5580,20 @@ int main(int argc, const char **argv)
    }
  }

+  if (opt_cmd == OPT_BUCKET_CHOWN) {
+
+    bucket_op.set_bucket_name(bucket_name);
+    bucket_op.set_new_bucket_name(new_bucket_name);
+    string err;
+    string marker;
+
+    int r = RGWBucketAdminOp::chown(store, bucket_op, marker, &err);
+    if (r < 0) {
+      cerr << "failure: " << cpp_strerror(-r) << ": " << err << std::endl;
+      return -r;
+    }
+  }
+
  if (opt_cmd == OPT_LOG_LIST) {
    // filter by date?
    if (date.size() && date.size() != 10) {
@@ -446,8 +446,48 @@ void rgw::auth::RemoteApplier::to_str(std::ostream& out) const
      << ", is_admin=" << info.is_admin << ")";
}

+void rgw::auth::ImplicitTenants::recompute_value(const ConfigProxy& c)
+{
+  std::string s = c.get_val<std::string>("rgw_keystone_implicit_tenants");
+  int v = 0;
+  if (boost::iequals(s, "both")
+      || boost::iequals(s, "true")
+      || boost::iequals(s, "1")) {
+    v = IMPLICIT_TENANTS_S3|IMPLICIT_TENANTS_SWIFT;
+  } else if (boost::iequals(s, "0")
+      || boost::iequals(s, "none")
+      || boost::iequals(s, "false")) {
+    v = 0;
+  } else if (boost::iequals(s, "s3")) {
+    v = IMPLICIT_TENANTS_S3;
+  } else if (boost::iequals(s, "swift")) {
+    v = IMPLICIT_TENANTS_SWIFT;
+  } else { /* "" (and anything else) */
+    v = IMPLICIT_TENANTS_BAD;
+    // assert(0);
+  }
+  saved = v;
+}
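The string-to-bitmask mapping above can be restated compactly in Python. The value set matches the `set_enum_allowed` list in the options table, and the flag constants mirror `implicit_tenant_flag_bits`; this is only a summary sketch, not radosgw code.

```python
SWIFT, S3, BAD = 1, 2, -1  # mirrors implicit_tenant_flag_bits

def recompute_value(s):
    """Translate an rgw_keystone_implicit_tenants string into flag bits."""
    s = s.lower()
    if s in ('both', 'true', '1'):
        return S3 | SWIFT          # implicit tenants for both protocols
    if s in ('0', 'none', 'false'):
        return 0                   # implicit tenants disabled
    if s == 's3':
        return S3
    if s == 'swift':
        return SWIFT
    return BAD                     # anything else is rejected

def is_split_mode(v):
    """Split mode: exactly one protocol uses implicit tenants."""
    return v in (S3, SWIFT)
```

Note that only `s3` and `swift` put the gateway into split mode; `both`/`true` and `none`/`false` keep the two protocols in the same identifier space.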
+
+const char **rgw::auth::ImplicitTenants::get_tracked_conf_keys() const
+{
+  static const char *keys[] = {
+    "rgw_keystone_implicit_tenants",
+    nullptr };
+  return keys;
+}
+
+void rgw::auth::ImplicitTenants::handle_conf_change(const ConfigProxy& c,
+    const std::set <std::string> &changed)
+{
+  if (changed.count("rgw_keystone_implicit_tenants")) {
+    recompute_value(c);
+  }
+}
+
void rgw::auth::RemoteApplier::create_account(const DoutPrefixProvider* dpp,
                                              const rgw_user& acct_user,
+                                             bool implicit_tenant,
                                              RGWUserInfo& user_info) const      /* out */
{
  rgw_user new_acct_user = acct_user;
@@ -459,7 +499,7 @@ void rgw::auth::RemoteApplier::create_account(const DoutPrefixProvider* dpp,

  /* An upper layer may enforce creating new accounts within their own
   * tenants. */
-  if (new_acct_user.tenant.empty() && implicit_tenants) {
+  if (new_acct_user.tenant.empty() && implicit_tenant) {
    new_acct_user.tenant = new_acct_user.id;
  }

@@ -486,6 +526,9 @@ void rgw::auth::RemoteApplier::load_acct_info(const DoutPrefixProvider* dpp, RGW
   * that belongs to the authenticated identity. Another policy may be
   * applied by using a RGWThirdPartyAccountAuthApplier decorator. */
  const rgw_user& acct_user = info.acct_user;
+  auto implicit_value = implicit_tenant_context.get_value();
+  bool implicit_tenant = implicit_value.implicit_tenants_for_(implicit_tenant_bit);
+  bool split_mode = implicit_value.is_split_mode();

  /* Normally, empty "tenant" field of acct_user means the authenticated
   * identity has the legacy, global tenant. However, due to inclusion
@@ -497,8 +540,16 @@ void rgw::auth::RemoteApplier::load_acct_info(const DoutPrefixProvider* dpp, RGW
   * the wiser.
   * If that fails, we look up in the requested (possibly empty) tenant.
   * If that fails too, we create the account within the global or separated
-   * namespace depending on rgw_keystone_implicit_tenants. */
-  if (acct_user.tenant.empty()) {
+   * namespace depending on rgw_keystone_implicit_tenants.
+   * For compatibility with previous versions of ceph, it is possible
+   * to enable implicit_tenants for only s3 or only swift.
+   * in this mode ("split_mode"), we must constrain the id lookups to
+   * only use the identifier space that would be used if the id were
+   * to be created. */
+
+  if (split_mode && !implicit_tenant)
+    ; /* suppress lookup for id used by "other" protocol */
+  else if (acct_user.tenant.empty()) {
    const rgw_user tenanted_uid(acct_user.id, acct_user.id);

    if (rgw_get_user_info_by_uid(store, tenanted_uid, user_info) >= 0) {
@@ -507,11 +558,16 @@ void rgw::auth::RemoteApplier::load_acct_info(const DoutPrefixProvider* dpp, RGW
    }
  }

-  if (rgw_get_user_info_by_uid(store, acct_user, user_info) < 0) {
-    ldpp_dout(dpp, 0) << "NOTICE: couldn't map swift user " << acct_user << dendl;
-    create_account(dpp, acct_user, user_info);
+  if (split_mode && implicit_tenant)
+    ; /* suppress lookup for id used by "other" protocol */
+  else if (rgw_get_user_info_by_uid(store, acct_user, user_info) >= 0) {
+    /* Succeeded. */
+    return;
  }

+  ldout(cct, 0) << "NOTICE: couldn't map swift user " << acct_user << dendl;
+  create_account(dpp, acct_user, implicit_tenant, user_info);
+
  /* Succeeded if we are here (create_account() hasn't throwed). */
}
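The resulting lookup order can be condensed into a small sketch. The `lookup` callback is hypothetical (it stands in for `rgw_get_user_info_by_uid`, returning account info or `None`), and the return values are illustrative, not the real C++ interface.

```python
def load_acct_info(uid, tenant, lookup, split_mode, implicit_tenant):
    """Condensed decision logic of RemoteApplier::load_acct_info: try the
    implicitly tenanted id, then the requested tenant, then fall back to
    creating the account. In split mode, the lookup belonging to the
    identifier space of the *other* protocol is suppressed."""
    if not (split_mode and not implicit_tenant) and tenant == '':
        info = lookup(uid, uid)      # tenant named after the user
        if info is not None:
            return info
    if not (split_mode and implicit_tenant):
        info = lookup(uid, tenant)   # requested (possibly global) tenant
        if info is not None:
            return info
    # neither lookup succeeded (or both were suppressed): create the account
    return ('created', uid if implicit_tenant else tenant)

db = {('u', 'u'): 'tenanted-account'}
lookup = lambda i, t: db.get((i, t))
found = load_acct_info('u', '', lookup, False, True)
```

The point of the suppression is that a split-mode gateway never resolves an id through the identifier space the other protocol would have created it in.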

@@ -412,6 +412,43 @@ public:
  };
};

+class ImplicitTenants: public md_config_obs_t {
+public:
+  enum implicit_tenant_flag_bits {IMPLICIT_TENANTS_SWIFT=1,
+	IMPLICIT_TENANTS_S3=2, IMPLICIT_TENANTS_BAD = -1, };
+private:
+  int saved;
+  void recompute_value(const ConfigProxy& );
+  class ImplicitTenantValue {
+    friend class ImplicitTenants;
+    int v;
+    ImplicitTenantValue(int v) : v(v) {};
+  public:
+    bool inline is_split_mode()
+    {
+      assert(v != IMPLICIT_TENANTS_BAD);
+      return v == IMPLICIT_TENANTS_SWIFT || v == IMPLICIT_TENANTS_S3;
+    }
+    bool inline implicit_tenants_for_(const implicit_tenant_flag_bits bit)
+    {
+      assert(v != IMPLICIT_TENANTS_BAD);
+      return static_cast<bool>(v&bit);
+    }
+  };
+public:
+  ImplicitTenants(const ConfigProxy& c) { recompute_value(c);}
+  ImplicitTenantValue get_value() {
+    return ImplicitTenantValue(saved);
+  }
+private:
+  const char** get_tracked_conf_keys() const override;
+  void handle_conf_change(const ConfigProxy& conf,
+	const std::set <std::string> &changed) override;
+};
+
+std::tuple<bool,bool> implicit_tenants_enabled_for_swift(CephContext * const cct);
+std::tuple<bool,bool> implicit_tenants_enabled_for_s3(CephContext * const cct);

/* rgw::auth::RemoteApplier targets those authentication engines which don't
 * need to ask the RADOS store while performing the auth process. Instead,
 * they obtain credentials from an external source like Keystone or LDAP.
@@ -464,10 +501,12 @@ protected:
  const acl_strategy_t extra_acl_strategy;

  const AuthInfo info;
-  const bool implicit_tenants;
+  rgw::auth::ImplicitTenants& implicit_tenant_context;
+  const rgw::auth::ImplicitTenants::implicit_tenant_flag_bits implicit_tenant_bit;

  virtual void create_account(const DoutPrefixProvider* dpp,
                              const rgw_user& acct_user,
+                             bool implicit_tenant,
                              RGWUserInfo& user_info) const;          /* out */

public:
@@ -475,12 +514,14 @@ public:
                RGWRados* const store,
                acl_strategy_t&& extra_acl_strategy,
                const AuthInfo& info,
-               const bool implicit_tenants)
+               rgw::auth::ImplicitTenants& implicit_tenant_context,
+               rgw::auth::ImplicitTenants::implicit_tenant_flag_bits implicit_tenant_bit)
    : cct(cct),
      store(store),
      extra_acl_strategy(std::move(extra_acl_strategy)),
      info(info),
-      implicit_tenants(implicit_tenants) {
+      implicit_tenant_context(implicit_tenant_context),
+      implicit_tenant_bit(implicit_tenant_bit) {
  }

  uint32_t get_perms_from_aclspec(const DoutPrefixProvider* dpp, const aclspec_t& aclspec) const override;
@@ -36,9 +36,11 @@ class StrategyRegistry {
  s3_main_strategy_plain_t s3_main_strategy_plain;
  s3_main_strategy_boto2_t s3_main_strategy_boto2;

-  s3_main_strategy_t(CephContext* const cct, RGWRados* const store)
-    : s3_main_strategy_plain(cct, store),
-      s3_main_strategy_boto2(cct, store) {
+  s3_main_strategy_t(CephContext* const cct,
+      ImplicitTenants& implicit_tenant_context,
+      RGWRados* const store)
+    : s3_main_strategy_plain(cct, implicit_tenant_context, store),
+      s3_main_strategy_boto2(cct, implicit_tenant_context, store) {
    add_engine(Strategy::Control::SUFFICIENT, s3_main_strategy_plain);
    add_engine(Strategy::Control::FALLBACK, s3_main_strategy_boto2);
  }

@@ -58,11 +60,12 @@ class StrategyRegistry {

public:
  StrategyRegistry(CephContext* const cct,
+                   ImplicitTenants& implicit_tenant_context,
                   RGWRados* const store)
-    : s3_main_strategy(cct, store),
-      s3_post_strategy(cct, store),
-      swift_strategy(cct, store),
-      sts_strategy(cct, store) {
+    : s3_main_strategy(cct, implicit_tenant_context, store),
+      s3_post_strategy(cct, implicit_tenant_context, store),
+      swift_strategy(cct, implicit_tenant_context, store),
+      sts_strategy(cct, implicit_tenant_context, store) {
  }

  const s3_main_strategy_t& get_s3_main() const {

@@ -83,8 +86,9 @@ public:

  static std::shared_ptr<StrategyRegistry>
  create(CephContext* const cct,
+         ImplicitTenants& implicit_tenant_context,
         RGWRados* const store) {
-    return std::make_shared<StrategyRegistry>(cct, store);
+    return std::make_shared<StrategyRegistry>(cct, implicit_tenant_context, store);
  }
};
@@ -37,6 +37,7 @@ class STSAuthStrategy : public rgw::auth::Strategy,
                        public rgw::auth::RoleApplier::Factory {
  typedef rgw::auth::IdentityApplier::aplptr_t aplptr_t;
  RGWRados* const store;
+  rgw::auth::ImplicitTenants& implicit_tenant_context;

  STSEngine sts_engine;

@@ -47,7 +48,8 @@ class STSAuthStrategy : public rgw::auth::Strategy,
                         ) const override {
    auto apl = rgw::auth::add_sysreq(cct, store, s,
      rgw::auth::RemoteApplier(cct, store, std::move(acl_alg), info,
-                              cct->_conf->rgw_keystone_implicit_tenants));
+                              implicit_tenant_context,
+                              rgw::auth::ImplicitTenants::IMPLICIT_TENANTS_S3));
    return aplptr_t(new decltype(apl)(std::move(apl)));
  }

@@ -74,8 +76,10 @@ class STSAuthStrategy : public rgw::auth::Strategy,
public:
  STSAuthStrategy(CephContext* const cct,
                  RGWRados* const store,
+                  rgw::auth::ImplicitTenants& implicit_tenant_context,
                  AWSEngine::VersionAbstractor* const ver_abstractor)
    : store(store),
+      implicit_tenant_context(implicit_tenant_context),
      sts_engine(cct, store, *ver_abstractor,
                 static_cast<rgw::auth::LocalApplier::Factory*>(this),
                 static_cast<rgw::auth::RemoteApplier::Factory*>(this),

@@ -94,6 +98,7 @@ class ExternalAuthStrategy : public rgw::auth::Strategy,
                             public rgw::auth::RemoteApplier::Factory {
  typedef rgw::auth::IdentityApplier::aplptr_t aplptr_t;
  RGWRados* const store;
+  rgw::auth::ImplicitTenants& implicit_tenant_context;

  using keystone_config_t = rgw::keystone::CephCtxConfig;
  using keystone_cache_t = rgw::keystone::TokenCache;

@@ -110,7 +115,8 @@ class ExternalAuthStrategy : public rgw::auth::Strategy,
                              ) const override {
    auto apl = rgw::auth::add_sysreq(cct, store, s,
      rgw::auth::RemoteApplier(cct, store, std::move(acl_alg), info,
-                              cct->_conf->rgw_keystone_implicit_tenants));
+                              implicit_tenant_context,
+                              rgw::auth::ImplicitTenants::IMPLICIT_TENANTS_S3));
    /* TODO(rzarzynski): replace with static_ptr. */
    return aplptr_t(new decltype(apl)(std::move(apl)));
  }

@@ -118,8 +124,10 @@ class ExternalAuthStrategy : public rgw::auth::Strategy,
public:
  ExternalAuthStrategy(CephContext* const cct,
                       RGWRados* const store,
+                       rgw::auth::ImplicitTenants& implicit_tenant_context,
                       AWSEngine::VersionAbstractor* const ver_abstractor)
    : store(store),
+      implicit_tenant_context(implicit_tenant_context),
      ldap_engine(cct, store, *ver_abstractor,
                  static_cast<rgw::auth::RemoteApplier::Factory*>(this)) {
|
||||
}
|
||||
|
||||
AWSAuthStrategy(CephContext* const cct,
|
||||
rgw::auth::ImplicitTenants& implicit_tenant_context,
|
||||
RGWRados* const store)
|
||||
: store(store),
|
||||
ver_abstractor(cct),
|
||||
anonymous_engine(cct,
|
||||
static_cast<rgw::auth::LocalApplier::Factory*>(this)),
|
||||
external_engines(cct, store, &ver_abstractor),
|
||||
sts_engine(cct, store, &ver_abstractor),
|
||||
external_engines(cct, store, implicit_tenant_context, &ver_abstractor),
|
||||
sts_engine(cct, store, implicit_tenant_context, &ver_abstractor),
|
||||
local_engine(cct, store, ver_abstractor,
|
||||
static_cast<rgw::auth::LocalApplier::Factory*>(this)) {
|
||||
/* The anonymous auth. */
|
||||
|
@@ -104,6 +104,8 @@ struct rgw_user {
     }
     return (id < rhs.id);
   }
+  void dump(Formatter *f) const;
+  static void generate_test_instances(list<rgw_user*>& o);
 };
 WRITE_CLASS_ENCODER(rgw_user)

@@ -24,6 +24,7 @@
 #include "rgw_user.h"
 #include "rgw_string.h"
 #include "rgw_multi.h"
+#include "rgw_op.h"

 #include "services/svc_zone.h"
 #include "services/svc_sys_obj.h"
@@ -33,6 +34,10 @@
 #include "rgw_common.h"
 #include "rgw_reshard.h"
 #include "rgw_lc.h"

+// stolen from src/cls/version/cls_version.cc
+#define VERSION_ATTR "ceph.objclass.version"
+
+#include "cls/user/cls_user_types.h"

 #define dout_context g_ceph_context
@@ -185,11 +190,135 @@ int rgw_bucket_sync_user_stats(RGWRados *store, const string& tenant_name, const
   return 0;
 }

+int rgw_set_bucket_acl(RGWRados* store, ACLOwner& owner, rgw_bucket& bucket, RGWBucketInfo& bucket_info, bufferlist& bl)
+{
+  RGWObjVersionTracker objv_tracker;
+  RGWObjVersionTracker old_version = bucket_info.objv_tracker;
+
+  int r = store->set_bucket_owner(bucket_info.bucket, owner);
+  if (r < 0) {
+    cerr << "ERROR: failed to set bucket owner: " << cpp_strerror(-r) << std::endl;
+    return r;
+  }
+
+  const rgw_pool& root_pool = store->svc.zone->get_zone_params().domain_root;
+  std::string bucket_entry;
+  rgw_make_bucket_entry_name(bucket.tenant, bucket.name, bucket_entry);
+  rgw_raw_obj obj(root_pool, bucket_entry);
+  auto obj_ctx = store->svc.sysobj->init_obj_ctx();
+  auto sysobj = obj_ctx.get_obj(obj);
+  rgw_raw_obj obj_bucket_instance;
+
+  store->get_bucket_instance_obj(bucket, obj_bucket_instance);
+  auto inst_sysobj = obj_ctx.get_obj(obj_bucket_instance);
+  r = inst_sysobj.wop()
+                 .set_objv_tracker(&objv_tracker)
+                 .write_attr(RGW_ATTR_ACL, bl, null_yield);
+  if (r < 0) {
+    cerr << "failed to set new acl: " << cpp_strerror(-r) << std::endl;
+    return r;
+  }
+
+  return 0;
+}
+int rgw_bucket_chown(RGWRados* const store, RGWUserInfo& user_info, RGWBucketInfo& bucket_info, const string& marker, map<string, bufferlist>& attrs)
+{
+  RGWObjectCtx obj_ctx(store);
+  std::vector<rgw_bucket_dir_entry> objs;
+  map<string, bool> common_prefixes;
+
+  RGWRados::Bucket target(store, bucket_info);
+  RGWRados::Bucket::List list_op(&target);
+
+  list_op.params.list_versions = true;
+  list_op.params.allow_unordered = true;
+  list_op.params.marker = marker;
+
+  bool is_truncated = false;
+  int count = 0;
+  int max_entries = 1000;
+
+  //Loop through objects and update object acls to point to bucket owner
+
+  do {
+    objs.clear();
+    int ret = list_op.list_objects(max_entries, &objs, &common_prefixes, &is_truncated, null_yield);
+    if (ret < 0) {
+      ldout(store->ctx(), 0) << "ERROR: list objects failed: " << cpp_strerror(-ret) << dendl;
+      return ret;
+    }
+
+    list_op.params.marker = list_op.get_next_marker();
+    count += objs.size();
+
+    for (const auto& obj : objs) {
+
+      rgw_obj r_obj(bucket_info.bucket, obj.key);
+      RGWRados::Object op_target(store, bucket_info, obj_ctx, r_obj);
+      RGWRados::Object::Read read_op(&op_target);
+
+      read_op.params.attrs = &attrs;
+      ret = read_op.prepare(null_yield);
+      if (ret < 0){
+        ldout(store->ctx(), 0) << "ERROR: failed to read object " << obj.key.name << cpp_strerror(-ret) << dendl;
+        continue;
+      }
+      const auto& aiter = attrs.find(RGW_ATTR_ACL);
+      if (aiter == attrs.end()) {
+        ldout(store->ctx(), 0) << "ERROR: no acls found for object " << obj.key.name << " .Continuing with next object." << dendl;
+        continue;
+      } else {
+        bufferlist& bl = aiter->second;
+        RGWAccessControlPolicy policy(store->ctx());
+        ACLOwner owner;
+        try {
+          decode(policy, bl);
+          owner = policy.get_owner();
+        } catch (buffer::error& err) {
+          ldout(store->ctx(), 0) << "ERROR: decode policy failed" << err << dendl;
+          return -EIO;
+        }
+
+        //Get the ACL from the policy
+        RGWAccessControlList& acl = policy.get_acl();
+
+        //Remove grant that is set to old owner
+        acl.remove_canon_user_grant(owner.get_id());
+
+        //Create a grant and add grant
+        ACLGrant grant;
+        grant.set_canon(bucket_info.owner, user_info.display_name, RGW_PERM_FULL_CONTROL);
+        acl.add_grant(&grant);
+
+        //Update the ACL owner to the new user
+        owner.set_id(bucket_info.owner);
+        owner.set_name(user_info.display_name);
+        policy.set_owner(owner);
+
+        bl.clear();
+        encode(policy, bl);
+
+        obj_ctx.set_atomic(r_obj);
+        ret = store->set_attr(&obj_ctx, bucket_info, r_obj, RGW_ATTR_ACL, bl);
+        if (ret < 0) {
+          ldout(store->ctx(), 0) << "ERROR: modify attr failed " << cpp_strerror(-ret) << dendl;
+          return ret;
+        }
+      }
+    }
+    cerr << count << " objects processed in " << bucket_info.bucket.name
+         << ". Next marker " << list_op.params.marker.name << std::endl;
+  } while(is_truncated);
+  return 0;
+}

 int rgw_link_bucket(RGWRados* const store,
                     const rgw_user& user_id,
                     rgw_bucket& bucket,
                     ceph::real_time creation_time,
-                    bool update_entrypoint)
+                    bool update_entrypoint,
+                    rgw_ep_info *pinfo)
 {
   int ret;
   string& tenant_name = bucket.tenant;
@@ -207,14 +336,22 @@ int rgw_link_bucket(RGWRados* const store,
   else
     new_bucket.creation_time = creation_time;

-  map<string, bufferlist> attrs;
-  RGWSysObjectCtx obj_ctx = store->svc.sysobj->init_obj_ctx();
+  map<string, bufferlist> attrs, *pattrs;

   if (update_entrypoint) {
-    ret = store->get_bucket_entrypoint_info(obj_ctx, tenant_name, bucket_name, ep, &ot, NULL, &attrs);
-    if (ret < 0 && ret != -ENOENT) {
-      ldout(store->ctx(), 0) << "ERROR: store->get_bucket_entrypoint_info() returned: "
-                             << cpp_strerror(-ret) << dendl;
+    if (pinfo) {
+      ep = pinfo->ep;
+      pattrs = &pinfo->attrs;
+    } else {
+      RGWSysObjectCtx obj_ctx = store->svc.sysobj->init_obj_ctx();
+
+      ret = store->get_bucket_entrypoint_info(obj_ctx,
+          tenant_name, bucket_name, ep, &ot, NULL, &attrs);
+      if (ret < 0 && ret != -ENOENT) {
+        ldout(store->ctx(), 0) << "ERROR: store->get_bucket_entrypoint_info() returned: "
+                               << cpp_strerror(-ret) << dendl;
+      }
+      pattrs = &attrs;
     }
   }

@@ -235,7 +372,7 @@ int rgw_link_bucket(RGWRados* const store,
   ep.linked = true;
   ep.owner = user_id;
   ep.bucket = bucket;
-  ret = store->put_bucket_entrypoint_info(tenant_name, bucket_name, ep, false, ot, real_time(), &attrs);
+  ret = store->put_bucket_entrypoint_info(tenant_name, bucket_name, ep, false, ot, real_time(), pattrs);
   if (ret < 0)
     goto done_err;
@@ -369,7 +506,7 @@ int rgw_bucket_parse_bucket_key(CephContext *cct, const string& key,

   // split tenant/name
   auto pos = name.find('/');
-  if (pos != boost::string_ref::npos) {
+  if (pos != string::npos) {
     auto tenant = name.substr(0, pos);
     bucket->tenant.assign(tenant.begin(), tenant.end());
     name = name.substr(pos + 1);
@@ -377,7 +514,7 @@ int rgw_bucket_parse_bucket_key(CephContext *cct, const string& key,

   // split name:instance
   pos = name.find(':');
-  if (pos != boost::string_ref::npos) {
+  if (pos != string::npos) {
     instance = name.substr(pos + 1);
     name = name.substr(0, pos);
   }
@@ -385,7 +522,7 @@ int rgw_bucket_parse_bucket_key(CephContext *cct, const string& key,

   // split instance:shard
   pos = instance.find(':');
-  if (pos == boost::string_ref::npos) {
+  if (pos == string::npos) {
     bucket->bucket_id.assign(instance.begin(), instance.end());
     *shard_id = -1;
     return 0;
@@ -781,15 +918,20 @@ static void set_err_msg(std::string *sink, std::string msg)
   *sink = msg;
 }

-int RGWBucket::init(RGWRados *storage, RGWBucketAdminOpState& op_state)
+int RGWBucket::init(RGWRados *storage, RGWBucketAdminOpState& op_state,
+                    std::string *err_msg, map<string, bufferlist> *pattrs)
 {
-  if (!storage)
+  std::string bucket_tenant;
+  if (!storage) {
+    set_err_msg(err_msg, "no storage!");
     return -EINVAL;
+  }

   store = storage;

   rgw_user user_id = op_state.get_user_id();
   tenant = user_id.tenant;
+  bucket_tenant = tenant;
   bucket_name = op_state.get_bucket_name();
   RGWUserBuckets user_buckets;
   auto obj_ctx = store->svc.sysobj->init_obj_ctx();
@@ -797,9 +939,19 @@ int RGWBucket::init(RGWRados *storage, RGWBucketAdminOpState& op_state)
   if (bucket_name.empty() && user_id.empty())
     return -EINVAL;

+  // split possible tenant/name
+  auto pos = bucket_name.find('/');
+  if (pos != string::npos) {
+    bucket_tenant = bucket_name.substr(0, pos);
+    bucket_name = bucket_name.substr(pos + 1);
+  }
+
   if (!bucket_name.empty()) {
-    int r = store->get_bucket_info(obj_ctx, tenant, bucket_name, bucket_info, NULL, null_yield);
+    ceph::real_time mtime;
+    int r = store->get_bucket_info(obj_ctx, bucket_tenant, bucket_name,
+                                   bucket_info, &mtime, null_yield, pattrs);
     if (r < 0) {
+      set_err_msg(err_msg, "failed to fetch bucket info for bucket=" + bucket_name);
       ldout(store->ctx(), 0) << "could not get bucket info for bucket=" << bucket_name << dendl;
       return r;
     }
@@ -809,8 +961,10 @@ int RGWBucket::init(RGWRados *storage, RGWBucketAdminOpState& op_state)

   if (!user_id.empty()) {
     int r = rgw_get_user_info_by_uid(store, user_id, user_info);
-    if (r < 0)
+    if (r < 0) {
+      set_err_msg(err_msg, "failed to fetch user info");
       return r;
+    }

     op_state.display_name = user_info.display_name;
   }
@@ -819,7 +973,8 @@ int RGWBucket::init(RGWRados *storage, RGWBucketAdminOpState& op_state)
   return 0;
 }
-int RGWBucket::link(RGWBucketAdminOpState& op_state, std::string *err_msg)
+int RGWBucket::link(RGWBucketAdminOpState& op_state,
+                    map<string, bufferlist>& attrs, std::string *err_msg)
 {
   if (!op_state.is_user_op()) {
     set_err_msg(err_msg, "empty user id");
@@ -827,99 +982,155 @@ int RGWBucket::link(RGWBucketAdminOpState& op_state, std::string *err_msg)
   }

   string bucket_id = op_state.get_bucket_id();
-  if (bucket_id.empty()) {
-    set_err_msg(err_msg, "empty bucket instance id");
-    return -EINVAL;
-  }

   std::string display_name = op_state.get_user_display_name();
-  rgw_bucket bucket = op_state.get_bucket();
+  rgw_bucket& bucket = op_state.get_bucket();
+  if (!bucket_id.empty() && bucket_id != bucket.bucket_id) {
+    set_err_msg(err_msg,
+	"specified bucket id does not match " + bucket.bucket_id);
+    return -EINVAL;
+  }
+  rgw_bucket old_bucket = bucket;
+  bucket.tenant = tenant;
+  if (!op_state.new_bucket_name.empty()) {
+    auto pos = op_state.new_bucket_name.find('/');
+    if (pos != string::npos) {
+      bucket.tenant = op_state.new_bucket_name.substr(0, pos);
+      bucket.name = op_state.new_bucket_name.substr(pos + 1);
+    } else {
+      bucket.name = op_state.new_bucket_name;
+    }
+  }

   const rgw_pool& root_pool = store->svc.zone->get_zone_params().domain_root;
   std::string bucket_entry;
   rgw_make_bucket_entry_name(tenant, bucket_name, bucket_entry);
   rgw_raw_obj obj(root_pool, bucket_entry);
   RGWObjVersionTracker objv_tracker;
+  RGWObjVersionTracker old_version = bucket_info.objv_tracker;

-  map<string, bufferlist> attrs;
-  RGWBucketInfo bucket_info;
+  map<string, bufferlist>::iterator aiter = attrs.find(RGW_ATTR_ACL);
+  if (aiter == attrs.end()) {
+	// should never happen; only pre-argonaut buckets lacked this.
+    ldout(store->ctx(), 0) << "WARNING: can't bucket link because no acl on bucket=" << old_bucket.name << dendl;
+    set_err_msg(err_msg,
+	"While crossing the Anavros you have displeased the goddess Hera."
+	"  You must sacrifice your ancient bucket " + bucket.bucket_id);
+    return -EINVAL;
+  }
+  bufferlist aclbl = aiter->second;
+  RGWAccessControlPolicy policy;
+  ACLOwner owner;
+  try {
+    auto iter = aclbl.cbegin();
+    decode(policy, iter);
+    owner = policy.get_owner();
+  } catch (buffer::error& err) {
+    set_err_msg(err_msg, "couldn't decode policy");
+    return -EIO;
+  }

-  auto obj_ctx = store->svc.sysobj->init_obj_ctx();
-  int r = store->get_bucket_instance_info(obj_ctx, bucket, bucket_info, NULL, &attrs, null_yield);
+  int r = rgw_unlink_bucket(store, owner.get_id(),
+	old_bucket.tenant, old_bucket.name, false);
+  if (r < 0) {
+    set_err_msg(err_msg, "could not unlink policy from user " + owner.get_id().to_str());
+    return r;
+  }

-  map<string, bufferlist>::iterator aiter = attrs.find(RGW_ATTR_ACL);
-  if (aiter != attrs.end()) {
-    bufferlist aclbl = aiter->second;
-    RGWAccessControlPolicy policy;
-    ACLOwner owner;
-    try {
-      auto iter = aclbl.cbegin();
-      decode(policy, iter);
-      owner = policy.get_owner();
-    } catch (buffer::error& err) {
-      set_err_msg(err_msg, "couldn't decode policy");
-      return -EIO;
-    }
+  // now update the user for the bucket...
+  if (display_name.empty()) {
+    ldout(store->ctx(), 0) << "WARNING: user " << user_info.user_id << " has no display name set" << dendl;
+  }

-    r = rgw_unlink_bucket(store, owner.get_id(), bucket.tenant, bucket.name, false);
+  RGWAccessControlPolicy policy_instance;
+  policy_instance.create_default(user_info.user_id, display_name);
+  owner = policy_instance.get_owner();
+
+  aclbl.clear();
+  policy_instance.encode(aclbl);
+
+  if (bucket == old_bucket) {
+    r = rgw_set_bucket_acl(store, owner, bucket, bucket_info, aclbl);
     if (r < 0) {
-      set_err_msg(err_msg, "could not unlink policy from user " + owner.get_id().to_str());
+      set_err_msg(err_msg, "failed to set new acl");
       return r;
     }

-    // now update the user for the bucket...
-    if (display_name.empty()) {
-      ldout(store->ctx(), 0) << "WARNING: user " << user_info.user_id << " has no display name set" << dendl;
-    }
-    policy.create_default(user_info.user_id, display_name);
-
-    owner = policy.get_owner();
-    r = store->set_bucket_owner(bucket_info.bucket, owner);
-    if (r < 0) {
-      set_err_msg(err_msg, "failed to set bucket owner: " + cpp_strerror(-r));
-      return r;
-    }
-
-    // ...and encode the acl
-    aclbl.clear();
-    policy.encode(aclbl);
-
-    auto sysobj = obj_ctx.get_obj(obj);
-    r = sysobj.wop()
-              .set_objv_tracker(&objv_tracker)
-              .write_attr(RGW_ATTR_ACL, aclbl, null_yield);
-    if (r < 0) {
-      return r;
-    }
-
-    RGWAccessControlPolicy policy_instance;
-    policy_instance.create_default(user_info.user_id, display_name);
-    aclbl.clear();
-    policy_instance.encode(aclbl);
-
-    rgw_raw_obj obj_bucket_instance;
-    store->get_bucket_instance_obj(bucket, obj_bucket_instance);
-    auto inst_sysobj = obj_ctx.get_obj(obj_bucket_instance);
-    r = inst_sysobj.wop()
-                   .set_objv_tracker(&objv_tracker)
-                   .write_attr(RGW_ATTR_ACL, aclbl, null_yield);
-    if (r < 0) {
-      return r;
-    }
-
-    r = rgw_link_bucket(store, user_info.user_id, bucket_info.bucket,
-                        ceph::real_time());
+  } else {
+    attrs[RGW_ATTR_ACL] = aclbl;
+    bucket_info.bucket = bucket;
+    bucket_info.owner = user_info.user_id;
+    bucket_info.objv_tracker.version_for_read()->ver = 0;
+    r = store->put_bucket_instance_info(bucket_info, true, real_time(), &attrs);
+    if (r < 0) {
+      set_err_msg(err_msg, "ERROR: failed writing bucket instance info: " + cpp_strerror(-r));
+      return r;
+    }
+  }
+
+  RGWBucketEntryPoint ep;
+  ep.bucket = bucket_info.bucket;
+  ep.owner = user_info.user_id;
+  ep.creation_time = bucket_info.creation_time;
+  ep.linked = true;
+  map<string, bufferlist> ep_attrs;
+  rgw_ep_info ep_data{ep, ep_attrs};
+
+  r = rgw_link_bucket(store, user_info.user_id, bucket_info.bucket,
+	ceph::real_time(), true, &ep_data);
   if (r < 0) {
+    set_err_msg(err_msg, "failed to relink bucket");
     return r;
   }
+  if (bucket != old_bucket) {
+    RGWObjVersionTracker ep_version;
+    *ep_version.version_for_read() = bucket_info.ep_objv;
+    // like RGWRados::delete_bucket -- excepting no bucket_index work.
+    r = rgw_bucket_delete_bucket_obj(store,
+	old_bucket.tenant, old_bucket.name, ep_version);
+    if (r < 0) {
+      set_err_msg(err_msg, "failed to unlink old bucket endpoint " + old_bucket.tenant + "/" + old_bucket.name);
+      return r;
+    }
+    string entry = old_bucket.get_key();
+    r = rgw_bucket_instance_remove_entry(store, entry, &old_version);
+    if (r < 0) {
+      set_err_msg(err_msg, "failed to unlink old bucket info " + entry);
+      return r;
+    }
+  }

   return 0;
 }

+int RGWBucket::chown(RGWBucketAdminOpState& op_state,
+	map<string, bufferlist>& attrs, const string& marker, std::string *err_msg)
+{
+  //after bucket link
+  rgw_bucket& bucket = op_state.get_bucket();
+  tenant = bucket.tenant;
+  bucket_name = bucket.name;
+
+  RGWBucketInfo bucket_info;
+  RGWSysObjectCtx sys_ctx = store->svc.sysobj->init_obj_ctx();
+
+  int ret = store->get_bucket_info(sys_ctx, tenant, bucket_name, bucket_info, NULL, null_yield, &attrs);
+  if (ret < 0) {
+    set_err_msg(err_msg, "bucket info failed: tenant: " + tenant + "bucket_name: " + bucket_name + " " + cpp_strerror(-ret));
+    return ret;
+  }
+
+  RGWUserInfo user_info;
+  ret = rgw_get_user_info_by_uid(store, bucket_info.owner, user_info);
+  if (ret < 0) {
+    set_err_msg(err_msg, "user info failed: " + cpp_strerror(-ret));
+    return ret;
+  }
+
+  ret = rgw_bucket_chown(store, user_info, bucket_info, marker, attrs);
+  if (ret < 0) {
+    set_err_msg(err_msg, "Failed to change object ownership" + cpp_strerror(-ret));
+  }
+
+  return ret;
+}
 int RGWBucket::unlink(RGWBucketAdminOpState& op_state, std::string *err_msg)
 {
   rgw_bucket bucket = op_state.get_bucket();
@@ -1350,12 +1561,30 @@ int RGWBucketAdminOp::unlink(RGWRados *store, RGWBucketAdminOpState& op_state)
 int RGWBucketAdminOp::link(RGWRados *store, RGWBucketAdminOpState& op_state, string *err)
 {
   RGWBucket bucket;
+  map<string, bufferlist> attrs;

-  int ret = bucket.init(store, op_state);
+  int ret = bucket.init(store, op_state, err, &attrs);
   if (ret < 0)
     return ret;

-  return bucket.link(op_state, err);
+  return bucket.link(op_state, attrs, err);

 }

+int RGWBucketAdminOp::chown(RGWRados *store, RGWBucketAdminOpState& op_state, const string& marker, string *err)
+{
+  RGWBucket bucket;
+  map<string, bufferlist> attrs;
+
+  int ret = bucket.init(store, op_state, err, &attrs);
+  if (ret < 0)
+    return ret;
+
+  ret = bucket.link(op_state, attrs, err);
+  if (ret < 0)
+    return ret;
+
+  return bucket.chown(op_state, attrs, marker, err);
+
+}
@@ -200,14 +200,25 @@ extern int rgw_read_user_buckets(RGWRados *store,
                                  bool* is_truncated,
                                  uint64_t default_amount = 1000);

+struct rgw_ep_info {
+  RGWBucketEntryPoint &ep;
+  map<string, bufferlist>& attrs;
+  rgw_ep_info(RGWBucketEntryPoint &ep, map<string, bufferlist>& attrs)
+    : ep(ep), attrs(attrs) { }
+};
+
 extern int rgw_link_bucket(RGWRados* store,
                            const rgw_user& user_id,
                            rgw_bucket& bucket,
                            ceph::real_time creation_time,
-                           bool update_entrypoint = true);
+                           bool update_entrypoint = true,
+                           rgw_ep_info *pinfo = nullptr);
 extern int rgw_unlink_bucket(RGWRados *store, const rgw_user& user_id,
                              const string& tenant_name, const string& bucket_name, bool update_entrypoint = true);

+extern int rgw_bucket_chown(RGWRados* const store, RGWUserInfo& user_info, RGWBucketInfo& bucket_info,
+                            const string& marker, map<string, bufferlist>& attrs);
+extern int rgw_set_bucket_acl(RGWRados* store, ACLOwner& owner, rgw_bucket& bucket,
+                              RGWBucketInfo& bucket_info, bufferlist& bl);
 extern int rgw_remove_object(RGWRados *store, const RGWBucketInfo& bucket_info, const rgw_bucket& bucket, rgw_obj_key& key);
 extern int rgw_remove_bucket(RGWRados *store, rgw_bucket& bucket, bool delete_children, optional_yield y);
 extern int rgw_remove_bucket_bypass_gc(RGWRados *store, rgw_bucket& bucket, int concurrent_max, optional_yield y);
@@ -227,6 +238,7 @@ struct RGWBucketAdminOpState {
   std::string bucket_name;
   std::string bucket_id;
   std::string object_name;
+  std::string new_bucket_name;

   bool list_buckets;
   bool stat_buckets;
@@ -257,6 +269,9 @@ struct RGWBucketAdminOpState {
   void set_object(std::string& object_str) {
     object_name = object_str;
   }
+  void set_new_bucket_name(std::string& new_bucket_str) {
+    new_bucket_name = new_bucket_str;
+  }
   void set_quota(RGWQuotaInfo& value) {
     quota = value;
   }
@@ -312,7 +327,8 @@ class RGWBucket

 public:
   RGWBucket() : store(NULL), handle(NULL), failure(false) {}
-  int init(RGWRados *storage, RGWBucketAdminOpState& op_state);
+  int init(RGWRados *storage, RGWBucketAdminOpState& op_state,
+           std::string *err_msg = NULL, map<string, bufferlist> *pattrs = NULL);

   int check_bad_index_multipart(RGWBucketAdminOpState& op_state,
                                 RGWFormatterFlusher& flusher, std::string *err_msg = NULL);
@@ -328,7 +344,9 @@ public:
                 std::string *err_msg = NULL);

   int remove(RGWBucketAdminOpState& op_state, optional_yield y, bool bypass_gc = false, bool keep_index_consistent = true, std::string *err_msg = NULL);
-  int link(RGWBucketAdminOpState& op_state, std::string *err_msg = NULL);
+  int link(RGWBucketAdminOpState& op_state, map<string, bufferlist>& attrs,
+           std::string *err_msg = NULL);
+  int chown(RGWBucketAdminOpState& op_state, map<string, bufferlist>& attrs, const string& marker, std::string *err_msg = NULL);
   int unlink(RGWBucketAdminOpState& op_state, std::string *err_msg = NULL);
   int set_quota(RGWBucketAdminOpState& op_state, std::string *err_msg = NULL);

@@ -353,6 +371,7 @@ public:

   static int unlink(RGWRados *store, RGWBucketAdminOpState& op_state);
   static int link(RGWRados *store, RGWBucketAdminOpState& op_state, string *err_msg = NULL);
+  static int chown(RGWRados *store, RGWBucketAdminOpState& op_state, const string& marker, string *err_msg = NULL);

   static int check_index(RGWRados *store, RGWBucketAdminOpState& op_state,
                          RGWFormatterFlusher& flusher, optional_yield y);
@@ -1268,6 +1268,10 @@ struct rgw_bucket {
     return (tenant == b.tenant) && (name == b.name) && \
            (bucket_id == b.bucket_id);
   }
+  bool operator!=(const rgw_bucket& b) const {
+    return (tenant != b.tenant) || (name != b.name) ||
+           (bucket_id != b.bucket_id);
+  }
 };
 WRITE_CLASS_ENCODER(rgw_bucket)

@@ -1599,6 +1603,7 @@ struct RGWBucketEntryPoint

   void dump(Formatter *f) const;
+  void decode_json(JSONObj *obj);
   static void generate_test_instances(list<RGWBucketEntryPoint*>& o);
 };
 WRITE_CLASS_ENCODER(RGWBucketEntryPoint)
@@ -573,3 +573,32 @@ void objexp_hint_entry::generate_test_instances(list<objexp_hint_entry*>& o)
   o.push_back(it);
   o.push_back(new objexp_hint_entry);
 }
+
+void RGWBucketEntryPoint::generate_test_instances(list<RGWBucketEntryPoint*>& o)
+{
+  RGWBucketEntryPoint *bp = new RGWBucketEntryPoint();
+  init_bucket(&bp->bucket, "tenant", "bucket", "pool", ".index.pool", "marker", "10");
+  bp->owner = "owner";
+  bp->creation_time = ceph::real_clock::from_ceph_timespec({{2}, {3}});
+
+  o.push_back(bp);
+  o.push_back(new RGWBucketEntryPoint);
+}
+
+void rgw_user::generate_test_instances(list<rgw_user*>& o)
+{
+  rgw_user *u = new rgw_user("tenant", "user");
+
+  o.push_back(u);
+  o.push_back(new rgw_user);
+}
+
+void obj_version::generate_test_instances(list<obj_version*>& o)
+{
+  obj_version *v = new obj_version;
+  v->ver = 5;
+  v->tag = "tag";
+
+  o.push_back(v);
+  o.push_back(new obj_version);
+}
@@ -252,11 +252,16 @@ public:
 class RGWFrontendPauser : public RGWRealmReloader::Pauser {
   std::list<RGWFrontend*> &frontends;
   RGWRealmReloader::Pauser* pauser;
+  rgw::auth::ImplicitTenants& implicit_tenants;

 public:
   RGWFrontendPauser(std::list<RGWFrontend*> &frontends,
+                    rgw::auth::ImplicitTenants& implicit_tenants,
                     RGWRealmReloader::Pauser* pauser = nullptr)
-    : frontends(frontends), pauser(pauser) {}
+    : frontends(frontends),
+      pauser(pauser),
+      implicit_tenants(implicit_tenants) {
+  }

   void pause() override {
     for (auto frontend : frontends)
@@ -268,7 +273,7 @@ class RGWFrontendPauser : public RGWRealmReloader::Pauser {
     /* Initialize the registry of auth strategies which will coordinate
      * the dynamic reconfiguration. */
     auto auth_registry = \
-      rgw::auth::StrategyRegistry::create(g_ceph_context, store);
+      rgw::auth::StrategyRegistry::create(g_ceph_context, implicit_tenants, store);

     for (auto frontend : frontends)
       frontend->unpause_with_new_config(store, auth_registry);
@@ -1760,3 +1760,8 @@ void objexp_hint_entry::dump(Formatter *f) const
   encode_json("exp_time", ut, f);
   f->close_section();
 }
+
+void rgw_user::dump(Formatter *f) const
+{
+  ::encode_json("user", *this, f);
+}
@@ -420,8 +420,10 @@ int main(int argc, const char **argv)

   /* Initialize the registry of auth strategies which will coordinate
    * the dynamic reconfiguration. */
+  rgw::auth::ImplicitTenants implicit_tenant_context{g_conf()};
+  g_conf().add_observer(&implicit_tenant_context);
   auto auth_registry = \
-    rgw::auth::StrategyRegistry::create(g_ceph_context, store);
+    rgw::auth::StrategyRegistry::create(g_ceph_context, implicit_tenant_context, store);

   /* Header custom behavior */
   rest.register_x_headers(g_conf()->rgw_log_http_headers);
@@ -541,7 +543,7 @@ int main(int argc, const char **argv)

   // add a watcher to respond to realm configuration changes
   RGWPeriodPusher pusher(store);
-  RGWFrontendPauser pauser(fes, &pusher);
+  RGWFrontendPauser pauser(fes, implicit_tenant_context, &pusher);
   RGWRealmReloader reloader(store, service_map_meta, &pauser);

   RGWRealmWatcher realm_watcher(g_ceph_context, store->svc.zone->get_realm());
@@ -593,6 +595,7 @@ int main(int argc, const char **argv)
   rgw_shutdown_resolver();
   rgw_http_client_cleanup();
   rgw::curl::cleanup_curl();
+  g_conf().remove_observer(&implicit_tenant_context);

   rgw_perf_stop(g_ceph_context);
@@ -448,7 +448,7 @@ static int modify_obj_attr(RGWRados *store, struct req_state *s, const rgw_obj&
   RGWRados::Object::Read read_op(&op_target);

   read_op.params.attrs = &attrs;

   int r = read_op.prepare(s->yield);
   if (r < 0) {
     return r;
@@ -2380,5 +2380,4 @@ static inline int parse_value_and_bound(
   return 0;
 }

-
 #endif /* CEPH_RGW_OP_H */
@@ -131,17 +131,20 @@ void RGWOp_Bucket_Link::execute()
   std::string uid_str;
   std::string bucket;
   std::string bucket_id;
+  std::string new_bucket_name;

   RGWBucketAdminOpState op_state;

   RESTArgs::get_string(s, "uid", uid_str, &uid_str);
   RESTArgs::get_string(s, "bucket", bucket, &bucket);
   RESTArgs::get_string(s, "bucket-id", bucket_id, &bucket_id);
+  RESTArgs::get_string(s, "new-bucket-name", new_bucket_name, &new_bucket_name);

   rgw_user uid(uid_str);
   op_state.set_user_id(uid);
   op_state.set_bucket_name(bucket);
   op_state.set_bucket_id(bucket_id);
+  op_state.set_new_bucket_name(new_bucket_name);

   http_ret = RGWBucketAdminOp::link(store, op_state);
 }
@@ -1043,37 +1043,6 @@ public:
 };


-class S3AuthFactory : public rgw::auth::RemoteApplier::Factory,
-                      public rgw::auth::LocalApplier::Factory {
-  typedef rgw::auth::IdentityApplier::aplptr_t aplptr_t;
-  RGWRados* const store;
-
-public:
-  explicit S3AuthFactory(RGWRados* const store)
-    : store(store) {
-  }
-
-  aplptr_t create_apl_remote(CephContext* const cct,
-                             const req_state* const s,
-                             rgw::auth::RemoteApplier::acl_strategy_t&& acl_alg,
-                             const rgw::auth::RemoteApplier::AuthInfo &info
-  ) const override {
-    return aplptr_t(
-      new rgw::auth::RemoteApplier(cct, store, std::move(acl_alg), info,
-                                   cct->_conf->rgw_keystone_implicit_tenants));
-  }
-
-  aplptr_t create_apl_local(CephContext* const cct,
-                            const req_state* const s,
-                            const RGWUserInfo& user_info,
-                            const std::string& subuser,
-                            const boost::optional<uint32_t>& perm_mask) const override {
-    return aplptr_t(
-      new rgw::auth::LocalApplier(cct, user_info, subuser, perm_mask));
-  }
-};
-
-
 } /* namespace s3 */
 } /* namespace auth */
 } /* namespace rgw */
@@ -53,6 +53,7 @@ class DefaultStrategy : public rgw::auth::Strategy,
                         public rgw::auth::TokenExtractor,
                         public rgw::auth::WebIdentityApplier::Factory {
   RGWRados* const store;
+  ImplicitTenants& implicit_tenant_context;

   /* The engine. */
   const WebTokenEngine web_token_engine;
@@ -74,8 +75,10 @@ class DefaultStrategy : public rgw::auth::Strategy,

 public:
   DefaultStrategy(CephContext* const cct,
+                  ImplicitTenants& implicit_tenant_context,
                   RGWRados* const store)
     : store(store),
+      implicit_tenant_context(implicit_tenant_context),
       web_token_engine(cct,
                        static_cast<rgw::auth::TokenExtractor*>(this),
                        static_cast<rgw::auth::WebIdentityApplier::Factory*>(this)) {
@ -169,6 +169,7 @@ class DefaultStrategy : public rgw::auth::Strategy,
|
||||
public rgw::auth::LocalApplier::Factory,
|
||||
public rgw::auth::swift::TempURLApplier::Factory {
|
||||
RGWRados* const store;
|
||||
ImplicitTenants& implicit_tenant_context;
|
||||
|
||||
/* The engines. */
|
||||
const rgw::auth::swift::TempURLEngine tempurl_engine;
|
||||
@ -197,7 +198,8 @@ class DefaultStrategy : public rgw::auth::Strategy,
|
||||
rgw::auth::add_3rdparty(store, s->account_name,
|
||||
rgw::auth::add_sysreq(cct, store, s,
|
||||
rgw::auth::RemoteApplier(cct, store, std::move(extra_acl_strategy), info,
|
||||
cct->_conf->rgw_keystone_implicit_tenants)));
|
||||
implicit_tenant_context,
|
||||
rgw::auth::ImplicitTenants::IMPLICIT_TENANTS_SWIFT)));
|
||||
/* TODO(rzarzynski): replace with static_ptr. */
|
||||
return aplptr_t(new decltype(apl)(std::move(apl)));
|
||||
}
|
||||
@ -226,8 +228,10 @@ class DefaultStrategy : public rgw::auth::Strategy,
|
||||
|
||||
public:
|
||||
DefaultStrategy(CephContext* const cct,
|
||||
ImplicitTenants& implicit_tenant_context,
|
||||
RGWRados* const store)
|
||||
: store(store),
|
||||
implicit_tenant_context(implicit_tenant_context),
|
||||
tempurl_engine(cct,
|
||||
store,
|
||||
static_cast<rgw::auth::swift::TempURLApplier::Factory*>(this)),
|
||||
|
@@ -172,7 +172,8 @@ int rgw_store_user_info(RGWRados *store,
       /* check if swift mapping exists */
       RGWUserInfo inf;
       int r = rgw_get_user_info_by_swift(store, k.id, inf);
-      if (r >= 0 && inf.user_id.compare(info.user_id) != 0) {
+      if (r >= 0 && inf.user_id.compare(info.user_id) != 0 &&
+          (!old_info || inf.user_id.compare(old_info->user_id) != 0)) {
         ldout(store->ctx(), 0) << "WARNING: can't store user info, swift id (" << k.id
           << ") already mapped to another user (" << info.user_id << ")" << dendl;
         return -EEXIST;
@@ -188,7 +189,8 @@ int rgw_store_user_info(RGWRados *store,
       if (old_info && old_info->access_keys.count(iter->first) != 0)
         continue;
       int r = rgw_get_user_info_by_access_key(store, k.id, inf);
-      if (r >= 0 && inf.user_id.compare(info.user_id) != 0) {
+      if (r >= 0 && inf.user_id.compare(info.user_id) != 0 &&
+          (!old_info || inf.user_id.compare(old_info->user_id) != 0)) {
         ldout(store->ctx(), 0) << "WARNING: can't store user info, access key already mapped to another user" << dendl;
         return -EEXIST;
       }
@@ -222,12 +224,13 @@ int rgw_store_user_info(RGWRados *store,
     }
   }

+  const bool renamed = old_info && old_info->user_id != info.user_id;
   if (!info.access_keys.empty()) {
     map<string, RGWAccessKey>::iterator iter = info.access_keys.begin();
     for (; iter != info.access_keys.end(); ++iter) {
       RGWAccessKey& k = iter->second;
-      if (old_info && old_info->access_keys.count(iter->first) != 0)
-        continue;
+      if (old_info && old_info->access_keys.count(iter->first) != 0 && !renamed)
+        continue;

       ret = rgw_put_system_obj(store, store->svc.zone->get_zone_params().user_keys_pool, k.id,
                                link_bl, exclusive, NULL, real_time());
@@ -239,7 +242,7 @@ int rgw_store_user_info(RGWRados *store,
     map<string, RGWAccessKey>::iterator siter;
     for (siter = info.swift_keys.begin(); siter != info.swift_keys.end(); ++siter) {
       RGWAccessKey& k = siter->second;
-      if (old_info && old_info->swift_keys.count(siter->first) != 0)
+      if (old_info && old_info->swift_keys.count(siter->first) != 0 && !renamed)
         continue;

       ret = rgw_put_system_obj(store, store->svc.zone->get_zone_params().user_swift_pool, k.id,
@@ -1936,6 +1939,228 @@ int RGWUser::check_op(RGWUserAdminOpState& op_state, std::string *err_msg)
   return 0;
 }

+int RGWUser::execute_user_rename(RGWUserAdminOpState& op_state, std::string *err_msg)
+{
+  int ret;
+  bool populated = op_state.is_populated();
+
+  if (!op_state.has_existing_user() && !populated) {
+    set_err_msg(err_msg, "user not found");
+    return -ENOENT;
+  }
+
+  if (!populated) {
+    ret = init(op_state);
+    if (ret < 0) {
+      set_err_msg(err_msg, "unable to retrieve user info");
+      return ret;
+    }
+  }
+
+  rgw_user& old_uid = op_state.get_user_id();
+  RGWUserInfo old_user_info = op_state.get_user_info();
+
+  rgw_user& uid = op_state.get_new_uid();
+
+  RGWUserInfo existing_uinfo;
+  if (!uid.empty()) {
+    ret = rgw_get_user_info_by_uid(store, uid, existing_uinfo);
+    if (ret >= 0) {
+      set_err_msg(err_msg, "user name given by --new-uid already exists");
+      return -EEXIST;
+    }
+  }
+
+  if (old_uid.tenant != uid.tenant) {
+    set_err_msg(err_msg, "users have to be under the same tenant namespace"
+                + old_uid.tenant + "!=" + uid.tenant);
+    return -EINVAL;
+  }
+
+  string display_name = old_user_info.display_name;
+  RGWUserAdminOpState new_op_state;
+  new_op_state.set_user_id(uid);
+
+  std::string subprocess_msg;
+  ret = execute_rename(new_op_state, old_user_info, &subprocess_msg);
+  if (ret < 0) {
+    set_err_msg(err_msg, "unable to create new user, " + subprocess_msg);
+    return ret;
+  }
+
+  RGWUserInfo user_info;
+  ret = rgw_get_user_info_by_uid(store, uid, user_info);
+  if (ret < 0) {
+    set_err_msg(err_msg, "failed to fetch user info");
+    return ret;
+  }
+
+  ACLOwner owner;
+  RGWAccessControlPolicy policy_instance;
+  policy_instance.create_default(uid, display_name);
+  owner = policy_instance.get_owner();
+  bufferlist aclbl;
+  policy_instance.encode(aclbl);
+
+  //unlink and link buckets to new user
+  bool is_truncated = false;
+  string marker;
+  string obj_marker;
+  CephContext *cct = store->ctx();
+  size_t max_buckets = cct->_conf->rgw_list_buckets_max_chunk;
+
+  do {
+    RGWUserBuckets buckets;
+    int ret = rgw_read_user_buckets(store, old_uid, buckets, marker, string(),
+                                    max_buckets, false, &is_truncated);
+    if (ret < 0) {
+      set_err_msg(err_msg, "unable to read bucket info of user");
+      return ret;
+    }
+
+    map<string, bufferlist> attrs;
+    map<std::string, RGWBucketEnt>& m = buckets.get_buckets();
+    std::map<std::string, RGWBucketEnt>::iterator it;
+
+    for (it = m.begin(); it != m.end(); ++it) {
+      RGWBucketEnt obj = it->second;
+      ret = rgw_unlink_bucket(store, old_uid, obj.bucket.tenant, obj.bucket.name);
+      if (ret < 0) {
+        set_err_msg(err_msg, "error unlinking bucket " + cpp_strerror(-ret));
+        return ret;
+      }
+
+      marker = it->first;
+
+      RGWBucketInfo bucket_info;
+      RGWSysObjectCtx sys_ctx = store->svc.sysobj->init_obj_ctx();
+
+      ret = store->get_bucket_info(sys_ctx, obj.bucket.tenant, obj.bucket.name,
+                                   bucket_info, NULL, null_yield, &attrs);
+      if (ret < 0) {
+        set_err_msg(err_msg, "failed to fetch bucket info for bucket= " + obj.bucket.name);
+        return ret;
+      }
+
+      ret = rgw_set_bucket_acl(store, owner, obj.bucket, bucket_info, aclbl);
+      if (ret < 0) {
+        set_err_msg(err_msg, "failed to set acl on bucket " + obj.bucket.name);
+        return ret;
+      }
+
+      RGWBucketEntryPoint ep;
+      ep.bucket = bucket_info.bucket;
+      ep.owner = uid;
+      ep.creation_time = bucket_info.creation_time;
+      ep.linked = true;
+      map<string, bufferlist> ep_attrs;
+      rgw_ep_info ep_data{ep, ep_attrs};
+
+      ret = rgw_link_bucket(store, uid, bucket_info.bucket,
+                            ceph::real_time(), true, &ep_data);
+      if (ret < 0) {
+        set_err_msg(err_msg, "failed to link bucket " + obj.bucket.name + " to new user");
+        return ret;
+      }
+
+      RGWBucketInfo new_bucket_info;
+      ret = store->get_bucket_info(sys_ctx, obj.bucket.tenant, obj.bucket.name,
+                                   new_bucket_info, NULL, null_yield, &attrs);
+      if (ret < 0) {
+        set_err_msg(err_msg, "failed to fetch bucket info for bucket= " + obj.bucket.name);
+        return ret;
+      }
+
+      ret = rgw_bucket_chown(store, user_info, new_bucket_info, obj_marker, attrs);
+      if (ret < 0) {
+        set_err_msg(err_msg, "failed to run bucket chown" + cpp_strerror(-ret));
+        return ret;
+      }
+    }
+
+  } while (is_truncated);
+
+  // delete only the old user index without calling execute_remove()
+  string buckets_obj_id;
+  rgw_get_buckets_obj(old_uid, buckets_obj_id);
+  rgw_raw_obj uid_bucks(store->svc.zone->get_zone_params().user_uid_pool, buckets_obj_id);
+  ldout(store->ctx(), 10) << "removing user buckets index" << dendl;
+  auto obj_ctx = store->svc.sysobj->init_obj_ctx();
+  auto sysobj = obj_ctx.get_obj(uid_bucks);
+  ret = sysobj.wop().remove(null_yield);
+  if (ret < 0 && ret != -ENOENT) {
+    ldout(store->ctx(), 0) << "ERROR: could not remove " << old_uid << ":" << uid_bucks << ", should be fixed (err=" << ret << ")" << dendl;
+    return ret;
+  }
+
+  string key;
+  old_uid.to_str(key);
+
+  rgw_raw_obj uid_obj(store->svc.zone->get_zone_params().user_uid_pool, key);
+  ldout(store->ctx(), 10) << "removing user index: " << old_uid << dendl;
+  ret = store->meta_mgr->remove_entry(user_meta_handler, key, &op_state.objv);
+  if (ret < 0 && ret != -ENOENT && ret != -ECANCELED) {
+    ldout(store->ctx(), 0) << "ERROR: could not remove " << old_uid << ":" << uid_obj << ", should be fixed (err=" << ret << ")" << dendl;
+    return ret;
+  }
+
+  return 0;
+}
+
+int RGWUser::execute_rename(RGWUserAdminOpState& op_state, RGWUserInfo& old_user_info, std::string *err_msg)
+{
+  std::string subprocess_msg;
+  int ret = 0;
+
+  rgw_user& user_id = op_state.get_user_id();
+
+  RGWUserInfo user_info;
+  user_info = old_user_info;
+  user_info.user_id = user_id;
+
+  // update swift_keys with new user id
+  auto modify_keys = user_info.swift_keys;
+  map<string, RGWAccessKey>::iterator it;
+
+  user_info.swift_keys.clear();
+
+  for (it = modify_keys.begin(); it != modify_keys.end(); it++) {
+
+    RGWAccessKey old_key;
+    old_key = it->second;
+
+    std::string id;
+    user_id.to_str(id);
+    id.append(":");
+    id.append(old_key.subuser);
+
+    old_key.id = id;
+    user_info.swift_keys[id] = old_key;
+  }
+
+  op_state.set_user_info(user_info);
+  op_state.set_initialized();
+
+  // update the helper objects
+  ret = init_members(op_state);
+  if (ret < 0) {
+    set_err_msg(err_msg, "unable to initialize user");
+    return ret;
+  }
+
+  ret = rgw_store_user_info(store, user_info, &old_info, &op_state.objv, real_time(), false);
+  if (ret < 0) {
+    set_err_msg(err_msg, "unable to store user info");
+    return ret;
+  }
+
+  old_info = user_info;
+  set_populated();
+
+  return 0;
+}
+
+
 int RGWUser::execute_add(RGWUserAdminOpState& op_state, std::string *err_msg)
 {
   std::string subprocess_msg;
@@ -2063,6 +2288,7 @@ int RGWUser::execute_add(RGWUserAdminOpState& op_state, std::string *err_msg)
   return 0;
 }

+
 int RGWUser::add(RGWUserAdminOpState& op_state, std::string *err_msg)
 {
   std::string subprocess_msg;
@@ -2083,6 +2309,26 @@ int RGWUser::add(RGWUserAdminOpState& op_state, std::string *err_msg)
   return 0;
 }

+int RGWUser::rename(RGWUserAdminOpState& op_state, std::string *err_msg)
+{
+  std::string subprocess_msg;
+  int ret;
+
+  ret = check_op(op_state, &subprocess_msg);
+  if (ret < 0) {
+    set_err_msg(err_msg, "unable to parse parameters, " + subprocess_msg);
+    return ret;
+  }
+
+  ret = execute_user_rename(op_state, &subprocess_msg);
+  if (ret < 0) {
+    set_err_msg(err_msg, "unable to rename user, " + subprocess_msg);
+    return ret;
+  }
+
+  return 0;
+}
+
 int RGWUser::execute_remove(RGWUserAdminOpState& op_state, std::string *err_msg, optional_yield y)
 {
   int ret;
@@ -159,6 +159,7 @@ struct RGWUserAdminOpState {
   rgw_user user_id;
   std::string user_email;
   std::string display_name;
+  rgw_user new_user_id;
   int32_t max_buckets;
   __u8 suspended;
   __u8 admin;
@@ -257,6 +258,13 @@ struct RGWUserAdminOpState {
     user_id = id;
   }

+  void set_new_user_id(rgw_user& id) {
+    if (id.empty())
+      return;
+
+    new_user_id = id;
+  }
+
   void set_user_email(std::string& email) {
    /* always lowercase email address */
     boost::algorithm::to_lower(email);
@@ -446,6 +454,7 @@ struct RGWUserAdminOpState {
   std::string get_caps() { return caps; }
   std::string get_user_email() { return user_email; }
   std::string get_display_name() { return display_name; }
+  rgw_user& get_new_uid() { return new_user_id; }
   map<int, std::string>& get_temp_url_keys() { return temp_url_keys; }

   RGWUserInfo& get_user_info() { return info; }
@@ -673,6 +682,8 @@ private:
   int execute_remove(RGWUserAdminOpState& op_state,
                      std::string *err_msg, optional_yield y);
   int execute_modify(RGWUserAdminOpState& op_state, std::string *err_msg);
+  int execute_user_rename(RGWUserAdminOpState& op_state, std::string *err_msg);
+  int execute_rename(RGWUserAdminOpState& op_state, RGWUserInfo& old_user_info, std::string *err_msg);

 public:
   RGWUser();
@@ -693,8 +704,11 @@ public:

   /* API Contracted Methods */
   int add(RGWUserAdminOpState& op_state, std::string *err_msg = NULL);

   int remove(RGWUserAdminOpState& op_state, optional_yield y, std::string *err_msg = NULL);

+  int rename(RGWUserAdminOpState& op_state, std::string *err_msg = NULL);
+
   /* remove an already populated RGWUser */
   int remove(std::string *err_msg = NULL);
@@ -4,6 +4,7 @@
   user create                create a new user
   user modify                modify user
   user info                  get user info
+  user rename                rename user
   user rm                    remove user
   user suspend               suspend a user
   user enable                re-enable user after suspension
@@ -25,6 +26,7 @@
   bucket stats               returns bucket statistics
   bucket rm                  remove bucket
   bucket check               check bucket index
+  bucket chown               link bucket to specified user and update its object ACLs
   bucket reshard             reshard bucket
   bucket rewrite             rewrite all objects in the specified bucket
   bucket sync disable        disable bucket sync
@@ -162,6 +164,7 @@
 options:
    --tenant=<tenant>         tenant name
    --uid=<id>                user id
+   --new-uid=<id>            new user id
    --subuser=<name>          subuser name
    --access-key=<key>        S3 access key
    --email=<email>           user's email address
@@ -185,6 +188,8 @@
    --start-date=<date>       start date in the format yyyy-mm-dd
    --end-date=<date>         end date in the format yyyy-mm-dd
    --bucket-id=<bucket-id>   bucket id
+   --bucket-new-name=<bucket>
+                             for bucket link: optional new name
    --shard-id=<shard-id>     optional for:
                                mdlog list
                                data sync status