Since `test_zonegroup_remove` actually destroys a zonegroup, we can
filter it out and run the suite as
`nosetests -a '!destructive' ../path/to/test-multi.py`
to provision a multisite mstart cluster.
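For reference, a minimal sketch of how such a test could be tagged so that
the `-a '!destructive'` filter skips it (assuming the nose attrib plugin; the
body here is only a placeholder):
```
# tag the destructive test with a nose attribute so that
# `nosetests -a '!destructive'` leaves it out of the run
from nose.plugins.attrib import attr

@attr('destructive')
def test_zonegroup_remove():
    # ... would actually remove a zonegroup, so only run it on demand
    pass
```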
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
Currently, if we want to use the default options for rbd, we need to omit
RBD_IMAGE_OPTION_FEATURES, but if we want --image-shared, we need to overwrite
something based on the default value of the image options.
This patch introduces two flags in image_options: RBD_IMAGE_OPTION_FEATURES_SET
means we want to set some feature bits on top of the features taken from the
default, the parent, or the user, while RBD_IMAGE_OPTION_FEATURES_CLEAR clears
bits in the same way.
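Conceptually, a hedged sketch of the intended semantics (not the actual
librbd code; the function and argument names are illustrative only):
```
# sketch: start from the features inherited from the default, the parent,
# or the user, then apply the two new masks on top of that value
def resolve_features(base_features, features_set=0, features_clear=0):
    features = base_features
    features |= features_set      # RBD_IMAGE_OPTION_FEATURES_SET: turn bits on
    features &= ~features_clear   # RBD_IMAGE_OPTION_FEATURES_CLEAR: turn bits off
    return features
```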
Signed-off-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Currently, when there is no bucket in the cluster and a user executes the
`radosgw-admin user stats --uid=testid` command, radosgw-admin does not
print a meaningful error message.
With this fix it shows a proper, meaningful error message.
Fixes: http://tracker.ceph.com/issues/16444
Reported-by: Abhishek Lekshmanan <abhishek@suse.com>
Signed-off-by: Gaurav Kumar Garg <garg.gaurav52@gmail.com>
In current multisite scenarios, if a bucket is created in the master zone, we
end up storing multipart metadata in the `$source-zone.rgw.buckets.non-ec`
pool instead of the zone's own non-ec pool, so we additionally create this
pool and store multipart metadata entries in it. Also, if a bucket is created
in a secondary zone and we initiate a multipart upload before the mdlog is
synced with the master, we get errors during complete-multipart requests,
because the omap entries are split between the `$zone.rgw.buckets.non-ec` and
`$source-zone.rgw.buckets.non-ec` pools, which leads to a mismatch in the
total number of parts.
Fixes: http://tracker.ceph.com/issues/16712
Signed-off-by: Abhishek Lekshmanan <abhishek@suse.com>
We raise UnicodeDecodeError upon seeing non-ASCII args if we fail to match
them against any command signature. Instead, we should use a unicode string
to represent the error in that case. Please note, the exception is not
printed at all in the real world. =)
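For illustration, a hedged Python 2 sketch of the failure mode and the
intended fix (the variable names are made up):
```
# Python 2: formatting a non-ASCII byte string into a unicode template
# triggers an implicit ascii decode, which raises UnicodeDecodeError
arg = '\xe4\xb8\xad'                       # e.g. a non-ASCII CLI argument

try:
    msg = u'unknown command: %s' % arg     # implicit arg.decode('ascii')
except UnicodeDecodeError:
    # decode explicitly so the error message itself is a unicode string
    msg = u'unknown command: %s' % arg.decode('utf-8', 'replace')
```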
Fixes: http://tracker.ceph.com/issues/12287
Signed-off-by: Kefu Chai <kchai@redhat.com>
* ceph-base: use ${python:Depends} instead of listing the python
dependencies manually; dh_python2 will scan the requirements
of ceph-detect-init and fill in the substvar for us.
* ceph-common: add ${python:Depends}, as it packages the ceph CLI
and the ceph-brag client.
* ceph-osd: it packages ceph-disk, so ${python:Depends} should be added
to its dependencies; dh_python2 will figure them out (an illustrative
control fragment follows below).
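A hedged illustration of what such a Depends line might look like in
debian/control (the real stanzas list many more dependencies; this fragment
is not copied from the actual file):
```
Package: ceph-base
Depends: ${misc:Depends}, ${python:Depends}, ${shlibs:Depends}
```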
Signed-off-by: Kefu Chai <kchai@redhat.com>
* debian/control:
as we already list the linked libraries in the Depends section (for example,
python-rados depends on librados), we don't need `dpkg-shlibdeps` to
figure out the shared-library dependency substvars for us. By removing
them, we can silence warnings like
```
warning: dpkg-shlibdeps: package could avoid a useless dependency if
debian/python-rados/usr/lib/python2.7/dist-packages/rados.x86_64-linux-gnu.so
was not linked against libpthread.so.0 (it uses none of the library's
symbols)
```
-lpthread is introduced by `python-config --ldflags`, but it turns out we
are not using any pthread symbols in the extension directly, and pthread
is shipped as part of glibc, so this does not add any extra dependency to
the python-* packages; it is simply desirable to have fewer warnings.
* debian/rules: exclude the python-* packages from dh_shlibdeps, as we
will no longer use it to prepare the shlib-deps substvars for these
packages (see the sketch below).
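A hedged sketch of how that exclusion could be expressed in debian/rules
(the exact package list and options are illustrative, not the actual change):
```
# skip the python-* binding packages when computing shared-library
# dependency substvars (recipe line is tab-indented, as make requires)
override_dh_shlibdeps:
	dh_shlibdeps -Npython-rados -Npython-rbd -Npython-cephfs
```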
Signed-off-by: Kefu Chai <kchai@redhat.com>
Some packages do not package python modules or scripts, so override
dh_python2 to exclude them.
This change silences warnings like:
```
warning: dpkg-gencontrol: package ceph-mon: unused substitution
variable ${python:Provides}
```
Signed-off-by: Kefu Chai <kchai@redhat.com>
Like the Amazon S3 interface, "PUT Bucket lifecycle" and
"DELETE Bucket lifecycle" have been implemented;
"GET Bucket lifecycle" is not implemented yet, as s3cmd has not
implemented it either.
The feature's main point is to remove expired files on a daily basis.
Transitioning files from a hot tier to a cold tier is not supported.
ToDo: transitioning from a replicated pool to an EC pool, or
from an SSD pool to a SATA pool, may be valuable.
All buckets that should be processed by lifecycle are now recorded in
shard objects in the .rgw.lc pool.
Lifecycle config file format:
```
<LifecycleConfiguration>
  <Rule>
    <ID>sample-rule</ID>
    <Prefix></Prefix>
    <Status>enable</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```
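As a usage hint, a configuration like the above could presumably be applied
to a bucket with a recent enough s3cmd, e.g. (file and bucket names made up):
```
s3cmd setlifecycle lifecycle.xml s3://mybucket
s3cmd dellifecycle s3://mybucket
```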
Signed-off-by: Ji Chen <insomnia@139.com>