we raise UnicodeDecodeError on seeing non-ascii args if we fail to match
them against any command signature. instead, we should use a unicode string
to represent the error in that case. please note, the exception is
not printed at all in the real world. =)
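A minimal sketch of the idea, assuming a Python 2 context (the helper name
is hypothetical, not the actual ceph_argparse code):
```python
def format_bad_args(args):
    # build a unicode error string from the raw argv; replace any
    # undecodable bytes instead of letting an implicit str() conversion
    # raise UnicodeDecodeError
    return u' '.join(a.decode('utf-8', 'replace') if isinstance(a, str)
                     else a for a in args)
```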
Fixes: http://tracker.ceph.com/issues/12287
Signed-off-by: Kefu Chai <kchai@redhat.com>
* ceph-base: use ${python:Depends} instead of listing the python
  dependencies manually; dh_python2 will scan the requirements
  of ceph-detect-init and fill in the subst var for us.
* ceph-common: add ${python:Depends}, as it packages ceph
  and the ceph-brag client.
* ceph-osd: it packages ceph-disk, so ${python:Depends} should also be
  added to its dependencies; dh_python2 will figure them out. See the
  sketch after this list.
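For illustration, a hedged sketch of what the ceph-osd stanza in
debian/control could look like after this change (fields abbreviated; the
exact dependency list is an assumption):
```
Package: ceph-osd
Architecture: linux-any
Depends: ceph-base (= ${binary:Version}),
         ${misc:Depends},
         ${python:Depends},
         ${shlibs:Depends}
```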
Signed-off-by: Kefu Chai <kchai@redhat.com>
* debian/control:
  as we have already listed the linked libraries in the Depends section (for
  example, python-rados depends on librados), we don't need `dpkg-shlibdeps`
  to figure out the shared library substvar dependencies for us. by removing
  them, we can silence warnings like
```
warning: dpkg-shlibdeps: package could avoid a useless dependency if
debian/python-rados/usr/lib/python2.7/dist-packages/rados.x86_64-linux-gnu.so
was not linked against libpthread.so.0 (it uses none of the library's
symbols)
```
-lpthread is introduced by `python-config --ldflags`, but it turns out we
are not using any symbols from pthread in the extension directly, and
pthread is shipped with glibc. so this does not add any extra
dependency to the python-* packages, but it's desirable to have fewer
warnings.
* debian/rules: exclude the python-* packages from dh_shlibdeps, as we will
  not use it to prepare the shlib deps substvars for these packages any
  more; see the sketch below.
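A hedged sketch of the debian/rules change; `-N` is the stock debhelper
option to exclude a package, and the exact package list here is
illustrative:
```
override_dh_shlibdeps:
	dh_shlibdeps -Npython-rados -Npython-rbd -Npython-cephfs
```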
Signed-off-by: Kefu Chai <kchai@redhat.com>
some packages do not ship python modules or scripts, so override
dh_python2 to exclude them; see the sketch after the warning below.
this change silences warnings like:
```
warning: dpkg-gencontrol: package ceph-mon: unused substitution
variable ${python:Provides}
```
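A hedged sketch of such an override; `-p` is the stock debhelper option to
act only on the named packages, and the package list below is illustrative:
```
override_dh_python2:
	dh_python2 -pceph-base -pceph-common -pceph-osd
```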
Signed-off-by: Kefu Chai <kchai@redhat.com>
Matching the Amazon S3 interface, "PUT Bucket lifecycle" and
"DELETE Bucket lifecycle" have been implemented;
"GET Bucket lifecycle" is not implemented yet, as s3cmd has not
implemented it either.
The feature's main point is to remove expired files, once per day.
Transferring files from a hot tier to a cold tier is not supported.
ToDo: maybe transferring from a replicated pool to an EC pool, or
from an SSD pool to a SATA pool, would be valuable.
All buckets that should run lifecycle are now put into shard
objects in the .rgw.lc pool.
lifecycle config file format:
<LifecycleConfiguration>
    <Rule>
        <ID>sample-rule</ID>
        <Prefix></Prefix>
        <Status>enable</Status>
        <Expiration>
            <Days>1</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
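For example, the config above could then be exercised with s3cmd (hedged:
assuming a version that ships the `setlifecycle`/`dellifecycle` commands):
```
s3cmd setlifecycle lifecycle.xml s3://mybucket   # PUT Bucket lifecycle
s3cmd dellifecycle s3://mybucket                 # DELETE Bucket lifecycle
```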
Signed-off-by: Ji Chen <insomnia@139.com>
librbd will replay these ops when opening an image, so rbd-mirror
should also ensure these ops are replayed.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
When multiple pools are being replicated, start the shut down
process concurrently across all pool replayers.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Ensure that, by default, IO journal events are broken up into manageable
sizes when factoring in that an rbd-mirror daemon might be replaying
events from thousands of images.
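A hedged illustration of tuning this limit in ceph.conf, assuming it is
exposed as `rbd journal max payload bytes` (the value shown is only an
example, not a recommendation):
```
[client]
    # cap the size of a single journaled IO event, in bytes
    rbd journal max payload bytes = 16384
```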
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Operation request "op finish" events should not be fire-and-forget.
Instead, ensure the event is committed to the journal before
completing the op. This will avoid several possible split-brain
events during mirroring.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
When streaming playback, avoid the unnecessary watch delay when
one or more entries have been pruned.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
If a future flush is requested at the exact same moment that an
overflow is detected, the two threads will deadlock since locks
are not taken in a consistent order.
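An illustrative sketch of the fix pattern in Python (the lock names are
hypothetical; the real code is the C++ journal library): every path must
take the two locks in the same global order, so neither thread can hold one
lock while waiting for the other.
```python
import threading

object_lock = threading.Lock()   # lock A
journal_lock = threading.Lock()  # lock B

def flush_future():
    with object_lock:        # A first, everywhere
        with journal_lock:   # B second, everywhere
            pass             # ... flush the pending future ...

def handle_overflow():
    with object_lock:        # same order as flush_future(): no deadlock
        with journal_lock:
            pass             # ... advance to the next journal object ...
```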
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Debugging rbd-mirror potentially involves thousands of journals running
concurrently. The instance address will correlate log
messages between journals.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Now that it's possible for the ObjectPlayer to only read a
partial subset of available entries, the JournalPlayer needs
to detect that more entries might be available.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Previously it was prefetching up to two object sets' worth of journal
data objects, which consumed too much memory.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Journal playback needs to read at least one full entry, which was
previously bounded only by the maximum object size. In memory-constrained
environments, this new optional limit sets a fixed upper bound on
memory usage.
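A hedged example of what enabling such a limit might look like in
ceph.conf; the option name `rbd journal max fetch bytes` is an assumption
on my part:
```
[client]
    # bound how many bytes a single journal object fetch may read
    # during playback (0 would mean no limit)
    rbd journal max fetch bytes = 32768
```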
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Support fetching the full object or incremental chunks (with a
minimum of at least a single decoded entry if available).
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
Additional runtime configuration settings will be needed. The
new class will avoid the need to expand the constructor.
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
This file documents how to configure RGW to use Apache/FastCGI, so rename
the file and modify the title and intro to make that clear.
Also, add a note that CGI can pose a security risk - e.g. http://httpoxy.org
Signed-off-by: Nathan Cutler <ncutler@suse.com>
This is obviously not the proper place to check the allocation
result. Also, the original check logic is more portable, so we
drop the "assert(r == 0)" here.
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
If max_alloc_size is 0 (meaning no limit), the original logic will
always hardcode need_blks to 0, which is incorrect.
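An illustrative sketch of the corrected behavior in Python (the function
and its arguments are assumptions for illustration, not the actual
allocator code):
```python
def compute_need_blks(length, block_size, max_alloc_size):
    # max_alloc_size == 0 means "no limit", so only clamp when non-zero
    if max_alloc_size > 0:
        length = min(length, max_alloc_size)
    # round up to whole blocks
    return (length + block_size - 1) // block_size
```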
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>