When retrieving the status of a mirrored image from the Python rbd
library, a TypeError is raised.
*To Reproduce:*
Set up two Ceph clusters for block storage, and configure image
mirroring between their pools. Create at least one image with mirroring
enabled, then run the following script on either cluster (once the image
exists everywhere):
```python
import rados, rbd
CONF_PATH = "YOUR-CONF-PATH"
POOL_NAME = "YOUR-POOL-NAME"
IMAGE_LABEL = "YOUR-IMAGE-LABEL"
with rados.Rados(conffile=CONF_PATH) as cluster:
    with cluster.open_ioctx(POOL_NAME) as ioctx:
        with rbd.Image(ioctx, IMAGE_LABEL) as image:
            image.mirror_image_get_status()
```
This will result in the following stack trace:
```
Traceback (most recent call last):
File "repo-bug.py", line 10, in <module>
image.mirror_image_get_status()
File "rbd.pyx", line 3363, in rbd.requires_not_closed.wrapper
File "rbd.pyx", line 5209, in rbd.Image.mirror_image_get_status
TypeError: list indices must be integers or slices, not str
```
Fixes: https://tracker.ceph.com/issues/51867
Signed-off-by: Will Smith <wsmith@linode.com>
back in 2623fec1cd, the variants of, for instance, ctz() were
consolidated into a single template, so after that change ctz<>()
dispatches by the size of its argument. but the tests were not updated
accordingly.
in this change:
* the tests are updated to use the template.
* instead of using integer literal postfixes, use macros like
UINT64_C to define integer constants, for better portability across
architectures where integer widths *might* differ from amd64. this is
also more readable than postfixes like ULL here, as we really care
about the exact width of the integer when counting the leading zeros.
see the sketch below.
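for illustration, a minimal sketch of the idea (not the actual Ceph
helpers or their tests; names and signatures here are approximations):
a single ctz<>()-style template that dispatches on the width of its
argument, called with a UINT64_C constant instead of a ULL postfix.
```cpp
// sketch only: a width-dispatched count-trailing-zeros template and a
// test-style call site using UINT64_C instead of a ULL literal postfix.
#include <cassert>
#include <cstdint>
#include <type_traits>

template <typename T>
unsigned ctz(T v) {
  static_assert(std::is_unsigned_v<T>, "ctz expects an unsigned type");
  if (v == 0) {
    return sizeof(T) * 8;                // every bit is zero
  }
  if constexpr (sizeof(T) <= sizeof(unsigned int)) {
    return __builtin_ctz(v);             // 32-bit (and narrower) arguments
  } else {
    return __builtin_ctzll(v);           // 64-bit arguments
  }
}

int main() {
  // UINT64_C makes the operand width explicit no matter how wide
  // "unsigned long long" happens to be on the target architecture.
  assert(ctz(UINT64_C(0x0000000100000000)) == 32);
  assert(ctz(UINT32_C(0x00000001)) == 0);
  return 0;
}
```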
Signed-off-by: Kefu Chai <kchai@redhat.com>
it'd be easier for a static analyzer (like GCC) to reason about
whether a variable is initialized before being used.
this change also improves readability, and silences false alarms
like:
In file included from ../src/os/bluestore/BlueStore.h:42,
from ../src/os/bluestore/BlueStore.cc:26:
../src/common/bloom_filter.hpp: In member function 'void std::vector<_Tp, _Alloc>::_M_fill_insert(std::vector<_Tp, _Alloc>::iterator, std::vector<_Tp, _Alloc>::size_type, const value_type&) [with _Tp = bloom_filter; _Alloc = mempool::pool_allocator<mempool::mempool_bluestore_fsck, bloom_filter>]':
../src/common/bloom_filter.hpp:118:46: warning: '*((void*)(& __tmp)+8).bloom_filter::table_size_' may be used uninitialized in this function [-Wmaybe-uninitialized]
118 | mempool::bloom_filter::alloc_byte.deallocate(bit_table_, table_size_);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
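as an illustration only (this is not the actual bloom_filter class; the
member names bit_table_ and table_size_ come from the warning above,
everything else is a simplified assumption), default member
initializers make the "never used uninitialized" property obvious to
both readers and the analyzer:
```cpp
// sketch only: in-class default initializers instead of relying on every
// constructor path to set the members before they are read.
#include <cstddef>

class bloom_filter_sketch {
  unsigned char* bit_table_ = nullptr;   // was: declared without an initializer
  std::size_t table_size_ = 0;           // was: only set by some constructors
public:
  bloom_filter_sketch() = default;
  ~bloom_filter_sketch() {
    // with the initializers above, tearing down a default-constructed,
    // never-populated filter is trivially well-defined.
    delete[] bit_table_;
  }
};

int main() {
  bloom_filter_sketch f;  // destruction is safe: members are known-initialized
  return 0;
}
```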
Signed-off-by: Kefu Chai <kchai@redhat.com>
Improvements and some adaptations related to the Jenkins job.
Fixes: https://tracker.ceph.com/issues/51612
Signed-off-by: Alfonso Martínez <almartin@redhat.com>
this change silences the following false-positive warning:
In file included from ../src/include/encoding.h:41,
from ../src/kv/KeyValueDB.h:12,
from ../src/os/bluestore/bluestore_common.h:20,
from ../src/os/bluestore/BlueFS.cc:5:
../src/include/denc.h: In function ‘std::enable_if_t<(is_same_v<T, bluefs_extent_t> || is_same_v<T, const bluefs_extent_t>)> _denc_friend(T&, P&) [with T = bluefs_extent_t; P = ceph::buffer::v15_2_0::p$
../src/include/denc.h:639:11: warning: ‘shift’ may be used uninitialized in this function [-Wmaybe-uninitialized]
639 | shift += 7;
| ~~~~~~^~~~
../src/include/denc.h:613:7: note: ‘shift’ was declared here
613 | int shift;
| ^~~~~
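the pattern behind the warning, sketched as a plain varint decoder
(this is not the actual denc.h code; there, 'shift' is in fact set on
every path before it is read, which is why the warning is a false
positive): initializing 'shift' at its declaration leaves GCC nothing
to guess about.
```cpp
// sketch only: a little-endian base-128 varint decoder where `shift` is
// accumulated across loop iterations. initializing it at the declaration
// removes any path on which the compiler could suspect an indeterminate read.
#include <cstdint>
#include <cstdio>

static uint64_t decode_varint(const unsigned char*& p) {
  uint64_t v = 0;
  int shift = 0;   // was: `int shift;`, which trips -Wmaybe-uninitialized
  unsigned char byte;
  do {
    byte = *p++;
    v |= static_cast<uint64_t>(byte & 0x7f) << shift;
    shift += 7;
  } while (byte & 0x80);
  return v;
}

int main() {
  const unsigned char buf[] = {0xac, 0x02};  // 300 encoded as a varint
  const unsigned char* p = buf;
  std::printf("%llu\n",
              static_cast<unsigned long long>(decode_varint(p)));  // prints 300
  return 0;
}
```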
Signed-off-by: Kefu Chai <kchai@redhat.com>
The cluster has already multiplied by the full ratio before returning
the "max_avail".
Fixes: https://tracker.ceph.com/issues/50984
Signed-off-by: Xiubo Li <xiubli@redhat.com>
For kclient, it is write() that will return -ENOSPC, rather than fsync().
Fixes: https://tracker.ceph.com/issues/45434
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Setting pg_num to 8 is too small: some OSDs may not be covered by the
pools, while others may be overloaded. Remove the hardcoded pg_num here
and let the pg autoscale mode calculate it as needed, and at the same
time set pg_num_min to 64 to keep pg_num from becoming too small.
If an EC pool is used, most data in the test cases will go to the EC
pool and the primary replicated pool will only store a small amount of
metadata for all the files, so setting the target size ratio to 0.05
should be enough.
Fixes: https://tracker.ceph.com/issues/45434
Signed-off-by: Xiubo Li <xiubli@redhat.com>
Set the object_size to 1MB to make the objects distributed more evenly
among the OSDs.
Fixes: https://tracker.ceph.com/issues/45434
Signed-off-by: Xiubo Li <xiubli@redhat.com>
'ceph df detail' reports a column for DIRTY objects under POOLS even
though cache tiers are not being used. In a replicated or EC pool, all
objects in the pool are reported as logically DIRTY since they have
never been flushed.
We now display N/A for DIRTY objects if the pool is not a cache tier.
Signed-off-by: Deepika Upadhyay <dupadhya@redhat.com>
This doesn't normally happen, but did before the daemon inventory breakage
(see previous patches) was fixed.
Signed-off-by: Sage Weil <sage@newdream.net>
The bucket owner can always read/write to the bucket, so use those creds
for the export. This is less complicated than setting up a dedicated
user anyway.
Signed-off-by: Sage Weil <sage@newdream.net>
- clean up language
- move config hierarchy to the bottom (this is an implementation detail
that is only useful if managing ganesha externally)
Signed-off-by: Sage Weil <sage@newdream.net>