Remove seqdiag assets to determine whether the docs can be built if they
are absent. (Currently they cannot be built when they are present.) If
this works, then these diagrams will be replaced, probably with .png
files.
Signed-off-by: Zac Dover <zac.dover@proton.me>
crimson/ertr: let ErrVisitorT return plain value if ValueFuncT returns seastar::future
Reviewed-by: Samuel Just <sjust@redhat.com>
Reviewed-by: chunmei-liu <chunmei.liu@intel.com>
Reviewed-by: Yingxin Cheng <yingxin.cheng@intel.com>
Correct several misspellings of "S3 Select". Hat tip to Anthony D'Atri,
who caught this in an earlier PR.
Signed-off-by: Zac Dover <zac.dover@proton.me>
Edit the "Basic Workflow" section in doc/radosgw/s3select.rst.
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
Fixes: https://tracker.ceph.com/issues/61661
The valgrind leak indication is a false positive in this case:
it occurs because the libaio internal threads have not timed out yet
when radosgw is terminated.
```
man aio_init
...
aio_idle_time
This field specifies the amount of time in seconds that a worker thread
should wait for further requests before terminating, after having
completed a previous request. The default value is 1.
...
```
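For reference, this idle timeout is tunable through glibc's aio_init(3).
A minimal sketch, with illustrative values that are not part of this change:
```
/* Illustrative sketch only, not part of this change: tune the worker-thread
 * idle timeout quoted above via glibc's aio_init(3).  _GNU_SOURCE is needed
 * for the declarations of struct aioinit and aio_init(). */
#define _GNU_SOURCE
#include <aio.h>

int main(void)
{
    struct aioinit init = {0};
    init.aio_threads   = 4;   /* size of the worker-thread pool */
    init.aio_num       = 64;  /* expected number of simultaneous requests */
    init.aio_idle_time = 1;   /* seconds an idle worker waits before exiting
                                 (1 is the documented default) */
    aio_init(&init);          /* must be called before the first aio_* request */
    return 0;
}
```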
For the sake of teuthology, reducing the timeout, or waiting for 2 minutes
as below, would also prevent the leak report:
```
❯ env
LD_LIBRARY_PATH=/mnt/nvme5n1p1/src-git/ceph--up--master-clang/build/lib/:$LD_LIBRARY_PATH
PYTHONPATH=$PYTHONPATH:/mnt/nvme5n1p1/src-git/ceph--up--master-clang/build/lib/cython_modules/lib.3
RAGWEED_CONF=$(realpath ./ragweed.conf) RAGWEED_STAGES=prepare,check tox
-- -v |& ccze -Aonolookups ; sleep 2m | pv -t ; pkill radosgw
```
Signed-off-by: Mark Kogan <mkogan@redhat.com>
With the new implementation in messenger, the order of replacement reset
and accept events cannot be determined because they are from different
connections.
Modify the heartbeat logic to tolerate both cases.
Signed-off-by: Yingxin Cheng <yingxin.cheng@intel.com>
If the source object was both compressed and encrypted, preserve its
original compression attribute so that it can be decompressed the same way
it was originally compressed.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
fetch_remote_obj() transfers objects in their encrypted form, so it does
not have access to the decrypted data for checksum verification.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Compression is applied before encryption, so if we skip decryption, we
can't decompress either.
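A toy sketch of that ordering constraint (the helpers below are stand-ins,
not RGW code): the stored object is encrypt(compress(data)), so a path that
never decrypts never recovers a valid compressed stream to decompress.
```
/* Toy stand-ins only -- not RGW functions.  They just show why skipping
 * decryption also forces skipping decompression. */
#include <stdio.h>
#include <string.h>

static void xor_cipher(char *buf, size_t len, char key)  /* stand-in cipher */
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}

int main(void)
{
    char obj[] = "compressed-payload";   /* pretend this is compress(data) */
    size_t len = strlen(obj);

    xor_cipher(obj, len, 0x2a);          /* write path: encrypt after compress */

    /* fetch_remote_obj() forwards the object in this encrypted form, so the
     * compressed stream is unreachable here; decompression (and plaintext
     * checksum verification) must wait for a later read that decrypts. */

    xor_cipher(obj, len, 0x2a);          /* read path: decrypt first ...   */
    printf("%s\n", obj);                 /* ... only then can decompression run */
    return 0;
}
```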
Fixes: https://tracker.ceph.com/issues/57905
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Edit the "Overview" section in doc/radosgw/s3select.rst.
Co-authored-by: Anthony D'Atri <anthony.datri@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>
* refs/pull/49971/head:
doc/cephfs: document MDS_CLIENTS_LAGGY health warning
qa: ignore warnings
qa: add test cases to check client eviction if an OSD is laggy
mds,messages: enable beacon to report clients lagginess
mds: do not evict client on laggy osds
common: add new config option to defer client eviction
osd: add method to check for laggy osds
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Added some logs, as their values were not very clear while parsing through
the log files.
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
* refs/pull/51858/head:
pybind/mgr/devicehealth: do not crash if db not ready
Reviewed-by: Laura Flores <lflores@redhat.com>
Reviewed-by: Yaarit Hatuka <yaarithatuka@gmail.com>
On force promote, if the opposite site is down, then we currently show the
image status description as "local image linked to unknown peer".
Previously:
----------
$ rbd --cluster=site-b mirror image status pool1/img1
img1:
  global_id:   a73341a6-8302-4c97-ac6e-278083fd347e
  state:       up+stopping_replay
  description: local image linked to unknown peer
  service:     admin on localhost.localdomain
  last_update: 2023-06-15 19:47:45
  peer_sites:
    name: site-a
    state: up+stopped
    description: local image is primary
    last_update: 2023-06-15 19:47:32
  snapshots:
    9 .mirror.primary.a73341a6-8302-4c97-ac6e-278083fd347e.1f101367-277f-42f0-8308-e51201d0529a (peer_uuids:[c46c6d97-f59b-4591-9d35-d7ff9d0d72f7])
Currently:
---------
$ rbd --cluster=site-b mirror image status pool1/img1
img1:
  global_id:   2a6d61e1-8e76-42c4-af76-8f61ce65c7e2
  state:       up+stopped
  description: orphan (force promoting)
  service:     admin on localhost.localdomain
  last_update: 2023-06-15 19:29:22
  peer_sites:
    name: site-a
    state: down+stopped
    description: local image is primary
    last_update: 2023-06-15 19:29:05
  snapshots:
    9 .mirror.primary.2a6d61e1-8e76-42c4-af76-8f61ce65c7e2.99f82a30-0241-4e51-8428-7a2376d137f6 (peer_uuids:[3150c6ef-aeee-45dc-8d0e-5dc5a53d88eb])
Fixes: https://tracker.ceph.com/issues/52913
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Otherwise, the MDS that just got replaced can transition to a rank
for another file system and the test cannot deterministically infer
which MDS needs to be checked.
Fixes: http://tracker.ceph.com/issues/61764
Signed-off-by: Venky Shankar <vshankar@redhat.com>