Let's install the HWE kernel only on Ubuntu focal; otherwise we just shift the
problem to Ubuntu bionic, given that the HWE kernel for bionic is 5.4.
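A minimal sketch of the intent, assuming a plain shell step (the check and
package are illustrative; `linux-generic-hwe-20.04` is focal's HWE metapackage):
```
# install the HWE kernel on focal only; bionic's HWE kernel is
# already 5.4, so installing it there wouldn't help
if [ "$(lsb_release -sc)" = "focal" ]; then
    sudo apt-get install -y linux-generic-hwe-20.04
fi
```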
Fixes: https://tracker.ceph.com/issues/53863
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
disable the sending of async datalog notifications on one zone per
cluster. this helps to verify that tests don't rely on notifications to
succeed
Signed-off-by: Casey Bodley <cbodley@redhat.com>
the data changes log for multisite will occasionally broadcast recent
changes to other zones, which they can use to prioritize sync of some
of the most recent changes. they'll eventually see all changes as they
replay the data changes log, though, so notifications aren't required
for successful sync. the ability to turn them off is useful for testing
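as an illustration, a hedged sketch of turning them off on one zone's gateway,
assuming `rgw_data_notify_interval_msec = 0` is the knob that disables them:
```
# disable async datalog notifications for one zone's gateway (assumed knob)
ceph config set client.rgw.zone-b rgw_data_notify_interval_msec 0
```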
Fixes: https://tracker.ceph.com/issues/49723
Signed-off-by: Casey Bodley <cbodley@redhat.com>
When a new journal client is registered, all already registered
clients are checked, and the minimum commit position among them is
selected as the starting position for the new client. Thus we may
expect that, starting from the registered position, all journal
entries will be available (not trimmed) for the new client.
But when looking for the min commit position, the client_register
function did not take into account that a registered client might
be in the disconnected state, in which case the journal entries
might already have been trimmed for that client.
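A hedged reproduction sketch with the `rbd journal` CLI (the client id and
exact flags are assumptions for illustration):
```
# an image with journaling enabled, so clients register against its journal
rbd create --size 1G --image-feature exclusive-lock,journaling rbd/img
# flag a registered client as disconnected; the journal may then be
# trimmed past that client's commit position
rbd journal client disconnect --image rbd/img --client-id mirror-peer
# a client registering after this point must not inherit the disconnected
# client's (possibly trimmed) position as its starting position
```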
Fixes: https://tracker.ceph.com/issues/53888
Signed-off-by: Mykola Golub <mgolub@suse.com>
The following command:
```
echo /dev/sda | tee /sys/kernel/config/nvmet/subsystems/sda/namespaces/1/device_path
```
makes nvme_loop fail because, fascinatingly, it adds an unexpected newline.
See:
```
/dev/sda
/dev/sda
1
tee: /sys/kernel/config/nvmet/subsystems/sda/namespaces/1/enable: No such file or directory
/dev/sda
1
```
Other distros don't have the same behavior:
```
CentOS 8
/dev/sda
/dev/sda
1
Ubuntu 20.04
/dev/sda
/dev/sda
1
```
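A hedged workaround sketch (not necessarily the fix taken here): write the
value without the trailing newline that `echo` appends:
```
# printf '%s' emits no trailing newline, unlike echo
printf '%s' /dev/sda | tee /sys/kernel/config/nvmet/subsystems/sda/namespaces/1/device_path
```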
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
The recent changes from PR #43536 introduced a regression preventing
ceph-volume from running in a containerized context on Ubuntu 18.04.
The path of the `lvs` binary differs between CentOS 8 and Ubuntu 18.04
(`/usr/sbin/lvs` and `/sbin/lvs`, respectively). As a result, ceph-volume
running in a CentOS 8 container sees the `lvs` binary at `/usr/sbin/lvs` and
tries to run it with `nsenter` on the host, which is running Ubuntu 18.04.
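A hedged illustration of the mismatch (the nsenter invocation is schematic):
```
# inside the CentOS 8 container, lvs resolves to:
command -v lvs                                # -> /usr/sbin/lvs
# replaying that absolute path on the Ubuntu 18.04 host fails,
# because the host only ships /sbin/lvs
nsenter --mount=/proc/1/ns/mnt /usr/sbin/lvs  # -> No such file or directory
```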
Fixes: https://tracker.ceph.com/issues/53812
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
(cherry picked from commit 95e88cda3df76b59b548ae808df0ef7f19db1f63)
Signed-off-by: Guillaume Abrioux <gabrioux@redhat.com>
qa/suites/orch/cephadm: Also run the rbd/iscsi suite
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Deepika Upadhyay <dupadhya@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Melissa Li <mingkli@redhat.com>
Problem:
The subvolume snapshot clone fails if the quota on the source
has been exceeded. Since the quota is not strictly enforced at
byte granularity, this is a possibility.
Cause:
The quota on the clone is set prior to copying the data
from the source. Hence the quota usually gets enforced before
the entire data has been copied from the source, resulting in
clone failure.
Solution:
Enforce quota on the clone after the data is copied.
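A hedged reproduction sketch with the `ceph fs subvolume` CLI (volume,
subvolume, and snapshot names are placeholders):
```
# a quota-limited subvolume, slightly overfilled (possible because the
# quota is not strictly enforced at byte granularity)
ceph fs subvolume create myfs sub0 --size 1073741824
# ... write a bit more than 1 GiB into sub0 ...
ceph fs subvolume snapshot create myfs sub0 snap0
# before the fix, the clone sets the quota first and can hit it mid-copy
ceph fs subvolume snapshot clone myfs sub0 snap0 clone0
```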
Fixes: https://tracker.ceph.com/issues/53848
Signed-off-by: Kotresh HR <khiremat@redhat.com>
dbstore_log.h sets the global dout_subsys/dout_prefix macros, and was
leaking into rgw_main.cc through common/dbstore.h. this caused all
of rgw_main's log output to start with the wrong prefix "rgw dbstore: "
Fixes: https://tracker.ceph.com/issues/53177
Signed-off-by: Casey Bodley <cbodley@redhat.com>
crimson/osd: Port rgw object classes to run in crimson
Reviewed-by: Kefu Chai <tchaikov@gmail.com>
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
osd: PeeringState: fix selection order in calc_replicated_acting_stretch
Reviewed-by: Neha Ojha <nojha@redhat.com>
Reviewed-by: Samuel Just <sjust@redhat.com>