These host e2e tests were failing. Since we are already checking this in the Dashboard Cephadm e2e tests, we can get rid of these.
Fixes: https://tracker.ceph.com/issues/62491
Signed-off-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Includes subvolume and subvolume group e2es.
Also takes care of renaming Volume to File Systems in the remaining
actions like Edit and Remove.
Fixes: https://tracker.ceph.com/issues/62564
Signed-off-by: Nizamudeen A <nia@redhat.com>
* refs/pull/52676/head:
mds/Server: mark a cap acquisition throttle event in the request
Reviewed-by: Venky Shankar <vshankar@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Reviewed-by: Kotresh Hiremath Ravishankar <khiremat@redhat.com>
Reviewed-by: Xiubo Li <xiubli@redhat.com>
In the Multisite page, when we create a realm, the realm is set as the default even if another realm is already the default and the default checkbox is unchecked during creation.
Fixes: https://tracker.ceph.com/issues/62453
Signed-off-by: Aashish Sharma <aasharma@redhat.com>
Revert "osd/SnapMapper: Maintain the prefix_itr between calls to avoid search…"
Reviewed-by: Gabriel BenHanokh <gbenhano@redhat.com>
Reviewed-by: Neha Ojha <nojha@redhat.com>
Maintain the prefix_itr between calls to SnapMapper::get_next_objects_to_trim() to prevent searching depleted prefixes.
There are 8 distinct hash prefixes used for searching objects owned by a given PG.
On each call to SnapMapper::get_next_objects_to_trim() we start from the first prefix, even after all objects mapped to it have been depleted.
This means we search 1 depleted prefix after the first prefix is depleted, 2 after the first two prefixes are depleted, and so on, until we search 7 depleted prefixes after the first 7 prefixes have been depleted.
This is a performance improvement PR only!
It maintains the existing behavior and does not try to fix/change any of the TRIM logic.
I added an extra step after the last object is trimmed: a full scan of the DB, returning ENOENT only if no object is found.
This should make the new code no worse than the existing code, which returns ENOENT after a full scan finds no object.
It should not impact performance in real-life snap trimming, as it should only happen once per snap.
Added snap-mapper tests to the rados test suite.
Disabled osd_debug_trim_objects when running (SnapMapperTest, prefix_itr) to prevent asserts (as this code does illegal inserts into DELETED snaps).
Code beautifying.
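For illustration only, a minimal Python sketch of the prefix_itr idea (the real implementation is C++ in SnapMapper; the container names and the max_objects parameter below are made up for the sketch):

    import errno

    class TrimSketch:
        def __init__(self, prefixes, objects_by_prefix):
            self.prefixes = prefixes                    # e.g. the 8 hash prefixes of a PG
            self.objects_by_prefix = objects_by_prefix  # prefix -> list of objects to trim
            self.prefix_idx = 0                         # persists between calls ("prefix_itr")

        def get_next_objects_to_trim(self, max_objects):
            out = []
            # Resume from the last non-depleted prefix instead of prefix 0.
            while self.prefix_idx < len(self.prefixes) and len(out) < max_objects:
                bucket = self.objects_by_prefix[self.prefixes[self.prefix_idx]]
                while bucket and len(out) < max_objects:
                    out.append(bucket.pop(0))
                if not bucket:
                    self.prefix_idx += 1   # depleted: later calls skip this prefix
            if out:
                return out
            # Safety net: one full rescan before declaring the snap trimmed,
            # matching the ENOENT-after-full-scan behavior of the old code.
            for idx, prefix in enumerate(self.prefixes):
                if self.objects_by_prefix[prefix]:
                    self.prefix_idx = idx
                    return self.get_next_objects_to_trim(max_objects)
            return -errno.ENOENT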
Signed-off-by: Gabriel BenHanokh <gbenhano@redhat.com>
Unfortunately, this code is filling 0s at the beginning of the short-read
buffer.
Fixes: https://tracker.ceph.com/issues/62492
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
osd/scheduler/mClockScheduler: Use same profile and client ids for all clients to ensure allocated QoS limit consumption.
Reviewed-by: Samuel Just <sjust@redhat.com>
os/bluestore: expand BlueFS log if available space is insufficient
Reviewed-by: Adam Kupczyk <akupczyk@redhat.com>
Reviewed-by: Igor Fedotov <ifedotov@suse.com>
Before, the OSError was always converted to our
self-defined "Error" class. This caused an issue with
the port_in_use function, which has special handling
for OSError when the errno is EADDRNOTAVAIL or
EAFNOSUPPORT. Since the error being raised was no
longer an OSError, it wasn't being caught and checked
properly in port_in_use.
This is also necessary for checking port availability
for haproxy on its VIP. If we fail deployment when
EADDRNOTAVAIL is raised, it becomes difficult to deploy
the ingress service. If we deploy haproxy first, it
fails because the VIP isn't available yet (since
keepalived isn't up), and it reports that the port it
wants to bind to is unavailable (specifically
EADDRNOTAVAIL). If we try to deploy keepalived first,
it fails because it needs to know the location of the
haproxy daemons in order to build its config file. In
the past this worked by having haproxy fail to bind at
first and then fix itself once the keepalived daemon
was deployed. That no longer works if the haproxy
daemon fails to deploy because cephadm reports the port
it needs as unavailable. Since EADDRNOTAVAIL when
deploying haproxy likely means the VIP is not up rather
than something else taking up the port it needs, fixing
the handling of this allows ingress deployment to work
while also allowing multiple haproxy daemons on the
same host to use the same frontend port bound to
different VIPs.
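A rough sketch of the errno handling this relies on (the signature and helper below are illustrative, not the exact cephadm code):

    import errno
    import socket

    def port_in_use(port, ip='0.0.0.0'):
        # Keep the OSError (don't wrap it in a custom Error class),
        # so the errno can be inspected here.
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                s.bind((ip, port))
            return False
        except OSError as e:
            if e.errno in (errno.EADDRNOTAVAIL, errno.EAFNOSUPPORT):
                # The address itself isn't usable (e.g. a VIP that isn't
                # up yet), so don't treat it as a port conflict.
                return False
            if e.errno == errno.EADDRINUSE:
                return True
            raise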
Signed-off-by: Adam King <adking@redhat.com>
If we know which IP the frontend_port will be binding
to, we can pass that down through the port_ips mapping
so cephadm will only check whether that port is in use
on that specific VIP. This allows multiple haproxy
daemons to be bound to the same port on different VIPs
on the same host.
Note that you must still use a different monitor port
for the two different ingress services, as that port
is bound on the actual IP of the host. Only the
frontend port can be the same for haproxies on the
same host, as long as the VIP is different.
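For illustration, a hedged sketch of how such a mapping might be built for haproxy (the spec fields and helper name below are assumptions, not the exact cephadm code):

    def haproxy_port_ips(spec):
        # If the ingress spec pins the frontend to a VIP, only that IP
        # needs to be checked for the frontend port; the monitor/stats
        # port is left out so it is still checked on all host IPs.
        port_ips = {}
        virtual_ip = getattr(spec, 'virtual_ip', None)
        frontend_port = getattr(spec, 'frontend_port', None)
        if virtual_ip and frontend_port:
            port_ips[str(frontend_port)] = virtual_ip.split('/')[0]
        return port_ips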
Fixes: https://tracker.ceph.com/issues/57614
Signed-off-by: Adam King <adking@redhat.com>
This is mostly for checking for port conflicts.
Currently, we just check whether the port is bound
on any IP on the host. This mechanism should allow
certain daemon types to specify a port -> IP mapping
that will be passed to the cephadm binary. That
mapping will then be used by cephadm to only check
for the port being bound on that specific IP rather
than on any IP on the host. The end result is that
we could have daemons bound to the same port on
different IPs on the same node.
It's expected that daemon types will set this
up as part of their prepare_create or generate_config
functions where they may have more info about the
specific IPs and ports they need.
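A minimal sketch of the narrowed check this mapping enables, assuming port_ips is a dict of {port: ip} (names and structure below are illustrative):

    import errno
    import socket

    def find_port_conflicts(ports, port_ips, host_ips):
        # Ports present in port_ips are checked only on their mapped IP;
        # everything else is still checked on every IP of the host.
        conflicts = []
        for port in ports:
            ips = [port_ips[str(port)]] if str(port) in port_ips else host_ips
            for ip in ips:
                try:
                    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                        s.bind((ip, port))
                except OSError as e:
                    if e.errno == errno.EADDRINUSE:
                        conflicts.append((port, ip))
        return conflicts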
Signed-off-by: Adam King <adking@redhat.com>