rgw: fix bug where variable referenced after data moved out
Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: Adam C. Emerson <aemerson@redhat.com>
cephadm/mgr: adding logic to handle --no-overwrite for tuned profiles
Reviewed-by: Adam King <adking@redhat.com>
Reviewed-by: Anthony D'Atri <anthonyeleven@users.noreply.github.com>
rgw: avoid use-after-move in RGWDataSyncSingleEntryCR ctor
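A minimal sketch of the pattern being fixed, using hypothetical names
rather than the actual RGWDataSyncSingleEntryCR members: reading from an
argument after it has been moved into a member is unspecified, so the fix
reads from the member that now owns the data.

    #include <string>
    #include <utility>

    struct Entry {
      std::string key;
    };

    struct SingleEntryCR {
      Entry entry;           // declared (and thus initialized) before log_key
      std::string log_key;

      // Buggy form: 'e.key' is read after 'e' was moved into 'entry':
      //   SingleEntryCR(Entry e) : entry(std::move(e)), log_key(e.key) {}

      // Fixed form: read from the member that now owns the data.
      explicit SingleEntryCR(Entry e)
        : entry(std::move(e)), log_key(entry.key) {}
    };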
Reviewed-by: Yuval Lifshitz <ylifshit@redhat.com>
Reviewed-by: Daniel Gryniewicz <dang@redhat.com>
Reviewed-by: J. Eric Ivancich <ivancich@redhat.com>
The test (in the standalone/scrub suite) verifies that the scrubber
detects (and issues a cluster-log error) whenever a mapping entry
("SNA_") is missing in the SnapMapper DB.
Specifically, here the entry is corrupted (shortened), as per
https://tracker.ceph.com/issues/56147.
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
Whenever the scrubber accesses the SnapMapper for the snaps of a specific
clone, the mapper will now verify that the snaps have the required
mapping DB entries (the 'SNA_' keys).
Signed-off-by: Ronen Friedman <rfriedma@redhat.com>
Fix the following warning, which manifests as a result of Ceph
adopting C++20.
warning: implicit capture of ‘this’ via ‘[=]’ is deprecated in C++20 [-Wdeprecated]
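For context, a minimal illustration of the deprecation; the class and
lambda here are placeholders, not the actual ceph code:

    struct Worker {
      int value = 0;

      void schedule() {
        // Deprecated in C++20: '[=]' implicitly captures 'this'.
        //   auto cb = [=] { return value + 1; };

        // Fixed: capture 'this' explicitly alongside the copy default.
        auto cb = [=, this] { return value + 1; };
        (void)cb;  // placeholder use
      }
    };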
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Because the superblock and other tracking information are maintained on
the disk, the entire disk size is not available, so rename the function
to reflect that it actually returns the available size on the device.
get_available_size() represents the combined free and used space
available on the device.
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
In BlockSegmentManager, the superblock is updated with the device size,
but a small amount of device capacity is reserved to store the
superblock and other tracking information. The number of segments is
calculated after discounting the superblock and tracking-information
sizes, which creates a mismatch between the recorded available size and
the actual number of segments. Update the available size to account for
the reserved device capacity, the number of segments, and the segment
size.
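As a rough illustration of the intended arithmetic (the variable names
are assumptions, not the actual BlockSegmentManager fields), the
advertised size is derived from the segment count computed after the
reservation:

    #include <cstdint>

    uint64_t available_bytes(uint64_t device_size,
                             uint64_t reserved_bytes,  // superblock + tracking info
                             uint64_t segment_size) {
      // Segments are carved out of the capacity left after the reservation,
      // so the advertised size must equal num_segments * segment_size.
      uint64_t num_segments = (device_size - reserved_bytes) / segment_size;
      return num_segments * segment_size;
    }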
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
In reset_device(), if the total number of 512B sectors on the device
exceeds INT_MAX, an overflow occurs, rendering nr_sectors 0, which
causes the ioctl to fail and a subsequent crash. Fix the overflow.
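A sketch of the overflow, assuming the range is being built for the zone
reset ioctl (the helper name is illustrative): doing the sector
arithmetic in a 32-bit int truncates on large devices, while 64-bit
arithmetic does not.

    #include <cstdint>
    #include <linux/blkzoned.h>

    void fill_reset_range(struct blk_zone_range &range, uint64_t device_bytes) {
      // Buggy form: a 32-bit count overflows once the device has more than
      // INT_MAX 512B sectors, leaving nr_sectors == 0 and failing the ioctl:
      //   int nr_sectors = device_bytes / 512;
      // Fixed form: keep the arithmetic in 64 bits end to end.
      range.sector = 0;
      range.nr_sectors = device_bytes / 512;  // __u64, no truncation
    }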
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
ZNS SSDs have an attribute called zone_capacity which can be less than or
equal to zone_size. zone_capacity represents the actual writable media in
a zone. When zone_capacity is less than zone_size, writing to offsets
beyond zone_capacity will cause write errors.
Set the segment size equal to zone_capacity, so that the segment
manager writes only up to the capacity of the zone/segment.
Update the device size to the actual available bytes so that GC can
kick in at appropriate thresholds.
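An illustrative sketch of the sizing change, assuming the kernel zone
report is consumed directly (the struct and helper names are
assumptions, not the actual seastore code); on recent kernels struct
blk_zone reports both len (zone size) and capacity in 512B sectors:

    #include <cstdint>
    #include <linux/blkzoned.h>

    struct ZoneLayout {
      uint64_t segment_size;    // bytes the segment manager may write per zone
      uint64_t available_size;  // usable bytes across the device, for GC math
    };

    ZoneLayout layout_from_report(const struct blk_zone &zone, uint64_t nr_zones) {
      ZoneLayout l;
      // Use zone capacity (writable media), not zone size, as the segment
      // size, so writes never land beyond the writable region of the zone.
      l.segment_size = zone.capacity << 9;           // sectors -> bytes
      l.available_size = l.segment_size * nr_zones;
      return l;
    }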
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
For a ZNS device, an open/full zone has to be reset before it can be
reused for writing from the start. Seastore releases a segment/zone,
marks it empty, and expects to be able to write to it from the start.
So, as part of the release, reset the zone so that it moves to the
empty state on the device.
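A minimal sketch of the reset itself, assuming direct ioctl access to
the zoned block device; error handling and the surrounding seastore
plumbing are omitted:

    #include <sys/ioctl.h>
    #include <linux/blkzoned.h>

    int reset_zone(int fd, __u64 zone_start_sector, __u64 zone_len_sectors) {
      struct blk_zone_range range;
      range.sector = zone_start_sector;   // first sector of the released zone
      range.nr_sectors = zone_len_sectors;
      // BLKRESETZONE returns the zone to the EMPTY state so it can be
      // written again from the start.
      return ioctl(fd, BLKRESETZONE, &range);
    }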
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Zones in the IMP-OPEN, EXP-OPEN, and CLOSED states on a ZNS device are
counted as active resources. ZNS SSDs can have a limit on the number of
zones that can be active at the same time (max_active_resources).
If CLOSED zones reach the max_active_zones limit supported by the
device, then opening/writing to new zones will fail.
So a close_segment() from Seastore is essentially a FINISH operation on
a ZNS zone.
Do a FINISH operation on the zone instead of CLOSE from segment_close().
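A sketch of the CLOSE-to-FINISH change at the ioctl level, with
illustrative names and no error handling:

    #include <sys/ioctl.h>
    #include <linux/blkzoned.h>

    int finish_zone(int fd, __u64 zone_start_sector, __u64 zone_len_sectors) {
      struct blk_zone_range range;
      range.sector = zone_start_sector;
      range.nr_sectors = zone_len_sectors;
      // BLKFINISHZONE moves the zone to FULL, so it stops counting against
      // max_active_resources, unlike BLKCLOSEZONE which leaves it CLOSED.
      return ioctl(fd, BLKFINISHZONE, &range);
    }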
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>
SegmentAllocator::close_segment() writes tail information to a segment
before closing it, and this is written at the end of the segment.
However, for ZNS SSDs, writes always have to happen at the write
pointer, so writing the tail info at the end of a zone fails if the WP
is not at the offset requested by close_segment().
If the write pointer is not at the LBA where the tail information is to
be written, advance the write pointer by writing zeroes to the zone
from its current write pointer, then write the tail information at the
end of the zone.
Added an advance_wp() function which advances the write pointer and
then writes the tail information in the case of ZNS devices; for a
regular device it continues to write at the end of the segment.
Call close_segment() after writing the tail information; closing a
segment first and then writing the tail information can cause potential
race conditions on a ZNS-backed segment.
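A hedged sketch of the write-pointer handling described above;
advance_wp here mirrors the idea, but the offsets and the
device_write() helper are illustrative stand-ins, not the actual
SegmentAllocator code:

    #include <cstdint>
    #include <vector>

    // Placeholder for the real device write path.
    static int device_write(uint64_t /*offset*/, const void *, uint64_t) {
      return 0;
    }

    // Pad from the current write pointer up to the tail location with zeroes
    // so the zone's WP lands exactly where the tail record must be written.
    static int advance_wp(uint64_t wp, uint64_t tail_offset) {
      if (wp < tail_offset) {
        std::vector<char> zeroes(tail_offset - wp, 0);
        return device_write(wp, zeroes.data(), zeroes.size());
      }
      return 0;
    }

    static int write_tail_then_close(uint64_t wp, uint64_t tail_offset,
                                     const void *tail, uint64_t tail_len) {
      if (int r = advance_wp(wp, tail_offset); r < 0) return r;
      if (int r = device_write(tail_offset, tail, tail_len); r < 0) return r;
      // Close (FINISH) only after the tail is written, to avoid racing the
      // close against the tail write on a ZNS-backed segment.
      return 0;  // the real code would issue close_segment() here
    }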
Signed-off-by: Aravind Ramesh <aravind.ramesh@wdc.com>