The current formula calculates the stripe size, but that's not what we
want for the RAID1/DUP profiles. In those cases the chunks are mirrored
across devices, so we want the full size of the chunk. Without this
patch the 'btrfs fi usage' output from a filesystem using RAID1 is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
   /dev/vdc   1.00GiB
   /dev/vdf   1.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
   /dev/vdc   128.00MiB
   /dev/vdf   128.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
   /dev/vdc   4.00MiB
   /dev/vdf   4.00MiB
Unallocated:
   /dev/vdc   8.87GiB
   /dev/vdf   8.87GiB
So a 2GiB RAID1 chunk actually takes up 4GiB on the underlying disks,
2GiB on each. In this case it is being miscalculated as taking up only
1GiB on each device.
This also leads to erroneously calculated unallocated space. The correct
output in this case is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
   /dev/vdc   2.00GiB
   /dev/vdf   2.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
   /dev/vdc   256.00MiB
   /dev/vdf   256.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
   /dev/vdc   8.00MiB
   /dev/vdf   8.00MiB
Unallocated:
   /dev/vdc   7.74GiB
   /dev/vdf   7.74GiB
Fix it by only utilising the chunk formula for profiles which are not
RAID1/DUP.
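A minimal sketch of the intended per-device accounting (the variable
names are illustrative, not the exact btrfs-progs code):

   u64 allocated;

   if (flags & (BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_DUP)) {
           /* mirrored profiles: each device holds a full copy of the chunk */
           allocated = chunk_size;
   } else {
           /* other profiles: keep using the existing stripe formula */
           allocated = chunk_size / num_effective_stripes;
   }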
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Commit 80714610f3 ("btrfs-progs: use raid table for ncopies") slightly
broke how RAID ratios are calculated, since the resulting code would
always reset the ratio to 1 when the profile was not RAID56. The
correct behavior is to set it to 0 only for RAID56, as the calculation
is different in that case, and leave it intact otherwise.
The bug manifests itself as all size-related calculations for the
'btrfs filesystem usage' command being done as if all block groups were
of type SINGLE. Fix this by resetting the ratio to 0 only in case of
RAID56.
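Condensed, the corrected ratio handling looks like this (names may
differ from the actual code):

   /* ncopies from the raid table, e.g. 2 for RAID1/DUP, 1 for SINGLE/RAID0 */
   ratio = btrfs_raid_array[btrfs_bg_flags_to_raid_index(flags)].ncopies;

   /* RAID5/6 space is computed separately, ratio == 0 signals that */
   if (flags & BTRFS_BLOCK_GROUP_RAID56_MASK)
           ratio = 0;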
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There are a lot of call sites where we use the following code snippet:
  u8 super_block_data[BTRFS_SUPER_INFO_SIZE];
  struct btrfs_super_block *sb;
  u64 ret;

  sb = (struct btrfs_super_block *)super_block_data;
The reason for this is that struct btrfs_super_block was smaller than
BTRFS_SUPER_INFO_SIZE, thus for anything involving csums we had to use
a proper 4K buffer.
Since the recent unification of sizeof(struct btrfs_super_block), we no
longer need such a workaround and can use struct btrfs_super_block
directly for any operation.
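Since the unification, the pattern above can shrink to roughly the
following (fd and ret as in the surrounding code):

  struct btrfs_super_block sb;

  /* sizeof(sb) is now BTRFS_SUPER_INFO_SIZE, so the structure can be
   * read, checksummed and written directly, no cast needed */
  ret = pread(fd, &sb, BTRFS_SUPER_INFO_SIZE, BTRFS_SUPER_INFO_OFFSET);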
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The profile descriptions allow us to use a single formula to calculate
the chunk size. Right now there are no profiles with both parity
(raid5-like) and sub_stripes (raid10-like), which makes it easier:
- parity stripes are subtracted from the total count
- then divided by number of sub stripes
Practically speaking, 1:1 copy profiles do not have any adjustments.
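Expressed as code, the calculation is roughly (field names are
illustrative):

   /* stripes that carry distinct data: drop parity, fold sub_stripes */
   int div = (num_stripes - nparity) / sub_stripes;

   /* per-device portion of the chunk */
   u64 stripe_size = chunk_size / div;

For SINGLE and the 1:1 copy profiles nparity is 0 and sub_stripes is 1,
so div is simply num_stripes.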
Signed-off-by: David Sterba <dsterba@suse.com>
The striped profiles covering an arbitrary number of devices are often
hardcoded, so use the new helper btrfs_bg_type_is_stripey for that.
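The helper essentially reduces to a mask test like this (a sketch, the
exact definition in the tree may differ):

   static inline bool btrfs_bg_type_is_stripey(u64 flags)
   {
           return !!(flags & (BTRFS_BLOCK_GROUP_RAID0 |
                              BTRFS_BLOCK_GROUP_RAID10 |
                              BTRFS_BLOCK_GROUP_RAID5 |
                              BTRFS_BLOCK_GROUP_RAID6));
   }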
Signed-off-by: David Sterba <dsterba@suse.com>
There's an open-coded value of the raid table ncopies in
print_filesystem_usage_overall, add a helper and use it.
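The helper is a simple raid table lookup, along these lines (the helper
name here is only illustrative):

   static u32 bg_type_to_ncopies(u64 flags)
   {
           return btrfs_raid_array[btrfs_bg_flags_to_raid_index(flags)].ncopies;
   }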
Signed-off-by: David Sterba <dsterba@suse.com>
There is some code using NAME_MAX that does not include the header
where it is defined. This patch adds a line that includes
linux/limits.h, which defines NAME_MAX.
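The added line is just the include:

   #include <linux/limits.h>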
Issue: #386
Issue: #385
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The 'Used' and 'Free' lines should be together, while all the device
information is in the first section.
Example:
Overall:
    Device size: 128.00GiB
    Device allocated: 24.00GiB
    Device unallocated: 104.00GiB
    Device missing: 0.00B
    Device zone unusable: 5.13MiB
    Device zone size: 256.00MiB
    Used: 213.33MiB
    Free (estimated): 111.79GiB (min: 111.79GiB)
    Free (statfs, df): 111.79GiB
    Data ratio: 1.00
    Metadata ratio: 1.00
    Global reserve: 25.58MiB (used: 16.00KiB)
    Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Read the device zone size and print it in the overall overview in zoned
mode. The total zone unusable size is already there, so the zone size
complements it. It's read from the first device, assuming that the
kernel mandates that all devices have the same zone size.
Example:
Overall:
    Device size: 128.00GiB
    Device allocated: 24.00GiB
    Device unallocated: 104.00GiB
    Device missing: 0.00B
    Used: 213.33MiB
    Device zone unusable: 5.13MiB
    Device zone size: 256.00MiB
    Free (estimated): 111.79GiB (min: 111.79GiB)
    Free (statfs, df): 111.79GiB
    Data ratio: 1.00
    Metadata ratio: 1.00
    Global reserve: 25.58MiB (used: 16.00KiB)
    Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Print the number of stripes for striped profiles in the device usage
commands. It helps to see the profiles easily. The output looks like
the following:
/dev/vdc, ID: 1
   Device size: 1.00GiB
   Device slack: 0.00B
   Data,RAID0/2: 912.62MiB
   Data,RAID0/3: 912.62MiB
   Metadata,RAID1: 102.38MiB
   System,RAID1: 8.00MiB
   Unallocated: 1.00MiB
Multiple lines can appear in case a balance conversion process was
interrupted or if there's been a new device added and new data written
to the full stripe.
Issue: #372
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Print the total zone_unusable size in the summary of 'fi usage' for a
filesystem in zoned mode. It's a sum of all the zone_unusable values
from 'fi df'. Per-device stats are not implemented and would need more
complicated calculations from the raw data; the kernel does not export
that (but it could).
As of 5.12, the zone_unusable is stored only in memory so we'd have to
map raw block device zones to the block groups and the live extents in
the associated block groups to get the exact numbers.
Example:
# btrfs fi usage /mnt
Overall:
    Device size: 2.00GiB
    Device allocated: 768.00MiB
    Device unallocated: 1.25GiB
    Device missing: 0.00B
    Device zone unusable: 320.00KiB
    Used: 128.00KiB
    Free (estimated): 1.50GiB (min: 1.50GiB)
    Free (statfs, df): 1.50GiB
    Data ratio: 1.00
    Metadata ratio: 1.00
    Global reserve: 3.25MiB (used: 32.00KiB)
    Multiple profiles: no

Data,single: Size:256.00MiB, Used:0.00B (0.00%)
   /dev/nullb0   256.00MiB
Metadata,single: Size:256.00MiB, Used:112.00KiB (0.04%)
   /dev/nullb0   256.00MiB
System,single: Size:256.00MiB, Used:16.00KiB (0.01%)
   /dev/nullb0   256.00MiB
Unallocated:
   /dev/nullb0   1.25GiB
# btrfs fi df
Data, single: total=256.00MiB, used=0.00B, zone_unusable=0.00B
System, single: total=256.00MiB, used=16.00KiB, zone_unusable=160.00KiB
Metadata, single: total=256.00MiB, used=112.00KiB, zone_unusable=160.00KiB
GlobalReserve, single: total=3.25MiB, used=32.00KiB
Signed-off-by: David Sterba <dsterba@suse.com>
There's a group of functions related to opening the filesystem in
various modes, this can be moved to a separate file.
Add available space information from statfs(). This can be different
from 'Free (estimated)' in some cases. This patch provides more
information about filesystem usage, like below:
Overall:
    Device size: 5.00GiB
    Device allocated: 1.02GiB
    Device unallocated: 3.98GiB
    Device missing: 0.00B
    Used: 88.00KiB
    Free (estimated): 4.48GiB (min: 2.49GiB)
    Free (statfs, df): 4.48GiB
    Data ratio: 1.00
    Metadata ratio: 2.00
    Global reserve: 832.00KiB (used: 0.00B)
    Multiple profiles: no
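A minimal sketch of where the statfs-based number comes from (error
handling trimmed, the variable name free_statfs is illustrative):

   #include <sys/vfs.h>

   struct statfs st;

   if (statfs(path, &st) == 0) {
           /* the same free space that df(1) reports as available */
           free_statfs = (u64)st.f_bavail * st.f_bsize;
   }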
Issue: #306
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The function print_filesystem_usage_overall() prints the info on the
basis of the r_*_chunk, r_*_used and l_*_chunk values computed for
data, metadata and system chunks.
For RAID1/10/1C3/1C4/DUP this info is easily accessible from the data
returned by load_space_info().
However for RAID5/6 this is not true, because the ratios between the
l_* and r_* values are not fixed but depend on the number of devices
involved in the chunk.
A new function called get_raid56_space_info() is created to compute the
values r_*_chunk and r_*_used for data, metadata and system chunks in
case of a RAID5/6 profile.
The r_*_chunk values are computed from the chunk_info array.
In order to compute the r_*_used values, a new function
get_raid56_logical_ratio() is created. This function computes the ratio
l_*_used / l_*_chunk from the ioctl_space_args array. So we can get:
'r_*_used' = 'r_*_chunk' * 'l_*_used' / 'l_*_chunk'
Even though this is not mathematically true every time, it is true on
"average" (for example, if the RAID5 chunks use a different number of
disks, the real values depend on which chunk contains the data).
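Condensed, the estimate for e.g. data looks like this (names follow the
description above rather than the exact code):

   /* fraction of the logical data space that is in use */
   double ratio = (double)l_data_used / (double)l_data_chunk;

   /* raw (on-disk) used bytes estimated from the raw allocated bytes */
   u64 r_data_used = r_data_chunk * ratio;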
Signed-off-by: Goffredo Baroncelli <kreijack@inwind.it>
Signed-off-by: David Sterba <dsterba@suse.com>
Update the summary of 'fi usage' where the multiple profiles will be
listed by type, like:
Multiple profiles: yes (data, metadata)
The string is returned from btrfs_test_for_multiple_profiles so the
callers don't have to assemble it from the other profile strings.
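A sketch of how such a string can be assembled (the multiple_* flags
are hypothetical, not the actual function internals):

   char types[64] = "";

   if (multiple_data)
           strcat(types, ", data");
   if (multiple_metadata)
           strcat(types, ", metadata");
   if (multiple_system)
           strcat(types, ", system");

   if (types[0])
           printf("Multiple profiles: yes (%s)\n", types + 2);
   else
           printf("Multiple profiles: no\n");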
Signed-off-by: David Sterba <dsterba@suse.com>
The term 'mixed' is confusing as it's commonly used for mixed block
group profiles created by 'mkfs.btrfs --mixed'. We're interested in
multiple profiles for each type, so use the term 'multiple'.
Signed-off-by: David Sterba <dsterba@suse.com>
A new line in the "Overall" section is added to inform whether
'Multiple profiles' are present.
Signed-off-by: Goffredo Baroncelli <kreijack@inwind.it>
Signed-off-by: David Sterba <dsterba@suse.com>
This is based on an idea from Stanislaw Gruszka to print the ratio. It
was originally suggested for 'fi df', but we don't want to add new
things there and let people use 'fi us' instead. The new output fits
there and is printed by default, without options.
Example output:
$ btrfs fi us /mnt
[...]
Data,single: Size:339.00GiB, Used:172.05GiB (50.75%)
Metadata,single: Size:7.00GiB, Used:3.41GiB (48.70%)
System,single: Size:32.00MiB, Used:64.00KiB (0.20%)
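The percentage is simply used over size; a sketch of the added
formatting (one possible form, not the exact code):

   printf(" (%.2f%%)", 100.0 * (double)used / (double)size);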
Signed-off-by: David Sterba <dsterba@suse.com>
For options that do not have a long description, the empty string is
still required to mark where the options start. Some commands were
missing that.
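For illustration, a hypothetical usage array that follows the
convention:

   static const char * const cmd_example_usage[] = {
           "btrfs example [options] <path>",
           "One line description of the command",
           "",
           "-v    be verbose",
           NULL
   };

Without the empty string there is no marker for where the option
descriptions start.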
Signed-off-by: David Sterba <dsterba@suse.com>