Due to refactoring in 88c25674c7 ("btrfs-progs: convert device info
to struct array") the variable tracking the number of devices was not
updated, which led to an error:
$ btrfs device usage /path
ERROR: unexpected number of devices: 0 != 1
...
Issue: #697
Signed-off-by: David Sterba <dsterba@suse.com>
There is quite some variable shadowing in btrfs-progs, most of it
just reusing common names like tmp.
Those cases are quite safe, and the shadowed variables are even of a
different type.
But there are some exceptions:
- @end in traverse_tree_blocks()
There is already an @end with the same type, but a different meaning
(the end of the current extent buffer passed in).
Just rename it to @child_end.
- @start in generate_new_data_csums_range()
Just rename it to @csum_start.
- @size of fixup_chunk_tree_block()
This one is particularly bad: we declare a local @size and initialize
it to -1, then, before we really utilize the variable @size, we
immediately reset it to 0 and pass it to logical_to_physical().
Later there is a check whether @size is not -1, which will always be
true.
According to the code in logical_to_physical(), @size would be clamped
down by its original value, thus our local @size will always be 0.
This patch renames the local @size to @found_size and only initializes
it to -1.
The variable is only passed because logical_to_physical() requires a
non-NULL pointer; we don't really need to bother with the returned
value.
- duplicated @ref declaration in run_delayed_tree_ref()
- duplicated @super_flags in change_meta_csums()
Just delete the duplicated one.
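For illustration, a minimal self-contained sketch of the @size case
(lookup_physical() is a stand-in for logical_to_physical(), not the
actual btrfs-image code):

    #include <stdint.h>

    typedef uint64_t u64;

    /* Stand-in for logical_to_physical(): clamps *size down by its value. */
    static u64 lookup_physical(u64 logical, u64 *size)
    {
            if (*size > 4096)
                    *size = 4096;
            return logical;
    }

    static void fixup_block(u64 logical, u64 size)
    {
            int i;

            for (i = 0; i < 1; i++) {
                    /* Before: "u64 size = (u64)-1;" shadowed the outer @size
                     * and was reset to 0 right away anyway. */
                    u64 found_size = (u64)-1;

                    /* Passed only because the callee needs a non-NULL
                     * pointer; the returned value is not used. */
                    lookup_physical(logical, &found_size);
            }
            (void)size;
    }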
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
I was seeing test-cli/016 failures because it claimed we were getting
EPERM from the TREE_SEARCH ioctl to get the chunk info out of the file
system. This turned out to be because errno was already set going into
this function; the ioctl itself wasn't actually failing. Fix this by
checking the return value from the ioctl first, and only then returning
-EPERM if appropriate. This fixed the failures in my setup.
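The pattern, as a simplified sketch (not the exact cmds/ function):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    /* Derive the error from the ioctl return value instead of trusting a
     * possibly stale errno left over from an earlier, unrelated call. */
    static int search_chunks(int fd, struct btrfs_ioctl_search_args *args)
    {
            int ret = ioctl(fd, BTRFS_IOC_TREE_SEARCH, args);

            if (ret < 0)
                    return -errno;  /* errno is only meaningful on failure */
            return 0;
    }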
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The sysfs code could use more convenience helpers, so move the current
code to its own file before adding more helpers.
Signed-off-by: David Sterba <dsterba@suse.com>
While syncing messages.[ch] I had to back out the ASSERT() code in
kerncompat.h, which means we now rely on the kernel code for ASSERT().
In order to maintain some semblance of separation introduce UASSERT()
and use that in all the purely userspace code.
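As an illustration only (not the exact kerncompat.h definition), such a
userspace-only assertion could look like:

    #include <stdio.h>
    #include <stdlib.h>

    /* Verbose assert for purely userspace code, independent of the kernel
     * ASSERT() that kerncompat.h now relies on. */
    #define UASSERT(cond)                                                \
            do {                                                         \
                    if (!(cond)) {                                       \
                            fprintf(stderr,                              \
                                    "assertion failed: %s, in %s:%d\n",  \
                                    #cond, __FILE__, __LINE__);          \
                            abort();                                     \
                    }                                                    \
            } while (0)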
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
For older kernels, the sysfs interface providing the fsid may not be present yet.
Since 32c2e57c65 ("btrfs-progs: read fsid from the sysfs in
device_is_seed"), the fallback to the previous approach to determine
the fsid was not used anymore.
Ensure negative return values of sysfs_open_fsid_file() are handled by
falling back to dev_to_fsid() in that case.
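A self-contained sketch of the intended fallback (read_fsid_sketch() and
fsid_from_superblock() are hypothetical stand-ins, not the real helpers):

    #include <fcntl.h>
    #include <unistd.h>

    typedef unsigned char u8;

    /* Placeholder for the superblock-based path (what dev_to_fsid does). */
    static int fsid_from_superblock(const char *dev, u8 *fsid)
    {
            (void)dev;
            (void)fsid;
            return -1;
    }

    /* Fall back to the superblock when the sysfs file is missing, which is
     * the case on older kernels. */
    static int read_fsid_sketch(const char *sysfs_fsid_path, const char *dev,
                                u8 *fsid)
    {
            int fd = open(sysfs_fsid_path, O_RDONLY);

            if (fd < 0)
                    return fsid_from_superblock(dev, fsid);

            /* Read and parse the fsid from the sysfs file here. */
            close(fd);
            return 0;
    }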
Pull-request: #599
Signed-off-by: David Sterba <dsterba@suse.com>
The kernel commit a26d60dedf9a ("btrfs: sysfs: add devinfo/fsid to
retrieve actual fsid from the device") introduced a sysfs interface
to access the device's fsid from userspace. This is a more
reliable method to obtain the fsid compared to reading the
superblock, and it works even if the device is not present.
Additionally, this sysfs interface can be read by non-root users.
Therefore, it is recommended to utilize this new sysfs interface to
retrieve the fsid.
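A minimal sketch of reading that interface (error handling trimmed, the
path layout follows the kernel commit):

    #include <stdio.h>

    /* Read the fsid of device <devid> in filesystem <fsid> from
     * /sys/fs/btrfs/<fsid>/devinfo/<devid>/fsid; readable by non-root. */
    static int read_devinfo_fsid(const char *fsid, unsigned int devid,
                                 char *buf, int size)
    {
            char path[256];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/fs/btrfs/%s/devinfo/%u/fsid", fsid, devid);
            f = fopen(path, "r");
            if (!f)
                    return -1;
            if (!fgets(buf, size, f)) {
                    fclose(f);
                    return -1;
            }
            fclose(f);
            return 0;
    }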
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
load_device_info() checks if the device is a seed device by reading
superblock::fsid and comparing it with the mount fsid, and it fails
to work if the device is missing (a RAID1 seed fs). Move this part
of the code into a new helper function device_is_seed() in
preparation to make device_is_seed() work with the new sysfs
devinfo/<devid>/fsid interface.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The (unsigned long long) type casts can be dropped; printf understands
%llu with u64 and does not warn. In cases where the type is not u64,
keep the cast.
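For example (a hypothetical u64 value):

    #include <stdio.h>
    #include "kerncompat.h"    /* u64 */

    static void print_devid(u64 devid)
    {
            /* Before: printf("devid %llu\n", (unsigned long long)devid); */
            printf("devid %llu\n", devid);
    }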
Signed-off-by: David Sterba <dsterba@suse.com>
Replace printf with the level-aware helper. There is no change for
commands that don't have the global -q/-v options; otherwise the output
can be quieted.
Signed-off-by: David Sterba <dsterba@suse.com>
Switch the remaining uses of assert() as it lacks the verbose reporting
that we have for ASSERT() (but is otherwise equivalent).
Signed-off-by: David Sterba <dsterba@suse.com>
The preferred order:
- system headers
- standard headers
- libraries
- kernel library
- kernel shared
- common headers
- other tools
- own headers
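As an illustration, a hypothetical cmds/example.c would then start with
(file names are made up, only the grouping matters):

    #include <sys/ioctl.h>              /* system headers */
    #include <stdio.h>                  /* standard headers */
    #include <uuid/uuid.h>              /* libraries */
    #include "kernel-lib/list.h"        /* kernel library */
    #include "kernel-shared/ctree.h"    /* kernel shared */
    #include "common/messages.h"        /* common headers */
    #include "check/mode-common.h"      /* other tools */
    #include "cmds/example.h"           /* own headers */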
Signed-off-by: David Sterba <dsterba@suse.com>
The size reported as Unallocated in the table was different from the one
in the listing, as it was calculated differently.
unallocated area available for the filesystem - not necessarily the
total size of the device. If there's such slack space it's reported
separately.
The values in the table mean:
- Unallocated: block device size - slack - allocated
- Total: block device size - slack
- Slack: block device size - filesystem
The new columns make the table wider but the values are deemed to be
important by users and for filesystems with normal profiles it fits
under reasonable line width. During balance or with multiple profiles it
can get wider but this should not be a serious problem.
Example output:
Overall:
Device size: 13.00GiB
Device allocated: 536.00MiB
Device unallocated: 12.48GiB
Device missing: 0.00B
Device slack: 1.00GiB
Used: 2.31MiB
Free (estimated): 12.48GiB (min: 6.24GiB)
Free (statfs, df): 12.48GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 3.50MiB (used: 0.00B)
Multiple profiles: no
              Data    Metadata  System
Id Path       single  DUP       DUP      Unallocated Total    Slack
-- ---------- ------- --------- -------- ----------- -------- -------
 1 /dev/loop0 8.00MiB 512.00MiB 16.00MiB     2.48GiB  3.00GiB 1.00GiB
 2 /dev/loop1       -         -        -    10.00GiB 10.00GiB       -
-- ---------- ------- --------- -------- ----------- -------- -------
   Total      8.00MiB 256.00MiB  8.00MiB    12.48GiB 13.00GiB 1.00GiB
   Used       2.00MiB 144.00KiB 16.00KiB
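Working the formulas for /dev/loop0 above (its block device size is
implied by the numbers as Total + Slack = 4.00GiB): Slack = 4.00GiB -
3.00GiB (filesystem) = 1.00GiB, Total = 4.00GiB - 1.00GiB = 3.00GiB, and
Unallocated = 4.00GiB - 1.00GiB - 536.00MiB (the 8.00MiB + 512.00MiB +
16.00MiB allocated) = 2.48GiB.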
Issue: #508
Pull-request: #509 (partial fix)
Signed-off-by: David Sterba <dsterba@suse.com>
The current formula calculates the stripe size, however that's not what
we want in the case of RAID1/DUP profiles. In those cases, since chunks
are mirrored across devices, we want the full size of the chunk. Without
this patch the 'btrfs fi usage' output from an fs which is using RAID1
is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
/dev/vdc 1.00GiB
/dev/vdf 1.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
/dev/vdc 128.00MiB
/dev/vdf 128.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
/dev/vdc 4.00MiB
/dev/vdf 4.00MiB
Unallocated:
/dev/vdc 8.87GiB
/dev/vdf 8.87GiB
So a 2GiB RAID1 chunk will actually take up 4GiB on the physical disks,
2GiB on each. In this case it is being miscalculated as taking up 1GiB
on each device.
This also leads to erroneously calculated unallocated space. The correct
output in this case is:
Data,RAID1: Size:2.00GiB, Used:1.00GiB (50.03%)
/dev/vdc 2.00GiB
/dev/vdf 2.00GiB
Metadata,RAID1: Size:256.00MiB, Used:1.34MiB (0.52%)
/dev/vdc 256.00MiB
/dev/vdf 256.00MiB
System,RAID1: Size:8.00MiB, Used:16.00KiB (0.20%)
/dev/vdc 8.00MiB
/dev/vdf 8.00MiB
Unallocated:
/dev/vdc 7.74GiB
/dev/vdf 7.74GiB
Fix it by only utilising the chunk formula for profiles which are not
RAID1/DUP.
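For illustration, a simplified sketch of the per-device accounting (this
is not the exact chunk_info code, just the idea):

    #include <linux/btrfs_tree.h>    /* BTRFS_BLOCK_GROUP_* flags */

    typedef unsigned long long u64;

    /*
     * On RAID1/DUP the whole chunk is allocated on each device, while on
     * striped profiles each device holds chunk_size / num_stripes.
     */
    static u64 chunk_size_on_device(u64 flags, u64 chunk_size, int num_stripes)
    {
            if (flags & (BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_DUP))
                    return chunk_size;
            return chunk_size / num_stripes;
    }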
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Commit 80714610f3 ("btrfs-progs: use raid table for ncopies")
slightly broke how the raid ratios are calculated since the resulting
code would always reset the ratio to 1 in case we didn't have a RAID56
profile. The correct behavior is to simply set it to 0 if we have RAID56,
as the calculation is different in this case, and leave it intact
otherwise.
This bug manifests as all size-related calculations for the 'btrfs
filesystem usage' command being done as if all block groups were of type
SINGLE. Fix this by only resetting the ratio to 0 in case of RAID56.
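The intended logic, as a sketch (ncopies_from_raid_table() is a
placeholder for the raid table lookup, not a real helper name):

    /* Keep the table-provided ratio; clear it only for RAID5/6, which are
     * calculated differently further below. */
    ratio = ncopies_from_raid_table(flags);
    if (flags & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6))
            ratio = 0;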
Issue: #422
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There are a lot of call sites where we use the following code snippet:
    u8 super_block_data[BTRFS_SUPER_INFO_SIZE];
    struct btrfs_super_block *sb;
    u64 ret;

    sb = (struct btrfs_super_block *)super_block_data;
The reason for this is that structure btrfs_super_block used to be
smaller than BTRFS_SUPER_INFO_SIZE, thus for anything with csums
involved we had to use a proper 4K buffer.
Since the recent unification of sizeof(struct btrfs_super_block), we no
longer need such a workaround and can use struct btrfs_super_block
directly for any operation.
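After this change the snippet above reduces to (illustration only):

    struct btrfs_super_block sb;
    u64 ret;

    /* sizeof(sb) is now BTRFS_SUPER_INFO_SIZE, so checksum code can work
     * on &sb directly, without the separate buffer and the cast. */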
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The profile descriptions allow us to use a single formula to calculate
the chunk size. Right now there are no profiles with both parity
(raid5-like) and sub_stripes (raid10-like), which makes it easier.
- parity stripes are subtracted from the total count
- then divided by number of sub stripes
Practically speaking, 1:1 copy profiles do not have any adjustments.
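In code the divisor then becomes, roughly (a sketch using the raid table
field names, not the exact implementation):

    /* Subtract parity stripes, then divide by sub_stripes; for 1:1 copy
     * profiles nparity is 0 and sub_stripes is 1, so nothing changes. */
    div = (num_stripes - nparity) / sub_stripes;
    per_device_size = chunk_size / div;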
Signed-off-by: David Sterba <dsterba@suse.com>
The striped profiles covering an arbitrary number of devices are often
hardcoded, so use the new helper btrfs_bg_type_is_stripey() for that.
Signed-off-by: David Sterba <dsterba@suse.com>
There's an open-coded value of the raid table ncopies in
print_filesystem_usage_overall(), so add a helper and use it.
Signed-off-by: David Sterba <dsterba@suse.com>
There is some code that uses NAME_MAX but doesn't include the header
where it is defined. This patch adds an include of linux/limits.h,
which defines NAME_MAX.
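For example:

    #include <linux/limits.h>    /* defines NAME_MAX */

    char name[NAME_MAX + 1];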
Issue: #386
Issue: #385
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The Used and Free should be together, while all the device information
is in the first section.
Example:
Overall:
Device size: 128.00GiB
Device allocated: 24.00GiB
Device unallocated: 104.00GiB
Device missing: 0.00B
Device zone unusable: 5.13MiB
Device zone size: 256.00MiB
Used: 213.33MiB
Free (estimated): 111.79GiB (min: 111.79GiB)
Free (statfs, df): 111.79GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 25.58MiB (used: 16.00KiB)
Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Read the device zone size and print it in the overall overview in zoned
mode. The total unusable size is already there, so the zone size
complements it. It's read from the first device, assuming that the
kernel mandates that all devices have the same zone size.
Example:
Overall:
Device size: 128.00GiB
Device allocated: 24.00GiB
Device unallocated: 104.00GiB
Device missing: 0.00B
Used: 213.33MiB
Device zone unusable: 5.13MiB
Device zone size: 256.00MiB
Free (estimated): 111.79GiB (min: 111.79GiB)
Free (statfs, df): 111.79GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 25.58MiB (used: 16.00KiB)
Multiple profiles: no
Signed-off-by: David Sterba <dsterba@suse.com>
Print the number of stripes for striped profiles in the device usage
commands. It helps to identify the profiles easily. The output looks
like below:
/dev/vdc, ID: 1
Device size: 1.00GiB
Device slack: 0.00B
Data,RAID0/2: 912.62MiB
Data,RAID0/3: 912.62MiB
Metadata,RAID1: 102.38MiB
System,RAID1: 8.00MiB
Unallocated: 1.00MiB
Multiple lines can appear in case a balance conversion process was
interrupted or if there's been a new device added and new data written
to the full stripe.
Issue: #372
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Print the total zone_unusable size in the summary for 'fi usage' for a
filesystem in zoned mode. It's a sum of all the zone_unusable values
from 'fi df'. Per-device stats are not implemented and would need more
complicated calculations from raw data; the kernel does not export that
(but it could).
As of 5.12, the zone_unusable is stored only in memory so we'd have to
map raw block device zones to the block groups and the live extents in
the associated block groups to get the exact numbers.
Example:
# btrfs fi usage /mnt
Overall:
Device size: 2.00GiB
Device allocated: 768.00MiB
Device unallocated: 1.25GiB
Device missing: 0.00B
Device zone unusable: 320.00KiB
Used: 128.00KiB
Free (estimated): 1.50GiB (min: 1.50GiB)
Free (statfs, df): 1.50GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 3.25MiB (used: 32.00KiB)
Multiple profiles: no
Data,single: Size:256.00MiB, Used:0.00B (0.00%)
/dev/nullb0 256.00MiB
Metadata,single: Size:256.00MiB, Used:112.00KiB (0.04%)
/dev/nullb0 256.00MiB
System,single: Size:256.00MiB, Used:16.00KiB (0.01%)
/dev/nullb0 256.00MiB
Unallocated:
/dev/nullb0 1.25GiB
# btrfs fi df
Data, single: total=256.00MiB, used=0.00B, zone_unusable=0.00B
System, single: total=256.00MiB, used=16.00KiB, zone_unusable=160.00KiB
Metadata, single: total=256.00MiB, used=112.00KiB, zone_unusable=160.00KiB
GlobalReserve, single: total=3.25MiB, used=32.00KiB
Signed-off-by: David Sterba <dsterba@suse.com>
There's a group of functions related to opening the filesystem in
various modes; this can be moved to a separate file.
Signed-off-by: David Sterba <dsterba@suse.com>
Add available space information from statfs(). This can be different from
'Free (estimated)' in some cases. This patch provides more information
about filesystem usage, like below:
Overall:
Device size: 5.00GiB
Device allocated: 1.02GiB
Device unallocated: 3.98GiB
Device missing: 0.00B
Used: 88.00KiB
Free (estimated): 4.48GiB (min: 2.49GiB)
Free (statfs, df): 4.48GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 832.00KiB (used: 0.00B)
Multiple profiles: no
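Where the number comes from, as a minimal sketch (the real code also
converts the value to human readable units):

    #include <stdint.h>
    #include <sys/statfs.h>

    /* Free space as reported by statfs/df: available blocks * block size. */
    static int free_statfs_bytes(const char *path, uint64_t *bytes)
    {
            struct statfs sfs;

            if (statfs(path, &sfs) < 0)
                    return -1;
            *bytes = (uint64_t)sfs.f_bavail * sfs.f_bsize;
            return 0;
    }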
Issue: #306
Signed-off-by: Sidong Yang <realwakka@gmail.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The function print_filesystem_usage_overall() prints the info on the
basis of the r_*_chunk, r_*_used and l_*_chunks values computed for
data, metadata and system chunks.
For RAID1/10/1C3/1C4/DUP this info is easily accessible from the
data returned by load_space_info().
However for RAID5/6 this is not true because the ratios between the l_*
and r_* values are not fixed but depend on the number of devices
involved in the chunk.
A new function called get_raid56_space_info() is created to compute
the values r_*_chunk, and r_*_used for data, metadata and system
chunks in case of a RAID5/6 profile.
The r_*_chunk values are computed from the chunk_info array.
In order to compute the r_*_used values, a new function
get_raid56_logical_ratio() is created. This function computes the ratio
l_*_used / l_*_chunk from the ioctl_space_args array. So we can get:
'r_*_used' = 'r_*_chunk' * 'l_*_used' / 'l_*_chunk'
Even though this is not mathematically exact every time, it is true on
"average" (for example, if the RAID5 chunks use a different number of
disks, the real values depend on which chunk contains the data).
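As a sketch, with simplified variable names:

    /* Ratio of used to allocated logical bytes from the space info. */
    double ratio = (double)l_used / (double)l_chunk;

    /* Estimated raw used bytes, from the raw chunk size (chunk_info). */
    u64 r_used = r_chunk * ratio;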
Signed-off-by: Goffredo Baroncelli <kreijack@inwind.it>
Signed-off-by: David Sterba <dsterba@suse.com>
Update the summary of 'fi usage' where the multiple profiles will be
listed by type, like:
Multiple profiles: yes (data, metadata)
The string is returned from btrfs_test_for_multiple_profiles so the
callers don't have to assemble it together from the other profile
strings.
Signed-off-by: David Sterba <dsterba@suse.com>
The term 'mixed' is confusing as it's commonly used for mixed block
group profiles created by 'mkfs.btrfs --mixed'. We're interested in
multiple profiles for each type, so use the term 'multiple'.
Signed-off-by: David Sterba <dsterba@suse.com>
A new line in the "Overall" section is added to inform that 'Multiple
profiles' are present.
Signed-off-by: Goffredo Baroncelli <kreijack@inwind.it>
Signed-off-by: David Sterba <dsterba@suse.com>
This is based on an idea from Stanislaw Gruszka to print the ratio,
originally suggested for 'fi df', but we don't want to add new things
there and would rather let people use 'fi us' instead. The new output
fits there and is printed by default without options:
Example output:
$ btrfs fi us /mnt
[...]
Data,single: Size:339.00GiB, Used:172.05GiB (50.75%)
Metadata,single: Size:7.00GiB, Used:3.41GiB (48.70%)
System,single: Size:32.00MiB, Used:64.00KiB (0.20%)
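The percentage is the plain used/size ratio, e.g. (sketch, variable
names illustrative):

    /* Portion of the allocated chunk space that is actually used. */
    printf("Size:%s, Used:%s (%.2f%%)\n", size_str, used_str,
           100.0 * (double)used / (double)size);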
Signed-off-by: David Sterba <dsterba@suse.com>