Print the source directory for --rootdir and whether --shrink is used. With
-vv, also print the individual files as they are added:
$ mkfs.btrfs --rootdir dir --shrink -vv img
...
Rootdir from: Documentation
ADD: /btrfs-progs/Documentation/btrfs-check.rst
...
ADD: /btrfs-progs/Documentation/btrfs-send.rst
Shrink: yes
Label: (null)
UUID: 40d3a16f-02d8-40d7-824b-239cee528093
...
The 'Rootdir from' line is printed before the files are added, so there is
now a message before the file addition, which could take some time.
Issue: #627
Signed-off-by: David Sterba <dsterba@suse.com>
Move files from m4/ to config/, which is also used for the build, so we can
reduce the number of toplevel directories.
Signed-off-by: David Sterba <dsterba@suse.com>
If I run autoreconf -fi in the btrfs-progs repo in VirtualBox, it fails with
"possibly undefined macro: AC_DEFINE". To fix this, I added m4 as the macro
directory in configure.ac; adding that line fixes the error when running
autoreconf.
Versions:
autoreconf 2.69
automake 1.15.1
btrfs-progs v6.3-fba31d63
openSUSE 15.5
VirtualBox 6.1.40
configure.ac:7: error: possibly undefined macro: AC_DEFINE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:63: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:276: error: possibly undefined macro: AC_MSG_WARN
autoreconf: /usr/bin/autoconf failed with exit status: 1
Issue: #632
Signed-off-by: David Sterba <dsterba@suse.com>
More complete test coverage:
- json and string table formatters
- fuzz tests (no mount)
- libbtrfs build test
- libbtrfsutil python test
- ioctl build test
- hash tests
Signed-off-by: David Sterba <dsterba@suse.com>
The links to repositories and contributors are hard to find as they're below
the feature list. Move them up as they serve as an overview.
Signed-off-by: David Sterba <dsterba@suse.com>
There are reports that the json output of 'qgroup show' crashes due to an
internal error when printing the limit values:
INTERNAL ERROR: unknown unit base, mode 2304
btrfs(internal_error+0x10a)[0x5605c37ce48a]
btrfs(pretty_size_snprintf+0x5c)[0x5605c37d105c]
btrfs(fmt_print+0x44e)[0x5605c37d178e]
btrfs(+0x7ed1d)[0x5605c3800d1d]
btrfs(main+0x8f)[0x5605c379beff]
/lib64/libc.so.6(+0x27bb0)[0x7f83924ddbb0]
/lib64/libc.so.6(__libc_start_main+0x8b)[0x7f83924ddc79]
btrfs(_start+0x25)[0x5605c379d405]
common/units.c:82: pretty_size_snprintf: Assertion `0` failed, value 0
btrfs(+0x1d4b1)[0x5605c379f4b1]
btrfs(pretty_size_snprintf+0x7b)[0x5605c37d107b]
btrfs(fmt_print+0x44e)[0x5605c37d178e]
btrfs(+0x7ed1d)[0x5605c3800d1d]
btrfs(main+0x8f)[0x5605c379beff]
/lib64/libc.so.6(+0x27bb0)[0x7f83924ddbb0]
/lib64/libc.so.6(__libc_start_main+0x8b)[0x7f83924ddc79]
btrfs(_start+0x25)[0x5605c379d405]
This is caused by the "size" format that requires the unit mode, but it was
not specified and some stack value was used instead. As json prints the raw
values, use the plain %llu format.
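A minimal sketch of the resulting pattern (hypothetical helper and key names,
not the actual btrfs-progs formatter): for json the raw number is emitted with
%llu instead of being routed through pretty_size_snprintf(), which needs an
explicit unit mode:

#include <stdio.h>

/* Illustrative only: print a qgroup limit as a raw value for json output. */
static void print_limit_json(const char *key, unsigned long long value)
{
        /* Raw bytes, no unit conversion; json consumers format it themselves. */
        printf("\"%s\": %llu\n", key, value);
}

int main(void)
{
        print_limit_json("max_referenced", 1073741824ULL);
        return 0;
}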
Link: https://bugzilla.opensuse.org/show_bug.cgi?id=1206960
Link: https://bugzilla.opensuse.org/show_bug.cgi?id=1209136#c15
Signed-off-by: David Sterba <dsterba@suse.com>
Currently "btrfstune --csum" allows us to change the csum to the same
one, this is good for testing but not good for end users, as if the end
user interrupts it, they have to resume the change (even it's to the
same csum type) until it finished, or kernel would reject such fs.
Furthermore, we never change the super block csum type until we
completely changed the csum type, thus for resume cases, the fs would
still show up as using the old csum type thus won't cause any problem
resuming.
So here we just reject the csum conversion if the target csum type is
the same as the existing csum type.
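A minimal sketch of the new rejection check, with hypothetical names (the
enum values follow the on-disk csum type numbering):

#include <stdio.h>

enum csum_type { CSUM_CRC32C = 0, CSUM_XXHASH = 1, CSUM_SHA256 = 2, CSUM_BLAKE2 = 3 };

/* Refuse to start a conversion to the csum type the fs already uses. */
static int check_csum_change(enum csum_type current, enum csum_type target)
{
        if (current == target) {
                fprintf(stderr, "ERROR: the fs already uses the target csum type\n");
                return -1;
        }
        return 0;
}

int main(void)
{
        /* Example: the fs already uses sha256 and sha256 is requested again. */
        return check_csum_change(CSUM_SHA256, CSUM_SHA256) ? 1 : 0;
}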
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
When the csum conversion is interrupted while changing the data csum
objectid, we should just resume the objectid conversion.
This situation can be detected by comparing the old and new csum items:
either both exist but don't intersect (interrupted halfway), or only new
csum items exist (interrupted after we have deleted the old csums).
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
If the csum conversion is interrupted while the old csums are being deleted,
we should resume by continuing to delete the old csums.
The function delete_old_data_csums() can already handle half-deleted cases.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There is a very rare chance to hit a fs with an empty csum tree that still
has the CHANGING_DATA_CSUM flag set.
The window is very small, but it's still possible, so handle it by jumping
directly to the metadata csum change.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
There are two possible situations where there are no new csum items at all:
- We just inserted the csum change item
  This means all csums are still of the old csum type, and we can start the
  conversion from the beginning, only skipping the csum change item
  insertion.
- We finished the data csum conversion but have not yet started the metadata
  conversion
  This means all csums are already of the new csum type, and we can resume
  by starting to change the metadata csums.
To distinguish the two cases, we need to read the first sector and verify
the data content against both csum types.
If the csum matches the old csum type, we resume from generating the new
data csums.
If the csum matches the new csum type, we resume from rewriting the metadata
csums.
If the csum matches neither csum type, we have some big problems.
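A sketch of that decision under the assumptions above; verify_first_sector()
is a purely illustrative stand-in for reading the first sector and checking
it against a csum of the given type:

#include <stdbool.h>
#include <stdio.h>

enum resume_point {
        RESUME_NEW_DATA_CSUMS,  /* data still carries the old csum type */
        RESUME_METADATA_CSUMS,  /* data already carries the new csum type */
        RESUME_ERROR,           /* matches neither, something is very wrong */
};

/* Illustrative stub: would read the first sector and verify its csum. */
static bool verify_first_sector(int csum_type)
{
        (void)csum_type;
        return false;
}

static enum resume_point pick_resume_point(int old_type, int new_type)
{
        if (verify_first_sector(old_type))
                return RESUME_NEW_DATA_CSUMS;
        if (verify_first_sector(new_type))
                return RESUME_METADATA_CSUMS;
        return RESUME_ERROR;
}

int main(void)
{
        printf("resume point: %d\n", pick_resume_point(0, 2));
        return 0;
}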
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
If a csum change is interrupted at the new data csum generation stage, we
can detect this situation by checking the old and new csum items.
At the new data csum generation stage, the old csums are untouched, and only
new csum items (with a different objectid) are inserted into the csum tree.
Thus the old csum items should cover a larger range, while the new csum
items should be a subset of the old csums.
The resume path starts by re-generating the remaining part, then goes
through the remaining conversion stages.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
For an interrupted metadata checksum change, we only need to call the same
change_meta_csums().
Since we don't have any record of the last converted metadata, we have to go
through all metadata anyway, and the existing change_meta_csums() already
implements the needed checks to skip converted metadata.
While we're here, also implement all the surrounding checks, like making
sure the new target csum type matches the interrupted one.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
[BUG]
Doing the following csum changes in a row fails:
# mkfs.btrfs -f --csum crc32c $dev
# btrfstune --csum sha256 $dev
# btrfstune --csum crc32c $dev
# btrfstune --csum sha256 $dev
WARNING: Experimental build with unstable or unfinished features
WARNING: Switching checksums is experimental, do not use for valuable data!
Proceed to switch checksums
ERROR: failed to insert csum change item: File exists
ERROR: failed to generate new data csums: File exists
WARNING: reserved space leaked, flag=0x4 bytes_reserved=16384
extent buffer leak: start 30572544 len 16384
extent buffer leak: start 30441472 len 16384
WARNING: dirty eb leak (aborted trans): start 30441472 len 16384
[CAUSE]
During every csum change operation, btrfstune inserts a temporary csum
change item into the root tree.
But unfortunately after the conversion btrfstune doesn't properly delete the
csum change item, resulting in the following items in the root tree:
item 10 key (CSUM_CHANGE TEMPORARY_ITEM 0) itemoff 13423 itemsize 0
temporary item objectid CSUM_CHANGE offset 0
target csum type crc32c (0)
item 11 key (CSUM_CHANGE TEMPORARY_ITEM 2) itemoff 13423 itemsize 0
temporary item objectid CSUM_CHANGE offset 2
target csum type sha256 (2)
Thus on the last conversion attempt to go back to SHA256, we fail to insert
the same item, which causes the above error.
[FIX]
After finishing the metadata csum conversion, properly remove the csum
change item.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Since commit c8593f65cbf3 ("btrfs-progs: sync tree-checker.[ch] from
kernel"), btrfs-progs can do the kernel level tree block checks, which is
not really suitable for dump-tree.
In a lot of cases we use the dump-tree tool to collect debugging details
from end users.
If it's a bitflip causing the rejection, we would be unable to determine the
cause.
So this patch adds OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS for dump-tree.
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The csum change for metadata is like the uuid change: we go with an in-place
csum update without any COW.
During the rewrite, we manually check the csum (both old and new) for each
tree block, and only rewrite the csum if the tree block matches its old csum
(for a tree block matching its new csum we need to do nothing).
When everything is done, just update the superblock to reflect the csum type
change.
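A minimal sketch of the per-block decision, with stand-in helpers
(csum_matches() and rewrite_csum() are illustrative, not the btrfs-progs API):

#include <stdbool.h>

/* Illustrative stubs for verifying and recomputing a tree block csum. */
static bool csum_matches(const void *eb, int csum_type)
{
        (void)eb;
        (void)csum_type;
        return false;
}

static void rewrite_csum(void *eb, int csum_type)
{
        (void)eb;
        (void)csum_type;
}

/*
 * In-place rewrite, no COW: only blocks still verifying against the old csum
 * type get a new csum; blocks already matching the new type are left alone.
 */
static int convert_tree_block(void *eb, int old_type, int new_type)
{
        if (csum_matches(eb, new_type))
                return 0;       /* already converted */
        if (csum_matches(eb, old_type)) {
                rewrite_csum(eb, new_type);
                return 1;       /* converted now */
        }
        return -1;              /* matches neither csum type: corruption */
}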
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
At this stage, the csum tree should only contain the temporary csum
items (CSUM_CHANGE, EXTENT_CSUM, logical), and no more old csum items.
Now we can convert those temporary csum items back to regular csum items
by changing their key objectids back to EXTENT_CSUM.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The new helper function, delete_old_data_csums(), deletes the old data csums
while keeping the new ones untouched.
Since the new data csums have a key objectid (-13) smaller than that of the
old data csums (-10), we can safely delete from the tail of the btree.
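A small worked example of that ordering (key objectids compare as unsigned
64-bit values, so -13 sorts before -10 and the old items end up at the tail):

#include <stdio.h>

int main(void)
{
        /* Temporary new-csum objectid (-13) vs. regular EXTENT_CSUM (-10). */
        unsigned long long new_objectid = -13ULL;
        unsigned long long old_objectid = -10ULL;

        /* Prints 1: new csum items sort before the old ones. */
        printf("%d\n", new_objectid < old_objectid);
        return 0;
}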
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
This patch modifies btrfs_csum_file_block() to handle a csum type other than
the one used by the current fs.
The new data checksums use a different objectid (-13) to distinguish them
from the existing one (-10).
This requires changing the tree-checker to skip the item size checks, since
the new csum can be larger than the original csum.
After this stage, the resulting csum tree looks like this:
item 0 key (CSUM_CHANGE EXTENT_CSUM 13631488) itemoff 8091 itemsize 8192
range start 13631488 end 22020096 length 8388608
item 1 key (EXTENT_CSUM EXTENT_CSUM 13631488) itemoff 7067 itemsize 1024
range start 13631488 end 14680064 length 1048576
Note the itemsize is 8 times the original one, as the original csum type is
CRC32 (4 bytes per sector) while the target csum type is SHA256 (32 bytes
per sector), 8 times the size.
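A short worked example of where the two item sizes come from, assuming a
4KiB sector size:

#include <stdio.h>

int main(void)
{
        const unsigned long long length = 1048576;      /* 1MiB of csummed data */
        const unsigned long long sectorsize = 4096;
        unsigned long long sectors = length / sectorsize;       /* 256 */

        /* 256 * 4 = 1024 bytes (old item), 256 * 32 = 8192 bytes (new item). */
        printf("crc32 item size: %llu\n", sectors * 4);
        printf("sha256 item size: %llu\n", sectors * 32);
        return 0;
}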
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
This patch introduces a new helper function, read_verify_one_data_sector(),
to do the data read and checksum verification (against the old csum).
This data is later re-used to generate the new csum.
And since we're introducing the helper function, we also build the skeleton
to iterate the data extents using the old csum tree.
This method is much better than iterating using the extent tree, which has
no direct indicator of whether a data extent has a csum or not (nodatasum or
preallocated).
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The overall idea is to make sure there are no running operations (balance,
dev-replace, dirty log) on the fs before the csum change, and also to reject
half-converted csums for now.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The existing attempt at changing csum types is the following:
- Create a new temporary csum root
- Generate new data csums into the temporary csum root
- Drop the old csum tree and make the temporary one as csum root
- Change the checksums for metadata in-place
Unfortunately, after some experiments, the csum root switch method turned
out to have a big pitfall: the backref items in the extent tree.
Those backref items still point back to the old tree, meaning that without a
lot of extra tricks the extent tree would be corrupted.
Thus we have to go with a new single-tree variant:
- Generate new data csums into the csum root
The new data csums would have a different objectid to distinguish
them.
- Drop the old data csum items
- Change the key objectids of the new csums
- Change the checksums for metadata in-place
This unfortunately means we have to revert most of the old code and update
the temporary item format.
The new temporary item only records the target csum type.
At every stage we have a method to determine the progress, thus there is no
need to record it in the item, but in the future this is still open for
change.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Add structures and prototypes. Leave out undefined types (time64_t,
block_group_rsv) for now. Kernel 6.4-rc1.
Signed-off-by: David Sterba <dsterba@suse.com>
The send.h for libbtrfs was separated some time ago, so we're now free to
keep up with the kernel, 6.4-rc1.
Signed-off-by: David Sterba <dsterba@suse.com>
It's a known problem that a received subvolume would lose its UUID after
switching to RW, which can lead to problems in later receives for
snapshotting and cloning.
In that case, we just output a simple error message like:
ERROR: cannot find parent subvolume
Or
ERROR: clone: did not find source subvol
Normally we need to use "btrfs receive --dump" to find out what the missing
subvolume UUID is, which takes extra work.
This patch:
- Adds the missing subvolume's UUID to the output
- Unifies the error messages to the same format
Now the error messages look like:
ERROR: snapshot: cannot find parent subvolume 1b4e28ba-2fa1-11d2-883f-b9a761bde3fb
ERROR: clone: cannot find source subvolume 1b4e28ba-2fa1-11d2-883f-b9a761bde3fb
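A minimal sketch of how such a message can be produced with libuuid (helper
name and wording illustrative; the real code lives in the receive path):

#include <stdio.h>
#include <uuid/uuid.h>

/* Print a receive error that includes the UUID of the missing subvolume. */
static void report_missing_subvol(const char *op, const char *role, const uuid_t uuid)
{
        char uuid_str[37];      /* 36 characters plus the terminating NUL */

        uuid_unparse(uuid, uuid_str);
        fprintf(stderr, "ERROR: %s: cannot find %s subvolume %s\n", op, role, uuid_str);
}

int main(void)
{
        uuid_t uuid = { 0 };

        report_missing_subvol("snapshot", "parent", uuid);
        return 0;
}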
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The new test case creates an empty ext4 filesystem with a 64K block size,
which can lead to a new data chunk that is no longer 1:1 mapped.
Then it converts the fs and verifies the result with --check-data-csum to
make sure the image file is fine.
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
[BUG]
There is a report that btrfs-convert leads to bad csum for the image
file.
The reproducer looks like this:
(note the 64K block size, it's used to force a certain chunk layout)
# touch test.img
# truncate -s 10G test.img
# mkfs.ext4 -b 64K test.img
# btrfs-convert -N 64K test.img
# btrfs check --check-data-csum test.img
Opening filesystem to check...
Checking filesystem on /home/adam/test.img
UUID: 39d49537-a9f5-47f1-b6ab-7857707b9133
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking csums against data
mirror 1 bytenr 4563140608 csum 0x3f1fa0ef expected csum 0xa4c4c072
mirror 1 bytenr 4563206144 csum 0x55dcf0d3 expected csum 0xa4c4c072
mirror 1 bytenr 4563271680 csum 0x4491b00a expected csum 0xa4c4c072
mirror 1 bytenr 4563337216 csum 0x655d1f61 expected csum 0xa4c4c072
mirror 1 bytenr 4563402752 csum 0xd37114d3 expected csum 0xa4c4c072
mirror 1 bytenr 4563468288 csum 0x4c2dab30 expected csum 0xa4c4c072
mirror 1 bytenr 4563533824 csum 0xa80fceed expected csum 0xa4c4c072
mirror 1 bytenr 4563599360 csum 0xaf610db8 expected csum 0xa4c4c072
mirror 1 bytenr 4563795968 csum 0x67b3c8a0 expected csum 0xa4c4c072
ERROR: errors found in csum tree
[6/7] checking root refs
...
[CAUSE]
The initial failure above is for logical bytenr 4563140608, which is inside
the relocated range of image file offset [0, 1M).
During convert, we migrate the original image file ranges which will later
be covered by the superblock and other reserved ranges.
The migration happens as:
- Read out the original data
- Reserve a new file extent
- Write the data back to the file extent
Note that the new file extent can be inside a new data chunk, thus it's no
longer 1:1 mapped.
- Generate the new csum for the new file extent
The problem happens at the last stage. We should read out the data from the
new file extent, but we call read_disk_extent() using the logical bytenr,
and read_disk_extent() does not do the logical -> physical mapping.
Thus we read garbage, not the newly written data, and use that garbage to
generate the csum, which causes the above problem.
[FIX]
Instead of read_disk_extent(), call read_data_from_disk(), which does the
proper logical -> physical mapping and thus fixes the bug.
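An illustrative toy example of the difference (single-entry chunk map with a
made-up physical offset, not btrfs code): reading at the logical address is
only correct while logical and physical are identical, which stops being true
once the data sits in a regular data chunk:

#include <stdio.h>

/* Toy chunk map entry: a logical range mapped to a different physical offset. */
struct chunk_map {
        unsigned long long logical;
        unsigned long long physical;
        unsigned long long length;
};

static long long logical_to_physical(const struct chunk_map *map,
                                     unsigned long long logical)
{
        if (logical < map->logical || logical >= map->logical + map->length)
                return -1;      /* not covered by this chunk */
        return map->physical + (logical - map->logical);
}

int main(void)
{
        /* Hypothetical numbers, loosely modeled on the report above. */
        const struct chunk_map map = {
                .logical = 4563140608ULL,
                .physical = 1104150528ULL,
                .length = 1073741824ULL,
        };

        /* A raw read at 4563140608 would hit unrelated bytes; the correct
         * read goes to the translated physical offset instead. */
        printf("physical: %lld\n", logical_to_physical(&map, 4563140608ULL));
        return 0;
}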
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The function write_and_map_eb() is quite abused as a way to write any
generic buffer back to disk.
But we have a more suitable function already, write_data_to_disk().
This patch removes the abused write_and_map_eb() calls and converts those
three call sites to write_data_to_disk() instead.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The following functions accept a buffer for write, which can be marked
as const:
- btrfs_pwrite()
- write_data_to_disk()
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
It's not a common practice to use the same io function for both read and
write (we have pread() and pwrite(), not pio()).
Furthermore, the original function has the following problems:
- Not returning proper error number
If we hit ioctl/stat errors we just return 0 with errno set, so the caller
would treat it as a short read, not a proper error.
- Unnecessary @ret_rw
This is not that obvious if we have different handling for read and
write, but if we split them it's super obvious we can reuse @ret.
- No proper copy back for short read
- Unable to constify the @buf pointer for write operation
All those problems are addressed in this patch.
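A minimal sketch of the resulting shape (standalone pread()/pwrite()
wrappers, not the actual btrfs-progs helpers): each direction gets its own
function, failures return a negative errno instead of 0 with errno set, and
the write-side buffer becomes const:

#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

/* Read helper: returns bytes read, or -errno on failure. */
static ssize_t do_pread(int fd, void *buf, size_t count, off_t offset)
{
        ssize_t ret = pread(fd, buf, count, offset);

        return ret < 0 ? -errno : ret;
}

/* Write helper: same error convention; @buf is never modified, so it is const. */
static ssize_t do_pwrite(int fd, const void *buf, size_t count, off_t offset)
{
        ssize_t ret = pwrite(fd, buf, count, offset);

        return ret < 0 ? -errno : ret;
}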
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
When compiling on a system with gcc 12.2.1, the following warnings are
generated. They can be fixed by adding the static storage class specifier.
cmds/inspect.c:733:5: warning: no previous prototype for ‘cmp_cse_devid_start’ [-Wmissing-prototypes]
733 | int cmp_cse_devid_start(const void *va, const void *vb)
| ^~~~~~~~~~~~~~~~~~~
cmds/inspect.c:754:5: warning: no previous prototype for ‘cmp_cse_devid_lstart’ [-Wmissing-prototypes]
754 | int cmp_cse_devid_lstart(const void *va, const void *vb)
| ^~~~~~~~~~~~~~~~~~~~
cmds/inspect.c:775:5: warning: no previous prototype for ‘print_list_chunks’ [-Wmissing-prototypes]
775 | int print_list_chunks(struct list_chunks_ctx *ctx, unsigned sort_mode,
| ^~~~~~~~~~~~~~~~~
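A minimal before/after illustration with a generic comparator (not the actual
functions): giving file-local functions internal linkage removes the need for
a prototype and silences -Wmissing-prototypes:

/* Before: external linkage and no prototype in any header -> warning. */
/* int cmp_entries(const void *va, const void *vb) { ... } */

/* After: static limits the symbol to this file, no separate prototype needed. */
static int cmp_entries(const void *va, const void *vb)
{
        int a = *(const int *)va;
        int b = *(const int *)vb;

        return (a > b) - (a < b);
}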
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The compiler throws these false positive warnings in the following function
flow. Apparently, %parent and %p are initialized in tree_search_for_insert().
__set_extent_bit()
state = tree_search_for_insert(tree, start, &p, &parent);
insert_state_fast(tree, prealloc, p, parent, bits, changeset);
rb_link_node(&state->rb_node, parent, node);
Compile warnings:
In file included from ./common/extent-cache.h:23,
from kernel-shared/ctree.h:26,
from kernel-shared/extent-io-tree.c:4:
kernel-shared/extent-io-tree.c: In function ‘__set_extent_bit’:
./kernel-lib/rbtree.h:80:28: warning: ‘parent’ may be used uninitialized in this function [-Wmaybe-uninitialized]
node->__rb_parent_color = (unsigned long)parent;
^~~~~~~~~~~~~~~~~~~~~
kernel-shared/extent-io-tree.c:996:18: note: ‘parent’ was declared here
struct rb_node *parent;
^~~~~~
In file included from ./common/extent-cache.h:23,
from kernel-shared/ctree.h:26,
from kernel-shared/extent-io-tree.c:4:
./kernel-lib/rbtree.h:83:11: warning: ‘p’ may be used uninitialized in this function [-Wmaybe-uninitialized]
*rb_link = node;
~~~~~~~~~^~~~~~
kernel-shared/extent-io-tree.c:995:19: note: ‘p’ was declared here
struct rb_node **p;
Fix:
Initialize to NULL, as in the kernel.
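A minimal sketch of the change (struct rb_node kept opaque here, only the
declarations matter):

#include <stddef.h>

struct rb_node;         /* real definition lives in kernel-lib/rbtree.h */

static void init_example(void)
{
        /* Initialized as in the kernel, so -Wmaybe-uninitialized stays quiet
         * even though tree_search_for_insert() assigns both on success. */
        struct rb_node **p = NULL;
        struct rb_node *parent = NULL;

        (void)p;
        (void)parent;
}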
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
With all the known warnings fixed, we can enable -Wmissing-prototypes
and prevent such warnings from happening.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The fixes involve the following changes:
- Unexport functions which are not used outside of their file
* print_path_column()
* parse_reflink_range()
* btrfs_list_setup_print_column()
* device_get_partition_size_sysfs()
* max_zone_append_size()
- Include the related headers before implementing the functions
* change-uuid.c
* convert-bgt.c
* seed.h
- Add missing headers caused by the above header changes
* include <uuid/uuid.h> for tune/tune.h.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
The optimized implementation sha256_process_x86() is not declared anywhere;
this can be caught by the -Wmissing-prototypes option.
Just declare it properly in sha.h.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Since kernel 3.12, any btrfs mounted by the kernel has a UUID tree created
to record the UUIDs of all its subvolumes.
Without a UUID tree, the libbtrfs send functionality has to go through all
the subvolumes seen so far and record their UUIDs internally so that
libbtrfs send can locate a desired subvolume.
Since commit 194b90aa2c ("btrfs-progs: libbtrfs: remove declarations
without exports in send-utils") we have been deprecating this old interface,
meaning its users can no longer build their own subvolume list.
And we have received no error report on this so far. So let's finish the
cleanup by removing the support for filesystems without a UUID tree.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
This function was introduced by commit b031fe84fd ("btrfs-progs: zoned:
implement zoned chunk allocator") but has never been called since then.
Furthermore, the kernel zoned code never had such a function in the first
place; everything is handled by btrfs_find_allocatable_zones().
Thus we can safely remove the function.
Reviewed-by: Anand Jain <anand.jain@oracle.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
This syncs tree-checker.c from the kernel. The main modification was to
add an open ctree flag to skip the deeper leaf checks, and to plumb it
through tree-checker.c. We need this for things like fsck or btrfs-image
that need to work with slightly corrupted file systems, where these checks
simply make us unable to look at the corrupted blocks.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>