If scrub start discovers that scrub is already running,
we need to set prg_fd to -1 before goto out, or we'll
try to close it again in the error path.
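Roughly this pattern, as a sketch (the -EINPROGRESS condition and the shape of the out: label are assumptions, not the literal cmds-scrub.c code):

    if (err == -EINPROGRESS) {
            /* a scrub is already running; we're done with our progress fd,
             * so close it and mark it closed so the shared error path
             * below can't close() it a second time */
            close(prg_fd);
            prg_fd = -1;
            goto out;
    }
    ...
    out:
            if (prg_fd >= 0)
                    close(prg_fd);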
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Zach Brown <zab@redhat.com>
If connection fails the socket is leaked when the status file is used
instead. Close it to trivially cut down on fd use and to bring down the
noise in static code analysis.
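A sketch of the fix, with assumed variable names:

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* can't talk to the running scrub; close the socket before
             * falling back to reading the status file */
            close(fd);
            fd = -1;
    }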
Signed-off-by: Zach Brown <zab@redhat.com>
It looks possible to hit the search_again label without using the
prealloc. A new prealloc is allocated, leaking the current one.
Every use of prealloc sets it to null so let's just allocate a new
prealloc when we don't already have one.
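In sketch form, with alloc_extent_state() standing in for however the state is actually allocated:

    search_again:
            if (!prealloc)                  /* don't replace one we still hold */
                    prealloc = alloc_extent_state();
            if (!prealloc)
                    return -ENOMEM;
            ...
            /* every path that consumes prealloc sets it back to NULL, so
             * looping back here with one still in hand reuses it instead
             * of leaking it */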
Signed-off-by: Zach Brown <zab@redhat.com>
btrfs_scan_one_dir() can overflow an arbitrarily small 256 byte buffer
with an arbitrarily slightly larger 1024 byte buffer as it remembers the
path of a dir to later descend.
Make these buffers the same size to stop the overflow and choose PATH_MAX
for that size so that it won't fail on legitimately bonkers paths.
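In essence (a sketch; the variable names are assumptions):

    #include <limits.h>

    /* the path being built and the remembered directory path are now the
     * same size, so copying one into the other can no longer overflow */
    char fullpath[PATH_MAX];        /* was 1024 bytes */
    char dirname[PATH_MAX];         /* was 256 bytes, and overflowable */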
Signed-off-by: Zach Brown <zab@redhat.com>
Path allocation failure already has its own return, remember to free the
path when the error label is taken.
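The shape of the fix, sketched:

    path = btrfs_alloc_path();
    if (!path)
            return -ENOMEM;         /* allocation failure keeps its own return */
    ...
    if (ret < 0)
            goto error;
    ...
    error:
            btrfs_free_path(path);  /* now also freed when the label is taken */
            return ret;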
Signed-off-by: Zach Brown <zab@redhat.com>
struct btrfs_super_block is about 3.5k but a few writing paths were writing it
out as the full 4k BTRFS_SUPER_INFO_SIZE, leaking a few hundred bytes
after the super_block onto disk. In practice this meant the memory
after super_copy in struct btrfs_fs_info and whatever came after it in
the heap.
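The writes now pad out to the full size from a zeroed buffer instead of reading past the end of the struct, roughly (a sketch; the helpers this lands in differ in detail):

    char buf[BTRFS_SUPER_INFO_SIZE];

    /* the super is ~3.5k; zero the full 4k and copy only the struct so
     * whatever follows super_copy in memory never makes it onto disk */
    memset(buf, 0, BTRFS_SUPER_INFO_SIZE);
    memcpy(buf, sb, sizeof(*sb));
    ret = pwrite64(fd, buf, BTRFS_SUPER_INFO_SIZE, BTRFS_SUPER_INFO_OFFSET);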
Signed-off-by: Zach Brown <zab@redhat.com>
old_left_nritems is unsigned so BUG_ON(old_left_nritems < 0) is
impossible. Presumably the BUG_ON() meant to test that it wasn't 0 so
that btrfs_item_offset_nr() doesn't get a nr of -1.
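I.e., presumably (sketch):

    u32 old_left_nritems = btrfs_header_nritems(left);

    /* unsigned, so "< 0" could never fire; 0 is the case worth catching,
     * since it would hand btrfs_item_offset_nr() a nr of -1 */
    BUG_ON(old_left_nritems == 0);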
Signed-off-by: Zach Brown <zab@redhat.com>
Check for failure by testing for a negative file descriptor, not a
descriptor of 0. And if it failed we have nothing to close().
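That is (sketch):

    fd = open(path, O_RDONLY);
    if (fd < 0)             /* 0 is a perfectly valid descriptor */
            return -errno;  /* open() failed, so there is nothing to close() */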
Signed-off-by: Zach Brown <zab@redhat.com>
btrfs_free_path() derefs the path before freeing it, so it must not be
passed a null pointer when allocation fails.
Signed-off-by: Zach Brown <zab@redhat.com>
'next' can never be non-null in the body of these loops. It's
initialized to NULL and the loop is terminated the moment it is set.
Signed-off-by: Zach Brown <zab@redhat.com>
copy_one_inline() meant to test the return of pwrite() with ram_size.
Presumably the comparison with len was copied from the test earlier in
the function.
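So the check presumably becomes (sketch; the error return is illustrative):

    ret = pwrite(fd, buf, ram_size, pos);
    if (ret != ram_size)    /* was mistakenly compared against len */
            return -EIO;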
Signed-off-by: Zach Brown <zab@redhat.com>
size_sourcedir() uses shockingly bad code to try and estimate the size
of the files and directories in a subtree.
- Its use of snprintf(), strcat(), and sscanf() with arbitrarily small
on-stack buffers manages to overflow the stack a few times when given
long file names.
$ BIG=$(perl -e 'print "a" x 200')
$ mkdir -p /tmp/$BIG/$BIG/$BIG/$BIG/$BIG
$ mkfs.btrfs /tmp/img -r /tmp/$BIG/$BIG/$BIG/$BIG/$BIG
*** stack smashing detected ***: mkfs.btrfs terminated
- It passes raw paths to system(), allowing file names to be interpreted
as shell control characters.
$ mkfs.btrfs /tmp/img -r /tmp/spacey\ dir/
du: cannot access `/tmp/spacey': No such file or directory
du: cannot access `dir/': No such file or directory
- It redirects du output to "temp_file" in the current directory,
allowing overwriting of files through symlinks.
$ echo hi > target
$ ln -s target temp_file
$ mkfs.btrfs /tmp/img -r /tmp/somedir/
$ cat target
3 /tmp/somedir/
This fixes the worst problems while maintaining -r functionality by
tearing out the system() code and using ftw() to walk the source tree
and sum up st.st_size.
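A minimal sketch of the ftw() approach (callback and variable names are illustrative, not the literal mkfs code, and the real function's signature and overhead accounting differ):

    #include <ftw.h>
    #include <stdint.h>
    #include <sys/stat.h>

    static uint64_t total_source_size;

    static int add_file_size(const char *fpath, const struct stat *st, int type)
    {
            /* sum st_size for whatever we can stat: no shelling out to du,
             * no fixed-size path buffers, no temp_file in the cwd */
            if (type == FTW_F || type == FTW_D)
                    total_source_size += st->st_size;
            return 0;
    }

    static uint64_t size_sourcedir(const char *dir)
    {
            total_source_size = 0;
            if (ftw(dir, add_file_size, 32))
                    return 0;
            return total_source_size;
    }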
Signed-off-by: Zach Brown <zab@redhat.com>
check_owner_ref() could deref a null path node if btrfs_search_slot()
fails or simply doesn't find a tree tall enough to get to the parent of
the desired block.
This was flagged by static analysis warning that btrfs_search_slot()'s
return value wasn't being checked.
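The guard is roughly this (a sketch; the surrounding btrfsck code differs in detail):

    ret = btrfs_search_slot(NULL, ref_root, &key, &path, 0, 0);
    if (ret)
            goto out;       /* error, or nothing useful was found */

    parent = path.nodes[level + 1];
    if (!parent)            /* tree not tall enough to have this parent */
            goto out;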
Signed-off-by: Zach Brown <zab@redhat.com>
Again: caught by static analysis.
Errors cow-ing the root block are silently being dropped. This is
just a step towards error handling because both the caller and callee
assert on errors.
Signed-off-by: Zach Brown <zab@redhat.com>
The super block magic is a le64 whose value looks like an unterminated
string in memory. The lack of null termination leads to clumsy use of
string functions and causes static analysis tools to warn that the
string will be unterminated.
So let's just treat it as the le64 that it is. Endian wrappers are used
on the constant so that they're compiled into run-time constants.
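So the check becomes a plain 64-bit compare, roughly (a sketch; the field access is illustrative, and the constant is the ascii of "_BHRfS_M" read as a little-endian u64):

    #define BTRFS_MAGIC 0x4D5F53665248425FULL /* ascii _BHRfS_M, no null */

    /* cpu_to_le64() of a constant folds into a constant, so this is a
     * straight integer compare against the on-disk magic */
    if (sb->magic != cpu_to_le64(BTRFS_MAGIC))
            return -EINVAL;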
Signed-off-by: Zach Brown <zab@redhat.com>
This was a bug from a long time ago that never actually got fixed. We start
with bytenr 0 when looping through all of the block groups, but
btrfs_lookup_block_group() will bail out since it can't find a block group
with 0 as the bytenr. btrfs_lookup_first_block_group() will be nice and
adjust the start up to the right value, so this way we reset all the block
groups properly and don't screw up the user's block group accounting. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Unless it was intentional to include the uuid when the -s (show snapshots
only) option is given, we need this break statement.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
This patch makes btrfsck print the filesystem to be checked to stdout, as
well as the UUID of the corresponding partition.
This should be helpful when analyzing (copied and pasted) btrfsck output.
Signed-off-by: Dieter Ries <mail@dieterries.net>
- rename to match code where applicable
- add missing options
- unify the help strings in short and detailed sections
- fix a few typos
Signed-off-by: David Sterba <dsterba@suse.cz>
Rename the filter options in the 'subvol list' subcommand so that we can
distinguish them from the options that just show some extra information in
the output and can have a matching uppercase filter.
Signed-off-by: David Sterba <dsterba@suse.cz>
The btrfs snapshot list command can be stopped by deleted subvolumes.
The problem may happen in two ways:
1. A subvolume deletion is not committed, that is, the ROOT_BACKREF has been
deleted but the ROOT_ITEM still exists. The command will fail to fill the
path of the deleted subvolume because we can not get the parent fs/file tree.
2. A subvolume may be deleted while we are filling the paths. For example:
	Fs tree
	 |-> subv0
	      |-> subv1
We may fill the path of subv1 first; after that, some user deletes subv1 and
subv0, and then we fill the path of subv0. The command will fail to fill the
path of subv0 because it no longer exists, and it will also fail to make the
full path of subv1 because we don't have the path of subv0.
Since these subvolumes have been deleted, we should filter them out. This
patch fixes the above problem in the following way:
For the 1st case, ->ref_tree of the deleted subvolumes is 0.
For the 2nd case, if the error number that ioctl() returns is ENOENT, we set
->ref_tree to 0.
When we make the full path of a subvolume, we check its ->ref_tree and its
parent's ->ref_tree. If either of them is 0, we filter the subvolume out.
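The filter check then looks something like this (a sketch; the field names follow the description above rather than the exact struct root_info layout):

    /* deleted subvolumes were marked by setting ref_tree to 0: either the
     * ROOT_BACKREF was already gone, or the path lookup ioctl() said ENOENT */
    if (!ri->ref_tree)
            return 1;               /* filter this subvolume out */
    if (parent && !parent->ref_tree)
            return 1;               /* can't build the full path */
    return 0;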
Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Anand Jain <anand.jain@oracle.com>
This adds a show sub-command to the btrfs subvol cli
to display detailed information about the given subvol
or snapshot.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
get_subvol_name() can be used outside of cmds-send.c,
so this patch makes that possible without changing its
original intention.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Currently you can print subvols in a list or table format.
This patch provides a way to extend this to other formats,
like the upcoming raw format.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
We need a function which can get the root_info of a given
subvol. This is in preparation for adding support for the
show sub-command.
Signed-off-by: Anand Jain <anand.jain@oracle.com>
As we will add more ways to list and manage the subvols
and snapshots, it's better if we have struct root_info
defined in the header file.
Signed-off-by: Anand Jain <anand.jain@oracle.com>