Btrfs-progs tests

A testsuite covering functionality of btrfs-progs, ie. the checker, image, mkfs and similar tools. There are no additional requirements on kernel features (other than CONFIG_BTRFS_FS built-in or as a module); the tests build on top of core functionality like snapshots and device management. In some cases optional features are turned on by mkfs and the filesystem image could be mounted; such tests might fail if kernel support is missing.

Quick start

Run the tests from the top directory:

$ make test
$ make test-fsck
$ make test-convert

or selectively from the tests/ directory:

$ ./fsck-tests.sh
$ ./misc-tests.sh

The verbose output of the tests is logged into a file named after the test category, eg. fsck-tests-results.txt.

Selective testing

The tests are prefixed by a number for ordering and uniqueness. To run a particular test, use:

$ make TEST=MASK test

where MASK is a glob expression that will execute only the tests that match it. Here the test number comes in handy:

$ make TEST=001\* test-fsck
$ TEST=001\* ./fsck-tests.sh

will run the first test in the fsck-tests subdirectory.

Test structure

tests/fsck-tests/:

  • tests targeted at bugs that are fixable by fsck

tests/convert-tests/:

  • coverage tests of ext2/3/4 and btrfs-convert options

tests/fuzz-tests/:

  • collection of fuzzed or crafted images
  • tests that are supposed to run various utilities on the images and not crash

tests/cli-tests/:

  • tests for command line interface, option coverage, weird option combinations that should not work
  • not necessary to do any functional testing, could be rather lightweight
  • functional tests should go to other test dirs
  • the driver script will only execute ./test.sh in the test directory

tests/misc-tests/:

  • anything that does not fit the above; the test driver script will only execute ./test.sh in the test directory

tests/common, tests/common.convert:

  • scripts with shell helpers, separated by functionality

tests/test.img:

  • default testing image; the file is never deleted by the scripts but truncated to 0 bytes, so it keeps its permissions. It's eg. possible to host it on NFS and make it writable for root (chmod a+w).

Other tuning, environment variables

Instrumentation

It's possible to wrap the tested commands to utilities that might do more checking or catch failures at runtime. This can be done by setting the INSTRUMENT environment variable:

INSTRUMENT=valgrind ./fuzz-tests.sh    # in tests/
make INSTRUMENT=valgrind test-fuzz     # in the top directory

The variable is prepended to the command unquoted, so all sorts of shell tricks are possible.
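
For example (illustrative invocations, any valgrind or gdb options work the same way):

INSTRUMENT='valgrind --leak-check=full' ./cli-tests.sh
INSTRUMENT='gdb -batch -ex run -ex bt --args' ./fuzz-tests.sh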

Note: instrumentation is not applied to privileged commands (anything that uses the root helper).

Verbosity, test tuning

  • TEST_LOG=tty -- setting the variable will print all commands executed by some of the wrappers (run_check etc); other commands are not printed to the terminal (but the full output is in the log)

  • TEST_LOG=dump -- dump the entire testing log when a test fails

  • TEST_ENABLE_OVERRIDE -- defined either as make arguments or via tests/common.local to enable additional arguments to some commands, using the variable(s) below (default: false, enable by setting to 'true')

  • TEST_ARGS_CHECK -- user-defined arguments to btrfs check, before the test-specific arguments

Multiple values can be separated by a comma (,).
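
For example, a tests/common.local enabling the lowmem mode for btrfs check could look like this (a minimal sketch using the variables described above):

# tests/common.local
TEST_ENABLE_OVERRIDE=true
TEST_ARGS_CHECK=--mode=lowmem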

Permissions

Some commands require root privileges (to mount/umount or to access loop devices). It is assumed that sudo will work in some way (no password, or the password asked and cached). Note that instrumentation is not applied in this case, for safety reasons; you need to modify the test script instead.
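
Within a test, privileged commands go through the root helper set up by setup_root_helper, by prepending $SUDO_HELPER to the wrapped command (a schematic example, the helpers come from tests/common):

run_check $SUDO_HELPER mount -t btrfs "$TEST_DEV" "$TEST_MNT"
run_check $SUDO_HELPER umount "$TEST_MNT"

$SUDO_HELPER is intentionally unquoted as it might be unset, eg. when running as root.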

Cleanup

The tests are supposed to cleanup after themselves if they pass. In case of failure, the rest of the tests are skipped and intermediate files, mounts and loop devices are kept. This should help to investigate the test failure but at least the mounts and loop devices need to be cleaned before the next run.

This is partially done by the clean-tests.sh script; you may want to check the loop devices, as they are managed on a per-test basis.
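
Eg. after a failed run (illustrative commands, the loop device number will differ):

$ ./clean-tests.sh
$ losetup --all                        # list stale loop devices
$ sudo losetup -d /dev/loopN           # detach a leftover device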

Prototyping tests, quick tests

There's a script, test-console.sh, that runs shell commands in a loop and logs the output, with the testing environment set up.

Runtime dependencies

The tests use some common system utilities like find, rm, dd. Additionally, specific tests need the following packages installed: acl, attr, e2fsprogs, reiserfsprogs

New test

  1. Pick the category for the new test or fall back to misc-tests if not sure. For an easy start, copy an existing test.sh script from some test that might be close to the purpose of your new test. The environment setup includes the common scripts and/or prepares the test devices. Other scripts contain examples of how to do mkfs, mount, unmount, check, etc.

  2. Use the highest unused number in the sequence, write a short descriptive title and join the parts by dashes (-). This will become the directory name, eg. 012-subvolume-sync-must-wait.

  3. Write a short description of the bug and how it's tested to the comment at the beginning of test.sh. You don't need to add the file to git yet. Don't forget to make the file executable, otherwise it's not going to be executed by the infrastructure.

  4. Write the test commands; comment anything that's not obvious. (A minimal test.sh skeleton is sketched after this list.)

  5. Test your test. Use the TEST variable to jump right to your test:

$ make TEST=012\* test-misc            # from top directory
$ TEST=012\* ./misc-tests.sh           # from tests/
  6. The commit changelog should reference a commit that either introduced or fixed the bug (or both). The subject line of the commit shall mention the name of the new directory for ease of search, eg. btrfs-progs: tests: add 012-subvolume-sync-must-wait
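
A minimal test.sh could look like the following (a sketch using helpers from tests/common; which helpers are needed depends on the test):

#!/bin/bash
# Short description of the bug and how this test exercises it.

source "$TOP/tests/common"

# internal dependencies, built from the toplevel directory
check_prereq mkfs.btrfs
check_prereq btrfs

setup_root_helper
prepare_test_dev

run_check_mkfs_test_dev
run_check_mount_test_dev
# ... the actual test commands go here ...
run_check_umount_test_dev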

Crafted/fuzzed images

Images that are created by fuzzing or specially crafted to trigger error conditions should be added to the directory fuzz-tests/images, accompanied by a textual description of the source (bugzilla, mail), the reporter, and a brief description of the problem or the stack trace.
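
A possible layout (the file names are illustrative, bko refers to bugzilla.kernel.org):

fuzz-tests/images/bko-12345-short-description.raw.xz
fuzz-tests/images/bko-12345-short-description.txt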

If you have a fix for the problem, please submit it prior to the test image, so the fuzz tests always succeed when run on a random checkout. This helps bisectability.

Coding style, best practices

do

  • quote all variables by default; any path, even the TOP, could need that, and we use it everywhere (see the sketch after this list)
    • there are exceptions:
      • $SUDO_HELPER as it might be intentionally unset
      • the variable is obviously set to a value that does not require it
  • use #!/bin/bash explicitly
  • check for all external dependencies (check_prereq_global)
  • check for internal dependencies (check_prereq), though the basic set is always built when the tests are started through make
  • use functions instead of repeating code
    • generic helpers could be factored to the common script
  • clean up after a successful test
  • use common helpers and variables
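
A short illustration of the conventions above (schematic):

# external dependency, must be found in $PATH
check_prereq_global md5sum
# internal dependency, built before the tests are run
check_prereq btrfs

# variables quoted, $SUDO_HELPER intentionally not
run_check "$TOP/btrfs" check "$TEST_DEV"
run_check $SUDO_HELPER "$TOP/btrfs" subvolume create "$TEST_MNT/subv1"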

do not

  • pull in external dependencies if we can find a way to replace them: an example is xfs_io, which is conveniently used in fstests but would require xfsprogs; use dd instead
  • throw away (redirect to /dev/null) output of commands unless it's justified (ie. really too much text, unnecessary slowdown) -- the test output log is regenerated all the time and we need to be able to analyze test failures or just observe how the tests progress
  • clean up after a failed test -- the testsuite stops on the first failure and the developer can eg. access the environment that the test created and do further debugging
    • this might change in the future so that the tests cover as much as possible, but it would require enhancing all tests with a cleanup phase