Btrfs supports transparent file compression. There are three algorithms
available: ZLIB, LZO and ZSTD (since v4.14), with various levels.
The compression happens at the level of file extents and the algorithm is
selected by a file property, a mount option or by a defrag command.
A single btrfs mount point can hold files that are uncompressed, files
compressed with LZO and files compressed with ZLIB side by side (though you
may not want to mix them that way, it is supported).

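For illustration, such a mix can be created with per-file properties
(described in more detail below); the file names and the printed output
here are only illustrative:

.. code-block:: shell

    $ btrfs property set file1 compression lzo
    $ btrfs property set file2 compression zlib
    $ btrfs property get file1 compression
    compression=lzo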

Once compression is set, all newly written data will be compressed, i.e.
existing data are left untouched. Data are split into smaller chunks (128KiB)
before compression to make random rewrites possible without a high
performance hit. Due to the increased number of extents the metadata
consumption is higher. The chunks are compressed in parallel.

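To check how much space compressed files actually occupy, the separate
*compsize* tool (if installed) reports on-disk versus uncompressed sizes per
algorithm; the path and output below are illustrative:

.. code-block:: shell

    $ compsize /mnt/data
    Processed 1 file, 8 regular extents (8 refs), 0 inline.
    Type       Perc     Disk Usage   Uncompressed Referenced
    TOTAL       50%      512K         1.0M         1.0M
    zstd        50%      512K         1.0M         1.0M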

The algorithms can be characterized as follows regarding the speed/ratio
trade-offs:

ZLIB
    * slower, higher compression ratio
    * levels: 1 to 9, mapped directly, default level is 3
    * good backward compatibility

LZO
    * faster compression and decompression than zlib, worse compression
      ratio, designed to be fast
    * no levels
    * good backward compatibility

ZSTD
    * compression ratio comparable to zlib at significantly higher
      compression and decompression speeds
    * levels: 1 to 15, mapped directly (higher levels are not available)
    * supported since v4.14, level support since v5.1

The differences depend on the actual data set and cannot be expressed by a
single number or recommendation. Higher levels consume more CPU time and may
not bring a significant improvement, while lower levels are close to
real-time.

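Because the trade-off is data-dependent, one rough way to gauge it is to run
the userspace compressors on a representative sample; note that these tools
only approximate the in-kernel implementations, and ``sample.dat`` is a
hypothetical file:

.. code-block:: shell

    # Compare compressed size (bytes) and wall time at similar levels.
    $ time zstd -3 -c sample.dat | wc -c
    $ time gzip -3 -c sample.dat | wc -c
    $ time lzop -c sample.dat | wc -c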

How to enable compression
-------------------------

Typically, compression is enabled for the whole filesystem via a mount
option. Note that the compression mount options are shared among all mounts
of the same filesystem, whether bind mounts or subvolume mounts. Please
refer to the section *MOUNT OPTIONS*.

.. code-block:: shell

    $ mount -o compress=zstd /dev/sdx /mnt

This will enable the ``zstd`` algorithm on the default level (which is 3).
The level can also be specified manually, e.g. ``zstd:3``. Higher levels
compress better at the cost of time, which in turn may cause increased write
latency; low levels are suitable for real-time compression and, on a
reasonably fast CPU, do not cause noticeable performance drops.

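To make the compression persistent across reboots, the option can be added
to ``/etc/fstab``; the device and mount point below are placeholders:

.. code-block:: shell

    # /etc/fstab
    /dev/sdx  /mnt  btrfs  compress=zstd:3  0  0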

.. code-block:: shell

    $ btrfs filesystem defrag -czstd file

The command above will start defragmentation of the whole *file* and apply
compression, regardless of the mount option. (Note: specifying a level is
not yet implemented.) The compression algorithm is not persistent and
applies only to the defragmentation command; for any other writes the
otherwise configured compression settings apply.

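Defragmentation can also be applied recursively to a directory, compressing
everything beneath it (the path is a placeholder):

.. code-block:: shell

    $ btrfs filesystem defrag -r -czstd /mnt/dir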

Persistent settings on a per-file basis can be set in two ways:

.. code-block:: shell

    $ chattr +c file
    $ btrfs property set file compression zstd

The first command uses the legacy file attribute interface inherited from
the ext2 filesystem and is not flexible, so the *zlib* compression is set by
default. The second command sets a property on the file with the given
algorithm. (Note: setting a level that way is not yet implemented.)

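The property interface can also be used to query the current per-file
setting, and an empty value clears it back to the filesystem default (the
printed output is illustrative):

.. code-block:: shell

    $ btrfs property get file compression
    compression=zstd
    $ btrfs property set file compression ""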

Compression levels
------------------

Level support for ZLIB was added in v4.14; LZO does not support levels (the
kernel implementation provides only one); ZSTD level support was added in
v5.1.

There are 9 levels of ZLIB supported (1 to 9), mapping 1:1 from the mount
option to the algorithm-defined level. The default is level 3, which
provides a reasonably good compression ratio and is still reasonably fast.
The compression gain of levels 7, 8 and 9 is comparable, but the higher
levels take longer.

The ZSTD support includes levels 1 to 15, a subset of the full range that
ZSTD provides. Levels 1-3 are real-time, 4-8 are slower with improved
compression, and 9-15 try even harder though the resulting size may not be
significantly improved.

Level 0 always maps to the default. The compression level does not affect
compatibility.

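Because the algorithm and level are given as mount options, they can also be
changed on a remount; only data written afterwards uses the new setting (the
mount point is a placeholder):

.. code-block:: shell

    $ mount -o remount,compress=zstd:8 /mnt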

Incompressible data
-------------------

Files that already contain compressed data, or data that won't compress well
within the CPU and memory constraints of the kernel implementations, are
handled by a simple decision logic. If the first portion of data being
compressed is not smaller than the original, the compression of the file is
disabled -- unless the filesystem is mounted with *compress-force*. In that
case compression will always be attempted on the file, only to be discarded
later. This is not optimal and is subject to optimizations and further
development.

If a file is identified as incompressible, a flag (*NOCOMPRESS*) is set and
it's sticky. Compression won't be performed on that file unless forced. The
flag can also be set by **chattr +m** (since e2fsprogs 1.46.2) or by
properties with the value *no* or *none*. An empty value will reset it to
the default that's currently applicable on the mounted filesystem.

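Putting this together, a file can be excluded from future compression
attempts either through the attribute or through the property interface
(the file name is a placeholder):

.. code-block:: shell

    $ chattr +m file
    $ btrfs property set file compression none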

There are two ways to detect incompressible data:

* actual compression attempt - the data are compressed and, if the result is
  not smaller, it's discarded, so this depends on the algorithm and level
* pre-compression heuristics - a quick statistical evaluation of the data is
  performed and, based on the result, compression is either performed or
  skipped; the NOCOMPRESS bit is not set by the heuristic alone, only if the
  compression algorithm does not make an improvement

.. code-block:: shell

    $ lsattr file
    ---------------------m file

Forcing compression is not recommended; the heuristics are supposed to make
that decision, and the compression algorithms internally detect
incompressible data too.

Pre-compression heuristics
--------------------------

The heuristics aim to do a few quick statistical tests on the data in order
to avoid a probably costly compression that would turn out to be
inefficient. Compression algorithms could have internal detection of
incompressible data too, but this leads to more overhead as the compression
is done in another thread and has to write the data anyway. The heuristic is
read-only and can utilize cached memory.

The tests performed are based on the following: data sampling, long repeated
pattern detection, byte frequency, Shannon entropy.

Compatibility
-------------

Compression is done using the COW mechanism, so it's incompatible with
*nodatacow*. Direct IO works on compressed files but will fall back to
buffered writes and leads to recompression. Currently *nodatasum* and
compression don't work together.

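Since compression requires COW, the per-file *nodatacow* attribute (``C``,
settable only on new, empty files) likewise rules out compression for that
file. A sketch, with illustrative ``lsattr`` output (the exact attribute
column layout depends on the e2fsprogs version):

.. code-block:: shell

    $ touch newfile
    $ chattr +C newfile
    $ lsattr newfile
    ---------------C------ newfile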

The compression algorithms have been added over time, so version
compatibility should also be considered, together with other tools that may
access the compressed data, such as bootloaders.