Mirror of https://github.com/borgbackup/borg.git
docs: Small changes regarding compression
- Mention zstd as the best general choice when not using lz4 (as often acknowledged by public benchmarks)
- Mention 'auto' more prominently as a good heuristic to improve speed while retaining good compression
- Link to compression options
parent 411f6f3222 · commit f656f6b1f2
2 changed files with 16 additions and 21 deletions

@@ -292,35 +292,29 @@ Backup compression
 ------------------
 
 The default is lz4 (very fast, but low compression ratio), but other methods are
-supported for different situations.
-
-You can use zstd for a wide range from high speed (and relatively low
-compression) using N=1 to high compression (and lower speed) using N=22.
-
-zstd is a modern compression algorithm and might be preferable over zlib and
-lzma.::
+supported for different situations. Compression not only helps you save disk space,
+but will especially speed up remote backups since less data needs to be transferred.
+
+zstd is a modern compression algorithm which can be parametrized to anything between
+N=1 for highest speed (and relatively low compression) to N=22 for highest compression
+(and lower speed)::
 
     $ borg create --compression zstd,N arch ~
 
-If you have a fast repo storage and you want minimum CPU usage, no compression::
+Other options are:
+
+If you have a fast repo storage and you want minimum CPU usage you can disable
+compression::
 
     $ borg create --compression none arch ~
 
-If you have a less fast repo storage and you want a bit more compression (N=0..9,
-0 means no compression, 9 means high compression):
-
-::
-
-    $ borg create --compression zlib,N arch ~
-
-If you have a very slow repo storage and you want high compression (N=0..9, 0 means
-low compression, 9 means high compression):
-
-::
-
-    $ borg create --compression lzma,N arch ~
+You can also use zlib and lzma instead of zstd, although zstd usually provides the
+best compression for a given resource consumption. Please see :ref:`borg_compression`
+for all options.
+
+An interesting alternative is ``auto``, which first checks with lz4 whether a chunk is
+compressible (that check is very fast), and only if it is, compresses it with the
+specified algorithm::
+
+    $ borg create --compression auto,zstd,7 arch ~
 
 You'll need to experiment a bit to find the best compression for your use case.
 Keep an eye on CPU load and throughput.
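
As a rough, stand-alone illustration of the trade-offs the rewritten section describes (this is not part of borg): the sketch below compresses a sample file with the algorithms mentioned and prints ratio and time. ``testdata.bin`` is a placeholder for any reasonably large file of your own; zlib and lzma come from the Python standard library, while the zstd and lz4 measurements rely on the third-party ``zstandard`` and ``lz4`` packages and are skipped if those are not installed::

    # Small experiment (not part of borg) to get a feeling for the speed/ratio
    # trade-off between the algorithms mentioned in the docs.
    import time
    import zlib
    import lzma

    def measure(name, compress, data):
        start = time.perf_counter()
        compressed = compress(data)
        elapsed = time.perf_counter() - start
        print(f"{name:12s} ratio={len(compressed) / len(data):.3f} time={elapsed:.3f}s")

    with open("testdata.bin", "rb") as f:   # placeholder: any large sample file
        data = f.read()

    measure("zlib,6", lambda d: zlib.compress(d, 6), data)
    measure("lzma,6", lambda d: lzma.compress(d, preset=6), data)

    try:
        import zstandard
        measure("zstd,3", lambda d: zstandard.ZstdCompressor(level=3).compress(d), data)
        measure("zstd,19", lambda d: zstandard.ZstdCompressor(level=19).compress(d), data)
    except ImportError:
        pass  # 'zstandard' package not installed

    try:
        import lz4.block
        measure("lz4", lz4.block.compress, data)
    except ImportError:
        pass  # 'lz4' package not installed

Already-compressed data (video, archives, JPEGs) will show ratios close to 1.0 for every algorithm and level, which is exactly the case ``auto`` is meant to detect cheaply.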
@@ -417,7 +417,8 @@ class HelpMixIn:
         The heuristic tries with lz4 whether the data is compressible.
         For incompressible data, it will not use compression (uses "none").
         For compressible data, it uses the given C[,L] compression - with C[,L]
-        being any valid compression specifier.
+        being any valid compression specifier. This can be helpful for media files
+        which often cannot be compressed much more.
 
     obfuscate,SPEC,C[,L]
         Use compressed-size obfuscation to make fingerprinting attacks based on
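
To make the heuristic in the help text above more concrete, here is a rough Python sketch of the idea behind ``auto,C[,L]``. It is not borg's actual implementation: the 0.97 threshold is an illustrative assumption, and zlib stands in for whatever compressor C[,L] you configured::

    # Rough sketch of the idea behind 'auto,C[,L]' -- not borg's actual code.
    import zlib  # stands in for the configured compressor C[,L]

    try:
        import lz4.block  # third-party 'lz4' package, used for the cheap trial
    except ImportError:
        lz4 = None

    def auto_compress(chunk: bytes, threshold: float = 0.97) -> bytes:
        """Compress a chunk only if a fast lz4 trial shows it is compressible."""
        if lz4 is not None:
            trial = lz4.block.compress(chunk)
            if len(trial) >= len(chunk) * threshold:
                # Barely shrinks (e.g. already-compressed media): store as-is.
                return chunk
        # Compressible (or no lz4 available): spend CPU on the real compressor.
        return zlib.compress(chunk, 9)

The point of the cheap lz4 trial is that for media files which are already compressed, the expensive compressor is never invoked at all.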