Using borgbackup to back up the root partition works fine, just
remember to exclude non-essential directories.
Signed-off-by: Fredrik Mikker <fredrik@mikker.se>
(cherry picked from commit d01a8f54b6)
Plus typo-fix by @enkore.
(cherry picked from commit dfdf590445)
make_parent(path) helper to reduce code duplication.
use it for directories as well, although makedirs can also do it.
bugfix: also create the parent dir for device files, if needed.
(cherry picked from commit d4e27e2952)
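a minimal sketch of what such a helper could look like (illustrative, not borg's exact code):

    import os

    def make_parent(path):
        # ensure the parent directory of `path` exists before creating the entry itself
        parent_dir = os.path.dirname(path)
        if parent_dir and not os.path.exists(parent_dir):
            os.makedirs(parent_dir)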
* Set warning exit code when an xattr is too big
* Warnings for more extended attribute errors (ENOTSUP, EACCES)
* Add tests for all xattr warnings
(cherry picked from commit 63b5cbfc99)
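roughly, the idea looks like this (a sketch with assumed names; borg's actual error handling differs in detail):

    import errno
    import os

    def set_xattrs_with_warnings(path, xattrs, warn):
        # `warn` is assumed to log the message and set the warning exit code
        for name, value in xattrs.items():
            try:
                os.setxattr(path, name, value, follow_symlinks=False)
            except OSError as e:
                if e.errno in (errno.E2BIG, errno.ENOTSUP, errno.EACCES):
                    # downgrade these to warnings instead of aborting the extraction
                    warn('%s: when setting extended attribute %s: %s' % (path, name, e.strerror))
                else:
                    raise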
* trigger bug in --verify-data, see #2221
* raise decompression errors as DecompressionError, fixes #2221
this is a subclass of IntegrityError, so borg check --verify-data works correctly if
the decompressor stumbles over corrupted data before the plaintext gets verified
(in an unencrypted repository; otherwise the MAC check would fail first).
* fixup: fix exception docstring, add placeholder, change wording
also: add some missing assertion messages
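sketched in Python (the exception names come from the commit, the wrapper function and message templates are illustrative):

    class IntegrityError(Exception):
        """Data integrity error: {}"""

    class DecompressionError(IntegrityError):
        """Decompression error: {}"""

    def decompress_checked(decompress, data):
        try:
            return decompress(data)
        except Exception as exc:
            # corrupted input surfaces as a low-level decompressor error;
            # re-raising it as DecompressionError lets `borg check --verify-data`
            # treat it like any other IntegrityError
            raise DecompressionError(str(exc)) from exc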
severity:
- no issue on little-endian platforms (== most, including x86/x64)
- harmless even on big-endian as long as refcount is below 0xfffbffff,
which is very likely always the case in practice anyway.
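for illustration, how a byte-order mix-up distorts a stored uint32 refcount (a generic example, not the hashindex code itself):

    import struct

    stored = struct.pack('<I', 5)             # refcount 5, written little-endian
    misread = struct.unpack('>I', stored)[0]  # read back big-endian: 0x05000000
    assert misread == 0x05000000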
we do not trust the remote, so we are careful when unpacking its responses.
the remote could return manipulated msgpack data that announces e.g.
a huge array, map or string. the local side would then need to allocate huge
amounts of RAM in expectation of that data (no matter whether that much
data actually arrives or not).
by using limits in the Unpacker, a ValueError will be raised if unexpected
amounts of data would get unpacked. memory DoS is thus avoided.
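a sketch using msgpack-python's Unpacker limits (the concrete limit values here are made up, not borg's):

    import msgpack

    def make_limited_unpacker():
        return msgpack.Unpacker(
            use_list=False,
            max_buffer_size=10 * 1024 * 1024,
            max_str_len=1024 * 1024,
            max_bin_len=1024 * 1024,
            max_array_len=10000,
            max_map_len=1000,
        )

    def unpack_response(data):
        unpacker = make_limited_unpacker()
        unpacker.feed(data)
        # a ValueError here means the remote announced more data than our
        # limits allow - we refuse it instead of allocating huge buffers
        return list(unpacker)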
if there are too many deleted buckets (tombstones), hashtable performance goes down the drain.
in the worst case of 0 empty buckets and lots of tombstones, this results in full table scans for
new / unknown keys.
thus we make sure we always have a good amount of empty buckets.
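the criterion, sketched as a hypothetical helper (borg's hashindex is C code, the numbers are illustrative):

    def needs_compaction(num_buckets, num_used, num_deleted, min_empty_ratio=0.25):
        # tombstones (deleted buckets) do not terminate a probe sequence,
        # only truly empty buckets do - so lookups for missing keys stay
        # fast only while enough buckets are really empty
        num_empty = num_buckets - num_used - num_deleted
        return num_empty < min_empty_ratio * num_buckets

rebuilding the table when this triggers turns the tombstones back into empty buckets.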
hardcoded the encoding for reading it: while utf-8 is the default
encoding on many systems, it is not everywhere, and when the file then
gets decoded with the ascii decoder instead, decoding fails.
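assuming the fix is to pass the encoding explicitly, it boils down to something like:

    def read_text(path):
        # without an explicit encoding, open() uses the locale's preferred
        # encoding, which can be plain ascii (e.g. POSIX/C locale) and then
        # fails to decode utf-8 content
        with open(path, encoding='utf-8') as fd:
            return fd.read()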