Mirror of https://github.com/borgbackup/borg.git
Synced 2024-12-21 15:23:11 +00:00

Merge pull request #8332 from ThomasWaldmann/use-borgstore
use borgstore and other big changes

Commit ea08e49210
166 changed files with 6744 additions and 8421 deletions
.github/workflows/black.yaml (vendored): 2 changes
@@ -12,4 +12,4 @@ jobs:
       - uses: actions/checkout@v4
       - uses: psf/black@stable
         with:
-          version: "~= 23.0"
+          version: "~= 24.0"
.github/workflows/ci.yml (vendored): 3 changes
@@ -107,8 +107,7 @@ jobs:
           pip install -r requirements.d/development.txt
       - name: Install borgbackup
         run: |
-          # pip install -e .
-          python setup.py -v develop
+          pip install -e .
       - name: run tox env
         env:
           XDISTN: "4"
@@ -1,6 +1,6 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 23.1.0
+    rev: 24.8.0
     hooks:
       - id: black
   - repo: https://github.com/astral-sh/ruff-pre-commit
@@ -69,7 +69,7 @@ Main features
 **Speed**
   * performance-critical code (chunking, compression, encryption) is
     implemented in C/Cython
-  * local caching of files/chunks index data
+  * local caching
   * quick detection of unmodified files

 **Data encryption**
@@ -12,8 +12,8 @@ This section provides information about security and corruption issues.
 Upgrade Notes
 =============

-borg 1.2.x to borg 2.0
-----------------------
+borg 1.2.x/1.4.x to borg 2.0
+----------------------------

 Compatibility notes:
@@ -21,11 +21,11 @@ Compatibility notes:

 We tried to put all the necessary "breaking" changes into this release, so we
 hopefully do not need another breaking release in the near future. The changes
-were necessary for improved security, improved speed, unblocking future
-improvements, getting rid of legacy crap / design limitations, having less and
-simpler code to maintain.
+were necessary for improved security, improved speed and parallelism,
+unblocking future improvements, getting rid of legacy crap and design
+limitations, having less and simpler code to maintain.

-You can use "borg transfer" to transfer archives from borg 1.1/1.2 repos to
+You can use "borg transfer" to transfer archives from borg 1.2/1.4 repos to
 a new borg 2.0 repo, but it will need some time and space.

 Before using "borg transfer", you must have upgraded to borg >= 1.2.6 (or
@@ -84,6 +84,7 @@ Compatibility notes:
 - removed --nobsdflags (use --noflags)
 - removed --noatime (default now, see also --atime)
 - removed --save-space option (does not change behaviour)
+- removed --bypass-lock option
 - using --list together with --progress is now disallowed (except with --log-json), #7219
 - the --glob-archives option was renamed to --match-archives (the short option
   name -a is unchanged) and extended to support different pattern styles:
@@ -114,12 +115,61 @@ Compatibility notes:
 fail now that somehow "worked" before (but maybe didn't work as intended due to
 the contradicting options).


 .. _changelog:

 Change Log 2.x
 ==============

+Version 2.0.0b10 (2024-09-09)
+-----------------------------
+
+TL;DR: this is a huge change and the first very fundamental change in how borg
+works since ever:
+
+- you will need to create new repos.
+- likely more exciting than previous betas, definitely not for production.
+
+New features:
+
+- borgstore based repository, file:, ssh: and sftp: for now, more possible.
+- repository stores objects separately now, not using segment files.
+  this has more fs overhead, but needs much less I/O because no segment
+  file compaction is required anymore. also, no repository index is
+  needed anymore because we can directly find the objects by their ID.
+- locking: new borgstore based repository locking with automatic stale
+  lock removal (if a lock does not get refreshed, if the lock owner process is dead).
+- simultaneous repository access for many borg commands except check/compact.
+  the cache lock for adhocwithfiles is still exclusive though, so use
+  BORG_CACHE_IMPL=adhoc if you want to try that out using only 1 machine
+  and 1 user (that implementation doesn't use a cache lock). When using
+  multiple client machines or users, it also works with the default cache.
+- delete/prune: much quicker now and can be undone.
+- check --repair --undelete-archives: bring archives back from the dead.
+- rspace: manage reserved space in the repository (avoid a dead-end situation
+  if the repository fs runs full).
+
+Bugs/issues fixed:
+
+- a lot! all linked from PR #8332.
+
+Other changes:
+
+- repository: removed transactions, solved differently and much simpler now
+  (convergence and write order primarily).
+- repository: replaced precise reference counting with "object exists in repo?"
+  and "garbage collection of unused objects".
+- cache: removed transactions, removed the chunks cache.
+  removed LocalCache and BORG_CACHE_IMPL=local, solving all related issues.
+  as in beta 9, adhocwithfiles is the default implementation.
+- compact: needs the borg key now (run it client-side), -v gives nice stats.
+- transfer: archive transfers from borg 1.x need the --from-borg1 option.
+- check: reimplemented / bigger changes.
+- code: got rid of a metric ton of not needed complexity.
+  when borg does not need to read borg 1.x repos/archives anymore, after
+  users have transferred their archives, even much more can be removed.
+- docs: updated / removed outdated stuff.
+
+
 Version 2.0.0b9 (2024-07-20)
 ----------------------------
@@ -3469,7 +3469,7 @@ Other changes:
   - archiver tests: add check_cache tool - lints refcounts

 - fixed cache sync performance regression from 1.1.0b1 onwards, #1940
-- syncing the cache without chunks.archive.d (see :ref:`disable_archive_chunks`)
+- syncing the cache without chunks.archive.d
   now avoids any merges and is thus faster, #1940
 - borg check --verify-data: faster due to linear on-disk-order scan
 - borg debug-xxx commands removed, we use "debug xxx" subcommands now, #1627
@@ -105,7 +105,7 @@ modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 #

 # Options for borg create
-BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
+BORG_OPTS="--stats --one-file-system --compression lz4"

 # Set BORG_PASSPHRASE or BORG_PASSCOMMAND somewhere around here, using export,
 # if encryption is used.
@@ -68,8 +68,6 @@ can be filled to the specified quota.
 If storage quotas are used, ensure that all deployed Borg releases
 support storage quotas.

-Refer to :ref:`internals_storage_quota` for more details on storage quotas.
-
 **Specificities: Append-only repositories**

 Running ``borg init`` via a ``borg serve --append-only`` server will **not**
docs/faq.rst: 163 changes
@@ -14,7 +14,7 @@ What is the difference between a repo on an external hard drive vs. repo on a server?
 If Borg is running in client/server mode, the client uses SSH as a transport to
 talk to the remote agent, which is another Borg process (Borg is installed on
 the server, too) started automatically by the client. The Borg server is doing
-storage-related low-level repo operations (get, put, commit, check, compact),
+storage-related low-level repo operations (list, load and store objects),
 while the Borg client does the high-level stuff: deduplication, encryption,
 compression, dealing with archives, backups, restores, etc., which reduces the
 amount of data that goes over the network.
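The split described in this hunk, where the server only lists, loads and stores opaque objects while the client does everything else, can be sketched as a minimal in-memory store (a hypothetical illustration, not borg's or borgstore's actual API):

```python
class MemoryStore:
    """Minimal key/value store exposing only the list/load/store operations
    that the hunk above says the server side is reduced to."""

    def __init__(self):
        self._objects = {}

    def store(self, key: str, value: bytes) -> None:
        # the server never interprets the value: it is encrypted client-side
        self._objects[key] = value

    def load(self, key: str) -> bytes:
        return self._objects[key]

    def list(self) -> list:
        return sorted(self._objects)

s = MemoryStore()
s.store("ab01", b"opaque-encrypted-chunk")
assert s.load("ab01") == b"opaque-encrypted-chunk"
assert s.list() == ["ab01"]
```

Because the store only ever sees opaque values, deduplication, compression and encryption all stay on the client, which is what keeps the network traffic low.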
@@ -27,17 +27,7 @@ which is slower.
 Can I back up from multiple servers into a single repository?
 -------------------------------------------------------------

-In order for the deduplication used by Borg to work, it
-needs to keep a local cache containing checksums of all file
-chunks already stored in the repository. This cache is stored in
-``~/.cache/borg/``. If Borg detects that a repository has been
-modified since the local cache was updated it will need to rebuild
-the cache. This rebuild can be quite time consuming.
-
-So, yes it's possible. But it will be most efficient if a single
-repository is only modified from one place. Also keep in mind that
-Borg will keep an exclusive lock on the repository while creating
-or deleting archives, which may make *simultaneous* backups fail.
+Yes, you can! Even simultaneously.

 Can I back up to multiple, swapped backup targets?
 --------------------------------------------------
@@ -124,50 +114,31 @@ Are there other known limitations?
   remove files which are in the destination, but not in the archive.
   See :issue:`4598` for a workaround and more details.

-.. _checkpoints_parts:
+.. _interrupted_backup:

 If a backup stops mid-way, does the already-backed-up data stay there?
 ----------------------------------------------------------------------

-Yes, Borg supports resuming backups.
-
-During a backup, a special checkpoint archive named ``<archive-name>.checkpoint``
-is saved at every checkpoint interval (the default value for this is 30
-minutes) containing all the data backed-up until that point.
-
-This checkpoint archive is a valid archive, but it is only a partial backup
-(not all files that you wanted to back up are contained in it and the last file
-in it might be a partial file). Having it in the repo until a successful, full
-backup is completed is useful because it references all the transmitted chunks up
-to the checkpoint. This means that in case of an interruption, you only need to
-retransfer the data since the last checkpoint.
+Yes, the data transferred into the repo stays there - just avoid running
+``borg compact`` before you completed the backup, because that would remove
+chunks that were already transferred to the repo, but not (yet) referenced
+by an archive.

 If a backup was interrupted, you normally do not need to do anything special,
-just invoke ``borg create`` as you always do. If the repository is still locked,
-you may need to run ``borg break-lock`` before the next backup. You may use the
-same archive name as in the previous attempt or a different one (e.g. if you always
-include the current datetime), it does not matter.
+just invoke ``borg create`` as you always do. You may use the same archive name
+as in the previous attempt or a different one (e.g. if you always include the
+current datetime), it does not matter.

 Borg always does full single-pass backups, so it will start again
 from the beginning - but it will be much faster, because some of the data was
-already stored into the repo (and is still referenced by the checkpoint
-archive), so it does not need to get transmitted and stored again.
-
-Once your backup has finished successfully, you can delete all
-``<archive-name>.checkpoint`` archives. If you run ``borg prune``, it will
-also care for deleting unneeded checkpoints.
-
-Note: the checkpointing mechanism may create a partial (truncated) last file
-in a checkpoint archive named ``<filename>.borg_part``. Such partial files
-won't be contained in the final archive.
-This is done so that checkpoints work cleanly and promptly while a big
-file is being processed.
+already stored into the repo, so it does not need to get transmitted and stored
+again.


 How can I back up huge file(s) over an unstable connection?
 ----------------------------------------------------------

-Yes. For more details, see :ref:`checkpoints_parts`.
+Yes. For more details, see :ref:`interrupted_backup`.

 How can I restore huge file(s) over an unstable connection?
 -----------------------------------------------------------
@@ -220,23 +191,6 @@ Yes, if you want to detect accidental data damage (like bit rot), use the
 If you want to be able to detect malicious tampering also, use an encrypted
 repo. It will then be able to check using CRCs and HMACs.

-Can I use Borg on SMR hard drives?
-----------------------------------
-
-SMR (shingled magnetic recording) hard drives are very different from
-regular hard drives. Applications have to behave in certain ways or
-performance will be heavily degraded.
-
-Borg ships with default settings suitable for SMR drives,
-and has been successfully tested on *Seagate Archive v2* drives
-using the ext4 file system.
-
-Some Linux kernel versions between 3.19 and 4.5 had various bugs
-handling device-managed SMR drives, leading to IO errors, unresponsive
-drives and unreliable operation in general.
-
-For more details, refer to :issue:`2252`.
-
 .. _faq-integrityerror:

 I get an IntegrityError or similar - what now?
@@ -355,7 +309,7 @@ Why is the time elapsed in the archive stats different from wall clock time?
 ----------------------------------------------------------------------------

 Borg needs to write the time elapsed into the archive metadata before finalizing
-the archive and committing the repo & cache.
+the archive and saving the files cache.
 This means when Borg is run with e.g. the ``time`` command, the duration shown
 in the archive stats may be shorter than the full time the command runs for.
@@ -391,8 +345,7 @@ will of course delete everything in the archive, not only some files.
 :ref:`borg_recreate` command to rewrite all archives with a different
 ``--exclude`` pattern. See the examples in the manpage for more information.

-Finally, run :ref:`borg_compact` with the ``--threshold 0`` option to delete the
-data chunks from the repository.
+Finally, run :ref:`borg_compact` to delete the data chunks from the repository.

 Can I safely change the compression level or algorithm?
 --------------------------------------------------------
@@ -402,6 +355,7 @@ are calculated *before* compression. New compression settings
 will only be applied to new chunks, not existing chunks. So it's safe
 to change them.

+Use ``borg rcompress`` to efficiently recompress a complete repository.

 Security
 ########
@@ -704,38 +658,6 @@ serialized way in a single script, you need to give them ``--lock-wait N`` (with N
 being a bit more than the time the server needs to terminate broken down
 connections and release the lock).

-.. _disable_archive_chunks:
-
-The borg cache eats way too much disk space, what can I do?
------------------------------------------------------------
-
-This may especially happen if borg needs to rebuild the local "chunks" index -
-either because it was removed, or because it was not coherent with the
-repository state any more (e.g. because another borg instance changed the
-repository).
-
-To optimize this rebuild process, borg caches per-archive information in the
-``chunks.archive.d/`` directory. It won't help the first time it happens, but it
-will make the subsequent rebuilds faster (because it needs to transfer less data
-from the repository). While being faster, the cache needs quite some disk space,
-which might be unwanted.
-
-You can disable the cached archive chunk indexes by setting the environment
-variable ``BORG_USE_CHUNKS_ARCHIVE`` to ``no``.
-
-This has some pros and cons, though:
-
-- much less disk space needed for ~/.cache/borg.
-- chunk cache resyncs will be slower as it will have to transfer chunk usage
-  metadata for all archives from the repository (which might be slow if your
-  repo connection is slow) and it will also have to build the hashtables from
-  that data.
-  chunk cache resyncs happen e.g. if your repo was written to by another
-  machine (if you share the same backup repo between multiple machines) or if
-  your local chunks cache was lost somehow.
-
-The long term plan to improve this is called "borgception", see :issue:`474`.
-
 Can I back up my root partition (/) with Borg?
 ----------------------------------------------
@@ -779,7 +701,7 @@ This can make creation of the first archive slower, but saves time
 and disk space on subsequent runs. Here is what Borg does when you run ``borg create``:

 - Borg chunks the file (using the relatively expensive buzhash algorithm)
-- It then computes the "id" of the chunk (hmac-sha256 (often slow, except
+- It then computes the "id" of the chunk (hmac-sha256 (slow, except
   if your CPU has sha256 acceleration) or blake2b (fast, in software))
 - Then it checks whether this chunk is already in the repo (local hashtable lookup,
   fast). If so, the processing of the chunk is completed here. Otherwise it needs to
@@ -790,9 +712,8 @@ and disk space on subsequent runs. Here is what Borg does when you run ``borg create``:
 - Transmits to repo. If the repo is remote, this usually involves an SSH connection
   (does its own encryption / authentication).
 - Stores the chunk into a key/value store (the key is the chunk id, the value
-  is the data). While doing that, it computes CRC32 / XXH64 of the data (repo low-level
-  checksum, used by borg check --repository) and also updates the repo index
-  (another hashtable).
+  is the data). While doing that, it computes XXH64 of the data (repo low-level
+  checksum, used by borg check --repository).

 Subsequent backups are usually very fast if most files are unchanged and only
 a few are new or modified. The high performance on unchanged files primarily depends
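The per-chunk pipeline described in the two hunks above (compute id, dedup lookup, compress, store) can be sketched in a few lines of Python. This is illustrative only: `store_chunk` and the dict-as-repo are hypothetical, zlib stands in for lz4/zstd, and real borg also encrypts and uses its own chunker:

```python
import hashlib
import hmac
import zlib

def store_chunk(repo: dict, chunk: bytes, id_key: bytes) -> bytes:
    """Sketch of the per-chunk pipeline: id -> dedup lookup -> compress -> store."""
    # compute the chunk "id" (borg uses hmac-sha256 or blake2b; hmac-sha256 shown)
    chunk_id = hmac.new(id_key, chunk, hashlib.sha256).digest()
    if chunk_id in repo:          # local hashtable lookup: chunk already stored?
        return chunk_id           # deduplicated, nothing to transmit or store
    compressed = zlib.compress(chunk)   # stand-in for lz4 (real borg also encrypts)
    repo[chunk_id] = compressed   # key/value store: chunk id -> data
    return chunk_id

repo = {}
a = store_chunk(repo, b"hello world" * 100, b"example-id-key")
b = store_chunk(repo, b"hello world" * 100, b"example-id-key")
assert a == b and len(repo) == 1    # identical data is stored only once
```

The dedup decision is a pure local lookup on the chunk id, which is why unchanged data costs almost nothing on subsequent backups.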
@@ -826,10 +747,9 @@ If you feel your Borg backup is too slow somehow, here is what you can do:
 - Don't use any expensive compression. The default is lz4 and super fast.
   Uncompressed is often slower than lz4.
 - Just wait. You can also interrupt it and start it again as often as you like,
-  it will converge against a valid "completed" state (see ``--checkpoint-interval``,
-  maybe use the default, but in any case don't make it too short). It is starting
+  it will converge against a valid "completed" state. It is starting
   from the beginning each time, but it is still faster then as it does not store
-  data into the repo which it already has there from last checkpoint.
+  data into the repo which it already has there.
 - If you don't need additional file attributes, you can disable them with ``--noflags``,
   ``--noacls``, ``--noxattrs``. This can lead to noticeable performance improvements
   when your backup consists of many small files.
@@ -1021,6 +941,12 @@ To achieve this, run ``borg create`` within the mountpoint/snapshot directory:
     cd /mnt/rootfs
     borg create rootfs_backup .

+Another way (without changing the directory) is to use the slashdot hack:
+
+::
+
+    borg create rootfs_backup /mnt/rootfs/./
+
 I am having troubles with some network/FUSE/special filesystem, why?
 --------------------------------------------------------------------
@@ -1100,16 +1026,6 @@ to make it behave correctly::
 .. _workaround: https://unix.stackexchange.com/a/123236


-Can I disable checking for free disk space?
--------------------------------------------
-
-In some cases, the free disk space of the target volume is reported incorrectly.
-This can happen for CIFS- or FUSE shares. If you are sure that your target volume
-will always have enough disk space, you can use the following workaround to disable
-checking for free disk space::
-
-    borg config -- additional_free_space -2T
-
 How do I rename a repository?
 -----------------------------
@@ -1126,26 +1042,6 @@ It may be useful to set ``BORG_RELOCATED_REPO_ACCESS_IS_OK=yes`` to avoid the
 prompts when renaming multiple repositories or in a non-interactive context
 such as a script. See :doc:`deployment` for an example.

-The repository quota size is reached, what can I do?
-----------------------------------------------------
-
-The simplest solution is to increase or disable the quota and resume the backup:
-
-::
-
-    borg config /path/to/repo storage_quota 0
-
-If you are bound to the quota, you have to free repository space. The first thing to
-try is running :ref:`borg_compact` to free unused backup space (see also
-:ref:`separate_compaction`):
-
-::
-
-    borg compact /path/to/repo
-
-If your repository is already compacted, run :ref:`borg_prune` or
-:ref:`borg_delete` to delete archives that you do not need anymore, and then run
-``borg compact`` again.
-
 My backup disk is full, what can I do?
 --------------------------------------
@@ -1159,11 +1055,6 @@ conditions, but generally this should be avoided. If your backup disk is already
 full when Borg starts a write command like `borg create`, it will abort
 immediately and the repository will stay as-is.

-If you run a backup that stops due to a disk running full, Borg will roll back,
-delete the new segment file and thus freeing disk space automatically. There
-may be a checkpoint archive left that has been saved before the disk got full.
-You can keep it to speed up the next backup or delete it to get back more disk
-space.
-
 Miscellaneous
 #############
Binary image file changed (before: 324 KiB); not shown.
@@ -19,63 +19,51 @@ discussion about internals`_ and also on static code analysis.
 Repository
 ----------

 .. Some parts of this description were taken from the Repository docstring

-Borg stores its data in a `Repository`, which is a file system based
-transactional key-value store. Thus the repository does not know about
-the concept of archives or items.
+Borg stores its data in a `Repository`, which is a key-value store and has
+the following structure:

-Each repository has the following file structure:
+config/
+  readme
+    simple text object telling that this is a Borg repository
+  id
+    the unique repository ID encoded as hexadecimal number text
+  version
+    the repository version encoded as decimal number text
+  manifest
+    some data about the repository, binary
+  last-key-checked
+    repository check progress (partial checks, full checks' checkpointing),
+    path of last object checked as text
+  space-reserve.N
+    purely random binary data to reserve space, e.g. for disk-full emergencies

-README
-  simple text file telling that this is a Borg repository
+There is a list of pointers to archive objects in this directory:

-config
-  repository configuration
+archives/
+  0000... .. ffff...
+
+The actual data is stored into a nested directory structure, using the full
+object ID as name. Each (encrypted and compressed) object is stored separately.

 data/
-  directory where the actual data is stored
   00/ .. ff/
-    0000... .. ffff...
+    00/ .. ff/
+      0000... .. ffff...

-hints.%d
-  hints for repository compaction
+keys/
+  repokey
+    When using encryption in repokey mode, the encrypted, passphrase protected
+    key is stored here as a base64 encoded text.

-index.%d
-  repository index
-
-lock.roster and lock.exclusive/*
-  used by the locking system to manage shared and exclusive locks
-
-Transactionality is achieved by using a log (aka journal) to record changes. The log is a series of numbered files
-called segments_. Each segment is a series of log entries. The segment number together with the offset of each
-entry relative to its segment start establishes an ordering of the log entries. This is the "definition" of
-time for the purposes of the log.
-
-.. _config-file:
-
-Config file
-~~~~~~~~~~~
-
-Each repository has a ``config`` file which is a ``INI``-style file
-and looks like this::
-
-    [repository]
-    version = 2
-    segments_per_dir = 1000
-    max_segment_size = 524288000
-    id = 57d6c1d52ce76a836b532b0e42e677dec6af9fca3673db511279358828a21ed6
-
-This is where the ``repository.id`` is stored. It is a unique
-identifier for repositories. It will not change if you move the
-repository around so you can make a local transfer then decide to move
-the repository to another (even remote) location at a later time.
+locks/
+  used by the locking system to manage shared and exclusive locks.

 Keys
 ~~~~

-Repository keys are byte-strings of fixed length (32 bytes), they
-don't have a particular meaning (except for the Manifest_).
-
-Normally the keys are computed like this::
+Repository object IDs (which are used as key into the key-value store) are
+byte-strings of fixed length (256bit, 32 bytes), computed like this::

   key = id = id_hash(plaintext_data) # plain = not encrypted, not compressed, not obfuscated
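The id computation above can be illustrated with Python's standard library. This is a hedged sketch: the actual MAC algorithm and key handling depend on the repo's encryption mode, and `example-id-key` is a made-up placeholder:

```python
import hashlib
import hmac

def id_hash(plaintext_data: bytes, id_key: bytes) -> bytes:
    """Illustrative id_hash: a keyed MAC over the plaintext, 32-byte result.
    Borg uses hmac-sha256 or blake2b depending on the encryption mode."""
    return hmac.new(id_key, plaintext_data, hashlib.sha256).digest()

key = id_hash(b"some chunk of file data", b"example-id-key")
assert len(key) == 32  # 256-bit object ID
```

Because the MAC is keyed, an attacker who only sees the repository cannot derive object IDs from guessed plaintexts, while identical plaintext chunks still map to the same ID for deduplication.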
@@ -84,247 +72,68 @@ The id_hash function depends on the :ref:`encryption mode <borg_rcreate>`.
 As the id / key is used for deduplication, id_hash must be a cryptographically
 strong hash or MAC.

-Segments
-~~~~~~~~
+Repository objects
+~~~~~~~~~~~~~~~~~~

-Objects referenced by a key are stored inline in files (`segments`) of approx.
-500 MB size in numbered subdirectories of ``repo/data``. The number of segments
-per directory is controlled by the value of ``segments_per_dir``. If you change
-this value in a non-empty repository, you may also need to relocate the segment
-files manually.
+Each repository object is stored separately, under its ID, into data/xx/yy/xxyy...
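The sharded path layout just described (two directory levels taken from the hex ID, matching the data/00..ff/00..ff tree above) can be sketched as (a hypothetical helper, not borg's actual code):

```python
def object_path(object_id: bytes) -> str:
    """Map a 32-byte object ID to its nested store path data/xx/yy/<hex-id>.
    The two fan-out levels keep any single directory from growing huge."""
    hex_id = object_id.hex()
    return f"data/{hex_id[:2]}/{hex_id[2:4]}/{hex_id}"

path = object_path(bytes.fromhex("abcd" + "ee" * 30))
assert path.startswith("data/ab/cd/abcdee")
```

With this layout an object can be located directly from its ID, which is why no separate repository index is needed anymore.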
-A segment starts with a magic number (``BORG_SEG`` as an eight byte ASCII string),
-followed by a number of log entries. Each log entry consists of (in this order):
+A repo object has a structure like this:

-* crc32 checksum (uint32):
-
-  - for PUT2: CRC32(size + tag + key + digest)
-  - for PUT: CRC32(size + tag + key + payload)
-  - for DELETE: CRC32(size + tag + key)
-  - for COMMIT: CRC32(size + tag)
-
-* size (uint32) of the entry (including the whole header)
-* tag (uint8): PUT(0), DELETE(1), COMMIT(2) or PUT2(3)
-* key (256 bit) - only for PUT/PUT2/DELETE
-* payload (size - 41 bytes) - only for PUT
-* xxh64 digest (64 bit) = XXH64(size + tag + key + payload) - only for PUT2
-* payload (size - 41 - 8 bytes) - only for PUT2
+* 32bit meta size
+* 32bit data size
+* 64bit xxh64(meta)
+* 64bit xxh64(data)
+* meta
+* data

-PUT2 is new since repository version 2. For new log entries PUT2 is used.
-PUT is still supported to read version 1 repositories, but not generated any more.
-If we talk about ``PUT`` in general, it shall usually mean PUT2 for repository
-version 2+.
+The size and xxh64 hashes can be used for server-side corruption checks without
+needing to decrypt anything (which would require the borg key).
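The fixed-size header in the new object layout above can be packed and parsed with `struct`. This is a sketch: the field order follows the list above, but the byte order shown is an assumption, and the `fake_xxh64` function is a stand-in because xxh64 is not in the Python standard library:

```python
import struct

# header: 32bit meta size, 32bit data size, 64bit xxh64(meta), 64bit xxh64(data)
HDR = struct.Struct("<IIQQ")  # little-endian is an illustrative assumption

def pack_obj(meta: bytes, data: bytes, xxh64) -> bytes:
    return HDR.pack(len(meta), len(data), xxh64(meta), xxh64(data)) + meta + data

def unpack_obj(blob: bytes, xxh64):
    meta_size, data_size, h_meta, h_data = HDR.unpack_from(blob)
    meta = blob[HDR.size:HDR.size + meta_size]
    data = blob[HDR.size + meta_size:HDR.size + meta_size + data_size]
    # server-side corruption check: verify hashes without decrypting anything
    assert xxh64(meta) == h_meta and xxh64(data) == h_data
    return meta, data

fake_xxh64 = lambda b: sum(b) & 0xFFFFFFFFFFFFFFFF  # stand-in for real xxh64
blob = pack_obj(b"META", b"DATA" * 10, fake_xxh64)
assert unpack_obj(blob, fake_xxh64) == (b"META", b"DATA" * 10)
```

Note how the check in `unpack_obj` only needs the stored sizes and hashes, which is exactly what lets the server detect corruption without the borg key.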
-Those files are strictly append-only and modified only once.
+The overall size of repository objects varies from very small (a small source
+file will be stored as a single repo object) to medium (big source files will
+be cut into medium sized chunks of some MB).

-When an object is written to the repository a ``PUT`` entry is written
-to the file containing the object id and payload. If an object is deleted
-a ``DELETE`` entry is appended with the object id.
+Metadata and data are separately encrypted and authenticated (depending on
+the user's choices).

-A ``COMMIT`` tag is written when a repository transaction is
-committed. The segment number of the segment containing
-a commit is the **transaction ID**.
+See :ref:`data-encryption` for a graphic outlining the anatomy of the
+encryption.

-When a repository is opened any ``PUT`` or ``DELETE`` operations not
-followed by a ``COMMIT`` tag are discarded since they are part of a
-partial/uncommitted transaction.
+Repo object metadata
+~~~~~~~~~~~~~~~~~~~~

-The size of individual segments is limited to 4 GiB, since the offset of entries
-within segments is stored in a 32-bit unsigned integer in the repository index.
+Metadata is a msgpacked (and encrypted/authenticated) dict with:

-Objects / Payload structure
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- ctype (compression type 0..255)
+- clevel (compression level 0..255)
+- csize (overall compressed (and maybe obfuscated) data size)
+- psize (only when obfuscated: payload size without the obfuscation trailer)
+- size (uncompressed size of the data)

 All data (the manifest, archives, archive item stream chunks and file data
 chunks) is compressed, optionally obfuscated and encrypted. This produces some
 additional metadata (size and compression information), which is separately
 serialized and also encrypted.
|
||||
See :ref:`data-encryption` for a graphic outlining the anatomy of the encryption in Borg.
|
||||
What you see at the bottom there is done twice: once for the data and once for the metadata.
|
||||
|
||||
An object (the payload part of a segment file log entry) must be like:

- length of encrypted metadata (16-bit unsigned int)
- encrypted metadata (incl. encryption header), when decrypted:

  - msgpacked dict with:

    - ctype (compression type 0..255)
    - clevel (compression level 0..255)
    - csize (overall compressed (and maybe obfuscated) data size)
    - psize (only when obfuscated: payload size without the obfuscation trailer)
    - size (uncompressed size of the data)

- encrypted data (incl. encryption header), when decrypted:

  - compressed data (with an optional all-zero-bytes obfuscation trailer)

This new, more complex repo v2 object format was implemented to be able to query the
metadata efficiently without having to read, transfer and decrypt the (usually much bigger)
data part.
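The layout above can be sketched schematically (a Python sketch; encryption of both parts and the msgpacking of the metadata dict are omitted, and the little-endian length prefix is an assumption of this sketch, not taken from borg's source):

```python
import struct

def pack_object(meta: bytes, data: bytes) -> bytes:
    # 16-bit unsigned length prefix for the metadata part,
    # then the metadata blob, then the data blob.
    assert len(meta) < 2 ** 16
    return struct.pack("<H", len(meta)) + meta + data

def unpack_meta(obj: bytes) -> bytes:
    # Only the prefix and the metadata need to be read (and, in borg,
    # decrypted) to answer metadata queries cheaply.
    (meta_len,) = struct.unpack_from("<H", obj)
    return obj[2:2 + meta_len]

obj = pack_object(b'{"size": 1024}', b"x" * 1024)
```

The point of the split is visible in ``unpack_meta``: the (usually much bigger) data part never has to be touched for metadata-only queries.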
The metadata is encrypted so as not to disclose potentially sensitive information that
could be used for e.g. fingerprinting attacks.
Having this separately encrypted metadata makes it more efficient to query
the metadata without having to read, transfer and decrypt the (usually much
bigger) data part.

The compression `ctype` and `clevel` are explained in :ref:`data-compression`.
Index, hints and integrity
~~~~~~~~~~~~~~~~~~~~~~~~~~

The **repository index** is stored in ``index.<TRANSACTION_ID>`` and is used to
determine an object's location in the repository. It is a HashIndex_,
a hash table using open addressing.

It maps object keys_ to:

* segment number (uint32)
* offset of the object's entry within the segment (uint32)
* size of the payload, not including the entry header (uint32)
* flags (uint32)

The **hints file** is a msgpacked file named ``hints.<TRANSACTION_ID>``.
It contains:

* version
* list of segments
* compact
* shadow_index
* storage_quota_use

The **integrity file** is a msgpacked file named ``integrity.<TRANSACTION_ID>``.
It contains checksums of the index and hints files and is described in the
:ref:`Checksumming data structures <integrity_repo>` section below.

If the index or hints are corrupted, they are re-generated automatically.
If they are outdated, segments are replayed from the index state to the currently
committed transaction.
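For illustration, such a four-field index value could be packed as 4 x uint32 (the exact on-disk encoding, e.g. endianness, is an assumption of this sketch, not taken from borg's source):

```python
import struct

# One repository-index value: segment number, offset, size, flags (4 x uint32).
INDEX_VALUE = struct.Struct("<IIII")

packed = INDEX_VALUE.pack(7, 4096, 1234, 0)
segment, offset, size, flags = INDEX_VALUE.unpack(packed)
```

The uint32 offset field is what limits segment size to 4 GiB, as noted above.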
Compaction
~~~~~~~~~~

For a given key, only the last entry regarding that key, which is called current (all
other entries are called superseded), is relevant: If there is no entry, or the last
entry is a DELETE, then the key does not exist.
Otherwise, the last PUT defines the value of the key.

By superseding a PUT (with either another PUT or a DELETE) the log entry becomes obsolete.
A segment containing such obsolete entries is called sparse, while a segment containing
no such entries is called compact.

Since writing a ``DELETE`` tag does not actually delete any data and
thus does not free disk space, any log-based data store will need a
compaction strategy (somewhat analogous to a garbage collector).

Borg uses a simple forward compacting algorithm, which avoids modifying existing segments.
Compaction runs when a commit is issued with the ``compact=True`` parameter, e.g.
by the ``borg compact`` command (unless the :ref:`append_only_mode` is active).

``borg compact`` is used to free repository space. It will:

- list all object IDs present in the repository
- read all archives and determine which object IDs are in use
- remove all unused objects from the repository
- inform / warn about anything remarkable it found:

  - warn about IDs used, but not present (data loss!)
  - inform about IDs that reappeared that were previously lost

- compute statistics about:

  - compression and deduplication factors
  - repository space usage and space freed
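The last-entry-wins semantics described above can be sketched as a tiny log replay (illustrative only, not borg's code):

```python
def replay(entries):
    # Only the last entry per key matters: a final PUT defines the value,
    # a final DELETE means the key does not exist.
    state = {}
    for op, key, value in entries:
        if op == "PUT":
            state[key] = value
        else:  # DELETE
            state.pop(key, None)
    return state

log = [("PUT", "a", 1), ("PUT", "a", 2), ("PUT", "b", 9), ("DELETE", "b", None)]
state = replay(log)
```

Here the first PUT of ``a`` and both entries for ``b`` are superseded; only ``a -> 2`` survives.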
The compaction algorithm requires two inputs in addition to the segments themselves:

(i) Which segments are sparse, to avoid scanning all segments (impractical).
    Further, Borg uses a conditional compaction strategy: Only those
    segments that exceed a threshold sparsity are compacted.

    To implement the threshold condition efficiently, the sparsity has
    to be stored as well. Therefore, Borg stores a mapping ``(segment
    id,) -> (number of sparse bytes,)``.

(ii) Each segment's reference count, which indicates how many live objects are in a segment.
     This is not strictly required to perform the algorithm. Rather, it is used to validate
     that a segment is unused before deleting it. If the algorithm is incorrect, or the reference
     count was not accounted correctly, then an assertion failure occurs.

These two pieces of information are stored in the hints file (`hints.N`)
next to the index (`index.N`).

Compaction may take some time if a repository has been kept in append-only mode
or ``borg compact`` has not been used for a longer time, both of which cause
the number of sparse segments to grow.

Compaction processes sparse segments from oldest to newest; sparse segments
which don't contain enough deleted data to justify compaction are skipped. This
avoids e.g. writing 500 MB of current data to a new segment when only
a couple of kB were deleted in a segment.

Segments that are compacted are read in their entirety. Current entries are written to
a new segment, while superseded entries are omitted. After each segment, an intermediary
commit is written to the new segment. Then, the old segment is deleted
(asserting that the reference count diminished to zero), freeing disk space.

A simplified example (excluding conditional compaction and with simpler
commit logic) showing the principal operation of compaction:

.. figure:: compaction.png
   :figwidth: 100%
   :width: 100%

(The actual algorithm is more complex to avoid various consistency issues, refer to
the ``borg.repository`` module for more comments and documentation on these issues.)
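The conditional, forward-compacting idea can be sketched like this (a toy model, not borg's implementation; the 10% sparsity threshold is made up for the example):

```python
def compact(segments, current_keys, threshold=0.1):
    # Copy only current entries out of sufficiently sparse segments;
    # segments below the sparsity threshold are left alone.
    new_segment = []
    compacted = []
    for i, entries in enumerate(segments):
        total = sum(len(value) for _, value in entries)
        sparse = sum(len(value) for key, value in entries if key not in current_keys)
        if total and sparse / total >= threshold:
            new_segment.extend(e for e in entries if e[0] in current_keys)
            compacted.append(i)
    return new_segment, compacted

segments = [[("a", b"aaaa"), ("b", b"bbbb")],  # 50% sparse -> compacted
            [("c", b"cc")]]                    # 0% sparse  -> skipped
new_segment, compacted = compact(segments, current_keys={"a", "c"})
```

Only segment 0 exceeds the threshold, so only its current entry is forwarded; segment 1 is never rewritten.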
.. _internals_storage_quota:

Storage quotas
~~~~~~~~~~~~~~

Quotas are implemented at the Repository level. The active quota of a repository
is determined by the ``storage_quota`` `config` entry or a run-time override (via :ref:`borg_serve`).
The currently used quota is stored in the hints file. Operations (PUT and DELETE) during
a transaction modify the currently used quota:

- A PUT adds the size of the *log entry* to the quota,
  i.e. the length of the data plus the 41 byte header.
- A DELETE subtracts the size of the deleted log entry from the quota,
  which includes the header.

Thus, PUT and DELETE are symmetric and cancel each other out precisely.

The quota does not track on-disk size overheads (due to conditional compaction
or append-only mode). In normal operation, the inclusion of the log entry headers
in the quota acts as a faithful proxy for index and hints overheads.

By tracking effective content size, the client can *always* recover from a full quota
by deleting archives. This would not be possible if the quota tracked on-disk size,
since journaling DELETEs requires extra disk space before space is freed.
Tracking effective size, on the other hand, accounts DELETEs immediately as freeing quota.
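The symmetric accounting can be written out directly (a sketch; only the 41-byte header size is taken from the text above):

```python
HEADER = 41  # bytes of log entry header, per the description above

def apply_quota(used, op, data_len):
    # PUT and DELETE change the used quota by the same entry size,
    # so a PUT followed by its DELETE cancels out precisely.
    entry = HEADER + data_len
    return used + entry if op == "PUT" else used - entry

used = apply_quota(0, "PUT", 1000)
used = apply_quota(used, "DELETE", 1000)
```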
.. rubric:: Enforcing the quota

The storage quota is meant as a robust mechanism for service providers, therefore
:ref:`borg_serve` has to enforce it without loopholes (e.g. modified clients).
The following sections refer to using quotas on remotely accessed repositories.
For local access, consider *client* and *serve* the same.
Accordingly, quotas cannot be enforced with local access,
since the quota can be changed in the repository config.

The quota is enforceable only if *all* :ref:`borg_serve` versions
accessible to clients support quotas (see next section). Further, the quota is
per repository. Therefore, ensure clients can only access a defined set of repositories
with their quotas set, using ``--restrict-to-repository``.

If the client exceeds the storage quota, the ``StorageQuotaExceeded`` exception is
raised. Normally a client could ignore such an exception and just send a ``commit()``
command anyway, circumventing the quota. However, when ``StorageQuotaExceeded`` is raised,
it is stored in the ``transaction_doomed`` attribute of the repository.
If the transaction is doomed, then commit will re-raise this exception, aborting the commit.

The ``transaction_doomed`` indicator is reset on a rollback (which erases the quota-exceeding
state).
.. rubric:: Compatibility with older servers and enabling quota after-the-fact

If no quota data is stored in the hints file, Borg assumes zero quota is used.
Thus, if a repository with an enabled quota is written to with an older ``borg serve``
version that does not understand quotas, then the quota usage will be erased.

The client version is irrelevant to the storage quota and has no part in it.
The form of error messages due to exceeding quota varies with client versions.

A similar situation arises when upgrading from a Borg release that did not have quotas.
Borg will start tracking quota use from the time of the upgrade, starting at zero.

If the quota shall be enforced accurately in these cases, either

- delete the ``index.N`` and ``hints.N`` files, forcing Borg to rebuild both,
  re-acquiring quota data in the process, or
- edit the msgpacked ``hints.N`` file (not recommended and thus not
  documented further).

The object graph
----------------
More on how this helps security in :ref:`security_structural_auth`.

The manifest
~~~~~~~~~~~~

The manifest is the root of the object hierarchy. It references
all archives in a repository, and thus all data in it.
Since no object references it, it cannot be stored under its ID key.
Instead, the manifest has a fixed all-zero key.
Compared to borg 1.x:

- the manifest moved from object ID 0 to config/manifest
- the archives list has been moved from the manifest to archives/*

The manifest is rewritten each time an archive is created, deleted,
or modified. It looks like this:
these may/may not be implemented and purely serve as examples.

Archives
~~~~~~~~

Each archive is an object referenced by an entry below archives/.
The archive object itself does not store any of the data contained in the
archive it describes.

Instead, it contains a list of chunks which form a msgpacked stream of items_.
The archive object itself further contains some metadata:

* *version*
* *name*, which might differ from the name set in the archives/* object.
  When :ref:`borg_check` rebuilds the manifest (e.g. if it was corrupted) and finds
  more than one archive object with the same name, it adds a counter to the name
  in archives/*, but leaves the *name* field of the archives as they were.
* *item_ptrs*, a list of "pointer chunk" IDs.
  Each "pointer chunk" contains a list of chunk IDs of item metadata.
* *command_line*, the command line which was used to create the archive
In memory, the files cache is a key -> value mapping (a Python *dict*) and contains:

- file size
- file ctime_ns (or mtime_ns)
- age (0 [newest], 1, 2, 3, ..., BORG_FILES_CACHE_TTL - 1)
- list of chunk (id, size) tuples representing the file's contents

To determine whether a file has not changed, cached values are looked up via
the key in the mapping and compared to the current file attribute values.
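The unchanged-file test boils down to comparing cached attributes against a fresh ``stat`` (simplified sketch; borg's real key derivation, chunk list and age handling are omitted):

```python
import os

def file_unchanged(entry: dict, path: str) -> bool:
    # A file is assumed unchanged if size and ctime_ns still match
    # the cached values (mtime_ns comparison works analogously).
    st = os.stat(path)
    return entry["size"] == st.st_size and entry["ctime_ns"] == st.st_ctime_ns
```

If this returns True, the cached chunk list can be reused and the file content does not need to be re-chunked.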
The on-disk format of the files cache is a stream of msgpacked tuples (key, value).

Loading the files cache involves reading the file, one msgpack object at a time,
unpacking it, and msgpacking the value (in an effort to save memory).

The **chunks cache** is not persisted to disk, but dynamically built in memory
by querying the existing object IDs from the repository.
It is used to determine whether we already have a specific chunk.

The chunks cache is a key -> value mapping and contains:

* key:

  - chunk id_hash

* value:

  - reference count (always MAX_VALUE as we do not refcount anymore)
  - size (0 for prev. existing objects, we can't query their plaintext size)

The chunks cache is a HashIndex_.

.. _cache-memory-usage:
Here is the estimated memory usage of Borg - it's complicated::

  chunk_size ~= 2 ^ HASH_MASK_BITS (for buzhash chunker, BLOCK_SIZE for fixed chunker)
  chunk_count ~= total_file_size / chunk_size

  chunks_cache_usage = chunk_count * 40

  files_cache_usage = total_file_count * 240 + chunk_count * 165

  mem_usage ~= chunks_cache_usage + files_cache_usage
             = chunk_count * 205 + total_file_count * 240
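Plugging the formulas in (a sketch; 2^21 is used as the typical chunk size for the default HASH_MASK_BITS = 21, and the result is only a rough estimate):

```python
def mem_estimate(total_file_size, total_file_count, chunk_size=2 ** 21):
    # Constants are the documented per-entry sizes from the formulas above.
    chunk_count = total_file_size // chunk_size
    chunks_cache = chunk_count * 40
    files_cache = total_file_count * 240 + chunk_count * 165
    return chunks_cache + files_cache

# e.g. 1 TiB of data in 1 million files:
estimate = mem_estimate(2 ** 40, 10 ** 6)
```

For this example the estimate comes out to roughly 350 MB of memory.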
Due to the hashtables, the best/usual/worst cases for memory allocation can
be estimated like that::
It is also assuming that typical chunk size is 2^HASH_MASK_BITS (if you have
a lot of files smaller than this statistical medium chunk size, you will have
more chunks than estimated above, because 1 file is at least 1 chunk).

If a remote repository is used, the repo index will be allocated on the remote side.

The chunks cache and files cache are both implemented as hash tables.
A hash table must have a significant amount of unused entries to be fast -
the so-called load factor gives the used/unused elements ratio.
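A toy open-addressing table illustrates the load-factor-triggered growth described here (illustrative Python, not borg's C implementation; the 0.75 threshold is an assumption of the sketch):

```python
class ToyHashTable:
    def __init__(self, size=8, max_load=0.75):
        self.slots = [None] * size
        self.used = 0
        self.max_load = max_load

    def _probe(self, key):
        # The hash is just a start position for a linear search.
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def _grow(self):
        # Allocate a new, bigger table and copy all elements over.
        old = [s for s in self.slots if s is not None]
        self.slots = [None] * (len(self.slots) * 2)
        self.used = 0
        for key, value in old:
            self.put(key, value)

    def put(self, key, value):
        if (self.used + 1) / len(self.slots) > self.max_load:
            self._grow()
        i = self._probe(key)
        if self.slots[i] is None:
            self.used += 1
        self.slots[i] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else None
```

Keeping the load factor below the threshold guarantees probe chains stay short, at the cost of unused slots.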
When a hash table gets full (load factor getting too high), it needs to be
grown (allocate new, bigger hash table, copy all elements over to it, free old
b) with ``create --chunker-params buzhash,19,23,21,4095`` (default):

HashIndex
---------

The chunks cache is implemented as a hash table, with
only one slot per bucket, spreading hash collisions to the following
buckets. As a consequence, the hash is just a start position for a linear
search. If a key is looked up that is not in the table, then the hash table
Both modes
~~~~~~~~~~

Encryption keys (and other secrets) are kept either in a key file on the client
('keyfile' mode) or in the repository under keys/repokey ('repokey' mode).
In both cases, the secrets are generated randomly and then encrypted by a
key derived from your passphrase (this happens on the client before the key
is stored into the keyfile or as repokey).

Key files
~~~~~~~~~

When initializing a repository with one of the "keyfile" encryption modes,
Borg creates an associated key file in ``$HOME/.config/borg/keys``.

The same key is also used in the "repokey" modes, which store it in the repository.

The internal data structure is as follows:
methods in one repo does not influence deduplication.

See ``borg create --help`` about how to specify the compression level and its default.

Lock files (fslocking)
----------------------

Borg uses filesystem locks to get (exclusive or shared) access to the cache.

The locking system is based on renaming a temporary directory
to `lock.exclusive` (for
to `lock.exclusive`, it has the lock for it. If renaming fails

denotes a thread on the host which is still alive), lock acquisition fails.

The cache lock is usually in `~/.cache/borg/REPOID/lock.*`.

Locks (storelocking)
--------------------

To implement locking based on ``borgstore``, borg stores objects below locks/.

The objects contain:

- a timestamp when the lock was created (or refreshed)
- host / process / thread information about the lock owner
- lock type: exclusive or shared

Using that information, borg implements:

- lock auto-expiry: if a lock is old and has not been refreshed in time,
  it will be automatically ignored and deleted. The primary purpose of this
  is to get rid of stale locks left by borg processes on other machines.
- lock auto-removal if the owner process is dead. The primary purpose of this
  is to quickly get rid of stale locks left by borg processes on the same machine.
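Such a lock object and the auto-expiry test can be sketched as follows (field names and the 300-second threshold are illustrative, not borg's exact schema):

```python
import os
import time

STALE_AFTER = 300  # seconds without a refresh (illustrative threshold)

def make_lock(kind="exclusive"):
    # Timestamp plus owner information, as described above.
    return {"time": time.monotonic(), "host": "client1",
            "pid": os.getpid(), "kind": kind}

def is_stale(lock, now=None):
    # An old, unrefreshed lock may be ignored and deleted.
    now = time.monotonic() if now is None else now
    return now - lock["time"] > STALE_AFTER

lock = make_lock()
```

Refreshing a lock amounts to rewriting it with a current timestamp before the expiry threshold is reached.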
Breaking the locks
------------------

In case you run into trouble with the locks, you can use the ``borg break-lock``
command after you have first made sure that no Borg process is
running on any machine that accesses this resource. Be very careful, the cache
or repository might get damaged if multiple processes use it at the same time.

If there is an issue just with the repository lock, it will usually resolve
automatically (see above), just retry later.
Checksumming data structures
----------------------------

As detailed in the previous sections, Borg generates and stores various files
containing important meta data, such as the files cache.

Data corruption in the files cache could create incorrect archives, e.g. due
to wrong object IDs or sizes in the files cache.

Therefore, Borg calculates checksums when writing these files and tests checksums
when reading them. Checksums are generally 64-bit XXH64 hashes.
xxHash was expressly designed for data blocks of these sizes.

Lower layer — file_integrity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is a lower layer (borg.crypto.file_integrity.IntegrityCheckedFile)
wrapping a file-like object, performing streaming calculation and comparison
of checksums.
Checksum errors are signalled by raising an exception at the earliest possible
moment (borg.crypto.file_integrity.FileIntegrityError).
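The idea can be sketched with a small wrapper (borg uses XXH64; SHA-256 stands in here only because it ships with the standard library, and the class below is a sketch, not borg's actual API):

```python
import hashlib
import io

class FileIntegrityError(Exception):
    pass

class CheckedReader:
    """Streaming checksum over everything read; verified at EOF."""

    def __init__(self, fileobj, expected_hexdigest):
        self.fileobj = fileobj
        self.hasher = hashlib.sha256()
        self.expected = expected_hexdigest

    def read(self, n=-1):
        data = self.fileobj.read(n)
        self.hasher.update(data)  # streaming calculation
        if not data and self.hasher.hexdigest() != self.expected:
            # EOF reached: compare and fail as early as possible
            raise FileIntegrityError("checksum mismatch")
        return data

payload = b"important metadata"
good = hashlib.sha256(payload).hexdigest()
reader = CheckedReader(io.BytesIO(payload), good)
```

The wrapper never needs the whole file in memory, which matters for large cache files.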
.. rubric:: Calculating checksums
The *digests* key contains a mapping of part names to their digests.

Integrity data is generally stored by the upper layers, introduced below. An exception
is the DetachedIntegrityCheckedFile, which automatically writes and reads it from
a ".integrity" file next to the data file.
It is used for archive chunks indexes in chunks.archive.d.

Upper layer
~~~~~~~~~~~

Storage of integrity data depends on the component using it, since they have
different transaction mechanisms, and integrity data needs to be
transacted with the data it is supposed to protect.

.. rubric:: Main cache files: chunks and files cache

The integrity data of the ``files`` cache is stored in the cache ``config``.

The ``[integrity]`` section is used:
.. code-block:: ini

    [integrity]
    manifest = 10e...21c
    files = {"algorithm": "XXH64", "digests": {"HashHeader": "eab...39e3", "final": "e2a...b24"}}

The manifest ID is duplicated in the integrity section due to the way all Borg
versions handle the config file. Instead of creating a "new" config file from
easy to tell whether the checksums concern the current state of the cache.

Integrity errors are fatal in these files, terminating the program,
and are not automatically corrected at this time.

.. rubric:: chunks.archive.d

Indices in chunks.archive.d are not transacted and use DetachedIntegrityCheckedFile,
which writes the integrity data to a separate ".integrity" file.

Integrity errors result in deleting the affected index and rebuilding it.
This logs a warning and increases the exit code to WARNING (1).

.. _integrity_repo:

.. rubric:: Repository index and hints

The repository associates index and hints files with a transaction by including the
transaction ID in the file names. Integrity data is stored in a third file
("integrity.<TRANSACTION_ID>"). Like the hints file, it is msgpacked:

.. code-block:: python

    {
        'version': 2,
        'hints': '{"algorithm": "XXH64", "digests": {"final": "411208db2aa13f1a"}}',
        'index': '{"algorithm": "XXH64", "digests": {"HashHeader": "846b7315f91b8e48", "final": "cb3e26cadc173e40"}}'
    }

The *version* key started at 2, the same version used for the hints. Since Borg has
many versioned file formats, this keeps the number of different versions in use
a bit lower.

The other keys map an auxiliary file, like *index* or *hints*, to their integrity data.
Note that the JSON is stored as-is, and not as part of the msgpack structure.

Integrity errors result in deleting the affected file(s) (index/hints) and rebuilding the index,
which is the same action taken when corruption is noticed in other ways (e.g. HashIndex can
detect most corrupted headers, but not data corruption). A warning is logged as well.
The exit code is not influenced, since remote repositories cannot perform that action.
Raising the exit code would be possible for local repositories, but is not implemented.

Unlike the cache design, this mechanism can have false positives whenever an older version
*rewrites* the auxiliary files for a transaction created by a newer version,
since that might result in a different index (due to hash-table resizing) or hints file
(hash ordering, or the older version 1 format), while not invalidating the integrity file.

For example, using 1.1 on a repository, noticing corruption or similar issues and then running
``borg-1.0 check --repair``, which rewrites the index and hints, results in this situation.
Borg 1.1 would erroneously report checksum errors in the hints and/or index files and trigger
an automatic rebuild of these files.
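Building such a record is straightforward (a sketch; the digest value is copied from the example above, real ones come from hashing the file parts):

```python
import json

def integrity_record(version, part_digests):
    # Per the format above, the per-file integrity data is JSON stored
    # as-is inside the (msgpacked) structure, not nested msgpack.
    record = {"version": version}
    for name, digests in part_digests.items():
        record[name] = json.dumps({"algorithm": "XXH64", "digests": digests})
    return record

record = integrity_record(2, {"hints": {"final": "411208db2aa13f1a"}})
```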
HardLinkManager and the hlid concept
------------------------------------
(Binary files changed, not shown; one image resized from 380 KiB to 98 KiB.)
deleted between attacks).

Under these circumstances Borg guarantees that the attacker cannot

1. modify the data of any archive without the client detecting the change
2. rename or add an archive without the client detecting the change
3. recover plain-text data
4. recover definite (heuristics based on access patterns are possible)
   structural information such as the object graph (which archives
   refer to what chunks)

The attacker can always impose a denial of service per definition (he could
forbid connections to the repository, or delete it partly or entirely).

.. _security_structural_auth:
Structural Authentication
-------------------------

Borg is fundamentally based on an object graph structure (see :ref:`internals`),
where the root objects are the archives.

Borg follows the `Horton principle`_, which states that
not only the message must be authenticated, but also its meaning (often
expressed through context), because every object used is referenced by a
parent object through its object ID up to the archive list entry. The object ID in
Borg is a MAC of the object's plaintext, therefore this ensures that
an attacker cannot change the context of an object without forging the MAC.

represent packed file metadata. On their own, it's not clear that these objects
would represent what they do, but by the archive item referring to them
in a particular part of its own data structure assigns this meaning.

This results in a directed acyclic graph of authentication from the archive
list entry to the data chunks of individual files.

The above applied to borg 1.x, and was the reason why it needed the
tertiary authentication mechanism (TAM) for manifest and archives.
the object ID (via giving the ID as AAD), there is no way an attacker (without
access to the borg key) could change the type of the object or move content
to a different object ID.

This effectively 'anchors' each archive to the key, which is controlled by the
client, thereby anchoring the DAG starting from the archives list entry,
making it impossible for an attacker to add or modify any part of the
DAG without Borg being able to detect the tampering.

Please note that removing an archive by removing an entry from archives/*
is possible and is done by ``borg delete`` and ``borg prune`` within their
normal operation. An attacker could also remove some entries there, but, due to
encryption, would not know what exactly they are removing. An attacker with
repository access could also remove other parts of the repository or the whole
repository, so there is not much point in protecting against archive removal.

The borg 1.x way of having the archives list within the manifest chunk was
problematic, as it required a read-modify-write operation on the manifest,
requiring a lock on the repository. We want to try less locking and more
parallelism in the future.

Passphrase notes
----------------
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-BENCHMARK-CPU" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-BENCHMARK-CPU" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-benchmark-cpu \- Benchmark CPU bound operations.
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-BENCHMARK-CRUD" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-BENCHMARK-CRUD" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-benchmark-crud \- Benchmark Create, Read, Update, Delete for archives.
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-BENCHMARK" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-BENCHMARK" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-benchmark \- benchmark command
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-BREAK-LOCK" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-BREAK-LOCK" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-break-lock \- Break the repository lock (e.g. in case it was left by a dead borg.
 .SH SYNOPSIS
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-CHECK" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-CHECK" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-check \- Check repository consistency
 .SH SYNOPSIS
@@ -40,8 +40,8 @@ It consists of two major steps:
 .INDENT 0.0
 .IP 1. 3
 Checking the consistency of the repository itself. This includes checking
-the segment magic headers, and both the metadata and data of all objects in
-the segments. The read data is checked by size and CRC. Bit rot and other
+the file magic headers, and both the metadata and data of all objects in
+the repository. The read data is checked by size and hash. Bit rot and other
 types of accidental damage can be detected this way. Running the repository
 check can be split into multiple partial checks using \fB\-\-max\-duration\fP\&.
 When checking a remote repository, please note that the checks run on the
@@ -77,13 +77,12 @@ archive checks, nor enable repair mode. Consequently, if you want to use
 .sp
 \fBWarning:\fP Please note that partial repository checks (i.e. running it with
 \fB\-\-max\-duration\fP) can only perform non\-cryptographic checksum checks on the
-segment files. A full repository check (i.e. without \fB\-\-max\-duration\fP) can
-also do a repository index check. Enabling partial repository checks excepts
-archive checks for the same reason. Therefore partial checks may be useful with
-very large repositories only where a full check would take too long.
+repository files. Enabling partial repository checks excepts archive checks
+for the same reason. Therefore partial checks may be useful with very large
+repositories only where a full check would take too long.
 .sp
 The \fB\-\-verify\-data\fP option will perform a full integrity verification (as
-opposed to checking the CRC32 of the segment) of data, which means reading the
+opposed to checking just the xxh64) of data, which means reading the
 data from the repository, decrypting and decompressing it. It is a complete
 cryptographic verification and hence very time consuming, but will detect any
 accidental and malicious corruption. Tamper\-resistance is only guaranteed for
@@ -122,17 +121,15 @@ by definition, a potentially lossy task.
 In practice, repair mode hooks into both the repository and archive checks:
 .INDENT 0.0
 .IP 1. 3
-When checking the repository\(aqs consistency, repair mode will try to recover
-as many objects from segments with integrity errors as possible, and ensure
-that the index is consistent with the data stored in the segments.
+When checking the repository\(aqs consistency, repair mode removes corrupted
+objects from the repository after it did a 2nd try to read them correctly.
 .IP 2. 3
 When checking the consistency and correctness of archives, repair mode might
 remove whole archives from the manifest if their archive metadata chunk is
 corrupt or lost. On a chunk level (i.e. the contents of files), repair mode
 will replace corrupt or lost chunks with a same\-size replacement chunk of
 zeroes. If a previously zeroed chunk reappears, repair mode will restore
-this lost chunk using the new chunk. Lastly, repair mode will also delete
-orphaned chunks (e.g. caused by read errors while creating the archive).
+this lost chunk using the new chunk.
 .UNINDENT
 .sp
 Most steps taken by repair mode have a one\-time effect on the repository, like
@@ -152,6 +149,12 @@ replace the all\-zero replacement chunk by the reappeared chunk. If all lost
 chunks of a \(dqzero\-patched\(dq file reappear, this effectively \(dqheals\(dq the file.
 Consequently, if lost chunks were repaired earlier, it is advised to run
 \fB\-\-repair\fP a second time after creating some new backups.
+.sp
+If \fB\-\-repair \-\-undelete\-archives\fP is given, Borg will scan the repository
+for archive metadata and if it finds some where no corresponding archives
+directory entry exists, it will create the entries. This is basically undoing
+\fBborg delete archive\fP or \fBborg prune ...\fP commands and only possible before
+\fBborg compact\fP would remove the archives\(aq data completely.
 .SH OPTIONS
 .sp
 See \fIborg\-common(1)\fP for common options of Borg commands.
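The undelete flow added above can be sketched as a small shell snippet. This is a hedged illustration only: the repository path is a placeholder, and the snippet just prints the command when borg is not installed.

```shell
# Sketch of "undoing borg delete/prune" per the borg-check text above.
# REPO is a placeholder path, not a real repository.
REPO="/path/to/repo"

# Recreate archives-directory entries from archive metadata still present
# in the repo; only possible before "borg compact" removed the data.
CMD="borg --repo $REPO check --repair --undelete-archives"

if command -v borg >/dev/null 2>&1; then
    $CMD || true   # would operate on a real repository
else
    echo "$CMD"
fi
```

Run it again without `--undelete-archives` for a plain repair pass.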
@@ -170,6 +173,9 @@ perform cryptographic archive data integrity verification (conflicts with \fB\-\
 .B \-\-repair
 attempt to repair any inconsistencies found
+.TP
+.B \-\-undelete\-archives
+attempt to undelete archives (use with \-\-repair)
 .TP
 .BI \-\-max\-duration \ SECONDS
 do only a partial repo check for max. SECONDS seconds (Default: unlimited)
 .UNINDENT
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-COMMON" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-COMMON" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-common \- Common options of Borg commands
 .SH SYNOPSIS
@@ -64,10 +64,7 @@ format using IEC units (1KiB = 1024B)
 Output one JSON object per log line instead of formatted text.
 .TP
 .BI \-\-lock\-wait \ SECONDS
-wait at most SECONDS for acquiring a repository/cache lock (default: 1).
-.TP
-.B \-\-bypass\-lock
-Bypass locking mechanism
+wait at most SECONDS for acquiring a repository/cache lock (default: 10).
 .TP
 .B \-\-show\-version
 show/log the borg version
@@ -27,40 +27,25 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-COMPACT" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-COMPACT" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
-borg-compact \- compact segment files in the repository
+borg-compact \- Collect garbage in repository
 .SH SYNOPSIS
 .sp
 borg [common options] compact [options]
 .SH DESCRIPTION
 .sp
-This command frees repository space by compacting segments.
+Free repository space by deleting unused chunks.
 .sp
-Use this regularly to avoid running out of space \- you do not need to use this
-after each borg command though. It is especially useful after deleting archives,
-because only compaction will really free repository space.
+borg compact analyzes all existing archives to find out which chunks are
+actually used. There might be unused chunks resulting from borg delete or prune,
+which can be removed to free space in the repository.
 .sp
-borg compact does not need a key, so it is possible to invoke it from the
-client or also from the server.
-.sp
-Depending on the amount of segments that need compaction, it may take a while,
-so consider using the \fB\-\-progress\fP option.
-.sp
-A segment is compacted if the amount of saved space is above the percentage value
-given by the \fB\-\-threshold\fP option. If omitted, a threshold of 10% is used.
-When using \fB\-\-verbose\fP, borg will output an estimate of the freed space.
-.sp
-See \fIseparate_compaction\fP in Additional Notes for more details.
+Differently than borg 1.x, borg2\(aqs compact needs the borg key if the repo is
+encrypted.
 .SH OPTIONS
 .sp
 See \fIborg\-common(1)\fP for common options of Borg commands.
 .SS optional arguments
 .INDENT 0.0
 .TP
 .BI \-\-threshold \ PERCENT
 set minimum threshold for saved space in PERCENT (Default: 10)
 .UNINDENT
 .SH EXAMPLES
 .INDENT 0.0
 .INDENT 3.5
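The new compact semantics can be sketched as follows (a hedged example, not taken from the man page: the repository path is a placeholder and the snippet only prints the command if borg is absent).

```shell
# borg2 compact: analyzes archives and deletes unused chunks.
# REPO is a placeholder; unlike borg 1.x, the key is needed on encrypted repos.
REPO="/path/to/repo"

CMD="borg --repo $REPO compact --threshold 10"   # 10% is the documented default

if command -v borg >/dev/null 2>&1; then
    $CMD || true
else
    echo "$CMD"
fi
```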
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-COMPRESSION" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-COMPRESSION" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-compression \- Details regarding compression
 .SH DESCRIPTION

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-CREATE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-CREATE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-create \- Create new archive
 .SH SYNOPSIS
@@ -53,9 +53,7 @@ stdin\fP below for details.
 The archive will consume almost no disk space for files or parts of files that
 have already been stored in other archives.
 .sp
-The archive name needs to be unique. It must not end in \(aq.checkpoint\(aq or
-\(aq.checkpoint.N\(aq (with N being a number), because these names are used for
-checkpoints and treated in special ways.
+The archive name needs to be unique.
 .sp
 In the archive name, you may use the following placeholders:
 {now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.
@@ -155,12 +153,6 @@ only display items with the given status characters (see description)
 .B \-\-json
 output stats as JSON. Implies \fB\-\-stats\fP\&.
-.TP
-.B \-\-no\-cache\-sync
-experimental: do not synchronize the chunks cache.
-.TP
-.B \-\-no\-cache\-sync\-forced
-experimental: do not synchronize the chunks cache (forced).
 .TP
 .B \-\-prefer\-adhoc\-cache
 experimental: prefer AdHocCache (w/o files cache) over AdHocWithFilesCache (with files cache).
 .TP
@@ -260,12 +252,6 @@ add a comment text to the archive
 .BI \-\-timestamp \ TIMESTAMP
 manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
-.TP
-.BI \-\-checkpoint\-volume \ BYTES
-write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
 .TP
 .BI \-\-chunker\-params \ PARAMS
 specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
 .TP
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-DELETE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-DELETE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-delete \- Delete archives
 .SH SYNOPSIS
@@ -42,16 +42,9 @@ you run \fBborg compact\fP\&.
 .sp
 When in doubt, use \fB\-\-dry\-run \-\-list\fP to see what would be deleted.
-.sp
-When using \fB\-\-stats\fP, you will get some statistics about how much data was
-deleted \- the \(dqDeleted data\(dq deduplicated size there is most interesting as
-that is how much your repository will shrink.
-Please note that the \(dqAll archives\(dq stats refer to the state after deletion.
 .sp
 You can delete multiple archives by specifying a matching pattern,
 using the \fB\-\-match\-archives PATTERN\fP option (for more info on these patterns,
 see \fIborg_patterns\fP).
-.sp
-Always first use \fB\-\-dry\-run \-\-list\fP to see what would be deleted.
 .SH OPTIONS
 .sp
 See \fIborg\-common(1)\fP for common options of Borg commands.
@@ -63,18 +56,6 @@ do not change repository
 .TP
 .B \-\-list
 output verbose list of archives
-.TP
-.B \-\-consider\-checkpoints
-consider checkpoint archives for deletion (default: not considered).
-.TP
-.B \-s\fP,\fB \-\-stats
-print statistics for the deleted archive
-.TP
-.B \-\-force
-force deletion of corrupted archives, use \fB\-\-force \-\-force\fP in case \fB\-\-force\fP does not work.
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
 .UNINDENT
 .SS Archive filters
 .INDENT 0.0
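The recommended dry-run-first workflow can be sketched like this (hedged: repository path and the `sh:` pattern are placeholders; the snippet only prints the command when borg is unavailable).

```shell
# Preview a pattern-based deletion without changing the repository.
# REPO and the archive-name pattern are placeholders.
REPO="/path/to/repo"

PREVIEW="borg --repo $REPO delete --dry-run --list --match-archives sh:home-*"

if command -v borg >/dev/null 2>&1; then
    $PREVIEW || true
else
    echo "$PREVIEW"
fi
# Re-run without --dry-run to really delete, then "borg compact" to free space.
```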
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-DIFF" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-DIFF" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-diff \- Diff contents of two archives
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-EXPORT-TAR" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-EXPORT-TAR" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-export-tar \- Export archive contents as a tarball
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-EXTRACT" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-EXTRACT" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-extract \- Extract archive contents
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-IMPORT-TAR" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-IMPORT-TAR" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-import-tar \- Create a backup archive from a tarball
 .SH SYNOPSIS
@@ -126,12 +126,6 @@ add a comment text to the archive
 .BI \-\-timestamp \ TIMESTAMP
 manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
-.TP
-.BI \-\-checkpoint\-volume \ BYTES
-write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
 .TP
 .BI \-\-chunker\-params \ PARAMS
 specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
 .TP
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-INFO" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-INFO" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-info \- Show archive details such as disk space used
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-KEY-CHANGE-LOCATION" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-KEY-CHANGE-LOCATION" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-key-change-location \- Change repository key location
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-KEY-CHANGE-PASSPHRASE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-KEY-CHANGE-PASSPHRASE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-key-change-passphrase \- Change repository key file passphrase
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-KEY-EXPORT" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-KEY-EXPORT" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-key-export \- Export the repository key for backup
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-KEY-IMPORT" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-KEY-IMPORT" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-key-import \- Import the repository key from backup
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-KEY" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-KEY" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-key \- Manage a keyfile or repokey of a repository
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-LIST" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-LIST" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-list \- List archive contents
 .SH SYNOPSIS
@@ -186,12 +186,8 @@ flags: file flags
 .IP \(bu 2
 size: file size
 .IP \(bu 2
 dsize: deduplicated size
-.IP \(bu 2
-num_chunks: number of chunks in this file
-.IP \(bu 2
-unique_chunks: number of unique chunks in this file
 .IP \(bu 2
 mtime: file modification time
 .IP \(bu 2
 ctime: file change time
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-MATCH-ARCHIVES" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-MATCH-ARCHIVES" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-match-archives \- Details regarding match-archives
 .SH DESCRIPTION

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-MOUNT" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-MOUNT" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-mount \- Mount archive or an entire repository as a FUSE filesystem
 .SH SYNOPSIS
@@ -110,9 +110,6 @@ paths to extract; patterns are supported
 .SS optional arguments
 .INDENT 0.0
 .TP
-.B \-\-consider\-checkpoints
-Show checkpoint archives in the repository contents list (default: hidden).
-.TP
 .B \-f\fP,\fB \-\-foreground
 stay in foreground, do not daemonize
 .TP
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-PATTERNS" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-PATTERNS" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-patterns \- Details regarding patterns
 .SH DESCRIPTION

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-PLACEHOLDERS" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-PLACEHOLDERS" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-placeholders \- Details regarding placeholders
 .SH DESCRIPTION

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-PRUNE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-PRUNE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-prune \- Prune repository archives according to specified rules
 .SH SYNOPSIS
@@ -45,11 +45,6 @@ certain number of historic backups. This retention policy is commonly referred t
 \fI\%GFS\fP
 (Grandfather\-father\-son) backup rotation scheme.
 .sp
-Also, prune automatically removes checkpoint archives (incomplete archives left
-behind by interrupted backup runs) except if the checkpoint is the latest
-archive (and thus still needed). Checkpoint archives are not considered when
-comparing archive counts against the retention limits (\fB\-\-keep\-X\fP).
-.sp
 If you use \-\-match\-archives (\-a), then only archives that match the pattern are
 considered for deletion and only those archives count towards the totals
 specified by the rules.
@@ -85,11 +80,6 @@ The \fB\-\-keep\-last N\fP option is doing the same as \fB\-\-keep\-secondly N\f
 keep the last N archives under the assumption that you do not create more than one
 backup archive in the same second).
 .sp
-When using \fB\-\-stats\fP, you will get some statistics about how much data was
-deleted \- the \(dqDeleted data\(dq deduplicated size there is most interesting as
-that is how much your repository will shrink.
-Please note that the \(dqAll archives\(dq stats refer to the state after pruning.
-.sp
 You can influence how the \fB\-\-list\fP output is formatted by using the \fB\-\-short\fP
 option (less wide output) or by giving a custom format using \fB\-\-format\fP (see
 the \fBborg rlist\fP description for more details about the format string).
@@ -102,12 +92,6 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
 .B \-n\fP,\fB \-\-dry\-run
 do not change repository
-.TP
-.B \-\-force
-force pruning of corrupted archives, use \fB\-\-force \-\-force\fP in case \fB\-\-force\fP does not work.
-.TP
-.B \-s\fP,\fB \-\-stats
-print statistics for the deleted archive
 .TP
 .B \-\-list
 output verbose list of archives it keeps/prunes
 .TP
@@ -146,9 +130,6 @@ number of monthly archives to keep
 .TP
 .B \-y\fP,\fB \-\-keep\-yearly
 number of yearly archives to keep
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
 .UNINDENT
 .SS Archive filters
 .INDENT 0.0
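A GFS-style retention run matching the rules above can be sketched as follows (hedged: repository path and keep counts are example values; the command is only printed if borg is not installed).

```shell
# Grandfather-father-son style pruning: keep 7 daily, 4 weekly, 6 monthly.
# REPO is a placeholder path.
REPO="/path/to/repo"

CMD="borg --repo $REPO prune --list --keep-daily 7 --keep-weekly 4 --keep-monthly 6"

if command -v borg >/dev/null 2>&1; then
    $CMD || true
else
    echo "$CMD"
fi
```

Remember that prune only marks archives for deletion; run `borg compact` afterwards to actually free space.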
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RCOMPRESS" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RCOMPRESS" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-rcompress \- Repository (re-)compression
 .SH SYNOPSIS
@@ -37,20 +37,14 @@ borg [common options] rcompress [options]
 .sp
 Repository (re\-)compression (and/or re\-obfuscation).
 .sp
-Reads all chunks in the repository (in on\-disk order, this is important for
-compaction) and recompresses them if they are not already using the compression
-type/level and obfuscation level given via \fB\-\-compression\fP\&.
+Reads all chunks in the repository and recompresses them if they are not already
+using the compression type/level and obfuscation level given via \fB\-\-compression\fP\&.
 .sp
 If the outcome of the chunk processing indicates a change in compression
 type/level or obfuscation level, the processed chunk is written to the repository.
 Please note that the outcome might not always be the desired compression
 type/level \- if no compression gives a shorter output, that might be chosen.
-.sp
-Every \fB\-\-checkpoint\-interval\fP, progress is committed to the repository and
-the repository is compacted (this is to keep temporary repo space usage in bounds).
-A lower checkpoint interval means lower temporary repo space usage, but also
-slower progress due to higher overhead (and vice versa).
 .sp
 Please note that this command can not work in low (or zero) free disk space
 conditions.
 .sp
@@ -72,9 +66,6 @@ select compression algorithm, see the output of the \(dqborg help compression\(d
 .TP
 .B \-s\fP,\fB \-\-stats
 print statistics
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
 .UNINDENT
 .SH EXAMPLES
 .INDENT 0.0
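The recompression described above can be sketched like this (hedged: the repo path and the `zstd,10` spec are example values; see `borg help compression` for valid specs; the command is only printed if borg is absent).

```shell
# Re-compress every chunk in the repo to a new compression spec.
# REPO is a placeholder; zstd,10 is just an example spec.
REPO="/path/to/repo"

CMD="borg --repo $REPO rcompress --compression zstd,10 --stats"

if command -v borg >/dev/null 2>&1; then
    $CMD || true
else
    echo "$CMD"
fi
```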
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RCREATE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RCREATE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-rcreate \- Create a new, empty repository
 .SH SYNOPSIS
@@ -35,8 +35,8 @@ borg-rcreate \- Create a new, empty repository
 borg [common options] rcreate [options]
 .SH DESCRIPTION
 .sp
-This command creates a new, empty repository. A repository is a filesystem
-directory containing the deduplicated data from zero or more archives.
+This command creates a new, empty repository. A repository is a \fBborgstore\fP store
+containing the deduplicated data from zero or more archives.
 .SS Encryption mode TLDR
 .sp
 The encryption mode can only be configured when creating a new repository \- you can
@@ -226,6 +226,12 @@ Optionally, if you use \fB\-\-copy\-crypt\-key\fP you can also keep the same cry
 keys to manage.
 .sp
 Creating related repositories is useful e.g. if you want to use \fBborg transfer\fP later.
+.SS Creating a related repository for data migration from borg 1.2 or 1.4
+.sp
+You can use \fBborg rcreate \-\-other\-repo ORIG_REPO \-\-from\-borg1 ...\fP to create a related
+repository that uses the same secret key material as the given other/original repository.
+.sp
+Then use \fBborg transfer \-\-other\-repo ORIG_REPO \-\-from\-borg1 ...\fP to transfer the archives.
 .SH OPTIONS
 .sp
 See \fIborg\-common(1)\fP for common options of Borg commands.
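The two-step borg 1.x to borg2 migration described above can be sketched as a shell snippet. This is a hedged illustration: the paths are placeholders, the encryption mode is an example (check `borg rcreate --help` for the modes your version offers), and the commands are only printed if borg is not installed.

```shell
# borg 1.x -> borg2 migration flow per the rcreate/transfer text above.
# ORIG_REPO/NEW_REPO are placeholder paths; the -e mode is an example.
ORIG_REPO="/path/to/borg1-repo"
NEW_REPO="/path/to/borg2-repo"

STEP1="borg --repo $NEW_REPO rcreate --other-repo $ORIG_REPO --from-borg1 --encryption repokey-aes-ocb"
STEP2="borg --repo $NEW_REPO transfer --other-repo $ORIG_REPO --from-borg1"

for CMD in "$STEP1" "$STEP2"; do
    if command -v borg >/dev/null 2>&1; then
        $CMD || true
    else
        echo "$CMD"
    fi
done
```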
@@ -235,6 +241,9 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
 .BI \-\-other\-repo \ SRC_REPOSITORY
 reuse the key material from the other repository
+.TP
+.B \-\-from\-borg1
+other repository is borg 1.x
 .TP
 .BI \-e \ MODE\fR,\fB \ \-\-encryption \ MODE
 select encryption key mode \fB(required)\fP
 .TP
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RDELETE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RDELETE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-rdelete \- Delete a repository
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RECREATE" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RECREATE" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-recreate \- Re-create archives
 .SH SYNOPSIS
@@ -157,12 +157,6 @@ consider archives newer than (now \- TIMESPAN), e.g. 7d or 12m.
 .BI \-\-target \ TARGET
 create a new archive with the name ARCHIVE, do not replace existing archive (only applies for a single archive)
-.TP
-.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
-write checkpoint every SECONDS seconds (Default: 1800)
-.TP
-.BI \-\-checkpoint\-volume \ BYTES
-write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
 .TP
 .BI \-\-comment \ COMMENT
 add a comment text to the archive
 .TP
@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RENAME" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RENAME" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-rename \- Rename an existing archive
 .SH SYNOPSIS

@@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
 .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
 ..
-.TH "BORG-RINFO" 1 "2024-07-19" "" "borg backup tool"
+.TH "BORG-RINFO" 1 "2024-09-08" "" "borg backup tool"
 .SH NAME
 borg-rinfo \- Show repository infos
 .SH SYNOPSIS
@@ -36,15 +36,6 @@ borg [common options] rinfo [options]
 .SH DESCRIPTION
 .sp
 This command displays detailed information about the repository.
-.sp
-Please note that the deduplicated sizes of the individual archives do not add
-up to the deduplicated size of the repository (\(dqall archives\(dq), because the two
-are meaning different things:
-.sp
-This archive / deduplicated size = amount of data stored ONLY for this archive
-= unique chunks of this archive.
-All archives / deduplicated size = amount of data stored in the repo
-= all chunks in the repository.
 .SH OPTIONS
 .sp
 See \fIborg\-common(1)\fP for common options of Borg commands.
@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RLIST" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-RLIST" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-rlist \- List the archives contained in a repository
.SH SYNOPSIS

@ -42,9 +42,6 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-\-short
only print the archive names, nothing else
.TP

94
docs/man/borg-rspace.1
Normal file
@ -0,0 +1,94 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RSPACE" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-rspace \- Manage reserved space in repository
.SH SYNOPSIS
.sp
borg [common options] rspace [options]
.SH DESCRIPTION
.sp
This command manages reserved space in a repository.
.sp
Borg can not work in disk\-full conditions (can not lock a repo and thus can
not run prune/delete or compact operations to free disk space).
.sp
To avoid running into dead\-end situations like that, you can put some objects
into a repository that take up some disk space. If you ever run into a
disk\-full situation, you can free that space and then borg will be able to
run normally, so you can free more disk space by using prune/delete/compact.
After that, don\(aqt forget to reserve space again, in case you run into that
situation again at a later time.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# Create a new repository:
$ borg rcreate ...
# Reserve approx. 1GB of space for emergencies:
$ borg rspace \-\-reserve 1G

# Check amount of reserved space in the repository:
$ borg rspace

# EMERGENCY! Free all reserved space to get things back to normal:
$ borg rspace \-\-free
$ borg prune ...
$ borg delete ...
$ borg compact \-v # only this actually frees space of deleted archives
$ borg rspace \-\-reserve 1G # reserve space again for next time
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Reserved space is always rounded up to use full reservation blocks of 64MiB.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS optional arguments
.INDENT 0.0
.TP
.BI \-\-reserve \ SPACE
Amount of space to reserve (e.g. 100M, 1G). Default: 0.
.TP
.B \-\-free
Free all reserved space. Don\(aqt forget to reserve space later again.
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.
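The 64 MiB rounding mentioned in the borg-rspace page can be illustrated with a short Python sketch (a hypothetical helper for illustration only, not part of borg's code base):

```python
RESERVE_BLOCK = 64 * 1024 * 1024  # reservation block size: 64 MiB


def reserved_bytes(requested: int) -> int:
    """Round a requested reservation up to whole 64 MiB blocks."""
    blocks = -(-requested // RESERVE_BLOCK)  # ceiling division
    return blocks * RESERVE_BLOCK


# e.g. --reserve 1G (10**9 bytes) rounds up to 15 blocks (1006632960 bytes)
print(reserved_bytes(10**9) // RESERVE_BLOCK)
```

So even a 1-byte reservation request would consume one full 64 MiB block.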

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-SERVE" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-SERVE" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-serve \- Start in server mode. This command is usually not used manually.
.SH SYNOPSIS

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-TRANSFER" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-TRANSFER" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-transfer \- archives transfer from other repository, optionally upgrade data format
.SH SYNOPSIS

@ -46,7 +46,14 @@ any case) and keep data compressed \(dqas is\(dq (saves time as no data compress
If you want to globally change compression while transferring archives to the DST_REPO,
give \fB\-\-compress=WANTED_COMPRESSION \-\-recompress=always\fP\&.
.sp
Suggested use for general purpose archive transfer (not repo upgrades):
The default is to transfer all archives.
.sp
You could use the misc. archive filter options to limit which archives it will
transfer, e.g. using the \fB\-a\fP option. This is recommended for big
repositories with multiple data sets to keep the runtime per invocation lower.
.SS General purpose archive transfer
.sp
Transfer borg2 archives into a related other borg2 repository:
.INDENT 0.0
.INDENT 3.5
.sp

@ -54,7 +61,7 @@ Suggested use for general purpose archive transfer (not repo upgrades):
.ft C
# create a related DST_REPO (reusing key material from SRC_REPO), so that
# chunking and chunk id generation will work in the same way as before.
borg \-\-repo=DST_REPO rcreate \-\-other\-repo=SRC_REPO \-\-encryption=DST_ENC
borg \-\-repo=DST_REPO rcreate \-\-encryption=DST_ENC \-\-other\-repo=SRC_REPO

# transfer archives from SRC_REPO to DST_REPO
borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-dry\-run # check what it would do

@ -64,26 +71,23 @@ borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-dry\-run # check!
.fi
.UNINDENT
.UNINDENT
.SS Data migration / upgrade from borg 1.x
.sp
The default is to transfer all archives, including checkpoint archives.
.sp
You could use the misc. archive filter options to limit which archives it will
transfer, e.g. using the \fB\-a\fP option. This is recommended for big
repositories with multiple data sets to keep the runtime per invocation lower.
.sp
For repository upgrades (e.g. from a borg 1.2 repo to a related borg 2.0 repo), usage is
quite similar to the above:
To migrate your borg 1.x archives into a related, new borg2 repository, usage is quite similar
to the above, but you need the \fB\-\-from\-borg1\fP option:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# fast: compress metadata with zstd,3, but keep data chunks compressed as they are:
borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-upgrader=From12To20 \e
\-\-compress=zstd,3 \-\-recompress=never
borg \-\-repo=DST_REPO rcreate \-\-encryption=DST_ENC \-\-other\-repo=SRC_REPO \-\-from\-borg1

# compress metadata and recompress data with zstd,3
borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-upgrader=From12To20 \e
# to continue using lz4 compression as you did in SRC_REPO:
borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-from\-borg1 \e
\-\-compress=lz4 \-\-recompress=never

# alternatively, to recompress everything to zstd,3:
borg \-\-repo=DST_REPO transfer \-\-other\-repo=SRC_REPO \-\-from\-borg1 \e
\-\-compress=zstd,3 \-\-recompress=always
.ft P
.fi

@ -101,6 +105,9 @@ do not change repository, just check
.BI \-\-other\-repo \ SRC_REPOSITORY
transfer archives from the other repository
.TP
.B \-\-from\-borg1
other repository is borg 1.x
.TP
.BI \-\-upgrader \ UPGRADER
use the upgrader to convert transferred data (default: no conversion)
.TP

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-UMOUNT" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-UMOUNT" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-umount \- un-mount the FUSE filesystem
.SH SYNOPSIS

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-VERSION" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-VERSION" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-version \- Display the borg client / borg server version
.SH SYNOPSIS

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-WITH-LOCK" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG-WITH-LOCK" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg-with-lock \- run a user specified command with the repository lock held
.SH SYNOPSIS

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG" 1 "2024-07-19" "" "borg backup tool"
.TH "BORG" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borg \- deduplicating and encrypting backup tool
.SH SYNOPSIS

@ -238,6 +238,10 @@ Note: you may also prepend a \fBfile://\fP to a filesystem path to get URL style
.sp
\fBssh://user@host:port/~/path/to/repo\fP \- path relative to user\(aqs home directory
.sp
\fBRemote repositories\fP accessed via sftp:
.sp
\fBsftp://user@host:port/path/to/repo\fP \- absolute path
.sp
If you frequently need the same repo URL, it is a good idea to set the
\fBBORG_REPO\fP environment variable to set a default for the repo URL:
.INDENT 0.0

@ -491,10 +495,6 @@ given order, e.g.:
Choose the implementation for the clientside cache, choose one of:
.INDENT 7.0
.IP \(bu 2
\fBlocal\fP: uses a persistent chunks cache and keeps it in a perfect state (precise refcounts and
sizes), requiring a potentially resource expensive cache sync in multi\-client scenarios.
Also has a persistent files cache.
.IP \(bu 2
\fBadhoc\fP: builds a non\-persistent chunks cache by querying the repo. Chunks cache contents
are somewhat sloppy for already existing chunks, concerning their refcount (\(dqinfinite\(dq) and
size (0). No files cache (slow, will chunk all input files). DEPRECATED.

@ -698,38 +698,48 @@ mode 600, root:root).
.UNINDENT
.SS File systems
.sp
We strongly recommend against using Borg (or any other database\-like
software) on non\-journaling file systems like FAT, since it is not
possible to assume any consistency in case of power failures (or a
sudden disconnect of an external drive or similar failures).
We recommend using a reliable, scalable journaling filesystem for the
repository, e.g. zfs, btrfs, ext4, apfs.
.sp
While Borg uses a data store that is resilient against these failures
when used on journaling file systems, it is not possible to guarantee
this with some hardware \-\- independent of the software used. We don\(aqt
know a list of affected hardware.
Borg now uses the \fBborgstore\fP package to implement the key/value store it
uses for the repository.
.sp
If you are suspicious whether your Borg repository is still consistent
and readable after one of the failures mentioned above occurred, run
\fBborg check \-\-verify\-data\fP to make sure it is consistent.
Requirements for Borg repository file systems
It currently uses the \fBfile:\fP Store (posixfs backend) either with a local
directory or via ssh and a remote \fBborg serve\fP agent using borgstore on the
remote side.
.sp
This means that it will store each chunk into a separate filesystem file
(for more details, see the \fBborgstore\fP project).
.sp
This has some pros and cons (compared to legacy borg 1.x\(aqs segment files):
.sp
Pros:
.INDENT 0.0
.IP \(bu 2
Long file names
Simplicity and better maintainability of the borg code.
.IP \(bu 2
At least three directory levels with short names
Sometimes faster, less I/O, better scalability: e.g. borg compact can just
remove unused chunks by deleting a single file and does not need to read
and re\-write segment files to free space.
.IP \(bu 2
Typically, file sizes up to a few hundred MB.
Large repositories may require large files (>2 GB).
In future, easier to adapt to other kinds of storage:
borgstore\(aqs backends are quite simple to implement.
A \fBsftp:\fP backend already exists, cloud storage might be easy to add.
.IP \(bu 2
Up to 1000 files per directory.
Parallel repository access with less locking is easier to implement.
.UNINDENT
.sp
Cons:
.INDENT 0.0
.IP \(bu 2
rename(2) / MoveFile(Ex) should work as specified, i.e. on the same file system
it should be a move (not a copy) operation, and in case of a directory
it should fail if the destination exists and is not an empty directory,
since this is used for locking.
The repository filesystem will have to deal with a big amount of files (there
are provisions in borgstore against having too many files in a single directory
by using a nested directory structure).
.IP \(bu 2
Also hardlinks are used for more safe and secure file updating (e.g. of the repo
config file), but the code tries to work also if hardlinks are not supported.
Bigger fs space usage overhead (will depend on allocation block size \- modern
filesystems like zfs are rather clever here using a variable block size).
.IP \(bu 2
Sometimes slower, due to less sequential / more random access operations.
.UNINDENT
.SS Units
.sp

@ -747,6 +757,10 @@ For more information about that, see: \fI\%https://xkcd.com/1179/\fP
.sp
Unless otherwise noted, we display local date and time.
Internally, we store and process date and time as UTC.
TIMESPAN
.sp
Some options accept a TIMESPAN parameter, which can be given as a
number of days (e.g. \fB7d\fP) or months (e.g. \fB12m\fP).
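Parsing such TIMESPAN values can be sketched in a few lines of Python (an illustrative, hypothetical helper; this is not borg's actual parser, and a month is approximated here as 31 days):

```python
import re
from datetime import timedelta


def parse_timespan(value: str) -> timedelta:
    """Parse a TIMESPAN like '7d' (days) or '12m' (months, approx. 31 days each)."""
    match = re.fullmatch(r"(\d+)([dm])", value)
    if match is None:
        raise ValueError(f"invalid TIMESPAN: {value!r}")
    count, unit = int(match.group(1)), match.group(2)
    return timedelta(days=count * (31 if unit == "m" else 1))


print(parse_timespan("7d"))  # a plain number-plus-unit suffix
```

Anything that is not a number followed by `d` or `m` is rejected.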
.SS Resource Usage
.sp
Borg might use a lot of resources depending on the size of the data set it is dealing with.

@ -27,7 +27,7 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORGFS" 1 "2024-07-19" "" "borg backup tool"
.TH "BORGFS" 1 "2024-09-08" "" "borg backup tool"
.SH NAME
borgfs \- Mount archive or an entire repository as a FUSE filesystem
.SH SYNOPSIS

@ -54,9 +54,6 @@ paths to extract; patterns are supported
.B \-V\fP,\fB \-\-version
show version number and exit
.TP
.B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-f\fP,\fB \-\-foreground
stay in foreground, do not daemonize
.TP

@ -35,18 +35,6 @@ of free space on the destination filesystem that has your backup repository
(and also on ~/.cache). A few GB should suffice for most hard-drive sized
repositories. See also :ref:`cache-memory-usage`.

Borg doesn't use space reserved for root on repository disks (even when run as root).
On file systems which do not support this mechanism (e.g. XFS) we recommend to reserve
some space in Borg itself just to be safe by adjusting the ``additional_free_space``
setting (a good starting point is ``2G``)::

    borg config additional_free_space 2G

If Borg runs out of disk space, it tries to free as much space as it
can while aborting the current operation safely, which allows the user to free more space
by deleting/pruning archives. This mechanism is not bullet-proof in some
circumstances [1]_.

If you do run out of disk space, it can be hard or impossible to free space,
because Borg needs free space to operate - even to delete backup archives.

@ -55,18 +43,13 @@ in your backup log files (you check them regularly anyway, right?).

Also helpful:

- create a big file as a "space reserve", that you can delete to free space
- use `borg rspace` to reserve some disk space that can be freed when the fs
  does not have free space any more.
- if you use LVM: use a LV + a filesystem that you can resize later and have
  some unallocated PEs you can add to the LV.
- consider using quotas
- use `prune` and `compact` regularly

.. [1] This failsafe can fail in these circumstances:

   - The underlying file system doesn't support statvfs(2), or returns incorrect
     data, or the repository doesn't reside on a single file system
   - Other tasks fill the disk simultaneously
   - Hard quotas (which may not be reflected in statvfs(2))

Important note about permissions
--------------------------------
@ -270,7 +253,7 @@ A passphrase should be a single line of text. Any trailing linefeed will be
stripped.

Do not use empty passphrases, as these can be trivially guessed, which does not
leave any encrypted data secure.
leave any encrypted data secure.

Avoid passphrases containing non-ASCII characters.
Borg can process any unicode text, but problems may arise at input due to text

@ -420,6 +403,15 @@ You can also use other remote filesystems in a similar way. Just be careful,
not all filesystems out there are really stable and working good enough to
be acceptable for backup usage.

Other kinds of repositories
---------------------------

Due to using the `borgstore` project, borg now also supports other kinds of
(remote) repositories besides `file:` and `ssh:`:

- sftp: the borg client will directly talk to an sftp server.
  This does not require borg being installed on the sftp server.
- Others may come in the future, adding backends to `borgstore` is rather simple.

Restoring a backup
------------------

@ -37,6 +37,7 @@ Usage

   usage/general

   usage/rcreate
   usage/rspace
   usage/rlist
   usage/rinfo
   usage/rcompress

@ -23,6 +23,8 @@ borg check
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--repair`` | attempt to repair any inconsistencies found |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--undelete-archives`` | attempt to undelete archives (use with --repair) |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--max-duration SECONDS`` | do only a partial repo check for max. SECONDS seconds (Default: unlimited) |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| .. class:: borg-common-opt-ref |

@ -65,6 +67,7 @@ borg check
--archives-only only perform archives checks
--verify-data perform cryptographic archive data integrity verification (conflicts with ``--repository-only``)
--repair attempt to repair any inconsistencies found
--undelete-archives attempt to undelete archives (use with --repair)
--max-duration SECONDS do only a partial repo check for max. SECONDS seconds (Default: unlimited)

@ -89,8 +92,8 @@ The check command verifies the consistency of a repository and its archives.
It consists of two major steps:

1. Checking the consistency of the repository itself. This includes checking
   the segment magic headers, and both the metadata and data of all objects in
   the segments. The read data is checked by size and CRC. Bit rot and other
   the file magic headers, and both the metadata and data of all objects in
   the repository. The read data is checked by size and hash. Bit rot and other
   types of accidental damage can be detected this way. Running the repository
   check can be split into multiple partial checks using ``--max-duration``.
   When checking a remote repository, please note that the checks run on the

@ -125,13 +128,12 @@ archive checks, nor enable repair mode. Consequently, if you want to use

**Warning:** Please note that partial repository checks (i.e. running it with
``--max-duration``) can only perform non-cryptographic checksum checks on the
segment files. A full repository check (i.e. without ``--max-duration``) can
also do a repository index check. Enabling partial repository checks excepts
archive checks for the same reason. Therefore partial checks may be useful with
very large repositories only where a full check would take too long.
repository files. Enabling partial repository checks excepts archive checks
for the same reason. Therefore partial checks may be useful with very large
repositories only where a full check would take too long.

The ``--verify-data`` option will perform a full integrity verification (as
opposed to checking the CRC32 of the segment) of data, which means reading the
opposed to checking just the xxh64) of data, which means reading the
data from the repository, decrypting and decompressing it. It is a complete
cryptographic verification and hence very time consuming, but will detect any
accidental and malicious corruption. Tamper-resistance is only guaranteed for

@ -168,17 +170,15 @@ by definition, a potentially lossy task.

In practice, repair mode hooks into both the repository and archive checks:

1. When checking the repository's consistency, repair mode will try to recover
   as many objects from segments with integrity errors as possible, and ensure
   that the index is consistent with the data stored in the segments.
1. When checking the repository's consistency, repair mode removes corrupted
   objects from the repository after it did a 2nd try to read them correctly.

2. When checking the consistency and correctness of archives, repair mode might
   remove whole archives from the manifest if their archive metadata chunk is
   corrupt or lost. On a chunk level (i.e. the contents of files), repair mode
   will replace corrupt or lost chunks with a same-size replacement chunk of
   zeroes. If a previously zeroed chunk reappears, repair mode will restore
   this lost chunk using the new chunk. Lastly, repair mode will also delete
   orphaned chunks (e.g. caused by read errors while creating the archive).
   this lost chunk using the new chunk.

Most steps taken by repair mode have a one-time effect on the repository, like
removing a lost archive from the repository. However, replacing a corrupt or

@ -196,4 +196,10 @@ repair mode Borg will check whether a previously lost chunk reappeared and will
replace the all-zero replacement chunk by the reappeared chunk. If all lost
chunks of a "zero-patched" file reappear, this effectively "heals" the file.
Consequently, if lost chunks were repaired earlier, it is advised to run
``--repair`` a second time after creating some new backups.
``--repair`` a second time after creating some new backups.

If ``--repair --undelete-archives`` is given, Borg will scan the repository
for archive metadata and if it finds some where no corresponding archives
directory entry exists, it will create the entries. This is basically undoing
``borg delete archive`` or ``borg prune ...`` commands and only possible before
``borg compact`` would remove the archives' data completely.
@ -8,8 +8,7 @@
-p, --progress show progress information
--iec format using IEC units (1KiB = 1024B)
--log-json Output one JSON object per log line instead of formatted text.
--lock-wait SECONDS wait at most SECONDS for acquiring a repository/cache lock (default: 1).
--bypass-lock Bypass locking mechanism
--lock-wait SECONDS wait at most SECONDS for acquiring a repository/cache lock (default: 10).
--show-version show/log the borg version
--show-rc show/log the return code (rc)
--umask M set umask to M (local only, default: 0077)

@ -12,15 +12,11 @@ borg compact

.. class:: borg-options-table

+-------------------------------------------------------+-------------------------+----------------------------------------------------------------+
| **optional arguments** |
+-------------------------------------------------------+-------------------------+----------------------------------------------------------------+
| | ``--threshold PERCENT`` | set minimum threshold for saved space in PERCENT (Default: 10) |
+-------------------------------------------------------+-------------------------+----------------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |
+-------------------------------------------------------+-------------------------+----------------------------------------------------------------+
+-------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |
+-------------------------------------------------------+

.. raw:: html

@ -34,30 +30,17 @@ borg compact

optional arguments
--threshold PERCENT set minimum threshold for saved space in PERCENT (Default: 10)

:ref:`common_options`

Description
~~~~~~~~~~~

This command frees repository space by compacting segments.
Free repository space by deleting unused chunks.

Use this regularly to avoid running out of space - you do not need to use this
after each borg command though. It is especially useful after deleting archives,
because only compaction will really free repository space.
borg compact analyzes all existing archives to find out which chunks are
actually used. There might be unused chunks resulting from borg delete or prune,
which can be removed to free space in the repository.

borg compact does not need a key, so it is possible to invoke it from the
client or also from the server.

Depending on the amount of segments that need compaction, it may take a while,
so consider using the ``--progress`` option.

A segment is compacted if the amount of saved space is above the percentage value
given by the ``--threshold`` option. If omitted, a threshold of 10% is used.
When using ``--verbose``, borg will output an estimate of the freed space.

See :ref:`separate_compaction` in Additional Notes for more details.
Unlike borg 1.x, borg2's compact needs the borg key if the repo is
encrypted.
|
@ -31,10 +31,6 @@ borg create
|
|||
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--json`` | output stats as JSON. Implies ``--stats``. |
|
||||
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--no-cache-sync`` | experimental: do not synchronize the chunks cache. |
|
||||
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--no-cache-sync-forced`` | experimental: do not synchronize the chunks cache (forced). |
|
||||
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--prefer-adhoc-cache`` | experimental: prefer AdHocCache (w/o files cache) over AdHocWithFilesCache (with files cache). |
|
||||
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--stdin-name NAME`` | use NAME in archive for stdin data (default: 'stdin') |
|
||||
|
@@ -105,10 +101,6 @@ borg create
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--timestamp TIMESTAMP`` | manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--checkpoint-volume BYTES`` | write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing) |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--chunker-params PARAMS`` | specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095 |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-C COMPRESSION``, ``--compression COMPRESSION`` | select compression algorithm, see the output of the "borg help compression" command for details. |
@@ -136,8 +128,6 @@ borg create
--list output verbose list of items (files, dirs, ...)
--filter STATUSCHARS only display items with the given status characters (see description)
--json output stats as JSON. Implies ``--stats``.
--no-cache-sync experimental: do not synchronize the chunks cache.
--no-cache-sync-forced experimental: do not synchronize the chunks cache (forced).
--prefer-adhoc-cache experimental: prefer AdHocCache (w/o files cache) over AdHocWithFilesCache (with files cache).
--stdin-name NAME use NAME in archive for stdin data (default: 'stdin')
--stdin-user USER set user USER in archive for stdin data (default: do not store user/uid)
@@ -180,8 +170,6 @@ borg create
Archive options
--comment COMMENT add a comment text to the archive
--timestamp TIMESTAMP manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)
--checkpoint-volume BYTES write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
--chunker-params PARAMS specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.

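The default chunker parameters ``buzhash,19,23,21,4095`` encode sizes as powers of two. A minimal illustrative sketch (not borg's implementation) that decodes them into concrete byte sizes:

```python
# Illustrative only: decode borg's CHUNK_MIN_EXP / CHUNK_MAX_EXP /
# HASH_MASK_BITS / HASH_WINDOW_SIZE chunker parameters into byte sizes.

def chunker_param_sizes(params: str):
    """Return (min_size, max_size, target_size, window_size) in bytes."""
    algo, chunk_min_exp, chunk_max_exp, hash_mask_bits, window = params.split(",")
    assert algo == "buzhash"
    return (
        2 ** int(chunk_min_exp),   # smallest chunk the chunker will emit
        2 ** int(chunk_max_exp),   # hard upper bound on chunk size
        2 ** int(hash_mask_bits),  # statistically expected (target) chunk size
        int(window),               # rolling-hash window size in bytes
    )

min_size, max_size, target, window = chunker_param_sizes("buzhash,19,23,21,4095")
print(min_size, max_size, target, window)  # 524288 8388608 2097152 4095
```

So the defaults yield chunks between 512 KiB and 8 MiB, averaging around 2 MiB.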
@@ -207,9 +195,7 @@ stdin* below for details.
The archive will consume almost no disk space for files or parts of files that
have already been stored in other archives.

The archive name needs to be unique. It must not end in '.checkpoint' or
'.checkpoint.N' (with N being a number), because these names are used for
checkpoints and treated in special ways.
The archive name needs to be unique.

In the archive name, you may use the following placeholders:
{now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.
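As a rough sketch of what placeholder expansion does (borg's real implementation supports more placeholders and format options, so treat this as illustrative only):

```python
# Hypothetical sketch of expanding archive-name placeholders such as
# {now}, {utcnow}, {fqdn}, {hostname} and {user}; names follow the text above,
# the formatting details are assumptions.
import datetime
import getpass
import socket


def expand_placeholders(name_template: str) -> str:
    values = {
        "now": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S"),
        "utcnow": datetime.datetime.now(datetime.timezone.utc).strftime("%Y-%m-%dT%H:%M:%S"),
        "hostname": socket.gethostname(),
        "fqdn": socket.getfqdn(),
        "user": getpass.getuser(),
    }
    return name_template.format(**values)


# e.g. "myhost-2024-09-01T12:00:00"
print(expand_placeholders("{hostname}-{now}"))
```

This makes names like ``{hostname}-{now}`` expand to a unique, time-stamped archive name per run.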
@@ -12,43 +12,35 @@ borg delete

.. class:: borg-options-table

+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| **optional arguments** |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-n``, ``--dry-run`` | do not change repository |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--list`` | output verbose list of archives |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--consider-checkpoints`` | consider checkpoint archives for deletion (default: not considered). |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-s``, ``--stats`` | print statistics for the deleted archive |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--force`` | force deletion of corrupted archives, use ``--force --force`` in case ``--force`` does not work. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| **Archive filters** — Archive filters can be applied to repository targets. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-a PATTERN``, ``--match-archives PATTERN`` | only consider archive names matching the pattern. see "borg help match-archives". |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--sort-by KEYS`` | Comma-separated list of sorting keys; valid keys are: timestamp, archive, name, id; default is: timestamp |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--first N`` | consider first N archives after other filters were applied |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--last N`` | consider last N archives after other filters were applied |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--oldest TIMESPAN`` | consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--newest TIMESPAN`` | consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--older TIMESPAN`` | consider archives older than (now - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--newer TIMESPAN`` | consider archives newer than (now - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+-----------------------------------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| **optional arguments** |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-n``, ``--dry-run`` | do not change repository |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--list`` | output verbose list of archives |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| **Archive filters** — Archive filters can be applied to repository targets. |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-a PATTERN``, ``--match-archives PATTERN`` | only consider archive names matching the pattern. see "borg help match-archives". |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--sort-by KEYS`` | Comma-separated list of sorting keys; valid keys are: timestamp, archive, name, id; default is: timestamp |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--first N`` | consider first N archives after other filters were applied |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--last N`` | consider last N archives after other filters were applied |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--oldest TIMESPAN`` | consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--newest TIMESPAN`` | consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--older TIMESPAN`` | consider archives older than (now - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--newer TIMESPAN`` | consider archives newer than (now - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+

.. raw:: html
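The ``--oldest``/``--newest``/``--older``/``--newer`` filters take a TIMESPAN such as ``7d`` or ``12m``. A minimal sketch of parsing such values (an assumption for illustration, not borg's actual parser; month and year lengths are deliberately coarse):

```python
# Illustrative TIMESPAN parser: a number followed by a unit suffix.
# Unit lengths here are rough approximations, not borg's definitions.
import re

UNIT_DAYS = {"d": 1, "w": 7, "m": 31, "y": 365}  # coarse month/year lengths


def parse_timespan_days(spec: str) -> int:
    match = re.fullmatch(r"(\d+)([dwmy])", spec)
    if not match:
        raise ValueError(f"invalid timespan: {spec!r}")
    amount, unit = match.groups()
    return int(amount) * UNIT_DAYS[unit]


print(parse_timespan_days("7d"))   # 7
print(parse_timespan_days("12m"))  # 372
```

With such a parser, ``--older 7d`` would translate to "archive timestamp before now minus 7 days".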
@@ -63,12 +55,8 @@ borg delete


optional arguments
-n, --dry-run do not change repository
--list output verbose list of archives
--consider-checkpoints consider checkpoint archives for deletion (default: not considered).
-s, --stats print statistics for the deleted archive
--force force deletion of corrupted archives, use ``--force --force`` in case ``--force`` does not work.
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)
-n, --dry-run do not change repository
--list output verbose list of archives


:ref:`common_options`
@@ -95,13 +83,6 @@ you run ``borg compact``.

When in doubt, use ``--dry-run --list`` to see what would be deleted.

When using ``--stats``, you will get some statistics about how much data was
deleted - the "Deleted data" deduplicated size there is most interesting as
that is how much your repository will shrink.
Please note that the "All archives" stats refer to the state after deletion.

You can delete multiple archives by specifying a matching pattern,
using the ``--match-archives PATTERN`` option (for more info on these patterns,
see :ref:`borg_patterns`).

Always first use ``--dry-run --list`` to see what would be deleted.
see :ref:`borg_patterns`).
@@ -88,9 +88,6 @@ General:
BORG_CACHE_IMPL
    Choose the implementation for the clientside cache, choose one of:

    - ``local``: uses a persistent chunks cache and keeps it in a perfect state (precise refcounts and
      sizes), requiring a potentially resource expensive cache sync in multi-client scenarios.
      Also has a persistent files cache.
    - ``adhoc``: builds a non-persistent chunks cache by querying the repo. Chunks cache contents
      are somewhat sloppy for already existing chunks, concerning their refcount ("infinite") and
      size (0). No files cache (slow, will chunk all input files). DEPRECATED.
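A minimal sketch of how such an environment switch could be validated, assuming ``local`` as the fallback when the variable is unset (the default and the wiring are assumptions for illustration, not borg's code):

```python
# Hypothetical validation of the BORG_CACHE_IMPL environment variable.
# Takes the environment as a dict so it is easy to test; in real use you
# would pass os.environ.

def choose_cache_impl(env: dict) -> str:
    impl = env.get("BORG_CACHE_IMPL", "local")  # "local" default is assumed
    if impl not in ("local", "adhoc"):
        raise ValueError(f"unknown cache implementation: {impl!r}")
    return impl


print(choose_cache_impl({}))                            # local
print(choose_cache_impl({"BORG_CACHE_IMPL": "adhoc"}))  # adhoc
```

Rejecting unknown values early keeps a typo in the variable from silently selecting the wrong cache behaviour.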
@@ -1,30 +1,37 @@
File systems
~~~~~~~~~~~~

We strongly recommend against using Borg (or any other database-like
software) on non-journaling file systems like FAT, since it is not
possible to assume any consistency in case of power failures (or a
sudden disconnect of an external drive or similar failures).
We recommend using a reliable, scalable journaling filesystem for the
repository, e.g. zfs, btrfs, ext4, apfs.

While Borg uses a data store that is resilient against these failures
when used on journaling file systems, it is not possible to guarantee
this with some hardware -- independent of the software used. We don't
know a list of affected hardware.
Borg now uses the ``borgstore`` package to implement the key/value store it
uses for the repository.

If you are suspicious whether your Borg repository is still consistent
and readable after one of the failures mentioned above occurred, run
``borg check --verify-data`` to make sure it is consistent.
It currently uses the ``file:`` Store (posixfs backend) either with a local
directory or via ssh and a remote ``borg serve`` agent using borgstore on the
remote side.

.. rubric:: Requirements for Borg repository file systems
This means that it will store each chunk into a separate filesystem file
(for more details, see the ``borgstore`` project).

- Long file names
- At least three directory levels with short names
- Typically, file sizes up to a few hundred MB.
  Large repositories may require large files (>2 GB).
- Up to 1000 files per directory.
- rename(2) / MoveFile(Ex) should work as specified, i.e. on the same file system
  it should be a move (not a copy) operation, and in case of a directory
  it should fail if the destination exists and is not an empty directory,
  since this is used for locking.
- Also hardlinks are used for more safe and secure file updating (e.g. of the repo
  config file), but the code tries to work also if hardlinks are not supported.
This has some pros and cons (compared to legacy borg 1.x's segment files):

Pros:

- Simplicity and better maintainability of the borg code.
- Sometimes faster, less I/O, better scalability: e.g. borg compact can just
  remove unused chunks by deleting a single file and does not need to read
  and re-write segment files to free space.
- In future, easier to adapt to other kinds of storage:
  borgstore's backends are quite simple to implement.
  A ``sftp:`` backend already exists, cloud storage might be easy to add.
- Parallel repository access with less locking is easier to implement.

Cons:

- The repository filesystem will have to deal with a big amount of files (there
  are provisions in borgstore against having too many files in a single directory
  by using a nested directory structure).
- Bigger fs space usage overhead (will depend on allocation block size - modern
  filesystems like zfs are rather clever here using a variable block size).
- Sometimes slower, due to less sequential / more random access operations.
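The "one file per chunk, sharded into a nested directory structure" idea can be sketched like this (the exact nesting scheme is borgstore's; the two-level hex-prefix sharding below is just an illustration of the technique):

```python
# Sketch of sharding chunk files into nested directories so that no single
# directory accumulates too many entries. The "data/xx/yy/..." layout is an
# assumption for illustration, not borgstore's exact scheme.
import hashlib


def chunk_path(chunk_id_hex: str) -> str:
    # shard by the first two hex-digit pairs: 256 * 256 possible subdirectories
    return f"data/{chunk_id_hex[:2]}/{chunk_id_hex[2:4]}/{chunk_id_hex}"


chunk_id = hashlib.sha256(b"example chunk").hexdigest()
print(chunk_path(chunk_id))
```

With two hex-pair levels, a repository of a million chunks averages only ~15 files per leaf directory.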
@@ -20,6 +20,9 @@ Note: you may also prepend a ``file://`` to a filesystem path to get URL style.

``ssh://user@host:port/~/path/to/repo`` - path relative to user's home directory

**Remote repositories** accessed via sftp:

``sftp://user@host:port/path/to/repo`` - absolute path

If you frequently need the same repo URL, it is a good idea to set the
``BORG_REPO`` environment variable to set a default for the repo URL:
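Both ``ssh://`` and ``sftp://`` repo URLs follow the standard ``scheme://user@host:port/path`` shape, so their parts can be dissected with the standard library (illustrative only; borg has its own URL handling):

```python
# Dissect a repo URL of the shape scheme://user@host:port/path using
# urllib.parse; works for any scheme, including ssh and sftp.
from urllib.parse import urlsplit

url = urlsplit("ssh://user@host:2222/path/to/repo")
print(url.scheme, url.username, url.hostname, url.port, url.path)
# ssh user host 2222 /path/to/repo
```

The same split works for ``sftp://`` URLs, and a missing port simply yields ``None``.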
@@ -43,10 +43,6 @@ borg import-tar
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--timestamp TIMESTAMP`` | manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--checkpoint-volume BYTES`` | write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing) |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--chunker-params PARAMS`` | specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095 |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-C COMPRESSION``, ``--compression COMPRESSION`` | select compression algorithm, see the output of the "borg help compression" command for details. |

@@ -83,8 +79,6 @@ borg import-tar
Archive options
--comment COMMENT add a comment text to the archive
--timestamp TIMESTAMP manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)
--checkpoint-volume BYTES write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
--chunker-params PARAMS specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.

@@ -127,9 +127,7 @@ Keys available only when listing files in an archive:
- flags: file flags

- size: file size
- dsize: deduplicated size
- num_chunks: number of chunks in this file
- unique_chunks: number of unique chunks in this file

- mtime: file modification time
- ctime: file change time
@@ -21,8 +21,6 @@ borg mount
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| **optional arguments** |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``--consider-checkpoints`` | Show checkpoint archives in the repository contents list (default: hidden). |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-f``, ``--foreground`` | stay in foreground, do not daemonize |
+-----------------------------------------------------------------------------+----------------------------------------------+-----------------------------------------------------------------------------------------------------------+
| | ``-o`` | Extra mount options |

@@ -81,7 +79,6 @@ borg mount


optional arguments
--consider-checkpoints Show checkpoint archives in the repository contents list (default: hidden).
-f, --foreground stay in foreground, do not daemonize
-o Extra mount options
--numeric-ids use numeric user and group identifiers from archive(s)

@@ -12,59 +12,53 @@ borg prune

.. class:: borg-options-table

+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| **optional arguments** |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-n``, ``--dry-run`` | do not change repository |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--force`` | force pruning of corrupted archives, use ``--force --force`` in case ``--force`` does not work. |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-s``, ``--stats`` | print statistics for the deleted archive |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--list`` | output verbose list of archives it keeps/prunes |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--short`` | use a less wide archive part format |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--list-pruned`` | output verbose list of archives it prunes |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--list-kept`` | output verbose list of archives it keeps |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--format FORMAT`` | specify format for the archive part (default: "{archive:<36} {time} [{id}]") |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--keep-within INTERVAL`` | keep all archives within this time interval |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--keep-last``, ``--keep-secondly`` | number of secondly archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--keep-minutely`` | number of minutely archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-H``, ``--keep-hourly`` | number of hourly archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-d``, ``--keep-daily`` | number of daily archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-w``, ``--keep-weekly`` | number of weekly archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-m``, ``--keep-monthly`` | number of monthly archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-y``, ``--keep-yearly`` | number of yearly archives to keep |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| **Archive filters** — Archive filters can be applied to repository targets. |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``-a PATTERN``, ``--match-archives PATTERN`` | only consider archive names matching the pattern. see "borg help match-archives". |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--oldest TIMESPAN`` | consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
| | ``--newest TIMESPAN`` | consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m. |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--older TIMESPAN`` | consider archives older than (now - TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--newer TIMESPAN`` | consider archives newer than (now - TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| **optional arguments** |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-n``, ``--dry-run`` | do not change repository |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--list`` | output verbose list of archives it keeps/prunes |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--short`` | use a less wide archive part format |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--list-pruned`` | output verbose list of archives it prunes |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--list-kept`` | output verbose list of archives it keeps |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--format FORMAT`` | specify format for the archive part (default: "{archive:<36} {time} [{id}]") |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--keep-within INTERVAL`` | keep all archives within this time interval |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--keep-last``, ``--keep-secondly`` | number of secondly archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--keep-minutely`` | number of minutely archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-H``, ``--keep-hourly`` | number of hourly archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-d``, ``--keep-daily`` | number of daily archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-w``, ``--keep-weekly`` | number of weekly archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-m``, ``--keep-monthly`` | number of monthly archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-y``, ``--keep-yearly`` | number of yearly archives to keep |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| .. class:: borg-common-opt-ref |
|
||||
| |
|
||||
| :ref:`common_options` |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| **Archive filters** — Archive filters can be applied to repository targets. |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``-a PATTERN``, ``--match-archives PATTERN`` | only consider archive names matching the pattern. see "borg help match-archives". |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--oldest TIMESPAN`` | consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--newest TIMESPAN`` | consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--older TIMESPAN`` | consider archives older than (now - TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
| | ``--newer TIMESPAN`` | consider archives newer than (now - TIMESPAN), e.g. 7d or 12m. |
|
||||
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------+
|
||||
|
||||
.. raw:: html
|
||||
|
||||
|
@@ -80,8 +74,6 @@ borg prune

optional arguments
-n, --dry-run do not change repository
--force force pruning of corrupted archives, use ``--force --force`` in case ``--force`` does not work.
-s, --stats print statistics for the deleted archive
--list output verbose list of archives it keeps/prunes
--short use a less wide archive part format
--list-pruned output verbose list of archives it prunes

@@ -95,7 +87,6 @@ borg prune
-w, --keep-weekly number of weekly archives to keep
-m, --keep-monthly number of monthly archives to keep
-y, --keep-yearly number of yearly archives to keep
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)

:ref:`common_options`

@@ -122,11 +113,6 @@ certain number of historic backups. This retention policy is commonly referred t
`GFS <https://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son>`_
(Grandfather-father-son) backup rotation scheme.
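
The ``--keep-{secondly..yearly}`` rules can be pictured as "keep the newest
archive in each of the last N periods". A rough, simplified sketch (plain POSIX
shell with made-up timestamps, not borg's actual implementation) of what a
``--keep-daily 2`` rule would select::

    $ printf '%s\n' \
        '2024-09-01T12:00' '2024-09-01T18:00' \
        '2024-09-02T12:00' '2024-09-03T12:00' |
      sort -r |                   # newest first
      awk -F'T' '!seen[$1]++' |   # keep the newest entry per calendar day
      head -n 2
    2024-09-03T12:00
    2024-09-02T12:00

borg's real rule set is more involved (archives already kept by one rule are
not counted again by later rules), but the per-period selection follows this idea.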

Also, prune automatically removes checkpoint archives (incomplete archives left
behind by interrupted backup runs) except if the checkpoint is the latest
archive (and thus still needed). Checkpoint archives are not considered when
comparing archive counts against the retention limits (``--keep-X``).

If you use --match-archives (-a), then only archives that match the pattern are
considered for deletion and only those archives count towards the totals
specified by the rules.

@@ -162,11 +148,6 @@ The ``--keep-last N`` option is doing the same as ``--keep-secondly N`` (and it
keep the last N archives under the assumption that you do not create more than one
backup archive in the same second).

When using ``--stats``, you will get some statistics about how much data was
deleted - the "Deleted data" deduplicated size there is most interesting as
that is how much your repository will shrink.
Please note that the "All archives" stats refer to the state after pruning.

You can influence how the ``--list`` output is formatted by using the ``--short``
option (less wide output) or by giving a custom format using ``--format`` (see
the ``borg rlist`` description for more details about the format string).
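
For illustration, a typical GFS-style policy could look like this (the numbers
are just an example, adapt them to your own retention needs)::

    # preview which archives would be kept/pruned, without changing anything:
    $ borg prune -n --list --keep-daily=7 --keep-weekly=4 --keep-monthly=6

    # if the preview looks right, run it for real:
    $ borg prune --list --keep-daily=7 --keep-weekly=4 --keep-monthly=6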
@@ -19,8 +19,6 @@ borg rcompress
+-------------------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------+
| | ``-s``, ``--stats`` | print statistics |
+-------------------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-------------------------------------------------------+---------------------------------------------------+--------------------------------------------------------------------------------------------------+
| .. class:: borg-common-opt-ref |
| |
| :ref:`common_options` |

@@ -41,7 +39,6 @@ borg rcompress

optional arguments
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.
-s, --stats print statistics
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)

:ref:`common_options`

@@ -52,20 +49,14 @@ Description

Repository (re-)compression (and/or re-obfuscation).

Reads all chunks in the repository (in on-disk order, this is important for
compaction) and recompresses them if they are not already using the compression
type/level and obfuscation level given via ``--compression``.
Reads all chunks in the repository and recompresses them if they are not already
using the compression type/level and obfuscation level given via ``--compression``.

If the outcome of the chunk processing indicates a change in compression
type/level or obfuscation level, the processed chunk is written to the repository.
Please note that the outcome might not always be the desired compression
type/level - if no compression gives a shorter output, that might be chosen.

Every ``--checkpoint-interval``, progress is committed to the repository and
the repository is compacted (this is to keep temporary repo space usage in bounds).
A lower checkpoint interval means lower temporary repo space usage, but also
slower progress due to higher overhead (and vice versa).

Please note that this command can not work in low (or zero) free disk space
conditions.
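
For illustration, a possible invocation could look like this (the compression
spec is just one example, see "borg help compression" for valid values)::

    # recompress all chunks to zstd level 3 and print statistics:
    $ borg rcompress -C zstd,3 --stats

    # use a lower checkpoint interval to keep temporary repo space usage low:
    $ borg rcompress -C zstd,3 --checkpoint-interval 600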
@@ -17,6 +17,8 @@ borg rcreate
+-------------------------------------------------------+------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--other-repo SRC_REPOSITORY`` | reuse the key material from the other repository |
+-------------------------------------------------------+------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--from-borg1`` | other repository is borg 1.x |
+-------------------------------------------------------+------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-e MODE``, ``--encryption MODE`` | select encryption key mode **(required)** |
+-------------------------------------------------------+------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--append-only`` | create an append-only mode repository. Note that this only affects the low level structure of the repository, and running `delete` or `prune` will still be allowed. See :ref:`append_only_mode` in Additional Notes for more details. |

@@ -46,6 +48,7 @@ borg rcreate

optional arguments
--other-repo SRC_REPOSITORY reuse the key material from the other repository
--from-borg1 other repository is borg 1.x
-e MODE, --encryption MODE select encryption key mode **(required)**
--append-only create an append-only mode repository. Note that this only affects the low level structure of the repository, and running `delete` or `prune` will still be allowed. See :ref:`append_only_mode` in Additional Notes for more details.
--storage-quota QUOTA Set storage quota of the new repository (e.g. 5G, 1.5T). Default: no quota.

@@ -59,8 +62,8 @@ borg rcreate
Description
~~~~~~~~~~~

This command creates a new, empty repository. A repository is a filesystem
directory containing the deduplicated data from zero or more archives.
This command creates a new, empty repository. A repository is a ``borgstore`` store
containing the deduplicated data from zero or more archives.

Encryption mode TLDR
++++++++++++++++++++

@@ -173,4 +176,12 @@ Optionally, if you use ``--copy-crypt-key`` you can also keep the same crypt_key
(used for authenticated encryption). Might be desired e.g. if you want to have less
keys to manage.

Creating related repositories is useful e.g. if you want to use ``borg transfer`` later.

Creating a related repository for data migration from borg 1.2 or 1.4
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

You can use ``borg rcreate --other-repo ORIG_REPO --from-borg1 ...`` to create a related
repository that uses the same secret key material as the given other/original repository.

Then use ``borg transfer --other-repo ORIG_REPO --from-borg1 ...`` to transfer the archives.
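
Put together, a migration could look like this (the encryption mode shown is a
placeholder example; the mode you choose has to be compatible with the old
repo's key, e.g. a blake2 key needs a blake2 mode)::

    # create the new repo, reusing the key material of the borg 1.x repo:
    $ borg rcreate --other-repo ORIG_REPO --from-borg1 -e repokey-blake2-aes-ocb

    # then copy the archives over:
    $ borg transfer --other-repo ORIG_REPO --from-borg1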
@@ -67,10 +67,6 @@ borg recreate
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--target TARGET`` | create a new archive with the name ARCHIVE, do not replace existing archive (only applies for a single archive) |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-c SECONDS``, ``--checkpoint-interval SECONDS`` | write checkpoint every SECONDS seconds (Default: 1800) |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--checkpoint-volume BYTES`` | write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing) |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--comment COMMENT`` | add a comment text to the archive |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--timestamp TIMESTAMP`` | manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory. |

@@ -115,21 +111,19 @@ borg recreate

Archive filters
-a PATTERN, --match-archives PATTERN only consider archive names matching the pattern. see "borg help match-archives".
--sort-by KEYS Comma-separated list of sorting keys; valid keys are: timestamp, archive, name, id; default is: timestamp
--first N consider first N archives after other filters were applied
--last N consider last N archives after other filters were applied
--oldest TIMESPAN consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m.
--newest TIMESPAN consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m.
--older TIMESPAN consider archives older than (now - TIMESPAN), e.g. 7d or 12m.
--newer TIMESPAN consider archives newer than (now - TIMESPAN), e.g. 7d or 12m.
--target TARGET create a new archive with the name ARCHIVE, do not replace existing archive (only applies for a single archive)
-c SECONDS, --checkpoint-interval SECONDS write checkpoint every SECONDS seconds (Default: 1800)
--checkpoint-volume BYTES write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)
--comment COMMENT add a comment text to the archive
--timestamp TIMESTAMP manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.
--chunker-params PARAMS rechunk using given chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or `default` to use the chunker defaults. default: do not rechunk
-a PATTERN, --match-archives PATTERN only consider archive names matching the pattern. see "borg help match-archives".
--sort-by KEYS Comma-separated list of sorting keys; valid keys are: timestamp, archive, name, id; default is: timestamp
--first N consider first N archives after other filters were applied
--last N consider last N archives after other filters were applied
--oldest TIMESPAN consider archives between the oldest archive's timestamp and (oldest + TIMESPAN), e.g. 7d or 12m.
--newest TIMESPAN consider archives between the newest archive's timestamp and (newest - TIMESPAN), e.g. 7d or 12m.
--older TIMESPAN consider archives older than (now - TIMESPAN), e.g. 7d or 12m.
--newer TIMESPAN consider archives newer than (now - TIMESPAN), e.g. 7d or 12m.
--target TARGET create a new archive with the name ARCHIVE, do not replace existing archive (only applies for a single archive)
--comment COMMENT add a comment text to the archive
--timestamp TIMESTAMP manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.
--chunker-params PARAMS rechunk using given chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or `default` to use the chunker defaults. default: do not rechunk

Description
@@ -44,13 +44,4 @@ borg rinfo
Description
~~~~~~~~~~~

This command displays detailed information about the repository.

Please note that the deduplicated sizes of the individual archives do not add
up to the deduplicated size of the repository ("all archives"), because the two
are meaning different things:

This archive / deduplicated size = amount of data stored ONLY for this archive
= unique chunks of this archive.
All archives / deduplicated size = amount of data stored in the repo
= all chunks in the repository.
This command displays detailed information about the repository.
@@ -15,8 +15,6 @@ borg rlist
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **optional arguments** |
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--consider-checkpoints`` | Show checkpoint archives in the repository contents list (default: hidden). |
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--short`` | only print the archive names, nothing else |
+-----------------------------------------------------------------------------+----------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--format FORMAT`` | specify format for archive listing (default: "{archive:<36} {time} [{id}]{NL}") |

@@ -59,7 +57,6 @@ borg rlist

optional arguments
--consider-checkpoints Show checkpoint archives in the repository contents list (default: hidden).
--short only print the archive names, nothing else
--format FORMAT specify format for archive listing (default: "{archive:<36} {time} [{id}]{NL}")
--json Format output as JSON. The form of ``--format`` is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text.
1 docs/usage/rspace.rst Normal file

@@ -0,0 +1 @@
.. include:: rspace.rst.inc

80 docs/usage/rspace.rst.inc Normal file
@@ -0,0 +1,80 @@
.. IMPORTANT: this file is auto-generated from borg's built-in help, do not edit!

.. _borg_rspace:

borg rspace
-----------
.. code-block:: none

    borg [common options] rspace [options]

.. only:: html

   .. class:: borg-options-table

   +-------------------------------------------------------+---------------------+---------------------------------------------------------------------+
   | **optional arguments** |
   +-------------------------------------------------------+---------------------+---------------------------------------------------------------------+
   | | ``--reserve SPACE`` | Amount of space to reserve (e.g. 100M, 1G). Default: 0. |
   +-------------------------------------------------------+---------------------+---------------------------------------------------------------------+
   | | ``--free`` | Free all reserved space. Don't forget to reserve space later again. |
   +-------------------------------------------------------+---------------------+---------------------------------------------------------------------+
   | .. class:: borg-common-opt-ref |
   | |
   | :ref:`common_options` |
   +-------------------------------------------------------+---------------------+---------------------------------------------------------------------+

.. raw:: html

   <script type='text/javascript'>
   $(document).ready(function () {
       $('.borg-options-table colgroup').remove();
   })
   </script>

.. only:: latex

   optional arguments
   --reserve SPACE Amount of space to reserve (e.g. 100M, 1G). Default: 0.
   --free Free all reserved space. Don't forget to reserve space later again.

   :ref:`common_options`

Description
~~~~~~~~~~~

This command manages reserved space in a repository.

Borg can not work in disk-full conditions (can not lock a repo and thus can
not run prune/delete or compact operations to free disk space).

To avoid running into dead-end situations like that, you can put some objects
into a repository that take up some disk space. If you ever run into a
disk-full situation, you can free that space and then borg will be able to
run normally, so you can free more disk space by using prune/delete/compact.
After that, don't forget to reserve space again, in case you run into that
situation again at a later time.

Examples::

    # Create a new repository:
    $ borg rcreate ...
    # Reserve approx. 1GB of space for emergencies:
    $ borg rspace --reserve 1G

    # Check amount of reserved space in the repository:
    $ borg rspace

    # EMERGENCY! Free all reserved space to get things back to normal:
    $ borg rspace --free
    $ borg prune ...
    $ borg delete ...
    $ borg compact -v  # only this actually frees space of deleted archives
    $ borg rspace --reserve 1G  # reserve space again for next time

Reserved space is always rounded up to use full reservation blocks of 64MiB.
@ -19,6 +19,8 @@ borg transfer
|
|||
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|
||||
| | ``--other-repo SRC_REPOSITORY`` | transfer archives from the other repository |
|
||||
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--from-borg1`` | other repository is borg 1.x |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--upgrader UPGRADER`` | use the upgrader to convert transferred data (default: no conversion) |
+-----------------------------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-C COMPRESSION``, ``--compression COMPRESSION`` | select compression algorithm, see the output of the "borg help compression" command for details. |
@ -63,6 +65,7 @@ borg transfer
optional arguments
  -n, --dry-run                 do not change repository, just check
  --other-repo SRC_REPOSITORY   transfer archives from the other repository
  --from-borg1                  other repository is borg 1.x
  --upgrader UPGRADER           use the upgrader to convert transferred data (default: no conversion)
  -C COMPRESSION, --compression COMPRESSION
                                select compression algorithm, see the output of the "borg help compression" command for details.
  --recompress MODE             recompress data chunks according to `MODE` and ``--compression``. Possible modes are `always`: recompress unconditionally; and `never`: do not recompress (faster: re-uses compressed data chunks w/o change). If no MODE is given, `always` will be used. Not passing --recompress is equivalent to "--recompress never".
@ -96,31 +99,40 @@ any case) and keep data compressed "as is" (saves time as no data compression is
If you want to globally change compression while transferring archives to the DST_REPO,
give ``--compress=WANTED_COMPRESSION --recompress=always``.

Suggested use for general purpose archive transfer (not repo upgrades)::
The default is to transfer all archives.

You could use the misc. archive filter options to limit which archives it will
transfer, e.g. using the ``-a`` option. This is recommended for big
repositories with multiple data sets to keep the runtime per invocation lower.

General purpose archive transfer
++++++++++++++++++++++++++++++++

Transfer borg2 archives into a related other borg2 repository::

    # create a related DST_REPO (reusing key material from SRC_REPO), so that
    # chunking and chunk id generation will work in the same way as before.
    borg --repo=DST_REPO rcreate --other-repo=SRC_REPO --encryption=DST_ENC
    borg --repo=DST_REPO rcreate --encryption=DST_ENC --other-repo=SRC_REPO

    # transfer archives from SRC_REPO to DST_REPO
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --dry-run  # check what it would do
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO            # do it!
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --dry-run  # check! anything left?

The default is to transfer all archives, including checkpoint archives.

You could use the misc. archive filter options to limit which archives it will
transfer, e.g. using the ``-a`` option. This is recommended for big
repositories with multiple data sets to keep the runtime per invocation lower.
Data migration / upgrade from borg 1.x
++++++++++++++++++++++++++++++++++++++

For repository upgrades (e.g. from a borg 1.2 repo to a related borg 2.0 repo), usage is
quite similar to the above::
To migrate your borg 1.x archives into a related, new borg2 repository, usage is quite similar
to the above, but you need the ``--from-borg1`` option::

    # fast: compress metadata with zstd,3, but keep data chunks compressed as they are:
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --upgrader=From12To20 \
        --compress=zstd,3 --recompress=never
    borg --repo=DST_REPO rcreate --encryption=DST_ENC --other-repo=SRC_REPO --from-borg1

    # compress metadata and recompress data with zstd,3
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --upgrader=From12To20 \
    # to continue using lz4 compression as you did in SRC_REPO:
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --from-borg1 \
        --compress=lz4 --recompress=never

    # alternatively, to recompress everything to zstd,3:
    borg --repo=DST_REPO transfer --other-repo=SRC_REPO --from-borg1 \
        --compress=zstd,3 --recompress=always
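For big repositories with multiple data sets, the ``-a`` option mentioned above selects archives by name pattern. The selection idea can be illustrated stand-alone (a sketch using shell-style globs via ``fnmatch``; borg's own ``sh:`` match-archives syntax is similar but not identical):

```python
from fnmatch import fnmatchcase


def select_archives(names, pattern):
    """Return the archive names a shell-style glob pattern would match."""
    return [name for name in names if fnmatchcase(name, pattern)]


archives = ["host1-2024-01-01", "host1-2024-02-01", "host2-2024-01-01"]
# transfer one data set per invocation to keep runtimes short:
per_dataset = select_archives(archives, "host1-*")
```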
@ -35,6 +35,8 @@ dependencies = [
    "platformdirs >=3.0.0, <5.0.0; sys_platform == 'darwin'",  # for macOS: breaking changes in 3.0.0,
    "platformdirs >=2.6.0, <5.0.0; sys_platform != 'darwin'",  # for others: 2.6+ works consistently.
    "argon2-cffi",
    "borgstore",

]

[project.optional-dependencies]

@ -1 +1 @@
black >=23.0, <24
black >=24.0, <25
@ -1,10 +0,0 @@
- Install AFL and the requirements for LLVM mode (see docs)
- Compile the fuzzing target, e.g.

  AFL_HARDEN=1 afl-clang-fast main.c -o fuzz-target -O3

  (other options, like using ASan or MSan, are possible as well)
- Add additional test cases to testcase_dir
- Run afl, easiest (but inefficient) way:

  afl-fuzz -i testcase_dir -o findings_dir ./fuzz-target
@ -1,33 +0,0 @@

#define BORG_NO_PYTHON

#include "../../src/borg/_hashindex.c"
#include "../../src/borg/cache_sync/cache_sync.c"

#define BUFSZ 32768

int main() {
    char buf[BUFSZ];
    int len, ret;
    CacheSyncCtx *ctx;
    HashIndex *idx;

    /* capacity, key size, value size */
    idx = hashindex_init(0, 32, 12);
    ctx = cache_sync_init(idx);

    while (1) {
        len = read(0, buf, BUFSZ);
        if (!len) {
            break;
        }
        ret = cache_sync_feed(ctx, buf, len);
        if (!ret && cache_sync_error(ctx)) {
            fprintf(stderr, "error: %s\n", cache_sync_error(ctx));
            return 1;
        }
    }
    hashindex_free(idx);
    cache_sync_free(ctx);
    return 0;
}

Binary file not shown.
File diff suppressed because it is too large.
@ -14,7 +14,6 @@
import os
import shlex
import signal
import time
from datetime import datetime, timezone

from ..logger import create_logger, setup_logging

@ -68,7 +67,6 @@ def get_func(args):
    from .benchmark_cmd import BenchmarkMixIn
    from .check_cmd import CheckMixIn
    from .compact_cmd import CompactMixIn
    from .config_cmd import ConfigMixIn
    from .create_cmd import CreateMixIn
    from .debug_cmd import DebugMixIn
    from .delete_cmd import DeleteMixIn

@ -88,6 +86,7 @@ def get_func(args):
    from .rinfo_cmd import RInfoMixIn
    from .rdelete_cmd import RDeleteMixIn
    from .rlist_cmd import RListMixIn
    from .rspace_cmd import RSpaceMixIn
    from .serve_cmd import ServeMixIn
    from .tar_cmds import TarMixIn
    from .transfer_cmd import TransferMixIn

@ -98,7 +97,6 @@ class Archiver(
    BenchmarkMixIn,
    CheckMixIn,
    CompactMixIn,
    ConfigMixIn,
    CreateMixIn,
    DebugMixIn,
    DeleteMixIn,

@ -118,6 +116,7 @@ class Archiver(
    RDeleteMixIn,
    RInfoMixIn,
    RListMixIn,
    RSpaceMixIn,
    ServeMixIn,
    TarMixIn,
    TransferMixIn,

@ -126,7 +125,6 @@ class Archiver(
    def __init__(self, lock_wait=None, prog=None):
        self.lock_wait = lock_wait
        self.prog = prog
        self.last_checkpoint = time.monotonic()

    def print_warning(self, msg, *args, **kw):
        warning_code = kw.get("wc", EXIT_WARNING)  # note: wc=None can be used to not influence exit code

@ -336,7 +334,6 @@ def build_parser(self):
        self.build_parser_benchmarks(subparsers, common_parser, mid_common_parser)
        self.build_parser_check(subparsers, common_parser, mid_common_parser)
        self.build_parser_compact(subparsers, common_parser, mid_common_parser)
        self.build_parser_config(subparsers, common_parser, mid_common_parser)
        self.build_parser_create(subparsers, common_parser, mid_common_parser)
        self.build_parser_debug(subparsers, common_parser, mid_common_parser)
        self.build_parser_delete(subparsers, common_parser, mid_common_parser)

@ -356,6 +353,7 @@ def build_parser(self):
        self.build_parser_rlist(subparsers, common_parser, mid_common_parser)
        self.build_parser_recreate(subparsers, common_parser, mid_common_parser)
        self.build_parser_rename(subparsers, common_parser, mid_common_parser)
        self.build_parser_rspace(subparsers, common_parser, mid_common_parser)
        self.build_parser_serve(subparsers, common_parser, mid_common_parser)
        self.build_parser_tar(subparsers, common_parser, mid_common_parser)
        self.build_parser_transfer(subparsers, common_parser, mid_common_parser)
@ -412,22 +410,6 @@ def parse_args(self, args=None):
        elif not args.paths_from_stdin:
            # need at least 1 path but args.paths may also be populated from patterns
            parser.error("Need at least one PATH argument.")
        if not getattr(args, "lock", True):  # Option --bypass-lock sets args.lock = False
            bypass_allowed = {
                self.do_check,
                self.do_config,
                self.do_diff,
                self.do_export_tar,
                self.do_extract,
                self.do_info,
                self.do_rinfo,
                self.do_list,
                self.do_rlist,
                self.do_mount,
                self.do_umount,
            }
            if func not in bypass_allowed:
                raise Error("Not allowed to bypass locking mechanism for chosen command")
        # we can only have a complete knowledge of placeholder replacements we should do **after** arg parsing,
        # e.g. due to options like --timestamp that override the current time.
        # thus we have to initialize replace_placeholders here and process all args that need placeholder replacement.

@ -474,20 +456,6 @@ def _setup_topic_debugging(self, args):
            logger.debug("Enabling debug topic %s", topic)
            logging.getLogger(topic).setLevel("DEBUG")

    def maybe_checkpoint(self, *, checkpoint_func, checkpoint_interval):
        checkpointed = False
        sig_int_triggered = sig_int and sig_int.action_triggered()
        if sig_int_triggered or checkpoint_interval and time.monotonic() - self.last_checkpoint > checkpoint_interval:
            if sig_int_triggered:
                logger.info("checkpoint requested: starting checkpoint creation...")
            checkpoint_func()
            checkpointed = True
            self.last_checkpoint = time.monotonic()
            if sig_int_triggered:
                sig_int.action_completed()
                logger.info("checkpoint requested: finished checkpoint creation!")
        return checkpointed

    def run(self, args):
        os.umask(args.umask)  # early, before opening files
        self.lock_wait = args.lock_wait

@ -617,14 +585,13 @@ def main():  # pragma: no cover

    # Register fault handler for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL.
    faulthandler.enable()
    with signal_handler("SIGINT", raising_signal_handler(KeyboardInterrupt)), signal_handler(
        "SIGHUP", raising_signal_handler(SigHup)
    ), signal_handler("SIGTERM", raising_signal_handler(SigTerm)), signal_handler(
        "SIGUSR1", sig_info_handler
    ), signal_handler(
        "SIGUSR2", sig_trace_handler
    ), signal_handler(
        "SIGINFO", sig_info_handler
    with (
        signal_handler("SIGINT", raising_signal_handler(KeyboardInterrupt)),
        signal_handler("SIGHUP", raising_signal_handler(SigHup)),
        signal_handler("SIGTERM", raising_signal_handler(SigTerm)),
        signal_handler("SIGUSR1", sig_info_handler),
        signal_handler("SIGUSR2", sig_trace_handler),
        signal_handler("SIGINFO", sig_info_handler),
    ):
        archiver = Archiver()
        msg = msgid = tb = None
@ -1,4 +1,3 @@
import argparse
import functools
import os
import textwrap

@ -13,7 +12,9 @@
from ..helpers.nanorst import rst_to_terminal
from ..manifest import Manifest, AI_HUMAN_SORT_KEYS
from ..patterns import PatternMatcher
from ..legacyremote import LegacyRemoteRepository
from ..remote import RemoteRepository
from ..legacyrepository import LegacyRepository
from ..repository import Repository
from ..repoobj import RepoObj, RepoObj1
from ..patterns import (

@ -29,9 +30,12 @@
logger = create_logger(__name__)


def get_repository(location, *, create, exclusive, lock_wait, lock, append_only, make_parent_dirs, storage_quota, args):
def get_repository(
    location, *, create, exclusive, lock_wait, lock, append_only, make_parent_dirs, storage_quota, args, v1_or_v2
):
    if location.proto in ("ssh", "socket"):
        repository = RemoteRepository(
        RemoteRepoCls = LegacyRemoteRepository if v1_or_v2 else RemoteRepository
        repository = RemoteRepoCls(
            location,
            create=create,
            exclusive=exclusive,

@ -42,8 +46,21 @@ def get_repository(location, *, create, exclusive, lock_wait, lock, append_only,
            args=args,
        )

    else:
    elif location.proto in ("sftp", "file") and not v1_or_v2:  # stuff directly supported by borgstore
        repository = Repository(
            location,
            create=create,
            exclusive=exclusive,
            lock_wait=lock_wait,
            lock=lock,
            append_only=append_only,
            make_parent_dirs=make_parent_dirs,
            storage_quota=storage_quota,
        )

    else:
        RepoCls = LegacyRepository if v1_or_v2 else Repository
        repository = RepoCls(
            location.path,
            create=create,
            exclusive=exclusive,

@ -98,8 +115,7 @@ def with_repository(
        decorator_name="with_repository",
    )

    # To process the `--bypass-lock` option if specified, we need to
    # modify `lock` inside `wrapper`. Therefore we cannot use the
    # We may need to modify `lock` inside `wrapper`. Therefore we cannot use the
    # `nonlocal` statement to access `lock` as modifications would also
    # affect the scope outside of `wrapper`. Subsequent calls would
    # only see the overwritten value of `lock`, not the original one.

@ -129,13 +145,15 @@ def wrapper(self, args, **kwargs):
                make_parent_dirs=make_parent_dirs,
                storage_quota=storage_quota,
                args=args,
                v1_or_v2=False,
            )

            with repository:
                if repository.version not in (2,):
                if repository.version not in (3,):
                    raise Error(
                        "This borg version only accepts version 2 repos for -r/--repo. "
                        "You can use 'borg transfer' to copy archives from old to new repos."
                        f"This borg version only accepts version 3 repos for -r/--repo, "
                        f"but not version {repository.version}. "
                        f"You can use 'borg transfer' to copy archives from old to new repos."
                    )
                if manifest or cache:
                    manifest_ = Manifest.load(repository, compatibility)

@ -185,6 +203,8 @@ def wrapper(self, args, **kwargs):
            if not location.valid:  # nothing to do
                return method(self, args, **kwargs)

            v1_or_v2 = getattr(args, "v1_or_v2", False)

            repository = get_repository(
                location,
                create=False,

@ -195,11 +215,16 @@ def wrapper(self, args, **kwargs):
                make_parent_dirs=False,
                storage_quota=None,
                args=args,
                v1_or_v2=v1_or_v2,
            )

            with repository:
                if repository.version not in (1, 2):
                    raise Error("This borg version only accepts version 1 or 2 repos for --other-repo.")
                acceptable_versions = (1, 2) if v1_or_v2 else (3,)
                if repository.version not in acceptable_versions:
                    raise Error(
                        f"This borg version only accepts version {' or '.join(str(v) for v in acceptable_versions)} "
                        f"repos for --other-repo."
                    )
                kwargs["other_repository"] = repository
                if manifest or cache:
                    manifest_ = Manifest.load(

@ -500,17 +525,10 @@ def define_common_options(add_common_option):
        metavar="SECONDS",
        dest="lock_wait",
        type=int,
        default=int(os.environ.get("BORG_LOCK_WAIT", 1)),
        default=int(os.environ.get("BORG_LOCK_WAIT", 10)),
        action=Highlander,
        help="wait at most SECONDS for acquiring a repository/cache lock (default: %(default)d).",
    )
    add_common_option(
        "--bypass-lock",
        dest="lock",
        action="store_false",
        default=argparse.SUPPRESS,  # only create args attribute if option is specified
        help="Bypass locking mechanism",
    )
    add_common_option("--show-version", dest="show_version", action="store_true", help="show/log the borg version")
    add_common_option("--show-rc", dest="show_rc", action="store_true", help="show/log the return code (rc)")
    add_common_option(
@ -37,10 +37,10 @@ def do_check(self, args, repository):
        )
        if args.repair and args.max_duration:
            raise CommandError("--repair does not allow --max-duration argument.")
        if args.undelete_archives and not args.repair:
            raise CommandError("--undelete-archives requires --repair argument.")
        if args.max_duration and not args.repo_only:
            # when doing a partial repo check, we can only check crc32 checksums in segment files,
            # we can't build a fresh repo index in memory to verify the on-disk index against it.
            # thus, we should not do an archives check based on a unknown-quality on-disk repo index.
            # when doing a partial repo check, we can only check xxh64 hashes in repository files.
            # also, there is no max_duration support in the archives check code anyway.
            raise CommandError("--repository-only is required for --max-duration support.")
        if not args.archives_only:

@ -50,6 +50,7 @@ def do_check(self, args, repository):
            repository,
            verify_data=args.verify_data,
            repair=args.repair,
            undelete_archives=args.undelete_archives,
            match=args.match_archives,
            sort_by=args.sort_by or "ts",
            first=args.first,

@ -72,8 +73,8 @@ def build_parser_check(self, subparsers, common_parser, mid_common_parser):
        It consists of two major steps:

        1. Checking the consistency of the repository itself. This includes checking
           the segment magic headers, and both the metadata and data of all objects in
           the segments. The read data is checked by size and CRC. Bit rot and other
           the file magic headers, and both the metadata and data of all objects in
           the repository. The read data is checked by size and hash. Bit rot and other
           types of accidental damage can be detected this way. Running the repository
           check can be split into multiple partial checks using ``--max-duration``.
           When checking a remote repository, please note that the checks run on the

@ -108,13 +109,12 @@ def build_parser_check(self, subparsers, common_parser, mid_common_parser):

        **Warning:** Please note that partial repository checks (i.e. running it with
        ``--max-duration``) can only perform non-cryptographic checksum checks on the
        segment files. A full repository check (i.e. without ``--max-duration``) can
        also do a repository index check. Enabling partial repository checks excepts
        archive checks for the same reason. Therefore partial checks may be useful with
        very large repositories only where a full check would take too long.
        repository files. Enabling partial repository checks excepts archive checks
        for the same reason. Therefore partial checks may be useful with very large
        repositories only where a full check would take too long.

        The ``--verify-data`` option will perform a full integrity verification (as
        opposed to checking the CRC32 of the segment) of data, which means reading the
        opposed to checking just the xxh64) of data, which means reading the
        data from the repository, decrypting and decompressing it. It is a complete
        cryptographic verification and hence very time consuming, but will detect any
        accidental and malicious corruption. Tamper-resistance is only guaranteed for

@ -151,17 +151,15 @@ def build_parser_check(self, subparsers, common_parser, mid_common_parser):

        In practice, repair mode hooks into both the repository and archive checks:

        1. When checking the repository's consistency, repair mode will try to recover
           as many objects from segments with integrity errors as possible, and ensure
           that the index is consistent with the data stored in the segments.
        1. When checking the repository's consistency, repair mode removes corrupted
           objects from the repository after it did a 2nd try to read them correctly.

        2. When checking the consistency and correctness of archives, repair mode might
           remove whole archives from the manifest if their archive metadata chunk is
           corrupt or lost. On a chunk level (i.e. the contents of files), repair mode
           will replace corrupt or lost chunks with a same-size replacement chunk of
           zeroes. If a previously zeroed chunk reappears, repair mode will restore
           this lost chunk using the new chunk. Lastly, repair mode will also delete
           orphaned chunks (e.g. caused by read errors while creating the archive).
           this lost chunk using the new chunk.

        Most steps taken by repair mode have a one-time effect on the repository, like
        removing a lost archive from the repository. However, replacing a corrupt or

@ -180,6 +178,12 @@ def build_parser_check(self, subparsers, common_parser, mid_common_parser):
        chunks of a "zero-patched" file reappear, this effectively "heals" the file.
        Consequently, if lost chunks were repaired earlier, it is advised to run
        ``--repair`` a second time after creating some new backups.

        If ``--repair --undelete-archives`` is given, Borg will scan the repository
        for archive metadata and if it finds some where no corresponding archives
        directory entry exists, it will create the entries. This is basically undoing
        ``borg delete archive`` or ``borg prune ...`` commands and only possible before
        ``borg compact`` would remove the archives' data completely.
        """
        )
        subparser = subparsers.add_parser(

@ -207,6 +211,12 @@ def build_parser_check(self, subparsers, common_parser, mid_common_parser):
        subparser.add_argument(
            "--repair", dest="repair", action="store_true", help="attempt to repair any inconsistencies found"
        )
        subparser.add_argument(
            "--undelete-archives",
            dest="undelete_archives",
            action="store_true",
            help="attempt to undelete archives (use with --repair)",
        )
        subparser.add_argument(
            "--max-duration",
            metavar="SECONDS",
@ -1,47 +1,174 @@
import argparse
from typing import Tuple, Dict

from ._common import with_repository, Highlander
from ._common import with_repository
from ..archive import Archive
from ..constants import *  # NOQA
from ..helpers import set_ec, EXIT_WARNING, EXIT_ERROR, format_file_size, bin_to_hex
from ..helpers import ProgressIndicatorPercent
from ..manifest import Manifest
from ..remote import RemoteRepository
from ..repository import Repository

from ..logger import create_logger

logger = create_logger()


class ArchiveGarbageCollector:
    def __init__(self, repository, manifest):
        self.repository = repository
        assert isinstance(repository, (Repository, RemoteRepository))
        self.manifest = manifest
        self.repository_chunks = None  # what we have in the repository, id -> stored_size
        self.used_chunks = None  # what archives currently reference
        self.wanted_chunks = None  # chunks that would be nice to have for next borg check --repair
        self.total_files = None  # overall number of source files written to all archives in this repo
        self.total_size = None  # overall size of source file content data written to all archives
        self.archives_count = None  # number of archives

    @property
    def repository_size(self):
        if self.repository_chunks is None:
            return None
        return sum(self.repository_chunks.values())  # sum of stored sizes

    def garbage_collect(self):
        """Removes unused chunks from a repository."""
        logger.info("Starting compaction / garbage collection...")
        logger.info("Getting object IDs present in the repository...")
        self.repository_chunks = self.get_repository_chunks()
        logger.info("Computing object IDs used by archives...")
        (self.used_chunks, self.wanted_chunks, self.total_files, self.total_size, self.archives_count) = (
            self.analyze_archives()
        )
        self.report_and_delete()
        logger.info("Finished compaction / garbage collection...")

    def get_repository_chunks(self) -> Dict[bytes, int]:
        """Build a dict id -> size of all chunks present in the repository"""
        repository_chunks = {}
        marker = None
        while True:
            result = self.repository.list(limit=LIST_SCAN_LIMIT, marker=marker)
            if not result:
                break
            marker = result[-1][0]
            for id, stored_size in result:
                repository_chunks[id] = stored_size
        return repository_chunks
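The ``marker``-based listing loop above pages through the repository until an empty result. The same loop shape can be exercised against a stub store (a self-contained sketch; the stub stands in for the repository's paginated ``list`` API):

```python
def list_page(items, limit, marker=None):
    """Stub of a paginated list(): return up to `limit` (id, size) pairs after `marker`."""
    start = 0
    if marker is not None:
        # continue right after the last id seen on the previous page
        start = next(i for i, (id_, _) in enumerate(items) if id_ == marker) + 1
    return items[start : start + limit]


def collect_all(items, limit=2):
    """Same loop shape as get_repository_chunks(): page until an empty result."""
    chunks = {}
    marker = None
    while True:
        result = list_page(items, limit, marker)
        if not result:
            break
        marker = result[-1][0]  # resume after the last returned id
        for id_, stored_size in result:
            chunks[id_] = stored_size
    return chunks


store = [(b"a", 10), (b"b", 20), (b"c", 30)]
```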

    def analyze_archives(self) -> Tuple[Dict[bytes, int], Dict[bytes, int], int, int, int]:
        """Iterate over all items in all archives, create the dicts id -> size of all used/wanted chunks."""
        used_chunks = {}  # chunks referenced by item.chunks
        wanted_chunks = {}  # additional "wanted" chunks seen in item.chunks_healthy
        archive_infos = self.manifest.archives.list()
        num_archives = len(archive_infos)
        pi = ProgressIndicatorPercent(
            total=num_archives, msg="Computing used/wanted chunks %3.1f%%", step=0.1, msgid="compact.analyze_archives"
        )
        total_size, total_files = 0, 0
        for i, info in enumerate(archive_infos):
            pi.show(i)
            logger.info(f"Analyzing archive {info.name} ({i + 1}/{num_archives})")
            archive = Archive(self.manifest, info.name)
            # archive metadata size unknown, but usually small/irrelevant:
            used_chunks[archive.id] = 0
            for id in archive.metadata.item_ptrs:
                used_chunks[id] = 0
            for id in archive.metadata.items:
                used_chunks[id] = 0
            # archive items content data:
            for item in archive.iter_items():
                total_files += 1  # every fs object counts, not just regular files
                if "chunks" in item:
                    for id, size in item.chunks:
                        total_size += size  # original, uncompressed file content size
                        used_chunks[id] = size
                    if "chunks_healthy" in item:
                        # we also consider the chunks_healthy chunks as referenced - do not throw away
                        # anything that borg check --repair might still need.
                        for id, size in item.chunks_healthy:
                            if id not in used_chunks:
                                wanted_chunks[id] = size
        pi.finish()
        return used_chunks, wanted_chunks, total_files, total_size, num_archives
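The used/wanted split computed above can be seen on toy data (a stand-alone sketch with fake items as plain dicts; real items are borg ``Item`` objects):

```python
def classify_chunks(items):
    """Split referenced chunk ids into used (item.chunks) and wanted (chunks_healthy only)."""
    used, wanted = {}, {}
    for item in items:
        for id_, size in item.get("chunks", []):
            used[id_] = size
        for id_, size in item.get("chunks_healthy", []):
            if id_ not in used:
                wanted[id_] = size  # only needed if borg check --repair runs again
    return used, wanted


items = [
    {"chunks": [(b"c1", 100), (b"c2", 200)]},
    # a file whose second chunk was zero-patched earlier; the healthy list still names c3:
    {"chunks": [(b"c1", 100), (b"zero", 200)], "chunks_healthy": [(b"c1", 100), (b"c3", 200)]},
]
```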
    def report_and_delete(self):
        run_repair = " Run borg check --repair!"

        missing_new = set(self.used_chunks) - set(self.repository_chunks)
        if missing_new:
            logger.error(f"Repository has {len(missing_new)} new missing objects." + run_repair)
            set_ec(EXIT_ERROR)

        missing_known = set(self.wanted_chunks) - set(self.repository_chunks)
        if missing_known:
            logger.warning(f"Repository has {len(missing_known)} known missing objects.")
            set_ec(EXIT_WARNING)

        missing_found = set(self.wanted_chunks) & set(self.repository_chunks)
        if missing_found:
            logger.warning(f"{len(missing_found)} previously missing objects re-appeared!" + run_repair)
            set_ec(EXIT_WARNING)

        repo_size_before = self.repository_size
        referenced_chunks = set(self.used_chunks) | set(self.wanted_chunks)
        unused = set(self.repository_chunks) - referenced_chunks
        logger.info(f"Repository has {len(unused)} objects to delete.")
        if unused:
            logger.info(f"Deleting {len(unused)} unused objects...")
            pi = ProgressIndicatorPercent(
                total=len(unused), msg="Deleting unused objects %3.1f%%", step=0.1, msgid="compact.report_and_delete"
            )
            for i, id in enumerate(unused):
                pi.show(i)
                self.repository.delete(id)
                del self.repository_chunks[id]
            pi.finish()
        repo_size_after = self.repository_size

        count = len(self.repository_chunks)
        logger.info(f"Overall statistics, considering all {self.archives_count} archives in this repository:")
        logger.info(
            f"Source data size was {format_file_size(self.total_size, precision=0)} in {self.total_files} files."
        )
        dsize = 0
        for id in self.repository_chunks:
            if id in self.used_chunks:
                dsize += self.used_chunks[id]
            elif id in self.wanted_chunks:
                dsize += self.wanted_chunks[id]
            else:
                raise KeyError(bin_to_hex(id))
        logger.info(f"Repository size is {format_file_size(self.repository_size, precision=0)} in {count} objects.")
        if self.total_size != 0:
            logger.info(f"Space reduction factor due to deduplication: {dsize / self.total_size:.3f}")
        if dsize != 0:
            logger.info(f"Space reduction factor due to compression: {self.repository_size / dsize:.3f}")
        logger.info(f"Compaction saved {format_file_size(repo_size_before - repo_size_after, precision=0)}.")
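The two reduction factors reported in those log lines can be checked with toy numbers (a hypothetical example; sizes in bytes, chosen for round results):

```python
def reduction_factors(total_source_size, dedup_uncompressed_size, stored_size):
    """Deduplication factor = unique/source, compression factor = stored/unique.

    Values below 1.0 mean savings, mirroring the two log lines above.
    """
    dedup = dedup_uncompressed_size / total_source_size
    compression = stored_size / dedup_uncompressed_size
    return round(dedup, 3), round(compression, 3)


# 10000 bytes of source data deduplicated to 4000 bytes of unique chunks,
# stored as 2000 bytes after compression:
print(reduction_factors(10_000, 4_000, 2_000))  # → (0.4, 0.5)
```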
|
||||
|
||||
class CompactMixIn:
    @with_repository(manifest=False, exclusive=True)
    def do_compact(self, args, repository):
        """compact segment files in the repository"""
        # see the comment in do_with_lock about why we do it like this:
        data = repository.get(Manifest.MANIFEST_ID)
        repository.put(Manifest.MANIFEST_ID, data)
        threshold = args.threshold / 100
        repository.commit(compact=True, threshold=threshold)

    @with_repository(exclusive=True, compatibility=(Manifest.Operation.DELETE,))
    def do_compact(self, args, repository, manifest):
        """Collect garbage in repository"""
        ArchiveGarbageCollector(repository, manifest).garbage_collect()

    def build_parser_compact(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog

        compact_epilog = process_epilog(
            """
        This command frees repository space by compacting segments.
        Free repository space by deleting unused chunks.

        Use this regularly to avoid running out of space - you do not need to use this
        after each borg command though. It is especially useful after deleting archives,
        because only compaction will really free repository space.
        borg compact analyzes all existing archives to find out which chunks are
        actually used. There might be unused chunks resulting from borg delete or prune,
        which can be removed to free space in the repository.

        borg compact does not need a key, so it is possible to invoke it from the
        client or also from the server.

        Depending on the amount of segments that need compaction, it may take a while,
        so consider using the ``--progress`` option.

        A segment is compacted if the amount of saved space is above the percentage value
        given by the ``--threshold`` option. If omitted, a threshold of 10% is used.
        When using ``--verbose``, borg will output an estimate of the freed space.

        See :ref:`separate_compaction` in Additional Notes for more details.
        """
        Unlike borg 1.x, borg2's compact needs the borg key if the repo is
        encrypted.
        """
        )
        subparser = subparsers.add_parser(
            "compact",

@@ -50,15 +177,6 @@ def build_parser_compact(self, subparsers, common_parser, mid_common_parser):
            description=self.do_compact.__doc__,
            epilog=compact_epilog,
            formatter_class=argparse.RawDescriptionHelpFormatter,
            help="compact segment files / free space in repo",
            help="compact repository",
        )
        subparser.set_defaults(func=self.do_compact)
        subparser.add_argument(
            "--threshold",
            metavar="PERCENT",
            dest="threshold",
            type=int,
            default=10,
            action=Highlander,
            help="set minimum threshold for saved space in PERCENT (Default: 10)",
        )
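The statistics logged in the garbage-collector code above boil down to two ratios: deduplicated size over source size, and stored repository size over deduplicated size. A minimal sketch of that arithmetic (function name and sample numbers are illustrative, not part of borg):

```python
def reduction_factors(total_size, dedup_size, repo_size):
    """Compute the two space-reduction factors the stats code logs:
    deduplication (dedup_size / total_size) and compression
    (repo_size / dedup_size). Smaller values mean more space saved.
    Returns None for a factor whose denominator is zero."""
    dedup = dedup_size / total_size if total_size else None
    compression = repo_size / dedup_size if dedup_size else None
    return dedup, compression
```

With 1000 bytes of source data deduplicated to 400 bytes and stored compressed in 100 bytes, this yields factors of 0.4 and 0.25, matching the `:.3f`-formatted log lines above.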
@@ -1,177 +0,0 @@
import argparse
import configparser

from ._common import with_repository
from ..cache import Cache, assert_secure
from ..constants import *  # NOQA
from ..helpers import Error, CommandError
from ..helpers import parse_file_size, hex_to_bin
from ..manifest import Manifest

from ..logger import create_logger

logger = create_logger()


class ConfigMixIn:
    @with_repository(exclusive=True, manifest=False)
    def do_config(self, args, repository):
        """get, set, and delete values in a repository or cache config file"""

        def repo_validate(section, name, value=None, check_value=True):
            if section not in ["repository"]:
                raise ValueError("Invalid section")
            if name in ["segments_per_dir", "last_segment_checked"]:
                if check_value:
                    try:
                        int(value)
                    except ValueError:
                        raise ValueError("Invalid value") from None
            elif name in ["max_segment_size", "additional_free_space", "storage_quota"]:
                if check_value:
                    try:
                        parse_file_size(value)
                    except ValueError:
                        raise ValueError("Invalid value") from None
                    if name == "storage_quota":
                        if parse_file_size(value) < parse_file_size("10M"):
                            raise ValueError("Invalid value: storage_quota < 10M")
                    elif name == "max_segment_size":
                        if parse_file_size(value) >= MAX_SEGMENT_SIZE_LIMIT:
                            raise ValueError("Invalid value: max_segment_size >= %d" % MAX_SEGMENT_SIZE_LIMIT)
            elif name in ["append_only"]:
                if check_value and value not in ["0", "1"]:
                    raise ValueError("Invalid value")
            elif name in ["id"]:
                if check_value:
                    hex_to_bin(value, length=32)
            else:
                raise ValueError("Invalid name")

        def cache_validate(section, name, value=None, check_value=True):
            if section not in ["cache"]:
                raise ValueError("Invalid section")
            # currently, we do not support setting anything in the cache via borg config.
            raise ValueError("Invalid name")

        def list_config(config):
            default_values = {
                "version": "1",
                "segments_per_dir": str(DEFAULT_SEGMENTS_PER_DIR),
                "max_segment_size": str(MAX_SEGMENT_SIZE_LIMIT),
                "additional_free_space": "0",
                "storage_quota": repository.storage_quota,
                "append_only": repository.append_only,
            }
            print("[repository]")
            for key in [
                "version",
                "segments_per_dir",
                "max_segment_size",
                "storage_quota",
                "additional_free_space",
                "append_only",
                "id",
            ]:
                value = config.get("repository", key, fallback=False)
                if value is None:
                    value = default_values.get(key)
                    if value is None:
                        raise Error("The repository config is missing the %s key which has no default value" % key)
                print(f"{key} = {value}")
            for key in ["last_segment_checked"]:
                value = config.get("repository", key, fallback=None)
                if value is None:
                    continue
                print(f"{key} = {value}")

        if not args.list:
            if args.name is None:
                raise CommandError("No config key name was provided.")
            try:
                section, name = args.name.split(".")
            except ValueError:
                section = args.cache and "cache" or "repository"
                name = args.name

        if args.cache:
            manifest = Manifest.load(repository, (Manifest.Operation.WRITE,))
            assert_secure(repository, manifest, self.lock_wait)
            cache = Cache(repository, manifest, lock_wait=self.lock_wait)

        try:
            if args.cache:
                cache.cache_config.load()
                config = cache.cache_config._config
                save = cache.cache_config.save
                validate = cache_validate
            else:
                config = repository.config
                save = lambda: repository.save_config(repository.path, repository.config)  # noqa
                validate = repo_validate

            if args.delete:
                validate(section, name, check_value=False)
                config.remove_option(section, name)
                if len(config.options(section)) == 0:
                    config.remove_section(section)
                save()
            elif args.list:
                list_config(config)
            elif args.value:
                validate(section, name, args.value)
                if section not in config.sections():
                    config.add_section(section)
                config.set(section, name, args.value)
                save()
            else:
                try:
                    print(config.get(section, name))
                except (configparser.NoOptionError, configparser.NoSectionError) as e:
                    raise Error(e)
        finally:
            if args.cache:
                cache.close()

    def build_parser_config(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog

        config_epilog = process_epilog(
            """
        This command gets and sets options in a local repository or cache config file.
        For security reasons, this command only works on local repositories.

        To delete a config value entirely, use ``--delete``. To list the values
        of the configuration file or the default values, use ``--list``. To get an existing
        key, pass only the key name. To set a key, pass both the key name and
        the new value. Keys can be specified in the format "section.name" or
        simply "name"; the section will default to "repository" and "cache" for
        the repo and cache configs, respectively.

        By default, borg config manipulates the repository config file. Using ``--cache``
        edits the repository cache's config file instead.
        """
        )
        subparser = subparsers.add_parser(
            "config",
            parents=[common_parser],
            add_help=False,
            description=self.do_config.__doc__,
            epilog=config_epilog,
            formatter_class=argparse.RawDescriptionHelpFormatter,
            help="get and set configuration values",
        )
        subparser.set_defaults(func=self.do_config)
        subparser.add_argument(
            "-c", "--cache", dest="cache", action="store_true", help="get and set values from the repo cache"
        )

        group = subparser.add_mutually_exclusive_group()
        group.add_argument(
            "-d", "--delete", dest="delete", action="store_true", help="delete the key from the config file"
        )
        group.add_argument("-l", "--list", dest="list", action="store_true", help="list the configuration of the repo")

        subparser.add_argument("name", metavar="NAME", nargs="?", help="name of config key")
        subparser.add_argument("value", metavar="VALUE", nargs="?", help="new value for key")

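The key-name handling in `do_config` above accepts either `section.name` or a bare `name`, falling back to a default section when the split fails. The same logic in isolation (`split_config_key` is a hypothetical helper written for illustration, not borg API):

```python
def split_config_key(name, use_cache=False):
    """Mimic the "section.name" parsing in do_config: an explicit section
    wins; a bare name defaults to the "cache" or "repository" section."""
    try:
        section, key = name.split(".")
    except ValueError:
        # either no "." at all, or too many parts to unpack into two names
        section = "cache" if use_cache else "repository"
        key = name
    return section, key
```

So `split_config_key("repository.append_only")` yields `("repository", "append_only")`, while a bare `"append_only"` picks up the default section.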
@@ -41,7 +41,7 @@


class CreateMixIn:
    @with_repository(exclusive=True, compatibility=(Manifest.Operation.WRITE,))
    @with_repository(compatibility=(Manifest.Operation.WRITE,))
    def do_create(self, args, repository, manifest):
        """Create new archive"""
        key = manifest.key

@@ -196,8 +196,7 @@ def create_inner(archive, cache, fso):
            archive.stats.rx_bytes = getattr(repository, "rx_bytes", 0)
            archive.stats.tx_bytes = getattr(repository, "tx_bytes", 0)
            if sig_int:
                # do not save the archive if the user ctrl-c-ed - it is valid, but incomplete.
                # we already have a checkpoint archive in this case.
                # do not save the archive if the user ctrl-c-ed.
                raise Error("Got Ctrl-C / SIGINT.")
            else:
                archive.save(comment=args.comment, timestamp=args.timestamp)

@@ -224,8 +223,6 @@ def create_inner(archive, cache, fso):
                manifest,
                progress=args.progress,
                lock_wait=self.lock_wait,
                no_cache_sync_permitted=args.no_cache_sync,
                no_cache_sync_forced=args.no_cache_sync_forced,
                prefer_adhoc_cache=args.prefer_adhoc_cache,
                cache_mode=args.files_cache_mode,
                iec=args.iec,

@@ -254,16 +251,7 @@ def create_inner(archive, cache, fso):
                numeric_ids=args.numeric_ids,
                nobirthtime=args.nobirthtime,
            )
            cp = ChunksProcessor(
                cache=cache,
                key=key,
                add_item=archive.add_item,
                prepare_checkpoint=archive.prepare_checkpoint,
                write_checkpoint=archive.write_checkpoint,
                checkpoint_interval=args.checkpoint_interval,
                checkpoint_volume=args.checkpoint_volume,
                rechunkify=False,
            )
            cp = ChunksProcessor(cache=cache, key=key, add_item=archive.add_item, rechunkify=False)
            fso = FilesystemObjectProcessors(
                metadata_collector=metadata_collector,
                cache=cache,

@@ -587,9 +575,7 @@ def build_parser_create(self, subparsers, common_parser, mid_common_parser):
        The archive will consume almost no disk space for files or parts of files that
        have already been stored in other archives.

        The archive name needs to be unique. It must not end in '.checkpoint' or
        '.checkpoint.N' (with N being a number), because these names are used for
        checkpoints and treated in special ways.
        The archive name needs to be unique.

        In the archive name, you may use the following placeholders:
        {now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.

@@ -799,18 +785,6 @@ def build_parser_create(self, subparsers, common_parser, mid_common_parser):
            help="only display items with the given status characters (see description)",
        )
        subparser.add_argument("--json", action="store_true", help="output stats as JSON. Implies ``--stats``.")
        subparser.add_argument(
            "--no-cache-sync",
            dest="no_cache_sync",
            action="store_true",
            help="experimental: do not synchronize the chunks cache.",
        )
        subparser.add_argument(
            "--no-cache-sync-forced",
            dest="no_cache_sync_forced",
            action="store_true",
            help="experimental: do not synchronize the chunks cache (forced).",
        )
        subparser.add_argument(
            "--prefer-adhoc-cache",
            dest="prefer_adhoc_cache",

@@ -956,25 +930,6 @@ def build_parser_create(self, subparsers, common_parser, mid_common_parser):
            help="manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, "
            "(+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.",
        )
        archive_group.add_argument(
            "-c",
            "--checkpoint-interval",
            metavar="SECONDS",
            dest="checkpoint_interval",
            type=int,
            default=1800,
            action=Highlander,
            help="write checkpoint every SECONDS seconds (Default: 1800)",
        )
        archive_group.add_argument(
            "--checkpoint-volume",
            metavar="BYTES",
            dest="checkpoint_volume",
            type=int,
            default=0,
            action=Highlander,
            help="write checkpoint every BYTES bytes (Default: 0, meaning no volume based checkpointing)",
        )
        archive_group.add_argument(
            "--chunker-params",
            metavar="PARAMS",

@@ -11,11 +11,11 @@
from ..helpers import bin_to_hex, hex_to_bin, prepare_dump_dict
from ..helpers import dash_open
from ..helpers import StableDict
from ..helpers import positive_int_validator, archivename_validator
from ..helpers import archivename_validator
from ..helpers import CommandError, RTError
from ..manifest import Manifest
from ..platform import get_process_id
from ..repository import Repository, LIST_SCAN_LIMIT, TAG_PUT, TAG_DELETE, TAG_COMMIT
from ..repository import Repository, LIST_SCAN_LIMIT
from ..repoobj import RepoObj

from ._common import with_repository, Highlander

@@ -46,7 +46,7 @@ def do_debug_dump_archive(self, args, repository, manifest):
        """dump decoded archive metadata (not: data)"""
        repo_objs = manifest.repo_objs
        try:
            archive_meta_orig = manifest.archives.get_raw_dict()[args.name]
            archive_meta_orig = manifest.archives.get(args.name, raw=True)
        except KeyError:
            raise Archive.DoesNotExist(args.name)

@@ -99,7 +99,8 @@ def output(fd):
    def do_debug_dump_manifest(self, args, repository, manifest):
        """dump decoded repository manifest"""
        repo_objs = manifest.repo_objs
        _, data = repo_objs.parse(manifest.MANIFEST_ID, repository.get(manifest.MANIFEST_ID), ro_type=ROBJ_MANIFEST)
        cdata = repository.get_manifest()
        _, data = repo_objs.parse(manifest.MANIFEST_ID, cdata, ro_type=ROBJ_MANIFEST)

        meta = prepare_dump_dict(msgpack.unpackb(data, object_hook=StableDict))


@@ -108,57 +109,34 @@ def do_debug_dump_manifest(self, args, repository, manifest):

    @with_repository(manifest=False)
    def do_debug_dump_repo_objs(self, args, repository):
        """dump (decrypted, decompressed) repo objects, repo index MUST be current/correct"""
        """dump (decrypted, decompressed) repo objects"""
        from ..crypto.key import key_factory

        def decrypt_dump(i, id, cdata, tag=None, segment=None, offset=None):
        def decrypt_dump(id, cdata):
            if cdata is not None:
                _, data = repo_objs.parse(id, cdata, ro_type=ROBJ_DONTCARE)
            else:
                _, data = {}, b""
            tag_str = "" if tag is None else "_" + tag
            segment_str = "_" + str(segment) if segment is not None else ""
            offset_str = "_" + str(offset) if offset is not None else ""
            id_str = "_" + bin_to_hex(id) if id is not None else ""
            filename = "%08d%s%s%s%s.obj" % (i, segment_str, offset_str, tag_str, id_str)
            filename = f"{bin_to_hex(id)}.obj"
            print("Dumping", filename)
            with open(filename, "wb") as fd:
                fd.write(data)

        if args.ghost:
            # dump ghosty stuff from segment files: not yet committed objects, deleted / superseded objects, commit tags

            # set up the key without depending on a manifest obj
            for id, cdata, tag, segment, offset in repository.scan_low_level():
                if tag == TAG_PUT:
                    key = key_factory(repository, cdata)
                    repo_objs = RepoObj(key)
                    break
            i = 0
            for id, cdata, tag, segment, offset in repository.scan_low_level(segment=args.segment, offset=args.offset):
                if tag == TAG_PUT:
                    decrypt_dump(i, id, cdata, tag="put", segment=segment, offset=offset)
                elif tag == TAG_DELETE:
                    decrypt_dump(i, id, None, tag="del", segment=segment, offset=offset)
                elif tag == TAG_COMMIT:
                    decrypt_dump(i, None, None, tag="commit", segment=segment, offset=offset)
                i += 1
        else:
            # set up the key without depending on a manifest obj
            ids = repository.list(limit=1, marker=None)
            cdata = repository.get(ids[0])
            key = key_factory(repository, cdata)
            repo_objs = RepoObj(key)
            state = None
            i = 0
            while True:
                ids, state = repository.scan(limit=LIST_SCAN_LIMIT, state=state)  # must use on-disk order scanning here
                if not ids:
                    break
                for id in ids:
                    cdata = repository.get(id)
                    decrypt_dump(i, id, cdata)
                    i += 1
        # set up the key without depending on a manifest obj
        result = repository.list(limit=1, marker=None)
        id, _ = result[0]
        cdata = repository.get(id)
        key = key_factory(repository, cdata)
        repo_objs = RepoObj(key)
        marker = None
        while True:
            result = repository.list(limit=LIST_SCAN_LIMIT, marker=marker)
            if not result:
                break
            marker = result[-1][0]
            for id, stored_size in result:
                cdata = repository.get(id)
                decrypt_dump(id, cdata)
        print("Done.")

    @with_repository(manifest=False)

@@ -191,20 +169,22 @@ def print_finding(info, wanted, data, offset):
        from ..crypto.key import key_factory

        # set up the key without depending on a manifest obj
        ids = repository.list(limit=1, marker=None)
        cdata = repository.get(ids[0])
        result = repository.list(limit=1, marker=None)
        id, _ = result[0]
        cdata = repository.get(id)
        key = key_factory(repository, cdata)
        repo_objs = RepoObj(key)

        state = None
        marker = None
        last_data = b""
        last_id = None
        i = 0
        while True:
            ids, state = repository.scan(limit=LIST_SCAN_LIMIT, state=state)  # must use on-disk order scanning here
            if not ids:
            result = repository.list(limit=LIST_SCAN_LIMIT, marker=marker)
            if not result:
                break
            for id in ids:
            marker = result[-1][0]
            for id, stored_size in result:
                cdata = repository.get(id)
                _, data = repo_objs.parse(id, cdata, ro_type=ROBJ_DONTCARE)

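The hunks above replace the old `repository.scan(limit, state)` idiom with borgstore-style marker pagination: fetch a page of `(id, stored_size)` pairs, remember the last id as the marker for the next call, stop on an empty page. A sketch of that loop against a toy in-memory store (all names here are illustrative, not borg API):

```python
def list_page(items, limit, marker):
    """Toy stand-in for repository.list(limit=..., marker=...): return up to
    `limit` (id, size) pairs whose ids sort strictly after `marker`."""
    keys = sorted(items)
    start = 0 if marker is None else keys.index(marker) + 1
    return [(k, items[k]) for k in keys[start:start + limit]]


def iterate_all(items, limit=2):
    """Drain the store page by page, exactly like the while-loop above."""
    marker = None
    out = []
    while True:
        result = list_page(items, limit, marker)
        if not result:
            break
        marker = result[-1][0]  # last id of this page becomes the next marker
        out.extend(result)
    return out
```

Unlike the old `scan()` state, the marker is just the last object id seen, so a caller can resume iteration at any point without server-side state.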
@@ -301,7 +281,7 @@ def do_debug_format_obj(self, args, repository, manifest):
        with open(args.object_path, "wb") as f:
            f.write(data_encrypted)

    @with_repository(manifest=False, exclusive=True)
    @with_repository(manifest=False)
    def do_debug_put_obj(self, args, repository):
        """put file contents into the repository"""
        with open(args.path, "rb") as f:

@@ -314,12 +294,10 @@ def do_debug_put_obj(self, args, repository):

        repository.put(id, data)
        print("object %s put." % hex_id)
        repository.commit(compact=False)

    @with_repository(manifest=False, exclusive=True)
    def do_debug_delete_obj(self, args, repository):
        """delete the objects with the given IDs from the repo"""
        modified = False
        for hex_id in args.ids:
            try:
                id = hex_to_bin(hex_id, length=32)

@@ -328,46 +306,11 @@ def do_debug_delete_obj(self, args, repository):
            else:
                try:
                    repository.delete(id)
                    modified = True
                    print("object %s deleted." % hex_id)
                except Repository.ObjectNotFound:
                    print("object %s not found." % hex_id)
        if modified:
            repository.commit(compact=False)
        print("Done.")

    @with_repository(manifest=False, exclusive=True, cache=True, compatibility=Manifest.NO_OPERATION_CHECK)
    def do_debug_refcount_obj(self, args, repository, manifest, cache):
        """display refcounts for the objects with the given IDs"""
        for hex_id in args.ids:
            try:
                id = hex_to_bin(hex_id, length=32)
            except ValueError:
                print("object id %s is invalid." % hex_id)
            else:
                try:
                    refcount = cache.chunks[id][0]
                    print("object %s has %d referrers [info from chunks cache]." % (hex_id, refcount))
                except KeyError:
                    print("object %s not found [info from chunks cache]." % hex_id)

    @with_repository(manifest=False, exclusive=True)
    def do_debug_dump_hints(self, args, repository):
        """dump repository hints"""
        if not repository._active_txn:
            repository.prepare_txn(repository.get_transaction_id())
        try:
            hints = dict(
                segments=repository.segments,
                compact=repository.compact,
                storage_quota_use=repository.storage_quota_use,
                shadow_index={bin_to_hex(k): v for k, v in repository.shadow_index.items()},
            )
            with dash_open(args.path, "w") as fd:
                json.dump(hints, fd, indent=4)
        finally:
            repository.rollback()

    def do_debug_convert_profile(self, args):
        """convert Borg profile to Python profile"""
        import marshal

@@ -484,30 +427,6 @@ def build_parser_debug(self, subparsers, common_parser, mid_common_parser):
            help="dump repo objects (debug)",
        )
        subparser.set_defaults(func=self.do_debug_dump_repo_objs)
        subparser.add_argument(
            "--ghost",
            dest="ghost",
            action="store_true",
            help="dump all segment file contents, including deleted/uncommitted objects and commits.",
        )
        subparser.add_argument(
            "--segment",
            metavar="SEG",
            dest="segment",
            type=positive_int_validator,
            default=None,
            action=Highlander,
            help="used together with --ghost: limit processing to given segment.",
        )
        subparser.add_argument(
            "--offset",
            metavar="OFFS",
            dest="offset",
            type=positive_int_validator,
            default=None,
            action=Highlander,
            help="used together with --ghost: limit processing to given offset.",
        )

        debug_search_repo_objs_epilog = process_epilog(
            """

@@ -672,40 +591,6 @@ def build_parser_debug(self, subparsers, common_parser, mid_common_parser):
            "ids", metavar="IDs", nargs="+", type=str, help="hex object ID(s) to delete from the repo"
        )

        debug_refcount_obj_epilog = process_epilog(
            """
        This command displays the reference count for objects from the repository.
        """
        )
        subparser = debug_parsers.add_parser(
            "refcount-obj",
            parents=[common_parser],
            add_help=False,
            description=self.do_debug_refcount_obj.__doc__,
            epilog=debug_refcount_obj_epilog,
            formatter_class=argparse.RawDescriptionHelpFormatter,
            help="show refcount for object from repository (debug)",
        )
        subparser.set_defaults(func=self.do_debug_refcount_obj)
        subparser.add_argument("ids", metavar="IDs", nargs="+", type=str, help="hex object ID(s) to show refcounts for")

        debug_dump_hints_epilog = process_epilog(
            """
        This command dumps the repository hints data.
        """
        )
        subparser = debug_parsers.add_parser(
            "dump-hints",
            parents=[common_parser],
            add_help=False,
            description=self.do_debug_dump_hints.__doc__,
            epilog=debug_dump_hints_epilog,
            formatter_class=argparse.RawDescriptionHelpFormatter,
            help="dump repo hints (debug)",
        )
        subparser.set_defaults(func=self.do_debug_dump_hints)
        subparser.add_argument("path", metavar="PATH", type=str, help="file to dump data into")

        debug_convert_profile_epilog = process_epilog(
            """
        Convert a Borg profile to a Python cProfile compatible profile.

@@ -1,11 +1,9 @@
import argparse
import logging

from ._common import with_repository, Highlander
from ..archive import Archive, Statistics
from ..cache import Cache
from ._common import with_repository
from ..constants import *  # NOQA
from ..helpers import log_multi, format_archive, sig_int, CommandError, Error
from ..helpers import format_archive, CommandError
from ..manifest import Manifest

from ..logger import create_logger

@@ -14,7 +12,7 @@


class DeleteMixIn:
    @with_repository(exclusive=True, manifest=False)
    @with_repository(manifest=False)
    def do_delete(self, args, repository):
        """Delete archives"""
        self.output_list = args.output_list

@@ -29,67 +27,28 @@ def do_delete(self, args, repository):
                "or just delete the whole repository (might be much faster)."
            )

        if args.forced == 2:
            deleted = False
            logger_list = logging.getLogger("borg.output.list")
            for i, archive_name in enumerate(archive_names, 1):
                try:
                    current_archive = manifest.archives.pop(archive_name)
                except KeyError:
                    self.print_warning(f"Archive {archive_name} not found ({i}/{len(archive_names)}).")
                else:
                    deleted = True
                    if self.output_list:
                        msg = "Would delete: {} ({}/{})" if dry_run else "Deleted archive: {} ({}/{})"
                        logger_list.info(msg.format(format_archive(current_archive), i, len(archive_names)))
            if dry_run:
                logger.info("Finished dry-run.")
            elif deleted:
                manifest.write()
                # note: might crash in compact() after committing the repo
                repository.commit(compact=False)
                self.print_warning('Done. Run "borg check --repair" to clean up the mess.', wc=None)
        deleted = False
        logger_list = logging.getLogger("borg.output.list")
        for i, archive_name in enumerate(archive_names, 1):
            try:
                # this does NOT use Archive.delete, so this code hopefully even works in cases a corrupt archive
                # would make the code in class Archive crash, so the user can at least get rid of such archives.
                current_archive = manifest.archives.delete(archive_name)
            except KeyError:
                self.print_warning(f"Archive {archive_name} not found ({i}/{len(archive_names)}).")
            else:
                self.print_warning("Aborted.", wc=None)
                return

        stats = Statistics(iec=args.iec)
        with Cache(repository, manifest, progress=args.progress, lock_wait=self.lock_wait, iec=args.iec) as cache:

            def checkpoint_func():
                manifest.write()
                repository.commit(compact=False)
                cache.commit()

            msg_delete = "Would delete archive: {} ({}/{})" if dry_run else "Deleting archive: {} ({}/{})"
            msg_not_found = "Archive {} not found ({}/{})."
            logger_list = logging.getLogger("borg.output.list")
            uncommitted_deletes = 0
            for i, archive_name in enumerate(archive_names, 1):
                if sig_int and sig_int.action_done():
                    break
                try:
                    archive_info = manifest.archives[archive_name]
                except KeyError:
                    self.print_warning(msg_not_found.format(archive_name, i, len(archive_names)))
                else:
                    if self.output_list:
                        logger_list.info(msg_delete.format(format_archive(archive_info), i, len(archive_names)))

                    if not dry_run:
                        archive = Archive(manifest, archive_name, cache=cache)
                        archive.delete(stats, progress=args.progress, forced=args.forced)
                        checkpointed = self.maybe_checkpoint(
                            checkpoint_func=checkpoint_func, checkpoint_interval=args.checkpoint_interval
                        )
                        uncommitted_deletes = 0 if checkpointed else (uncommitted_deletes + 1)
        if sig_int:
            # Ctrl-C / SIGINT: do not checkpoint (commit) again, we already have a checkpoint in this case.
            raise Error("Got Ctrl-C / SIGINT.")
        elif uncommitted_deletes > 0:
            checkpoint_func()
        if args.stats:
            log_multi(str(stats), logger=logging.getLogger("borg.output.stats"))
                deleted = True
                if self.output_list:
                    msg = "Would delete: {} ({}/{})" if dry_run else "Deleted archive: {} ({}/{})"
                    logger_list.info(msg.format(format_archive(current_archive), i, len(archive_names)))
        if dry_run:
            logger.info("Finished dry-run.")
        elif deleted:
            manifest.write()
            self.print_warning('Done. Run "borg compact" to free space.', wc=None)
        else:
            self.print_warning("Aborted.", wc=None)
            return

    def build_parser_delete(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog, define_archive_filters_group

@@ -103,16 +62,9 @@ def build_parser_delete(self, subparsers, common_parser, mid_common_parser):

        When in doubt, use ``--dry-run --list`` to see what would be deleted.

        When using ``--stats``, you will get some statistics about how much data was
        deleted - the "Deleted data" deduplicated size there is most interesting as
        that is how much your repository will shrink.
        Please note that the "All archives" stats refer to the state after deletion.

        You can delete multiple archives by specifying a matching pattern,
        using the ``--match-archives PATTERN`` option (for more info on these patterns,
        see :ref:`borg_patterns`).

        Always first use ``--dry-run --list`` to see what would be deleted.
        """
        )
        subparser = subparsers.add_parser(

@@ -129,30 +81,4 @@ def build_parser_delete(self, subparsers, common_parser, mid_common_parser):
        subparser.add_argument(
            "--list", dest="output_list", action="store_true", help="output verbose list of archives"
        )
        subparser.add_argument(
            "--consider-checkpoints",
            action="store_true",
            dest="consider_checkpoints",
            help="consider checkpoint archives for deletion (default: not considered).",
        )
        subparser.add_argument(
            "-s", "--stats", dest="stats", action="store_true", help="print statistics for the deleted archive"
        )
        subparser.add_argument(
            "--force",
            dest="forced",
            action="count",
            default=0,
            help="force deletion of corrupted archives, " "use ``--force --force`` in case ``--force`` does not work.",
        )
        subparser.add_argument(
            "-c",
            "--checkpoint-interval",
            metavar="SECONDS",
            dest="checkpoint_interval",
            type=int,
            default=1800,
            action=Highlander,
            help="write checkpoint every SECONDS seconds (Default: 1800)",
        )
        define_archive_filters_group(subparser)

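The `--force` option above uses argparse's `count` action, which is what makes `args.forced == 2` in `do_delete` reachable via `--force --force`. A standalone sketch of that accumulation (the parser object here is illustrative, not borg's actual parser setup):

```python
import argparse

# action="count" turns each repetition of the flag into an increment,
# so zero, one, or two --force flags yield forced == 0, 1, or 2.
parser = argparse.ArgumentParser()
parser.add_argument("--force", dest="forced", action="count", default=0)

assert parser.parse_args([]).forced == 0
assert parser.parse_args(["--force"]).forced == 1
assert parser.parse_args(["--force", "--force"]).forced == 2
```

Note that `default=0` matters: without it, an omitted flag leaves the attribute as `None`, and comparisons like `args.forced == 2` would still work but arithmetic on it would not.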
@ -18,7 +18,6 @@ class InfoMixIn:
|
|||
def do_info(self, args, repository, manifest, cache):
|
||||
"""Show archive details such as disk space used"""
|
||||
|
||||
args.consider_checkpoints = True
|
||||
archive_names = tuple(x.name for x in manifest.archives.list_considering(args))
|
||||
|
||||
output_data = []
|
||||
|
@ -44,7 +43,6 @@ def do_info(self, args, repository, manifest, cache):
|
|||
Command line: {command_line}
|
||||
Number of files: {stats[nfiles]}
|
||||
Original size: {stats[original_size]}
|
||||
Deduplicated size: {stats[deduplicated_size]}
|
||||
"""
|
||||
)
|
||||
.strip()
|
||||
|
|
|
@@ -73,13 +73,8 @@ def do_change_location(self, args, repository, manifest, cache):
        manifest.key = key_new
        manifest.repo_objs.key = key_new
        manifest.write()
        repository.commit(compact=False)

        # we need to rewrite cache config and security key-type info,
        # so that the cached key-type will match the repo key-type.
        cache.begin_txn()  # need to start a cache transaction, otherwise commit() does nothing.
        cache.key = key_new
        cache.commit()

        loc = key_new.find_key() if hasattr(key_new, "find_key") else None
        if args.keep:

@@ -88,7 +83,7 @@ def do_change_location(self, args, repository, manifest, cache):
            key.remove(key.target)  # remove key from current location
        logger.info(f"Key moved to {loc}")

    @with_repository(lock=False, exclusive=False, manifest=False, cache=False)
    @with_repository(lock=False, manifest=False, cache=False)
    def do_key_export(self, args, repository):
        """Export the repository key for backup"""
        manager = KeyManager(repository)

@@ -107,7 +102,7 @@ def do_key_export(self, args, repository):
        except IsADirectoryError:
            raise CommandError(f"'{args.path}' must be a file, not a directory")

    @with_repository(lock=False, exclusive=False, manifest=False, cache=False)
    @with_repository(lock=False, manifest=False, cache=False)
    def do_key_import(self, args, repository):
        """Import the repository key from backup"""
        manager = KeyManager(repository)

@@ -4,8 +4,7 @@
from ._common import with_repository
from ..cache import Cache
from ..constants import *  # NOQA
from ..helpers import prepare_subprocess_env, set_ec, CommandError
from ..manifest import Manifest
from ..helpers import prepare_subprocess_env, set_ec, CommandError, ThreadRunner

from ..logger import create_logger

@@ -16,20 +15,10 @@ class LocksMixIn:
    @with_repository(manifest=False, exclusive=True)
    def do_with_lock(self, args, repository):
        """run a user specified command with the repository lock held"""
        # for a new server, this will immediately take an exclusive lock.
        # to support old servers, that do not have "exclusive" arg in open()
        # RPC API, we also do it the old way:
        # re-write manifest to start a repository transaction - this causes a
        # lock upgrade to exclusive for remote (and also for local) repositories.
        # by using manifest=False in the decorator, we avoid having to require
        # the encryption key (and can operate just with encrypted data).
        data = repository.get(Manifest.MANIFEST_ID)
        repository.put(Manifest.MANIFEST_ID, data)
        # usually, a 0 byte (open for writing) segment file would be visible in the filesystem here.
        # we write and close this file, to rather have a valid segment file on disk, before invoking the subprocess.
        # we can only do this for local repositories (with .io), though:
        if hasattr(repository, "io"):
            repository.io.close_segment()
        # the repository lock needs to get refreshed regularly, or it will be killed as stale.
        # refreshing the lock is not part of the repository API, so we do it indirectly via repository.info.
        lock_refreshing_thread = ThreadRunner(sleep_interval=60, target=repository.info)
        lock_refreshing_thread.start()
        env = prepare_subprocess_env(system=True)
        try:
            # we exit with the return code we get from the subprocess

@@ -38,13 +27,7 @@ def do_with_lock(self, args, repository):
        except (FileNotFoundError, OSError, ValueError) as e:
            raise CommandError(f"Error while trying to run '{args.command}': {e}")
        finally:
            # we need to commit the "no change" operation we did to the manifest
            # because it created a new segment file in the repository. if we would
            # roll back, the same file would be later used otherwise (for other content).
            # that would be bad if somebody uses rsync with ignore-existing (or
            # any other mechanism relying on existing segment data not changing).
            # see issue #1867.
            repository.commit(compact=False)
            lock_refreshing_thread.terminate()

    @with_repository(lock=False, manifest=False)
    def do_break_lock(self, args, repository):

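The new code above replaces the old "re-write the manifest" lock-upgrade trick with a background thread (`ThreadRunner(sleep_interval=60, target=repository.info)`) that keeps the repository lock from being killed as stale while the wrapped command runs. A minimal sketch of such a periodic runner — an illustrative assumption, not borg's actual `ThreadRunner` implementation:

```python
import threading
import time


class ThreadRunner:
    """Call `target` every `sleep_interval` seconds in a background thread
    until terminate() is requested (sketch of a lock-refresh helper)."""

    def __init__(self, sleep_interval, target):
        self.sleep_interval = sleep_interval
        self.target = target
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait() returns False on timeout (time to refresh again)
        # and True once terminate() has set the event (time to exit).
        while not self._stop.wait(self.sleep_interval):
            self.target()

    def start(self):
        self._thread.start()

    def terminate(self):
        self._stop.set()
        self._thread.join()


# demo: "refresh" a few times while the wrapped command would be running
calls = []
runner = ThreadRunner(sleep_interval=0.01, target=lambda: calls.append("refreshed"))
runner.start()
time.sleep(0.08)  # stand-in for the user-specified subprocess
runner.terminate()
```

Using an `Event` both as the sleep timer and the stop flag means `terminate()` returns promptly instead of waiting out a full refresh interval.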
@@ -158,12 +158,6 @@ def _define_borg_mount(self, parser):
        from ._common import define_exclusion_group, define_archive_filters_group

        parser.set_defaults(func=self.do_mount)
        parser.add_argument(
            "--consider-checkpoints",
            action="store_true",
            dest="consider_checkpoints",
            help="Show checkpoint archives in the repository contents list (default: hidden).",
        )
        parser.add_argument("mountpoint", metavar="MOUNTPOINT", type=str, help="where to mount filesystem")
        parser.add_argument(
            "-f", "--foreground", dest="foreground", action="store_true", help="stay in foreground, do not daemonize"

@@ -4,13 +4,12 @@
import logging
from operator import attrgetter
import os
import re

from ._common import with_repository, Highlander
from ..archive import Archive, Statistics
from ..archive import Archive
from ..cache import Cache
from ..constants import *  # NOQA
from ..helpers import ArchiveFormatter, interval, sig_int, log_multi, ProgressIndicatorPercent, CommandError, Error
from ..helpers import ArchiveFormatter, interval, sig_int, ProgressIndicatorPercent, CommandError, Error
from ..manifest import Manifest

from ..logger import create_logger

@@ -71,7 +70,7 @@ def prune_split(archives, rule, n, kept_because=None):


class PruneMixIn:
    @with_repository(exclusive=True, compatibility=(Manifest.Operation.DELETE,))
    @with_repository(compatibility=(Manifest.Operation.DELETE,))
    def do_prune(self, args, repository, manifest):
        """Prune repository archives according to specified rules"""
        if not any(

@@ -91,25 +90,7 @@ def do_prune(self, args, repository, manifest):
        format = os.environ.get("BORG_PRUNE_FORMAT", "{archive:<36} {time} [{id}]")
        formatter = ArchiveFormatter(format, repository, manifest, manifest.key, iec=args.iec)

        checkpoint_re = r"\.checkpoint(\.\d+)?"
        archives_checkpoints = manifest.archives.list(
            match=args.match_archives,
            consider_checkpoints=True,
            match_end=r"(%s)?\Z" % checkpoint_re,
            sort_by=["ts"],
            reverse=True,
        )
        is_checkpoint = re.compile(r"(%s)\Z" % checkpoint_re).search
        checkpoints = [arch for arch in archives_checkpoints if is_checkpoint(arch.name)]
        # keep the latest checkpoint, if there is no later non-checkpoint archive
        if archives_checkpoints and checkpoints and archives_checkpoints[0] is checkpoints[0]:
            keep_checkpoints = checkpoints[:1]
        else:
            keep_checkpoints = []
        checkpoints = set(checkpoints)
        # ignore all checkpoint archives to avoid keeping one (which is an incomplete backup)
        # that is newer than a successfully completed backup - and killing the successful backup.
        archives = [arch for arch in archives_checkpoints if arch not in checkpoints]
        archives = manifest.archives.list(match=args.match_archives, sort_by=["ts"], reverse=True)
        keep = []
        # collect the rule responsible for the keeping of each archive in this dict
        # keys are archive ids, values are a tuple

@@ -126,22 +107,15 @@ def do_prune(self, args, repository, manifest):
            if num is not None:
                keep += prune_split(archives, rule, num, kept_because)

        to_delete = (set(archives) | checkpoints) - (set(keep) | set(keep_checkpoints))
        stats = Statistics(iec=args.iec)
        to_delete = set(archives) - set(keep)
        with Cache(repository, manifest, lock_wait=self.lock_wait, iec=args.iec) as cache:

            def checkpoint_func():
                manifest.write()
                repository.commit(compact=False)
                cache.commit()

            list_logger = logging.getLogger("borg.output.list")
            # set up counters for the progress display
            to_delete_len = len(to_delete)
            archives_deleted = 0
            uncommitted_deletes = 0
            pi = ProgressIndicatorPercent(total=len(to_delete), msg="Pruning archives %3.0f%%", msgid="prune")
            for archive in archives_checkpoints:
            for archive in archives:
                if sig_int and sig_int.action_done():
                    break
                if archive in to_delete:

@@ -152,18 +126,12 @@ def checkpoint_func():
                    archives_deleted += 1
                    log_message = "Pruning archive (%d/%d):" % (archives_deleted, to_delete_len)
                    archive = Archive(manifest, archive.name, cache)
                    archive.delete(stats, forced=args.forced)
                    checkpointed = self.maybe_checkpoint(
                        checkpoint_func=checkpoint_func, checkpoint_interval=args.checkpoint_interval
                    )
                    uncommitted_deletes = 0 if checkpointed else (uncommitted_deletes + 1)
                    archive.delete()
                    uncommitted_deletes += 1
                else:
                    if is_checkpoint(archive.name):
                        log_message = "Keeping checkpoint archive:"
                    else:
                        log_message = "Keeping archive (rule: {rule} #{num}):".format(
                            rule=kept_because[archive.id][0], num=kept_because[archive.id][1]
                        )
                    log_message = "Keeping archive (rule: {rule} #{num}):".format(
                        rule=kept_because[archive.id][0], num=kept_because[archive.id][1]
                    )
                if (
                    args.output_list
                    or (args.list_pruned and archive in to_delete)

@@ -172,12 +140,9 @@ def checkpoint_func():
                    list_logger.info(f"{log_message:<44} {formatter.format_item(archive, jsonline=False)}")
            pi.finish()
            if sig_int:
                # Ctrl-C / SIGINT: do not checkpoint (commit) again, we already have a checkpoint in this case.
                raise Error("Got Ctrl-C / SIGINT.")
            elif uncommitted_deletes > 0:
                checkpoint_func()
            if args.stats:
                log_multi(str(stats), logger=logging.getLogger("borg.output.stats"))
            manifest.write()

    def build_parser_prune(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog

@@ -195,11 +160,6 @@ def build_parser_prune(self, subparsers, common_parser, mid_common_parser):
        `GFS <https://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son>`_
        (Grandfather-father-son) backup rotation scheme.

        Also, prune automatically removes checkpoint archives (incomplete archives left
        behind by interrupted backup runs) except if the checkpoint is the latest
        archive (and thus still needed). Checkpoint archives are not considered when
        comparing archive counts against the retention limits (``--keep-X``).

        If you use --match-archives (-a), then only archives that match the pattern are
        considered for deletion and only those archives count towards the totals
        specified by the rules.

@@ -235,11 +195,6 @@ def build_parser_prune(self, subparsers, common_parser, mid_common_parser):
        keep the last N archives under the assumption that you do not create more than one
        backup archive in the same second).

        When using ``--stats``, you will get some statistics about how much data was
        deleted - the "Deleted data" deduplicated size there is most interesting as
        that is how much your repository will shrink.
        Please note that the "All archives" stats refer to the state after pruning.

        You can influence how the ``--list`` output is formatted by using the ``--short``
        option (less wide output) or by giving a custom format using ``--format`` (see
        the ``borg rlist`` description for more details about the format string).

@@ -256,15 +211,6 @@ def build_parser_prune(self, subparsers, common_parser, mid_common_parser):
        )
        subparser.set_defaults(func=self.do_prune)
        subparser.add_argument("-n", "--dry-run", dest="dry_run", action="store_true", help="do not change repository")
        subparser.add_argument(
            "--force",
            dest="forced",
            action="store_true",
            help="force pruning of corrupted archives, use ``--force --force`` in case ``--force`` does not work.",
        )
        subparser.add_argument(
            "-s", "--stats", dest="stats", action="store_true", help="print statistics for the deleted archive"
        )
        subparser.add_argument(
            "--list", dest="output_list", action="store_true", help="output verbose list of archives it keeps/prunes"
        )

@@ -353,13 +299,3 @@ def build_parser_prune(self, subparsers, common_parser, mid_common_parser):
            help="number of yearly archives to keep",
        )
        define_archive_filters_group(subparser, sort_by=False, first_last=False)
        subparser.add_argument(
            "-c",
            "--checkpoint-interval",
            metavar="SECONDS",
            dest="checkpoint_interval",
            type=int,
            default=1800,
            action=Highlander,
            help="write checkpoint every SECONDS seconds (Default: 1800)",
        )

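The prune code above runs every archive (newest first) through `prune_split` once per `--keep-X` rule and unions the results into `keep`. The core GFS idea — keep the newest archive of each of the last N periods — can be sketched as follows; this is a simplified illustration with assumed `(name, datetime)` tuples, not borg's actual `prune_split` signature:

```python
from datetime import datetime


def prune_split(archives, period_key, n):
    """Keep the newest archive of each of the last n periods.

    archives: (name, datetime) tuples sorted newest first.
    period_key: maps a datetime to its period bucket,
                e.g. lambda ts: (ts.year, ts.month) for monthly retention.
    """
    keep = []
    seen_periods = set()
    for name, ts in archives:
        period = period_key(ts)
        if period not in seen_periods:
            # first (i.e. newest) archive seen in this period -> keep it
            seen_periods.add(period)
            keep.append(name)
            if len(keep) == n:
                break
    return keep


archives = [
    ("a4", datetime(2024, 3, 10)),
    ("a3", datetime(2024, 3, 1)),
    ("a2", datetime(2024, 2, 20)),
    ("a1", datetime(2024, 1, 5)),
]
# keep the newest archive per month, for at most 2 months
print(prune_split(archives, lambda ts: (ts.year, ts.month), 2))  # ['a4', 'a2']
```

Running one such pass per rule (daily, weekly, monthly, ...) and keeping the union is what lets a single archive satisfy several rules at once.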
@@ -5,7 +5,8 @@
from ..constants import *  # NOQA
from ..compress import CompressionSpec, ObfuscateSize, Auto, COMPRESSOR_TABLE
from ..helpers import sig_int, ProgressIndicatorPercent, Error

from ..repository import Repository
from ..remote import RemoteRepository
from ..manifest import Manifest

from ..logger import create_logger

@@ -15,27 +16,16 @@

def find_chunks(repository, repo_objs, stats, ctype, clevel, olevel):
    """find chunks that need processing (usually: recompression)."""
    # to do it this way is maybe not obvious, thus keeping the essential design criteria here:
    # - determine the chunk ids at one point in time (== do a **full** scan in one go) **before**
    #   writing to the repo (and especially before doing a compaction, which moves segment files around)
    # - get the chunk ids in **on-disk order** (so we can efficiently compact while processing the chunks)
    # - only put the ids into the list that actually need recompression (keeps it a little shorter in some cases)
    recompress_ids = []
    compr_keys = stats["compr_keys"] = set()
    compr_wanted = ctype, clevel, olevel
    state = None
    chunks_count = len(repository)
    chunks_limit = min(1000, max(100, chunks_count // 1000))
    pi = ProgressIndicatorPercent(
        total=chunks_count,
        msg="Searching for recompression candidates %3.1f%%",
        step=0.1,
        msgid="rcompress.find_chunks",
    )
    marker = None
    while True:
        chunk_ids, state = repository.scan(limit=chunks_limit, state=state)
        if not chunk_ids:
        result = repository.list(limit=LIST_SCAN_LIMIT, marker=marker)
        if not result:
            break
        marker = result[-1][0]
        chunk_ids = [id for id, _ in result]
        for id, chunk_no_data in zip(chunk_ids, repository.get_many(chunk_ids, read_data=False)):
            meta = repo_objs.parse_meta(id, chunk_no_data, ro_type=ROBJ_DONTCARE)
            compr_found = meta["ctype"], meta["clevel"], meta.get("olevel", -1)

@@ -44,8 +34,6 @@ def find_chunks(repository, repo_objs, stats, ctype, clevel, olevel):
            compr_keys.add(compr_found)
            stats[compr_found] += 1
            stats["checked_count"] += 1
            pi.show(increase=1)
    pi.finish()
    return recompress_ids

@@ -100,7 +88,7 @@ def format_compression_spec(ctype, clevel, olevel):


class RCompressMixIn:
    @with_repository(cache=False, manifest=True, exclusive=True, compatibility=(Manifest.Operation.CHECK,))
    @with_repository(cache=False, manifest=True, compatibility=(Manifest.Operation.CHECK,))
    def do_rcompress(self, args, repository, manifest):
        """Repository (re-)compression"""

@@ -114,25 +102,17 @@ def get_csettings(c):
                ctype, clevel, olevel = c.ID, c.level, -1
            return ctype, clevel, olevel

        if not isinstance(repository, (Repository, RemoteRepository)):
            raise Error("rcompress not supported for legacy repositories.")

        repo_objs = manifest.repo_objs
        ctype, clevel, olevel = get_csettings(repo_objs.compressor)  # desired compression set by --compression

        def checkpoint_func():
            while repository.async_response(wait=True) is not None:
                pass
            repository.commit(compact=True)

        stats_find = defaultdict(int)
        stats_process = defaultdict(int)
        recompress_ids = find_chunks(repository, repo_objs, stats_find, ctype, clevel, olevel)
        recompress_candidate_count = len(recompress_ids)
        chunks_limit = min(1000, max(100, recompress_candidate_count // 1000))
        uncommitted_chunks = 0

        # start a new transaction
        data = repository.get(Manifest.MANIFEST_ID)
        repository.put(Manifest.MANIFEST_ID, data)
        uncommitted_chunks += 1

        pi = ProgressIndicatorPercent(
            total=len(recompress_ids), msg="Recompressing %3.1f%%", step=0.1, msgid="rcompress.process_chunks"

@@ -143,16 +123,13 @@ def checkpoint_func():
            ids, recompress_ids = recompress_ids[:chunks_limit], recompress_ids[chunks_limit:]
            process_chunks(repository, repo_objs, stats_process, ids, olevel)
            pi.show(increase=len(ids))
            checkpointed = self.maybe_checkpoint(
                checkpoint_func=checkpoint_func, checkpoint_interval=args.checkpoint_interval
            )
            uncommitted_chunks = 0 if checkpointed else (uncommitted_chunks + len(ids))
        pi.finish()
        if sig_int:
            # Ctrl-C / SIGINT: do not checkpoint (commit) again, we already have a checkpoint in this case.
            # Ctrl-C / SIGINT: do not commit
            raise Error("Got Ctrl-C / SIGINT.")
        elif uncommitted_chunks > 0:
            checkpoint_func()
        else:
            while repository.async_response(wait=True) is not None:
                pass
        if args.stats:
            print()
            print("Recompression stats:")

@@ -185,20 +162,14 @@ def build_parser_rcompress(self, subparsers, common_parser, mid_common_parser):
        """
        Repository (re-)compression (and/or re-obfuscation).

        Reads all chunks in the repository (in on-disk order, this is important for
        compaction) and recompresses them if they are not already using the compression
        type/level and obfuscation level given via ``--compression``.
        Reads all chunks in the repository and recompresses them if they are not already
        using the compression type/level and obfuscation level given via ``--compression``.

        If the outcome of the chunk processing indicates a change in compression
        type/level or obfuscation level, the processed chunk is written to the repository.
        Please note that the outcome might not always be the desired compression
        type/level - if no compression gives a shorter output, that might be chosen.

        Every ``--checkpoint-interval``, progress is committed to the repository and
        the repository is compacted (this is to keep temporary repo space usage in bounds).
        A lower checkpoint interval means lower temporary repo space usage, but also
        slower progress due to higher overhead (and vice versa).

        Please note that this command can not work in low (or zero) free disk space
        conditions.

@@ -234,14 +205,3 @@ def build_parser_rcompress(self, subparsers, common_parser, mid_common_parser):
        )

        subparser.add_argument("-s", "--stats", dest="stats", action="store_true", help="print statistics")

        subparser.add_argument(
            "-c",
            "--checkpoint-interval",
            metavar="SECONDS",
            dest="checkpoint_interval",
            type=int,
            default=1800,
            action=Highlander,
            help="write checkpoint every SECONDS seconds (Default: 1800)",
        )

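In `find_chunks` above, a chunk becomes a recompression candidate when its stored `(ctype, clevel, olevel)` metadata differs from the settings requested via `--compression`. That candidate test can be sketched on its own; the numeric compressor IDs below are illustrative assumptions, not borg's actual ID table:

```python
def wants_recompression(meta, ctype, clevel, olevel):
    """Return True if a chunk's stored compression metadata differs from
    the desired (ctype, clevel, olevel) triple (sketch of the check in
    find_chunks above; olevel -1 means "no obfuscation")."""
    compr_found = meta["ctype"], meta["clevel"], meta.get("olevel", -1)
    return compr_found != (ctype, clevel, olevel)


# chunk stored with (hypothetical) compressor id 3 at level 3, no obfuscation
meta = {"ctype": 3, "clevel": 3}
print(wants_recompression(meta, 3, 3, -1))  # False: already as desired
print(wants_recompression(meta, 2, 6, -1))  # True: different type/level
```

Because only the metadata triple is compared, the scan in `find_chunks` can read chunk headers with `read_data=False` and avoid fetching chunk payloads for chunks that are already in the desired state.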
@@ -4,7 +4,7 @@
from ..cache import Cache
from ..constants import *  # NOQA
from ..crypto.key import key_creator, key_argument_names
from ..helpers import CancelledByUser
from ..helpers import CancelledByUser, CommandError
from ..helpers import location_validator, Location
from ..helpers import parse_storage_quota
from ..manifest import Manifest

@@ -19,6 +19,10 @@ class RCreateMixIn:
    @with_other_repository(manifest=True, compatibility=(Manifest.Operation.READ,))
    def do_rcreate(self, args, repository, *, other_repository=None, other_manifest=None):
        """Create a new, empty repository"""
        if args.storage_quota is not None:
            raise CommandError("storage-quota is not supported (yet?)")
        if args.append_only:
            raise CommandError("append-only is not supported (yet?)")
        other_key = other_manifest.key if other_manifest is not None else None
        path = args.location.canonical_path()
        logger.info('Initializing repository at "%s"' % path)

@@ -32,7 +36,6 @@ def do_rcreate(self, args, repository, *, other_repository=None, other_manifest=
        manifest = Manifest(key, repository)
        manifest.key = key
        manifest.write()
        repository.commit(compact=False)
        with Cache(repository, manifest, warn_if_unencrypted=False):
            pass
        if key.NAME != "plaintext":

@@ -49,16 +52,22 @@ def do_rcreate(self, args, repository, *, other_repository=None, other_manifest=
                "   borg key export -r REPOSITORY encrypted-key-backup\n"
                "   borg key export -r REPOSITORY --paper encrypted-key-backup.txt\n"
                "   borg key export -r REPOSITORY --qr-html encrypted-key-backup.html\n"
                "2. Write down the borg key passphrase and store it at safe place.\n"
                "2. Write down the borg key passphrase and store it at safe place."
            )
            logger.warning(
                "\n"
                "Reserve some repository storage space now for emergencies like 'disk full'\n"
                "by running:\n"
                "   borg rspace --reserve 1G"
            )

    def build_parser_rcreate(self, subparsers, common_parser, mid_common_parser):
        from ._common import process_epilog

        rcreate_epilog = process_epilog(
            """
        This command creates a new, empty repository. A repository is a filesystem
        directory containing the deduplicated data from zero or more archives.
        This command creates a new, empty repository. A repository is a ``borgstore`` store
        containing the deduplicated data from zero or more archives.

        Encryption mode TLDR
        ++++++++++++++++++++

@@ -172,6 +181,14 @@ def build_parser_rcreate(self, subparsers, common_parser, mid_common_parser):
        keys to manage.

        Creating related repositories is useful e.g. if you want to use ``borg transfer`` later.

        Creating a related repository for data migration from borg 1.2 or 1.4
        +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

        You can use ``borg rcreate --other-repo ORIG_REPO --from-borg1 ...`` to create a related
        repository that uses the same secret key material as the given other/original repository.

        Then use ``borg transfer --other-repo ORIG_REPO --from-borg1 ...`` to transfer the archives.
        """
        )
        subparser = subparsers.add_parser(

@@ -193,6 +210,9 @@ def build_parser_rcreate(self, subparsers, common_parser, mid_common_parser):
            action=Highlander,
            help="reuse the key material from the other repository",
        )
        subparser.add_argument(
            "--from-borg1", dest="v1_or_v2", action="store_true", help="other repository is borg 1.x"
        )
        subparser.add_argument(
            "-e",
            "--encryption",

@@ -29,7 +29,7 @@ def do_rdelete(self, args, repository):
        msg = []
        try:
            manifest = Manifest.load(repository, Manifest.NO_OPERATION_CHECK)
            n_archives = len(manifest.archives)
            n_archives = manifest.archives.count()
            msg.append(
                f"You requested to DELETE the following repository completely "
                f"*including* {n_archives} archives it contains:"