Docs grammar fixes.

One cannot "to not x", but one can "not to x".
Avoiding split infinitives gives the added bonus that machine
translation yields better results.

setup (n/adj) vs. set up (v): we don't say "I setup it", we say "I set it up".

Likewise for login (n/adj) vs. log in (v), and backup (n/adj) vs. back up (v).
Paul D 2022-12-29 00:01:48 +00:00
parent e49b60ae59
commit a85b643866
33 changed files with 106 additions and 106 deletions

View File

@ -22,11 +22,11 @@ What is BorgBackup?
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
See the `installation manual`_ or, if you have already
downloaded Borg, ``docs/installation.rst`` to get started with Borg.

View File

@ -164,7 +164,7 @@ New features:
- borg create --exclude-if-present TAGFILE - exclude directories that have the
given file from the backup. You can additionally give --keep-tag-files to
preserve just the directory roots and the tag-files (but not backup other
preserve just the directory roots and the tag-files (but not back up other
directory contents), #395, attic #128, attic #142
Other changes:
@ -419,7 +419,7 @@ Compatibility notes:
Deprecations:
- --compression N (with N being a number, as in 0.24) is deprecated.
We keep the --compression 0..9 for now to not break scripts, but it is
We keep the --compression 0..9 for now not to break scripts, but it is
deprecated and will be removed later, so better fix your scripts now:
--compression 0 (as in 0.24) is the same as --compression zlib,0 (now).
BUT: if you do not want compression, you rather want --compression none
@ -434,7 +434,7 @@ New features:
- create --compression lz4 (super-fast, but not very high compression)
- create --compression zlib,N (slower, higher compression, default for N is 6)
- create --compression lzma,N (slowest, highest compression, default N is 6)
- honor the nodump flag (UF_NODUMP) and do not backup such items
- honor the nodump flag (UF_NODUMP) and do not back up such items
- list --short just outputs a simple list of the files/directories in an archive
Bug fixes:
@ -541,7 +541,7 @@ Other changes:
- update internals doc about chunker params, memory usage and compression
- added docs about development
- add some words about resource usage in general
- document how to backup a raw disk
- document how to back up a raw disk
- add note about how to run borg from virtual env
- add solutions for (ll)fuse installation problems
- document what borg check does, fixes #138
@ -617,7 +617,7 @@ New features:
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise to not spoil the OS cache, fixes attic #252
- use posix_fadvise not to spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote

View File

@ -864,7 +864,7 @@ New features:
- ability to use a system-provided version of "xxhash"
- create:
- changed the default behaviour to not store the atime of fs items. atime is
- changed the default behaviour not to store the atime of fs items. atime is
often rather not interesting and fragile - it easily changes even if nothing
else has changed and, if stored into the archive, spoils deduplication of
the archive metadata stream.
@ -1781,7 +1781,7 @@ Fixes:
- security fix: configure FUSE with "default_permissions", #3903
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- make "hostname" short, even on misconfigured systems, #4262
- fix free space calculation on macOS (and others?), #4289
- config: quit with error message when no key is provided, #4223
@ -2235,10 +2235,10 @@ Compatibility notes:
- The deprecated --no-files-cache is not a global/common option any more,
but only available for borg create (it is not needed for anything else).
Use --files-cache=disabled instead of --no-files-cache.
- The nodump flag ("do not backup this file") is not honoured any more by
- The nodump flag ("do not back up this file") is not honoured any more by
default because this functionality (esp. if it happened by error or
unexpected) was rather confusing and unexplainable at first to users.
If you want that "do not backup NODUMP-flagged files" behaviour, use:
If you want that "do not back up NODUMP-flagged files" behaviour, use:
borg create --exclude-nodump ...
- If you are on Linux and do not need bsdflags archived, consider using
``--nobsdflags`` with ``borg create`` to avoid additional syscalls and
@ -3175,7 +3175,7 @@ Bug fixes:
- security fix: configure FUSE with "default_permissions", #3903.
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- xattrs: fix borg exception handling on ENOSPC error, #3808.
New features:
@ -3478,7 +3478,7 @@ Other changes:
- docs:
- language clarification - VM backup FAQ
- borg create: document how to backup stdin, #2013
- borg create: document how to back up stdin, #2013
- borg upgrade: fix incorrect title levels
- add CVE numbers for issues fixed in 1.0.9, #2106
- fix typos (taken from Debian package patch)
@ -3674,7 +3674,7 @@ Bug fixes:
New features:
- add "borg key export" / "borg key import" commands, #1555, so users are able
to backup / restore their encryption keys more easily.
to back up / restore their encryption keys more easily.
Supported formats are the keyfile format used by borg internally and a
special "paper" format with by line checksums for printed backups. For the
@ -4161,7 +4161,7 @@ Bug fixes:
- do not sleep for >60s while waiting for lock, #773
- unpack file stats before passing to FUSE
- fix build on illumos
- don't try to backup doors or event ports (Solaris and derivatives)
- don't try to back up doors or event ports (Solaris and derivatives)
- remove useless/misleading libc version display, #738
- test suite: reset exit code of persistent archiver, #844
- RemoteRepository: clean up pipe if remote open() fails

View File

@ -4,7 +4,7 @@
Central repository server with Ansible or Salt
==============================================
This section will give an example how to setup a borg repository server for multiple
This section will give an example how to set up a borg repository server for multiple
clients.
Machines
@ -103,7 +103,7 @@ The server should automatically change the current working directory to the `<cl
borg init backup@backup01.srv.local:/home/backup/repos/johndoe.clnt.local/pictures
When `johndoe.clnt.local` tries to access a not restricted path the following error is raised.
John Doe tries to backup into the Web 01 path:
John Doe tries to back up into the Web 01 path:
::
@ -202,7 +202,7 @@ Salt running on a Debian system.
Enhancements
------------
As this section only describes a simple and effective setup it could be further
As this section only describes a simple and effective setup, it could be further
enhanced when supporting (a limited set) of client supplied commands. A wrapper
for starting `borg serve` could be written. Or borg itself could be enhanced to
autodetect it runs under SSH by checking the `SSH_ORIGINAL_COMMAND` environment

View File

@ -24,7 +24,7 @@ is assigned a home directory and repositories of the user reside in her
home directory.
The following ``~user/.ssh/authorized_keys`` file is the most important
piece for a correct deployment. It allows the user to login via
piece for a correct deployment. It allows the user to log in via
their public key (which must be provided by the user), and restricts
SSH access to safe operations only.

View File

@ -33,7 +33,7 @@ deduplicating. For backup, save the disk header and the contents of each partiti
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
ntfsclone -so - $x | borg create repo::hostname-part$PARTNUM -
done
# to backup non-NTFS partitions as well:
# to back up non-NTFS partitions as well:
echo "$PARTITIONS" | grep -v NTFS | cut -d' ' -f1 | while read x; do
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
borg create --read-special repo::hostname-part$PARTNUM $x

View File

@ -13,7 +13,7 @@ If you however require the backup server to initiate the connection or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull mode setup.
A common use case for pull mode is to backup a remote server to a local personal
A common use case for pull mode is to back up a remote server to a local personal
computer.
SSHFS

View File

@ -24,7 +24,7 @@ SSHFS, the Borg client only can do file system operations and has no agent
running on the remote side, so *every* operation needs to go over the network,
which is slower.
Can I backup from multiple servers into a single repository?
Can I back up from multiple servers into a single repository?
------------------------------------------------------------
In order for the deduplication used by Borg to work, it
@ -115,8 +115,8 @@ Which file types, attributes, etc. are *not* preserved?
Are there other known limitations?
----------------------------------
- borg extract only supports restoring into an empty destination. After that,
the destination will exactly have the contents of the extracted archive.
- borg extract supports restoring only into an empty destination. After extraction,
the destination will have exactly the contents of the extracted archive.
If you extract into a non-empty destination, borg will (for example) not
remove files which are in the destination, but not in the archive.
See :issue:`4598` for a workaround and more details.
@ -128,12 +128,12 @@ If a backup stops mid-way, does the already-backed-up data stay there?
Yes, Borg supports resuming backups.
During a backup a special checkpoint archive named ``<archive-name>.checkpoint``
is saved every checkpoint interval (the default value for this is 30
During a backup, a special checkpoint archive named ``<archive-name>.checkpoint``
is saved at every checkpoint interval (the default value for this is 30
minutes) containing all the data backed-up until that point.
This checkpoint archive is a valid archive,
but it is only a partial backup (not all files that you wanted to backup are
but it is only a partial backup (not all files that you wanted to back up are
contained in it). Having it in the repo until a successful, full backup is
completed is useful because it references all the transmitted chunks up
to the checkpoint. This means that in case of an interruption, you only need to
@ -163,7 +163,7 @@ really desperate (e.g. if you have no completed backup of that file and you'ld
rather get a partial file extracted than nothing). You do **not** want to give
that option under any normal circumstances.
How can I backup huge file(s) over a unstable connection?
How can I back up huge file(s) over an unstable connection?
---------------------------------------------------------
Yes. For more details, see :ref:`checkpoints_parts`.
@ -334,7 +334,7 @@ Assuming that all your chunks have a size of :math:`2^{21}` bytes (approximately
and we have a "perfect" hash algorithm, we can think that the probability of collision
would be of :math:`p^2/2^{n+1}` then, using SHA-256 (:math:`n=256`) and for example
we have 1000 million chunks (:math:`p=10^9`) (1000 million chunks would be about 2100TB).
The probability would be around to 0.0000000000000000000000000000000000000000000000000000000000043.
The probability would be around 0.0000000000000000000000000000000000000000000000000000000000043.
A mass-murderer space rock happens about once every 30 million years on average.
This leads to a probability of such an event occurring in the next second to about :math:`10^{-15}`.
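
As a rough sanity check of that figure, here is a minimal Python sketch using the numbers
quoted above (:math:`p = 10^9` chunks, :math:`n = 256` bits for SHA-256)::

    p = 10 ** 9          # number of chunks (1000 million)
    n = 256              # SHA-256 digest size in bits

    # birthday-bound approximation from the text: p^2 / 2^(n+1)
    probability = p ** 2 / 2 ** (n + 1)
    print(f"{probability:.2e}")   # ~4.32e-60, matching the long decimal above
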
@ -342,9 +342,9 @@ That's **45** orders of magnitude more probable than the SHA-256 collision. Brie
if you find SHA-256 collisions scary then your priorities are wrong. This example was grabbed from
`this SO answer <https://stackoverflow.com/a/4014407/13359375>`_, it's great honestly.
Still, the real question is if Borg tries to not make this happen?
Still, the real question is if Borg tries not to make this happen?
Well... it used to not check anything but there was a feature added which saves the size
Well... previously it did not check anything until there was a feature added which saves the size
of the chunks too, so the size of the chunks is compared to the size that you got with the
hash and if the check says there is a mismatch it will raise an exception instead of corrupting
the file. This doesn't save us from everything but reduces the chances of corruption.
@ -364,7 +364,7 @@ How do I configure different prune policies for different directories?
----------------------------------------------------------------------
Say you want to prune ``/var/log`` faster than the rest of
``/``. How do we implement that? The answer is to backup to different
``/``. How do we implement that? The answer is to back up to different
archive *names* and then implement different prune policies for
different prefixes. For example, you could have a script that does::
@ -489,7 +489,7 @@ Using keyfile-based encryption with a blank passphrase
Using ``BORG_PASSCOMMAND`` with macOS Keychain
macOS has a native manager for secrets (such as passphrases) which is safer
than just using a file as it is encrypted at rest and unlocked manually
(fortunately, the login keyring automatically unlocks when you login). With
(fortunately, the login keyring automatically unlocks when you log in). With
the built-in ``security`` command, you can access it from the command line,
making it useful for ``BORG_PASSCOMMAND``.
@ -567,7 +567,7 @@ otherwise make unavailable) all your backups.
How can I protect against a hacked backup client?
-------------------------------------------------
Assume you backup your backup client machine C to the backup server S and
Assume you back up your backup client machine C to the backup server S and
C gets hacked. In a simple push setup, the attacker could then use borg on
C to delete all backups residing on S.
@ -738,13 +738,13 @@ This has some pros and cons, though:
The long term plan to improve this is called "borgception", see :issue:`474`.
Can I backup my root partition (/) with Borg?
Can I back up my root partition (/) with Borg?
---------------------------------------------
Backing up your entire root partition works just fine, but remember to
exclude directories that make no sense to backup, such as /dev, /proc,
exclude directories that make no sense to back up, such as /dev, /proc,
/sys, /tmp and /run, and to use ``--one-file-system`` if you only want to
backup the root partition (and not any mounted devices e.g.).
back up the root partition (and not any mounted devices e.g.).
If it crashes with a UnicodeError, what can I do?
-------------------------------------------------
@ -955,7 +955,7 @@ Another possible reason is that files don't always have the same path, for
example if you mount a filesystem without stable mount points for each backup
or if you are running the backup from a filesystem snapshot whose name is not
stable. If the directory where you mount a filesystem is different every time,
Borg assumes they are different files. This is true even if you backup these
Borg assumes they are different files. This is true even if you back up these
files with relative pathnames - borg uses full pathnames in files cache regardless.
It is possible for some filesystems, such as ``mergerfs`` or network filesystems,
@ -1006,7 +1006,7 @@ How can I avoid unwanted base directories getting stored into archives?
Possible use cases:
- Another file system is mounted and you want to backup it with original paths.
- Another file system is mounted and you want to back it up with original paths.
- You have created a BTRFS snapshot in a ``/.snapshots`` directory for backup.
To achieve this, run ``borg create`` within the mountpoint/snapshot directory:

View File

@ -161,7 +161,7 @@ This new, more complex repo v2 object format was implemented to be able to effic
query the metadata without having to read, transfer and decrypt the (usually much bigger)
data part.
The metadata is encrypted to not disclose potentially sensitive information that could be
The metadata is encrypted not to disclose potentially sensitive information that could be
used for e.g. fingerprinting attacks.
The compression `ctype` and `clevel` is explained in :ref:`data-compression`.
@ -688,7 +688,7 @@ To determine whether a file has not changed, cached values are looked up via
the key in the mapping and compared to the current file attribute values.
If the file's size, timestamp and inode number is still the same, it is
considered to not have changed. In that case, we check that all file content
considered not to have changed. In that case, we check that all file content
chunks are (still) present in the repository (we check that via the chunks
cache).
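
For illustration only, a minimal Python sketch of that lookup rule (this is not borg's
actual files cache code; the names ``FileCacheEntry``, ``files_cache`` and ``chunk_index``
are made up here)::

    import os
    from collections import namedtuple

    # hypothetical cache entry holding the values the text says are compared/reused
    FileCacheEntry = namedtuple("FileCacheEntry", "size mtime_ns inode chunk_ids")

    def is_unchanged(path, files_cache, chunk_index):
        """True if `path` counts as unchanged per the rule described above."""
        entry = files_cache.get(path)
        if entry is None:
            return False                 # never seen before
        st = os.stat(path, follow_symlinks=False)
        if (st.st_size, st.st_mtime_ns, st.st_ino) != (entry.size, entry.mtime_ns, entry.inode):
            return False                 # size, timestamp or inode changed
        # also require that all content chunks are still present in the repository
        return all(chunk_id in chunk_index for chunk_id in entry.chunk_ids)
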
@ -818,7 +818,7 @@ bucket is reached.
This particular mode of operation is open addressing with linear probing.
When the hash table is filled to 75%, its size is grown. When it's
emptied to 25%, its size is shrinked. Operations on it have a variable
emptied to 25%, its size is shrunk. Operations on it have a variable
complexity between constant and linear with low factor, and memory overhead
varies between 33% and 300%.
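
A toy Python illustration of open addressing with linear probing (the real hash index in
borg is C code; only the 75% grow threshold quoted above is mirrored here, and deletion,
tombstones and shrinking at 25% are left out for brevity)::

    class LinearProbingTable:
        """Toy open-addressing hash table with linear probing (illustrative only)."""

        def __init__(self, capacity=8):
            self.slots = [None] * capacity     # each slot: None or (key, value)
            self.used = 0

        def _probe(self, key):
            i = hash(key) % len(self.slots)
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % len(self.slots)  # linear probing: try the next bucket
            return i

        def __setitem__(self, key, value):
            i = self._probe(key)
            if self.slots[i] is None:
                self.used += 1
            self.slots[i] = (key, value)
            if self.used > 0.75 * len(self.slots):   # grow when filled to 75%
                self._resize(2 * len(self.slots))

        def __getitem__(self, key):
            slot = self.slots[self._probe(key)]
            if slot is None:
                raise KeyError(key)
            return slot[1]

        def _resize(self, new_capacity):
            entries = [s for s in self.slots if s is not None]
            self.slots = [None] * new_capacity
            self.used = 0
            for key, value in entries:
                self[key] = value                # re-insert into the bigger table
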
@ -1013,7 +1013,7 @@ while doing no compression at all (none) is a operation that takes no time, it
likely will need to store more data to the storage compared to using lz4.
The time needed to transfer and store the additional data might be much more
than if you had used lz4 (which is super fast, but still might compress your
data about 2:1). This is assuming your data is compressible (if you backup
data about 2:1). This is assuming your data is compressible (if you back up
already compressed data, trying to compress them at backup time is usually
pointless).

View File

@ -60,7 +60,7 @@ In other words, the object ID itself only authenticates the plaintext of the
object and not its context or meaning. The latter is established by a different
object referring to an object ID, thereby assigning a particular meaning to
an object. For example, an archive item contains a list of object IDs that
represent packed file metadata. On their own it's not clear that these objects
represent packed file metadata. On their own, it's not clear that these objects
would represent what they do, but by the archive item referring to them
in a particular part of its own data structure assigns this meaning.

View File

@ -169,7 +169,7 @@ set mode to M in archive for stdin data (default: 0660)
interpret PATH as command and store its stdout. See also section Reading from stdin below.
.TP
.B \-\-paths\-from\-stdin
read DELIM\-separated list of paths to backup from stdin. Will not recurse into directories.
read DELIM\-separated list of paths to back up from stdin. Will not recurse into directories.
.TP
.B \-\-paths\-from\-command
interpret PATH as command and treat its output as \fB\-\-paths\-from\-stdin\fP
@ -264,30 +264,30 @@ select compression algorithm, see the output of the \(dqborg help compression\(d
.sp
.nf
.ft C
# Backup ~/Documents into an archive named \(dqmy\-documents\(dq
# Back up ~/Documents into an archive named \(dqmy\-documents\(dq
$ borg create my\-documents ~/Documents
# same, but list all files as we process them
$ borg create \-\-list my\-documents ~/Documents
# Backup ~/Documents and ~/src but exclude pyc files
# Back up ~/Documents and ~/src but exclude pyc files
$ borg create my\-files \e
~/Documents \e
~/src \e
\-\-exclude \(aq*.pyc\(aq
# Backup home directories excluding image thumbnails (i.e. only
# Back up home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create my\-files /home \-\-exclude \(aqsh:home/*/.thumbnails\(aq
# Backup the root filesystem into an archive named \(dqroot\-YYYY\-MM\-DD\(dq
# Back up the root filesystem into an archive named \(dqroot\-YYYY\-MM\-DD\(dq
# use zlib compression (good, but slow) \- default is lz4 (fast, low compression ratio)
$ borg create \-C zlib,6 \-\-one\-file\-system root\-{now:%Y\-%m\-%d} /
# Backup into an archive name like FQDN\-root\-TIMESTAMP
# Back up into an archive name like FQDN\-root\-TIMESTAMP
$ borg create \(aq{fqdn}\-root\-{now}\(aq /
# Backup a remote host locally (\(dqpull\(dq style) using sshfs
# Back up a remote host locally (\(dqpull\(dq style) using sshfs
$ mkdir sshfs\-mount
$ sshfs root@example.com:/ sshfs\-mount
$ cd sshfs\-mount
@ -300,10 +300,10 @@ $ fusermount \-u sshfs\-mount
# docs \- same parameters as borg < 1.0):
$ borg create \-\-chunker\-params buzhash,10,23,16,4095 small /smallstuff
# Backup a raw device (must not be active/in use/mounted at that time)
# Back up a raw device (must not be active/in use/mounted at that time)
$ borg create \-\-read\-special \-\-chunker\-params fixed,4194304 my\-sdx /dev/sdX
# Backup a sparse disk image (must not be active/in use/mounted at that time)
# Back up a sparse disk image (must not be active/in use/mounted at that time)
$ borg create \-\-sparse \-\-chunker\-params fixed,4194304 my\-disk my\-disk.raw
# No compression (none)
@ -334,9 +334,9 @@ $ cd /home/user/Documents
$ borg create \(aqdaily\-projectA\-{now:%Y\-%m\-%d}\(aq projectA
# Use external command to determine files to archive
# Use \-\-paths\-from\-stdin with find to only backup files less than 1MB in size
# Use \-\-paths\-from\-stdin with find to back up only files less than 1MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin small\-files\-only
# Use \-\-paths\-from\-command with find to only backup files from a given user
# Use \-\-paths\-from\-command with find to back up files only from a given user
$ borg create \-\-paths\-from\-command joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with \-\-paths\-delimiter (for example, for filenames with newlines in them)
$ find ~ \-size \-1000k \-print0 | borg create \e

View File

@ -36,7 +36,7 @@ borg [common options] key export [options] [PATH]
.SH DESCRIPTION
.sp
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View File

@ -205,7 +205,7 @@ _
.\" nanorst: inline-replace
.
.sp
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised to NOT use this mode
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
.sp

View File

@ -40,11 +40,11 @@ borg [common options] <command> [options] [arguments]
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
.sp
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
.sp
Borg stores a set of files in an \fIarchive\fP\&. A \fIrepository\fP is a collection
of \fIarchives\fP\&. The format of repositories is Borg\-specific. Borg does not
@ -737,7 +737,7 @@ If your repository is remote, all deduplicated (and optionally compressed/
encrypted) data of course has to go over the connection (\fBssh://\fP repo url).
If you use a locally mounted network filesystem, additionally some copy
operations used for transaction support also go over the connection. If
you backup multiple sources to one target repository, additional traffic
you back up multiple sources to one target repository, additional traffic
happens for cache resynchronization.
.UNINDENT
.SS Support for file metadata

View File

@ -13,11 +13,11 @@ DESCRIPTION
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
Borg stores a set of files in an *archive*. A *repository* is a collection
of *archives*. The format of repositories is Borg-specific. Borg does not

View File

@ -74,7 +74,7 @@ Important note about permissions
To avoid permissions issues (in your borg repository or borg cache), **always
access the repository using the same user account**.
If you want to backup files of other users or the operating system, running
If you want to back up files of other users or the operating system, running
borg as root likely will be required (otherwise you'ld get `Permission denied`
errors).
If you only back up your own files, you neither need nor want to run borg as
@ -123,7 +123,7 @@ works well enough without further care for consistency. Log files and
caches might not be in a perfect state, but this is rarely a problem.
For databases, virtual machines, and containers, there are specific
techniques for backing them up that do not simply use Borg to backup
techniques for backing them up that do not simply use Borg to back up
the underlying filesystem. For databases, check your database
documentation for techniques that will save the database state between
transactions. For virtual machines, consider running the backup on
@ -171,7 +171,7 @@ backed up and that the ``prune`` command is keeping and deleting the correct bac
info "Starting backup"
# Backup the most important directories into an archive named after
# Back up the most important directories into an archive named after
# the machine this script is currently running on:
borg create \

View File

@ -2,7 +2,7 @@
$ borg -r /path/to/repo rcreate --encryption=repokey-aes-ocb
2. Backup the ``~/src`` and ``~/Documents`` directories into an archive called
2. Back up the ``~/src`` and ``~/Documents`` directories into an archive called
*Monday*::
$ borg -r /path/to/repo create Monday ~/src ~/Documents

View File

@ -4,30 +4,30 @@ Examples
~~~~~~~~
::
# Backup ~/Documents into an archive named "my-documents"
# Back up ~/Documents into an archive named "my-documents"
$ borg create my-documents ~/Documents
# same, but list all files as we process them
$ borg create --list my-documents ~/Documents
# Backup ~/Documents and ~/src but exclude pyc files
# Back up ~/Documents and ~/src but exclude pyc files
$ borg create my-files \
~/Documents \
~/src \
--exclude '*.pyc'
# Backup home directories excluding image thumbnails (i.e. only
# Back up home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create my-files /home --exclude 'sh:home/*/.thumbnails'
# Backup the root filesystem into an archive named "root-YYYY-MM-DD"
# Back up the root filesystem into an archive named "root-YYYY-MM-DD"
# use zlib compression (good, but slow) - default is lz4 (fast, low compression ratio)
$ borg create -C zlib,6 --one-file-system root-{now:%Y-%m-%d} /
# Backup into an archive name like FQDN-root-TIMESTAMP
# Back up into an archive name like FQDN-root-TIMESTAMP
$ borg create '{fqdn}-root-{now}' /
# Backup a remote host locally ("pull" style) using sshfs
# Back up a remote host locally ("pull" style) using sshfs
$ mkdir sshfs-mount
$ sshfs root@example.com:/ sshfs-mount
$ cd sshfs-mount
@ -40,10 +40,10 @@ Examples
# docs - same parameters as borg < 1.0):
$ borg create --chunker-params buzhash,10,23,16,4095 small /smallstuff
# Backup a raw device (must not be active/in use/mounted at that time)
# Back up a raw device (must not be active/in use/mounted at that time)
$ borg create --read-special --chunker-params fixed,4194304 my-sdx /dev/sdX
# Backup a sparse disk image (must not be active/in use/mounted at that time)
# Back up a sparse disk image (must not be active/in use/mounted at that time)
$ borg create --sparse --chunker-params fixed,4194304 my-disk my-disk.raw
# No compression (none)
@ -74,9 +74,9 @@ Examples
$ borg create 'daily-projectA-{now:%Y-%m-%d}' projectA
# Use external command to determine files to archive
# Use --paths-from-stdin with find to only backup files less than 1MB in size
# Use --paths-from-stdin with find to back up only files less than 1MB in size
$ find ~ -size -1000k | borg create --paths-from-stdin small-files-only
# Use --paths-from-command with find to only backup files from a given user
# Use --paths-from-command with find to back up files only from a given user
$ borg create --paths-from-command joes-files -- find /srv/samba/shared -user joe
# Use --paths-from-stdin with --paths-delimiter (for example, for filenames with newlines in them)
$ find ~ -size -1000k -print0 | borg create \

View File

@ -43,7 +43,7 @@ borg create
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--content-from-command`` | interpret PATH as command and store its stdout. See also section Reading from stdin below. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--paths-from-stdin`` | read DELIM-separated list of paths to backup from stdin. Will not recurse into directories. |
| | ``--paths-from-stdin`` | read DELIM-separated list of paths to back up from stdin. Will not recurse into directories. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--paths-from-command`` | interpret PATH as command and treat its output as ``--paths-from-stdin`` |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@ -136,7 +136,7 @@ borg create
--stdin-group GROUP set group GROUP in archive for stdin data (default: 'wheel')
--stdin-mode M set mode to M in archive for stdin data (default: 0660)
--content-from-command interpret PATH as command and store its stdout. See also section Reading from stdin below.
--paths-from-stdin read DELIM-separated list of paths to backup from stdin. Will not recurse into directories.
--paths-from-stdin read DELIM-separated list of paths to back up from stdin. Will not recurse into directories.
--paths-from-command interpret PATH as command and treat its output as ``--paths-from-stdin``
--paths-delimiter DELIM set path delimiter for ``--paths-from-stdin`` and ``--paths-from-command`` (default: \n)

View File

@ -91,5 +91,5 @@ Network (only for client/server operation):
encrypted) data of course has to go over the connection (``ssh://`` repo url).
If you use a locally mounted network filesystem, additionally some copy
operations used for transaction support also go over the connection. If
you backup multiple sources to one target repository, additional traffic
you back up multiple sources to one target repository, additional traffic
happens for cache resynchronization.

View File

@ -195,14 +195,14 @@ are added. Exclusion patterns from ``--exclude-from`` files are appended last.
Examples::
# backup pics, but not the ones from 2018, except the good ones:
# back up pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create --pattern=+pics/2018/good --pattern=-pics/2018 archive pics
# backup only JPG/JPEG files (case insensitive) in all home directories:
# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create --pattern '+ re:\.jpe?g(?i)$' archive /home
# backup homes, but exclude big downloads (like .ISO files) or hidden files:
# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create --exclude 're:\.iso(?i)$' --exclude 'sh:home/**/.*' archive /home
# use a file with patterns (recursion root '/' via command line):
@ -217,7 +217,7 @@ The patterns.lst file could look like that::
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don't backup the other home directories
# don't back up the other home directories
- home/*
# don't even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)

View File

@ -54,7 +54,7 @@ Description
~~~~~~~~~~~
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View File

@ -30,7 +30,7 @@ for block devices (like disks, partitions, LVM LVs) or raw disk image files.
``--chunker-params=fixed,4096,512`` results in fixed 4kiB sized blocks,
but the first header block will only be 512B long. This might be useful to
dedup files with 1 header + N fixed size data blocks. Be careful to not
dedup files with 1 header + N fixed size data blocks. Be careful not to
produce a too big amount of chunks (like using small block size for huge
files).
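
As a small illustration of how ``fixed,4096,512`` would cut a file into chunks (a sketch
of the behaviour described above, not borg's chunker implementation)::

    def fixed_chunk_sizes(file_size, block_size=4096, header_size=512):
        """Yield chunk sizes for a fixed-size chunker with an optional header block."""
        remaining = file_size
        if header_size and remaining:
            first = min(header_size, remaining)
            yield first                   # the short header chunk comes first
            remaining -= first
        while remaining:
            size = min(block_size, remaining)
            yield size                    # then fixed-size data blocks
            remaining -= size

    # e.g. a 10000 byte file -> [512, 4096, 4096, 1296]
    print(list(fixed_chunk_sizes(10000)))
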
@ -63,7 +63,7 @@ For more details, see :ref:`chunker_details`.
``--noatime / --noctime``
~~~~~~~~~~~~~~~~~~~~~~~~~
You can use these ``borg create`` options to not store the respective timestamp
You can use these ``borg create`` options not to store the respective timestamp
into the archive, in case you do not really need it.
Besides saving a little space for the not archived timestamp, it might also
@ -74,7 +74,7 @@ won't deduplicate just because of that.
``--nobsdflags / --noflags``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use this to not query and store (or not extract and set) flags - in case
You can use this not to query and store (or not extract and set) flags - in case
you don't need them or if they are broken somehow for your fs.
On Linux, dealing with the flags needs some additional syscalls. Especially when
@ -132,7 +132,7 @@ scale and perform better if you do not work via the FUSE mount.
Example
+++++++
Imagine you have made some snapshots of logical volumes (LVs) you want to backup.
Imagine you have made some snapshots of logical volumes (LVs) you want to back up.
.. note::

View File

@ -151,7 +151,7 @@ in the upper part of the table, in the lower part is the old and/or unsafe(r) st
.. nanorst: inline-replace
`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.

View File

@ -1824,7 +1824,7 @@ class ArchiveChecker:
# if we kill the defect chunk here, subsequent actions within this "borg check"
# run will find missing chunks and replace them with all-zero replacement
# chunks and flag the files as "repaired".
# if another backup is done later and the missing chunks get backupped again,
# if another backup is done later and the missing chunks get backed up again,
# a "borg check" afterwards can heal all files where this chunk was missing.
logger.warning(
"Found defect chunks. They will be deleted now, so affected files can "

View File

@ -744,7 +744,7 @@ class CreateMixIn:
subparser.add_argument(
"--paths-from-stdin",
action="store_true",
help="read DELIM-separated list of paths to backup from stdin. Will not " "recurse into directories.",
help="read DELIM-separated list of paths to back up from stdin. Will not " "recurse into directories.",
)
subparser.add_argument(
"--paths-from-command",

View File

@ -199,14 +199,14 @@ class HelpMixIn:
Examples::
# backup pics, but not the ones from 2018, except the good ones:
# back up pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create --pattern=+pics/2018/good --pattern=-pics/2018 archive pics
# backup only JPG/JPEG files (case insensitive) in all home directories:
# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create --pattern '+ re:\\.jpe?g(?i)$' archive /home
# backup homes, but exclude big downloads (like .ISO files) or hidden files:
# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create --exclude 're:\\.iso(?i)$' --exclude 'sh:home/**/.*' archive /home
# use a file with patterns (recursion root '/' via command line):
@ -221,7 +221,7 @@ class HelpMixIn:
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don't backup the other home directories
# don't back up the other home directories
- home/*
# don't even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)

View File

@ -151,7 +151,7 @@ class KeysMixIn:
key_export_epilog = process_epilog(
"""
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View File

@ -146,7 +146,7 @@ class RCreateMixIn:
.. nanorst: inline-replace
`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.

View File

@ -54,7 +54,7 @@ class RDeleteMixIn:
for archive_info in manifest.archives.list(sort_by=["ts"]):
msg.append(format_archive(archive_info))
else:
msg.append("This repository seems to not have any archives.")
msg.append("This repository seems not to have any archives.")
else:
msg.append(
"This repository seems to have no manifest, so we can't "

View File

@ -129,7 +129,7 @@ class Repository:
this is of course way more complex).
LoggedIO gracefully handles truncate/unlink splits as long as the truncate resulted in
a zero length file. Zero length segments are considered to not exist, while LoggedIO.cleanup()
a zero length file. Zero length segments are considered not to exist, while LoggedIO.cleanup()
will still get rid of them.
"""

View File

@ -615,17 +615,17 @@ class IndexCorruptionTestCase(BaseTestCase):
idx = NSIndex()
# create lots of colliding entries
for y in range(700): # stay below max load to not trigger resize
for y in range(700): # stay below max load not to trigger resize
idx[HH(0, y, 0)] = (0, y, 0)
assert idx.size() == 1024 + 1031 * 48 # header + 1031 buckets
# delete lots of the collisions, creating lots of tombstones
for y in range(400): # stay above min load to not trigger resize
for y in range(400): # stay above min load not to trigger resize
del idx[HH(0, y, 0)]
# create lots of colliding entries, within the not yet used part of the hashtable
for y in range(330): # stay below max load to not trigger resize
for y in range(330): # stay below max load not to trigger resize
# at y == 259 a resize will happen due to going beyond max EFFECTIVE load
# if the bug is present, that element will be inserted at the wrong place.
# and because it will be at the wrong place, it can not be found again.

View File

@ -463,7 +463,7 @@ class RepositoryCommitTestCase(RepositoryTestCaseBase):
put_segment = get_latest_segment()
self.repository.commit(compact=False)
# We now delete H(1), and force this segment to not be compacted, which can happen
# We now delete H(1), and force this segment not to be compacted, which can happen
# if it's not sparse enough (symbolized by H(2) here).
self.repository.delete(H(1))
self.repository.put(H(2), fchunk(b"1"))