Docs grammar fixes.

One cannot "to not x", but one can "not to x".
Avoiding split infinitives gives the added bonus that machine
translation yields better results.

setup (n/adj) vs. set (v) up. We don't say "I setup it" but "I set it up".

Likewise for login (n/adj) and log (v) in, backup (n/adj) and back (v) up.
Paul D 2022-12-29 00:01:48 +00:00
parent e49b60ae59
commit a85b643866
33 changed files with 106 additions and 106 deletions


@ -25,8 +25,8 @@ Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
See the `installation manual`_ or, if you have already
downloaded Borg, ``docs/installation.rst`` to get started with Borg.


@ -419,7 +419,7 @@ Compatibility notes:
Deprecations:
- --compression N (with N being a number, as in 0.24) is deprecated.
We keep the --compression 0..9 for now to not break scripts, but it is
We keep the --compression 0..9 for now not to break scripts, but it is
deprecated and will be removed later, so better fix your scripts now:
--compression 0 (as in 0.24) is the same as --compression zlib,0 (now).
BUT: if you do not want compression, you rather want --compression none
@ -617,7 +617,7 @@ New features:
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise to not spoil the OS cache, fixes attic #252
- use posix_fadvise not to spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote


@ -864,7 +864,7 @@ New features:
- ability to use a system-provided version of "xxhash"
- create:
- changed the default behaviour to not store the atime of fs items. atime is
- changed the default behaviour not to store the atime of fs items. atime is
often rather not interesting and fragile - it easily changes even if nothing
else has changed and, if stored into the archive, spoils deduplication of
the archive metadata stream.
@ -1781,7 +1781,7 @@ Fixes:
- security fix: configure FUSE with "default_permissions", #3903
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- make "hostname" short, even on misconfigured systems, #4262
- fix free space calculation on macOS (and others?), #4289
- config: quit with error message when no key is provided, #4223
@ -3175,7 +3175,7 @@ Bug fixes:
- security fix: configure FUSE with "default_permissions", #3903.
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- xattrs: fix borg exception handling on ENOSPC error, #3808.
New features:


@ -202,7 +202,7 @@ Salt running on a Debian system.
Enhancements
------------
As this section only describes a simple and effective setup it could be further
As this section only describes a simple and effective setup, it could be further
enhanced when supporting (a limited set) of client supplied commands. A wrapper
for starting `borg serve` could be written. Or borg itself could be enhanced to
autodetect it runs under SSH by checking the `SSH_ORIGINAL_COMMAND` environment
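
One possible shape of the wrapper mentioned above, as a hedged sketch only (it is
not shipped with Borg, and the restricted repository path is a made-up example):

import os
import sys

# Whatever command the client requested via SSH_ORIGINAL_COMMAND, only allow
# "borg serve"; everything else is rejected for this key.
requested = os.environ.get("SSH_ORIGINAL_COMMAND", "")
if requested and requested.split()[:2] != ["borg", "serve"]:
    sys.exit("only 'borg serve' is allowed for this key")
# --restrict-to-path confines the client to one repository directory
# (hypothetical path).
os.execvp("borg", ["borg", "serve", "--restrict-to-path", "/srv/borg/repos"])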


@ -115,8 +115,8 @@ Which file types, attributes, etc. are *not* preserved?
Are there other known limitations?
----------------------------------
- borg extract only supports restoring into an empty destination. After that,
the destination will exactly have the contents of the extracted archive.
- borg extract supports restoring only into an empty destination. After extraction,
the destination will have exactly the contents of the extracted archive.
If you extract into a non-empty destination, borg will (for example) not
remove files which are in the destination, but not in the archive.
See :issue:`4598` for a workaround and more details.
@ -128,8 +128,8 @@ If a backup stops mid-way, does the already-backed-up data stay there?
Yes, Borg supports resuming backups.
During a backup a special checkpoint archive named ``<archive-name>.checkpoint``
is saved every checkpoint interval (the default value for this is 30
During a backup, a special checkpoint archive named ``<archive-name>.checkpoint``
is saved at every checkpoint interval (the default value for this is 30
minutes) containing all the data backed-up until that point.
This checkpoint archive is a valid archive,
@ -334,7 +334,7 @@ Assuming that all your chunks have a size of :math:`2^{21}` bytes (approximately
and we have a "perfect" hash algorithm, we can think that the probability of collision
would be of :math:`p^2/2^{n+1}` then, using SHA-256 (:math:`n=256`) and for example
we have 1000 million chunks (:math:`p=10^9`) (1000 million chunks would be about 2100TB).
The probability would be around to 0.0000000000000000000000000000000000000000000000000000000000043.
The probability would be around 0.0000000000000000000000000000000000000000000000000000000000043.
A mass-murderer space rock happens about once every 30 million years on average.
This leads to a probability of such an event occurring in the next second to about :math:`10^{-15}`.
@ -342,9 +342,9 @@ That's **45** orders of magnitude more probable than the SHA-256 collision. Brie
if you find SHA-256 collisions scary then your priorities are wrong. This example was grabbed from
`this SO answer <https://stackoverflow.com/a/4014407/13359375>`_, it's great honestly.
Still, the real question is if Borg tries to not make this happen?
Still, the real question is if Borg tries not to make this happen?
Well... it used to not check anything but there was a feature added which saves the size
Well... previously it did not check anything until there was a feature added which saves the size
of the chunks too, so the size of the chunks is compared to the size that you got with the
hash and if the check says there is a mismatch it will raise an exception instead of corrupting
the file. This doesn't save us from everything but reduces the chances of corruption.
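
As a quick check of the arithmetic quoted above (a minimal sketch; n and p are
the values used in the example):

from decimal import Decimal, getcontext

getcontext().prec = 80
n = 256      # hash output bits (SHA-256)
p = 10**9    # number of chunks, as in the example above
collision_probability = Decimal(p) ** 2 / Decimal(2) ** (n + 1)
print(collision_probability)  # roughly 4.3E-60, matching the figure quoted above
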
@ -1006,7 +1006,7 @@ How can I avoid unwanted base directories getting stored into archives?
Possible use cases:
- Another file system is mounted and you want to backup it with original paths.
- Another file system is mounted and you want to back it up with original paths.
- You have created a BTRFS snapshot in a ``/.snapshots`` directory for backup.
To achieve this, run ``borg create`` within the mountpoint/snapshot directory:


@ -161,7 +161,7 @@ This new, more complex repo v2 object format was implemented to be able to effic
query the metadata without having to read, transfer and decrypt the (usually much bigger)
data part.
The metadata is encrypted to not disclose potentially sensitive information that could be
The metadata is encrypted not to disclose potentially sensitive information that could be
used for e.g. fingerprinting attacks.
The compression `ctype` and `clevel` is explained in :ref:`data-compression`.
@ -688,7 +688,7 @@ To determine whether a file has not changed, cached values are looked up via
the key in the mapping and compared to the current file attribute values.
If the file's size, timestamp and inode number is still the same, it is
considered to not have changed. In that case, we check that all file content
considered not to have changed. In that case, we check that all file content
chunks are (still) present in the repository (we check that via the chunks
cache).
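
A rough illustration of the lookup described above (an assumption-laden sketch,
not Borg's actual files cache code; the cache is modelled as a plain dict):

import os

def seems_unchanged(files_cache, path):
    """True if the cached size, timestamp and inode still match, i.e. the file
    is considered not to have changed. Whether all its chunks are still present
    in the repository would be verified separately via the chunks cache."""
    entry = files_cache.get(path)  # cached (size, mtime_ns, inode) tuple
    if entry is None:
        return False
    st = os.stat(path, follow_symlinks=False)
    return entry == (st.st_size, st.st_mtime_ns, st.st_ino)
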
@ -818,7 +818,7 @@ bucket is reached.
This particular mode of operation is open addressing with linear probing.
When the hash table is filled to 75%, its size is grown. When it's
emptied to 25%, its size is shrinked. Operations on it have a variable
emptied to 25%, its size is shrunken. Operations on it have a variable
complexity between constant and linear with low factor, and memory overhead
varies between 33% and 300%.
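
For readers unfamiliar with the terms, a toy sketch of open addressing with
linear probing and the 75% grow threshold (illustration only; the real index is
a C hash table with tombstones and also shrinks at 25%):

class ToyIndex:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.slots = [None] * capacity  # each slot is None or a (key, value) pair
        self.used = 0

    def _slot(self, key):
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity  # linear probing: try the next bucket
        return i

    def __setitem__(self, key, value):
        i = self._slot(key)
        if self.slots[i] is None:
            self.used += 1
        self.slots[i] = (key, value)
        if self.used > 0.75 * self.capacity:  # grow when filled to 75%
            self._resize(self.capacity * 2)

    def __getitem__(self, key):
        entry = self.slots[self._slot(key)]
        if entry is None:
            raise KeyError(key)
        return entry[1]

    def _resize(self, capacity):
        entries = [e for e in self.slots if e is not None]
        self.capacity, self.slots, self.used = capacity, [None] * capacity, 0
        for key, value in entries:
            self[key] = value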


@ -60,7 +60,7 @@ In other words, the object ID itself only authenticates the plaintext of the
object and not its context or meaning. The latter is established by a different
object referring to an object ID, thereby assigning a particular meaning to
an object. For example, an archive item contains a list of object IDs that
represent packed file metadata. On their own it's not clear that these objects
represent packed file metadata. On their own, it's not clear that these objects
would represent what they do, but by the archive item referring to them
in a particular part of its own data structure assigns this meaning.
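
A simplified picture of the relationship described above, assuming a keyed MAC
as the id-hash (as Borg's keyed modes use); the names here are illustrative:

import hashlib
import hmac

def object_id(id_key: bytes, plaintext: bytes) -> bytes:
    # The ID binds only to the plaintext bytes. Whether those bytes are file
    # data, packed metadata, etc. is established by whichever object refers
    # to this ID, not by the ID itself.
    return hmac.new(id_key, plaintext, hashlib.sha256).digest()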


@ -334,9 +334,9 @@ $ cd /home/user/Documents
$ borg create \(aqdaily\-projectA\-{now:%Y\-%m\-%d}\(aq projectA
# Use external command to determine files to archive
# Use \-\-paths\-from\-stdin with find to only backup files less than 1MB in size
# Use \-\-paths\-from\-stdin with find to back up only files less than 1MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin small\-files\-only
# Use \-\-paths\-from\-command with find to only backup files from a given user
# Use \-\-paths\-from\-command with find to back up files only from a given user
$ borg create \-\-paths\-from\-command joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with \-\-paths\-delimiter (for example, for filenames with newlines in them)
$ find ~ \-size \-1000k \-print0 | borg create \e


@ -205,7 +205,7 @@ _
.\" nanorst: inline-replace
.
.sp
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised to NOT use this mode
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
.sp


@ -43,8 +43,8 @@ Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
.sp
Borg stores a set of files in an \fIarchive\fP\&. A \fIrepository\fP is a collection
of \fIarchives\fP\&. The format of repositories is Borg\-specific. Borg does not


@ -13,11 +13,11 @@ DESCRIPTION
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back data up.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
Borg stores a set of files in an *archive*. A *repository* is a collection
of *archives*. The format of repositories is Borg-specific. Borg does not


@ -74,9 +74,9 @@ Examples
$ borg create 'daily-projectA-{now:%Y-%m-%d}' projectA
# Use external command to determine files to archive
# Use --paths-from-stdin with find to only backup files less than 1MB in size
# Use --paths-from-stdin with find to back up only files less than 1MB in size
$ find ~ -size -1000k | borg create --paths-from-stdin small-files-only
# Use --paths-from-command with find to only backup files from a given user
# Use --paths-from-command with find to back up files only from a given user
$ borg create --paths-from-command joes-files -- find /srv/samba/shared -user joe
# Use --paths-from-stdin with --paths-delimiter (for example, for filenames with newlines in them)
$ find ~ -size -1000k -print0 | borg create \


@ -30,7 +30,7 @@ for block devices (like disks, partitions, LVM LVs) or raw disk image files.
``--chunker-params=fixed,4096,512`` results in fixed 4kiB sized blocks,
but the first header block will only be 512B long. This might be useful to
dedup files with 1 header + N fixed size data blocks. Be careful to not
dedup files with 1 header + N fixed size data blocks. Be careful not to
produce a too big amount of chunks (like using small block size for huge
files).
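
To make the block layout concrete, a small sketch of how ``fixed,4096,512``
splits a file (it illustrates the layout only, not the chunker implementation):

def fixed_blocks(path, block_size=4096, header_size=512):
    """Yield one short header block, then fixed-size data blocks, as described
    for --chunker-params=fixed,<block_size>,<header_size> above."""
    with open(path, "rb") as f:
        header = f.read(header_size)
        if header:
            yield header  # the single 512B header block
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield block  # fixed 4kiB data blocks
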
@ -63,7 +63,7 @@ For more details, see :ref:`chunker_details`.
``--noatime / --noctime``
~~~~~~~~~~~~~~~~~~~~~~~~~
You can use these ``borg create`` options to not store the respective timestamp
You can use these ``borg create`` options not to store the respective timestamp
into the archive, in case you do not really need it.
Besides saving a little space for the not archived timestamp, it might also
@ -74,7 +74,7 @@ won't deduplicate just because of that.
``--nobsdflags / --noflags``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use this to not query and store (or not extract and set) flags - in case
You can use this not to query and store (or not extract and set) flags - in case
you don't need them or if they are broken somehow for your fs.
On Linux, dealing with the flags needs some additional syscalls. Especially when


@ -151,7 +151,7 @@ in the upper part of the table, in the lower part is the old and/or unsafe(r) st
.. nanorst: inline-replace
`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.


@ -1824,7 +1824,7 @@ class ArchiveChecker:
# if we kill the defect chunk here, subsequent actions within this "borg check"
# run will find missing chunks and replace them with all-zero replacement
# chunks and flag the files as "repaired".
# if another backup is done later and the missing chunks get backupped again,
# if another backup is done later and the missing chunks get backed up again,
# a "borg check" afterwards can heal all files where this chunk was missing.
logger.warning(
"Found defect chunks. They will be deleted now, so affected files can "


@ -146,7 +146,7 @@ class RCreateMixIn:
.. nanorst: inline-replace
`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.


@ -54,7 +54,7 @@ class RDeleteMixIn:
for archive_info in manifest.archives.list(sort_by=["ts"]):
msg.append(format_archive(archive_info))
else:
msg.append("This repository seems to not have any archives.")
msg.append("This repository seems not to have any archives.")
else:
msg.append(
"This repository seems to have no manifest, so we can't "


@ -129,7 +129,7 @@ class Repository:
this is of course way more complex).
LoggedIO gracefully handles truncate/unlink splits as long as the truncate resulted in
a zero length file. Zero length segments are considered to not exist, while LoggedIO.cleanup()
a zero length file. Zero length segments are considered not to exist, while LoggedIO.cleanup()
will still get rid of them.
"""


@ -615,17 +615,17 @@ class IndexCorruptionTestCase(BaseTestCase):
idx = NSIndex()
# create lots of colliding entries
for y in range(700): # stay below max load to not trigger resize
for y in range(700): # stay below max load not to trigger resize
idx[HH(0, y, 0)] = (0, y, 0)
assert idx.size() == 1024 + 1031 * 48 # header + 1031 buckets
# delete lots of the collisions, creating lots of tombstones
for y in range(400): # stay above min load to not trigger resize
for y in range(400): # stay above min load not to trigger resize
del idx[HH(0, y, 0)]
# create lots of colliding entries, within the not yet used part of the hashtable
for y in range(330): # stay below max load to not trigger resize
for y in range(330): # stay below max load not to trigger resize
# at y == 259 a resize will happen due to going beyond max EFFECTIVE load
# if the bug is present, that element will be inserted at the wrong place.
# and because it will be at the wrong place, it can not be found again.


@ -463,7 +463,7 @@ class RepositoryCommitTestCase(RepositoryTestCaseBase):
put_segment = get_latest_segment()
self.repository.commit(compact=False)
# We now delete H(1), and force this segment to not be compacted, which can happen
# We now delete H(1), and force this segment not to be compacted, which can happen
# if it's not sparse enough (symbolized by H(2) here).
self.repository.delete(H(1))
self.repository.put(H(2), fchunk(b"1"))