Merge pull request #3351 from milkey-mouse/fewer-not-less-bp1.1
Correct usage of "fewer" in place of "less" (1.1 backport)
commit babdef574c

3 changed files with 6 additions and 6 deletions

@@ -1284,7 +1284,7 @@ Other changes:
 
 - pass meta-data around, #765
 - move some constants to new constants module
-- better readability and less errors with namedtuples, #823
+- better readability and fewer errors with namedtuples, #823
 - moved source tree into src/ subdirectory, #1016
 - made borg.platform a package, #1113
 - removed dead crypto code, #1032

@@ -2650,7 +2650,7 @@ Version 0.23.0 (2015-06-11)
 Incompatible changes (compared to attic, fork related):
 
 - changed sw name and cli command to "borg", updated docs
-- package name (and name in urls) uses "borgbackup" to have less collisions
+- package name (and name in urls) uses "borgbackup" to have fewer collisions
 - changed repo / cache internal magic strings from ATTIC* to BORG*,
   changed cache location to .cache/borg/ - this means that it currently won't
   accept attic repos (see issue #21 about improving that)

@@ -108,7 +108,7 @@ Are there other known limitations?
 usually corresponding to tens or hundreds of millions of files/dirs.
 When trying to go beyond that limit, you will get a fatal IntegrityError
 exception telling that the (archive) object is too big.
-An easy workaround is to create multiple archives with less items each.
+An easy workaround is to create multiple archives with fewer items each.
 See also the :ref:`archive_limitation` and :issue:`1452`.
 
 :ref:`borg_info` shows how large (relative to the maximum size) existing
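
The workaround named in this hunk amounts to running ``borg create`` several
times against smaller parts of the tree. A minimal sketch — the repository
path, archive names, and data layout below are hypothetical::

    # split one huge backup into several smaller archives,
    # each containing fewer items (paths are made up)
    borg create /path/to/repo::projects-2017-12 /data/projects
    borg create /path/to/repo::media-2017-12 /data/media
    borg create /path/to/repo::home-2017-12 /data/home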

@@ -216,7 +216,7 @@ I get an IntegrityError or similar - what now?
 
 A single error does not necessarily indicate bad hardware or a Borg
 bug. All hardware exhibits a bit error rate (BER). Hard drives are typically
-specified as exhibiting less than one error every 12 to 120 TB
+specified as exhibiting fewer than one error every 12 to 120 TB
 (one bit error in 10e14 to 10e15 bits). The specification is often called
 *unrecoverable read error rate* (URE rate).
 
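
The "12 to 120 TB" range follows directly from the quoted error rates; a
quick sanity check of the arithmetic (plain arithmetic, nothing
borg-specific)::

    # one bit error per 10^14..10^15 bits, 8 bits per byte, 10^12 bytes per TB
    python3 -c 'print(1e14 / 8 / 1e12, "to", 1e15 / 8 / 1e12, "TB")'
    # prints: 12.5 to 125.0 TB

which the docs round to "12 to 120 TB".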

@@ -535,7 +535,7 @@ ACLs/xattrs), the limit will be ~32 million files/directories per archive.
 If one tries to create an archive object bigger than MAX_OBJECT_SIZE, a fatal
 IntegrityError will be raised.
 
-A workaround is to create multiple archives with less items each, see
+A workaround is to create multiple archives with fewer items each, see
 also :issue:`1452`.
 
 .. _item:
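
If a borgbackup 1.1.x installation is at hand, the limit mentioned here can
be inspected directly; a sketch, assuming ``MAX_OBJECT_SIZE`` lives in borg's
constants module as in 1.1.x::

    # assumes borgbackup is importable; the value is 20 MiB in 1.1.x
    python3 -c 'from borg.constants import MAX_OBJECT_SIZE; print(MAX_OBJECT_SIZE)'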

@@ -706,7 +706,7 @@ be estimated like that::
 All units are Bytes.
 
 It is assuming every chunk is referenced exactly once (if you have a lot of
-duplicate chunks, you will have less chunks than estimated above).
+duplicate chunks, you will have fewer chunks than estimated above).
 
 It is also assuming that typical chunk size is 2^HASH_MASK_BITS (if you have
 a lot of files smaller than this statistical medium chunk size, you will have
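
Under the stated assumptions, the chunk count itself is roughly the unique
data size divided by 2^HASH_MASK_BITS; with borg 1.1's default chunker
parameters HASH_MASK_BITS is 21, i.e. ~2 MiB average chunks. For example::

    # 1 TiB of unique, non-duplicate data at 2 MiB statistical chunk size
    python3 -c 'print(2**40 // 2**21)'
    # prints: 524288

As the quoted text says, heavy deduplication lowers the real count and many
small files raise it.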