
Correct usage of "fewer" in place of "less"

This commit is contained in:
Tom Denley 2017-11-11 11:21:45 +00:00
parent 46d0f3e81d
commit c6591a7c06
3 changed files with 6 additions and 6 deletions


@@ -1050,7 +1050,7 @@ Other changes:
 - pass meta-data around, #765
 - move some constants to new constants module
-- better readability and less errors with namedtuples, #823
+- better readability and fewer errors with namedtuples, #823
 - moved source tree into src/ subdirectory, #1016
 - made borg.platform a package, #1113
 - removed dead crypto code, #1032
@@ -2416,7 +2416,7 @@ Version 0.23.0 (2015-06-11)
 Incompatible changes (compared to attic, fork related):
 - changed sw name and cli command to "borg", updated docs
-- package name (and name in urls) uses "borgbackup" to have less collisions
+- package name (and name in urls) uses "borgbackup" to have fewer collisions
 - changed repo / cache internal magic strings from ATTIC* to BORG*,
   changed cache location to .cache/borg/ - this means that it currently won't
   accept attic repos (see issue #21 about improving that)


@@ -108,7 +108,7 @@ Are there other known limitations?
 usually corresponding to tens or hundreds of millions of files/dirs.
 When trying to go beyond that limit, you will get a fatal IntegrityError
 exception telling that the (archive) object is too big.
-An easy workaround is to create multiple archives with less items each.
+An easy workaround is to create multiple archives with fewer items each.
 See also the :ref:`archive_limitation` and :issue:`1452`.
 :ref:`borg_info` shows how large (relative to the maximum size) existing
@@ -215,7 +215,7 @@ I get an IntegrityError or similar - what now?
 A single error does not necessarily indicate bad hardware or a Borg
 bug. All hardware exhibits a bit error rate (BER). Hard drives are typically
-specified as exhibiting less than one error every 12 to 120 TB
+specified as exhibiting fewer than one error every 12 to 120 TB
 (one bit error in 10e14 to 10e15 bits). The specification is often called
 *unrecoverable read error rate* (URE rate).
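The "12 to 120 TB" figure in this hunk follows directly from the quoted URE rate. A short sanity-check sketch (the one-error-per-1e14-bits rate is the assumed spec from the text above; the function name and values are illustrative, not Borg code):

```python
# Sketch: expected number of unrecoverable bit errors for a given
# amount of data read, assuming a spec of one bit error per 1e14 bits
# (the lower bound quoted above).
def expected_bit_errors(bytes_read, bits_per_error=1e14):
    return bytes_read * 8 / bits_per_error

TB = 10**12
# 1e14 bits is 1.25e13 bytes, i.e. 12.5 TB read per expected bit error:
print(expected_bit_errors(12.5 * TB))  # -> 1.0
```

At the 10e15-bits spec the same calculation gives one expected error per ~125 TB, matching the upper end of the quoted range.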


@@ -538,7 +538,7 @@ ACLs/xattrs), the limit will be ~32 million files/directories per archive.
 If one tries to create an archive object bigger than MAX_OBJECT_SIZE, a fatal
 IntegrityError will be raised.
-A workaround is to create multiple archives with less items each, see
+A workaround is to create multiple archives with fewer items each, see
 also :issue:`1452`.
 .. _item:
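The size check this hunk describes can be sketched as follows. This is not Borg's actual code, and the 20 MiB value for MAX_OBJECT_SIZE is an assumption for illustration only:

```python
# Minimal sketch of the check described above: attempting to store an
# object larger than MAX_OBJECT_SIZE raises a fatal IntegrityError.
MAX_OBJECT_SIZE = 20 * 1024 * 1024  # assumed ~20 MiB, illustrative value

class IntegrityError(Exception):
    """Raised when an object exceeds the maximum allowed size."""

def put_object(data: bytes) -> None:
    if len(data) > MAX_OBJECT_SIZE:
        raise IntegrityError(
            f"object too big: {len(data)} > {MAX_OBJECT_SIZE} bytes")
    # ... actually store the object here ...

put_object(b"x" * 1024)  # a small object is accepted
```

Since the archive's item metadata all ends up in such objects, splitting a backup into several archives keeps each one's metadata under the limit, which is the workaround the hunk recommends.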
@@ -709,7 +709,7 @@ be estimated like that::
 All units are Bytes.
 It is assuming every chunk is referenced exactly once (if you have a lot of
-duplicate chunks, you will have less chunks than estimated above).
+duplicate chunks, you will have fewer chunks than estimated above).
 It is also assuming that typical chunk size is 2^HASH_MASK_BITS (if you have
 a lot of files smaller than this statistical medium chunk size, you will have