formula is only approximately correct

the movement of the start of the hashing window stops at (file_size - window_size), so THAT would be the factor in the formula, not just file_size.

for medium and big files, window_size is much smaller than file_size, so I guess we can just say "approximately" for the general case.
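
To illustrate with hypothetical numbers (the window and file sizes below are made up, not Borg's actual values): the window start only visits positions 0 .. (file_size - window_size), so the exact amount a naive re-hash-every-window approach processes is slightly less than the simpler ``window_size * file_size`` figure::

    # hypothetical sizes, for illustration only
    window_size = 4096
    file_size = 100 * 1024**2                  # a 100 MiB file

    positions = file_size - window_size + 1    # window start positions 0 .. (file_size - window_size)
    exact = positions * window_size            # bytes hashed when re-hashing every window from scratch
    approx = file_size * window_size           # the simplified figure used in the docs

    print(exact / approx)                      # ~0.99996 here, hence "approximately"
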
Thomas Waldmann 2022-01-16 20:39:29 +01:00
parent 79cb4e43e5
commit 94e93ba7e6
1 changed file with 1 addition and 1 deletion

@@ -633,7 +633,7 @@ This results in a high chance that a single cluster of changes to a file will on
 result in 1-2 new chunks, aiding deduplication.
 Using normal hash functions this would be extremely slow,
-requiring hashing ``window size * file size`` bytes.
+requiring hashing approximately ``window size * file size`` bytes.
 A rolling hash is used instead, which allows to add a new input byte and
 compute a new hash as well as *remove* a previously added input byte
 from the computed hash. This makes the cost of computing a hash for each
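
As a minimal sketch of the rolling-hash idea described in the hunk above (a Rabin-Karp style polynomial hash, not the buzhash-based chunker Borg actually uses; window size and constants are made up), adding the byte entering the window and removing the byte leaving it each cost O(1)::

    WINDOW_SIZE = 4095                          # assumed window size, illustrative only
    BASE = 257                                  # polynomial base (assumed)
    MOD = 2**32                                 # keep the hash within 32 bits
    BASE_POW = pow(BASE, WINDOW_SIZE - 1, MOD)  # weight of the oldest byte in the window

    def roll_in(h, byte_in):
        """Add a new input byte to the hash."""
        return (h * BASE + byte_in) % MOD

    def roll_out(h, byte_out):
        """Remove the oldest input byte from the hash."""
        return (h - byte_out * BASE_POW) % MOD

    def rolling_hashes(data):
        """Yield a hash for every window position, with O(1) work per byte."""
        h = 0
        for i, b in enumerate(data):
            if i >= WINDOW_SIZE:
                h = roll_out(h, data[i - WINDOW_SIZE])  # drop the byte leaving the window
            h = roll_in(h, b)                           # add the byte entering the window
            if i >= WINDOW_SIZE - 1:
                yield h

Note that ``rolling_hashes(data)`` yields ``len(data) - WINDOW_SIZE + 1`` hashes, one per window start position, which is exactly where the ``file_size - window_size`` factor from the commit message comes from.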