mirror of https://github.com/restic/restic.git
c147422ba5
Sparse files contain large regions consisting only of zero bytes. Checking that a blob contains only zeros runs at over 100 GB/s on modern x86 CPUs, whereas calculating SHA-256 hashes only reaches about 500 MB/s (or roughly 2 GB/s with hardware acceleration). We can therefore speed up hashing for all-zero blobs (which always have length chunker.MinSize) by checking for zero bytes and then using a precomputed hash. The all-zeros check is only performed for blobs of the minimal chunk size and thus adds no overhead most of the time. For chunks which are not all zero but have the minimal chunk size, the overhead stays below 2% based on the performance numbers above. This allows reading sparse sections of files as fast as the kernel can return data to us; on my system using BTRFS this resulted in about 4 GB/s.
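A minimal Go sketch of this short-circuit, not the actual restic implementation: `minSize` stands in for `chunker.MinSize` (assumed here to be 512 KiB), and `zeroChunk`, `allZero`, and `hashBlob` are hypothetical helpers introduced only for illustration.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// minSize stands in for chunker.MinSize; the 512 KiB value is an
// assumption for this sketch.
const minSize = 512 * 1024

// zeroChunk is a cached all-zero buffer of the minimal chunk size.
var zeroChunk = make([]byte, minSize)

// zeroHash is the SHA-256 of a minSize-long run of zero bytes,
// computed once so all-zero blobs never hit the hash function again.
var zeroHash = sha256.Sum256(zeroChunk)

// allZero reports whether a minSize-length buffer contains only zero
// bytes by comparing it against the cached zero buffer; bytes.Equal
// uses an optimized memory comparison under the hood.
func allZero(buf []byte) bool {
	return bytes.Equal(buf, zeroChunk)
}

// hashBlob returns the SHA-256 of buf, short-circuiting to the
// precomputed hash when buf is an all-zero blob of the minimal
// chunk size. For other blobs, the extra check costs at most the
// one pass over the buffer that allZero performs.
func hashBlob(buf []byte) [sha256.Size]byte {
	if len(buf) == minSize && allZero(buf) {
		return zeroHash
	}
	return sha256.Sum256(buf)
}

func main() {
	sparse := make([]byte, minSize) // all zeros, as in a sparse region
	data := []byte("not a sparse region")

	fmt.Printf("zero blob: %x\n", hashBlob(sparse))
	fmt.Printf("data blob: %x\n", hashBlob(data))
}
```

Because the check is gated on `len(buf) == minSize`, blobs of any other size skip it entirely, which matches the claim that the optimization adds no overhead most of the time.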
doc.go
filerestorer.go
filerestorer_test.go
fileswriter.go
fileswriter_test.go
hardlinks_index.go
hardlinks_index_test.go
preallocate_darwin.go
preallocate_linux.go
preallocate_other.go
preallocate_test.go
restorer.go
restorer_test.go
restorer_unix_test.go
sparsewrite.go