restic/internal/restorer
Michael Eischer c147422ba5 repository: special case SaveBlob for all zero chunks
Sparse files contain large regions consisting only of zero bytes. Checking
whether a blob contains only zeros can be done at over 100 GB/s on modern
x86 CPUs, whereas calculating SHA-256 hashes runs at only about 500 MB/s
(or 2 GB/s with hardware acceleration). Thus we can speed up the hash
calculation for all-zero blobs (which always have length
chunker.MinSize) by checking for zero bytes and then using a
precomputed hash.

The all-zeros check is only performed for blobs with the minimal chunk
size, and thus should add no overhead most of the time. For chunks which
are not all zero but do have the minimal chunk size, the overhead will be
below 2% based on the performance numbers above.

This allows reading sparse sections of files as fast as the kernel can
return data to us. On my system using BTRFS this resulted in about
4GB/s.
2022-09-24 21:39:39 +02:00
doc.go
filerestorer.go repository: special case SaveBlob for all zero chunks 2022-09-24 21:39:39 +02:00
filerestorer_test.go
fileswriter.go restorer: Fix race condition in partialFile.WriteAt 2022-09-24 21:39:39 +02:00
fileswriter_test.go
hardlinks_index.go
hardlinks_index_test.go
preallocate_darwin.go
preallocate_linux.go
preallocate_other.go
preallocate_test.go
restorer.go
restorer_test.go
restorer_unix_test.go restorer: move zeroPrefixLen to restic package 2022-09-24 21:39:39 +02:00
sparsewrite.go restorer: move zeroPrefixLen to restic package 2022-09-24 21:39:39 +02:00