Include the condition that the path is non-empty after applying
strip_components in the filter passed to iter_items.
All filtering of files to extract must be done in the filter callable used in
archive.iter_items, because iter_items will preload all chunks used in the
items it returns. If they are not actually extracted, the preloaded chunks
accumulate in the responses dict.
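A minimal sketch of such a filter; that iter_items takes a filter callable is
from the text above, but the matcher object and helper names here are
illustrative assumptions, not the actual code:

    import os

    def make_extract_filter(matcher, strip_components):
        def item_filter(item):
            # pattern matching and the strip_components emptiness check both
            # happen here, before iter_items preloads any chunks
            if not matcher.match(item.path):
                return False
            # reject items whose path becomes empty after stripping leading
            # path components, instead of skipping them later during extract
            stripped = os.sep.join(item.path.split(os.sep)[strip_components:])
            return bool(stripped)
        return item_filter

    # usage sketch:
    # for item in archive.iter_items(filter=make_extract_filter(matcher, 2)):
    #     extract(item)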
FreeBSD 10.2:
It does not give rc < 0 and errno == ERANGE if the buffer was too small,
like Linux or Mac OS X do.
rv == buffer length might be a signal of truncation;
rv > buffer length would be even worse.
Not sure if some implementation returns the total length of the data,
not just the amount put into the buffer, but as we use the returned length
to "truncate" the buffer, we better make sure it is not longer than the
buffer.
Also: FreeBSD listxattr memoryview len bugfix.
This also fixes the race condition seen in #1462 because there is only one
call now:
- either it succeeds, then we get the correct length as result and truncate
  the result value to that length,
- or it fails with ERANGE, then we grow the buffer to double its size and
  repeat,
- or it fails with some other error, then we raise OSError.
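A minimal sketch of that grow-and-retry loop, assuming a hypothetical
low-level wrapper _listxattr(path, buf) that behaves like listxattr(2): it
fills buf and returns the number of bytes written, or raises OSError. This is
not the actual implementation:

    import errno

    def listxattr_names(path, bufsize=1024):
        while True:
            buf = bytearray(bufsize)
            try:
                # exactly one call per attempt, so no size-query race
                n = _listxattr(path, buf)
            except OSError as e:
                if e.errno == errno.ERANGE:
                    bufsize *= 2  # buffer too small: double and retry
                    continue
                raise  # any other error propagates as OSError
            # defensively clamp: an implementation might report the total
            # data length, not just the amount put into the buffer
            n = min(n, len(buf))
            return bytes(buf[:n])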
An acd_cli (Amazon Cloud Drive FUSE filesystem) user had "borg init" crash in
the line below. By adding the assertion, we state that we do not expect
transaction_id to be None there, so it is easier to differentiate this case
from a random coding error.
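A sketch of the idea (the surrounding code is illustrative, not the actual
source): make the invariant explicit so a violation fails with a clear
assertion instead of an obscure crash further down.

    transaction_id = get_transaction_id()  # hypothetical accessor
    # state the invariant explicitly: transaction_id must not be None here,
    # so a violation is clearly distinguishable from a random coding error
    assert transaction_id is not None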
Increase the mask (target chunk size) from 14 (16 kiB) to 17 (128 kiB).
This should reduce the number of item metadata chunks an archive has to
reference to 1/8.
This does not completely fix #1452, but at least enables an 8x larger item
metadata stream.
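The arithmetic behind those numbers (variable names are illustrative):

    OLD_MASK, NEW_MASK = 14, 17
    print(2 ** OLD_MASK)               # 16384 bytes = 16 kiB target chunk size
    print(2 ** NEW_MASK)               # 131072 bytes = 128 kiB target chunk size
    print(2 ** (NEW_MASK - OLD_MASK))  # 8: chunks are 8x larger, so the same
                                       # metadata stream needs 1/8 the references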
We will not get() objects that have a segment entry larger than
MAX_OBJECT_SIZE; thus we should never produce such entries.
Also: introduce repository.MAX_DATA_SIZE that gives the max payload size.
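A sketch of the relationship between the two constants (the concrete numbers
here are assumed placeholders, not necessarily the real values):

    MAX_OBJECT_SIZE = 20 * 1024 * 1024  # assumed: max size of a segment entry
    PUT_HEADER_SIZE = 41                # assumed: per-entry header overhead
    MAX_DATA_SIZE = MAX_OBJECT_SIZE - PUT_HEADER_SIZE

    def put(key, data):
        # refuse to produce an entry that get() would later refuse to read
        if len(data) > MAX_DATA_SIZE:
            raise ValueError('data too large for a single repository object')
        ...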
Start with a length of 0 bytes (saves memory in case lz4 is not used).
Always grow when a bigger buffer is needed;
avoid per-call reallocation / freeing / garbage.
The statically allocated COMPR_BUFFER was the right size for chunks,
but not for the archive item, which could get larger if you have
many millions of files/dirs.
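A minimal sketch of such a grow-only scratch buffer (illustrative, not the
actual implementation):

    class GrowOnlyBuffer:
        def __init__(self):
            self._buf = bytearray()  # start at 0 bytes: free if never used

        def get(self, size):
            # grow when a bigger buffer is needed, never shrink, so the
            # same allocation is reused across calls (no per-call garbage)
            if len(self._buf) < size:
                self._buf = bytearray(size)
            return self._buf

    # usage sketch: hand the (possibly oversized) scratch buffer to the
    # compressor, then slice to the actual compressed length afterwards.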