implemented by introducing one level of indirection; the limit is now
so high that it is not practically relevant any more.
we always use the indirection (the list of metadata stream chunk ids is not
stored directly in the archive item, but in separate repo objects referenced by
the new ArchiveItem.item_ptrs list).
thus, the code behaves the same for all archive sizes.
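roughly, the indirection works like this (just a sketch of the idea;
save_object / load_object and the group size are made-up placeholders, not
borg's real api):

    import msgpack

    def write_item_ptrs(repo, metadata_chunk_ids, group_size=10000):
        # pack the (possibly huge) list of metadata stream chunk ids into
        # separate repo objects and return the small list of their ids
        item_ptrs = []
        for i in range(0, len(metadata_chunk_ids), group_size):
            data = msgpack.packb(metadata_chunk_ids[i:i + group_size])
            item_ptrs.append(repo.save_object(data))  # id of the new repo object
        return item_ptrs  # this small list goes into ArchiveItem.item_ptrs

    def read_metadata_chunk_ids(repo, item_ptrs):
        # resolve the indirection again: load each referenced repo object
        # and concatenate the chunk id lists
        ids = []
        for ptr in item_ptrs:
            ids.extend(msgpack.unpackb(repo.load_object(ptr)))
        return ids
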
work around setuptools puking about:
############################
# Package would be ignored #
############################
Python recognizes 'borg.cache_sync' as an importable package,
but it is not listed in the `packages` configuration of setuptools.
'borg.cache_sync' has been automatically added to the distribution only
because it may contain data files, but this behavior is likely to change
in future versions of setuptools (and therefore is considered deprecated).
Please make sure that 'borg.cache_sync' is included as a package by using
the `packages` configuration field or the proper discovery methods
(for example by using `find_namespace_packages(...)`/`find_namespace:`
instead of `find_packages(...)`/`find:`).
You can read more about "package discovery" and "data files" on setuptools
documentation page.
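for illustration, this kind of setup.py change makes the warning go away
(a sketch only: either list the package explicitly as below or switch to
find_namespace_packages; not necessarily the exact change done here):

    from setuptools import setup

    setup(
        name="borgbackup",
        packages=[
            "borg",
            "borg.cache_sync",  # listed explicitly, so setuptools stops guessing
            # ...
        ],
    )
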
hopefully this is the final fix.
after the first fix for #6400 (using os.umask after mkstemp), a new
problem showed up: chmod is not supported on some filesystems.
even after fixing that, there were other issues, see the ACLs issue
documented in #6933.
the root cause of all this is that tempfile.mkstemp internally uses a
very secure, but hardcoded mode of 0o600, which is problematic for our
use case.
mkstemp_mode (mostly copy&paste from the python stdlib tempfile module, with
"black" formatting applied) supports giving the mode via the api; that is the
only change needed.
slightly dirty due to the _xxx imports from tempfile, but hopefully
this will be supported in some future python version.
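as a simplified sketch, a mode-aware mkstemp just passes the mode through to
os.open instead of hardcoding 0o600 (the real mkstemp_mode stays much closer
to the stdlib code):

    import os
    import tempfile

    def mkstemp_mode(suffix="", prefix="tmp", dir=None, mode=0o600):
        # mode has the same default as the stdlib, but can now be overridden
        dir = dir if dir is not None else tempfile.gettempdir()
        names = tempfile._get_candidate_names()  # one of the "dirty" _xxx imports
        flags = os.O_RDWR | os.O_CREAT | os.O_EXCL
        for _ in range(tempfile.TMP_MAX):
            name = next(names)
            path = os.path.join(dir, prefix + name + suffix)
            try:
                fd = os.open(path, flags, mode)  # mode is a parameter, not hardcoded
            except FileExistsError:
                continue
            return fd, path
        raise FileExistsError("No usable temporary file name found")
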
Since the compression type identifier has been split into type and
level, the graphic needed a slight update.
Unfortunately, I don't have access to Visio, so I converted this to odg.
While writing my own out-of-band decoder, I had a hard time figuring out
how to unpack the manifest. From the description, I could only tell that
the manifest is msgpack'd, but not that it also goes through the same
encryption+compression logic as all other objects.
This should make it a little clearer and provide the necessary
information to understand how the compression works.
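so an out-of-band reader has to run the manifest through the usual pipeline
before msgpack comes into play, roughly like this (get_object, decrypt and
decompress stand in for whatever repo access, key and compressor apply):

    import msgpack

    MANIFEST_ID = b"\0" * 32  # the manifest is stored under the all-zero id

    def read_manifest(get_object, decrypt, decompress):
        blob = get_object(MANIFEST_ID)     # raw object as stored in the repo
        data = decompress(decrypt(blob))   # same logic as for any other object
        return msgpack.unpackb(data)       # only now is it plain msgpack
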
manifest, repo and cache are committed at every checkpoint interval.
also, when ctrl-c is pressed, finish deleting the current archive, commit, and only then terminate.
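the intended ctrl-c behavior, as a rough sketch (illustrative only; delete_one
and commit_all are placeholders, and commits really happen at the checkpoint
interval, not after every single archive):

    import signal

    def delete_archives(archives, delete_one, commit_all):
        interrupted = []

        def on_sigint(signum, frame):
            interrupted.append(True)  # remember the request, never abort mid-archive

        old_handler = signal.signal(signal.SIGINT, on_sigint)
        try:
            for archive in archives:
                delete_one(archive)   # the current archive is always finished
                commit_all()          # commit manifest, repo and cache
                if interrupted:
                    break             # terminate only after a clean commit
        finally:
            signal.signal(signal.SIGINT, old_handler)
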
the old code made just one attempt to detect the repo decryption key.
if the first chunk id we got from the chunks hashtable iterator happened to be
the id of the chunk we intentionally corrupted in test_delete_double_force,
setting up the key failed and that made the test crash.
in practice, this could of course also happen if chunks are corrupted, thus
we now retry with many other chunks before giving up.
error handling was improved: we no longer return None instead of a key (that
just leads to weird crashes elsewhere), but fail early with IntegrityError and
a reasonable error msg.
rename method to make_key to avoid confusion with borg.crypto.key.identify_key.
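the retry logic, sketched (helper names, the chunk id iterable and the retry
count are illustrative, not the real implementation):

    import itertools

    class IntegrityError(Exception):
        # stand-in for borg's IntegrityError, only used in this sketch
        pass

    def make_key(repository, chunk_ids, key_from_chunk, max_tries=10):
        # try several chunks instead of just one: a single corrupt chunk
        # must not break key detection (and with it the whole operation)
        for chunk_id in itertools.islice(chunk_ids, max_tries):
            try:
                return key_from_chunk(repository.get(chunk_id))  # detect + set up the key
            except Exception:
                continue  # maybe a corrupt chunk; try the next one
        raise IntegrityError("could not determine the repository key from any chunk")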