In borg 1.1, compact_segments() always ran directly after some repo-writing operation,
within the same borg process. Now, segments are only compacted by "borg compact",
which is a separate borg invocation (a new process), so we need to persist the
shadow_index so we do not lose that information.
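A minimal sketch of the idea (names and layout are illustrative, not borg's actual hints format): the shadow_index gets written out together with the other repository hints, so a later, separate "borg compact" process can read it back.

    import msgpack

    def write_hints(hints_path, segments, compact, shadow_index):
        # sketch only: illustrative layout, not borg's actual hints format
        hints = {
            'version': 2,
            'segments': segments,
            'compact': compact,
            'shadow_index': shadow_index,  # new: persisted so a later "borg compact" still has it
        }
        with open(hints_path, 'wb') as fd:
            fd.write(msgpack.packb(hints))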
On OI, rmdir gives errno 17 (EEXIST) when trying to remove a non-empty directory,
not ENOTEMPTY like other OSes do.
Also: fix one error handler to likewise use a tuple-membership check instead of "or" (illustrated below).
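For illustration, the broken vs. fixed pattern (simplified, not the exact borg code): "e.errno == errno.ENOTEMPTY or errno.EEXIST" is always truthy due to operator precedence, while a tuple-membership check actually tests both error codes.

    import errno, os

    def remove_dir_if_empty(path):
        # sketch: ignore "directory not empty", which is ENOTEMPTY on most OSes but EEXIST on OI
        try:
            os.rmdir(path)
        except OSError as e:
            # broken variant: "e.errno == errno.ENOTEMPTY or errno.EEXIST" -> always truthy
            if e.errno not in (errno.ENOTEMPTY, errno.EEXIST):  # fixed: tuple-membership check
                raise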
rephrase some warnings, fixes #5164
borg check --repair and borg recreate have been in the code for quite a while now, so they are not considered experimental any more.
borg recreate might be used wrongly (e.g. accidentally excluding everything / not matching anything when recreating an archive). Some warning words were added to the docs, but it will not ask for confirmation any more.
borg check: there might be kinds of corruption that borg check --repair can not fix, and it might even make things worse while trying to fix them. So this will still ask for confirmation, just with different wording.
- add a daemonizing() ctx manager (a rough sketch of the idea follows after this list)
The foreground borg mount process (local repo) survives until the lock
migration (performed by the background process) is finished, so the lock to be
migrated is never stale and the race condition is gone.
- add a test case revealing that locking is not safe during daemonization (borg mount)
- amend printing in testsuite.archiver
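A rough sketch of such a daemonizing() context manager (simplified, not borg's actual implementation, which also does the usual setsid/chdir/fd housekeeping): the foreground process only exits after the background child reports that its setup, e.g. the lock migration, is finished.

    import os
    from contextlib import contextmanager

    @contextmanager
    def daemonizing():
        r, w = os.pipe()
        if os.fork():              # foreground (parent) process
            os.close(w)
            os.read(r, 1)          # block until the background child signals "setup done"
            os._exit(0)
        os.close(r)                # background (child) process continues here
        try:
            yield                  # caller performs the daemon setup (e.g. lock migration)
        finally:
            os.write(w, b'x')      # now the foreground process may exit
            os.close(w)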
locking: fix ExclusiveLock race condition bug, fixes #4923
- ExclusiveLock is now based on os.rename instead of os.mkdir (see the sketch below).
- catch the FileNotFoundError observed under the race condition in ExclusiveLock.release()
  and .kill_stale_lock()
- added TestExclusiveLock.test_race_condition() which reveals issue #4923
- updated docs
- locking: use "raise LockTimeout from None" for prettier traceback
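A minimal sketch of the rename-based approach (simplified, not borg's actual ExclusiveLock): the lock is taken by atomically renaming a unique, non-empty temporary directory to the lock path; since a held lock path is a non-empty directory, the rename fails for everybody else.

    import errno, os

    def try_acquire(lock_path, unique_name):
        tmp = lock_path + '.' + unique_name              # unique temporary directory
        os.mkdir(tmp)
        open(os.path.join(tmp, unique_name), 'wb').close()
        try:
            os.rename(tmp, lock_path)                    # atomic: fails if the lock is already held
            return True
        except OSError as err:
            if err.errno in (errno.EEXIST, errno.ENOTEMPTY):   # somebody else holds the lock
                os.unlink(os.path.join(tmp, unique_name))
                os.rmdir(tmp)
                return False
            raise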
Co-authored-by: Thomas Portmann <thomas@portmann.org>
Co-authored-by: Thomas Waldmann <tw@waldmann-edv.de>
The umask value is NOT transmitted from client to server any more,
so the borg client can no longer influence the borg server's umask.
If you want a specific umask on the server side, use an ssh forced command
in the .ssh/authorized_keys file (example below).
OTOH, as the default value is 077 in general (for the client as well as for
the server) and the server no longer takes the value from the client,
there usually is no need to give it on the server side if you are
happy with the default value.
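For example, a forced command entry in the server's ~/.ssh/authorized_keys could look roughly like this (key, path and umask value are placeholders; this assumes the desired umask is passed via borg serve's --umask option):

    command="borg serve --umask 0027 --restrict-to-path /path/to/repo",restrict ssh-ed25519 AAAA... client@host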
Running 'borg key import' on a keyfile repository with the BORG_KEY_FILE
environment variable set works correctly if the BORG_KEY_FILE file
already exists. However, the command crashes if the BORG_KEY_FILE file
does not exist:
$ BORG_KEY_FILE=newborgkey borg key import /home/strager/borg-backups/straglum borgkey
Local Exception
Traceback (most recent call last):
[snip]
File "[snip]/borg/crypto/key.py", line 713, in sanity_check
with open(filename, 'rb') as fd:
FileNotFoundError: [Errno 2] No such file or directory: '[snip]/newborgkey'
Platform: Linux straglum 5.0.0-25-generic #26~18.04.1-Ubuntu SMP Thu Aug 1 13:51:02 UTC 2019 x86_64
Linux: debian buster/sid
Borg: 1.1.11 Python: CPython 3.7.7 msgpack: 0.5.6
PID: 15306 CWD: /home/strager/Projects/borg
sys.argv: ['[snip]/borg', 'key', 'import', '/home/strager/borg-backups/straglum', 'borgkey']
SSH_ORIGINAL_COMMAND: None
Make 'borg key import' not require the BORG_KEY_FILE file to already
exist.
This commit does not change the behavior of 'borg key import' without
BORG_KEY_FILE. This commit also does not change the behavior of 'borg
key import' on a repokey repository.
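A rough sketch of the intended behavior (simplified, hypothetical helper, not the actual borg code): when BORG_KEY_FILE is set, 'borg key import' uses that path as the target without requiring it to exist, since the import is what creates it.

    import os

    def get_new_target(args):
        # sketch: pick the keyfile path for 'borg key import'
        keyfile = os.environ.get('BORG_KEY_FILE')
        if keyfile is not None:
            return os.path.abspath(keyfile)      # do not require the file to exist yet
        return default_keyfile_path(args)        # hypothetical fallback into the keys dir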
I want to change the key lookup logic for the 'borg key import' command.
Extract methods out of the KeyfileKey.find_key and
KeyfileKey.get_new_target to make this future change possible without
duplicating code.
This commit should not change behavior.
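A rough sketch of the extraction (helper names are illustrative, not the actual borg API): find_key() and get_new_target() now share small helpers instead of duplicating the path lookup logic.

    import os

    class KeyfileKey:
        def _find_key_file_from_environment(self):
            return os.environ.get('BORG_KEY_FILE')    # shared by find_key() and get_new_target()

        def find_key(self):
            keyfile = self._find_key_file_from_environment()
            if keyfile is not None:
                return self.sanity_check(keyfile, self.repository.id_str)  # must exist and match repo
            return self._find_key_in_keys_dir()       # hypothetical second helper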
allow creating archives using the stdout of a given command
In addition to allowing:
some-command --param value | borg create REPO::ARCH -
also allow:
borg create --content-from-command REPO::ARCH -- some-command --param value
The difference is that the latter approach deals with errors properly.
In the former example, an archive is created no matter what: even if
`some-command` aborts and its output is truncated, Borg won't notice.
In the latter example, the exit status is checked and archive creation
is aborted properly when appropriate (see the sketch below).
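A sketch of the underlying idea (simplified, the chunking helper is hypothetical, not borg's actual implementation): the command's stdout feeds the archive, and a non-zero exit status aborts archive creation.

    import subprocess

    def create_from_command(cmd, process_stream):
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        process_stream(proc.stdout)          # hypothetical: chunk and store the stream
        if proc.wait() != 0:
            raise RuntimeError('command failed with rc %d, not committing the archive' % proc.returncode)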
The locally defined preload() function shadows the preload boolean keyword
argument; since a function object is always truthy, preloading is done even when
not requested by the caller, causing a memory leak (illustrated below).
Also move its definition outside of the loop.
This issue was found by Antonio Larrosa in borg issue #5202.
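An illustration of the bug pattern (simplified, not the actual borg code): the locally defined function shadows the boolean keyword argument, so the "if preload:" branch runs unconditionally.

    def fetch_many(ids, preload=False):
        def preload(chunk_ids):          # bug: shadows the boolean keyword argument
            pass
        if preload:                      # always true -- a function object is truthy
            preload(ids)                 # preloads even when the caller passed preload=False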
The old test stumbled over files with an empty chunks list, so I guess just
having some empty file in src/borg could make that test fail.
Fixed this by deleting a chunk of one specific file (we use this file /
this construction in misc. other places in the archiver tests).
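A sketch of the new approach (file name and helpers are illustrative, not the exact test code): delete a chunk of one specific, known-non-empty file instead of whatever item happens to come first.

    # sketch only: names are illustrative
    def delete_one_chunk_of(archive, repository, name='archiver.py'):
        for item in archive.iter_items():
            if item.path.endswith(name):              # a known non-empty file under src/borg
                repository.delete(item.chunks[-1].id) # remove one of its chunks
                break
        repository.commit()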