Fix typos

Andrea Gelmini 2020-12-26 21:59:10 +01:00 committed by Thomas Waldmann
parent 268eb2e598
commit 72e7c46fa7
34 changed files with 59 additions and 59 deletions

@@ -27,7 +27,7 @@ What's working
- ``borg list`` works as expected.
- ``borg extract --strip-components 1 ::backup-XXXX`` works.
If absolute paths are extracted, it's important to pass ``--strip-components 1`` as
otherwise the data is resotred to the original location!
otherwise the data is restored to the original location!
What's NOT working
------------------

@@ -124,7 +124,7 @@ Steps you should take:
Prior versions can access and modify repositories with this measure enabled, however,
to 1.0.9 or later their modifications are indiscernible from an attack and will
raise an error until the below procedure is followed. We are aware that this can
be be annoying in some circumstances, but don't see a way to fix the vulnerability
be annoying in some circumstances, but don't see a way to fix the vulnerability
otherwise.
In case a version prior to 1.0.9 is used to modify a repository where above procedure
@@ -426,7 +426,7 @@ Version 1.2.0a8 (2020-04-22)
Fixes:
- fixed potential index corruption / data loss issue due to bug in hashindex_set, #4829.
Please read and follow the more detailled notes close to the top of this document.
Please read and follow the more detailed notes close to the top of this document.
- fix crash when upgrading erroneous hints file, #4922
- commit-time free space calc: ignore bad compact map entries, #4796
- info: if the archive doesn't exist, print a pretty message, #4793
@@ -454,7 +454,7 @@ New features:
we just extract another copy instead of making a hardlink.
- move sync_file_range to its own extension for better platform compatibility.
- new --bypass-lock option to bypass locking, e.g. for read-only repos
- accept absolute pathes by removing leading slashes in patterns of all
- accept absolute paths by removing leading slashes in patterns of all
sorts but re: style, #4029
- delete: new --keep-security-info option
@@ -880,7 +880,7 @@ Compatibility notes:
Fixes:
- fixed potential index corruption / data loss issue due to bug in hashindex_set, #4829.
Please read and follow the more detailled notes close to the top of this document.
Please read and follow the more detailed notes close to the top of this document.
- upgrade bundled xxhash to 0.7.3, #4891.
0.7.2 is the minimum requirement for correct operations on ARMv6 in non-fixup
mode, where unaligned memory accesses cause bus errors.
@@ -1168,7 +1168,7 @@ New features:
- init: add warning to store both key and passphrase at safe place(s)
- BORG_HOST_ID env var to work around all-zero MAC address issue, #3985
- borg debug dump-repo-objs --ghost (dump everything from segment files,
including deleted or superceded objects or commit tags)
including deleted or superseded objects or commit tags)
- borg debug search-repo-objs (search in repo objects for hex bytes or strings)
Other changes:
@@ -1361,7 +1361,7 @@ Compatibility notes:
Fixes:
- check: data corruption fix: fix for borg check --repair malfunction, #3444.
See the more detailled notes close to the top of this document.
See the more detailed notes close to the top of this document.
- delete: also delete security dir when deleting a repo, #3427
- prune: fix building the "borg prune" man page, #3398
- init: use given --storage-quota for local repo, #3470
@@ -2702,8 +2702,8 @@ Bug fixes:
Fixes a chmod/chown/chgrp/unlink/rename/... crash race between getting
dirents and dispatching to process_symlink.
- yes(): abort on wrong answers, saying so, #1622
- fixed exception borg serve raised when connection was closed before reposiory
was openend. add an error message for this.
- fixed exception borg serve raised when connection was closed before repository
was opened. Add an error message for this.
- fix read-from-closed-FD issue, #1551
(this seems not to get triggered in 1.0.x, but was discovered in master)
- hashindex: fix iterators (always raise StopIteration when exhausted)
@@ -3206,7 +3206,7 @@ Bug fixes:
- do not sleep for >60s while waiting for lock, #773
- unpack file stats before passing to FUSE
- fix build on illumos
- don't try to backup doors or event ports (Solaris and derivates)
- don't try to backup doors or event ports (Solaris and derivatives)
- remove useless/misleading libc version display, #738
- test suite: reset exit code of persistent archiver, #844
- RemoteRepository: clean up pipe if remote open() fails

@@ -496,7 +496,7 @@ your repositories and it is not encrypted.
However, the assumption is that the cache is being stored on the very
same system which also contains the original files which are being
backed up. So someone with access to the cache files would also have
access the the original files anyway.
access the original files anyway.
The Internals section contains more details about :ref:`cache`. If you ever need to move the cache
to a different location, this can be achieved by using the appropriate :ref:`env_vars`.
@@ -507,7 +507,7 @@ How can I specify the encryption passphrase programmatically?
There are several ways to specify a passphrase without human intervention:
Setting ``BORG_PASSPHRASE``
The passphrase can be specified using the ``BORG_PASSPHRASE`` enviroment variable.
The passphrase can be specified using the ``BORG_PASSPHRASE`` environment variable.
This is often the simplest option, but can be insecure if the script that sets it
is world-readable.
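
For illustration, a minimal sketch of this approach (the passphrase, the repo path and the assumption that ``borg`` is on PATH are all placeholders) is to export the variable only for the child process, e.g. from a Python wrapper::

    import os
    import subprocess

    env = dict(os.environ, BORG_PASSPHRASE="s3cr3t")  # placeholder passphrase
    # the passphrase reaches borg through its environment, never the terminal
    subprocess.run(["borg", "list", "/path/to/repo"], env=env, check=True)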
@@ -630,7 +630,7 @@ C to delete all backups residing on S.
These are your options to protect against that:
- Do not allow to permanently delete data from the repo, see :ref:`append_only_mode`.
- Use a pull-mode setup using ``ssh -R``, see :ref:`pull_backup` for more informations.
- Use a pull-mode setup using ``ssh -R``, see :ref:`pull_backup` for more information.
- Mount C's filesystem on another machine and then create a backup of it.
- Do not give C filesystem-level access to S.
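
One way to realize the first of these options is an SSH forced command on S that pins the client's key to an append-only serve process; a hedged sketch of an ``authorized_keys`` entry (key, user name and repo path are placeholders)::

    command="borg serve --append-only --restrict-to-path /srv/backups/clientC.borg",restrict ssh-ed25519 AAAA... backup@clientC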

@@ -250,7 +250,7 @@ messages such as::
/Users/you/Pictures/Photos Library.photoslibrary: scandir: [Errno 1] Operation not permitted:
To fix this problem, you should grant full disk acccess to cron, and to your
To fix this problem, you should grant full disk access to cron, and to your
Terminal application. More information `can be found here
<https://osxdaily.com/2020/04/27/fix-cron-permissions-macos-full-disk-access/>`__.

@@ -55,7 +55,7 @@ time for the purposes of the log.
Config file
~~~~~~~~~~~
Each repository has a ``config`` file which which is a ``INI``-style file
Each repository has a ``config`` file which is a ``INI``-style file
and looks like this::
[repository]

@@ -108,7 +108,7 @@ the tampering.
Note that when using BORG_PASSPHRASE the attacker cannot swap the *entire*
repository against a new repository with e.g. repokey mode and no passphrase,
because Borg will abort access when BORG_PASSPRHASE is incorrect.
because Borg will abort access when BORG_PASSPHRASE is incorrect.
However, interactively a user might not notice this kind of attack
immediately, if she assumes that the reason for the absent passphrase

@@ -48,7 +48,7 @@ Depending on the amount of segments that need compaction, it may take a while,
so consider using the \fB\-\-progress\fP option.
.sp
A segment is compacted if the amount of saved space is above the percentage value
given by the \fB\-\-threshold\fP option. If ommitted, a threshold of 10% is used.
given by the \fB\-\-threshold\fP option. If omitted, a threshold of 10% is used.
When using \fB\-\-verbose\fP, borg will output an estimate of the freed space.
.sp
After upgrading borg (server) to 1.2+, you can use \fBborg compact \-\-cleanup\-commits\fP

@@ -66,11 +66,11 @@ Depending on the amount of segments that need compaction, it may take a while,
so consider using the ``--progress`` option.
A segment is compacted if the amount of saved space is above the percentage value
given by the ``--threshold`` option. If ommitted, a threshold of 10% is used.
given by the ``--threshold`` option. If omitted, a threshold of 10% is used.
When using ``--verbose``, borg will output an estimate of the freed space.
After upgrading borg (server) to 1.2+, you can use ``borg compact --cleanup-commits``
to clean up the numerous 17byte commit-only segments that borg 1.1 did not clean up
due to a bug. It is enough to do that once per repository.
See :ref:`separate_compaction` in Additional Notes for more details.
See :ref:`separate_compaction` in Additional Notes for more details.
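
To make the threshold rule concrete, here is an illustrative sketch of the decision described above (simplified pseudologic, not borg's actual implementation)::

    def should_compact(segment_size, used_bytes, threshold_pct=10):
        # compact when the reclaimable share of the segment exceeds the threshold
        freeable = segment_size - used_bytes
        return 100 * freeable / segment_size > threshold_pct

    should_compact(500_000, 460_000)  # False: only 8% would be freed
    should_compact(500_000, 400_000)  # True: 20% would be freed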

@@ -156,7 +156,7 @@ Examples::
Via ``--pattern`` or ``--patterns-from`` you can define BOTH inclusion and exclusion
of files using pattern prefixes ``+`` and ``-``. With ``--exclude`` and
``--exlude-from`` ONLY excludes are defined.
``--exclude-from`` ONLY excludes are defined.
Inclusion patterns are useful to include paths that are contained in an excluded
path. The first matching pattern is used so if an include pattern matches before

@@ -78,7 +78,7 @@ You can use this to not query and store (or not extract and set) flags - in case
you don't need them or if they are broken somehow for your fs.
On Linux, dealing with the flags needs some additional syscalls. Especially when
dealing with lots of small files, this causes a noticable overhead, so you can
dealing with lots of small files, this causes a noticeable overhead, so you can
use this option also for speeding up operations.
``--umask``

@@ -111,7 +111,7 @@ complete -c borg -l 'exclude-if-present' -d 'Exclude directories that
complete -c borg -f -l 'keep-exclude-tags' -d 'Keep tag files of excluded directories' -n "__fish_seen_subcommand_from create"
complete -c borg -f -l 'keep-tag-files' -d 'Keep tag files of excluded directories' -n "__fish_seen_subcommand_from create"
complete -c borg -f -l 'exclude-nodump' -d 'Exclude files flagged NODUMP' -n "__fish_seen_subcommand_from create"
# Filesytem options
# Filesystem options
complete -c borg -f -s x -l 'one-file-system' -d 'Stay in the same file system' -n "__fish_seen_subcommand_from create"
complete -c borg -f -l 'numeric-owner' -d 'Only store numeric user:group identifiers' -n "__fish_seen_subcommand_from create"
complete -c borg -f -l 'noatime' -d 'Do not store atime' -n "__fish_seen_subcommand_from create"

@@ -1158,7 +1158,7 @@ __borg_complete_keys() {
compset -S '[^A-Za-z]##*'
[[ -n $ISUFFIX ]] && compstate[to_end]=''
# NOTE: `[[ -n $ISUFFIX ]]` is a workarond for a bug that causes cursor movement to the right further than it should
# NOTE: `[[ -n $ISUFFIX ]]` is a workaround for a bug that causes cursor movement to the right further than it should
# NOTE: the _oldlist completer doesn't respect compstate[to_end]=''
local ipref suf

@@ -1,7 +1,7 @@
from distutils.version import LooseVersion
# IMPORTANT keep imports from borg here to a minimum because our testsuite depends on
# beeing able to import borg.constants and then monkey patching borg.constants.PBKDF2_ITERATIONS
# being able to import borg.constants and then monkey patching borg.constants.PBKDF2_ITERATIONS
from ._version import version as __version__
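
As an aside, the monkey patching that the comment refers to looks roughly like this (illustrative sketch; the value 1 and the exact call site in the test suite are assumptions)::

    import borg.constants
    borg.constants.PBKDF2_ITERATIONS = 1   # make key derivation cheap for tests
    # heavier borg imports happen only after the constant has been patched
    from borg.archiver import Archiver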

@@ -986,7 +986,7 @@ LZ4_FORCE_INLINE int LZ4_compress_generic(
_next_match:
/* at this stage, the following variables must be correctly set :
* - ip : at start of LZ operation
* - match : at start of previous pattern occurence; can be within current prefix, or within extDict
* - match : at start of previous pattern occurrence; can be within current prefix, or within extDict
* - offset : if maybe_ext_memSegment==1 (constant)
* - lowLimit : must be == dictionary to mean "match is within extDict"; must be == source otherwise
* - token and *token : position to write 4-bits for match length; higher 4-bits for literal length supposed already written
@@ -1340,8 +1340,8 @@ LZ4_stream_t* LZ4_createStream(void)
return lz4s;
}
#ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 :
it reports an aligment of 8-bytes,
#ifndef _MSC_VER /* for some reason, Visual fails the alignment test on 32-bit x86 :
it reports an alignment of 8-bytes,
while actually aligning LZ4_stream_t on 4 bytes. */
static size_t LZ4_stream_t_alignment(void)
{
@@ -1355,8 +1355,8 @@ LZ4_stream_t* LZ4_initStream (void* buffer, size_t size)
DEBUGLOG(5, "LZ4_initStream");
if (buffer == NULL) { return NULL; }
if (size < sizeof(LZ4_stream_t)) { return NULL; }
#ifndef _MSC_VER /* for some reason, Visual fails the aligment test on 32-bit x86 :
it reports an aligment of 8-bytes,
#ifndef _MSC_VER /* for some reason, Visual fails the alignment test on 32-bit x86 :
it reports an alignment of 8-bytes,
while actually aligning LZ4_stream_t on 4 bytes. */
if (((size_t)buffer) & (LZ4_stream_t_alignment() - 1)) { return NULL; } /* alignment check */
#endif

@@ -480,7 +480,7 @@ LZ4LIB_STATIC_API void LZ4_attach_dictionary(LZ4_stream_t* workingStream, const
/*! In-place compression and decompression
*
* It's possible to have input and output sharing the same buffer,
* for highly contrained memory environments.
* for highly constrained memory environments.
* In both cases, it requires input to lay at the end of the buffer,
* and decompression to start at beginning of the buffer.
* Buffer size must feature some margin, hence be larger than final size.

@@ -299,7 +299,7 @@ XXH_PUBLIC_API XXH32_hash_t XXH32 (const void* input, size_t length, XXH32_hash_
/******* Streaming *******/
/*
* Streaming functions generate the xxHash value from an incrememtal input.
* Streaming functions generate the xxHash value from an incremental input.
* This method is slower than single-call functions, due to state management.
* For small inputs, prefer `XXH32()` and `XXH64()`, which are better optimized.
*
@@ -835,7 +835,7 @@ XXH_PUBLIC_API XXH128_hash_t XXH128(const void* data, size_t len, XXH64_hash_t s
*
* The check costs one initial branch per hash, which is generally negligible, but not zero.
* Moreover, it's not useful to generate binary for an additional code path
* if memory access uses same instruction for both aligned and unaligned adresses.
* if memory access uses same instruction for both aligned and unaligned addresses.
*
* In these cases, the alignment check can be removed by setting this macro to 0.
* Then the code will always use unaligned memory access.
@@ -1044,7 +1044,7 @@ static xxh_u32 XXH_read32(const void* memPtr)
#endif /* XXH_FORCE_DIRECT_MEMORY_ACCESS */
/* *** Endianess *** */
/* *** Endianness *** */
typedef enum { XXH_bigEndian=0, XXH_littleEndian=1 } XXH_endianess;
/*!
@@ -1212,7 +1212,7 @@ static xxh_u32 XXH32_round(xxh_u32 acc, xxh_u32 input)
* UGLY HACK:
* This inline assembly hack forces acc into a normal register. This is the
* only thing that prevents GCC and Clang from autovectorizing the XXH32
* loop (pragmas and attributes don't work for some resason) without globally
* loop (pragmas and attributes don't work for some reason) without globally
* disabling SSE4.1.
*
* The reason we want to avoid vectorization is because despite working on
@@ -4629,7 +4629,7 @@ XXH128(const void* input, size_t len, XXH64_hash_t seed)
/*
* All the functions are actually the same as for 64-bit streaming variant.
* The only difference is the finalizatiom routine.
* The only difference is the finalization routine.
*/
static void

@@ -304,7 +304,7 @@ ZSTD_buildSuperBlockEntropy(seqStore_t* seqStorePtr,
* before we know the table size + compressed size, so we have a bound on the
* table size. If we guessed incorrectly, we fall back to uncompressed literals.
*
* We write the header when writeEntropy=1 and set entropyWrriten=1 when we succeeded
* We write the header when writeEntropy=1 and set entropyWritten=1 when we succeeded
* in writing the header, otherwise it is set to 0.
*
* hufMetadata->hType has literals block type info.

@@ -472,7 +472,7 @@ MEM_STATIC void ZSTD_cwksp_free(ZSTD_cwksp* ws, ZSTD_customMem customMem) {
/**
* Moves the management of a workspace from one cwksp to another. The src cwksp
* is left in an invalid state (src must be re-init()'ed before its used again).
* is left in an invalid state (src must be re-init()'ed before it's used again).
*/
MEM_STATIC void ZSTD_cwksp_move(ZSTD_cwksp* dst, ZSTD_cwksp* src) {
*dst = *src;

@@ -242,7 +242,7 @@ size_t ZSTD_compressBlock_fast_dictMatchState_generic(
assert(endIndex - prefixStartIndex <= maxDistance);
(void)maxDistance; (void)endIndex; /* these variables are not used when assert() is disabled */
/* ensure there will be no no underflow
/* ensure there will be no underflow
* when translating a dict index into a local index */
assert(prefixStartIndex >= (U32)(dictEnd - dictBase));

@@ -579,7 +579,7 @@ size_t ZSTD_ldm_blockCompress(rawSeqStore_t* rawSeqStore,
DEBUGLOG(5, "ZSTD_ldm_blockCompress: srcSize=%zu", srcSize);
assert(rawSeqStore->pos <= rawSeqStore->size);
assert(rawSeqStore->size <= rawSeqStore->capacity);
/* Loop through each sequence and apply the block compressor to the lits */
/* Loop through each sequence and apply the block compressor to the lists */
while (rawSeqStore->pos < rawSeqStore->size && ip < iend) {
/* maybeSplitSequence updates rawSeqStore->pos */
rawSeq const sequence = maybeSplitSequence(rawSeqStore,

@@ -1576,7 +1576,7 @@ note:
/* Construct the inverse suffix array of type B* suffixes using trsort. */
trsort(ISAb, SA, m, 1);
/* Set the sorted order of tyoe B* suffixes. */
/* Set the sorted order of type B* suffixes. */
for(i = n - 1, j = m, c0 = T[n - 1]; 0 <= i;) {
for(--i, c1 = c0; (0 <= i) && ((c0 = T[i]) >= c1); --i, c1 = c0) { }
if(0 <= i) {

@@ -924,7 +924,7 @@ ZSTDLIB_API size_t ZSTD_CCtx_loadDictionary(ZSTD_CCtx* cctx, const void* dict, s
* Reference a prepared dictionary, to be used for all next compressed frames.
* Note that compression parameters are enforced from within CDict,
* and supersede any compression parameter previously set within CCtx.
* The parameters ignored are labled as "superseded-by-cdict" in the ZSTD_cParameter enum docs.
* The parameters ignored are labelled as "superseded-by-cdict" in the ZSTD_cParameter enum docs.
* The ignored parameters will be used again if the CCtx is returned to no-dictionary mode.
* The dictionary will remain valid for future compressed frames using same CCtx.
* @result : 0, or an error code (which can be tested with ZSTD_isError()).
@@ -1237,7 +1237,7 @@ ZSTDLIB_API unsigned long long ZSTD_findDecompressedSize(const void* src, size_t
* `srcSize` must be the _exact_ size of this series
* (i.e. there should be a frame boundary at `src + srcSize`)
* @return : - upper-bound for the decompressed size of all data in all successive frames
* - if an error occured: ZSTD_CONTENTSIZE_ERROR
* - if an error occurred: ZSTD_CONTENTSIZE_ERROR
*
* note 1 : an error can occur if `src` contains an invalid or incorrectly formatted frame.
* note 2 : the upper-bound is exact when the decompressed size field is available in every ZSTD encoded frame of `src`.

@@ -1955,7 +1955,7 @@ class Archiver:
fd.write(data)
if args.ghost:
# dump ghosty stuff from segment files: not yet committed objects, deleted / superceded objects, commit tags
# dump ghosty stuff from segment files: not yet committed objects, deleted / superseded objects, commit tags
# set up the key without depending on a manifest obj
for id, cdata, tag, segment, offset in repository.scan_low_level():
@@ -2046,7 +2046,7 @@ class Archiver:
# try to locate wanted sequence in data
count = data.count(wanted)
if count:
offset = data.find(wanted) # only determine first occurance's offset
offset = data.find(wanted) # only determine first occurrence's offset
info = "%d %s #%d" % (i, id.hex(), count)
print_finding(info, wanted, data, offset)
@@ -2291,7 +2291,7 @@ class Archiver:
Via ``--pattern`` or ``--patterns-from`` you can define BOTH inclusion and exclusion
of files using pattern prefixes ``+`` and ``-``. With ``--exclude`` and
``--exlude-from`` ONLY excludes are defined.
``--exclude-from`` ONLY excludes are defined.
Inclusion patterns are useful to include paths that are contained in an excluded
path. The first matching pattern is used so if an include pattern matches before
@@ -3048,7 +3048,7 @@ class Archiver:
so consider using the ``--progress`` option.
A segment is compacted if the amount of saved space is above the percentage value
given by the ``--threshold`` option. If ommitted, a threshold of 10% is used.
given by the ``--threshold`` option. If omitted, a threshold of 10% is used.
When using ``--verbose``, borg will output an estimate of the freed space.
After upgrading borg (server) to 1.2+, you can use ``borg compact --cleanup-commits``
@@ -4574,7 +4574,7 @@ class Archiver:
value = getattr(client_result, attr_name, not_present)
if value is not not_present:
# note: it is not possible to specify a allowlisted option via a forced command,
# it always gets overridden by the value specified (or defaulted to) by the client commmand.
# it always gets overridden by the value specified (or defaulted to) by the client command.
setattr(result, attr_name, value)
return result

@@ -470,7 +470,7 @@ class ObfuscateSize(CompressorBase):
# f = 0.1 .. 1.0 for r in 0.1 .. 0.01 == in 9% of cases
# f = 1.0 .. 10.0 for r in 0.01 .. 0.001 = in 0.9% of cases
# f = 10.0 .. 100.0 for r in 0.001 .. 0.0001 == in 0.09% of cases
r = max(self.min_r, random.random()) # 0..1, but dont get too close to 0
r = max(self.min_r, random.random()) # 0..1, but don't get too close to 0
f = self.factor / r
return int(compr_size * f)

@@ -7,7 +7,7 @@ class Error(Exception):
"""Error: {}"""
# Error base class
# if we raise such an Error and it is only catched by the uppermost
# if we raise such an Error and it is only caught by the uppermost
# exception handler (that exits short after with the given exit_code),
# it is always a (fatal and abrupt) EXIT_ERROR, never just a warning.
exit_code = EXIT_ERROR

@@ -147,7 +147,7 @@ class ProgressIndicatorPercent(ProgressIndicatorBase):
# truncate the last argument, if no space is available
if info is not None:
if not self.json:
# no need to truncate if we're not outputing to a terminal
# no need to truncate if we're not outputting to a terminal
terminal_space = get_terminal_size(fallback=(-1, -1))[0]
if terminal_space != -1:
space = terminal_space - len(self.msg % tuple([pct] + info[:-1] + ['']))

@@ -343,7 +343,7 @@ class Lock:
"""
A Lock for a resource that can be accessed in a shared or exclusive way.
Typically, write access to a resource needs an exclusive lock (1 writer,
noone is allowed reading) and read access to a resource needs a shared
no one is allowed reading) and read access to a resource needs a shared
lock (multiple readers are allowed).
If possible, try to use the contextmanager here like::

@@ -117,7 +117,7 @@ def setup_logging(stream=None, conf_fname=None, env_var='BORG_LOGGING_CONF', lev
def find_parent_module():
"""find the name of a the first module calling this module
"""find the name of the first module calling this module
if we cannot find it, we return the current module's name
(__name__) instead.

@@ -2012,7 +2012,7 @@ Sha256.Maj = function(x, y, z) { return (x & y) ^ (x & z) ^ (y & z); };
* @private
*/
Sha256.toHexStr = function(n) {
// note can't use toString(16) as it is implementation-dependant,
// note can't use toString(16) as it is implementation-dependent,
// and in IE returns signed numbers when used on full words
var s="", v;
for (var i=7; i>=0; i--) { v = (n>>>(i*4)) & 0xf; s += v.toString(16); }

@@ -374,7 +374,7 @@ def parse_inclexcl_command(cmd_line_str, fallback=ShellPattern):
cmd = cmd_prefix_map.get(cmd_line_str[0])
if cmd is None:
raise argparse.ArgumentTypeError("A pattern/command must start with any one of: %s" %
raise argparse.ArgumentTypeError("A pattern/command must start with anyone of: %s" %
', '.join(cmd_prefix_map))
# remaining text on command-line following the command character

@@ -1071,7 +1071,7 @@ class Repository:
"""Very low level scan over all segment file entries.
It does NOT care about what's committed and what not.
It does NOT care whether an object might be deleted or superceded later.
It does NOT care whether an object might be deleted or superseded later.
It just yields anything it finds in the segment files.
This is intended as a last-resort way to get access to all repo contents of damaged repos,

@@ -235,7 +235,7 @@ class BaseTestCase(unittest.TestCase):
# the borg mount daemon to work properly or the tests
# will just freeze. Therefore, if argument `fork` is not
# specified, the default value is `True`, regardless of
# `FORK_DEFAULT`. However, leaving the possibilty to run
# `FORK_DEFAULT`. However, leaving the possibility to run
# the command with `fork = False` is still necessary for
# testing for mount failures, for example attempting to
# mount a read-only repo.

@@ -2009,12 +2009,12 @@ class ArchiverTestCase(ArchiverTestCaseBase):
assert extracted_data == data
def test_create_read_special_broken_symlink(self):
os.symlink('somewhere doesnt exist', os.path.join(self.input_path, 'link'))
os.symlink('somewhere does not exist', os.path.join(self.input_path, 'link'))
self.cmd('init', '--encryption=repokey', self.repository_location)
archive = self.repository_location + '::test'
self.cmd('create', '--read-special', archive, 'input')
output = self.cmd('list', archive)
assert 'input/link -> somewhere doesnt exist' in output
assert 'input/link -> somewhere does not exist' in output
# def test_cmdline_compatibility(self):
# self.create_regular_file('file1', size=1024 * 80)

@@ -192,7 +192,7 @@ class HashIndexRefcountingTestCase(BaseTestCase):
idx = ChunkIndex()
idx[H(1)] = ChunkIndex.MAX_VALUE - 1, 1, 2
# 5 is arbitray, any number of incref/decrefs shouldn't move it once it's limited
# 5 is arbitrary, any number of incref/decrefs shouldn't move it once it's limited
for i in range(5):
# first incref to move it to the limit
refcount, *_ = idx.incref(H(1))