update-version-h.sh tries to use {{{svnversion}}} when possible. When it can't, it looks through the "$Id:" lines in the source files' comments and uses the largest version number it finds. The new files tr-dht.[ch] didn't have their $Id: comment lines formatted the way update-version-h.sh expected. tr-dht.[ch]'s $Id: lines have been homogenized to match everyone else's...
rename() doesn't have the same behavior on Windows: it fails if the destination file already exists. On that platform, we should use MoveFileEx( oldpath, newpath, MOVEFILE_REPLACE_EXISTING ) instead. Thanks to rb07 for testing & confirming the fix.
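As a sketch of the kind of wrapper this implies (the name tr_rename_replace is illustrative, not Transmission's actual API):
{{{
#include <stdio.h>   /* rename() */

#ifdef _WIN32
 #include <windows.h> /* MoveFileEx() */
#endif

/* Rename 'oldpath' to 'newpath', replacing 'newpath' if it exists.
 * POSIX rename() already guarantees this; the Windows CRT's rename()
 * fails when the target exists, so MoveFileEx() is used there instead.
 * Returns 0 on success, -1 on failure. */
static int
tr_rename_replace( const char * oldpath, const char * newpath )
{
#ifdef _WIN32
    return MoveFileExA( oldpath, newpath, MOVEFILE_REPLACE_EXISTING ) ? 0 : -1;
#else
    return rename( oldpath, newpath );
#endif
}
}}}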
When libtransmission gets a "remove torrent" request from RPC, it tries to delegate the work. This is because the GTK+ and Mac clients don't want torrents disappearing out from under them in a different thread, causing possible threading issues. So the GTK+ and Mac clients get notified of the request via libtransmission's RPC callback and remove the torrents themselves. Unfortunately, that notification doesn't include information about whether or not to delete local data.
This commit adds that information to the RPC callback so that the Mac and GTK+ clients will know whether or not to trash the local files when a third-party RPC client requests that a torrent and its files be deleted.
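A rough sketch of how a client's RPC callback might consume that flag; TR_RPC_TORRENT_TRASHING and remove_torrent_in_ui_thread are assumed/illustrative names rather than confirmed API:
{{{
#include <stdbool.h>
#include <libtransmission/transmission.h>

/* hypothetical client-side helper; a real client would marshal this
 * call over to its UI thread */
extern void remove_torrent_in_ui_thread( tr_torrent * tor, bool trash_local_data );

/* Sketch of a client RPC callback that honors the new flag, assuming
 * TR_RPC_TORRENT_TRASHING is the "remove AND delete local data"
 * counterpart to TR_RPC_TORRENT_REMOVING. */
static tr_rpc_callback_status
on_rpc_changed( tr_session * session, tr_rpc_callback_type type,
                tr_torrent * tor, void * user_data )
{
    (void) session;
    (void) user_data;

    switch( type )
    {
        case TR_RPC_TORRENT_REMOVING:
            remove_torrent_in_ui_thread( tor, false ); /* keep the files */
            return TR_RPC_NOREMOVE; /* "I'll remove it myself" */

        case TR_RPC_TORRENT_TRASHING:
            remove_torrent_in_ui_thread( tor, true );  /* trash the files too */
            return TR_RPC_NOREMOVE;

        default:
            return TR_RPC_OK;
    }
}
}}}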
The 'bad tracker' penalty was introduced in 2009 after a top-tier tracker went down. Announces to it would hang, tying up an announce slot in libcurl for minutes at a time. If a user had enough torrents from that tracker, it could bottleneck all the announce slots. The workaround was to deprioritize failing trackers so that they wouldn't obstruct other trackers.
Its implementation could be better, however. There are two parts:
1. Deciding how frequently to retry an unresponsive tracker.
2. When an unresponsive tracker's announce finally was ready to go, bumping it down the queue if other announces were ready too.
Part 2 probably contributes to #3931. If enough torrents are loaded, there will always be good tracker announces getting pushed ahead of a bad one in the queue. Modifying Part 2's heuristics would be one option, but it seems simpler to remove it altogether now that getRetryInterval() grades consecutive failures more harshly. Altering the retry interval also gives users better visual feedback than Part 2 did.
This commit removes "Part 2" as described above.
When saving a tr_benc object to disk at $dst, bencode.c saves it to a tmp file in the same directory as $dst, unlinks $dst if it exists, and then renames $tmp as $dst. This commit removes the middle step, which is unnecessary because rename() has guarantees about atomically overwriting $dst.
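For reference, the simplified shape of that pattern (a sketch with trimmed error handling; the function name is illustrative):
{{{
#include <stdio.h>   /* snprintf(), fdopen(), fwrite(), fclose(), rename() */
#include <stdlib.h>  /* mkstemp() */

/* Write to a temp file, then rename() it over the destination. */
static int
save_atomically( const char * dst, const void * buf, size_t buflen )
{
    char tmp[4096];
    int fd;
    FILE * fp;

    /* keep the temp file in the same directory as 'dst' so that
     * rename() doesn't have to cross filesystems */
    snprintf( tmp, sizeof( tmp ), "%s.tmp.XXXXXX", dst );

    if( ( fd = mkstemp( tmp ) ) < 0 )
        return -1;

    fp = fdopen( fd, "wb" );
    fwrite( buf, 1, buflen, fp );
    fclose( fp );

    /* rename() atomically replaces 'dst' if it already exists,
     * so no separate unlink() step is needed */
    return rename( tmp, dst );
}
}}}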
r11813 fixed the timestamp issue by fsync()ing files before close()ing them in tr_close_file(). This adds a little overhead: even files opened read-only get synced, since their atimes have been modified. Instead, we should call fsync() further back in the call chain, in tr_fdFileClose(), where we know the file's open mode and can sync only the torrent files that were opened with write access.
fsync() doesn't exist on Windows. bencode.c has a private function, tr_fsync(), which is a portability wrapper around fsync() on *nix and _commit() on Win32. Make this function package-visible, rather than private, so that fdlimit.c can use it too.
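Roughly, the wrapper looks like this (a sketch, not the exact bencode.c code):
{{{
#ifdef _WIN32
 #include <io.h>      /* _commit() */
#else
 #include <unistd.h>  /* fsync() */
#endif

/* Flush a file descriptor's data to disk on both platforms.
 * Returns 0 on success, -1 on failure. */
static int
tr_fsync( int fd )
{
#ifdef _WIN32
    return _commit( fd );
#else
    return fsync( fd );
#endif
}
}}}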
As pointed out by longinus00 and ijuxda, storing per-piece timestamps in the .resume file can involve a lot of overhead. This commit reduces that overhead with a couple of optimizations: (1) in cases where *all* or *none* of a file's pieces were checked after the file's mtime, we can safely fold the pieces' mtimes into a single per-file mtime. (2) since a unix timestamp takes up a lot of space when rendered as a benc integer, find a common per-file "baseline" number, then store each piece's timestamp as an offset from that baseline. Also add documentation explaining this new format, and better explaining the pre-2.20 progress format.
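A minimal sketch of the baseline-plus-offsets idea; the names are illustrative and the real .resume layout has more to it than this:
{{{
#include <stddef.h>  /* size_t */
#include <time.h>    /* time_t */

/* Rather than writing each piece's full unix timestamp, pick one
 * baseline per file and store small per-piece offsets from it.
 * Assumes n > 0. */
static void
encode_piece_times( const time_t * piece_mtimes, size_t n,
                    time_t       * setme_baseline,
                    time_t       * setme_offsets )
{
    size_t i;
    time_t baseline = piece_mtimes[0];

    /* use the smallest timestamp as the baseline so every offset is >= 0
     * and renders as a short benc integer */
    for( i = 1; i < n; ++i )
        if( piece_mtimes[i] < baseline )
            baseline = piece_mtimes[i];

    for( i = 0; i < n; ++i )
        setme_offsets[i] = piece_mtimes[i] - baseline;

    *setme_baseline = baseline;
}
}}}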
Files downloaded in Transmission 2.20 betas [1..3] forced each piece to be checked twice -- once on download, and once when uploading the piece for the first time. Older versions of Transmission didn't perform the latter check unless the file had changed after it was downloaded. This commit restores that behavior.
#3956's r11780 has uncovered a longstanding memory error that occurs when tr_bencParse() fails to parse a dict and leaves a dangling key. This is fixed by cleaning up the key.
tr_peerIoReconnect() was calling event_del() rather than event_free() on its io.event_read and io.event_write fields, causing those events to be leaked. This leak is new with the libevent 2 support and doesn't affect Transmission 2.1x or older.
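For context, a small sketch of the libevent 2 distinction:
{{{
#include <event2/event.h>

/* event_del() only removes an event from libevent's pending/active
 * lists, while event_free() also releases the struct that event_new()
 * allocated. A reconnect path that calls only event_del() before
 * creating fresh events leaks the old structs. */
static void
teardown_io_events( struct event * ev_read, struct event * ev_write )
{
    /* not enough on its own -- the structs stay allocated:
     *     event_del( ev_read );
     *     event_del( ev_write );
     */

    /* makes the events non-pending, then frees them: */
    event_free( ev_read );
    event_free( ev_write );
}
}}}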
User jusid reports prefetch causes load on his NMT to jump from <1 to 3-4. He requests a way to disable prefetch, and suggests that prefetch be disabled by default on lightweight builds. This commit adds a new settings.json key, "prefetch-enabled", which defaults to "true" on standard builds and "false" when compiled with --enable-lightweight.
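A sketch of how such a compile-time default might be wired up; TR_LIGHTWEIGHT and the helper name are illustrative stand-ins for whatever --enable-lightweight actually defines:
{{{
#include <stdbool.h>
#include "bencode.h" /* tr_benc, tr_bencDictAddBool() */

#ifdef TR_LIGHTWEIGHT
 static const bool PREFETCH_DEFAULT = false;
#else
 static const bool PREFETCH_DEFAULT = true;
#endif

static void
add_prefetch_default( tr_benc * settings )
{
    /* written to settings.json as "prefetch-enabled": true|false */
    tr_bencDictAddBool( settings, "prefetch-enabled", PREFETCH_DEFAULT );
}
}}}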
Don't add linefeeds to base64-encoded data. We don't need them, and they just increase the length of the string, which is typically sent over the network to an RPC client.
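A sketch of a linefeed-free encode, assuming an OpenSSL BIO-based encoder like libtransmission's; BIO_FLAGS_BASE64_NO_NL is what suppresses the newline otherwise inserted every 64 characters:
{{{
#include <stdlib.h>          /* malloc() */
#include <string.h>          /* memcpy() */
#include <openssl/bio.h>
#include <openssl/buffer.h>  /* BUF_MEM */
#include <openssl/evp.h>     /* BIO_f_base64() */

/* base64-encode 'input' into a newly-allocated NUL-terminated string,
 * with no embedded linefeeds */
static char *
base64_encode_no_nl( const void * input, int length )
{
    BIO     * b64 = BIO_new( BIO_f_base64( ) );
    BIO     * mem = BIO_new( BIO_s_mem( ) );
    BUF_MEM * bptr;
    char    * out;

    BIO_set_flags( b64, BIO_FLAGS_BASE64_NO_NL ); /* <-- the key line */
    b64 = BIO_push( b64, mem );
    BIO_write( b64, input, length );
    (void) BIO_flush( b64 );
    BIO_get_mem_ptr( mem, &bptr );

    out = malloc( bptr->length + 1 );
    memcpy( out, bptr->data, bptr->length );
    out[bptr->length] = '\0';

    BIO_free_all( b64 );
    return out;
}
}}}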
jusid reports that powl() doesn't exist on uClibc, so getRetryInterval() needs to work without it. A simple left bit-shift would be fine, but since we max out after a handful of cases, a switch statement seems slightly more readable than shifting or powl().
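Something along these lines (the intervals shown are illustrative, not the exact values used):
{{{
/* Backoff schedule that grows with consecutive failures and maxes out
 * after a handful of cases -- no powl() needed. */
static int
getRetryInterval( int consecutive_failures )
{
    switch( consecutive_failures )
    {
        case 0:  return 30;        /* 30 seconds */
        case 1:  return 60 * 2;    /* 2 minutes */
        case 2:  return 60 * 15;   /* 15 minutes */
        case 3:  return 60 * 30;   /* 30 minutes */
        case 4:  return 60 * 60;   /* 1 hour */
        default: return 60 * 120;  /* cap at 2 hours */
    }
}
}}}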
benc requires its dictionaries to be represented in sorted form, so we sort them before walking across the entries. However, that's overkill when all we're doing is freeing memory, so this commit adds a mechanism in the benc walker to optionally skip the sorting step.
TR_EMBEDDED has been around for a while, but few (any?) repackagers are aware of it. If it were "advertised" through a configure-time argument, and in the status messages at the end of the configure script, more repackagers would be aware of it.
Super-poussin says some ReadyNAS users are reporting high CPU loads in 2.20 beta 1. My guess is this is due to pieces being reverified. Before now, the .resume files kept timestamps per-file, and 2.20 keeps timestamps per-piece. The problem is that 2.20 beta 1 didn't support reading the older per-file timestamps from .resume files, so users loading up 2.20 beta 1 may find Transmission thinks none of the pieces in their torrents have been verified.
The fix is to have 2.20 beta 2 read the old per-file timestamps, so upgrading from 2.1x to 2.20 will go smoothly. That's what this commit does.
Unfortunately, the ReadyNAS users who have already been bitten by this will continue to be bitten until they reverify their files. 2.20 beta 1, which thinks all those pieces were never verified, has probably already overwritten the .resume files from 2.1x... :(
This is caused by libtransmission using tr_webseedIsActive() in two ways: (1) webseed.c uses it to know if there are any pending requests, and (2) tr_torrentStat() uses it to set tr_stat.webseedsSendingToUs. Having a queued task isn't enough to count as "active" for use (2) -- it needs to know whether the webseed is actually sending data. These two uses should be moved into separate functions.
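A sketch of what that split might look like; the struct fields and function names are illustrative, not webseed.c's actual internals:
{{{
#include <stdbool.h>

/* illustrative stand-in for webseed.c's private state */
struct tr_webseed
{
    int queued_task_count;   /* requests waiting to be issued */
    int active_task_count;   /* requests currently transferring data */
};

/* use (1): webseed.c wants to know if there's any pending work at all */
static bool
webseedHasTasks( const struct tr_webseed * w )
{
    return ( w->queued_task_count > 0 ) || ( w->active_task_count > 0 );
}

/* use (2): tr_torrentStat() only cares whether data is actually flowing,
 * for tr_stat.webseedsSendingToUs */
static bool
webseedIsDownloading( const struct tr_webseed * w )
{
    return w->active_task_count > 0;
}
}}}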
Use tor->isRunning, rather than tier->isRunning, when testing whether a torrent can manually announce. tor->isRunning means we're in between the user pressing "start" and "stop." tier->isRunning means we're in between successful "started" and "stopped" announcements. (The two are deliberately out of sync, because it can take a while for tracker announces to finish.)
Under the old code (using tier->isRunning), it was impossible to manually reannounce unless there had been a successful event=started announce first. It was also possible to manually announce after the "stop" button had been pressed if the "event=stopped" announcement hadn't finished yet.