1. when creating announce URLs in announcer-http.c
2. when creating scrape URLs in announcer-http.c
3. when deep-logging what announces are in a torrent's queue in announcer.c
The problem was that the new tracker count was not being stored; the old count was retained. So if the count changed, tr_torrentTrackers() could return dangling pointers to the caller.
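A minimal sketch of the shape of the fix, using hypothetical names rather than the actual announcer fields: whenever the tracker array is rebuilt, its count must be stored alongside it so that tr_torrentTrackers() never reports more entries than actually exist.

    #include <stdlib.h>

    struct tracker_info { char * announce; };

    struct tracker_list
    {
        struct tracker_info * trackers;
        int                   trackerCount; /* must always match the array */
    };

    static void
    tracker_list_set( struct tracker_list * list, struct tracker_info * trackers, int n )
    {
        free( list->trackers );
        list->trackers = trackers;
        list->trackerCount = n; /* store the new count instead of keeping the stale one */
    }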
get_next_scrape_time() was introduced in r12297. The rationale is that by rounding all scrape times to the nearest multiple of ten seconds, they will tend to occur in batches and improve multiscrape.
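The idea can be sketched like this (the signature below is an assumption for the example, not the real one in announcer.c):

    #include <time.h>

    static time_t
    get_next_scrape_time( time_t now, int interval_secs )
    {
        time_t t = now + interval_secs;

        /* round to the nearest multiple of ten seconds so that scrapes
         * scheduled at roughly the same time land on the same tick and
         * can be combined into a single multiscrape request */
        t = ( ( t + 5 ) / 10 ) * 10;

        return t;
    }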
If a private tracker scrape says that there are no downloaders in the swarm, mark all the peers in the private swarm as seeds. This can greatly reduce unnecessary overhead on large seedboxes. We don't do this same trick on public torrents, since a public tracker won't know of all the peers.
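A rough sketch of that heuristic, with made-up names for the swarm and peer fields:

    #include <stdbool.h>

    struct peer { bool is_seed; };

    static void
    on_scrape_response( bool swarm_is_private, int downloader_count,
                        struct peer ** peers, int peer_count )
    {
        /* a private tracker knows every peer in the swarm, so if it reports
         * zero downloaders, everyone we're connected to must be a seed */
        if( swarm_is_private && ( downloader_count == 0 ) )
            for( int i = 0; i < peer_count; ++i )
                peers[i]->is_seed = true;
    }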
There is some overlap here, since the uTP code can be closed more or less immediately, but the UDP manager needs to stay open in order to process the UDP tracker connection requests before sending out event=stopped. Moreover, the DNS resolver can be shut down after the UDP tracker is shut down.
We can stop local peer discovery immediately during shutdown, but need to leave the announcer running for the event=stopped messages. So it doesn't make sense to keep them on the same periodic timer.
During shutdown, we can stop the DHT almost immediately, but need to leave the announcer running for the tracker event=stopped messages. So it doesn't make sense to keep them on the same periodic timer.
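These changes share one idea: give each subsystem its own timer so it can be torn down on its own schedule. A conceptual libevent-flavored sketch; the struct and function names are hypothetical, not the real session code:

    #include <stddef.h>
    #include <event2/event.h>

    struct session_timers
    {
        struct event * lpd_timer;        /* safe to free immediately at shutdown */
        struct event * dht_timer;        /* safe to free immediately at shutdown */
        struct event * announcer_timer;  /* must keep firing until the
                                            event=stopped announces are sent */
    };

    static void
    session_begin_shutdown( struct session_timers * t )
    {
        event_free( t->lpd_timer );  t->lpd_timer = NULL;
        event_free( t->dht_timer );  t->dht_timer = NULL;
        /* announcer_timer is freed later, once the stop announces have flushed */
    }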
1. remove duplicate URLs caused by implicit vs. explicit port numbers
2. if two announce URLs are duplicates /except/ for their scheme, put them in the same tier.
3. try announce URLs with a "udp" scheme before trying ones with an "http" scheme. (A simplified sketch of these rules follows this list.)
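A simplified sketch of those rules using plain string comparisons. The real tier-building code in announcer.c is more thorough; in particular, rule 1 also needs to normalize implicit vs. explicit default ports, which is omitted here:

    #include <stdbool.h>
    #include <string.h>

    /* rule 3: within a tier, try udp:// announce URLs before http:// ones */
    static int
    compare_scheme_preference( const char * a, const char * b )
    {
        const bool a_udp = strncmp( a, "udp://", 6 ) == 0;
        const bool b_udp = strncmp( b, "udp://", 6 ) == 0;

        if( a_udp != b_udp )
            return a_udp ? -1 : 1;

        return 0;
    }

    /* rule 2: URLs that are identical except for their scheme
     * belong in the same tier */
    static bool
    belong_in_same_tier( const char * a, const char * b )
    {
        const char * a_rest = strstr( a, "://" );
        const char * b_rest = strstr( b, "://" );

        return ( a_rest != NULL ) && ( b_rest != NULL )
            && ( strcmp( a_rest, b_rest ) == 0 );
    }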
When seeding on a private tracker, ocelot omits other seeds from the list of peers returned. We can test for this and update our "seed probability" field accordingly.
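A sketch of how that test might look, with hypothetical field names standing in for the announcer's per-peer seed probability:

    #include <stdbool.h>

    /* hypothetical: 0..100 = percent chance the peer is a seed, -1 = unknown */
    struct returned_peer { int seed_probability; };

    static void
    on_announce_peers( bool we_are_seeding, bool tracker_is_private,
                       struct returned_peer * peers, int n )
    {
        /* if ocelot omitted the other seeds, every peer it did return
         * must be a leecher */
        if( we_are_seeding && tracker_is_private )
            for( int i = 0; i < n; ++i )
                peers[i].seed_probability = 0;
    }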
This commit adds a set of package-visible structs and functions to allow delegating announces and scrapes to different protocol handlers. (Examples: struct tr_announce_request, struct tr_announce_response, struct tr_scrape_request, struct tr_scrape_response.) HTTP is the only protocol handler currently implemented; however, this provides a clean API for other protocol handlers, and having this in trunk will help shake out any bugs in this refactoring.
In addition, logging via the TR_DEBUG_FD environment variable has been vastly improved in the announcer module.
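An illustrative sketch of the delegation pattern: the struct names below are the ones named in this commit, but the fields and the http_announce() entry point are assumptions made for the example, not the real declarations.

    #include <stdint.h>

    struct tr_announce_request  { char * url; uint8_t info_hash[20]; /* ... */ };
    struct tr_announce_response { int interval; int seeders; int leechers; /* ... */ };

    typedef void ( *tr_announce_response_func )( const struct tr_announce_response * response,
                                                 void * user_data );

    /* each protocol handler exposes the same shape of entry point, so the
     * announcer can dispatch on the URL's scheme without knowing the wire
     * format; HTTP is the only handler implemented so far */
    void http_announce( const struct tr_announce_request * req,
                        tr_announce_response_func          callback,
                        void *                             user_data );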
This patch adds two new flags to the callback function -- did_connect and did_timeout -- that are calculated inside web.c using information from libcurl. This allows the announcer to detect timeouts more accurately and also to distinguish between unresponsive trackers (which get the preexisting "Tracker did not respond" error message) and unconnectable trackers (which get a new error message, "Could not connect to tracker").
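A sketch of how such flags can be derived from libcurl inside web.c; the helper name and the exact heuristics here are illustrative, not the actual code:

    #include <stdbool.h>
    #include <curl/curl.h>

    static void
    get_connection_flags( CURL * handle, CURLcode result,
                          bool * did_connect, bool * did_timeout )
    {
        double connect_time = 0.0;

        /* CURLINFO_CONNECT_TIME stays at zero if the TCP connect never completed */
        curl_easy_getinfo( handle, CURLINFO_CONNECT_TIME, &connect_time );

        *did_connect = connect_time > 0.0;
        *did_timeout = ( result == CURLE_OPERATION_TIMEDOUT );
    }

With those two bits, the announcer can report "Could not connect to tracker" when did_connect is false and "Tracker did not respond" when did_timeout is set.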
The 'bad tracker' penalty was introduced in 2009 after a top-tier tracker went down. Announces to it would hang, tying up an announce slot in libcurl for minutes at a time. If a user had enough torrents from that tracker, it could bottleneck all announce slots. The workaround was to deprioritize failing trackers so that they wouldn't obstruct other trackers.
Its implementation could be better, however. There are two parts:
1. Deciding how frequently to retry unresponsive trackers
2. Once an unresponsive tracker's announce was ready to go, it would be bumped down the queue if other announces were ready too.
Part 2 probably contributes to #3931. If there are enough torrents loaded, there will always be good tracker announces that get pushed ahead of a bad one in the queue. Modifying Part 2's heuristics would be one option, but it seems simpler to remove it altogether now that getRetryInterval() grades more harshly for consecutive failures. Altering the retry interval also gives better visual feedback to users than Part 2 did.
This commit removes "Part 2" as described above.
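For reference, the kind of backoff described above looks roughly like this; the intervals are illustrative, not the exact values used by getRetryInterval():

    /* each consecutive failure pushes the next retry further out, so a bad
     * tracker naturally stops competing with healthy ones for announce slots */
    static int
    getRetryInterval( int consecutive_failures )
    {
        switch( consecutive_failures )
        {
            case 0:  return 0;            /* first attempt: no delay */
            case 1:  return 20;           /* 20 seconds */
            case 2:  return 60 * 5;       /* 5 minutes */
            case 3:  return 60 * 15;      /* 15 minutes */
            case 4:  return 60 * 30;      /* 30 minutes */
            default: return 60 * 60;      /* an hour between further retries */
        }
    }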