How to request a new tracker

If you want to request a new indexer for a web site, use the indexer-request issue template here and fill in the requested details on the form.

Definitions:

  • a web site where you can download torrents without registering an account (signup) is Public
  • a web site where you must have an account to download torrents, and where registration (signup) is always open, is Semi-Private
  • a web site which does not allow access without an account, and where registration (signup) is currently closed (invitation only) or only opens for short intervals (for example, one week a year), is Private.

Notes:

  • If the site is Private and registration is closed, we will need an invite in order to work on it. So please obtain an invite and be ready to invite the staff member who will implement the indexer.
  • If the Private site is currently open for registration, then include this info in the ticket.
  • The site must have a search page.
  • The site must provide a .torrent URL and/or a magnet URI.

What makes a Site suitable (or not suitable) for Indexing:

  • At the very minimum, a site should be able to search for a movie or TV series episode by name, and also return recent releases.
  • Sites that only provide direct downloads (DDL) to the browser, direct play/view, or redirection links to file storage sites or paywall services, are not suitable for use with a Jackett indexer.
  • On the site's search results page:
    • If there are titles (or posters) for each torrent/magnet, then we can usually code an indexer in yaml using the Cardigann processor (see the yaml sketch at the end of this page).
    • If there are titles (or posters) with groups of related torrents/magnets, then the indexer can usually be coded in C#.
    • However, if there are only titles (or posters), and you have to navigate to the corresponding details page to find the multiple torrents/magnets, then the site is not suitable for indexing. To complete the torrent search results, Jackett would have to fetch the details page for every title on the search results page (potentially up to 100 pages per search). No site is going to want that kind of bulk traffic, and it is a quick way to get your IP banned for bot activity, especially when teamed with automation software like Sonarr or Radarr.
  • If the site supports an API:
    • then this is preferred over the other method, HTML scraping.
    • If the API returns JSON or XML, then in most cases we can write the indexer in yaml.
    • If the API returns very complex results that cannot be processed with yaml, then we will need to resort to C#.
  • If an API is not available:
    • then as long as the site's results page is plain HTML, yaml can be used.
    • However, if the search results are dynamically generated via JavaScript calls, then C# will be needed.
  • Yaml indexers are simple to write and maintain, and usually the turnaround from request to implementation is fairly short.
  • C# indexers do not have a quick turnaround from request to implementation, because C# developers are scarce and/or busy on other projects.
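
For reference, a Cardigann yaml definition for a simple HTML-scraping Public site looks roughly like the sketch below. The site name, URL, search path and CSS selectors are all invented for illustration; the real field list and many complete examples can be found in the yaml definitions shipped with Jackett (under src/Jackett.Common/Definitions in the repository).

```yaml
---
# Minimal Cardigann-style definition sketch (illustrative only).
# "examplesite", its URL, search path and selectors are made up.
id: examplesite
name: ExampleSite
description: "ExampleSite is a PUBLIC site for MOVIES / TV"
language: en-US
type: public
encoding: UTF-8
links:
  - https://examplesite.example/

caps:
  categorymappings:
    - {id: 1, cat: Movies, desc: "Movies"}
    - {id: 2, cat: TV, desc: "TV"}
  modes:
    search: [q]
    tv-search: [q, season, ep]
    movie-search: [q]

search:
  paths:
    # hypothetical search endpoint on the site
    - path: search.php
  inputs:
    q: "{{ .Keywords }}"
  rows:
    # one row per torrent in the site's HTML results table
    selector: table.results > tbody > tr
  fields:
    category:
      text: 1
    title:
      selector: td:nth-child(1) a
    details:
      selector: td:nth-child(1) a
      attribute: href
    download:
      # the site must expose a .torrent link or a magnet URI
      selector: td:nth-child(2) a[href$=".torrent"]
      attribute: href
    size:
      selector: td:nth-child(3)
    seeders:
      selector: td:nth-child(4)
    leechers:
      selector: td:nth-child(5)
```

This is only a sketch of the general shape: sites with an API, JSON/XML responses, login forms or extra filters need additional blocks, and the existing definitions are the best guide for those.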