FIX: When retrieving feeds from 32P in Auth mode, personal notification feeds contained some invalid HTML entries that weren't removed properly, resulting in no results when attempting to match for downloading.
FIX: When searching 32P, if a title had a '/' in it, Mylar would mistakenly skip it due to some previous exceptions that were made for CBT.
FIX: Main page would briefly display and then hide the Have % column instead of it always being hidden.
FIX: Adjusted some incorrect spacing for non-alphanumeric characters when comparing search results (should result in better matching).
FIX: When adding a series whose most recent issue was present on the weekly pull list, Mylar would sometimes not mark it as Wanted and auto-attempt to search for it (if auto mark Upcoming is enabled).
FIX: Added a Test Connection button for 32P that tests logon credentials as well as whether a captcha is present.
IMP: If captcha is enabled for 32P and sign-on is required because keys are stale, Mylar will not send authentication information and will simply bypass 32P as a provider.
IMP: Test Connection button added for SABnzbd.
IMP: Added the ability to add torrents directly to rTorrent and apply label + download directory options (config.ini only at the moment).
FIX: If a search result had a 'vol.' label in it, depending on the format of the label Mylar would refuse to remove the volume, which resulted in failed matches (also fixed a similar failure to remove the volume label when comparing search results).
FIX: When filechecking, a '-' in a series title is now accounted for properly.
IMP: Completely redid the filecheck module, which allows integration into other modules as well as more detailed failure logs.
IMP: Added Dynamic Handler integration into filechecker and the modules that use it, which allows special characters to be replaced with any other type of character.
IMP: Manual post-processing speed improved greatly due to the new filecheck module.
IMP: Importer backend code redone to use the new filecheck module.
IMP: Added a status/counter to the import process.
IMP: Added a force-unlock option to the importer for failed imports.
IMP: Added a new import status labelled 'Manual Intervention' for imports that need the user to manually select an option from an available list.
FIX: When the import said there were search results to view but none were available, the screen would go blank.
IMP: Added a failure log entry showing all the failed files that couldn't be scanned in during an import (will be in the GUI eventually).
IMP: If only partial metadata is available during an import, Mylar will attempt to use what's available from the metatagging instead of picking all of one or the other.
IMP: Better grouping of series/volumes when viewing the import results page, which now also indicates whether annuals are present within the files.
IMP: Added a file icon beside each imported item on the import results page, which lets the user view the files associated with the given series grouping.
IMP: Added a blacklisted_publishers option to config.ini that blacklists specific publishers from being returned in search / import results.
FIX: If the duplicate dump folder had a value but duplicate dump wasn't enabled, the duplicate dump folder would still be used during post-processing runs.
FIX: (#1194) Patch to allow fixed H1 elements for the title (thanks chazlarson).
FIX: Removed UnRAR dependency checks in cmtagmylar since they're not being used anymore.
FIX: Fixed a problem with non-ascii characters being recognized during a file check in certain cases.
IMP: Mylar now attempts to grab an alternate jpg from the file when viewing issue details, if it complies with the naming conventions.
FIX: Fixed some metatagging issues with ComicBookLover tags not being handled properly if they didn't exist.
IMP: Dupecheck now has a failback when it's comparing cbr/cbr or cbz/cbz files and cbr/cbz priority is enabled.
FIX: Added a quick check when adding/refreshing a comic so that if a cover already exists, it is deleted prior to the attempt to retrieve it.
IMP: Added some additional handling for when searching/adding fails.
FIX: A story arc without proper (or with invalid) issue dates would error out when loading the story arc main page - usually when arcs were imported using a cbl file.
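Of the items above, the rTorrent and publisher-blacklist options are configurable through config.ini only for now. A minimal sketch of what those entries might look like - the commit names blacklisted_publishers, but the section headers and the rtorrent_* key names shown here are assumptions, not confirmed option names:

[Torrents]
# host/credentials for the rTorrent XML-RPC endpoint (key names assumed)
rtorrent_host = http://localhost/RPC2
rtorrent_username = user
rtorrent_password = pass
# label + download directory applied to torrents added by Mylar (assumed)
rtorrent_label = comics
rtorrent_directory = /data/torrents/comics

[General]
# comma-separated publishers to drop from search / import results
blacklisted_publishers = Example Publisher A, Example Publisher B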

This commit is contained in:
evilhero 2016-04-07 13:09:06 -04:00
parent 4e56a31f77
commit 6085c9f993
71 changed files with 8916 additions and 2566 deletions

View File

@ -155,6 +155,7 @@ table#series_table th#name { text-align: left; min-width: 250px; }
table#series_table th#year { text-align: left; max-width: 25px; }
table#series_table th#issue { text-align: left; min-width: 100px; }
table#series_table th#published { vertical-align: middle; text-align: left; min-width:40px; }
table#series_table th#have_percent { text-align: left; max-width: 25px; }
table#series_table th#have { text-align: center; }
table#series_table th#status { vertical-align: middle; text-align: left; min-width: 25px; }
table#series_table th#active { vertical-align: middle; text-align: left; max-width: 20px; }
@ -164,6 +165,7 @@ table#series_table td#name { text-align: left; max-width: 250px; }
table#series_table td#year { vertical-align: middle; text-align: left; max-width: 30px; }
table#series_table td#issue { vertical-align: middle; text-align: left; max-width: 100px; }
table#series_table td#published { vertical-align: middle; text-align: left; max-width: 40px; }
table#series_table td#have_percent { vertical-align: middle; text-align: left; max-width: 30px; }
table#series_table td#have { text-align: center; }
table#series_table td#status { vertical-align: middle; text-align: left; max-width: 25px; }
table#series_table td#active { vertical-align: middle; text-align: left; max-width: 20px; }

View File

@ -43,6 +43,7 @@
<div id="paddingheader">
<h1>
<br/>
%if comic['Status'] == 'Loading':
<img src="interfaces/default/images/loader_black.gif" alt="loading" style="float:left; margin-right: 5px;"/>
%endif
@ -448,7 +449,7 @@
%if linky:
<a href="downloadthis?pathfile=${linky |u}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the Issue" class="highqual" /></a>
%if linky.endswith('.cbz'):
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}', '${comic['ComicName']}', '${issue['Issue_Number']}', '${issue['IssueDate']}', '${issue['IssueName'] |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<div id="issue-box" class="issue-popup">
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
<fieldset>
@ -582,7 +583,7 @@
%if linky:
<a href="downloadthis?pathfile=${linky |u}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the annual" class="highqual" /></a>
%if linky.endswith('.cbz'):
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}', '${comic['ComicName']}', '${annual['Issue_Number']}', '${annual['IssueDate']}', '${annual['IssueName']}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<div id="issue-box" class="issue-popup">
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
<fieldset>
@ -700,12 +701,12 @@
</script>
<script>
function runMetaIssue(filelink) {
// alert(filelink);
function runMetaIssue(filelink, comicname, issue, date, title) {
// alert(filelink);
$.ajax({
type: "GET",
url: "IssueInfo",
data: { filelocation: filelink },
data: { filelocation: filelink, comicname: comicname, issue: issue, date: date, title: title },
success: function(response) {
var names = response
$('#responsethis').html(response);

View File

@ -319,11 +319,11 @@
</div>
</div>
<!--
<div class="row">
<a href="#" style="float:right" type="button" onclick="doAjaxCall('addAction();SABtest',$(this))" data-success="Sucessfully tested SABnzbd connection" data-error="Error testing SABnzbd connection"><span class="ui-icon ui-icon-extlink"></span>Test SABnzbd</a>
<div align="center" class="row">
<input type="button" value="Test SABnzbd" id="test_sab" style="float:center" /></br>
<input type="text" name="sabstatus" style="text-align:center; font-size:11px;" id="sabstatus" size="50" DISABLED />
</div>
-->
</fieldset>
<fieldset id="nzbget_options">
<div class="row">
@ -542,6 +542,10 @@
<input type="password" name="password_32p" value="${config['password_32p']}" size="36">
<small>( monitor the NEW releases feed & your personal notifications )</small>
</div>
<div align="center" class="row">
<input type="button" value="Test Connection" id="test_32p" style="float:center" /></br>
<input type="text" name="status32p" style="text-align:center; font-size:11px;" id="status32p" size="50" DISABLED />
</div>
</fieldset>
</div>
<div class="row checkbox left clearfix">
@ -691,6 +695,7 @@
</fieldset>
<fieldset>
<legend>Duplicate Handling</legend>
<small>( if filetypes are identical, will retain larger filesize )</small>
<div class="row">
<label>Retain based on</label>
<select name="dupeconstraint">
@ -1313,6 +1318,26 @@
$('#api_key').val(data);
});
});
$("#test_32p").click(function(){
$.get('test_32p',
function(data){
if (data.error != undefined) {
alert(data.error);
return;
}
$('#status32p').val(data);
});
});
$("#test_sab").click(function(){
$.get('SABtest',
function(data){
if (data.error != undefined) {
alert(data.error);
return;
}
$('#sabstatus').val(data);
});
});
if ($("#enable_https").is(":checked"))
{
$("#https_options").show();

View File

@ -182,6 +182,11 @@ img.editArt {
max-height: 300px;
position: relative;
}
.className {
width: 500px;
height: 400px;
overflow: scroll;
}
.title {
margin-bottom: 20px;
margin-top: 10px;
@ -334,6 +339,9 @@ form fieldset small.heading {
margin-bottom: 10px;
margin-top: -15px;
}
form .fieldset-auto-width {
display: inline-block;
}
form .row {
font-family: Helvetica, Arial;
margin-bottom: 10px;
@ -861,7 +869,7 @@ div#searchbar .mini-icon {
}
#paddingheader h1 {
line-height: 33px;
width: 450px;
/* width: 450px; */
}
#paddingheader h1 img {
float: left;
@ -1227,10 +1235,14 @@ div#artistheader h2 a {
min-width: 275px;
text-align: left;
}
#series_table th#year,
#series_table th#year {
max-width: 25px;
text-align: left;
}
#series_table th#have_percent {
max-width: 25px;
text-align: left;
display: none;
}
#series_table th#active {
max-width: 40px;
@ -1257,10 +1269,15 @@ div#artistheader h2 a {
vertical-align: middle;
}
#series_table td#year,
#series_table td#year {
max-width: 25px;
text-align: left;
vertical-align: middle;
}
#series_table td#have_percent {
max-width: 25px;
text-align: left;
vertical-align: middle;
display: none;
}
#series_table td#active {
max-width: 40px;
@ -1399,6 +1416,10 @@ div#artistheader h2 a {
min-width: 75px;
text-align: center;
}
#impresults_table th#issues {
min-width: 25px;
text-align: center;
}
#impresults_table th#status {
min-width: 110px;
text-align: center;
@ -1430,6 +1451,11 @@ div#artistheader h2 a {
text-align: left;
vertical-align: middle;
}
#impresults_table td#issues {
min-width: 25px;
text-align: left;
vertical-align: middle;
}
#impresults_table td#status {
min-width: 110px;
text-align: left;
@ -1445,7 +1471,6 @@ div#artistheader h2 a {
vertical-align: middle;
text-align: left;
}
#searchmanage_table th#comicname {
min-width: 325px;
text-align: left;

View File

@ -532,7 +532,7 @@ div#searchbar {
top: -25px;
h1 {
line-height: 33px;
width: 450px;
// width: 450px;
img {
float:left;
margin-right: 5px;

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.1 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 10 KiB

View File

@ -31,23 +31,27 @@
<fieldset>
<div class="row checkbox">
<input type="checkbox" name="autoadd" style="vertical-align: middle; margin: 3px; margin-top: -1px;" id="autoadd" value="1" ${checked(mylar.ADD_COMICS)}><label>Auto-add new series</label>
</div>
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_move" id="imp_move" value="1" ${checked(mylar.IMP_MOVE)}><label>Move files</label>
</div>
%if mylar.RENAME_FILES:
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_rename" id="imp_rename" value="1" ${checked(mylar.IMP_RENAME)}><label>Rename Files </label>
<small>(After importing, Rename the files to configuration settings)</small>
<label>${mylar.FOLDER_FORMAT}/${mylar.FILE_FORMAT}</label>
</div>
%endif
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_metadata" id="imp_metadata" value="1" ${checked(mylar.IMP_METADATA)}><label>Use Existing Metadata</label>
<small>(Use existing Metadata to better locate series for import)</small>
</div>
</fieldset>
</div>
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_move" id="imp_move" value="1" ${checked(mylar.IMP_MOVE)}><label>Move files</label>
</div>
%if mylar.RENAME_FILES:
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_rename" id="imp_rename" value="1" ${checked(mylar.IMP_RENAME)}><label>Rename Files </label>
<small>(After importing, Rename the files to configuration settings)</small>
<label>${mylar.FOLDER_FORMAT}/${mylar.FILE_FORMAT}</label>
</div>
%endif
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_metadata" id="imp_metadata" value="1" ${checked(mylar.IMP_METADATA)}><label>Use Existing Metadata</label>
<small>(Use existing Metadata to better locate series for import)</small>
</div>
%if mylar.IMPORTLOCK:
<div class="row checkbox">
<input type="checkbox" onclick="<%mylar.IMPORTLOCK = False%>" style="vertical-align: middle; margin: 3px; margin-top: -1px;"><label>Existing Import running. Force next run?</label>
</div>
%endif
</fieldset>
</span>
</tr>
</table>
@ -70,7 +74,7 @@
<tr><center><h3>To be Imported</h3></center></tr>
<thead>
<tr>
<th id="select"></th>
<th id="select" align="left"><input type="checkbox" onClick="toggle(this)" name="results" class="checkbox" /></th>
<th id="comicname">Comic Name</th>
<th id="comicyear">Year</th>
<th id="issues">Issues</th>
@ -83,19 +87,39 @@
%if results:
%for result in results:
<%
grade = 'X'
if result['Status'] == 'Imported':
grade = 'X'
elif result['Status'] == 'Manual Intervention':
grade = 'C'
else:
grade = 'Z'
if result['DisplayName'] is None:
displayname = result['ComicName']
else:
displayname = result['DisplayName']
displayline = displayname
if result['AnnualCount'] > 0:
displayline += '(+' + str(result['AnnualCount']) + ' Annuals)'
if all([result['Volume'] is not None, result['Volume'] != 'None']):
displayline += ' (' + str(result['Volume']) + ')'
%>
<tr class="grade${grade}">
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${result['ComicName']}" value="${result['ComicName']}" class="checkbox" />
<td id="comicname">${displayname}
%if result['ComicID'] is not None:
[${result['ComicID']}]
%endif
<tr class="${result['Status']} grade${grade}">
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${result['ComicName']}[${result['Volume']}]" value="${result['DynamicName']}" class="checkbox" />
<td id="comicname">${displayline}
%if result['Status'] != 'Imported':
<a href="#issue-box" onclick="return runMetaIssue('${result['ComicName'] |u}','${result['DynamicName']}','${result['Volume']}');" class="issue-window"><img src="interfaces/default/images/files.png" height="15 width="15" title="View files in this group" class="highqual" /></a>
<div id="issue-box" class="issue-popup">
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
<div id="responsethis">
<label><strong>[ NODATA ]</strong></label><br/>
</div>
</div>
%endif
%if result['ComicID'] is not None:
[${result['ComicID']}]
%endif
</td>
<td id="comicyear"><title="${result['ComicYear']}">${result['ComicYear']}</td>
<td id="comicissues"><title="${result['IssueCount']}">${result['IssueCount']}</td>
@ -111,15 +135,12 @@
</td>
<td id="importdate">${result['ImportDate']}</td>
<td id="addcomic">
%if result['Status'] == 'Not Imported':
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}&comicid=${result['ComicID']}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
%endif
[<a href="deleteimport?ComicName=${result['ComicName']}">Remove</a>]
%if result['implog'] is not None:
[<a class="showlog" title="Display the Import log for ${result['ComicName']}" href="importLog?ComicName=${result['ComicName'] |u}&SRID=${result['SRID']}">Log</a>]
%if result['Status'] == 'Not Imported' and result['SRID'] is None:
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}&comicid=${result['ComicID']}&volume=${result['Volume']}&dynamicname=${result['DynamicName']}&displayline=${displayline}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
%endif
[<a href="deleteimport?ComicName=${result['ComicName']}&volume=${result['Volume']}&DynamicName=${result['DynamicName']}&Status=${result['Status']}">Remove</a>]
%if result['SRID'] is not None and result['Status'] != 'Imported':
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}">Select</a>]
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}&DynamicName=${result['DynamicName']}">Select</a>]
%endif
</td>
</tr>
@ -141,9 +162,63 @@
</%def>
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script type="text/javascript">
$('.showlog').click(function (event) {
<script src="js/libs/jquery.dataTables.min.js"></script>
<script>
$(document).ready(function() {
$('a.issue-window').click(function() {
//Getting the variable's value from a link
var issueBox = $(this).attr('href');
//Fade in the Popup
$(issueBox).fadeIn(300);
//Set the center alignment padding + border see css style
var popMargTop = ($(issueBox).height() + 24) / 2;
var popMargLeft = ($(issueBox).width() + 24) / 2;
$(issueBox).css({
'margin-top' : -popMargTop,
'margin-left' : -popMargLeft
});
// Add the mask to body
$('body').append('<div id="mask"></div>');
$('#mask').fadeIn(300);
return false;
});
// When clicking on the button close or the mask layer the popup closed
$('a.close, #mask').on('click', function() {
$('#mask , .issue-popup').fadeOut(300 , function() {
$('#mask').remove();
});
return false;
});
});
</script>
<script>
function runMetaIssue(comicname, dynamicname, volume) {
//alert(comicname);
$.ajax({
type: "GET",
url: "ImportFilelisting",
data: { comicname: comicname, dynamicname: dynamicname, volume: volume },
success: function(response) {
//alert(response)
var names = response
$('#responsethis').html(response);
},
error: function(data)
{
alert('ERROR'+data.responseText);
},
});
}
</script>
<script>
$('.showlog').click(function (event) {
var width = 575,
height = 400,
left = ($(window).width() - width) / 2,
@ -159,8 +234,8 @@
return false;
});
</script>
<script>
</script>
<script>
function initThisPage() {
jQuery( "#tabs" ).tabs();
initActions();

View File

@ -15,7 +15,7 @@
<th id="year">Year</th>
<th id="issue">Last Issue</th>
<th id="published">Published</th>
<th id="have_percent">Have %</th>
<th class="hidden" id="have_percent">Have %</th>
<th id="have">Have</th>
<th id="status">Status</th>
<th id="active">Active</th>
@ -47,7 +47,7 @@
<td id="year"><span title="${comic['ComicYear']}"></span>${comic['ComicYear']}</td>
<td id="issue"><span title="${comic['LatestIssue']}"></span># ${comic['LatestIssue']}</td>
<td id="published">${comic['LatestDate']}</td>
<td id="have_percent">${comic['percent']}</td>
<td class="hidden" id="have_percent">${comic['percent']}</td>
<td id="have"><span title="${comic['percent']}"></span>${css}<div style="width:${comic['percent']}%"><span class="progressbar-front-text">${comic['haveissues']}/${comic['totalissues']}</span></div></td>
<td id="status">${comic['recentstatus']}</td>
<td id="active" align="center">

View File

@ -24,15 +24,51 @@
<li><a href="#tabs-3">Advanced Options</a></li>
</ul>
<div id="tabs-1" class="configtable">
<div style="float:right;position:absolute;right:0;top;0;margin-right:50px;">
<legend>Current Import Status</legend>
%if mylar.IMPORT_STATUS:
%if mylar.IMPORT_STATUS == 'Import completed.':
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" DISABLED /></br>
<script>
turnitoff();
</script>
%else:
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" DISABLED /></br>
<script>
turniton();
</script>
%endif
%else:
<script>
turnitoff();
</script>
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" value="Import is currently not running" DISABLED /></br>
%endif
<label>Number of valid files to process:&nbsp</label>${mylar.IMPORT_TOTALFILES}
%if int(mylar.IMPORT_FILES) != int(mylar.IMPORT_TOTALFILES):
/ ${mylar.IMPORT_FILES}
%endif
</br>
<label>Files with ComicID's present:&nbsp</label>${mylar.IMPORT_CID_COUNT}</br>
<label>Files that were parsed:&nbsp</label>${mylar.IMPORT_PARSED_COUNT}</br>
<label>Files that couldn't be parsed:&nbsp</label>${mylar.IMPORT_FAILURE_COUNT}</br>
%if mylar.IMPORTLOCK:
</br></br></br></br>
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="forcescan" id="forcescan" value="1"><label>Existing Import Running. Force this import?</label>
</div>
%endif
</div>
<form action="comicScan" method="GET" id="comicScan">
<fieldset>
<legend>Scan Comic Library</legend>
<p><strong>Where do you keep your comics?</strong></p>
<p>You can put in any directory, and it will scan for comic files in that folder
(including all subdirectories). <br/><small>For example: '/Users/name/Comics'</small></p>
<p>You can put in any directory, and it will scan for comics<br/>
in that folder (including all subdirectories). <br/>
<small>For example: '/Users/name/Comics'</small></p>
<p>
It may take a while depending on how many files you have. You can navigate away from the page<br />
as soon as you click 'Save changes'
It may take a while depending on how many files you have.<br/>
You can navigate away from the page as soon as you click 'Save changes'
</p>
<br/>
<div class="row">
@ -60,13 +96,12 @@
<small>Rename files to configuration settings</small>
</div>
<br/>
<input type="button" value="Save Changes and Scan" onclick="addScanAction();doAjaxCall('comicScan',$(this),'tabs',true);return true;" data-success="Changes saved. Library will be scanned" data-always="Sucessfully completed scanning library.">
<input type="button" value="Save Changes without Scanning Library" onclick="doAjaxCall('comicScan',$(this),'tabs',true);return false;" data-success="Changes Saved Successfully">
%if mylar.IMPORTBUTTON:
<input type="button" value="Import Results Management" style="float: right;" onclick="location.href='importResults';" />
%endif
</fieldset>
</form>
<input type="button" value="Save Changes and Scan" onclick="addScanAction();doAjaxCall('comicScan',$(this),'tabs',true);return true;" data-success="Import Scan now submitted.">
<input type="button" value="Save Changes without Scanning Library" onclick="doAjaxCall('comicScan',$(this),'tabs',true);return false;" data-success="Changes Saved Successfully">
%if mylar.IMPORTBUTTON:
<input type="button" value="Import Results Management" style="float: right;" onclick="location.href='importResults';" />
%endif
</form>
</div>
<div id="tabs-2" class="configtable">
<tr>
@ -139,12 +174,43 @@
</%def>
<%def name="javascriptIncludes()">
<script>
var CheckEnabled = true;
var ImportTimer;
function addScanAction() {
$('#autoadd').append('<input type="hidden" name="scan" value=1 />');
CheckEnabled = true;
statuscheck();
};
function statuscheck() {
if (CheckEnabled == true){
ImportTimer = setInterval(function(){
$.get('Check_ImportStatus',
function(data){
if (data.error != undefined) {
alert(data.error);
return;
}
$('#importstatus').val(data);
if (data == 'Import completed.') {
CheckEnabled = false;
clearInterval(ImportTimer);
return;
}
});
}, 5000);
};
};
function turnitoff() {
CheckEnabled = false;
clearInterval(ImportTimer);
};
function turniton() {
if (CheckEnabled == false) {
CheckEnabled = true;
statuscheck();
}
};
function initThisPage() {
jQuery( "#tabs" ).tabs();
jQuery( "#tabs" ).tabs();
initActions();
initConfigCheckbox("#imp_move");

609 lib/rtorrent/__init__.py Normal file
View File

@ -0,0 +1,609 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import urllib
import os.path
import time
import xmlrpclib
# module imports are needed for the bare common.*, file.*, torrent.*,
# tracker.* and peer.* references used further down in this file
import common
import file  # @UnresolvedImport
import torrent  # @UnresolvedImport
import tracker  # @UnresolvedImport
import peer  # @UnresolvedImport
from common import find_torrent, \
is_valid_port, convert_version_tuple_to_str
from lib.torrentparser import TorrentParser
from lib.xmlrpc.http import HTTPServerProxy
from lib.xmlrpc.scgi import SCGIServerProxy
from rpc import Method
from lib.xmlrpc.basic_auth import BasicAuthTransport
from torrent import Torrent
from group import Group
import rpc  # @UnresolvedImport
__version__ = "0.2.9"
__author__ = "Chris Lucas"
__contact__ = "chris@chrisjlucas.com"
__license__ = "MIT"
MIN_RTORRENT_VERSION = (0, 8, 1)
MIN_RTORRENT_VERSION_STR = convert_version_tuple_to_str(MIN_RTORRENT_VERSION)
class RTorrent:
""" Create a new rTorrent connection """
rpc_prefix = None
def __init__(self, uri, username=None, password=None,
verify=False, sp=None, sp_kwargs=None):
self.uri = uri # : From X{__init__(self, url)}
self.username = username
self.password = password
self.schema = urllib.splittype(uri)[0]
if sp:
self.sp = sp
elif self.schema in ['http', 'https']:
self.sp = HTTPServerProxy
elif self.schema == 'scgi':
self.sp = SCGIServerProxy
else:
raise NotImplementedError()
self.sp_kwargs = sp_kwargs or {}
self.torrents = [] # : List of L{Torrent} instances
self._rpc_methods = [] # : List of rTorrent RPC methods
self._torrent_cache = []
self._client_version_tuple = ()
if verify is True:
self._verify_conn()
def _get_conn(self):
"""Get ServerProxy instance"""
if self.username is not None and self.password is not None:
if self.schema == 'scgi':
raise NotImplementedError()
return self.sp(
self.uri,
transport=BasicAuthTransport(self.username, self.password),
**self.sp_kwargs
)
return self.sp(self.uri, **self.sp_kwargs)
def _verify_conn(self):
# check for rpc methods that should be available
assert "system.client_version" in self._get_rpc_methods(), "Required RPC method not available."
assert "system.library_version" in self._get_rpc_methods(), "Required RPC method not available."
# minimum rTorrent version check
assert self._meets_version_requirement() is True,\
"Error: Minimum rTorrent version required is {0}".format(
MIN_RTORRENT_VERSION_STR)
def _meets_version_requirement(self):
return self._get_client_version_tuple() >= MIN_RTORRENT_VERSION
def _get_client_version_tuple(self):
conn = self._get_conn()
if not self._client_version_tuple:
if not hasattr(self, "client_version"):
setattr(self, "client_version",
conn.system.client_version())
rtver = getattr(self, "client_version")
self._client_version_tuple = tuple([int(i) for i in
rtver.split(".")])
return self._client_version_tuple
def _update_rpc_methods(self):
self._rpc_methods = self._get_conn().system.listMethods()
return self._rpc_methods
def _get_rpc_methods(self):
""" Get list of raw RPC commands
@return: raw RPC commands
@rtype: list
"""
return(self._rpc_methods or self._update_rpc_methods())
def get_torrents(self, view="main"):
"""Get list of all torrents in specified view
@return: list of L{Torrent} instances
@rtype: list
@todo: add validity check for specified view
"""
self.torrents = []
methods = torrent.methods
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self)]
m = rpc.Multicall(self)
m.add("d.multicall", view, "d.get_hash=",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result[1:]): # result[0] is the info_hash
results_dict[m.varname] = rpc.process_result(m, r)
self.torrents.append(
Torrent(self, info_hash=result[0], **results_dict)
)
self._manage_torrent_cache()
return(self.torrents)
def _manage_torrent_cache(self):
"""Carry tracker/peer/file lists over to new torrent list"""
for torrent in self._torrent_cache:
new_torrent = common.find_torrent(torrent.info_hash,
self.torrents)
if new_torrent is not None:
new_torrent.files = torrent.files
new_torrent.peers = torrent.peers
new_torrent.trackers = torrent.trackers
self._torrent_cache = self.torrents
def _get_load_function(self, file_type, start, verbose):
"""Determine correct "load torrent" RPC method"""
func_name = None
if file_type == "url":
# url strings can be input directly
if start and verbose:
func_name = "load_start_verbose"
elif start:
func_name = "load_start"
elif verbose:
func_name = "load_verbose"
else:
func_name = "load"
elif file_type in ["file", "raw"]:
if start and verbose:
func_name = "load_raw_start_verbose"
elif start:
func_name = "load_raw_start"
elif verbose:
func_name = "load_raw_verbose"
else:
func_name = "load_raw"
return(func_name)
def load_torrent(self, torrent, start=False, verbose=False, verify_load=True):
"""
Loads torrent into rTorrent (with various enhancements)
@param torrent: can be a url, a path to a local file, or the raw data
of a torrent file
@type torrent: str
@param start: start torrent when loaded
@type start: bool
@param verbose: print error messages to rTorrent log
@type verbose: bool
@param verify_load: verify that torrent was added to rTorrent successfully
@type verify_load: bool
@return: Depends on verify_load:
- if verify_load is True, (and the torrent was
loaded successfully), it'll return a L{Torrent} instance
- if verify_load is False, it'll return None
@rtype: L{Torrent} instance or None
@raise AssertionError: If the torrent wasn't successfully added to rTorrent
- Check L{TorrentParser} for the AssertionError's
it raises
@note: Because this function includes url verification (if a url was input)
as well as verification as to whether the torrent was successfully added,
this function doesn't execute instantaneously. If that's what you're
looking for, use load_torrent_simple() instead.
"""
p = self._get_conn()
tp = TorrentParser(torrent)
torrent = xmlrpclib.Binary(tp._raw_torrent)
info_hash = tp.info_hash
func_name = self._get_load_function("raw", start, verbose)
# load torrent
getattr(p, func_name)(torrent)
if verify_load:
MAX_RETRIES = 3
i = 0
while i < MAX_RETRIES:
self.get_torrents()
if info_hash in [t.info_hash for t in self.torrents]:
break
# was still getting AssertionErrors, delay should help
time.sleep(1)
i += 1
assert info_hash in [t.info_hash for t in self.torrents],\
"Adding torrent was unsuccessful."
return(find_torrent(info_hash, self.torrents))
def load_torrent_simple(self, torrent, file_type,
start=False, verbose=False):
"""Loads torrent into rTorrent
@param torrent: can be a url, a path to a local file, or the raw data
of a torrent file
@type torrent: str
@param file_type: valid options: "url", "file", or "raw"
@type file_type: str
@param start: start torrent when loaded
@type start: bool
@param verbose: print error messages to rTorrent log
@type verbose: bool
@return: None
@raise AssertionError: if incorrect file_type is specified
@note: This function was written for speed, it includes no enhancements.
If you input a url, it won't check if it's valid. You also can't get
verification that the torrent was successfully added to rTorrent.
Use load_torrent() if you would like these features.
"""
p = self._get_conn()
assert file_type in ["raw", "file", "url"], \
"Invalid file_type, options are: 'url', 'file', 'raw'."
func_name = self._get_load_function(file_type, start, verbose)
if file_type == "file":
# since we have to assume we're connected to a remote rTorrent
# client, we have to read the file and send it to rT as raw
assert os.path.isfile(torrent), \
"Invalid path: \"{0}\"".format(torrent)
torrent = open(torrent, "rb").read()
if file_type in ["raw", "file"]:
finput = xmlrpclib.Binary(torrent)
elif file_type == "url":
finput = torrent
getattr(p, func_name)(finput)
def get_views(self):
p = self._get_conn()
return p.view_list()
def create_group(self, name, persistent=True, view=None):
p = self._get_conn()
if persistent is True:
p.group.insert_persistent_view('', name)
else:
assert view is not None, "view parameter required on non-persistent groups"
p.group.insert('', name, view)
self._update_rpc_methods()
def get_group(self, name):
assert name is not None, "group name required"
group = Group(self, name)
group.update()
return group
def set_dht_port(self, port):
"""Set DHT port
@param port: port
@type port: int
@raise AssertionError: if invalid port is given
"""
assert is_valid_port(port), "Valid port range is 0-65535"
self.dht_port = self._get_conn().set_dht_port(port)
def enable_check_hash(self):
"""Alias for set_check_hash(True)"""
self.set_check_hash(True)
def disable_check_hash(self):
"""Alias for set_check_hash(False)"""
self.set_check_hash(False)
def find_torrent(self, info_hash):
"""Frontend for rtorrent.common.find_torrent"""
return(common.find_torrent(info_hash, self.get_torrents()))
def poll(self):
""" poll rTorrent to get latest torrent/peer/tracker/file information
@note: This essentially refreshes every aspect of the rTorrent
connection, so it can be very slow if working with a remote
connection that has a lot of torrents loaded.
@return: None
"""
self.update()
torrents = self.get_torrents()
for t in torrents:
t.poll()
def update(self):
"""Refresh rTorrent client info
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self)]
for method in retriever_methods:
multicall.add(method)
multicall.call()
def _build_class_methods(class_obj):
# multicall add class
caller = lambda self, multicall, method, *args:\
multicall.add(method, self.rpc_id, *args)
caller.__doc__ = """Same as Multicall.add(), but with automatic inclusion
of the rpc_id
@param multicall: A L{Multicall} instance
@type: multicall: Multicall
@param method: L{Method} instance or raw rpc method
@type: Method or str
@param args: optional arguments to pass
"""
setattr(class_obj, "multicall_add", caller)
def __compare_rpc_methods(rt_new, rt_old):
from pprint import pprint
rt_new_methods = set(rt_new._get_rpc_methods())
rt_old_methods = set(rt_old._get_rpc_methods())
print("New Methods:")
pprint(rt_new_methods - rt_old_methods)
print("Methods not in new rTorrent:")
pprint(rt_old_methods - rt_new_methods)
def __check_supported_methods(rt):
from pprint import pprint
supported_methods = set([m.rpc_call for m in
methods +
file.methods +
torrent.methods +
tracker.methods +
peer.methods])
all_methods = set(rt._get_rpc_methods())
print("Methods NOT in supported methods")
pprint(all_methods - supported_methods)
print("Supported methods NOT in all methods")
pprint(supported_methods - all_methods)
methods = [
# RETRIEVERS
Method(RTorrent, 'get_xmlrpc_size_limit', 'get_xmlrpc_size_limit'),
Method(RTorrent, 'get_proxy_address', 'get_proxy_address'),
Method(RTorrent, 'get_split_suffix', 'get_split_suffix'),
Method(RTorrent, 'get_up_limit', 'get_upload_rate'),
Method(RTorrent, 'get_max_memory_usage', 'get_max_memory_usage'),
Method(RTorrent, 'get_max_open_files', 'get_max_open_files'),
Method(RTorrent, 'get_min_peers_seed', 'get_min_peers_seed'),
Method(RTorrent, 'get_use_udp_trackers', 'get_use_udp_trackers'),
Method(RTorrent, 'get_preload_min_size', 'get_preload_min_size'),
Method(RTorrent, 'get_max_uploads', 'get_max_uploads'),
Method(RTorrent, 'get_max_peers', 'get_max_peers'),
Method(RTorrent, 'get_timeout_sync', 'get_timeout_sync'),
Method(RTorrent, 'get_receive_buffer_size', 'get_receive_buffer_size'),
Method(RTorrent, 'get_split_file_size', 'get_split_file_size'),
Method(RTorrent, 'get_dht_throttle', 'get_dht_throttle'),
Method(RTorrent, 'get_max_peers_seed', 'get_max_peers_seed'),
Method(RTorrent, 'get_min_peers', 'get_min_peers'),
Method(RTorrent, 'get_tracker_numwant', 'get_tracker_numwant'),
Method(RTorrent, 'get_max_open_sockets', 'get_max_open_sockets'),
Method(RTorrent, 'get_session', 'get_session'),
Method(RTorrent, 'get_ip', 'get_ip'),
Method(RTorrent, 'get_scgi_dont_route', 'get_scgi_dont_route'),
Method(RTorrent, 'get_hash_read_ahead', 'get_hash_read_ahead'),
Method(RTorrent, 'get_http_cacert', 'get_http_cacert'),
Method(RTorrent, 'get_dht_port', 'get_dht_port'),
Method(RTorrent, 'get_handshake_log', 'get_handshake_log'),
Method(RTorrent, 'get_preload_type', 'get_preload_type'),
Method(RTorrent, 'get_max_open_http', 'get_max_open_http'),
Method(RTorrent, 'get_http_capath', 'get_http_capath'),
Method(RTorrent, 'get_max_downloads_global', 'get_max_downloads_global'),
Method(RTorrent, 'get_name', 'get_name'),
Method(RTorrent, 'get_session_on_completion', 'get_session_on_completion'),
Method(RTorrent, 'get_down_limit', 'get_download_rate'),
Method(RTorrent, 'get_down_total', 'get_down_total'),
Method(RTorrent, 'get_up_rate', 'get_up_rate'),
Method(RTorrent, 'get_hash_max_tries', 'get_hash_max_tries'),
Method(RTorrent, 'get_peer_exchange', 'get_peer_exchange'),
Method(RTorrent, 'get_down_rate', 'get_down_rate'),
Method(RTorrent, 'get_connection_seed', 'get_connection_seed'),
Method(RTorrent, 'get_http_proxy', 'get_http_proxy'),
Method(RTorrent, 'get_stats_preloaded', 'get_stats_preloaded'),
Method(RTorrent, 'get_timeout_safe_sync', 'get_timeout_safe_sync'),
Method(RTorrent, 'get_hash_interval', 'get_hash_interval'),
Method(RTorrent, 'get_port_random', 'get_port_random'),
Method(RTorrent, 'get_directory', 'get_directory'),
Method(RTorrent, 'get_port_open', 'get_port_open'),
Method(RTorrent, 'get_max_file_size', 'get_max_file_size'),
Method(RTorrent, 'get_stats_not_preloaded', 'get_stats_not_preloaded'),
Method(RTorrent, 'get_memory_usage', 'get_memory_usage'),
Method(RTorrent, 'get_connection_leech', 'get_connection_leech'),
Method(RTorrent, 'get_check_hash', 'get_check_hash',
boolean=True,
),
Method(RTorrent, 'get_session_lock', 'get_session_lock'),
Method(RTorrent, 'get_preload_required_rate', 'get_preload_required_rate'),
Method(RTorrent, 'get_max_uploads_global', 'get_max_uploads_global'),
Method(RTorrent, 'get_send_buffer_size', 'get_send_buffer_size'),
Method(RTorrent, 'get_port_range', 'get_port_range'),
Method(RTorrent, 'get_max_downloads_div', 'get_max_downloads_div'),
Method(RTorrent, 'get_max_uploads_div', 'get_max_uploads_div'),
Method(RTorrent, 'get_safe_sync', 'get_safe_sync'),
Method(RTorrent, 'get_bind', 'get_bind'),
Method(RTorrent, 'get_up_total', 'get_up_total'),
Method(RTorrent, 'get_client_version', 'system.client_version'),
Method(RTorrent, 'get_library_version', 'system.library_version'),
Method(RTorrent, 'get_api_version', 'system.api_version',
min_version=(0, 9, 1)
),
Method(RTorrent, "get_system_time", "system.time",
docstring="""Get the current time of the system rTorrent is running on
@return: time (posix)
@rtype: int""",
),
# MODIFIERS
Method(RTorrent, 'set_http_proxy', 'set_http_proxy'),
Method(RTorrent, 'set_max_memory_usage', 'set_max_memory_usage'),
Method(RTorrent, 'set_max_file_size', 'set_max_file_size'),
Method(RTorrent, 'set_bind', 'set_bind',
docstring="""Set address bind
@param arg: ip address
@type arg: str
""",
),
Method(RTorrent, 'set_up_limit', 'set_upload_rate',
docstring="""Set global upload limit (in bytes)
@param arg: speed limit
@type arg: int
""",
),
Method(RTorrent, 'set_port_random', 'set_port_random'),
Method(RTorrent, 'set_connection_leech', 'set_connection_leech'),
Method(RTorrent, 'set_tracker_numwant', 'set_tracker_numwant'),
Method(RTorrent, 'set_max_peers', 'set_max_peers'),
Method(RTorrent, 'set_min_peers', 'set_min_peers'),
Method(RTorrent, 'set_max_uploads_div', 'set_max_uploads_div'),
Method(RTorrent, 'set_max_open_files', 'set_max_open_files'),
Method(RTorrent, 'set_max_downloads_global', 'set_max_downloads_global'),
Method(RTorrent, 'set_session_lock', 'set_session_lock'),
Method(RTorrent, 'set_session', 'set_session'),
Method(RTorrent, 'set_split_suffix', 'set_split_suffix'),
Method(RTorrent, 'set_hash_interval', 'set_hash_interval'),
Method(RTorrent, 'set_handshake_log', 'set_handshake_log'),
Method(RTorrent, 'set_port_range', 'set_port_range'),
Method(RTorrent, 'set_min_peers_seed', 'set_min_peers_seed'),
Method(RTorrent, 'set_scgi_dont_route', 'set_scgi_dont_route'),
Method(RTorrent, 'set_preload_min_size', 'set_preload_min_size'),
Method(RTorrent, 'set_log.tracker', 'set_log.tracker'),
Method(RTorrent, 'set_max_uploads_global', 'set_max_uploads_global'),
Method(RTorrent, 'set_down_limit', 'set_download_rate',
docstring="""Set global download limit (in bytes)
@param arg: speed limit
@type arg: int
""",
),
Method(RTorrent, 'set_preload_required_rate', 'set_preload_required_rate'),
Method(RTorrent, 'set_hash_read_ahead', 'set_hash_read_ahead'),
Method(RTorrent, 'set_max_peers_seed', 'set_max_peers_seed'),
Method(RTorrent, 'set_max_uploads', 'set_max_uploads'),
Method(RTorrent, 'set_session_on_completion', 'set_session_on_completion'),
Method(RTorrent, 'set_max_open_http', 'set_max_open_http'),
Method(RTorrent, 'set_directory', 'set_directory'),
Method(RTorrent, 'set_http_cacert', 'set_http_cacert'),
Method(RTorrent, 'set_dht_throttle', 'set_dht_throttle'),
Method(RTorrent, 'set_hash_max_tries', 'set_hash_max_tries'),
Method(RTorrent, 'set_proxy_address', 'set_proxy_address'),
Method(RTorrent, 'set_split_file_size', 'set_split_file_size'),
Method(RTorrent, 'set_receive_buffer_size', 'set_receive_buffer_size'),
Method(RTorrent, 'set_use_udp_trackers', 'set_use_udp_trackers'),
Method(RTorrent, 'set_connection_seed', 'set_connection_seed'),
Method(RTorrent, 'set_xmlrpc_size_limit', 'set_xmlrpc_size_limit'),
Method(RTorrent, 'set_xmlrpc_dialect', 'set_xmlrpc_dialect'),
Method(RTorrent, 'set_safe_sync', 'set_safe_sync'),
Method(RTorrent, 'set_http_capath', 'set_http_capath'),
Method(RTorrent, 'set_send_buffer_size', 'set_send_buffer_size'),
Method(RTorrent, 'set_max_downloads_div', 'set_max_downloads_div'),
Method(RTorrent, 'set_name', 'set_name'),
Method(RTorrent, 'set_port_open', 'set_port_open'),
Method(RTorrent, 'set_timeout_sync', 'set_timeout_sync'),
Method(RTorrent, 'set_peer_exchange', 'set_peer_exchange'),
Method(RTorrent, 'set_ip', 'set_ip',
docstring="""Set IP
@param arg: ip address
@type arg: str
""",
),
Method(RTorrent, 'set_timeout_safe_sync', 'set_timeout_safe_sync'),
Method(RTorrent, 'set_preload_type', 'set_preload_type'),
Method(RTorrent, 'set_check_hash', 'set_check_hash',
docstring="""Enable/Disable hash checking on finished torrents
@param arg: True to enable, False to disable
@type arg: bool
""",
boolean=True,
),
]
_all_methods_list = [methods,
file.methods,
torrent.methods,
tracker.methods,
peer.methods,
]
class_methods_pair = {
RTorrent: methods,
file.File: file.methods,
torrent.Torrent: torrent.methods,
tracker.Tracker: tracker.methods,
peer.Peer: peer.methods,
}
for c in class_methods_pair.keys():
rpc._build_rpc_methods(c, class_methods_pair[c])
_build_class_methods(c)
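For reference, a minimal usage sketch of the vendored library above, assuming it's imported as lib.rtorrent and that rTorrent is reachable over HTTP XML-RPC; the URI, credentials, and file path are placeholders, and applying Mylar's label/download-directory options would happen through additional RPC calls not shown in this file:

# connect to rTorrent; verify=True runs _verify_conn(), which checks the
# required RPC methods and the minimum client version
from lib.rtorrent import RTorrent

rt = RTorrent('http://localhost/RPC2', username='user',
              password='pass', verify=True)

# load_torrent() accepts a url, a local file path, or raw torrent data;
# with verify_load=True (the default) it polls get_torrents() until the
# info hash shows up, then returns the matching Torrent instance
t = rt.load_torrent('/path/to/issue.torrent', start=True, verbose=True)

# the quick variant skips url and load verification entirely
rt.load_torrent_simple('http://tracker.example/file.torrent', 'url',
                       start=True)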

View File

@ -0,0 +1,609 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import urllib
import os.path
import time
import xmlrpclib
from rtorrent.common import find_torrent, \
is_valid_port, convert_version_tuple_to_str
from rtorrent.lib.torrentparser import TorrentParser
from rtorrent.lib.xmlrpc.http import HTTPServerProxy
from rtorrent.lib.xmlrpc.scgi import SCGIServerProxy
from rtorrent.rpc import Method
from rtorrent.lib.xmlrpc.basic_auth import BasicAuthTransport
from rtorrent.torrent import Torrent
from rtorrent.group import Group
import rtorrent.rpc # @UnresolvedImport
__version__ = "0.2.9"
__author__ = "Chris Lucas"
__contact__ = "chris@chrisjlucas.com"
__license__ = "MIT"
MIN_RTORRENT_VERSION = (0, 8, 1)
MIN_RTORRENT_VERSION_STR = convert_version_tuple_to_str(MIN_RTORRENT_VERSION)
class RTorrent:
""" Create a new rTorrent connection """
rpc_prefix = None
def __init__(self, uri, username=None, password=None,
verify=False, sp=None, sp_kwargs=None):
self.uri = uri # : From X{__init__(self, url)}
self.username = username
self.password = password
self.schema = urllib.splittype(uri)[0]
if sp:
self.sp = sp
elif self.schema in ['http', 'https']:
self.sp = HTTPServerProxy
elif self.schema == 'scgi':
self.sp = SCGIServerProxy
else:
raise NotImplementedError()
self.sp_kwargs = sp_kwargs or {}
self.torrents = [] # : List of L{Torrent} instances
self._rpc_methods = [] # : List of rTorrent RPC methods
self._torrent_cache = []
self._client_version_tuple = ()
if verify is True:
self._verify_conn()
def _get_conn(self):
"""Get ServerProxy instance"""
if self.username is not None and self.password is not None:
if self.schema == 'scgi':
raise NotImplementedError()
return self.sp(
self.uri,
transport=BasicAuthTransport(self.username, self.password),
**self.sp_kwargs
)
return self.sp(self.uri, **self.sp_kwargs)
def _verify_conn(self):
# check for rpc methods that should be available
assert "system.client_version" in self._get_rpc_methods(), "Required RPC method not available."
assert "system.library_version" in self._get_rpc_methods(), "Required RPC method not available."
# minimum rTorrent version check
assert self._meets_version_requirement() is True,\
"Error: Minimum rTorrent version required is {0}".format(
MIN_RTORRENT_VERSION_STR)
def _meets_version_requirement(self):
return self._get_client_version_tuple() >= MIN_RTORRENT_VERSION
def _get_client_version_tuple(self):
conn = self._get_conn()
if not self._client_version_tuple:
if not hasattr(self, "client_version"):
setattr(self, "client_version",
conn.system.client_version())
rtver = getattr(self, "client_version")
self._client_version_tuple = tuple([int(i) for i in
rtver.split(".")])
return self._client_version_tuple
def _update_rpc_methods(self):
self._rpc_methods = self._get_conn().system.listMethods()
return self._rpc_methods
def _get_rpc_methods(self):
""" Get list of raw RPC commands
@return: raw RPC commands
@rtype: list
"""
return(self._rpc_methods or self._update_rpc_methods())
def get_torrents(self, view="main"):
"""Get list of all torrents in specified view
@return: list of L{Torrent} instances
@rtype: list
@todo: add validity check for specified view
"""
self.torrents = []
methods = rtorrent.torrent.methods
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self)]
m = rtorrent.rpc.Multicall(self)
m.add("d.multicall", view, "d.get_hash=",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result[1:]): # result[0] is the info_hash
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
self.torrents.append(
Torrent(self, info_hash=result[0], **results_dict)
)
self._manage_torrent_cache()
return(self.torrents)
def _manage_torrent_cache(self):
"""Carry tracker/peer/file lists over to new torrent list"""
for torrent in self._torrent_cache:
new_torrent = rtorrent.common.find_torrent(torrent.info_hash,
self.torrents)
if new_torrent is not None:
new_torrent.files = torrent.files
new_torrent.peers = torrent.peers
new_torrent.trackers = torrent.trackers
self._torrent_cache = self.torrents
def _get_load_function(self, file_type, start, verbose):
"""Determine correct "load torrent" RPC method"""
func_name = None
if file_type == "url":
# url strings can be input directly
if start and verbose:
func_name = "load_start_verbose"
elif start:
func_name = "load_start"
elif verbose:
func_name = "load_verbose"
else:
func_name = "load"
elif file_type in ["file", "raw"]:
if start and verbose:
func_name = "load_raw_start_verbose"
elif start:
func_name = "load_raw_start"
elif verbose:
func_name = "load_raw_verbose"
else:
func_name = "load_raw"
return(func_name)
def load_torrent(self, torrent, start=False, verbose=False, verify_load=True):
"""
Loads torrent into rTorrent (with various enhancements)
@param torrent: can be a url, a path to a local file, or the raw data
of a torrent file
@type torrent: str
@param start: start torrent when loaded
@type start: bool
@param verbose: print error messages to rTorrent log
@type verbose: bool
@param verify_load: verify that torrent was added to rTorrent successfully
@type verify_load: bool
@return: Depends on verify_load:
- if verify_load is True, (and the torrent was
loaded successfully), it'll return a L{Torrent} instance
- if verify_load is False, it'll return None
@rtype: L{Torrent} instance or None
@raise AssertionError: If the torrent wasn't successfully added to rTorrent
- Check L{TorrentParser} for the AssertionError's
it raises
@note: Because this function includes url verification (if a url was input)
as well as verification as to whether the torrent was successfully added,
this function doesn't execute instantaneously. If that's what you're
looking for, use load_torrent_simple() instead.
"""
p = self._get_conn()
tp = TorrentParser(torrent)
torrent = xmlrpclib.Binary(tp._raw_torrent)
info_hash = tp.info_hash
func_name = self._get_load_function("raw", start, verbose)
# load torrent
getattr(p, func_name)(torrent)
if verify_load:
MAX_RETRIES = 3
i = 0
while i < MAX_RETRIES:
self.get_torrents()
if info_hash in [t.info_hash for t in self.torrents]:
break
# was still getting AssertionErrors, delay should help
time.sleep(1)
i += 1
assert info_hash in [t.info_hash for t in self.torrents],\
"Adding torrent was unsuccessful."
return(find_torrent(info_hash, self.torrents))
def load_torrent_simple(self, torrent, file_type,
start=False, verbose=False):
"""Loads torrent into rTorrent
@param torrent: can be a url, a path to a local file, or the raw data
of a torrent file
@type torrent: str
@param file_type: valid options: "url", "file", or "raw"
@type file_type: str
@param start: start torrent when loaded
@type start: bool
@param verbose: print error messages to rTorrent log
@type verbose: bool
@return: None
@raise AssertionError: if incorrect file_type is specified
@note: This function was written for speed, it includes no enhancements.
If you input a url, it won't check if it's valid. You also can't get
verification that the torrent was successfully added to rTorrent.
Use load_torrent() if you would like these features.
"""
p = self._get_conn()
assert file_type in ["raw", "file", "url"], \
"Invalid file_type, options are: 'url', 'file', 'raw'."
func_name = self._get_load_function(file_type, start, verbose)
if file_type == "file":
# since we have to assume we're connected to a remote rTorrent
# client, we have to read the file and send it to rT as raw
assert os.path.isfile(torrent), \
"Invalid path: \"{0}\"".format(torrent)
torrent = open(torrent, "rb").read()
if file_type in ["raw", "file"]:
finput = xmlrpclib.Binary(torrent)
elif file_type == "url":
finput = torrent
getattr(p, func_name)(finput)
def get_views(self):
p = self._get_conn()
return p.view_list()
def create_group(self, name, persistent=True, view=None):
p = self._get_conn()
if persistent is True:
p.group.insert_persistent_view('', name)
else:
assert view is not None, "view parameter required on non-persistent groups"
p.group.insert('', name, view)
self._update_rpc_methods()
def get_group(self, name):
assert name is not None, "group name required"
group = Group(self, name)
group.update()
return group
def set_dht_port(self, port):
"""Set DHT port
@param port: port
@type port: int
@raise AssertionError: if invalid port is given
"""
assert is_valid_port(port), "Valid port range is 0-65535"
self.dht_port = self._get_conn().set_dht_port(port)
def enable_check_hash(self):
"""Alias for set_check_hash(True)"""
self.set_check_hash(True)
def disable_check_hash(self):
"""Alias for set_check_hash(False)"""
self.set_check_hash(False)
def find_torrent(self, info_hash):
"""Frontend for rtorrent.common.find_torrent"""
return(rtorrent.common.find_torrent(info_hash, self.get_torrents()))
def poll(self):
""" poll rTorrent to get latest torrent/peer/tracker/file information
@note: This essentially refreshes every aspect of the rTorrent
connection, so it can be very slow if working with a remote
connection that has a lot of torrents loaded.
@return: None
"""
self.update()
torrents = self.get_torrents()
for t in torrents:
t.poll()
def update(self):
"""Refresh rTorrent client info
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self)]
for method in retriever_methods:
multicall.add(method)
multicall.call()
def _build_class_methods(class_obj):
# multicall add class
caller = lambda self, multicall, method, *args:\
multicall.add(method, self.rpc_id, *args)
caller.__doc__ = """Same as Multicall.add(), but with automatic inclusion
of the rpc_id
@param multicall: A L{Multicall} instance
@type: multicall: Multicall
@param method: L{Method} instance or raw rpc method
@type: Method or str
@param args: optional arguments to pass
"""
setattr(class_obj, "multicall_add", caller)
def __compare_rpc_methods(rt_new, rt_old):
from pprint import pprint
rt_new_methods = set(rt_new._get_rpc_methods())
rt_old_methods = set(rt_old._get_rpc_methods())
print("New Methods:")
pprint(rt_new_methods - rt_old_methods)
print("Methods not in new rTorrent:")
pprint(rt_old_methods - rt_new_methods)
def __check_supported_methods(rt):
from pprint import pprint
supported_methods = set([m.rpc_call for m in
methods +
rtorrent.file.methods +
rtorrent.torrent.methods +
rtorrent.tracker.methods +
rtorrent.peer.methods])
all_methods = set(rt._get_rpc_methods())
print("Methods NOT in supported methods")
pprint(all_methods - supported_methods)
print("Supported methods NOT in all methods")
pprint(supported_methods - all_methods)
methods = [
# RETRIEVERS
Method(RTorrent, 'get_xmlrpc_size_limit', 'get_xmlrpc_size_limit'),
Method(RTorrent, 'get_proxy_address', 'get_proxy_address'),
Method(RTorrent, 'get_split_suffix', 'get_split_suffix'),
Method(RTorrent, 'get_up_limit', 'get_upload_rate'),
Method(RTorrent, 'get_max_memory_usage', 'get_max_memory_usage'),
Method(RTorrent, 'get_max_open_files', 'get_max_open_files'),
Method(RTorrent, 'get_min_peers_seed', 'get_min_peers_seed'),
Method(RTorrent, 'get_use_udp_trackers', 'get_use_udp_trackers'),
Method(RTorrent, 'get_preload_min_size', 'get_preload_min_size'),
Method(RTorrent, 'get_max_uploads', 'get_max_uploads'),
Method(RTorrent, 'get_max_peers', 'get_max_peers'),
Method(RTorrent, 'get_timeout_sync', 'get_timeout_sync'),
Method(RTorrent, 'get_receive_buffer_size', 'get_receive_buffer_size'),
Method(RTorrent, 'get_split_file_size', 'get_split_file_size'),
Method(RTorrent, 'get_dht_throttle', 'get_dht_throttle'),
Method(RTorrent, 'get_max_peers_seed', 'get_max_peers_seed'),
Method(RTorrent, 'get_min_peers', 'get_min_peers'),
Method(RTorrent, 'get_tracker_numwant', 'get_tracker_numwant'),
Method(RTorrent, 'get_max_open_sockets', 'get_max_open_sockets'),
Method(RTorrent, 'get_session', 'get_session'),
Method(RTorrent, 'get_ip', 'get_ip'),
Method(RTorrent, 'get_scgi_dont_route', 'get_scgi_dont_route'),
Method(RTorrent, 'get_hash_read_ahead', 'get_hash_read_ahead'),
Method(RTorrent, 'get_http_cacert', 'get_http_cacert'),
Method(RTorrent, 'get_dht_port', 'get_dht_port'),
Method(RTorrent, 'get_handshake_log', 'get_handshake_log'),
Method(RTorrent, 'get_preload_type', 'get_preload_type'),
Method(RTorrent, 'get_max_open_http', 'get_max_open_http'),
Method(RTorrent, 'get_http_capath', 'get_http_capath'),
Method(RTorrent, 'get_max_downloads_global', 'get_max_downloads_global'),
Method(RTorrent, 'get_name', 'get_name'),
Method(RTorrent, 'get_session_on_completion', 'get_session_on_completion'),
Method(RTorrent, 'get_down_limit', 'get_download_rate'),
Method(RTorrent, 'get_down_total', 'get_down_total'),
Method(RTorrent, 'get_up_rate', 'get_up_rate'),
Method(RTorrent, 'get_hash_max_tries', 'get_hash_max_tries'),
Method(RTorrent, 'get_peer_exchange', 'get_peer_exchange'),
Method(RTorrent, 'get_down_rate', 'get_down_rate'),
Method(RTorrent, 'get_connection_seed', 'get_connection_seed'),
Method(RTorrent, 'get_http_proxy', 'get_http_proxy'),
Method(RTorrent, 'get_stats_preloaded', 'get_stats_preloaded'),
Method(RTorrent, 'get_timeout_safe_sync', 'get_timeout_safe_sync'),
Method(RTorrent, 'get_hash_interval', 'get_hash_interval'),
Method(RTorrent, 'get_port_random', 'get_port_random'),
Method(RTorrent, 'get_directory', 'get_directory'),
Method(RTorrent, 'get_port_open', 'get_port_open'),
Method(RTorrent, 'get_max_file_size', 'get_max_file_size'),
Method(RTorrent, 'get_stats_not_preloaded', 'get_stats_not_preloaded'),
Method(RTorrent, 'get_memory_usage', 'get_memory_usage'),
Method(RTorrent, 'get_connection_leech', 'get_connection_leech'),
Method(RTorrent, 'get_check_hash', 'get_check_hash',
boolean=True,
),
Method(RTorrent, 'get_session_lock', 'get_session_lock'),
Method(RTorrent, 'get_preload_required_rate', 'get_preload_required_rate'),
Method(RTorrent, 'get_max_uploads_global', 'get_max_uploads_global'),
Method(RTorrent, 'get_send_buffer_size', 'get_send_buffer_size'),
Method(RTorrent, 'get_port_range', 'get_port_range'),
Method(RTorrent, 'get_max_downloads_div', 'get_max_downloads_div'),
Method(RTorrent, 'get_max_uploads_div', 'get_max_uploads_div'),
Method(RTorrent, 'get_safe_sync', 'get_safe_sync'),
Method(RTorrent, 'get_bind', 'get_bind'),
Method(RTorrent, 'get_up_total', 'get_up_total'),
Method(RTorrent, 'get_client_version', 'system.client_version'),
Method(RTorrent, 'get_library_version', 'system.library_version'),
Method(RTorrent, 'get_api_version', 'system.api_version',
min_version=(0, 9, 1)
),
Method(RTorrent, "get_system_time", "system.time",
docstring="""Get the current time of the system rTorrent is running on
@return: time (posix)
@rtype: int""",
),
# MODIFIERS
Method(RTorrent, 'set_http_proxy', 'set_http_proxy'),
Method(RTorrent, 'set_max_memory_usage', 'set_max_memory_usage'),
Method(RTorrent, 'set_max_file_size', 'set_max_file_size'),
Method(RTorrent, 'set_bind', 'set_bind',
docstring="""Set address bind
@param arg: ip address
@type arg: str
""",
),
Method(RTorrent, 'set_up_limit', 'set_upload_rate',
docstring="""Set global upload limit (in bytes)
@param arg: speed limit
@type arg: int
""",
),
Method(RTorrent, 'set_port_random', 'set_port_random'),
Method(RTorrent, 'set_connection_leech', 'set_connection_leech'),
Method(RTorrent, 'set_tracker_numwant', 'set_tracker_numwant'),
Method(RTorrent, 'set_max_peers', 'set_max_peers'),
Method(RTorrent, 'set_min_peers', 'set_min_peers'),
Method(RTorrent, 'set_max_uploads_div', 'set_max_uploads_div'),
Method(RTorrent, 'set_max_open_files', 'set_max_open_files'),
Method(RTorrent, 'set_max_downloads_global', 'set_max_downloads_global'),
Method(RTorrent, 'set_session_lock', 'set_session_lock'),
Method(RTorrent, 'set_session', 'set_session'),
Method(RTorrent, 'set_split_suffix', 'set_split_suffix'),
Method(RTorrent, 'set_hash_interval', 'set_hash_interval'),
Method(RTorrent, 'set_handshake_log', 'set_handshake_log'),
Method(RTorrent, 'set_port_range', 'set_port_range'),
Method(RTorrent, 'set_min_peers_seed', 'set_min_peers_seed'),
Method(RTorrent, 'set_scgi_dont_route', 'set_scgi_dont_route'),
Method(RTorrent, 'set_preload_min_size', 'set_preload_min_size'),
Method(RTorrent, 'set_log.tracker', 'set_log.tracker'),
Method(RTorrent, 'set_max_uploads_global', 'set_max_uploads_global'),
Method(RTorrent, 'set_down_limit', 'set_download_rate',
docstring="""Set global download limit (in bytes)
@param arg: speed limit
@type arg: int
""",
),
Method(RTorrent, 'set_preload_required_rate', 'set_preload_required_rate'),
Method(RTorrent, 'set_hash_read_ahead', 'set_hash_read_ahead'),
Method(RTorrent, 'set_max_peers_seed', 'set_max_peers_seed'),
Method(RTorrent, 'set_max_uploads', 'set_max_uploads'),
Method(RTorrent, 'set_session_on_completion', 'set_session_on_completion'),
Method(RTorrent, 'set_max_open_http', 'set_max_open_http'),
Method(RTorrent, 'set_directory', 'set_directory'),
Method(RTorrent, 'set_http_cacert', 'set_http_cacert'),
Method(RTorrent, 'set_dht_throttle', 'set_dht_throttle'),
Method(RTorrent, 'set_hash_max_tries', 'set_hash_max_tries'),
Method(RTorrent, 'set_proxy_address', 'set_proxy_address'),
Method(RTorrent, 'set_split_file_size', 'set_split_file_size'),
Method(RTorrent, 'set_receive_buffer_size', 'set_receive_buffer_size'),
Method(RTorrent, 'set_use_udp_trackers', 'set_use_udp_trackers'),
Method(RTorrent, 'set_connection_seed', 'set_connection_seed'),
Method(RTorrent, 'set_xmlrpc_size_limit', 'set_xmlrpc_size_limit'),
Method(RTorrent, 'set_xmlrpc_dialect', 'set_xmlrpc_dialect'),
Method(RTorrent, 'set_safe_sync', 'set_safe_sync'),
Method(RTorrent, 'set_http_capath', 'set_http_capath'),
Method(RTorrent, 'set_send_buffer_size', 'set_send_buffer_size'),
Method(RTorrent, 'set_max_downloads_div', 'set_max_downloads_div'),
Method(RTorrent, 'set_name', 'set_name'),
Method(RTorrent, 'set_port_open', 'set_port_open'),
Method(RTorrent, 'set_timeout_sync', 'set_timeout_sync'),
Method(RTorrent, 'set_peer_exchange', 'set_peer_exchange'),
Method(RTorrent, 'set_ip', 'set_ip',
docstring="""Set IP
@param arg: ip address
@type arg: str
""",
),
Method(RTorrent, 'set_timeout_safe_sync', 'set_timeout_safe_sync'),
Method(RTorrent, 'set_preload_type', 'set_preload_type'),
Method(RTorrent, 'set_check_hash', 'set_check_hash',
docstring="""Enable/Disable hash checking on finished torrents
@param arg: True to enable, False to disable
@type arg: bool
""",
boolean=True,
),
]
_all_methods_list = [methods,
rtorrent.file.methods,
rtorrent.torrent.methods,
rtorrent.tracker.methods,
rtorrent.peer.methods,
]
class_methods_pair = {
RTorrent: methods,
rtorrent.file.File: rtorrent.file.methods,
rtorrent.torrent.Torrent: rtorrent.torrent.methods,
rtorrent.tracker.Tracker: rtorrent.tracker.methods,
rtorrent.peer.Peer: rtorrent.peer.methods,
}
for c in class_methods_pair.keys():
rtorrent.rpc._build_rpc_methods(c, class_methods_pair[c])
_build_class_methods(c)
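# Usage sketch: placeholder endpoint and torrent URL; assumes the method
# documented above is load_torrent_simple(), the fast path that its
# docstring contrasts with load_torrent().
if __name__ == "__main__":
    rt = RTorrent("http://localhost:8080/RPC2")
    rt.load_torrent_simple("http://example.com/a.torrent", "url", start=True)
    for t in rt.get_torrents():
        print("{0} {1}".format(t.info_hash, t.name))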

View File

@@ -0,0 +1,86 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from rtorrent.compat import is_py3
def bool_to_int(value):
"""Translates python booleans to RPC-safe integers"""
if value is True:
return("1")
elif value is False:
return("0")
else:
return(value)
def cmd_exists(cmds_list, cmd):
"""Check if given command is in list of available commands
@param cmds_list: see L{RTorrent._rpc_methods}
@type cmds_list: list
@param cmd: name of command to be checked
@type cmd: str
@return: bool
"""
return(cmd in cmds_list)
def find_torrent(info_hash, torrent_list):
"""Find torrent file in given list of Torrent classes
@param info_hash: info hash of torrent
@type info_hash: str
@param torrent_list: list of L{Torrent} instances (see L{RTorrent.get_torrents})
@type torrent_list: list
@return: L{Torrent} instance, or None if not found
"""
for t in torrent_list:
if t.info_hash == info_hash:
return t
return None
def is_valid_port(port):
"""Check if given port is valid"""
return(0 <= int(port) <= 65535)
def convert_version_tuple_to_str(t):
return(".".join([str(n) for n in t]))
def safe_repr(fmt, *args, **kwargs):
""" Formatter that handles unicode arguments """
if not is_py3():
# unicode fmt can take str args, str fmt cannot take unicode args
fmt = fmt.decode("utf-8")
out = fmt.format(*args, **kwargs)
return out.encode("utf-8")
else:
return fmt.format(*args, **kwargs)
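# Sanity-check sketch for the helpers above; runnable against this module.
if __name__ == "__main__":
    assert bool_to_int(True) == "1" and bool_to_int(False) == "0"
    assert bool_to_int(42) == 42  # non-boolean values pass through unchanged
    assert is_valid_port(65535) and not is_valid_port(70000)
    assert convert_version_tuple_to_str((0, 9, 2)) == "0.9.2"
    assert find_torrent("HASH", []) is None  # returns None when absent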

View File

@@ -0,0 +1,30 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import sys
def is_py3():
return sys.version_info[0] == 3
if is_py3():
import xmlrpc.client as xmlrpclib
else:
import xmlrpclib

View File

@@ -0,0 +1,40 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from rtorrent.common import convert_version_tuple_to_str
class RTorrentVersionError(Exception):
def __init__(self, min_version, cur_version):
self.min_version = min_version
self.cur_version = cur_version
self.msg = "Minimum version required: {0}".format(
convert_version_tuple_to_str(min_version))
def __str__(self):
return(self.msg)
class MethodError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return(self.msg)

View File

@@ -0,0 +1,91 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rtorrent.rpc
from rtorrent.common import safe_repr
Method = rtorrent.rpc.Method
class File:
"""Represents an individual file within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, index, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent the file is associated with
self.index = index # : The position of the file within the file list
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.rpc_id = "{0}:f{1}".format(
self.info_hash, self.index) # : unique id to pass to rTorrent
def update(self):
"""Refresh file data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
def __repr__(self):
return safe_repr("File(index={0} path=\"{1}\")", self.index, self.path)
methods = [
# RETRIEVERS
Method(File, 'get_last_touched', 'f.get_last_touched'),
Method(File, 'get_range_second', 'f.get_range_second'),
Method(File, 'get_size_bytes', 'f.get_size_bytes'),
Method(File, 'get_priority', 'f.get_priority'),
Method(File, 'get_match_depth_next', 'f.get_match_depth_next'),
Method(File, 'is_resize_queued', 'f.is_resize_queued',
boolean=True,
),
Method(File, 'get_range_first', 'f.get_range_first'),
Method(File, 'get_match_depth_prev', 'f.get_match_depth_prev'),
Method(File, 'get_path', 'f.get_path'),
Method(File, 'get_completed_chunks', 'f.get_completed_chunks'),
Method(File, 'get_path_components', 'f.get_path_components'),
Method(File, 'is_created', 'f.is_created',
boolean=True,
),
Method(File, 'is_open', 'f.is_open',
boolean=True,
),
Method(File, 'get_size_chunks', 'f.get_size_chunks'),
Method(File, 'get_offset', 'f.get_offset'),
Method(File, 'get_frozen_path', 'f.get_frozen_path'),
Method(File, 'get_path_depth', 'f.get_path_depth'),
Method(File, 'is_create_queued', 'f.is_create_queued',
boolean=True,
),
# MODIFIERS
]

View File

@@ -0,0 +1,84 @@
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import rtorrent.rpc
Method = rtorrent.rpc.Method
class Group:
__name__ = 'Group'
def __init__(self, _rt_obj, name):
self._rt_obj = _rt_obj
self.name = name
self.methods = [
# RETRIEVERS
Method(Group, 'get_max', 'group.' + self.name + '.ratio.max', varname='max'),
Method(Group, 'get_min', 'group.' + self.name + '.ratio.min', varname='min'),
Method(Group, 'get_upload', 'group.' + self.name + '.ratio.upload', varname='upload'),
# MODIFIERS
Method(Group, 'set_max', 'group.' + self.name + '.ratio.max.set', varname='max'),
Method(Group, 'set_min', 'group.' + self.name + '.ratio.min.set', varname='min'),
Method(Group, 'set_upload', 'group.' + self.name + '.ratio.upload.set', varname='upload')
]
rtorrent.rpc._build_rpc_methods(self, self.methods)
# Setup multicall_add method
caller = lambda multicall, method, *args: \
multicall.add(method, *args)
setattr(self, "multicall_add", caller)
def _get_prefix(self):
return 'group.' + self.name + '.ratio.'
def update(self):
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in self.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method)
multicall.call()
def enable(self):
p = self._rt_obj._get_conn()
return getattr(p, self._get_prefix() + 'enable')()
def disable(self):
p = self._rt_obj._get_conn()
return getattr(p, self._get_prefix() + 'disable')()
def set_command(self, *methods):
methods = [m + '=' for m in methods]
m = rtorrent.rpc.Multicall(self)
self.multicall_add(
m, 'system.method.set',
self._get_prefix() + 'command',
*methods
)
return(m.call()[-1])
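# Usage sketch: how create_group()/get_group() on RTorrent tie into this
# class. The group name and ratio limits are placeholder values.
def _example_ratio_group(rt):
    rt.create_group("seedbox")  # persistent view + ratio group
    group = rt.get_group("seedbox")  # Group instance with fields populated
    group.set_min(150)  # -> group.seedbox.ratio.min.set
    group.set_max(400)  # -> group.seedbox.ratio.max.set
    group.enable()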

View File

@@ -0,0 +1,281 @@
# Copyright (C) 2011 by clueless <clueless.nospam ! mail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# Version: 20111107
#
# Changelog
# ---------
# 2011-11-07 - Added support for Python2 (tested on 2.6)
# 2011-10-03 - Fixed: moved check for end of list at the top of the while loop
# in _decode_list (in case the list is empty) (Chris Lucas)
# - Converted dictionary keys to str
# 2011-04-24 - Changed date format to YYYY-MM-DD for versioning, bigger
# integer denotes a newer version
# - Fixed a bug that would treat False as an integral type but
# encode it using the 'False' string, attempting to encode a
# boolean now results in an error
# - Fixed a bug where an integer value of 0 in a list or
# dictionary resulted in a parse error while decoding
#
# 2011-04-03 - Original release
import sys
_py3 = sys.version_info[0] == 3
if _py3:
_VALID_STRING_TYPES = (str,)
else:
_VALID_STRING_TYPES = (str, unicode) # @UndefinedVariable
_TYPE_INT = 1
_TYPE_STRING = 2
_TYPE_LIST = 3
_TYPE_DICTIONARY = 4
_TYPE_END = 5
_TYPE_INVALID = 6
# Function to determine the type of the next value/item
# Arguments:
# char First character of the string that is to be decoded
# Return value:
# Returns an integer that describes what type the next value/item is
def _gettype(char):
if not isinstance(char, int):
char = ord(char)
if char == 0x6C: # 'l'
return _TYPE_LIST
elif char == 0x64: # 'd'
return _TYPE_DICTIONARY
elif char == 0x69: # 'i'
return _TYPE_INT
elif char == 0x65: # 'e'
return _TYPE_END
elif 0x30 <= char <= 0x39: # '0'-'9'
return _TYPE_STRING
else:
return _TYPE_INVALID
# Function to parse a string from the bencoded data
# Arguments:
# data bencoded data, must be guaranteed to be a string
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed string
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_string(data):
end = 1
# if py3, data[end] is going to be an int
# if py2, data[end] will be a string
if _py3:
char = 0x3A
else:
char = chr(0x3A)
while data[end] != char: # ':'
end = end + 1
strlen = int(data[:end])
return (data[end + 1:strlen + end + 1], data[strlen + end + 1:])
# Function to parse an integer from the bencoded data
# Arguments:
# data bencoded data, must be guaranteed to be an integer
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed string
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_int(data):
end = 1
# if py3, data[end] is going to be an int
# if py2, data[end] will be a string
if _py3:
char = 0x65
else:
char = chr(0x65)
while data[end] != char: # 'e'
end = end + 1
return (int(data[1:end]), data[end + 1:])
# Function to parse a bencoded list
# Arguments:
# data bencoded data, must be guaranteed to be the start of a list
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed list
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_list(data):
x = []
overflow = data[1:]
while True: # Loop over the data
if _gettype(overflow[0]) == _TYPE_END: # - Break if we reach the end of the list
return (x, overflow[1:]) # and return the list and overflow
value, overflow = _decode(overflow) #
if isinstance(value, bool) or not overflow: # - if we have a parse error
return (False, False) # Die with error
else: # - Otherwise
x.append(value) # add the value to the list
# Function to parse a bencoded dictionary
# Arguments:
# data bencoded data, must be guaranteed to be the start of a dictionary
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed dictionary
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_dict(data):
x = {}
overflow = data[1:]
while True: # Loop over the data
if _gettype(overflow[0]) != _TYPE_STRING: # - If the key is not a string
return (False, False) # Die with error
key, overflow = _decode(overflow) #
if key is False or not overflow: # - If parse error
return (False, False) # Die with error
value, overflow = _decode(overflow) #
if isinstance(value, bool) or not overflow: # - If parse error
print("Error parsing value")
print(value)
print(overflow)
return (False, False) # Die with error
else:
# don't use bytes for the key
key = key.decode()
x[key] = value
if _gettype(overflow[0]) == _TYPE_END:
return (x, overflow[1:])
# Arguments:
# data bencoded data in bytes format
# Return Values:
# Returns a tuple, the first member is the parsed data, could be a string,
# an integer, a list or a dictionary, or a combination of those
# The second member is the leftover of parsing, if everything parses correctly this
# should be an empty byte string
def _decode(data):
btype = _gettype(data[0])
if btype == _TYPE_INT:
return _decode_int(data)
elif btype == _TYPE_STRING:
return _decode_string(data)
elif btype == _TYPE_LIST:
return _decode_list(data)
elif btype == _TYPE_DICTIONARY:
return _decode_dict(data)
else:
return (False, False)
# Function to decode bencoded data
# Arguments:
# data bencoded data, can be str or bytes
# Return Values:
# Returns the decoded data on success; this could be bytes, int, dict or list,
# or a combination of those
# If an error occurs the return value is False
def decode(data):
# if isinstance(data, str):
# data = data.encode()
decoded, overflow = _decode(data)
return decoded
# Args: data as integer
# return: encoded byte string
def _encode_int(data):
return b'i' + str(data).encode() + b'e'
# Args: data as string or bytes
# Return: encoded byte string
def _encode_string(data):
return str(len(data)).encode() + b':' + data
# Args: data as list
# Return: Encoded byte string, false on error
def _encode_list(data):
elist = b'l'
for item in data:
eitem = encode(item)
if eitem == False:
return False
elist += eitem
return elist + b'e'
# Args: data as dict
# Return: encoded byte string, false on error
def _encode_dict(data):
edict = b'd'
keys = []
for key in data:
if not isinstance(key, _VALID_STRING_TYPES) and not isinstance(key, bytes):
return False
keys.append(key)
keys.sort()
for key in keys:
ekey = encode(key)
eitem = encode(data[key])
if ekey == False or eitem == False:
return False
edict += ekey + eitem
return edict + b'e'
# Function to encode a variable in bencoding
# Arguments:
# data Variable to be encoded, can be a list, dict, str, bytes, int or a combination of those
# Return Values:
# Returns the encoded data as a byte string when successful
# If an error occurs the return value is False
def encode(data):
if isinstance(data, bool):
return False
elif isinstance(data, int):
return _encode_int(data)
elif isinstance(data, bytes):
return _encode_string(data)
elif isinstance(data, _VALID_STRING_TYPES):
return _encode_string(data.encode())
elif isinstance(data, list):
return _encode_list(data)
elif isinstance(data, dict):
return _encode_dict(data)
else:
return False
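# Round-trip sketch, runnable against this module: dictionary keys come
# back as str (see _decode_dict) while string values stay bytes.
if __name__ == "__main__":
    data = {"announce": "http://tracker.example/ann", "info": {"length": 42}}
    blob = encode(data)
    assert blob == b"d8:announce26:http://tracker.example/ann4:infod6:lengthi42eee"
    decoded = decode(blob)
    assert decoded["announce"] == b"http://tracker.example/ann"
    assert decoded["info"]["length"] == 42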

View File

@@ -0,0 +1,160 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from rtorrent.compat import is_py3
import os.path
import re
import rtorrent.lib.bencode as bencode
import hashlib
if is_py3():
from urllib.request import urlopen # @UnresolvedImport @UnusedImport
else:
from urllib2 import urlopen # @UnresolvedImport @Reimport
class TorrentParser():
def __init__(self, torrent):
"""Decode and parse given torrent
@param torrent: handles: urls, file paths, string of torrent data
@type torrent: str
@raise AssertionError: Can be raised for a couple of reasons:
- If _get_raw_torrent() couldn't figure out
what X{torrent} is
- if X{torrent} isn't a valid bencoded torrent file
"""
self.torrent = torrent
self._raw_torrent = None # : raw torrent data (bytes)
self._torrent_decoded = None # : torrent data decoded into a dict
self.file_type = None
self._get_raw_torrent()
assert self._raw_torrent is not None, "Couldn't get raw_torrent."
if self._torrent_decoded is None:
self._decode_torrent()
assert isinstance(self._torrent_decoded, dict), "Invalid torrent file."
self._parse_torrent()
def _is_raw(self):
raw = False
if isinstance(self.torrent, (str, bytes)):
if isinstance(self._decode_torrent(self.torrent), dict):
raw = True
else:
# reset self._torrent_decoded (currently equals False)
self._torrent_decoded = None
return(raw)
def _get_raw_torrent(self):
"""Get raw torrent data by determining what self.torrent is"""
# already raw?
if self._is_raw():
self.file_type = "raw"
self._raw_torrent = self.torrent
return
# local file?
if os.path.isfile(self.torrent):
self.file_type = "file"
with open(self.torrent, "rb") as f:
    self._raw_torrent = f.read()
# url?
elif re.search(r"^(http|ftp)://", self.torrent, re.I):
self.file_type = "url"
self._raw_torrent = urlopen(self.torrent).read()
def _decode_torrent(self, raw_torrent=None):
if raw_torrent is None:
raw_torrent = self._raw_torrent
self._torrent_decoded = bencode.decode(raw_torrent)
return(self._torrent_decoded)
def _calc_info_hash(self):
self.info_hash = None
if "info" in self._torrent_decoded.keys():
info_encoded = bencode.encode(self._torrent_decoded["info"])
if info_encoded:
self.info_hash = hashlib.sha1(info_encoded).hexdigest().upper()
return(self.info_hash)
def _parse_torrent(self):
for k in self._torrent_decoded:
key = k.replace(" ", "_").lower()
setattr(self, key, self._torrent_decoded[k])
self._calc_info_hash()
class NewTorrentParser(object):
@staticmethod
def _read_file(fp):
return fp.read()
@staticmethod
def _write_file(fp, data):
    # write the given data out to the file object (currently unused)
    fp.write(data)
    return fp
@staticmethod
def _decode_torrent(data):
return bencode.decode(data)
def __init__(self, input):
self.input = input
self._raw_torrent = None
self._decoded_torrent = None
self._hash_outdated = False
if isinstance(self.input, (str, bytes)):
# path to file?
if os.path.isfile(self.input):
self._raw_torrent = self._read_file(open(self.input, "rb"))
else:
# assume input was the raw torrent data (do we really want
# this?)
self._raw_torrent = self.input
# file-like object?
elif hasattr(self.input, "read"):
self._raw_torrent = self._read_file(self.input)
assert self._raw_torrent is not None, "Invalid input: input must be a path or a file-like object"
self._decoded_torrent = self._decode_torrent(self._raw_torrent)
assert isinstance(
self._decoded_torrent, dict), "File could not be decoded"
def _calc_info_hash(self):
self.info_hash = None
info_dict = self._decoded_torrent["info"]
self.info_hash = hashlib.sha1(bencode.encode(
info_dict)).hexdigest().upper()
return(self.info_hash)
def set_tracker(self, tracker):
self._decoded_torrent["announce"] = tracker
def get_tracker(self):
return self._decoded_torrent.get("announce")
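# Usage sketch ("example.torrent" is a placeholder path): the parser takes
# a path, a URL, or raw bencoded data and exposes the decoded fields as
# attributes after parsing.
if __name__ == "__main__":
    tp = TorrentParser("example.torrent")
    print(tp.file_type)  # "file" for a local path
    print(tp.info_hash)  # uppercase SHA-1 hex digest of the info dict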

View File

@@ -0,0 +1,73 @@
#
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from base64 import encodestring
import string
import xmlrpclib
class BasicAuthTransport(xmlrpclib.Transport):
def __init__(self, username=None, password=None):
xmlrpclib.Transport.__init__(self)
self.username = username
self.password = password
def send_auth(self, h):
if self.username is not None and self.password is not None:
h.putheader('AUTHORIZATION', "Basic %s" % string.replace(
encodestring("%s:%s" % (self.username, self.password)),
"\012", ""
))
def single_request(self, host, handler, request_body, verbose=0):
# issue XML-RPC request
h = self.make_connection(host)
if verbose:
h.set_debuglevel(1)
try:
self.send_request(h, handler, request_body)
self.send_host(h, host)
self.send_user_agent(h)
self.send_auth(h)
self.send_content(h, request_body)
response = h.getresponse(buffering=True)
if response.status == 200:
self.verbose = verbose
return self.parse_response(response)
except xmlrpclib.Fault:
raise
except Exception:
self.close()
raise
#discard any response data and raise exception
if response.getheader("content-length", 0):
response.read()
raise xmlrpclib.ProtocolError(
host + handler,
response.status, response.reason,
response.msg,
)

View File

@@ -0,0 +1,23 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from rtorrent.compat import xmlrpclib
HTTPServerProxy = xmlrpclib.ServerProxy
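# Usage sketch (endpoint and credentials are placeholders): pairing this
# proxy alias with the BasicAuthTransport defined in the module above.
#
#   server = HTTPServerProxy(
#       "http://localhost:8080/RPC2",
#       transport=BasicAuthTransport("user", "secret"),
#   )
#   print(server.system.listMethods())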

View File

@@ -0,0 +1,219 @@
#!/usr/bin/python
# rtorrent_xmlrpc
# (c) 2011 Roger Que <alerante@bellsouth.net>
#
# Modified portions:
# (c) 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Python module for interacting with rtorrent's XML-RPC interface
# directly over SCGI, instead of through an HTTP server intermediary.
# Inspired by Glenn Washburn's xmlrpc2scgi.py [1], but subclasses the
# built-in xmlrpclib classes so that it is compatible with features
# such as MultiCall objects.
#
# [1] <http://libtorrent.rakshasa.no/wiki/UtilsXmlrpc2scgi>
#
# Usage: server = SCGIServerProxy('scgi://localhost:7000/')
# server = SCGIServerProxy('scgi:///path/to/scgi.sock')
# print server.system.listMethods()
# mc = xmlrpclib.MultiCall(server)
# mc.get_up_rate()
# mc.get_down_rate()
# print mc()
#
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# In addition, as a special exception, the copyright holders give
# permission to link the code of portions of this program with the
# OpenSSL library under certain conditions as described in each
# individual source file, and distribute linked combinations
# including the two.
#
# You must obey the GNU General Public License in all respects for
# all of the code used other than OpenSSL. If you modify file(s)
# with this exception, you may extend this exception to your version
# of the file(s), but you are not obligated to do so. If you do not
# wish to do so, delete this exception statement from your version.
# If you delete this exception statement from all source files in the
# program, then also delete it here.
#
#
#
# Portions based on Python's xmlrpclib:
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
import httplib
import re
import socket
import urllib
import xmlrpclib
import errno
class SCGITransport(xmlrpclib.Transport):
# Added request() from Python 2.7 xmlrpclib here to backport to Python 2.6
def request(self, host, handler, request_body, verbose=0):
#retry request once if cached connection has gone cold
for i in (0, 1):
try:
return self.single_request(host, handler, request_body, verbose)
except socket.error, e:
if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
raise
except httplib.BadStatusLine: #close after we sent request
if i:
raise
def single_request(self, host, handler, request_body, verbose=0):
# Add SCGI headers to the request.
headers = {'CONTENT_LENGTH': str(len(request_body)), 'SCGI': '1'}
header = '\x00'.join(('%s\x00%s' % item for item in headers.iteritems())) + '\x00'
header = '%d:%s' % (len(header), header)
request_body = '%s,%s' % (header, request_body)
sock = None
try:
if host:
host, port = urllib.splitport(host)
addrinfo = socket.getaddrinfo(host, int(port), socket.AF_INET,
socket.SOCK_STREAM)
sock = socket.socket(*addrinfo[0][:3])
sock.connect(addrinfo[0][4])
else:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(handler)
self.verbose = verbose
sock.send(request_body)
return self.parse_response(sock.makefile())
finally:
if sock:
sock.close()
def parse_response(self, response):
p, u = self.getparser()
response_body = ''
while True:
data = response.read(1024)
if not data:
break
response_body += data
# Remove SCGI headers from the response.
response_header, response_body = re.split(r'\n\s*?\n', response_body,
maxsplit=1)
if self.verbose:
print 'body:', repr(response_body)
p.feed(response_body)
p.close()
return u.close()
class SCGIServerProxy(xmlrpclib.ServerProxy):
def __init__(self, uri, transport=None, encoding=None, verbose=False,
allow_none=False, use_datetime=False):
type, uri = urllib.splittype(uri)
if type not in ('scgi',):
raise IOError('unsupported XML-RPC protocol')
self.__host, self.__handler = urllib.splithost(uri)
if not self.__handler:
self.__handler = '/'
if transport is None:
transport = SCGITransport(use_datetime=use_datetime)
self.__transport = transport
self.__encoding = encoding
self.__verbose = verbose
self.__allow_none = allow_none
def __close(self):
self.__transport.close()
def __request(self, methodname, params):
# call a method on the remote server
request = xmlrpclib.dumps(params, methodname, encoding=self.__encoding,
allow_none=self.__allow_none)
response = self.__transport.request(
self.__host,
self.__handler,
request,
verbose=self.__verbose
)
if len(response) == 1:
response = response[0]
return response
def __repr__(self):
return (
"<SCGIServerProxy for %s%s>" %
(self.__host, self.__handler)
)
__str__ = __repr__
def __getattr__(self, name):
# magic method dispatcher
return xmlrpclib._Method(self.__request, name)
# note: to call a remote object with a non-standard name, use
# result getattr(server, "strange-python-name")(args)
def __call__(self, attr):
"""A workaround to get special attributes on the ServerProxy
without interfering with the magic __getattr__
"""
if attr == "close":
return self.__close
elif attr == "transport":
return self.__transport
raise AttributeError("Attribute %r not found" % (attr,))

View File

@@ -0,0 +1,98 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rtorrent.rpc
from rtorrent.common import safe_repr
Method = rtorrent.rpc.Method
class Peer:
"""Represents an individual peer within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent the peer is associated with
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.rpc_id = "{0}:p{1}".format(
self.info_hash, self.id) # : unique id to pass to rTorrent
def __repr__(self):
return safe_repr("Peer(id={0})", self.id)
def update(self):
"""Refresh peer data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
methods = [
# RETRIEVERS
Method(Peer, 'is_preferred', 'p.is_preferred',
boolean=True,
),
Method(Peer, 'get_down_rate', 'p.get_down_rate'),
Method(Peer, 'is_unwanted', 'p.is_unwanted',
boolean=True,
),
Method(Peer, 'get_peer_total', 'p.get_peer_total'),
Method(Peer, 'get_peer_rate', 'p.get_peer_rate'),
Method(Peer, 'get_port', 'p.get_port'),
Method(Peer, 'is_snubbed', 'p.is_snubbed',
boolean=True,
),
Method(Peer, 'get_id_html', 'p.get_id_html'),
Method(Peer, 'get_up_rate', 'p.get_up_rate'),
Method(Peer, 'is_banned', 'p.banned',
boolean=True,
),
Method(Peer, 'get_completed_percent', 'p.get_completed_percent'),
Method(Peer, 'completed_percent', 'p.completed_percent'),
Method(Peer, 'get_id', 'p.get_id'),
Method(Peer, 'is_obfuscated', 'p.is_obfuscated',
boolean=True,
),
Method(Peer, 'get_down_total', 'p.get_down_total'),
Method(Peer, 'get_client_version', 'p.get_client_version'),
Method(Peer, 'get_address', 'p.get_address'),
Method(Peer, 'is_incoming', 'p.is_incoming',
boolean=True,
),
Method(Peer, 'is_encrypted', 'p.is_encrypted',
boolean=True,
),
Method(Peer, 'get_options_str', 'p.get_options_str'),
Method(Peer, 'get_client_version', 'p.client_version'),
Method(Peer, 'get_up_total', 'p.get_up_total'),
# MODIFIERS
]
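# Usage sketch: Peer objects are built by Torrent.get_peers() (defined
# later in this changeset); get_varname() maps e.g. p.get_address onto an
# "address" attribute once the multicall results are assigned.
def _example_dump_peers(torrent):
    for peer in torrent.get_peers():
        print("{0} down={1}".format(peer.address, peer.down_rate))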

View File

@@ -0,0 +1,319 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import inspect
import rtorrent
import re
from rtorrent.common import bool_to_int, convert_version_tuple_to_str,\
safe_repr
from rtorrent.err import MethodError
from rtorrent.compat import xmlrpclib
def get_varname(rpc_call):
"""Transform rpc method into variable name.
@newfield example: Example
@example: if the name of the rpc method is 'p.get_down_rate', the variable
name will be 'down_rate'
"""
# extract variable name from xmlrpc func name
r = re.search(
    r"([ptdf]\.|system\.|get_|is_|set_)+([^=]*)", rpc_call, re.I)
if r:
return(r.groups()[-1])
else:
return(None)
def _handle_unavailable_rpc_method(method, rt_obj):
msg = "Method isn't available."
if rt_obj._get_client_version_tuple() < method.min_version:
msg = "This method is only available in " \
"RTorrent version v{0} or later".format(
convert_version_tuple_to_str(method.min_version))
raise MethodError(msg)
class DummyClass:
def __init__(self):
pass
class Method:
"""Represents an individual RPC method"""
def __init__(self, _class, method_name,
rpc_call, docstring=None, varname=None, **kwargs):
self._class = _class # : Class this method is associated with
self.class_name = _class.__name__
self.method_name = method_name # : name of public-facing method
self.rpc_call = rpc_call # : name of rpc method
self.docstring = docstring # : docstring for rpc method (optional)
self.varname = varname # : attribute name under which the method's result is stored (derived from rpc_call when not given)
self.min_version = kwargs.get("min_version", (
0, 0, 0)) # : Minimum version of rTorrent required
self.boolean = kwargs.get("boolean", False) # : returns boolean value?
self.post_process_func = kwargs.get(
"post_process_func", None) # : custom post process function
self.aliases = kwargs.get(
"aliases", []) # : aliases for method (optional)
self.required_args = []
#: Arguments required when calling the method (not utilized)
self.method_type = self._get_method_type()
if self.varname is None:
self.varname = get_varname(self.rpc_call)
assert self.varname is not None, "Couldn't get variable name."
def __repr__(self):
return safe_repr("Method(method_name='{0}', rpc_call='{1}')",
self.method_name, self.rpc_call)
def _get_method_type(self):
"""Determine whether method is a modifier or a retriever"""
if self.method_name[:4] == "set_":
    return('m') # modifier
else:
    return('r') # retriever
def is_modifier(self):
if self.method_type == 'm':
return(True)
else:
return(False)
def is_retriever(self):
if self.method_type == 'r':
return(True)
else:
return(False)
def is_available(self, rt_obj):
if rt_obj._get_client_version_tuple() < self.min_version or \
self.rpc_call not in rt_obj._get_rpc_methods():
return(False)
else:
return(True)
class Multicall:
def __init__(self, class_obj, **kwargs):
self.class_obj = class_obj
if class_obj.__class__.__name__ == "RTorrent":
self.rt_obj = class_obj
else:
self.rt_obj = class_obj._rt_obj
self.calls = []
def add(self, method, *args):
"""Add call to multicall
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str
@param args: call arguments
"""
# if a raw rpc method was given instead of a Method instance,
# try and find the instance for it. And if all else fails, create a
# dummy Method instance
if isinstance(method, str):
result = find_method(method)
# if result not found
if result == -1:
method = Method(DummyClass, method, method)
else:
method = result
# ensure method is available before adding
if not method.is_available(self.rt_obj):
_handle_unavailable_rpc_method(method, self.rt_obj)
self.calls.append((method, args))
def list_calls(self):
for c in self.calls:
print(c)
def call(self):
"""Execute added multicall calls
@return: the results (post-processed), in the order they were added
@rtype: tuple
"""
m = xmlrpclib.MultiCall(self.rt_obj._get_conn())
for call in self.calls:
method, args = call
rpc_call = getattr(method, "rpc_call")
getattr(m, rpc_call)(*args)
results = m()
results = tuple(results)
results_processed = []
for r, c in zip(results, self.calls):
method = c[0] # Method instance
result = process_result(method, r)
results_processed.append(result)
# assign result to class_obj
exists = hasattr(self.class_obj, method.varname)
if not exists or not inspect.ismethod(getattr(self.class_obj, method.varname)):
setattr(self.class_obj, method.varname, result)
return(tuple(results_processed))
def call_method(class_obj, method, *args):
"""Handles single RPC calls
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
@type class_obj: object
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str
"""
if method.is_retriever():
args = args[:-1]
else:
assert args[-1] is not None, "No argument given."
if class_obj.__class__.__name__ == "RTorrent":
rt_obj = class_obj
else:
rt_obj = class_obj._rt_obj
# check if rpc method is even available
if not method.is_available(rt_obj):
_handle_unavailable_rpc_method(method, rt_obj)
m = Multicall(class_obj)
m.add(method, *args)
# only added one method, only getting one result back
ret_value = m.call()[0]
####### OBSOLETE ##########################################################
# if method.is_retriever():
# #value = process_result(method, ret_value)
# value = ret_value #MultiCall already processed the result
# else:
# # we're setting the user's input to method.varname
# # but we'll return the value that xmlrpc gives us
# value = process_result(method, args[-1])
##########################################################################
return(ret_value)
def find_method(rpc_call):
"""Return L{Method} instance associated with given RPC call"""
method_lists = [
rtorrent.methods,
rtorrent.file.methods,
rtorrent.tracker.methods,
rtorrent.peer.methods,
rtorrent.torrent.methods,
]
for l in method_lists:
for m in l:
if m.rpc_call.lower() == rpc_call.lower():
return(m)
return(-1)
def process_result(method, result):
"""Process given C{B{result}} based on flags set in C{B{method}}
@param method: L{Method} instance
@type method: Method
@param result: result to be processed (the result of given L{Method} instance)
@note: Supported Processing:
- boolean - convert the ones and zeros returned by rTorrent
into Python boolean values
"""
# handle custom post processing function
if method.post_process_func is not None:
result = method.post_process_func(result)
# is boolean?
if method.boolean:
if result in [1, '1']:
result = True
elif result in [0, '0']:
result = False
return(result)
def _build_rpc_methods(class_, method_list):
"""Build glorified aliases to raw RPC methods"""
instance = None
if not inspect.isclass(class_):
instance = class_
class_ = instance.__class__
for m in method_list:
class_name = m.class_name
if class_name != class_.__name__:
continue
if class_name == "RTorrent":
caller = lambda self, arg = None, method = m:\
call_method(self, method, bool_to_int(arg))
elif class_name == "Torrent":
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name in ["Tracker", "File"]:
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name == "Peer":
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name == "Group":
caller = lambda arg = None, method = m: \
call_method(instance, method, bool_to_int(arg))
if m.docstring is None:
m.docstring = ""
# print(m)
docstring = """{0}
@note: Variable where the result for this method is stored: {1}.{2}""".format(
m.docstring,
class_name,
m.varname)
caller.__doc__ = docstring
for method_name in [m.method_name] + list(m.aliases):
if instance is None:
setattr(class_, method_name, caller)
else:
setattr(instance, method_name, caller)
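# Usage sketch (rt is a connected RTorrent instance): raw rpc names given
# to Multicall.add() are resolved via find_method(), falling back to a
# DummyClass-backed Method for unknown calls.
def _example_rates(rt):
    m = Multicall(rt)
    m.add("get_down_rate")
    m.add("get_up_rate")
    down, up = m.call()  # results arrive in the order they were added
    return down, up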

View File

@@ -0,0 +1,517 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import rtorrent.rpc
# from rtorrent.rpc import Method
import rtorrent.peer
import rtorrent.tracker
import rtorrent.file
import rtorrent.compat
from rtorrent.common import safe_repr
Peer = rtorrent.peer.Peer
Tracker = rtorrent.tracker.Tracker
File = rtorrent.file.File
Method = rtorrent.rpc.Method
class Torrent:
"""Represents an individual torrent within a L{RTorrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent
self.rpc_id = self.info_hash # : unique id to pass to rTorrent
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.peers = []
self.trackers = []
self.files = []
self._call_custom_methods()
def __repr__(self):
return safe_repr("Torrent(info_hash=\"{0}\" name=\"{1}\")",
self.info_hash, self.name)
def _call_custom_methods(self):
"""only calls methods that check instance variables."""
self._is_hash_checking_queued()
self._is_started()
self._is_paused()
def get_peers(self):
"""Get list of Peer instances for given torrent.
@return: L{Peer} instances
@rtype: list
@note: also assigns return value to self.peers
"""
self.peers = []
retriever_methods = [m for m in rtorrent.peer.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
        # the second argument must be an empty string (a placeholder rTorrent expects)
m = rtorrent.rpc.Multicall(self)
m.add("p.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
self.peers.append(Peer(
self._rt_obj, self.info_hash, **results_dict))
return(self.peers)
def get_trackers(self):
"""Get list of Tracker instances for given torrent.
@return: L{Tracker} instances
@rtype: list
@note: also assigns return value to self.trackers
"""
self.trackers = []
retriever_methods = [m for m in rtorrent.tracker.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
        # the second argument must be an empty string (a placeholder rTorrent expects)
m = rtorrent.rpc.Multicall(self)
m.add("t.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
self.trackers.append(Tracker(
self._rt_obj, self.info_hash, **results_dict))
return(self.trackers)
def get_files(self):
"""Get list of File instances for given torrent.
@return: L{File} instances
@rtype: list
@note: also assigns return value to self.files
"""
self.files = []
retriever_methods = [m for m in rtorrent.file.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
# 2nd arg can be anything, but it'll return all files in torrent
# regardless
m = rtorrent.rpc.Multicall(self)
m.add("f.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
offset_method_index = retriever_methods.index(
rtorrent.rpc.find_method("f.get_offset"))
# make a list of the offsets of all the files, sort appropriately
offset_list = sorted([r[offset_method_index] for r in results])
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
# get proper index positions for each file (based on the file
# offset)
f_index = offset_list.index(results_dict["offset"])
self.files.append(File(self._rt_obj, self.info_hash,
f_index, **results_dict))
return(self.files)
def set_directory(self, d):
"""Modify download directory
@note: Needs to stop torrent in order to change the directory.
Also doesn't restart after directory is set, that must be called
separately.
"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.set_directory", d)
self.directory = m.call()[-1]
def set_directory_base(self, d):
"""Modify base download directory
@note: Needs to stop torrent in order to change the directory.
Also doesn't restart after directory is set, that must be called
separately.
"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.set_directory_base", d)
def start(self):
"""Start the torrent"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.try_start")
self.multicall_add(m, "d.is_active")
self.active = m.call()[-1]
return(self.active)
def stop(self):
""""Stop the torrent"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.is_active")
self.active = m.call()[-1]
return(self.active)
def pause(self):
"""Pause the torrent"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.pause")
return(m.call()[-1])
def resume(self):
"""Resume the torrent"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.resume")
return(m.call()[-1])
def close(self):
"""Close the torrent and it's files"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.close")
return(m.call()[-1])
def erase(self):
"""Delete the torrent
@note: doesn't delete the downloaded files"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.erase")
return(m.call()[-1])
def check_hash(self):
"""(Re)hash check the torrent"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.check_hash")
return(m.call()[-1])
def poll(self):
"""poll rTorrent to get latest peer/tracker/file information"""
self.get_peers()
self.get_trackers()
self.get_files()
def update(self):
"""Refresh torrent data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
# custom functions (only call private methods, since they only check
# local variables and are therefore faster)
self._call_custom_methods()
def accept_seeders(self, accept_seeds):
"""Enable/disable whether the torrent connects to seeders
@param accept_seeds: enable/disable accepting seeders
@type accept_seeds: bool"""
if accept_seeds:
call = "d.accepting_seeders.enable"
else:
call = "d.accepting_seeders.disable"
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, call)
return(m.call()[-1])
def announce(self):
"""Announce torrent info to tracker(s)"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.tracker_announce")
return(m.call()[-1])
@staticmethod
def _assert_custom_key_valid(key):
        assert isinstance(key, int) and 1 <= key <= 5, \
            "key must be an integer between 1-5"
def get_custom(self, key):
"""
Get custom value
@param key: the index for the custom field (between 1-5)
@type key: int
@rtype: str
"""
self._assert_custom_key_valid(key)
m = rtorrent.rpc.Multicall(self)
field = "custom{0}".format(key)
self.multicall_add(m, "d.get_{0}".format(field))
setattr(self, field, m.call()[-1])
return (getattr(self, field))
def set_custom(self, key, value):
"""
Set custom value
@param key: the index for the custom field (between 1-5)
@type key: int
@param value: the value to be stored
@type value: str
@return: if successful, value will be returned
@rtype: str
"""
self._assert_custom_key_valid(key)
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.set_custom{0}".format(key), value)
return(m.call()[-1])
def set_visible(self, view, visible=True):
p = self._rt_obj._get_conn()
if visible:
return p.view.set_visible(self.info_hash, view)
else:
return p.view.set_not_visible(self.info_hash, view)
############################################################################
# CUSTOM METHODS (Not part of the official rTorrent API)
##########################################################################
def _is_hash_checking_queued(self):
"""Only checks instance variables, shouldn't be called directly"""
# if hashing == 3, then torrent is marked for hash checking
# if hash_checking == False, then torrent is waiting to be checked
self.hash_checking_queued = (self.hashing == 3 and
self.hash_checking is False)
return(self.hash_checking_queued)
def is_hash_checking_queued(self):
"""Check if torrent is waiting to be hash checked
        @note: Variable where the result for this method is stored: Torrent.hash_checking_queued"""
m = rtorrent.rpc.Multicall(self)
self.multicall_add(m, "d.get_hashing")
self.multicall_add(m, "d.is_hash_checking")
results = m.call()
setattr(self, "hashing", results[0])
setattr(self, "hash_checking", results[1])
return(self._is_hash_checking_queued())
def _is_paused(self):
"""Only checks instance variables, shouldn't be called directly"""
self.paused = (self.state == 0)
return(self.paused)
def is_paused(self):
"""Check if torrent is paused
@note: Variable where the result for this method is stored: Torrent.paused"""
self.get_state()
return(self._is_paused())
def _is_started(self):
"""Only checks instance variables, shouldn't be called directly"""
self.started = (self.state == 1)
return(self.started)
def is_started(self):
"""Check if torrent is started
@note: Variable where the result for this method is stored: Torrent.started"""
self.get_state()
return(self._is_started())
methods = [
# RETRIEVERS
Method(Torrent, 'is_hash_checked', 'd.is_hash_checked',
boolean=True,
),
Method(Torrent, 'is_hash_checking', 'd.is_hash_checking',
boolean=True,
),
Method(Torrent, 'get_peers_max', 'd.get_peers_max'),
Method(Torrent, 'get_tracker_focus', 'd.get_tracker_focus'),
Method(Torrent, 'get_skip_total', 'd.get_skip_total'),
Method(Torrent, 'get_state', 'd.get_state'),
Method(Torrent, 'get_peer_exchange', 'd.get_peer_exchange'),
Method(Torrent, 'get_down_rate', 'd.get_down_rate'),
Method(Torrent, 'get_connection_seed', 'd.get_connection_seed'),
Method(Torrent, 'get_uploads_max', 'd.get_uploads_max'),
Method(Torrent, 'get_priority_str', 'd.get_priority_str'),
Method(Torrent, 'is_open', 'd.is_open',
boolean=True,
),
Method(Torrent, 'get_peers_min', 'd.get_peers_min'),
Method(Torrent, 'get_peers_complete', 'd.get_peers_complete'),
Method(Torrent, 'get_tracker_numwant', 'd.get_tracker_numwant'),
Method(Torrent, 'get_connection_current', 'd.get_connection_current'),
Method(Torrent, 'is_complete', 'd.get_complete',
boolean=True,
),
Method(Torrent, 'get_peers_connected', 'd.get_peers_connected'),
Method(Torrent, 'get_chunk_size', 'd.get_chunk_size'),
Method(Torrent, 'get_state_counter', 'd.get_state_counter'),
Method(Torrent, 'get_base_filename', 'd.get_base_filename'),
Method(Torrent, 'get_state_changed', 'd.get_state_changed'),
Method(Torrent, 'get_peers_not_connected', 'd.get_peers_not_connected'),
Method(Torrent, 'get_directory', 'd.get_directory'),
Method(Torrent, 'is_incomplete', 'd.incomplete',
boolean=True,
),
Method(Torrent, 'get_tracker_size', 'd.get_tracker_size'),
Method(Torrent, 'is_multi_file', 'd.is_multi_file',
boolean=True,
),
Method(Torrent, 'get_local_id', 'd.get_local_id'),
Method(Torrent, 'get_ratio', 'd.get_ratio',
post_process_func=lambda x: x / 1000.0,
),
Method(Torrent, 'get_loaded_file', 'd.get_loaded_file'),
Method(Torrent, 'get_max_file_size', 'd.get_max_file_size'),
Method(Torrent, 'get_size_chunks', 'd.get_size_chunks'),
Method(Torrent, 'is_pex_active', 'd.is_pex_active',
boolean=True,
),
Method(Torrent, 'get_hashing', 'd.get_hashing'),
Method(Torrent, 'get_bitfield', 'd.get_bitfield'),
Method(Torrent, 'get_local_id_html', 'd.get_local_id_html'),
Method(Torrent, 'get_connection_leech', 'd.get_connection_leech'),
Method(Torrent, 'get_peers_accounted', 'd.get_peers_accounted'),
Method(Torrent, 'get_message', 'd.get_message'),
Method(Torrent, 'is_active', 'd.is_active',
boolean=True,
),
Method(Torrent, 'get_size_bytes', 'd.get_size_bytes'),
Method(Torrent, 'get_ignore_commands', 'd.get_ignore_commands'),
Method(Torrent, 'get_creation_date', 'd.get_creation_date'),
Method(Torrent, 'get_base_path', 'd.get_base_path'),
Method(Torrent, 'get_left_bytes', 'd.get_left_bytes'),
Method(Torrent, 'get_size_files', 'd.get_size_files'),
Method(Torrent, 'get_size_pex', 'd.get_size_pex'),
Method(Torrent, 'is_private', 'd.is_private',
boolean=True,
),
Method(Torrent, 'get_max_size_pex', 'd.get_max_size_pex'),
Method(Torrent, 'get_num_chunks_hashed', 'd.get_chunks_hashed',
aliases=("get_chunks_hashed",)),
Method(Torrent, 'get_num_chunks_wanted', 'd.wanted_chunks'),
Method(Torrent, 'get_priority', 'd.get_priority'),
Method(Torrent, 'get_skip_rate', 'd.get_skip_rate'),
Method(Torrent, 'get_completed_bytes', 'd.get_completed_bytes'),
Method(Torrent, 'get_name', 'd.get_name'),
Method(Torrent, 'get_completed_chunks', 'd.get_completed_chunks'),
Method(Torrent, 'get_throttle_name', 'd.get_throttle_name'),
Method(Torrent, 'get_free_diskspace', 'd.get_free_diskspace'),
Method(Torrent, 'get_directory_base', 'd.get_directory_base'),
Method(Torrent, 'get_hashing_failed', 'd.get_hashing_failed'),
Method(Torrent, 'get_tied_to_file', 'd.get_tied_to_file'),
Method(Torrent, 'get_down_total', 'd.get_down_total'),
Method(Torrent, 'get_bytes_done', 'd.get_bytes_done'),
Method(Torrent, 'get_up_rate', 'd.get_up_rate'),
Method(Torrent, 'get_up_total', 'd.get_up_total'),
Method(Torrent, 'is_accepting_seeders', 'd.accepting_seeders',
boolean=True,
),
Method(Torrent, "get_chunks_seen", "d.chunks_seen",
min_version=(0, 9, 1),
),
Method(Torrent, "is_partially_done", "d.is_partially_done",
boolean=True,
),
Method(Torrent, "is_not_partially_done", "d.is_not_partially_done",
boolean=True,
),
Method(Torrent, "get_time_started", "d.timestamp.started"),
Method(Torrent, "get_custom1", "d.get_custom1"),
Method(Torrent, "get_custom2", "d.get_custom2"),
Method(Torrent, "get_custom3", "d.get_custom3"),
Method(Torrent, "get_custom4", "d.get_custom4"),
Method(Torrent, "get_custom5", "d.get_custom5"),
# MODIFIERS
Method(Torrent, 'set_uploads_max', 'd.set_uploads_max'),
Method(Torrent, 'set_tied_to_file', 'd.set_tied_to_file'),
Method(Torrent, 'set_tracker_numwant', 'd.set_tracker_numwant'),
Method(Torrent, 'set_priority', 'd.set_priority'),
Method(Torrent, 'set_peers_max', 'd.set_peers_max'),
Method(Torrent, 'set_hashing_failed', 'd.set_hashing_failed'),
Method(Torrent, 'set_message', 'd.set_message'),
Method(Torrent, 'set_throttle_name', 'd.set_throttle_name'),
Method(Torrent, 'set_peers_min', 'd.set_peers_min'),
Method(Torrent, 'set_ignore_commands', 'd.set_ignore_commands'),
Method(Torrent, 'set_max_file_size', 'd.set_max_file_size'),
Method(Torrent, 'set_custom5', 'd.set_custom5'),
Method(Torrent, 'set_custom4', 'd.set_custom4'),
Method(Torrent, 'set_custom2', 'd.set_custom2'),
Method(Torrent, 'set_custom1', 'd.set_custom1'),
Method(Torrent, 'set_custom3', 'd.set_custom3'),
Method(Torrent, 'set_connection_current', 'd.set_connection_current'),
]
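# --- Editor's usage sketch (not part of this file) -------------------------
# Roughly how a label (custom1) and a new download directory can be applied
# to a torrent, as Mylar's rtorrent support does.  The endpoint URL and info
# hash below are placeholders; RTorrent is the entry-point class of this
# library.
if __name__ == "__main__":
    from lib.rtorrent import RTorrent

    rt = RTorrent("http://localhost/RPC2")        # placeholder endpoint
    torrent = rt.find_torrent("A1B2C3D4E5F6")     # placeholder info hash
    if torrent is not None:
        torrent.set_custom(1, "comics")        # label lives in custom1
        torrent.set_directory("/data/comics")  # stops the torrent first
        torrent.start()                        # it is not restarted automatically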

@@ -0,0 +1,138 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rtorrent.rpc
from rtorrent.common import safe_repr
Method = rtorrent.rpc.Method
class Tracker:
"""Represents an individual tracker within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent using this tracker
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
# for clarity's sake...
self.index = self.group # : position of tracker within the torrent's tracker list
self.rpc_id = "{0}:t{1}".format(
self.info_hash, self.index) # : unique id to pass to rTorrent
def __repr__(self):
return safe_repr("Tracker(index={0}, url=\"{1}\")",
self.index, self.url)
def enable(self):
"""Alias for set_enabled("yes")"""
self.set_enabled("yes")
def disable(self):
"""Alias for set_enabled("no")"""
self.set_enabled("no")
def update(self):
"""Refresh tracker data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rtorrent.rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
methods = [
# RETRIEVERS
Method(Tracker, 'is_enabled', 't.is_enabled', boolean=True),
Method(Tracker, 'get_id', 't.get_id'),
Method(Tracker, 'get_scrape_incomplete', 't.get_scrape_incomplete'),
Method(Tracker, 'is_open', 't.is_open', boolean=True),
Method(Tracker, 'get_min_interval', 't.get_min_interval'),
Method(Tracker, 'get_scrape_downloaded', 't.get_scrape_downloaded'),
Method(Tracker, 'get_group', 't.get_group'),
Method(Tracker, 'get_scrape_time_last', 't.get_scrape_time_last'),
Method(Tracker, 'get_type', 't.get_type'),
Method(Tracker, 'get_normal_interval', 't.get_normal_interval'),
Method(Tracker, 'get_url', 't.get_url'),
Method(Tracker, 'get_scrape_complete', 't.get_scrape_complete',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_activity_time_last', 't.activity_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_activity_time_next', 't.activity_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_failed_time_last', 't.failed_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_failed_time_next', 't.failed_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_success_time_last', 't.success_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_success_time_next', 't.success_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'can_scrape', 't.can_scrape',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'get_failed_counter', 't.failed_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'get_scrape_counter', 't.scrape_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'get_success_counter', 't.success_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'is_usable', 't.is_usable',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'is_busy', 't.is_busy',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'is_extra_tracker', 't.is_extra_tracker',
min_version=(0, 9, 1),
boolean=True,
),
Method(Tracker, "get_latest_sum_peers", "t.latest_sum_peers",
min_version=(0, 9, 0)
),
Method(Tracker, "get_latest_new_peers", "t.latest_new_peers",
min_version=(0, 9, 0)
),
# MODIFIERS
Method(Tracker, 'set_enabled', 't.set_enabled'),
]
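# --- Editor's usage sketch (not part of this file) -------------------------
# Tracker objects normally come from Torrent.get_trackers(); 'torrent' below
# stands for a Torrent instance obtained from this library:
#
#     for t in torrent.get_trackers():
#         if not t.is_enabled():
#             t.enable()
#         print(t.index, t.get_url())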

lib/rtorrent/common.py
@@ -0,0 +1,86 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from compat import is_py3
def bool_to_int(value):
"""Translates python booleans to RPC-safe integers"""
if value is True:
return("1")
elif value is False:
return("0")
else:
return(value)
def cmd_exists(cmds_list, cmd):
"""Check if given command is in list of available commands
@param cmds_list: see L{RTorrent._rpc_methods}
@type cmds_list: list
@param cmd: name of command to be checked
@type cmd: str
@return: bool
"""
return(cmd in cmds_list)
def find_torrent(info_hash, torrent_list):
"""Find torrent file in given list of Torrent classes
@param info_hash: info hash of torrent
@type info_hash: str
@param torrent_list: list of L{Torrent} instances (see L{RTorrent.get_torrents})
@type torrent_list: list
    @return: L{Torrent} instance, or None if not found
"""
for t in torrent_list:
if t.info_hash == info_hash:
return t
return None
def is_valid_port(port):
"""Check if given port is valid"""
return(0 <= int(port) <= 65535)
def convert_version_tuple_to_str(t):
return(".".join([str(n) for n in t]))
def safe_repr(fmt, *args, **kwargs):
""" Formatter that handles unicode arguments """
if not is_py3():
# unicode fmt can take str args, str fmt cannot take unicode args
fmt = fmt.decode("utf-8")
out = fmt.format(*args, **kwargs)
return out.encode("utf-8")
else:
return fmt.format(*args, **kwargs)
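# --- Editor's usage sketch (not part of this file) -------------------------
# bool_to_int() only touches real booleans, so every outgoing RPC argument
# can be passed through it; find_torrent() returns None when nothing matches.
if __name__ == "__main__":
    assert bool_to_int(True) == "1"
    assert bool_to_int(False) == "0"
    assert bool_to_int(42) == 42            # non-booleans pass through
    assert find_torrent("DEADBEEF", []) is None
    assert is_valid_port(65535) and not is_valid_port(65536)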

lib/rtorrent/compat.py
@@ -0,0 +1,30 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import sys
def is_py3():
return sys.version_info[0] == 3
if is_py3():
import xmlrpc.client as xmlrpclib
else:
import xmlrpclib

lib/rtorrent/err.py
@@ -0,0 +1,40 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from common import convert_version_tuple_to_str
class RTorrentVersionError(Exception):
def __init__(self, min_version, cur_version):
self.min_version = min_version
self.cur_version = cur_version
self.msg = "Minimum version required: {0}".format(
convert_version_tuple_to_str(min_version))
def __str__(self):
return(self.msg)
class MethodError(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return(self.msg)

lib/rtorrent/file.py
@@ -0,0 +1,91 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rpc
from common import safe_repr
Method = rpc.Method
class File:
"""Represents an individual file within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, index, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent the file is associated with
self.index = index # : The position of the file within the file list
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.rpc_id = "{0}:f{1}".format(
self.info_hash, self.index) # : unique id to pass to rTorrent
def update(self):
"""Refresh file data
@note: All fields are stored as attributes to self.
@return: None
"""
        multicall = rpc.Multicall(self)  # module is imported as 'rpc' above
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
def __repr__(self):
return safe_repr("File(index={0} path=\"{1}\")", self.index, self.path)
methods = [
# RETRIEVERS
Method(File, 'get_last_touched', 'f.get_last_touched'),
Method(File, 'get_range_second', 'f.get_range_second'),
Method(File, 'get_size_bytes', 'f.get_size_bytes'),
Method(File, 'get_priority', 'f.get_priority'),
Method(File, 'get_match_depth_next', 'f.get_match_depth_next'),
Method(File, 'is_resize_queued', 'f.is_resize_queued',
boolean=True,
),
Method(File, 'get_range_first', 'f.get_range_first'),
Method(File, 'get_match_depth_prev', 'f.get_match_depth_prev'),
Method(File, 'get_path', 'f.get_path'),
Method(File, 'get_completed_chunks', 'f.get_completed_chunks'),
Method(File, 'get_path_components', 'f.get_path_components'),
Method(File, 'is_created', 'f.is_created',
boolean=True,
),
Method(File, 'is_open', 'f.is_open',
boolean=True,
),
Method(File, 'get_size_chunks', 'f.get_size_chunks'),
Method(File, 'get_offset', 'f.get_offset'),
Method(File, 'get_frozen_path', 'f.get_frozen_path'),
Method(File, 'get_path_depth', 'f.get_path_depth'),
Method(File, 'is_create_queued', 'f.is_create_queued',
boolean=True,
),
# MODIFIERS
]

lib/rtorrent/group.py
@@ -0,0 +1,84 @@
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import rpc
Method = rpc.Method
class Group:
__name__ = 'Group'
def __init__(self, _rt_obj, name):
self._rt_obj = _rt_obj
self.name = name
self.methods = [
# RETRIEVERS
Method(Group, 'get_max', 'group.' + self.name + '.ratio.max', varname='max'),
Method(Group, 'get_min', 'group.' + self.name + '.ratio.min', varname='min'),
Method(Group, 'get_upload', 'group.' + self.name + '.ratio.upload', varname='upload'),
# MODIFIERS
Method(Group, 'set_max', 'group.' + self.name + '.ratio.max.set', varname='max'),
Method(Group, 'set_min', 'group.' + self.name + '.ratio.min.set', varname='min'),
Method(Group, 'set_upload', 'group.' + self.name + '.ratio.upload.set', varname='upload')
]
        rpc._build_rpc_methods(self, self.methods)
# Setup multicall_add method
caller = lambda multicall, method, *args: \
multicall.add(method, *args)
setattr(self, "multicall_add", caller)
def _get_prefix(self):
return 'group.' + self.name + '.ratio.'
def update(self):
        multicall = rpc.Multicall(self)
retriever_methods = [m for m in self.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method)
multicall.call()
def enable(self):
p = self._rt_obj._get_conn()
return getattr(p, self._get_prefix() + 'enable')()
def disable(self):
p = self._rt_obj._get_conn()
return getattr(p, self._get_prefix() + 'disable')()
def set_command(self, *methods):
methods = [m + '=' for m in methods]
        m = rpc.Multicall(self)
self.multicall_add(
m, 'system.method.set',
self._get_prefix() + 'command',
*methods
)
return(m.call()[-1])

lib/rtorrent/lib/bencode.py
@@ -0,0 +1,281 @@
# Copyright (C) 2011 by clueless <clueless.nospam ! mail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
# Version: 20111107
#
# Changelog
# ---------
# 2011-11-07 - Added support for Python2 (tested on 2.6)
# 2011-10-03 - Fixed: moved check for end of list at the top of the while loop
# in _decode_list (in case the list is empty) (Chris Lucas)
# - Converted dictionary keys to str
# 2011-04-24 - Changed date format to YYYY-MM-DD for versioning, bigger
# integer denotes a newer version
# - Fixed a bug that would treat False as an integral type but
# encode it using the 'False' string, attempting to encode a
# boolean now results in an error
# - Fixed a bug where an integer value of 0 in a list or
# dictionary resulted in a parse error while decoding
#
# 2011-04-03 - Original release
import sys
_py3 = sys.version_info[0] == 3
if _py3:
_VALID_STRING_TYPES = (str,)
else:
_VALID_STRING_TYPES = (str, unicode) # @UndefinedVariable
_TYPE_INT = 1
_TYPE_STRING = 2
_TYPE_LIST = 3
_TYPE_DICTIONARY = 4
_TYPE_END = 5
_TYPE_INVALID = 6
# Function to determine the type of the next value/item
# Arguments:
# char First character of the string that is to be decoded
# Return value:
# Returns an integer that describes what type the next value/item is
def _gettype(char):
if not isinstance(char, int):
char = ord(char)
if char == 0x6C: # 'l'
return _TYPE_LIST
elif char == 0x64: # 'd'
return _TYPE_DICTIONARY
elif char == 0x69: # 'i'
return _TYPE_INT
elif char == 0x65: # 'e'
return _TYPE_END
elif char >= 0x30 and char <= 0x39: # '0' '9'
return _TYPE_STRING
else:
return _TYPE_INVALID
# Function to parse a string from the bencoded data
# Arguments:
# data bencoded data, must be guaranteed to be a string
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed string
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_string(data):
end = 1
# if py3, data[end] is going to be an int
# if py2, data[end] will be a string
if _py3:
char = 0x3A
else:
char = chr(0x3A)
while data[end] != char: # ':'
end = end + 1
strlen = int(data[:end])
return (data[end + 1:strlen + end + 1], data[strlen + end + 1:])
# Function to parse an integer from the bencoded data
# Arguments:
# data bencoded data, must be guaranteed to be an integer
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed string
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_int(data):
end = 1
# if py3, data[end] is going to be an int
# if py2, data[end] will be a string
if _py3:
char = 0x65
else:
char = chr(0x65)
while data[end] != char: # 'e'
end = end + 1
return (int(data[1:end]), data[end + 1:])
# Function to parse a bencoded list
# Arguments:
# data bencoded data, must be guaranteed to be the start of a list
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed list
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_list(data):
x = []
overflow = data[1:]
while True: # Loop over the data
if _gettype(overflow[0]) == _TYPE_END: # - Break if we reach the end of the list
return (x, overflow[1:]) # and return the list and overflow
value, overflow = _decode(overflow) #
if isinstance(value, bool) or overflow == '': # - if we have a parse error
return (False, False) # Die with error
else: # - Otherwise
x.append(value) # add the value to the list
# Function to parse a bencoded dictionary
# Arguments:
# data bencoded data, must be guaranteed to be the start of a dictionary
# Return Value:
# Returns a tuple, the first member of the tuple is the parsed dictionary
# The second member is whatever remains of the bencoded data so it can
# be used to parse the next part of the data
def _decode_dict(data):
x = {}
overflow = data[1:]
while True: # Loop over the data
if _gettype(overflow[0]) != _TYPE_STRING: # - If the key is not a string
return (False, False) # Die with error
key, overflow = _decode(overflow) #
if key == False or overflow == '': # - If parse error
return (False, False) # Die with error
value, overflow = _decode(overflow) #
if isinstance(value, bool) or overflow == '': # - If parse error
print("Error parsing value")
print(value)
print(overflow)
return (False, False) # Die with error
else:
# don't use bytes for the key
key = key.decode()
x[key] = value
if _gettype(overflow[0]) == _TYPE_END:
return (x, overflow[1:])
# Function to decode the next value/item from the bencoded data
# Arguments:
# data bencoded data in bytes format
# Return Values:
# Returns a tuple, the first member is the parsed data, could be a string,
# an integer, a list or a dictionary, or a combination of those
# The second member is the leftover of parsing, if everything parses correctly this
# should be an empty byte string
def _decode(data):
btype = _gettype(data[0])
if btype == _TYPE_INT:
return _decode_int(data)
elif btype == _TYPE_STRING:
return _decode_string(data)
elif btype == _TYPE_LIST:
return _decode_list(data)
elif btype == _TYPE_DICTIONARY:
return _decode_dict(data)
else:
return (False, False)
# Function to decode bencoded data
# Arguments:
# data bencoded data, can be str or bytes
# Return Values:
# Returns the decoded data on success, this could be bytes, int, dict or list
# or a combination of those
# If an error occurs the return value is False
def decode(data):
# if isinstance(data, str):
# data = data.encode()
decoded, overflow = _decode(data)
return decoded
# Args: data as integer
# return: encoded byte string
def _encode_int(data):
return b'i' + str(data).encode() + b'e'
# Args: data as string or bytes
# Return: encoded byte string
def _encode_string(data):
return str(len(data)).encode() + b':' + data
# Args: data as list
# Return: Encoded byte string, false on error
def _encode_list(data):
elist = b'l'
for item in data:
eitem = encode(item)
if eitem == False:
return False
elist += eitem
return elist + b'e'
# Args: data as dict
# Return: encoded byte string, false on error
def _encode_dict(data):
edict = b'd'
keys = []
for key in data:
if not isinstance(key, _VALID_STRING_TYPES) and not isinstance(key, bytes):
return False
keys.append(key)
keys.sort()
for key in keys:
ekey = encode(key)
eitem = encode(data[key])
if ekey == False or eitem == False:
return False
edict += ekey + eitem
return edict + b'e'
# Function to encode a variable in bencoding
# Arguments:
# data Variable to be encoded, can be a list, dict, str, bytes, int or a combination of those
# Return Values:
# Returns the encoded data as a byte string when successful
# If an error occurs the return value is False
def encode(data):
if isinstance(data, bool):
return False
elif isinstance(data, int):
return _encode_int(data)
elif isinstance(data, bytes):
return _encode_string(data)
elif isinstance(data, _VALID_STRING_TYPES):
return _encode_string(data.encode())
elif isinstance(data, list):
return _encode_list(data)
elif isinstance(data, dict):
return _encode_dict(data)
else:
return False
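# --- Editor's usage sketch (not part of this file) -------------------------
# encode()/decode() round-trip; dictionary keys are encoded in sorted order,
# and on Python 3 decode() returns str keys with bytes values.
if __name__ == "__main__":
    blob = encode({"announce": "http://tracker.example/ann", "size": 0})
    print(blob)          # d8:announce26:http://tracker.example/ann4:sizei0ee
    print(decode(blob))  # {'announce': b'http://tracker.example/ann', 'size': 0}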

@@ -0,0 +1,161 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from lib.rtorrent.compat import is_py3
import os.path
import re
import bencode as bencode
import hashlib
if is_py3():
from urllib.request import urlopen # @UnresolvedImport @UnusedImport
else:
from urllib2 import urlopen # @UnresolvedImport @Reimport
class TorrentParser():
def __init__(self, torrent):
"""Decode and parse given torrent
        @param torrent: a URL, a file path, or a string of raw torrent data
        @type torrent: str
@raise AssertionError: Can be raised for a couple reasons:
- If _get_raw_torrent() couldn't figure out
what X{torrent} is
- if X{torrent} isn't a valid bencoded torrent file
"""
self.torrent = torrent
        self._raw_torrent = None  # : raw bencoded torrent data
        self._torrent_decoded = None  # : decoded torrent data (dict)
self.file_type = None
self._get_raw_torrent()
assert self._raw_torrent is not None, "Couldn't get raw_torrent."
if self._torrent_decoded is None:
self._decode_torrent()
assert isinstance(self._torrent_decoded, dict), "Invalid torrent file."
self._parse_torrent()
def _is_raw(self):
raw = False
if isinstance(self.torrent, (str, bytes)):
if isinstance(self._decode_torrent(self.torrent), dict):
raw = True
else:
# reset self._torrent_decoded (currently equals False)
self._torrent_decoded = None
return(raw)
def _get_raw_torrent(self):
"""Get raw torrent data by determining what self.torrent is"""
# already raw?
if self._is_raw():
self.file_type = "raw"
self._raw_torrent = self.torrent
return
# local file?
if os.path.isfile(self.torrent):
self.file_type = "file"
self._raw_torrent = open(self.torrent, "rb").read()
# url?
elif re.search("^(http|ftp):\/\/", self.torrent, re.I):
self.file_type = "url"
self._raw_torrent = urlopen(self.torrent).read()
def _decode_torrent(self, raw_torrent=None):
if raw_torrent is None:
raw_torrent = self._raw_torrent
self._torrent_decoded = bencode.decode(raw_torrent)
return(self._torrent_decoded)
def _calc_info_hash(self):
self.info_hash = None
if "info" in self._torrent_decoded.keys():
info_encoded = bencode.encode(self._torrent_decoded["info"])
if info_encoded:
self.info_hash = hashlib.sha1(info_encoded).hexdigest().upper()
return(self.info_hash)
def _parse_torrent(self):
for k in self._torrent_decoded:
key = k.replace(" ", "_").lower()
setattr(self, key, self._torrent_decoded[k])
self._calc_info_hash()
class NewTorrentParser(object):
@staticmethod
def _read_file(fp):
return fp.read()
@staticmethod
    def _write_file(fp, data):
        fp.write(data)
        return fp
@staticmethod
def _decode_torrent(data):
return bencode.decode(data)
def __init__(self, input):
self.input = input
self._raw_torrent = None
self._decoded_torrent = None
self._hash_outdated = False
if isinstance(self.input, (str, bytes)):
# path to file?
if os.path.isfile(self.input):
self._raw_torrent = self._read_file(open(self.input, "rb"))
else:
# assume input was the raw torrent data (do we really want
# this?)
self._raw_torrent = self.input
# file-like object?
elif self.input.hasattr("read"):
self._raw_torrent = self._read_file(self.input)
assert self._raw_torrent is not None, "Invalid input: input must be a path or a file-like object"
self._decoded_torrent = self._decode_torrent(self._raw_torrent)
assert isinstance(
self._decoded_torrent, dict), "File could not be decoded"
def _calc_info_hash(self):
self.info_hash = None
        info_dict = self._decoded_torrent["info"]
self.info_hash = hashlib.sha1(bencode.encode(
info_dict)).hexdigest().upper()
return(self.info_hash)
def set_tracker(self, tracker):
self._decoded_torrent["announce"] = tracker
def get_tracker(self):
return self._decoded_torrent.get("announce")
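# --- Editor's usage sketch (not part of this file) -------------------------
# TorrentParser accepts a URL, a local file path or raw bencoded data and
# exposes the decoded fields plus the computed info hash as attributes.
# The path below is a placeholder.
if __name__ == "__main__":
    tp = TorrentParser("/tmp/example.torrent")
    print(tp.file_type)  # "file"
    print(tp.info_hash)  # upper-case SHA1 hex digest of the info dict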

@@ -0,0 +1,73 @@
#
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from base64 import encodestring
import string
import xmlrpclib
class BasicAuthTransport(xmlrpclib.Transport):
def __init__(self, username=None, password=None):
xmlrpclib.Transport.__init__(self)
self.username = username
self.password = password
def send_auth(self, h):
if self.username is not None and self.password is not None:
h.putheader('AUTHORIZATION', "Basic %s" % string.replace(
encodestring("%s:%s" % (self.username, self.password)),
"\012", ""
))
def single_request(self, host, handler, request_body, verbose=0):
# issue XML-RPC request
h = self.make_connection(host)
if verbose:
h.set_debuglevel(1)
try:
self.send_request(h, handler, request_body)
self.send_host(h, host)
self.send_user_agent(h)
self.send_auth(h)
self.send_content(h, request_body)
response = h.getresponse(buffering=True)
if response.status == 200:
self.verbose = verbose
return self.parse_response(response)
except xmlrpclib.Fault:
raise
except Exception:
self.close()
raise
#discard any response data and raise exception
if response.getheader("content-length", 0):
response.read()
raise xmlrpclib.ProtocolError(
host + handler,
response.status, response.reason,
response.msg,
)
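# --- Editor's usage sketch (not part of this file) -------------------------
# Wiring the transport into a stock ServerProxy; the endpoint and the
# credentials are placeholders.
if __name__ == "__main__":
    server = xmlrpclib.ServerProxy(
        "http://localhost/RPC2",
        transport=BasicAuthTransport("user", "secret"))
    print(server.system.listMethods())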

@@ -0,0 +1,23 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
from lib.rtorrent.compat import xmlrpclib
HTTPServerProxy = xmlrpclib.ServerProxy

@@ -0,0 +1,219 @@
#!/usr/bin/python
# rtorrent_xmlrpc
# (c) 2011 Roger Que <alerante@bellsouth.net>
#
# Modified portions:
# (c) 2013 Dean Gardiner <gardiner91@gmail.com>
#
# Python module for interacting with rtorrent's XML-RPC interface
# directly over SCGI, instead of through an HTTP server intermediary.
# Inspired by Glenn Washburn's xmlrpc2scgi.py [1], but subclasses the
# built-in xmlrpclib classes so that it is compatible with features
# such as MultiCall objects.
#
# [1] <http://libtorrent.rakshasa.no/wiki/UtilsXmlrpc2scgi>
#
# Usage: server = SCGIServerProxy('scgi://localhost:7000/')
# server = SCGIServerProxy('scgi:///path/to/scgi.sock')
# print server.system.listMethods()
# mc = xmlrpclib.MultiCall(server)
# mc.get_up_rate()
# mc.get_down_rate()
# print mc()
#
#
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# In addition, as a special exception, the copyright holders give
# permission to link the code of portions of this program with the
# OpenSSL library under certain conditions as described in each
# individual source file, and distribute linked combinations
# including the two.
#
# You must obey the GNU General Public License in all respects for
# all of the code used other than OpenSSL. If you modify file(s)
# with this exception, you may extend this exception to your version
# of the file(s), but you are not obligated to do so. If you do not
# wish to do so, delete this exception statement from your version.
# If you delete this exception statement from all source files in the
# program, then also delete it here.
#
#
#
# Portions based on Python's xmlrpclib:
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
import httplib
import re
import socket
import urllib
import xmlrpclib
import errno
class SCGITransport(xmlrpclib.Transport):
# Added request() from Python 2.7 xmlrpclib here to backport to Python 2.6
def request(self, host, handler, request_body, verbose=0):
#retry request once if cached connection has gone cold
for i in (0, 1):
try:
return self.single_request(host, handler, request_body, verbose)
except socket.error, e:
if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
raise
except httplib.BadStatusLine: #close after we sent request
if i:
raise
def single_request(self, host, handler, request_body, verbose=0):
# Add SCGI headers to the request.
headers = {'CONTENT_LENGTH': str(len(request_body)), 'SCGI': '1'}
header = '\x00'.join(('%s\x00%s' % item for item in headers.iteritems())) + '\x00'
header = '%d:%s' % (len(header), header)
request_body = '%s,%s' % (header, request_body)
sock = None
try:
if host:
host, port = urllib.splitport(host)
addrinfo = socket.getaddrinfo(host, int(port), socket.AF_INET,
socket.SOCK_STREAM)
sock = socket.socket(*addrinfo[0][:3])
sock.connect(addrinfo[0][4])
else:
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(handler)
self.verbose = verbose
sock.send(request_body)
return self.parse_response(sock.makefile())
finally:
if sock:
sock.close()
def parse_response(self, response):
p, u = self.getparser()
response_body = ''
while True:
data = response.read(1024)
if not data:
break
response_body += data
# Remove SCGI headers from the response.
response_header, response_body = re.split(r'\n\s*?\n', response_body,
maxsplit=1)
if self.verbose:
print 'body:', repr(response_body)
p.feed(response_body)
p.close()
return u.close()
class SCGIServerProxy(xmlrpclib.ServerProxy):
def __init__(self, uri, transport=None, encoding=None, verbose=False,
allow_none=False, use_datetime=False):
type, uri = urllib.splittype(uri)
        if type not in ('scgi',):
raise IOError('unsupported XML-RPC protocol')
self.__host, self.__handler = urllib.splithost(uri)
if not self.__handler:
self.__handler = '/'
if transport is None:
transport = SCGITransport(use_datetime=use_datetime)
self.__transport = transport
self.__encoding = encoding
self.__verbose = verbose
self.__allow_none = allow_none
def __close(self):
self.__transport.close()
def __request(self, methodname, params):
# call a method on the remote server
request = xmlrpclib.dumps(params, methodname, encoding=self.__encoding,
allow_none=self.__allow_none)
response = self.__transport.request(
self.__host,
self.__handler,
request,
verbose=self.__verbose
)
if len(response) == 1:
response = response[0]
return response
def __repr__(self):
return (
"<SCGIServerProxy for %s%s>" %
(self.__host, self.__handler)
)
__str__ = __repr__
def __getattr__(self, name):
# magic method dispatcher
return xmlrpclib._Method(self.__request, name)
# note: to call a remote object with an non-standard name, use
# result getattr(server, "strange-python-name")(args)
def __call__(self, attr):
"""A workaround to get special attributes on the ServerProxy
without interfering with the magic __getattr__
"""
if attr == "close":
return self.__close
elif attr == "transport":
return self.__transport
raise AttributeError("Attribute %r not found" % (attr,))

lib/rtorrent/peer.py
@@ -0,0 +1,98 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rpc
from common import safe_repr
Method = rpc.Method
class Peer:
"""Represents an individual peer within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent the peer is associated with
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.rpc_id = "{0}:p{1}".format(
self.info_hash, self.id) # : unique id to pass to rTorrent
def __repr__(self):
return safe_repr("Peer(id={0})", self.id)
def update(self):
"""Refresh peer data
@note: All fields are stored as attributes to self.
@return: None
"""
        multicall = rpc.Multicall(self)  # module is imported as 'rpc' above
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
methods = [
# RETRIEVERS
Method(Peer, 'is_preferred', 'p.is_preferred',
boolean=True,
),
Method(Peer, 'get_down_rate', 'p.get_down_rate'),
Method(Peer, 'is_unwanted', 'p.is_unwanted',
boolean=True,
),
Method(Peer, 'get_peer_total', 'p.get_peer_total'),
Method(Peer, 'get_peer_rate', 'p.get_peer_rate'),
Method(Peer, 'get_port', 'p.get_port'),
Method(Peer, 'is_snubbed', 'p.is_snubbed',
boolean=True,
),
Method(Peer, 'get_id_html', 'p.get_id_html'),
Method(Peer, 'get_up_rate', 'p.get_up_rate'),
Method(Peer, 'is_banned', 'p.banned',
boolean=True,
),
Method(Peer, 'get_completed_percent', 'p.get_completed_percent'),
Method(Peer, 'completed_percent', 'p.completed_percent'),
Method(Peer, 'get_id', 'p.get_id'),
Method(Peer, 'is_obfuscated', 'p.is_obfuscated',
boolean=True,
),
Method(Peer, 'get_down_total', 'p.get_down_total'),
Method(Peer, 'get_client_version', 'p.get_client_version'),
Method(Peer, 'get_address', 'p.get_address'),
Method(Peer, 'is_incoming', 'p.is_incoming',
boolean=True,
),
Method(Peer, 'is_encrypted', 'p.is_encrypted',
boolean=True,
),
Method(Peer, 'get_options_str', 'p.get_options_str'),
Method(Peer, 'get_client_version', 'p.client_version'),
Method(Peer, 'get_up_total', 'p.get_up_total'),
# MODIFIERS
]

@@ -0,0 +1,319 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import inspect
import lib.rtorrent
import re
from lib.rtorrent.common import bool_to_int, convert_version_tuple_to_str,\
safe_repr
from lib.rtorrent.err import MethodError
from lib.rtorrent.compat import xmlrpclib
def get_varname(rpc_call):
"""Transform rpc method into variable name.
@newfield example: Example
@example: if the name of the rpc method is 'p.get_down_rate', the variable
name will be 'down_rate'
"""
# extract variable name from xmlrpc func name
r = re.search(
"([ptdf]\.|system\.|get\_|is\_|set\_)+([^=]*)", rpc_call, re.I)
if r:
return(r.groups()[-1])
else:
return(None)
def _handle_unavailable_rpc_method(method, rt_obj):
msg = "Method isn't available."
if rt_obj._get_client_version_tuple() < method.min_version:
msg = "This method is only available in " \
"RTorrent version v{0} or later".format(
convert_version_tuple_to_str(method.min_version))
raise MethodError(msg)
class DummyClass:
def __init__(self):
pass
class Method:
"""Represents an individual RPC method"""
def __init__(self, _class, method_name,
rpc_call, docstring=None, varname=None, **kwargs):
self._class = _class # : Class this method is associated with
self.class_name = _class.__name__
self.method_name = method_name # : name of public-facing method
self.rpc_call = rpc_call # : name of rpc method
self.docstring = docstring # : docstring for rpc method (optional)
self.varname = varname # : variable for the result of the method call, usually set to self.varname
self.min_version = kwargs.get("min_version", (
0, 0, 0)) # : Minimum version of rTorrent required
self.boolean = kwargs.get("boolean", False) # : returns boolean value?
self.post_process_func = kwargs.get(
"post_process_func", None) # : custom post process function
self.aliases = kwargs.get(
"aliases", []) # : aliases for method (optional)
self.required_args = []
#: Arguments required when calling the method (not utilized)
self.method_type = self._get_method_type()
if self.varname is None:
self.varname = get_varname(self.rpc_call)
assert self.varname is not None, "Couldn't get variable name."
def __repr__(self):
return safe_repr("Method(method_name='{0}', rpc_call='{1}')",
self.method_name, self.rpc_call)
def _get_method_type(self):
"""Determine whether method is a modifier or a retriever"""
if self.method_name[:4] == "set_": return('m') # modifier
else:
return('r') # retriever
def is_modifier(self):
if self.method_type == 'm':
return(True)
else:
return(False)
def is_retriever(self):
if self.method_type == 'r':
return(True)
else:
return(False)
def is_available(self, rt_obj):
if rt_obj._get_client_version_tuple() < self.min_version or \
self.rpc_call not in rt_obj._get_rpc_methods():
return(False)
else:
return(True)
class Multicall:
def __init__(self, class_obj, **kwargs):
self.class_obj = class_obj
if class_obj.__class__.__name__ == "RTorrent":
self.rt_obj = class_obj
else:
self.rt_obj = class_obj._rt_obj
self.calls = []
def add(self, method, *args):
"""Add call to multicall
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str
@param args: call arguments
"""
# if a raw rpc method was given instead of a Method instance,
# try and find the instance for it. And if all else fails, create a
# dummy Method instance
if isinstance(method, str):
result = find_method(method)
# if result not found
if result == -1:
method = Method(DummyClass, method, method)
else:
method = result
# ensure method is available before adding
if not method.is_available(self.rt_obj):
_handle_unavailable_rpc_method(method, self.rt_obj)
self.calls.append((method, args))
def list_calls(self):
for c in self.calls:
print(c)
def call(self):
"""Execute added multicall calls
@return: the results (post-processed), in the order they were added
@rtype: tuple
"""
m = xmlrpclib.MultiCall(self.rt_obj._get_conn())
for call in self.calls:
method, args = call
rpc_call = getattr(method, "rpc_call")
getattr(m, rpc_call)(*args)
results = m()
results = tuple(results)
results_processed = []
for r, c in zip(results, self.calls):
method = c[0] # Method instance
result = process_result(method, r)
results_processed.append(result)
# assign result to class_obj
exists = hasattr(self.class_obj, method.varname)
if not exists or not inspect.ismethod(getattr(self.class_obj, method.varname)):
setattr(self.class_obj, method.varname, result)
return(tuple(results_processed))
def call_method(class_obj, method, *args):
"""Handles single RPC calls
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
@type class_obj: object
@param method: L{Method} instance or name of raw RPC method
@type method: Method or str
"""
if method.is_retriever():
args = args[:-1]
else:
assert args[-1] is not None, "No argument given."
if class_obj.__class__.__name__ == "RTorrent":
rt_obj = class_obj
else:
rt_obj = class_obj._rt_obj
# check if rpc method is even available
if not method.is_available(rt_obj):
_handle_unavailable_rpc_method(method, rt_obj)
m = Multicall(class_obj)
m.add(method, *args)
# only added one method, only getting one result back
ret_value = m.call()[0]
####### OBSOLETE ##########################################################
# if method.is_retriever():
# #value = process_result(method, ret_value)
# value = ret_value #MultiCall already processed the result
# else:
# # we're setting the user's input to method.varname
# # but we'll return the value that xmlrpc gives us
# value = process_result(method, args[-1])
##########################################################################
return(ret_value)
def find_method(rpc_call):
"""Return L{Method} instance associated with given RPC call"""
method_lists = [
lib.rtorrent.methods,
lib.rtorrent.file.methods,
lib.rtorrent.tracker.methods,
lib.rtorrent.peer.methods,
lib.rtorrent.torrent.methods,
]
for l in method_lists:
for m in l:
if m.rpc_call.lower() == rpc_call.lower():
return(m)
return(-1)
def process_result(method, result):
"""Process given C{B{result}} based on flags set in C{B{method}}
@param method: L{Method} instance
@type method: Method
@param result: result to be processed (the result of given L{Method} instance)
@note: Supported Processing:
- boolean - convert ones and zeros returned by rTorrent and
convert to python boolean values
"""
# handle custom post processing function
if method.post_process_func is not None:
result = method.post_process_func(result)
# is boolean?
if method.boolean:
if result in [1, '1']:
result = True
elif result in [0, '0']:
result = False
return(result)
def _build_rpc_methods(class_, method_list):
"""Build glorified aliases to raw RPC methods"""
instance = None
if not inspect.isclass(class_):
instance = class_
class_ = instance.__class__
for m in method_list:
class_name = m.class_name
if class_name != class_.__name__:
continue
if class_name == "RTorrent":
caller = lambda self, arg = None, method = m:\
call_method(self, method, bool_to_int(arg))
elif class_name == "Torrent":
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name in ["Tracker", "File"]:
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name == "Peer":
caller = lambda self, arg = None, method = m:\
call_method(self, method, self.rpc_id,
bool_to_int(arg))
elif class_name == "Group":
caller = lambda arg = None, method = m: \
call_method(instance, method, bool_to_int(arg))
if m.docstring is None:
m.docstring = ""
# print(m)
docstring = """{0}
@note: Variable where the result for this method is stored: {1}.{2}""".format(
m.docstring,
class_name,
m.varname)
caller.__doc__ = docstring
for method_name in [m.method_name] + list(m.aliases):
if instance is None:
setattr(class_, method_name, caller)
else:
setattr(instance, method_name, caller)
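To make the Multicall flow concrete, a hedged sketch of batching raw calls; `some_torrent` is a hypothetical Torrent instance obtained elsewhere:
# Sketch: batch several retrievers into one XML-RPC round-trip.
# Raw call names resolve to Method instances via find_method(); unknown
# names fall back to a DummyClass-backed Method (see Multicall.add above).
m = Multicall(some_torrent)                # some_torrent: hypothetical Torrent
m.add("d.get_name", some_torrent.rpc_id)
m.add("d.get_ratio", some_torrent.rpc_id)
name, ratio = m.call()                     # results in the order added;
                                           # d.get_ratio is post-processed to x / 1000.0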

lib/rtorrent/torrent.py Normal file
@@ -0,0 +1,517 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import rpc
# from rtorrent.rpc import Method
import peer
import tracker
import file
import compat
from common import safe_repr
Peer = peer.Peer
Tracker = tracker.Tracker
File = file.File
Method = rpc.Method
class Torrent:
"""Represents an individual torrent within a L{RTorrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent
self.rpc_id = self.info_hash # : unique id to pass to rTorrent
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
self.peers = []
self.trackers = []
self.files = []
self._call_custom_methods()
def __repr__(self):
return safe_repr("Torrent(info_hash=\"{0}\" name=\"{1}\")",
self.info_hash, self.name)
def _call_custom_methods(self):
"""only calls methods that check instance variables."""
self._is_hash_checking_queued()
self._is_started()
self._is_paused()
def get_peers(self):
"""Get list of Peer instances for given torrent.
@return: L{Peer} instances
@rtype: list
@note: also assigns return value to self.peers
"""
self.peers = []
retriever_methods = [m for m in peer.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
# need to leave 2nd arg empty (dunno why)
m = rpc.Multicall(self)
m.add("p.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rpc.process_result(m, r)
self.peers.append(Peer(
self._rt_obj, self.info_hash, **results_dict))
return(self.peers)
def get_trackers(self):
"""Get list of Tracker instances for given torrent.
@return: L{Tracker} instances
@rtype: list
@note: also assigns return value to self.trackers
"""
self.trackers = []
retriever_methods = [m for m in tracker.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
# need to leave 2nd arg empty (dunno why)
m = rpc.Multicall(self)
m.add("t.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rpc.process_result(m, r)
self.trackers.append(Tracker(
self._rt_obj, self.info_hash, **results_dict))
return(self.trackers)
def get_files(self):
"""Get list of File instances for given torrent.
@return: L{File} instances
@rtype: list
@note: also assigns return value to self.files
"""
self.files = []
retriever_methods = [m for m in file.methods
if m.is_retriever() and m.is_available(self._rt_obj)]
# 2nd arg can be anything, but it'll return all files in torrent
# regardless
m = rpc.Multicall(self)
m.add("f.multicall", self.info_hash, "",
*[method.rpc_call + "=" for method in retriever_methods])
results = m.call()[0] # only sent one call, only need first result
offset_method_index = retriever_methods.index(
rpc.find_method("f.get_offset"))
# make a list of the offsets of all the files, sort appropriately
offset_list = sorted([r[offset_method_index] for r in results])
for result in results:
results_dict = {}
# build results_dict
for m, r in zip(retriever_methods, result):
results_dict[m.varname] = rpc.process_result(m, r)
# get proper index positions for each file (based on the file
# offset)
f_index = offset_list.index(results_dict["offset"])
self.files.append(File(self._rt_obj, self.info_hash,
f_index, **results_dict))
return(self.files)
def set_directory(self, d):
"""Modify download directory
@note: Needs to stop torrent in order to change the directory.
Also doesn't restart after directory is set, that must be called
separately.
"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.set_directory", d)
self.directory = m.call()[-1]
def set_directory_base(self, d):
"""Modify base download directory
@note: Needs to stop torrent in order to change the directory.
Also doesn't restart after directory is set, that must be called
separately.
"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.set_directory_base", d)
def start(self):
"""Start the torrent"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.try_start")
self.multicall_add(m, "d.is_active")
self.active = m.call()[-1]
return(self.active)
def stop(self):
""""Stop the torrent"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.try_stop")
self.multicall_add(m, "d.is_active")
self.active = m.call()[-1]
return(self.active)
def pause(self):
"""Pause the torrent"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.pause")
return(m.call()[-1])
def resume(self):
"""Resume the torrent"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.resume")
return(m.call()[-1])
def close(self):
"""Close the torrent and it's files"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.close")
return(m.call()[-1])
def erase(self):
"""Delete the torrent
@note: doesn't delete the downloaded files"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.erase")
return(m.call()[-1])
def check_hash(self):
"""(Re)hash check the torrent"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.check_hash")
return(m.call()[-1])
def poll(self):
"""poll rTorrent to get latest peer/tracker/file information"""
self.get_peers()
self.get_trackers()
self.get_files()
def update(self):
"""Refresh torrent data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
# custom functions (only call private methods, since they only check
# local variables and are therefore faster)
self._call_custom_methods()
def accept_seeders(self, accept_seeds):
"""Enable/disable whether the torrent connects to seeders
@param accept_seeds: enable/disable accepting seeders
@type accept_seeds: bool"""
if accept_seeds:
call = "d.accepting_seeders.enable"
else:
call = "d.accepting_seeders.disable"
m = rpc.Multicall(self)
self.multicall_add(m, call)
return(m.call()[-1])
def announce(self):
"""Announce torrent info to tracker(s)"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.tracker_announce")
return(m.call()[-1])
@staticmethod
def _assert_custom_key_valid(key):
assert type(key) == int and key > 0 and key < 6, \
"key must be an integer between 1-5"
def get_custom(self, key):
"""
Get custom value
@param key: the index for the custom field (between 1-5)
@type key: int
@rtype: str
"""
self._assert_custom_key_valid(key)
m = rpc.Multicall(self)
field = "custom{0}".format(key)
self.multicall_add(m, "d.get_{0}".format(field))
setattr(self, field, m.call()[-1])
return (getattr(self, field))
def set_custom(self, key, value):
"""
Set custom value
@param key: the index for the custom field (between 1-5)
@type key: int
@param value: the value to be stored
@type value: str
@return: if successful, value will be returned
@rtype: str
"""
self._assert_custom_key_valid(key)
m = rpc.Multicall(self)
self.multicall_add(m, "d.set_custom{0}".format(key), value)
return(m.call()[-1])
def set_visible(self, view, visible=True):
p = self._rt_obj._get_conn()
if visible:
return p.view.set_visible(self.info_hash, view)
else:
return p.view.set_not_visible(self.info_hash, view)
############################################################################
# CUSTOM METHODS (Not part of the official rTorrent API)
##########################################################################
def _is_hash_checking_queued(self):
"""Only checks instance variables, shouldn't be called directly"""
# if hashing == 3, then torrent is marked for hash checking
# if hash_checking == False, then torrent is waiting to be checked
self.hash_checking_queued = (self.hashing == 3 and
self.hash_checking is False)
return(self.hash_checking_queued)
def is_hash_checking_queued(self):
"""Check if torrent is waiting to be hash checked
@note: Variable where the result for this method is stored: Torrent.hash_checking_queued"""
m = rpc.Multicall(self)
self.multicall_add(m, "d.get_hashing")
self.multicall_add(m, "d.is_hash_checking")
results = m.call()
setattr(self, "hashing", results[0])
setattr(self, "hash_checking", results[1])
return(self._is_hash_checking_queued())
def _is_paused(self):
"""Only checks instance variables, shouldn't be called directly"""
self.paused = (self.state == 0)
return(self.paused)
def is_paused(self):
"""Check if torrent is paused
@note: Variable where the result for this method is stored: Torrent.paused"""
self.get_state()
return(self._is_paused())
def _is_started(self):
"""Only checks instance variables, shouldn't be called directly"""
self.started = (self.state == 1)
return(self.started)
def is_started(self):
"""Check if torrent is started
@note: Variable where the result for this method is stored: Torrent.started"""
self.get_state()
return(self._is_started())
methods = [
# RETRIEVERS
Method(Torrent, 'is_hash_checked', 'd.is_hash_checked',
boolean=True,
),
Method(Torrent, 'is_hash_checking', 'd.is_hash_checking',
boolean=True,
),
Method(Torrent, 'get_peers_max', 'd.get_peers_max'),
Method(Torrent, 'get_tracker_focus', 'd.get_tracker_focus'),
Method(Torrent, 'get_skip_total', 'd.get_skip_total'),
Method(Torrent, 'get_state', 'd.get_state'),
Method(Torrent, 'get_peer_exchange', 'd.get_peer_exchange'),
Method(Torrent, 'get_down_rate', 'd.get_down_rate'),
Method(Torrent, 'get_connection_seed', 'd.get_connection_seed'),
Method(Torrent, 'get_uploads_max', 'd.get_uploads_max'),
Method(Torrent, 'get_priority_str', 'd.get_priority_str'),
Method(Torrent, 'is_open', 'd.is_open',
boolean=True,
),
Method(Torrent, 'get_peers_min', 'd.get_peers_min'),
Method(Torrent, 'get_peers_complete', 'd.get_peers_complete'),
Method(Torrent, 'get_tracker_numwant', 'd.get_tracker_numwant'),
Method(Torrent, 'get_connection_current', 'd.get_connection_current'),
Method(Torrent, 'is_complete', 'd.get_complete',
boolean=True,
),
Method(Torrent, 'get_peers_connected', 'd.get_peers_connected'),
Method(Torrent, 'get_chunk_size', 'd.get_chunk_size'),
Method(Torrent, 'get_state_counter', 'd.get_state_counter'),
Method(Torrent, 'get_base_filename', 'd.get_base_filename'),
Method(Torrent, 'get_state_changed', 'd.get_state_changed'),
Method(Torrent, 'get_peers_not_connected', 'd.get_peers_not_connected'),
Method(Torrent, 'get_directory', 'd.get_directory'),
Method(Torrent, 'is_incomplete', 'd.incomplete',
boolean=True,
),
Method(Torrent, 'get_tracker_size', 'd.get_tracker_size'),
Method(Torrent, 'is_multi_file', 'd.is_multi_file',
boolean=True,
),
Method(Torrent, 'get_local_id', 'd.get_local_id'),
Method(Torrent, 'get_ratio', 'd.get_ratio',
post_process_func=lambda x: x / 1000.0,
),
Method(Torrent, 'get_loaded_file', 'd.get_loaded_file'),
Method(Torrent, 'get_max_file_size', 'd.get_max_file_size'),
Method(Torrent, 'get_size_chunks', 'd.get_size_chunks'),
Method(Torrent, 'is_pex_active', 'd.is_pex_active',
boolean=True,
),
Method(Torrent, 'get_hashing', 'd.get_hashing'),
Method(Torrent, 'get_bitfield', 'd.get_bitfield'),
Method(Torrent, 'get_local_id_html', 'd.get_local_id_html'),
Method(Torrent, 'get_connection_leech', 'd.get_connection_leech'),
Method(Torrent, 'get_peers_accounted', 'd.get_peers_accounted'),
Method(Torrent, 'get_message', 'd.get_message'),
Method(Torrent, 'is_active', 'd.is_active',
boolean=True,
),
Method(Torrent, 'get_size_bytes', 'd.get_size_bytes'),
Method(Torrent, 'get_ignore_commands', 'd.get_ignore_commands'),
Method(Torrent, 'get_creation_date', 'd.get_creation_date'),
Method(Torrent, 'get_base_path', 'd.get_base_path'),
Method(Torrent, 'get_left_bytes', 'd.get_left_bytes'),
Method(Torrent, 'get_size_files', 'd.get_size_files'),
Method(Torrent, 'get_size_pex', 'd.get_size_pex'),
Method(Torrent, 'is_private', 'd.is_private',
boolean=True,
),
Method(Torrent, 'get_max_size_pex', 'd.get_max_size_pex'),
Method(Torrent, 'get_num_chunks_hashed', 'd.get_chunks_hashed',
aliases=("get_chunks_hashed",)),
Method(Torrent, 'get_num_chunks_wanted', 'd.wanted_chunks'),
Method(Torrent, 'get_priority', 'd.get_priority'),
Method(Torrent, 'get_skip_rate', 'd.get_skip_rate'),
Method(Torrent, 'get_completed_bytes', 'd.get_completed_bytes'),
Method(Torrent, 'get_name', 'd.get_name'),
Method(Torrent, 'get_completed_chunks', 'd.get_completed_chunks'),
Method(Torrent, 'get_throttle_name', 'd.get_throttle_name'),
Method(Torrent, 'get_free_diskspace', 'd.get_free_diskspace'),
Method(Torrent, 'get_directory_base', 'd.get_directory_base'),
Method(Torrent, 'get_hashing_failed', 'd.get_hashing_failed'),
Method(Torrent, 'get_tied_to_file', 'd.get_tied_to_file'),
Method(Torrent, 'get_down_total', 'd.get_down_total'),
Method(Torrent, 'get_bytes_done', 'd.get_bytes_done'),
Method(Torrent, 'get_up_rate', 'd.get_up_rate'),
Method(Torrent, 'get_up_total', 'd.get_up_total'),
Method(Torrent, 'is_accepting_seeders', 'd.accepting_seeders',
boolean=True,
),
Method(Torrent, "get_chunks_seen", "d.chunks_seen",
min_version=(0, 9, 1),
),
Method(Torrent, "is_partially_done", "d.is_partially_done",
boolean=True,
),
Method(Torrent, "is_not_partially_done", "d.is_not_partially_done",
boolean=True,
),
Method(Torrent, "get_time_started", "d.timestamp.started"),
Method(Torrent, "get_custom1", "d.get_custom1"),
Method(Torrent, "get_custom2", "d.get_custom2"),
Method(Torrent, "get_custom3", "d.get_custom3"),
Method(Torrent, "get_custom4", "d.get_custom4"),
Method(Torrent, "get_custom5", "d.get_custom5"),
# MODIFIERS
Method(Torrent, 'set_uploads_max', 'd.set_uploads_max'),
Method(Torrent, 'set_tied_to_file', 'd.set_tied_to_file'),
Method(Torrent, 'set_tracker_numwant', 'd.set_tracker_numwant'),
Method(Torrent, 'set_priority', 'd.set_priority'),
Method(Torrent, 'set_peers_max', 'd.set_peers_max'),
Method(Torrent, 'set_hashing_failed', 'd.set_hashing_failed'),
Method(Torrent, 'set_message', 'd.set_message'),
Method(Torrent, 'set_throttle_name', 'd.set_throttle_name'),
Method(Torrent, 'set_peers_min', 'd.set_peers_min'),
Method(Torrent, 'set_ignore_commands', 'd.set_ignore_commands'),
Method(Torrent, 'set_max_file_size', 'd.set_max_file_size'),
Method(Torrent, 'set_custom5', 'd.set_custom5'),
Method(Torrent, 'set_custom4', 'd.set_custom4'),
Method(Torrent, 'set_custom2', 'd.set_custom2'),
Method(Torrent, 'set_custom1', 'd.set_custom1'),
Method(Torrent, 'set_custom3', 'd.set_custom3'),
Method(Torrent, 'set_connection_current', 'd.set_connection_current'),
]
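As a usage sketch of the modifiers above (the client object, info hash, and label value are placeholders): custom1 is commonly surfaced by ruTorrent as the label, and set_directory() stops the torrent first, so it is restarted explicitly:
# Sketch only -- find_torrent() comes from the wider vendored library.
torrent = rt.find_torrent('0123456789ABCDEF0123456789ABCDEF01234567')
torrent.set_custom(1, 'mylar')             # label via d.set_custom1
torrent.set_directory('/downloads/comics') # stops the torrent (see docstring)
torrent.start()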

lib/rtorrent/tracker.py Normal file
@@ -0,0 +1,138 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# from rtorrent.rpc import Method
import rpc
from common import safe_repr
Method = rpc.Method
class Tracker:
"""Represents an individual tracker within a L{Torrent} instance."""
def __init__(self, _rt_obj, info_hash, **kwargs):
self._rt_obj = _rt_obj
self.info_hash = info_hash # : info hash for the torrent using this tracker
for k in kwargs.keys():
setattr(self, k, kwargs.get(k, None))
# for clarity's sake...
self.index = self.group # : position of tracker within the torrent's tracker list
self.rpc_id = "{0}:t{1}".format(
self.info_hash, self.index) # : unique id to pass to rTorrent
def __repr__(self):
return safe_repr("Tracker(index={0}, url=\"{1}\")",
self.index, self.url)
def enable(self):
"""Alias for set_enabled("yes")"""
self.set_enabled("yes")
def disable(self):
"""Alias for set_enabled("no")"""
self.set_enabled("no")
def update(self):
"""Refresh tracker data
@note: All fields are stored as attributes to self.
@return: None
"""
multicall = rpc.Multicall(self)
retriever_methods = [m for m in methods
if m.is_retriever() and m.is_available(self._rt_obj)]
for method in retriever_methods:
multicall.add(method, self.rpc_id)
multicall.call()
methods = [
# RETRIEVERS
Method(Tracker, 'is_enabled', 't.is_enabled', boolean=True),
Method(Tracker, 'get_id', 't.get_id'),
Method(Tracker, 'get_scrape_incomplete', 't.get_scrape_incomplete'),
Method(Tracker, 'is_open', 't.is_open', boolean=True),
Method(Tracker, 'get_min_interval', 't.get_min_interval'),
Method(Tracker, 'get_scrape_downloaded', 't.get_scrape_downloaded'),
Method(Tracker, 'get_group', 't.get_group'),
Method(Tracker, 'get_scrape_time_last', 't.get_scrape_time_last'),
Method(Tracker, 'get_type', 't.get_type'),
Method(Tracker, 'get_normal_interval', 't.get_normal_interval'),
Method(Tracker, 'get_url', 't.get_url'),
Method(Tracker, 'get_scrape_complete', 't.get_scrape_complete',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_activity_time_last', 't.activity_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_activity_time_next', 't.activity_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_failed_time_last', 't.failed_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_failed_time_next', 't.failed_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_success_time_last', 't.success_time_last',
min_version=(0, 8, 9),
),
Method(Tracker, 'get_success_time_next', 't.success_time_next',
min_version=(0, 8, 9),
),
Method(Tracker, 'can_scrape', 't.can_scrape',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'get_failed_counter', 't.failed_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'get_scrape_counter', 't.scrape_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'get_success_counter', 't.success_counter',
min_version=(0, 8, 9)
),
Method(Tracker, 'is_usable', 't.is_usable',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'is_busy', 't.is_busy',
min_version=(0, 9, 1),
boolean=True
),
Method(Tracker, 'is_extra_tracker', 't.is_extra_tracker',
min_version=(0, 9, 1),
boolean=True,
),
Method(Tracker, "get_latest_sum_peers", "t.latest_sum_peers",
min_version=(0, 9, 0)
),
Method(Tracker, "get_latest_new_peers", "t.latest_new_peers",
min_version=(0, 9, 0)
),
# MODIFIERS
Method(Tracker, 'set_enabled', 't.set_enabled'),
]
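A short sketch of the tracker accessors in practice, reusing the hypothetical `torrent` object from the earlier sketches:
# Sketch: list trackers and disable a stale one (URL substring is a placeholder).
for t in torrent.get_trackers():
    print("%s enabled=%s" % (t.get_url(), t.is_enabled()))
    if 'unresponsive.example' in t.get_url():
        t.disable()                        # alias for set_enabled("no")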

lib/utorrent/__init__.py Normal file
lib/utorrent/client.py Normal file
@@ -0,0 +1,142 @@
#coding=utf8
import urllib
import urllib2
import urlparse
import cookielib
import re
import StringIO
try:
import json
except ImportError:
import simplejson as json
from upload import MultiPartForm
class UTorrentClient(object):
def __init__(self, base_url, username, password):
self.base_url = base_url
self.username = username
self.password = password
self.opener = self._make_opener('uTorrent', base_url, username, password)
self.token = self._get_token()
#TODO refresh token, when necessary
def _make_opener(self, realm, base_url, username, password):
'''The uTorrent API needs HTTP Basic Auth and cookie support for token verification.'''
auth_handler = urllib2.HTTPBasicAuthHandler()
auth_handler.add_password(realm=realm,
uri=base_url,
user=username,
passwd=password)
opener = urllib2.build_opener(auth_handler)
urllib2.install_opener(opener)
cookie_jar = cookielib.CookieJar()
cookie_handler = urllib2.HTTPCookieProcessor(cookie_jar)
handlers = [auth_handler, cookie_handler]
opener = urllib2.build_opener(*handlers)
return opener
def _get_token(self):
url = urlparse.urljoin(self.base_url, 'token.html')
response = self.opener.open(url)
token_re = "<div id='token' style='display:none;'>([^<>]+)</div>"
match = re.search(token_re, response.read())
return match.group(1)
def list(self, **kwargs):
params = [('list', '1')]
params += kwargs.items()
return self._action(params)
def start(self, *hashes):
params = [('action', 'start'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def stop(self, *hashes):
params = [('action', 'stop'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def pause(self, *hashes):
params = [('action', 'pause'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def forcestart(self, *hashes):
params = [('action', 'forcestart'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def remove(self, *hashes):
params = [('action', 'remove'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def removedata(self, *hashes):
params = [('action', 'removedata'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def recheck(self, *hashes):
params = [('action', 'recheck'),]
for hash in hashes:
params.append(('hash', hash))
return self._action(params)
def getfiles(self, hash):
params = [('action', 'getfiles'), ('hash', hash)]
return self._action(params)
def getprops(self, hash):
params = [('action', 'getprops'), ('hash', hash)]
return self._action(params)
def setprio(self, hash, priority, *files):
params = [('action', 'setprio'), ('hash', hash), ('p', str(priority))]
for file_index in files:
params.append(('f', str(file_index)))
return self._action(params)
def addfile(self, filename, filepath=None, bytes=None):
params = [('action', 'add-file')]
form = MultiPartForm()
if filepath is not None:
file_handler = open(filepath, 'rb') # binary mode for .torrent payloads
else:
file_handler = StringIO.StringIO(bytes)
form.add_file('torrent_file', filename.encode('utf-8'), file_handler)
return self._action(params, str(form), form.get_content_type())
def _action(self, params, body=None, content_type=None):
#about token, see https://github.com/bittorrent/webui/wiki/TokenSystem
url = self.base_url + '?token=' + self.token + '&' + urllib.urlencode(params)
request = urllib2.Request(url)
if body:
request.add_data(body)
request.add_header('Content-length', len(body))
if content_type:
request.add_header('Content-type', content_type)
try:
response = self.opener.open(request)
return response.code, json.loads(response.read())
except urllib2.HTTPError,e:
raise
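A minimal sketch of the client above; the WebUI URL, credentials, and info hash are placeholders:
# Sketch: connect, list torrents, and start one by hash.
client = UTorrentClient('http://localhost:8080/gui/', 'admin', 'secret')
status, data = client.list()               # (HTTP status code, decoded JSON)
for row in data.get('torrents', []):
    print("%s %s" % (row[0], row[2]))      # WebUI list rows: [hash, status, name, ...]
client.start('0123456789ABCDEF0123456789ABCDEF01234567')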

lib/utorrent/upload.py Normal file
@@ -0,0 +1,71 @@
#code copied from http://www.doughellmann.com/PyMOTW/urllib2/
import itertools
import mimetools
import mimetypes
from cStringIO import StringIO
import urllib
import urllib2
class MultiPartForm(object):
"""Accumulate the data to be used when posting a form."""
def __init__(self):
self.form_fields = []
self.files = []
self.boundary = mimetools.choose_boundary()
return
def get_content_type(self):
return 'multipart/form-data; boundary=%s' % self.boundary
def add_field(self, name, value):
"""Add a simple field to the form data."""
self.form_fields.append((name, value))
return
def add_file(self, fieldname, filename, fileHandle, mimetype=None):
"""Add a file to be uploaded."""
body = fileHandle.read()
if mimetype is None:
mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
self.files.append((fieldname, filename, mimetype, body))
return
def __str__(self):
"""Return a string representing the form data, including attached files."""
# Build a list of lists, each containing "lines" of the
# request. Each part is separated by a boundary string.
# Once the list is built, return a string where each
# line is separated by '\r\n'.
parts = []
part_boundary = '--' + self.boundary
# Add the form fields
parts.extend(
[ part_boundary,
'Content-Disposition: form-data; name="%s"' % name,
'',
value,
]
for name, value in self.form_fields
)
# Add the files to upload
parts.extend(
[ part_boundary,
'Content-Disposition: file; name="%s"; filename="%s"' % \
(field_name, filename),
'Content-Type: %s' % content_type,
'',
body,
]
for field_name, filename, content_type, body in self.files
)
# Flatten the list and add closing boundary marker,
# then return CR+LF separated data
flattened = list(itertools.chain(*parts))
flattened.append('--' + self.boundary + '--')
flattened.append('')
return '\r\n'.join(flattened)
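To show the form builder end-to-end, a short sketch (field values and paths are placeholders):
# Sketch: build a multipart body for a torrent-file upload.
form = MultiPartForm()
form.add_field('note', 'illustrative field')
form.add_file('torrent_file', 'example.torrent',
              open('/tmp/example.torrent', 'rb'))  # binary mode for .torrent payloads
body = str(form)                                   # CRLF-joined parts + closing boundary
headers = {'Content-Type': form.get_content_type(),
           'Content-Length': str(len(body))}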

mylar/PostProcessor.py
@@ -243,44 +243,44 @@ class PostProcessor(object):
#once a series name and issue are matched,
#write the series/issue/filename to a tuple
#when all done, iterate over the tuple until completion...
comicseries = myDB.select("SELECT * FROM comics")
#first we get a parsed results list of the files being processed, and then poll against the sql to get a short list of hits.
fl = filechecker.FileChecker(self.nzb_folder, justparse=True)
filelist = fl.listFiles()
if filelist['comiccount'] == 0: # is None:
logger.warn('There were no files located - check the debugging logs if you think this is in error.')
return
logger.info(filelist)
logger.info('I have located ' + str(filelist['comiccount']) + ' files that I should be able to post-process. Continuing...')
manual_list = []
if comicseries is None:
logger.error(module + ' No Series in Watchlist - checking against Story Arcs (just in case). If I do not find anything, maybe you should be running Import?')
else:
watchvals = []
for wv in comicseries:
wv_comicname = wv['ComicName']
wv_comicpublisher = wv['ComicPublisher']
wv_alternatesearch = wv['AlternateSearch']
wv_comicid = wv['ComicID']
for fl in filelist['comiclist']:
#mod_seriesname = '%' + re.sub(' ', '%', fl['series_name']).strip() + '%'
as_d = filechecker.FileChecker(watchcomic=fl['series_name'].decode('utf-8'))
as_dinfo = as_d.dynamic_replace(fl['series_name'])
mod_seriesname = as_dinfo['mod_seriesname']
logger.fdebug('Dynamic-ComicName: ' + mod_seriesname)
comicseries = myDB.select('SELECT * FROM comics Where DynamicComicName=?', [mod_seriesname])
if comicseries is None:
logger.error(module + ' No Series in Watchlist - checking against Story Arcs (just in case). If I do not find anything, maybe you should be running Import?')
break
else:
watchvals = []
for wv in comicseries:
wv_seriesyear = wv['ComicYear']
wv_comicversion = wv['ComicVersion']
wv_publisher = wv['ComicPublisher']
wv_total = wv['Total']
if mylar.FOLDER_SCAN_LOG_VERBOSE:
logger.fdebug('Checking ' + wv['ComicName'] + ' [' + str(wv['ComicYear']) + '] -- ' + str(wv['ComicID']))
wv_comicname = wv['ComicName']
wv_comicpublisher = wv['ComicPublisher']
wv_alternatesearch = wv['AlternateSearch']
wv_comicid = wv['ComicID']
#force it to use the Publication Date of the latest issue instead of the Latest Date (which could be anything)
latestdate = myDB.select('SELECT IssueDate from issues WHERE ComicID=? order by ReleaseDate DESC', [wv['ComicID']])
if latestdate:
tmplatestdate = latestdate[0][0]
if tmplatestdate[:4] != wv['LatestDate'][:4]:
if tmplatestdate[:4] > wv['LatestDate'][:4]:
latestdate = tmplatestdate
else:
latestdate = wv['LatestDate']
else:
latestdate = tmplatestdate
else:
latestdate = wv['LatestDate']
wv_seriesyear = wv['ComicYear']
wv_comicversion = wv['ComicVersion']
wv_publisher = wv['ComicPublisher']
wv_total = wv['Total']
if mylar.FOLDER_SCAN_LOG_VERBOSE:
logger.fdebug('Checking ' + wv['ComicName'] + ' [' + str(wv['ComicYear']) + '] -- ' + str(wv['ComicID']))
if latestdate == '0000-00-00' or latestdate == 'None' or latestdate is None:
logger.fdebug('Forcing a refresh of series: ' + wv_comicname + ' as it appears to have incomplete issue dates.')
updater.dbUpdate([wv_comicid])
logger.fdebug('Refresh complete for ' + wv_comicname + '. Rechecking issue dates for completion.')
#force it to use the Publication Date of the latest issue instead of the Latest Date (which could be anything)
latestdate = myDB.select('SELECT IssueDate from issues WHERE ComicID=? order by ReleaseDate DESC', [wv['ComicID']])
if latestdate:
tmplatestdate = latestdate[0][0]
@@ -294,217 +294,228 @@ class PostProcessor(object):
else:
latestdate = wv['LatestDate']
logger.fdebug('Latest Date (after forced refresh) set to :' + str(latestdate))
if latestdate == '0000-00-00' or latestdate == 'None' or latestdate is None:
logger.fdebug('Unable to properly attain the Latest Date for series: ' + wv_comicname + '. Cannot check against this series for post-processing.')
continue
logger.fdebug('Forcing a refresh of series: ' + wv_comicname + ' as it appears to have incomplete issue dates.')
updater.dbUpdate([wv_comicid])
logger.fdebug('Refresh complete for ' + wv_comicname + '. Rechecking issue dates for completion.')
latestdate = myDB.select('SELECT IssueDate from issues WHERE ComicID=? order by ReleaseDate DESC', [wv['ComicID']])
if latestdate:
tmplatestdate = latestdate[0][0]
if tmplatestdate[:4] != wv['LatestDate'][:4]:
if tmplatestdate[:4] > wv['LatestDate'][:4]:
latestdate = tmplatestdate
else:
latestdate = wv['LatestDate']
else:
latestdate = tmplatestdate
else:
latestdate = wv['LatestDate']
watchvals.append({"ComicName": wv_comicname,
"ComicPublisher": wv_comicpublisher,
"AlternateSearch": wv_alternatesearch,
"ComicID": wv_comicid,
"WatchValues": {"SeriesYear": wv_seriesyear,
"LatestDate": latestdate,
"ComicVersion": wv_comicversion,
"Publisher": wv_publisher,
"Total": wv_total,
"ComicID": wv_comicid,
"IsArc": False}
})
logger.fdebug('Latest Date (after forced refresh) set to :' + str(latestdate))
if latestdate == '0000-00-00' or latestdate == 'None' or latestdate is None:
logger.fdebug('Unable to properly attain the Latest Date for series: ' + wv_comicname + '. Cannot check against this series for post-processing.')
continue
watchvals.append({"ComicName": wv_comicname,
"ComicPublisher": wv_comicpublisher,
"AlternateSearch": wv_alternatesearch,
"ComicID": wv_comicid,
"WatchValues": {"SeriesYear": wv_seriesyear,
"LatestDate": latestdate,
"ComicVersion": wv_comicversion,
"Publisher": wv_publisher,
"Total": wv_total,
"ComicID": wv_comicid,
"IsArc": False}
})
ccnt=0
nm=0
for cs in watchvals:
watchmatch = filechecker.listFiles(self.nzb_folder, cs['ComicName'], cs['ComicPublisher'], cs['AlternateSearch'], manual=cs['WatchValues'])
if watchmatch['comiccount'] == 0: # is None:
wm = filechecker.FileChecker(watchcomic=cs['ComicName'], Publisher=cs['ComicPublisher'], AlternateSearch=cs['AlternateSearch'], manual=cs['WatchValues'])
watchmatch = wm.matchIT(fl)
if watchmatch['process_status'] == 'fail':
nm+=1
continue
else:
fn = 0
fccnt = int(watchmatch['comiccount'])
if len(watchmatch) == 1: continue
while (fn < fccnt):
try:
tmpfc = watchmatch['comiclist'][fn]
except (IndexError, KeyError):
break
temploc= tmpfc['JusttheDigits'].replace('_', ' ')
temploc = re.sub('[\#\']', '', temploc)
temploc= watchmatch['justthedigits'].replace('_', ' ')
temploc = re.sub('[\#\']', '', temploc)
if 'annual' in temploc.lower():
biannchk = re.sub('-', '', temploc.lower()).strip()
if 'biannual' in biannchk:
logger.fdebug(module + ' Bi-Annual detected.')
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
else:
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
logger.fdebug(module + ' Annual detected [' + str(fcdigit) +']. ComicID assigned as ' + str(cs['ComicID']))
annchk = "yes"
issuechk = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'], fcdigit]).fetchone()
if 'annual' in temploc.lower():
biannchk = re.sub('-', '', temploc.lower()).strip()
if 'biannual' in biannchk:
logger.fdebug(module + ' Bi-Annual detected.')
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
else:
fcdigit = helpers.issuedigits(temploc)
issuechk = myDB.selectone("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'], fcdigit]).fetchone()
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
logger.fdebug(module + ' Annual detected [' + str(fcdigit) +']. ComicID assigned as ' + str(cs['ComicID']))
annchk = "yes"
issuechk = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'], fcdigit]).fetchone()
else:
fcdigit = helpers.issuedigits(temploc)
issuechk = myDB.selectone("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'], fcdigit]).fetchone()
if issuechk is None:
logger.fdebug(module + ' No corresponding issue # found for ' + str(cs['ComicID']))
continue
else:
datematch = "True"
if len(watchmatch) >= 1 and watchmatch['issue_year'] is not None:
#if the # of matches is more than 1, we need to make sure we get the right series
#compare the ReleaseDate for the issue, to the found issue date in the filename.
#if ReleaseDate doesn't exist, use IssueDate
#if no issue date was found, then ignore.
issyr = None
#logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
#logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
#logger.info('ReleaseDate: ' + str(issuechk['ReleaseDate']))
#logger.info('IssueDate: ' + str(issuechk['IssueDate']))
if issuechk['ReleaseDate'] is not None and issuechk['ReleaseDate'] != '0000-00-00':
monthval = issuechk['ReleaseDate']
if int(issuechk['ReleaseDate'][:4]) < int(watchmatch['issue_year']):
logger.fdebug(module + ' ' + str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(watchmatch['issue_year']) + ' that was discovered in the filename')
datematch = "False"
else:
monthval = issuechk['IssueDate']
if int(issuechk['IssueDate'][:4]) < int(watchmatch['issue_year']):
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(watchmatch['issue_year']) + ' that was discovered in the filename')
datematch = "False"
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
issyr = int(monthval[:4]) + 1
logger.fdebug(module + ' IssueYear (issyr) is ' + str(issyr))
elif int(monthval[5:7]) == 1 or int(monthval[5:7]) == 2 or int(monthval[5:7]) == 3:
issyr = int(monthval[:4]) - 1
if datematch == "False" and issyr is not None:
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(watchmatch['issue_year']) + ' : rechecking by month-check versus year.')
datematch = "True"
if int(issyr) != int(watchmatch['issue_year']):
logger.fdebug(module + '[.:FAIL:.] Issue is before the modified issue year of ' + str(issyr))
datematch = "False"
if issuechk is None:
logger.fdebug(module + ' No corresponding issue # found for ' + str(cs['ComicID']))
else:
datematch = "True"
if len(watchmatch) >= 1 and tmpfc['ComicYear'] is not None:
#if the # of matches is more than 1, we need to make sure we get the right series
#compare the ReleaseDate for the issue, to the found issue date in the filename.
#if ReleaseDate doesn't exist, use IssueDate
#if no issue date was found, then ignore.
issyr = None
#logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
#logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
logger.info(module + ' Found matching issue # ' + str(fcdigit) + ' for ComicID: ' + str(cs['ComicID']) + ' / IssueID: ' + str(issuechk['IssueID']))
#logger.info('ReleaseDate: ' + str(issuechk['ReleaseDate']))
#logger.info('IssueDate: ' + str(issuechk['IssueDate']))
if issuechk['ReleaseDate'] is not None and issuechk['ReleaseDate'] != '0000-00-00':
monthval = issuechk['ReleaseDate']
if int(issuechk['ReleaseDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(module + ' ' + str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
datematch = "False"
if datematch == "True":
manual_list.append({"ComicLocation": os.path.join(watchmatch['comiclocation'],watchmatch['comicfilename']),
"ComicID": cs['ComicID'],
"IssueID": issuechk['IssueID'],
"IssueNumber": issuechk['Issue_Number'],
"ComicName": cs['ComicName']})
else:
logger.fdebug(module + '[NON-MATCH: ' + cs['ComicName'] + '-' + cs['ComicID'] + '] Incorrect series - not populating..continuing post-processing')
continue
#ccnt+=1
logger.fdebug(module + '[SUCCESSFUL MATCH: ' + cs['ComicName'] + '-' + cs['ComicID'] + '] Match verified for ' + fl['comicfilename'])
break
else:
monthval = issuechk['IssueDate']
if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
datematch = "False"
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
issyr = int(monthval[:4]) + 1
logger.fdebug(module + ' IssueYear (issyr) is ' + str(issyr))
elif int(monthval[5:7]) == 1 or int(monthval[5:7]) == 2 or int(monthval[5:7]) == 3:
issyr = int(monthval[:4]) - 1
if datematch == "False" and issyr is not None:
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + ' : rechecking by month-check versus year.')
datematch = "True"
if int(issyr) != int(tmpfc['ComicYear']):
logger.fdebug(module + '[.:FAIL:.] Issue is before the modified issue year of ' + str(issyr))
datematch = "False"
else:
logger.info(module + ' Found matching issue # ' + str(fcdigit) + ' for ComicID: ' + str(cs['ComicID']) + ' / IssueID: ' + str(issuechk['IssueID']))
if datematch == "True":
manual_list.append({"ComicLocation": tmpfc['ComicLocation'],
"ComicID": cs['ComicID'],
"IssueID": issuechk['IssueID'],
"IssueNumber": issuechk['Issue_Number'],
"ComicName": cs['ComicName']})
else:
logger.fdebug(module + ' Incorrect series - not populating..continuing post-processing')
#ccnt+=1
fn+=1
logger.fdebug(module + ' There are ' + str(len(manual_list)) + ' files found that match on your watchlist, ' + str(nm) + ' do not match anything and will be ignored.')
logger.fdebug(module + ' There are ' + str(len(manual_list)) + ' files found that match on your watchlist, ' + str(int(filelist['comiccount'] - len(manual_list))) + ' do not match anything and will be ignored.')
#we should setup for manual post-processing of story-arc issues here
arc_series = myDB.select("SELECT * FROM readinglist order by ComicName") # by StoryArcID")
manual_arclist = []
if arc_series is None:
logger.error(module + ' No Story Arcs in Watchlist - aborting Manual Post Processing. Maybe you should be running Import?')
return
else:
arcvals = []
for av in arc_series:
arcvals.append({"ComicName": av['ComicName'],
"ArcValues": {"StoryArc": av['StoryArc'],
"StoryArcID": av['StoryArcID'],
"IssueArcID": av['IssueArcID'],
"ComicName": av['ComicName'],
"ComicPublisher": av['IssuePublisher'],
"IssueID": av['IssueID'],
"IssueNumber": av['IssueNumber'],
"IssueYear": av['IssueYear'], #for some reason this is empty
"ReadingOrder": av['ReadingOrder'],
"IssueDate": av['IssueDate'],
"Status": av['Status'],
"Location": av['Location']},
"WatchValues": {"SeriesYear": av['SeriesYear'],
"LatestDate": av['IssueDate'],
"ComicVersion": 'v' + str(av['SeriesYear']),
"Publisher": av['IssuePublisher'],
"Total": av['TotalIssues'], # this will return the total issues in the arc (not needed for this)
"ComicID": av['ComicID'],
"IsArc": True}
})
#we can also search by ComicID to just grab those particular arcs as an alternative as well (not done)
logger.fdebug(module + ' Now Checking if the issue also resides in one of the storyarc\'s that I am watching.')
for fl in filelist['comiclist']:
mod_seriesname = '%' + re.sub(' ', '%', fl['series_name']).strip() + '%'
arc_series = myDB.select("SELECT * FROM readinglist WHERE ComicName LIKE?", [fl['series_name']]) # by StoryArcID")
manual_arclist = []
if arc_series is None:
logger.error(module + ' No Story Arcs in Watchlist that contain that particular series - aborting Manual Post Processing. Maybe you should be running Import?')
return
else:
arcvals = []
for av in arc_series:
arcvals.append({"ComicName": av['ComicName'],
"ArcValues": {"StoryArc": av['StoryArc'],
"StoryArcID": av['StoryArcID'],
"IssueArcID": av['IssueArcID'],
"ComicName": av['ComicName'],
"ComicPublisher": av['IssuePublisher'],
"IssueID": av['IssueID'],
"IssueNumber": av['IssueNumber'],
"IssueYear": av['IssueYear'], #for some reason this is empty
"ReadingOrder": av['ReadingOrder'],
"IssueDate": av['IssueDate'],
"Status": av['Status'],
"Location": av['Location']},
"WatchValues": {"SeriesYear": av['SeriesYear'],
"LatestDate": av['IssueDate'],
"ComicVersion": 'v' + str(av['SeriesYear']),
"Publisher": av['IssuePublisher'],
"Total": av['TotalIssues'], # this will return the total issues in the arc (not needed for this)
"ComicID": av['ComicID'],
"IsArc": True}
})
ccnt=0
nm=0
from collections import defaultdict
res = defaultdict(list)
for acv in arcvals:
res[acv['ComicName']].append({"ArcValues": acv['ArcValues'],
"WatchValues": acv['WatchValues']})
ccnt=0
nm=0
from collections import defaultdict
res = defaultdict(list)
for acv in arcvals:
res[acv['ComicName']].append({"ArcValues": acv['ArcValues'],
"WatchValues": acv['WatchValues']})
for k,v in res.items():
i = 0
#k is ComicName
#v is ArcValues and WatchValues
while i < len(v):
#k is ComicName
#v is ArcValues and WatchValues
if k is None or k == 'None':
pass
else:
arcmatch = filechecker.listFiles(self.nzb_folder, k, v[i]['ArcValues']['ComicPublisher'], manual=v[i]['WatchValues'])
if arcmatch['comiccount'] == 0:
pass
arcm = filechecker.FileChecker(watchcomic=k, Publisher=v[i]['ArcValues']['ComicPublisher'], manual=v[i]['WatchValues'])
arcmatch = arcm.matchIT(fl)
logger.info('arcmatch: ' + str(arcmatch))
if arcmatch['process_status'] == 'fail':
nm+=1
else:
fn = 0
fccnt = int(arcmatch['comiccount'])
if len(arcmatch) == 1: break
while (fn < fccnt):
try:
tmpfc = arcmatch['comiclist'][fn]
except (IndexError, KeyError):
break
temploc= tmpfc['JusttheDigits'].replace('_', ' ')
temploc = re.sub('[\#\']', '', temploc)
if 'annual' in temploc.lower():
biannchk = re.sub('-', '', temploc.lower()).strip()
if 'biannual' in biannchk:
logger.fdebug(module + ' Bi-Annual detected.')
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
else:
logger.fdebug(module + ' Annual detected.')
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
annchk = "yes"
issuechk = myDB.selectone("SELECT * from readinglist WHERE ComicID=? AND Int_IssueNumber=?", [v[i]['WatchValues']['ComicID'], fcdigit]).fetchone()
temploc= arcmatch['justthedigits'].replace('_', ' ')
temploc = re.sub('[\#\']', '', temploc)
if helpers.issuedigits(temploc) != helpers.issuedigits(v[i]['ArcValues']['IssueNumber']):
logger.info('issues dont match. Skipping')
i+=1
continue
if 'annual' in temploc.lower():
biannchk = re.sub('-', '', temploc.lower()).strip()
if 'biannual' in biannchk:
logger.fdebug(module + ' Bi-Annual detected.')
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
else:
fcdigit = helpers.issuedigits(temploc)
issuechk = myDB.selectone("SELECT * from readinglist WHERE ComicID=? AND Int_IssueNumber=?", [v[i]['WatchValues']['ComicID'], fcdigit]).fetchone()
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
logger.fdebug(module + ' Annual detected [' + str(fcdigit) +']. ComicID assigned as ' + str(v[i]['WatchValues']['ComicID']))
annchk = "yes"
issuechk = myDB.selectone("SELECT * from readinglist WHERE ComicID=? AND Int_IssueNumber=?", [v[i]['WatchValues']['ComicID'], fcdigit]).fetchone()
else:
fcdigit = helpers.issuedigits(temploc)
issuechk = myDB.selectone("SELECT * from readinglist WHERE ComicID=? AND Int_IssueNumber=?", [v[i]['WatchValues']['ComicID'], fcdigit]).fetchone()
if issuechk is None:
logger.fdebug(module + ' No corresponding issue # found for ' + str(v[i]['WatchValues']['ComicID']))
else:
datematch = "True"
if len(arcmatch) >= 1 and tmpfc['ComicYear'] is not None:
#if the # of matches is more than 1, we need to make sure we get the right series
#compare the ReleaseDate for the issue, to the found issue date in the filename.
#if ReleaseDate doesn't exist, use IssueDate
#if no issue date was found, then ignore.
issyr = None
logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
if issuechk is None:
logger.fdebug(module + ' No corresponding issue # found for ' + str(v[i]['WatchValues']['ComicID']))
else:
datematch = "True"
if len(arcmatch) >= 1 and arcmatch['issue_year'] is not None:
#if the # of matches is more than 1, we need to make sure we get the right series
#compare the ReleaseDate for the issue, to the found issue date in the filename.
#if ReleaseDate doesn't exist, use IssueDate
#if no issue date was found, then ignore.
issyr = None
logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
logger.info('ReleaseDate: ' + str(issuechk['StoreDate']))
logger.info('IssueDate: ' + str(issuechk['IssueDate']))
if issuechk['StoreDate'] is not None and issuechk['StoreDate'] != '0000-00-00':
monthval = issuechk['StoreDate']
if int(issuechk['StoreDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(module + ' ' + str(issuechk['StoreDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
datematch = "False"
logger.info('StoreDate ' + str(issuechk['StoreDate']))
logger.info('IssueDate: ' + str(issuechk['IssueDate']))
if issuechk['StoreDate'] is not None and issuechk['StoreDate'] != '0000-00-00':
monthval = issuechk['StoreDate']
if int(issuechk['StoreDate'][:4]) < int(arcmatch['issue_year']):
logger.fdebug(module + ' ' + str(issuechk['StoreDate']) + ' is before the issue year of ' + str(arcmatch['issue_year']) + ' that was discovered in the filename')
datematch = "False"
else:
monthval = issuechk['IssueDate']
if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
if int(issuechk['IssueDate'][:4]) < int(arcmatch['issue_year']):
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(arcmatch['issue_year']) + ' that was discovered in the filename')
datematch = "False"
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
@@ -514,15 +525,18 @@ class PostProcessor(object):
issyr = int(monthval[:4]) - 1
if datematch == "False" and issyr is not None:
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + ' : rechecking by month-check versus year.')
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(arcmatch['issue_year']) + ' : rechecking by month-check versus year.')
datematch = "True"
if int(issyr) != int(tmpfc['ComicYear']):
if int(issyr) != int(arcmatch['issue_year']):
logger.fdebug(module + '[.:FAIL:.] Issue is before the modified issue year of ' + str(issyr))
datematch = "False"
else:
logger.info(module + ' Found matching issue # ' + str(fcdigit) + ' for ComicID: ' + str(v[i]['WatchValues']['ComicID']) + ' / IssueID: ' + str(issuechk['IssueID']))
logger.info('datematch: ' + str(datematch))
logger.info('temploc: ' + str(helpers.issuedigits(temploc)))
logger.info('arcissue: ' + str(helpers.issuedigits(v[i]['ArcValues']['IssueNumber'])))
if datematch == "True" and helpers.issuedigits(temploc) == helpers.issuedigits(v[i]['ArcValues']['IssueNumber']):
passit = False
if len(manual_list) > 0:
@@ -536,131 +550,135 @@ class PostProcessor(object):
passit = True
if passit == False:
logger.info('[' + k + ' #' + str(issuechk['IssueNumber']) + '] MATCH: ' + tmpfc['ComicLocation'] + ' / ' + str(issuechk['IssueID']) + ' / ' + str(v[i]['ArcValues']['IssueID']))
manual_arclist.append({"ComicLocation": tmpfc['ComicLocation'],
"ComicID": v[i]['WatchValues']['ComicID'],
"IssueID": v[i]['ArcValues']['IssueID'],
"IssueNumber": v[i]['ArcValues']['IssueNumber'],
"StoryArc": v[i]['ArcValues']['StoryArc'],
"IssueArcID": v[i]['ArcValues']['IssueArcID'],
"ReadingOrder": v[i]['ArcValues']['ReadingOrder'],
"ComicName": k})
manual_arclist.append({"ComicLocation": arcmatch['comiclocation'],
"ComicID": v[i]['WatchValues']['ComicID'],
"IssueID": v[i]['ArcValues']['IssueID'],
"IssueNumber": v[i]['ArcValues']['IssueNumber'],
"StoryArc": v[i]['ArcValues']['StoryArc'],
"IssueArcID": v[i]['ArcValues']['IssueArcID'],
"ReadingOrder": v[i]['ArcValues']['ReadingOrder'],
"ComicName": k})
logger.fdebug(module + '[SUCCESSFUL MATCH: ' + k + '-' + v[i]['WatchValues']['ComicID'] + '] Match verified for ' + arcmatch['comicfilename'])
break
else:
logger.fdebug(module + ' Incorrect series - not populating..continuing post-processing')
fn+=1
logger.fdebug(module + '[NON-MATCH: ' + k + '-' + v[i]['WatchValues']['ComicID'] + '] Incorrect series - not populating..continuing post-processing')
i+=1
if len(manual_arclist) > 0:
logger.info('[STORY-ARC MANUAL POST-PROCESSING] I have found ' + str(len(manual_arclist)) + ' issues that belong to Story Arcs. Flinging them into the correct directories.')
for ml in manual_arclist:
issueid = ml['IssueID']
ofilename = ml['ComicLocation']
logger.info('[STORY-ARC POST-PROCESSING] Enabled for ' + ml['StoryArc'])
arcdir = helpers.filesafe(ml['StoryArc'])
if mylar.REPLACE_SPACES:
arcdir = arcdir.replace(' ', mylar.REPLACE_CHAR)
if mylar.STORYARCDIR:
storyarcd = os.path.join(mylar.DESTINATION_DIR, "StoryArcs", arcdir)
logger.fdebug(module + ' Story Arc Directory set to : ' + storyarcd)
grdst = storyarcd
else:
logger.fdebug(module + ' Story Arc Directory set to : ' + mylar.GRABBAG_DIR)
storyarcd = os.path.join(mylar.DESTINATION_DIR, mylar.GRABBAG_DIR)
grdst = storyarcd
#tag the meta.
if mylar.ENABLE_META:
logger.info('[STORY-ARC POST-PROCESSING] Metatagging enabled - proceeding...')
try:
import cmtagmylar
metaresponse = cmtagmylar.run(self.nzb_folder, issueid=issueid, filename=ofilename)
except ImportError:
logger.warn(module + ' comictaggerlib not found on system. Ensure the ENTIRE lib directory is located within mylar/lib/comictaggerlib/')
metaresponse = "fail"
if metaresponse == "fail":
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file. Attempting to continue without metatagging...')
elif metaresponse == "unrar error":
logger.error(module + ' This is a corrupt archive - whether CRC errors or it is incomplete. Marking as BAD, and retrying it.')
continue
#launch failed download handling here.
elif metaresponse.startswith('file not found'):
filename_in_error = metaresponse.split('||')[1]
self._log("The file cannot be found in the location provided for metatagging to be used [" + filename_in_error + "]. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...")
logger.error(module + ' The file cannot be found in the location provided for metatagging to be used [' + filename_in_error + ']. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...')
else:
odir = os.path.split(metaresponse)[0]
ofilename = os.path.split(metaresponse)[1]
ext = os.path.splitext(metaresponse)[1]
logger.info(module + ' Successfully wrote metadata to .cbz (' + ofilename + ') - Continuing..')
self._log('Successfully wrote metadata to .cbz (' + ofilename + ') - proceeding...')
checkdirectory = filechecker.validateAndCreateDirectory(grdst, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
dfilename = ofilename
if len(manual_arclist) > 0:
logger.info('[STORY-ARC MANUAL POST-PROCESSING] I have found ' + str(len(manual_arclist)) + ' issues that belong to Story Arcs. Flinging them into the correct directories.')
for ml in manual_arclist:
issueid = ml['IssueID']
ofilename = ml['ComicLocation']
logger.info('[STORY-ARC POST-PROCESSING] Enabled for ' + ml['StoryArc'])
arcdir = helpers.filesafe(ml['StoryArc'])
if mylar.REPLACE_SPACES:
arcdir = arcdir.replace(' ', mylar.REPLACE_CHAR)
#send to renamer here if valid.
if mylar.RENAME_FILES:
renamed_file = helpers.rename_param(ml['ComicID'], ml['ComicName'], ml['IssueNumber'], ofilename, issueid=ml['IssueID'], arc=ml['StoryArc'])
if renamed_file:
dfilename = renamed_file['nfilename']
logger.fdebug(module + ' Renaming file to conform to configuration: ' + ofilename)
#if from a StoryArc, check to see if we're appending the ReadingOrder to the filename
if mylar.READ2FILENAME:
logger.fdebug(module + ' readingorder#: ' + str(ml['ReadingOrder']))
if int(ml['ReadingOrder']) < 10: readord = "00" + str(ml['ReadingOrder'])
elif int(ml['ReadingOrder']) >= 10 and int(ml['ReadingOrder']) <= 99: readord = "0" + str(ml['ReadingOrder'])
else: readord = str(ml['ReadingOrder'])
dfilename = str(readord) + "-" + dfilename
else:
dfilename = dfilename
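#nb: the three branches above just zero-pad the reading order to three digits;
#str(ml['ReadingOrder']).zfill(3) would be an equivalent one-liner.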
if mylar.STORYARCDIR:
storyarcd = os.path.join(mylar.DESTINATION_DIR, "StoryArcs", arcdir)
logger.fdebug(module + ' Story Arc Directory set to : ' + storyarcd)
grdst = storyarcd
else:
logger.fdebug(module + ' Story Arc Directory set to : ' + mylar.GRABBAG_DIR)
storyarcd = os.path.join(mylar.DESTINATION_DIR, mylar.GRABBAG_DIR)
grdst = storyarcd
grab_dst = os.path.join(grdst, dfilename)
logger.fdebug(module + ' Destination Path : ' + grab_dst)
grab_src = os.path.join(self.nzb_folder, ofilename)
logger.fdebug(module + ' Source Path : ' + grab_src)
logger.info(module + ' ' + mylar.FILE_OPTS + 'ing ' + str(ofilename) + ' into directory : ' + str(grab_dst))
#tag the meta.
if mylar.ENABLE_META:
logger.info('[STORY-ARC POST-PROCESSING] Metatagging enabled - proceeding...')
try:
self.fileop(grab_src, grab_dst)
except (OSError, IOError):
logger.warn(module + ' Failed to ' + mylar.FILE_OPTS + ' directory - check directories and manually re-run.')
return
import cmtagmylar
metaresponse = cmtagmylar.run(self.nzb_folder, issueid=issueid, filename=ofilename)
except ImportError:
logger.warn(module + ' comictaggerlib not found on system. Ensure the ENTIRE lib directory is located within mylar/lib/comictaggerlib/')
metaresponse = "fail"
#tidyup old path
try:
pass
#shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
logger.warn(module + ' Failed to remove temporary directory - check directory and manually re-run.')
return
if metaresponse == "fail":
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file. Attempting to continue without metatagging...')
elif metaresponse == "unrar error":
logger.error(module + ' This is a corrupt archive - whether CRC errors or it is incomplete. Marking as BAD, and retrying it.')
continue
#launch failed download handling here.
elif metaresponse.startswith('file not found'):
filename_in_error = metaresponse.split('||')[1]
self._log("The file cannot be found in the location provided for metatagging to be used [" + filename_in_error + "]. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...")
logger.error(module + ' The file cannot be found in the location provided for metatagging to be used [' + filename_in_error + ']. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...')
else:
odir = os.path.split(metaresponse)[0]
ofilename = os.path.split(metaresponse)[1]
ext = os.path.splitext(metaresponse)[1]
logger.info(module + ' Successfully wrote metadata to .cbz (' + ofilename + ') - Continuing..')
self._log('Successfully wrote metadata to .cbz (' + ofilename + ') - proceeding...')
logger.fdebug(module + ' Removed temporary directory : ' + self.nzb_folder)
checkdirectory = filechecker.validateAndCreateDirectory(grdst, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
#delete entry from nzblog table
#if it was downloaded via mylar from the storyarc section, it will have an 'S' in the nzblog
#if it was downloaded outside of mylar and/or not from the storyarc section, it will be a normal issueid in the nzblog
#IssArcID = 'S' + str(ml['IssueArcID'])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', ['S' + str(ml['IssueArcID']),ml['StoryArc']])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', [ml['IssueArcID'],ml['StoryArc']])
dfilename = ofilename
#send to renamer here if valid.
if mylar.RENAME_FILES:
renamed_file = helpers.rename_param(ml['ComicID'], ml['ComicName'], ml['IssueNumber'], ofilename, issueid=ml['IssueID'], arc=ml['StoryArc'])
if renamed_file:
dfilename = renamed_file['nfilename']
logger.fdebug(module + ' Renaming file to conform to configuration: ' + ofilename)
#if from a StoryArc, check to see if we're appending the ReadingOrder to the filename
if mylar.READ2FILENAME:
logger.fdebug(module + ' readingorder#: ' + str(ml['ReadingOrder']))
if int(ml['ReadingOrder']) < 10: readord = "00" + str(ml['ReadingOrder'])
elif int(ml['ReadingOrder']) >= 10 and int(ml['ReadingOrder']) <= 99: readord = "0" + str(ml['ReadingOrder'])
else: readord = str(ml['ReadingOrder'])
dfilename = str(readord) + "-" + dfilename
else:
dfilename = dfilename
grab_dst = os.path.join(grdst, dfilename)
logger.fdebug(module + ' Destination Path : ' + grab_dst)
grab_src = os.path.join(self.nzb_folder, ofilename)
logger.fdebug(module + ' Source Path : ' + grab_src)
logger.info(module + ' ' + mylar.FILE_OPTS + 'ing ' + str(ofilename) + ' into directory : ' + str(grab_dst))
try:
self.fileop(grab_src, grab_dst)
except (OSError, IOError):
logger.warn(module + ' Failed to ' + mylar.FILE_OPTS + ' directory - check directories and manually re-run.')
return
#tidyup old path
try:
pass
#shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
logger.warn(module + ' Failed to remove temporary directory - check directory and manually re-run.')
return
logger.fdebug(module + ' Removed temporary directory : ' + self.nzb_folder)
#delete entry from nzblog table
#if it was downloaded via mylar from the storyarc section, it will have an 'S' in the nzblog
#if it was downloaded outside of mylar and/or not from the storyarc section, it will be a normal issueid in the nzblog
#IssArcID = 'S' + str(ml['IssueArcID'])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', ['S' + str(ml['IssueArcID']),ml['StoryArc']])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', [ml['IssueArcID'],ml['StoryArc']])
logger.fdebug(module + ' IssueArcID: ' + str(ml['IssueArcID']))
ctrlVal = {"IssueArcID": ml['IssueArcID']}
newVal = {"Status": "Downloaded",
"Location": grab_dst}
logger.fdebug('writing: ' + str(newVal) + ' -- ' + str(ctrlVal))
myDB.upsert("readinglist", newVal, ctrlVal)
logger.fdebug(module + ' [' + ml['StoryArc'] + '] Post-Processing completed for: ' + grab_dst)
else:
nzbname = self.nzb_name
@@ -957,7 +975,7 @@ class PostProcessor(object):
#check if duplicate dump folder is enabled and if so move duplicate file in there for manual intervention.
#'dupe_file' - do not write new file as existing file is better quality
#'dupe_src' - write new file, as existing file is a lesser quality (dupe)
if mylar.DUPLICATE_DUMP:
if mylar.DDUMP and not any([mylar.DUPLICATE_DUMP is None, mylar.DUPLICATE_DUMP == '']): #DUPLICATE_DUMP
dupchkit = self.duplicate_process(dupthis)
if dupchkit == False:
logger.warn('Unable to move duplicate file - skipping post-processing of this file.')


@@ -55,6 +55,15 @@ PIDFILE= None
CREATEPID = False
SAFESTART = False
AUTO_UPDATE = False
NOWEEKLY = False
IMPORT_STATUS = None
IMPORT_FILES = 0
IMPORT_TOTALFILES = 0
IMPORT_CID_COUNT = 0
IMPORT_PARSED_COUNT = 0
IMPORT_FAILURE_COUNT = 0
CHECKENABLED = False
SCHED = Scheduler()
@@ -136,6 +145,7 @@ COMICVINE_API = None
DEFAULT_CVAPI = '583939a3df0a25fc4e8b7a29934a13078002dc27'
CVAPI_RATE = 2
CV_HEADERS = None
BLACKLISTED_PUBLISHERS = None
CHECK_GITHUB = False
CHECK_GITHUB_ON_STARTUP = False
@@ -365,6 +375,13 @@ FEEDINFO_32P = None
VERIFY_32P = 1
SNATCHEDTORRENT_NOTIFY = 0
RTORRENT_HOST = None
RTORRENT_USERNAME = None
RTORRENT_PASSWORD = None
RTORRENT_STARTONLOAD = 0
RTORRENT_LABEL = None
RTORRENT_DIRECTORY = None
def CheckSection(sec):
""" Check if INI section exists, if not create it """
try:
@@ -414,15 +431,16 @@ def check_setting_str(config, cfg_name, item_name, def_val, log=True):
def initialize():
with INIT_LOCK:
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, CV_HEADERS, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, AUTO_UPDATE, \
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, CV_HEADERS, BLACKLISTED_PUBLISHERS, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, NOWEEKLY, AUTO_UPDATE, \
IMPORT_STATUS, IMPORT_FILES, IMPORT_TOTALFILES, IMPORT_CID_COUNT, IMPORT_PARSED_COUNT, IMPORT_FAILURE_COUNT, CHECKENABLED, \
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, GIT_USER, GIT_BRANCH, USER_AGENT, DESTINATION_DIR, MULTIPLE_DEST_DIRS, CREATE_FOLDERS, DELETE_REMOVE_DIR, \
DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, DUPECONSTRAINT, DDUMP, DUPLICATE_DUMP, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
DOWNLOAD_SCAN_INTERVAL, FOLDER_SCAN_LOG_VERBOSE, IMPORTLOCK, NZB_DOWNLOADER, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_TO_MYLAR, SAB_DIRECTORY, USE_BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
USE_NZBGET, NZBGET_HOST, NZBGET_PORT, NZBGET_USERNAME, NZBGET_PASSWORD, NZBGET_CATEGORY, NZBGET_PRIORITY, NZBGET_DIRECTORY, NZBSU, NZBSU_UID, NZBSU_APIKEY, NZBSU_VERIFY, DOGNZB, DOGNZB_APIKEY, DOGNZB_VERIFY, \
NEWZNAB, NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_VERIFY, NEWZNAB_UID, NEWZNAB_ENABLED, EXTRA_NEWZNABS, NEWZNAB_EXTRA, \
ENABLE_TORZNAB, TORZNAB_NAME, TORZNAB_HOST, TORZNAB_APIKEY, TORZNAB_CATEGORY, TORZNAB_VERIFY, \
EXPERIMENTAL, ALTEXPERIMENTAL, \
EXPERIMENTAL, ALTEXPERIMENTAL, RTORRENT_HOST, RTORRENT_USERNAME, RTORRENT_PASSWORD, RTORRENT_STARTONLOAD, RTORRENT_LABEL, RTORRENT_DIRECTORY, \
ENABLE_META, CMTAGGER_PATH, CBR2CBZ_ONLY, CT_TAG_CR, CT_TAG_CBL, CT_CBZ_OVERWRITE, UNRAR_CMD, CT_SETTINGSPATH, UPDATE_ENDED, INDIE_PUB, BIGGIE_PUB, IGNORE_HAVETOTAL, SNATCHED_HAVETOTAL, PROVIDER_ORDER, \
dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, VersionScheduler, FolderMonitorScheduler, \
ENABLE_TORRENTS, MINSEEDS, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
@@ -626,6 +644,12 @@ def initialize():
INDIE_PUB = check_setting_str(CFG, 'General', 'indie_pub', '75')
BIGGIE_PUB = check_setting_str(CFG, 'General', 'biggie_pub', '55')
flattened_blacklisted_pub = check_setting_str(CFG, 'General', 'blacklisted_publishers', [], log=False)
if len(flattened_blacklisted_pub) == 0:
BLACKLISTED_PUBLISHERS = None
else:
BLACKLISTED_PUBLISHERS = list(itertools.izip(*[itertools.islice(flattened_blacklisted_pub, i, None, 1) for i in range(1)]))
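#nb: the islice/izip round-trip above wraps each configured publisher in a 1-tuple,
#e.g. (hypothetical values) ['Dynamite', 'Boom'] -> [('Dynamite',), ('Boom',)];
#config_write() below flattens the tuples back into a plain list for the ini file.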
ENABLE_RSS = bool(check_setting_int(CFG, 'General', 'enable_rss', 1))
RSS_CHECKINTERVAL = check_setting_str(CFG, 'General', 'rss_checkinterval', '20')
RSS_LASTRUN = check_setting_str(CFG, 'General', 'rss_lastrun', '')
@@ -675,6 +699,13 @@ def initialize():
PASSWORD_32P = check_setting_str(CFG, 'Torrents', 'password_32p', '')
VERIFY_32P = bool(check_setting_int(CFG, 'Torrents', 'verify_32p', 1))
SNATCHEDTORRENT_NOTIFY = bool(check_setting_int(CFG, 'Torrents', 'snatchedtorrent_notify', 0))
RTORRENT_HOST = check_setting_str(CFG, 'Torrents', 'rtorrent_host', '')
RTORRENT_USERNAME = check_setting_str(CFG, 'Torrents', 'rtorrent_username', '')
RTORRENT_PASSWORD = check_setting_str(CFG, 'Torrents', 'rtorrent_password', '')
RTORRENT_STARTONLOAD = bool(check_setting_int(CFG, 'Torrents', 'rtorrent_startonload', 0))
RTORRENT_LABEL = check_setting_str(CFG, 'Torrents', 'rtorrent_label', '')
RTORRENT_DIRECTORY = check_setting_str(CFG, 'Torrents', 'rtorrent_directory', '')
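#nb: these keys live under [Torrents] in config.ini (config.ini only for now, per the
#changelog); a minimal hand-edited sketch, with placeholder values, would be:
#  [Torrents]
#  rtorrent_host = seedbox.example.com
#  rtorrent_username = myuser
#  rtorrent_password = mypass
#  rtorrent_startonload = 0
#  rtorrent_label = mylar
#  rtorrent_directory = /downloads/comics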
#this needs to have it's own category - for now General will do.
NZB_DOWNLOADER = check_setting_int(CFG, 'General', 'nzb_downloader', 0)
@@ -1251,6 +1282,16 @@ def config_write():
new_config['General']['nzb_startup_search'] = int(NZB_STARTUP_SEARCH)
new_config['General']['add_comics'] = int(ADD_COMICS)
new_config['General']['comic_dir'] = COMIC_DIR
if BLACKLISTED_PUBLISHERS is None:
flattened_blacklisted_pub = None
else:
flattened_blacklisted_pub = []
for bpub in BLACKLISTED_PUBLISHERS:
#for key, value in pro.items():
for item in bpub:
flattened_blacklisted_pub.append(item)
#flattened_providers.append(str(value))
new_config['General']['blacklisted_publishers'] = flattened_blacklisted_pub
new_config['General']['imp_move'] = int(IMP_MOVE)
new_config['General']['imp_rename'] = int(IMP_RENAME)
new_config['General']['imp_metadata'] = int(IMP_METADATA)
@@ -1366,6 +1407,13 @@ def config_write():
new_config['Torrents']['password_32p'] = PASSWORD_32P
new_config['Torrents']['verify_32p'] = int(VERIFY_32P)
new_config['Torrents']['snatchedtorrent_notify'] = int(SNATCHEDTORRENT_NOTIFY)
new_config['Torrents']['rtorrent_host'] = RTORRENT_HOST
new_config['Torrents']['rtorrent_username'] = RTORRENT_USERNAME
new_config['Torrents']['rtorrent_password'] = RTORRENT_PASSWORD
new_config['Torrents']['rtorrent_startonload'] = int(RTORRENT_STARTONLOAD)
new_config['Torrents']['rtorrent_label'] = RTORRENT_LABEL
new_config['Torrents']['rtorrent_directory'] = RTORRENT_DIRECTORY
new_config['SABnzbd'] = {}
#new_config['SABnzbd']['use_sabnzbd'] = int(USE_SABNZBD)
new_config['SABnzbd']['sab_host'] = SAB_HOST
@@ -1492,7 +1540,8 @@ def start():
#threading.Thread(target=weeklypull.pullit).start()
#now the scheduler (check every 24 hours)
#SCHED.add_interval_job(weeklypull.pullit, hours=24)
WeeklyScheduler.thread.start()
if not NOWEEKLY:
WeeklyScheduler.thread.start()
#let's do a run at the Wanted issues here (on startup) if enabled.
#if NZB_STARTUP_SEARCH:
@@ -1525,14 +1574,14 @@ def dbcheck():
c_error = 'sqlite3.OperationalError'
c=conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT, DynamicComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS issues (IssueID TEXT, ComicName TEXT, IssueName TEXT, Issue_Number TEXT, DateAdded TEXT, Status TEXT, Type TEXT, ComicID TEXT, ArtworkURL Text, ReleaseDate TEXT, Location TEXT, IssueDate TEXT, Int_IssueNumber INT, ComicSize TEXT, AltIssueNumber TEXT, IssueDate_Edit TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS snatched (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Size INTEGER, DateAdded TEXT, Status TEXT, FolderName TEXT, ComicID TEXT, Provider TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS upcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Status TEXT, DisplayComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE TEXT, PUBLISHER TEXT, ISSUE TEXT, COMIC VARCHAR(150), EXTRA TEXT, STATUS TEXT, ComicID TEXT, IssueID TEXT)')
# c.execute('CREATE TABLE IF NOT EXISTS sablog (nzo_id TEXT, ComicName TEXT, ComicYEAR TEXT, ComicIssue TEXT, name TEXT, nzo_complete TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT, IssueNumber TEXT, DynamicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT, StatusChange TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readinglist(StoryArcID TEXT, ComicName TEXT, IssueNumber TEXT, SeriesYear TEXT, IssueYEAR TEXT, StoryArc TEXT, TotalIssues TEXT, Status TEXT, inCacheDir TEXT, Location TEXT, IssueArcID TEXT, ReadingOrder INT, IssueID TEXT, ComicID TEXT, StoreDate TEXT, IssueDate TEXT, Publisher TEXT, IssuePublisher TEXT, IssueName TEXT, CV_ArcID TEXT, Int_IssueNumber INT)')
c.execute('CREATE TABLE IF NOT EXISTS annuals (IssueID TEXT, Issue_Number TEXT, IssueName TEXT, IssueDate TEXT, Status TEXT, ComicID TEXT, GCDComicID TEXT, Location TEXT, ComicSize TEXT, Int_IssueNumber INT, ComicName TEXT, ReleaseDate TEXT, ReleaseComicID TEXT, ReleaseComicName TEXT, IssueDate_Edit TEXT)')
@@ -1630,6 +1679,13 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN NewPublish TEXT')
try:
c.execute('SELECT DynamicComicName from comics')
dynamic_upgrade = False
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN DynamicComicName TEXT')
dynamic_upgrade = True
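#nb: dynamic_upgrade is consumed at the tail end of dbcheck() - when the column was just
#added, helpers.upgrade_dynamic() is invoked to backfill DynamicComicName for existing rows.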
# -- Issues Table --
try:
@@ -1709,6 +1765,17 @@ def dbcheck():
c.execute('SELECT Volume from importresults')
except sqlite3.OperationalError:
c.execute('ALTER TABLE importresults ADD COLUMN Volume TEXT')
try:
c.execute('SELECT IssueNumber from importresults')
except sqlite3.OperationalError:
c.execute('ALTER TABLE importresults ADD COLUMN IssueNumber TEXT')
try:
c.execute('SELECT DynamicName from importresults')
except sqlite3.OperationalError:
c.execute('ALTER TABLE importresults ADD COLUMN DynamicName TEXT')
## -- Readlist Table --
try:
@@ -1946,6 +2013,10 @@ def dbcheck():
conn.commit()
c.close()
if dynamic_upgrade:
logger.info('Updating db to include some important changes.')
helpers.upgrade_dynamic()
def csv_load():
# for redundant module calls, include this.
conn = sqlite3.connect(DB_FILE)
@@ -2002,7 +2073,7 @@ def csv_load():
c.close()
def halt():
global __INITIALIZED__, dbUpdateScheduler, seachScheduler, RSSScheduler, WeeklyScheduler, \
global __INITIALIZED__, dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, \
VersionScheduler, FolderMonitorScheduler, started
with INIT_LOCK:


@@ -8,7 +8,7 @@ from mylar import logger
class info32p(object):
def __init__(self, reauthenticate=False, searchterm=None):
def __init__(self, reauthenticate=False, searchterm=None, test=False):
self.module = '[32P-AUTHENTICATION]'
self.url = 'https://32pag.es/login.php'
@@ -19,6 +19,7 @@ class info32p(object):
'User-Agent': 'Mozilla/5.0'}
self.reauthenticate = reauthenticate
self.searchterm = searchterm
self.test = test
def authenticate(self):
@@ -43,11 +44,56 @@ class info32p(object):
s.headers = self.headers
try:
s.get(self.url, verify=verify, timeout=30)
t = s.get(self.url, verify=verify, timeout=30)
except (requests.exceptions.SSLError, requests.exceptions.Timeout) as e:
logger.error(self.module + ' Unable to establish connection to 32P: ' + str(e))
return
chksoup = BeautifulSoup(t.content)
chksoup.prettify()
chk_login = chksoup.find_all("form", {"id":"loginform"})
if not chk_login:
logger.warn(self.module + ' Something is wrong - either 32p is offline, or your account may have been temporarily banned.')
logger.warn(self.module + ' Disabling provider until this gets addressed by manual intervention.')
return "disable"
for ck in chk_login:
#<div><div id='recaptchadiv'></div><input type='hidden' id='recaptchainp' value='' name='recaptchainp' /></div>
captcha = ck.find("div", {"id":"recaptchadiv"})
capt_error = ck.find("span", {"class":"notice hidden","id":"formnotice"})
error_msg = ck.find("span", {"id":"formerror"})
if error_msg:
loginerror = " ".join(list(error_msg.stripped_strings))
logger.warn(self.module + ' Warning: ' + loginerror)
if capt_error:
aleft = ck.find("span", {"class":"info"})
attemptsleft = " ".join(list(aleft.stripped_strings))
if int(attemptsleft) < 6:
logger.warn(self.module + ' ' + str(attemptsleft) + ' sign-on attempts left.')
if captcha:
logger.warn(self.module + ' Captcha detected. Temporarily disabling 32p (to re-enable, answer the captcha manually in a normal browser or wait ~10 minutes).')
return "disable"
else:
logger.fdebug(self.module + ' Captcha currently not present - continuing to signon...')
if self.test:
rtnmsg = ''
if (not capt_error and not error_msg) or (capt_error and int(attemptsleft) == 6):
rtnmsg += '[No Warnings/Errors]'
else:
if capt_error and int(attemptsleft) < 6:
rtnmsg = '[' + str(attemptsleft) + ' sign-on attempts left]'
if error_msg:
rtnmsg += '[' + loginerror + ']'
if not captcha:
rtnmsg += '[No Captcha]'
else:
rtnmsg += '[Captcha Present!]'
return rtnmsg
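#nb: so with test=True the method short-circuits before the POST and hands back a status
#string for the new Test Connection button, e.g. '[No Warnings/Errors][No Captcha]' on a
#clean login page, or '[3 sign-on attempts left][Captcha Present!]' (illustrative values).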
# post to the login form
r = s.post(self.url, data=self.payload, verify=verify)
@@ -57,15 +103,23 @@ class info32p(object):
soup.prettify()
#check for invalid username/password and if it's invalid - disable provider so we don't autoban (manual intervention is required after).
chk_login = soup.find_all("form", {"id":"loginform"})
for ck in chk_login:
captcha = ck.find("div", {"id":"recaptchadiv"})
errorlog = ck.find("span", {"id":"formerror"})
errornot = ck.find("span", {"class":"notice hidden","id":"formnotice"})
loginerror = " ".join(list(errorlog.stripped_strings)) #login_error.findNext(text=True)
errornot = ck.find("span", {"class":"notice"})
noticeerror = " ".join(list(errornot.stripped_strings)) #notice_error.findNext(text=True)
logger.error(self.module + ' Error: ' + loginerror)
if noticeerror:
logger.error(self.module + ' Warning: ' + noticeerror)
logger.error(self.module + ' Disabling 32P provider until username/password can be corrected / verified.')
if captcha:
logger.warn(self.module + ' Captcha detected. Temporarily disabling 32p (to re-enable, answer the captcha manually in a normal browser or wait ~10 minutes).')
if errorlog:
logger.error(self.module + ' Error: ' + loginerror)
if errornot:
aleft = ck.find("span", {"class":"info"})
attemptsleft = " ".join(list(aleft.stripped_strings))
if int(attemptsleft) < 6:
logger.warn(self.module + ' ' + str(attemptsleft) + ' sign-on attempts left.')
logger.error(self.module + ' Disabling 32P provider until errors can be fixed in order to avoid temporary bans.')
return "disable"


@@ -32,52 +32,6 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
# Force mylar to use cmtagger_path = mylar.PROG_DIR to force the use of the included lib.
if platform.system() == "Windows":
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
unrar_cmd = "C:\Program Files\WinRAR\UnRAR.exe"
else:
unrar_cmd = mylar.UNRAR_CMD.strip()
# test for UnRAR
if not os.path.isfile(unrar_cmd):
unrar_cmd = "C:\Program Files (x86)\WinRAR\UnRAR.exe"
if not os.path.isfile(unrar_cmd):
logger.fdebug(module + ' Unable to locate UnRAR.exe - make sure it is installed.')
logger.fdebug(module + ' Aborting meta-tagging.')
return "fail"
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
elif platform.system() == "Darwin":
#Mac OS X
sys_type = 'mac'
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
unrar_cmd = "/usr/local/bin/unrar"
else:
unrar_cmd = mylar.UNRAR_CMD.strip()
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
else:
#for the 'nix
sys_type = 'linux'
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
if 'freebsd' in platform.linux_distribution()[0].lower():
unrar_cmd = "/usr/local/bin/unrar"
else:
unrar_cmd = "/usr/bin/unrar"
else:
unrar_cmd = mylar.UNRAR_CMD.strip()
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
if not os.path.exists(unrar_cmd):
logger.fdebug(module + ' WARNING: cannot find the unrar command.')
logger.fdebug(module + ' File conversion and extension fixing not available')
logger.fdebug(module + ' You probably need to edit this script, or install the missing tool, or both!')
return "fail"
logger.fdebug(module + ' Filename is : ' + str(filename))
filepath = filename
@@ -107,14 +61,12 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
downloadpath = os.path.abspath(dirName)
sabnzbdscriptpath = os.path.dirname(sys.argv[0])
comicpath = new_folder
unrar_folder = os.path.join(comicpath, "unrard")
logger.fdebug(module + ' Paths / Locations:')
logger.fdebug(module + ' scriptname : ' + scriptname)
logger.fdebug(module + ' downloadpath : ' + downloadpath)
logger.fdebug(module + ' sabnzbdscriptpath : ' + sabnzbdscriptpath)
logger.fdebug(module + ' comicpath : ' + comicpath)
logger.fdebug(module + ' unrar_folder : ' + unrar_folder)
logger.fdebug(module + ' Running the ComicTagger Add-on for Mylar')
@@ -139,7 +91,7 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
logger.warn(module + '[WARNING] Make sure that you are using the comictagger included with Mylar.')
return "fail"
ctend = ctversion.find('\]')
ctend = ctversion.find('\n')
ctcheck = re.sub("[^0-9]", "", ctversion[:ctend])
ctcheck = re.sub('\.', '', ctcheck).strip()
if int(ctcheck) >= int('1115'): # (v1.1.15)
@@ -223,8 +175,8 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
try:
p = subprocess.Popen(script_cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate()
logger.info(out)
logger.info(err)
#logger.info(out)
#logger.info(err)
if initial_ctrun and 'exported successfully' in out:
logger.fdebug(module + '[COMIC-TAGGER] : ' +str(out))
#Archive exported successfully to: X-Men v4 008 (2014) (Digital) (Nahga-Empire).cbz (Original deleted)


@@ -94,13 +94,8 @@ def pulldetails(comicid, type, issueid=None, offset=1, arclist=None, comicidlist
logger.warn('Error fetching data from ComicVine: %s' % (e))
return
#file = urllib2.urlopen(PULLURL)
#convert to string:
#data = file.read()
#close file because we dont need it anymore:
#file.close()
#parse the xml you downloaded
dom = parseString(r.content) #(data)
logger.fdebug('cv status code : ' + str(r.status_code))
dom = parseString(r.content)
return dom
@@ -563,7 +558,6 @@ def GetSeriesYears(dom):
return serieslist
def GetImportList(results):
logger.info('booyah')
importlist = results.getElementsByTagName('issue')
serieslist = []
importids = {}
@@ -596,11 +590,17 @@ def GetImportList(results):
except:
tempseries['ComicName'] = 'None'
try:
tempseries['Issue_Number'] = implist.getElementsByTagName('issue_number')[0].firstChild.wholeText
except:
logger.fdebug('No Issue Number available - Trade Paperbacks, Graphic Novels and Compendiums are not supported as of yet.')
logger.info('tempseries:' + str(tempseries))
serieslist.append({"ComicID": tempseries['ComicID'],
"IssueID": tempseries['IssueID'],
"ComicName": tempseries['ComicName'],
"Issue_Name": tempseries['Issue_Name']})
serieslist.append({"ComicID": tempseries['ComicID'],
"IssueID": tempseries['IssueID'],
"ComicName": tempseries['ComicName'],
"Issue_Name": tempseries['Issue_Name'],
"Issue_Number": tempseries['Issue_Number']})
return serieslist

File diff suppressed because it is too large.


@@ -909,7 +909,20 @@ def issuedigits(issnum):
if int_issnum is not None:
return int_issnum
elif u'\xbd' in issnum:
#try:
# issnum.decode('ascii')
# logger.fdebug('ascii character.')
#except:
# logger.fdebug('Unicode character detected: ' + issnum)
#else: issnum.decode(mylar.SYS_ENCODING).decode('utf-8')
if type(issnum) == str:
try:
issnum = issnum.decode('utf-8')
except:
issnum = issnum.decode('windows-1252')
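#nb: py2-style normalization - plain bytestrings get decoded (utf-8 first, windows-1252
#as a fallback) so the unicode-fraction membership tests below can't raise on raw bytes.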
if u'\xbd' in issnum:
int_issnum = .5 * 1000
elif u'\xbc' in issnum:
int_issnum = .25 * 1000
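#nb: issuedigits() keeps issue numbers as ints scaled by 1000, so the vulgar fractions
#map as u'\xbd' (1/2) -> 500 and u'\xbc' (1/4) -> 250, comparable with decimal issues.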
@@ -1120,6 +1133,26 @@ def latestdate_fix():
return
def upgrade_dynamic():
import db, logger
dynamic_list = []
myDB = db.DBConnection()
clist = myDB.select('SELECT * FROM Comics')
for cl in clist:
cl_d = mylar.filechecker.FileChecker(watchcomic=cl['ComicName'])
cl_dyninfo = cl_d.dynamic_replace(cl['ComicName'])
dynamic_list.append({'DynamicComicName': cl_dyninfo['mod_seriesname'],
'ComicID': cl['ComicID']})
if len(dynamic_list) > 0:
for dl in dynamic_list:
CtrlVal = {"ComicID": dl['ComicID']}
newVal = {"DynamicComicName": dl['DynamicComicName']}
myDB.upsert("Comics", newVal, CtrlVal)
logger.info('Finished updating ' + str(len(dynamic_list)) + ' entries within the db.')
return
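#nb: net effect - every series already in the db gets its new DynamicComicName backfilled
#with the same dynamic_replace() normalization the rewritten filechecker uses, so existing
#libraries keep matching under the new comparison rules.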
def checkFolder():
from mylar import PostProcessor, logger
import Queue
@@ -1298,39 +1331,58 @@ def IssueDetails(filelocation, IssueID=None):
issuetag = None
pic_extensions = ('.jpg','.png','.webp')
modtime = os.path.getmtime(dstlocation)
low_infile = 999999
with zipfile.ZipFile(dstlocation, 'r') as inzipfile:
for infile in inzipfile.namelist():
if infile == 'ComicInfo.xml':
logger.fdebug('Extracting ComicInfo.xml to display.')
dst = os.path.join(mylar.CACHE_DIR, 'ComicInfo.xml')
data = inzipfile.read(infile)
#print str(data)
issuetag = 'xml'
#looks for the first page and assumes it's the cover. (Alternate covers handled later on)
elif any(['000.' in infile, '00.' in infile]) and infile.endswith(pic_extensions) and cover == "notfound":
logger.fdebug('Extracting primary image ' + infile + ' as coverfile for display.')
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
local_file.close()
cover = "found"
elif any(['00a' in infile, '00b' in infile, '00c' in infile, '00d' in infile, '00e' in infile]) and infile.endswith(pic_extensions) and cover == "notfound":
logger.fdebug('Found Alternate cover - ' + infile + ' . Extracting.')
altlist = ('00a', '00b', '00c', '00d', '00e')
for alt in altlist:
if alt in infile:
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
local_file.close()
cover = "found"
break
try:
with zipfile.ZipFile(dstlocation, 'r') as inzipfile:
for infile in inzipfile.namelist():
tmp_infile = re.sub("[^0-9]","", infile).strip()
if tmp_infile == '':
pass
elif int(tmp_infile) < int(low_infile):
low_infile = tmp_infile
low_infile_name = infile
if infile == 'ComicInfo.xml':
logger.fdebug('Extracting ComicInfo.xml to display.')
dst = os.path.join(mylar.CACHE_DIR, 'ComicInfo.xml')
data = inzipfile.read(infile)
#print str(data)
issuetag = 'xml'
#looks for the first page and assumes it's the cover. (Alternate covers handled later on)
elif any(['000.' in infile, '00.' in infile]) and infile.endswith(pic_extensions) and cover == "notfound":
logger.fdebug('Extracting primary image ' + infile + ' as coverfile for display.')
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
local_file.close()
cover = "found"
elif any(['00a' in infile, '00b' in infile, '00c' in infile, '00d' in infile, '00e' in infile]) and infile.endswith(pic_extensions) and cover == "notfound":
logger.fdebug('Found Alternate cover - ' + infile + ' . Extracting.')
altlist = ('00a', '00b', '00c', '00d', '00e')
for alt in altlist:
if alt in infile:
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
local_file.close()
cover = "found"
break
elif any(['001.jpg' in infile, '001.png' in infile, '001.webp' in infile, '01.jpg' in infile, '01.png' in infile, '01.webp' in infile]) and cover == "notfound":
logger.fdebug('Extracting primary image ' + infile + ' as coverfile for display.')
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
local_file.close()
cover = "found"
if cover != "found":
logger.fdebug('Invalid naming sequence for jpgs discovered. Attempting to find the lowest sequence and will use as cover (it might not work). Currently : ' + str(low_infile))
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(low_infile_name))
local_file.close()
cover = "found"
except:
logger.info('ERROR. Unable to properly retrieve the cover for displaying. It\'s probably best to re-tag this file.')
return
ComicImage = os.path.join('cache', 'temp.jpg?' +str(modtime))
IssueImage = replacetheslash(ComicImage)
@@ -1438,7 +1490,7 @@ def IssueDetails(filelocation, IssueID=None):
pagecount = result.getElementsByTagName('PageCount')[0].firstChild.wholeText
except:
pagecount = 0
logger.fdebug("number of pages I counted: " + str(pagecount))
i = 0
try:
@@ -1451,14 +1503,15 @@ def IssueDetails(filelocation, IssueID=None):
while (i < int(pagecount)):
pageinfo = result.getElementsByTagName('Page')[i].attributes
attrib = pageinfo.getNamedItem('Image')
logger.fdebug('Frontcover validated as being image #: ' + str(attrib.value))
#logger.fdebug('Frontcover validated as being image #: ' + str(attrib.value))
att = pageinfo.getNamedItem('Type')
logger.fdebug('pageinfo: ' + str(pageinfo))
if att.value == 'FrontCover':
logger.fdebug('FrontCover detected. Extracting.')
#logger.fdebug('FrontCover detected. Extracting.')
break
i+=1
elif issuetag == 'comment':
logger.info('CBL Tagging.')
stripline = 'Archive: ' + dstlocation
data = re.sub(stripline, '', data.encode("utf-8")).strip()
if data is None or data == '':
@@ -1468,17 +1521,39 @@ def IssueDetails(filelocation, IssueID=None):
lastmodified = ast_data['lastModified']
dt = ast_data['ComicBookInfo/1.0']
publisher = dt['publisher']
year = dt['publicationYear']
month = dt['publicationMonth']
try:
publisher = dt['publisher']
except:
publisher = None
try:
year = dt['publicationYear']
except:
year = None
try:
month = dt['publicationMonth']
except:
month = None
try:
day = dt['publicationDay']
except:
day = None
issue_title = dt['title']
series_title = dt['series']
issue_number = dt['issue']
summary = dt['comments']
try:
issue_title = dt['title']
except:
issue_title = None
try:
series_title = dt['series']
except:
series_title = None
try:
issue_number = dt['issue']
except:
issue_number = None
try:
summary = dt['comments']
except:
summary = "None"
editor = "None"
colorist = "None"
artist = "None"
@@ -1487,35 +1562,50 @@ def IssueDetails(filelocation, IssueID=None):
cover_artist = "None"
penciller = "None"
inker = "None"
try:
series_volume = dt['volume']
except:
series_volume = None
for cl in dt['credits']:
if cl['role'] == 'Editor':
if editor == "None": editor = cl['person']
else: editor += ', ' + cl['person']
elif cl['role'] == 'Colorist':
if colorist == "None": colorist = cl['person']
else: colorist += ', ' + cl['person']
elif cl['role'] == 'Artist':
if artist == "None": artist = cl['person']
else: artist += ', ' + cl['person']
elif cl['role'] == 'Writer':
if writer == "None": writer = cl['person']
else: writer += ', ' + cl['person']
elif cl['role'] == 'Letterer':
if letterer == "None": letterer = cl['person']
else: letterer += ', ' + cl['person']
elif cl['role'] == 'Cover':
if cover_artist == "None": cover_artist = cl['person']
else: cover_artist += ', ' + cl['person']
elif cl['role'] == 'Penciller':
if penciller == "None": penciller = cl['person']
else: penciller += ', ' + cl['person']
elif cl['role'] == 'Inker':
if inker == "None": inker = cl['person']
else: inker += ', ' + cl['person']
try:
t = dt['credits']
except:
editor = None
colorist = None
artist = None
writer = None
letterer = None
cover_artist = None
penciller = None
inker = None
else:
for cl in dt['credits']:
if cl['role'] == 'Editor':
if editor == "None": editor = cl['person']
else: editor += ', ' + cl['person']
elif cl['role'] == 'Colorist':
if colorist == "None": colorist = cl['person']
else: colorist += ', ' + cl['person']
elif cl['role'] == 'Artist':
if artist == "None": artist = cl['person']
else: artist += ', ' + cl['person']
elif cl['role'] == 'Writer':
if writer == "None": writer = cl['person']
else: writer += ', ' + cl['person']
elif cl['role'] == 'Letterer':
if letterer == "None": letterer = cl['person']
else: letterer += ', ' + cl['person']
elif cl['role'] == 'Cover':
if cover_artist == "None": cover_artist = cl['person']
else: cover_artist += ', ' + cl['person']
elif cl['role'] == 'Penciller':
if penciller == "None": penciller = cl['person']
else: penciller += ', ' + cl['person']
elif cl['role'] == 'Inker':
if inker == "None": inker = cl['person']
else: inker += ', ' + cl['person']
try:
notes = dt['notes']
@@ -1680,33 +1770,52 @@ def duplicate_filecheck(filename, ComicID=None, IssueID=None, StoryArcID=None):
logger.info('[DUPECHECK] Assuming 0-byte file - this one is gonna get hammered.')
logger.fdebug('[DUPECHECK] Based on duplication preferences I will retain based on : ' + mylar.DUPECONSTRAINT)
if 'cbr' in mylar.DUPECONSTRAINT or 'cbz' in mylar.DUPECONSTRAINT:
tmp_dupeconstraint = mylar.DUPECONSTRAINT
if any(['cbr' in mylar.DUPECONSTRAINT, 'cbz' in mylar.DUPECONSTRAINT]):
if 'cbr' in mylar.DUPECONSTRAINT:
#this has to be configured in config - either retain cbr or cbz.
if dupchk['Location'].endswith('.cbz'):
#keep dupechk['Location']
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in file : ' + dupchk['Location'])
rtnval.append({'action': "dupe_file",
'to_dupe': filename})
if filename.endswith('.cbr'):
#this has to be configured in config - either retain cbr or cbz.
if dupchk['Location'].endswith('.cbr'):
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbr format. Retaining the larger filesize of the two.')
tmp_dupeconstraint = 'filesize'
else:
#keep filename
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + filename)
rtnval.append({'action': "dupe_src",
'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})
else:
#keep filename
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + filename)
rtnval.append({'action': "dupe_src",
'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})
if dupchk['Location'].endswith('.cbz'):
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbz format. Retaining the larger filesize of the two.')
tmp_dupeconstraint = 'filesize'
else:
#keep dupchk['Location']
logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in file : ' + dupchk['Location'])
rtnval.append({'action': "dupe_file",
'to_dupe': filename})
elif 'cbz' in mylar.DUPECONSTRAINT:
if dupchk['Location'].endswith('.cbr'):
#keep dupchk['Location']
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval.append({'action': "dupe_file",
'to_dupe': filename})
if filename.endswith('.cbr'):
if dupchk['Location'].endswith('.cbr'):
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbr format. Retaining the larger filesize of the two.')
tmp_dupeconstraint = 'filesize'
else:
#keep dupchk['Location']
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval.append({'action': "dupe_file",
'to_dupe': filename})
else:
#keep filename
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
rtnval.append({'action': "dupe_src",
'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})
if dupchk['Location'].endswith('.cbz'):
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbz format. Retaining the larger filesize of the two.')
tmp_dupeconstraint = 'filesize'
else:
#keep filename
logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
rtnval.append({'action': "dupe_src",
'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})
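#nb: net effect of the priority rework - the preferred extension wins outright, and only
#when both files share the same extension does tmp_dupeconstraint fall through to the
#filesize comparison below.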
if mylar.DUPECONSTRAINT == 'filesize':
if mylar.DUPECONSTRAINT == 'filesize' or tmp_dupeconstraint == 'filesize':
if filesz <= int(dupsize) and int(dupsize) != 0:
logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
rtnval.append({'action': "dupe_file",


@@ -447,14 +447,16 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
statinfo = os.stat(coverfile)
coversize = statinfo.st_size
if int(coversize) < 35000 or str(r.status_code) != '200':
if int(coversize) < 30000 or str(r.status_code) != '200':
if str(r.status_code) != '200':
logger.info('Trying to grab an alternate cover due to problems trying to retrieve the main cover image.')
else:
logger.info('Image size invalid [' + str(coversize) + ' bytes] - trying to get alternate cover image.')
logger.fdebug('invalid image link is here: ' + comic['ComicImage'])
os.remove(coverfile)
if os.path.exists(coverfile):
os.remove(coverfile)
logger.info('Attempting to retrieve alternate comic image for the series.')
try:
r = requests.get(comic['ComicImageALT'], params=None, stream=True, headers=mylar.CV_HEADERS)
@@ -673,7 +675,11 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
logger.info('Returning to Future-Check module to complete the add & remove entry.')
return
if imported == 'yes':
if calledfrom == 'addbyid':
logger.info('Successfully added ' + comic['ComicName'] + ' (' + str(SeriesYear) + ') by directly using the ComicVine ID')
return
if imported:
logger.info('Successfully imported : ' + comic['ComicName'])
#now that it's moved / renamed ... we remove it from importResults or mark as completed.
@@ -686,9 +692,6 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
"ComicID": comicid}
myDB.upsert("importresults", newValue, controlValue)
if calledfrom == 'addbyid':
logger.info('Successfully added ' + comic['ComicName'] + ' (' + str(SeriesYear) + ') by directly using the ComicVine ID')
return
def GCDimport(gcomicid, pullupd=None, imported=None, ogcname=None):
# this is for importing via GCD only and not using CV.


@@ -19,12 +19,13 @@ import os
import glob
import re
import shutil
import random
import mylar
from mylar import db, logger, helpers, importer, updater
from mylar import db, logger, helpers, importer, updater, filechecker
# You can scan a single directory and append it to the current library by specifying append=True
def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None):
def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None, queue=None):
if cron and not mylar.LIBRARYSCAN:
return
@@ -47,12 +48,18 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
basedir = dir
comic_list = []
failure_list = []
comiccnt = 0
extensions = ('cbr','cbz')
cv_location = []
cbz_retry = 0
mylar.IMPORT_STATUS = 'Now attempting to parse files for additional information'
#mylar.IMPORT_PARSED_COUNT #used to count what #/totalfiles the filename parser is currently on
for r, d, f in os.walk(dir):
for files in f:
mylar.IMPORT_FILES +=1
if 'cvinfo' in files:
cv_location.append(r)
logger.fdebug('CVINFO found: ' + os.path.join(r))
@@ -60,20 +67,77 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
comic = files
comicpath = os.path.join(r, files)
comicsize = os.path.getsize(comicpath)
t = filechecker.FileChecker(dir=r, file=comic)
results = t.listFiles()
#logger.info(results)
#'type': re.sub('\.','', filetype).strip(),
#'sub': path_list,
#'volume': volume,
#'match_type': match_type,
#'comicfilename': filename,
#'comiclocation': clocation,
#'series_name': series_name,
#'series_volume': issue_volume,
#'series_year': issue_year,
#'justthedigits': issue_number,
#'annualcomicid': annual_comicid,
#'scangroup': scangroup}
logger.fdebug('Comic: ' + comic + ' [' + comicpath + '] - ' + str(comicsize) + ' bytes')
comiccnt+=1
if results:
resultline = '[PARSE-' + results['parse_status'].upper() + ']'
resultline += '[SERIES: ' + results['series_name'] + ']'
if results['series_volume'] is not None:
resultline += '[VOLUME: ' + results['series_volume'] + ']'
if results['issue_year'] is not None:
resultline += '[ISSUE YEAR: ' + str(results['issue_year']) + ']'
if results['issue_number'] is not None:
resultline += '[ISSUE #: ' + results['issue_number'] + ']'
logger.fdebug(resultline)
else:
logger.fdebug('[PARSED] FAILURE.')
continue
# We need the unicode path to use for logging, inserting into database
unicode_comic_path = comicpath.decode(mylar.SYS_ENCODING, 'replace')
comic_dict = {'ComicFilename': comic,
'ComicLocation': comicpath,
'ComicSize': comicsize,
'Unicode_ComicLocation': unicode_comic_path}
comic_list.append(comic_dict)
logger.info("I've found a total of " + str(comiccnt) + " comics....analyzing now")
#logger.info("comiclist: " + str(comic_list))
if results['parse_status'] == 'success':
comic_list.append({'ComicFilename': comic,
'ComicLocation': comicpath,
'ComicSize': comicsize,
'Unicode_ComicLocation': unicode_comic_path,
'parsedinfo': {'series_name': results['series_name'],
'series_volume': results['series_volume'],
'issue_year': results['issue_year'],
'issue_number': results['issue_number']}
})
comiccnt +=1
mylar.IMPORT_PARSED_COUNT +=1
else:
failure_list.append({'ComicFilename': comic,
'ComicLocation': comicpath,
'ComicSize': comicsize,
'Unicode_ComicLocation': unicode_comic_path,
'parsedinfo': {'series_name': results['series_name'],
'series_volume': results['series_volume'],
'issue_year': results['issue_year'],
'issue_number': results['issue_number']}
})
mylar.IMPORT_FAILURE_COUNT +=1
if comic.endswith('.cbz'):
cbz_retry +=1
mylar.IMPORT_TOTALFILES = comiccnt
logger.info('I have successfully discovered & parsed a total of ' + str(comiccnt) + ' files....analyzing now')
logger.info('I have not been able to determine what ' + str(len(failure_list)) + ' files are')
logger.info('However, ' + str(cbz_retry) + ' files are in a cbz format, which may contain metadata.')
mylar.IMPORT_STATUS = 'Successfully parsed ' + str(comiccnt) + ' files'
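#nb: IMPORT_FILES / IMPORT_PARSED_COUNT / IMPORT_FAILURE_COUNT are the new module-level
#counters behind the import status/counter feature; IMPORT_STATUS carries the
#human-readable progress line, presumably what the GUI surfaces.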
#return queue.put(valreturn)
myDB = db.DBConnection()
#let's load in the watchlist to see if we have any matches.
@@ -144,13 +208,17 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
issueid_list = []
cvscanned_loc = None
cvinfo_CID = None
cnt = 0
mylar.IMPORT_STATUS = '[0%] Now parsing individual filenames for metadata if available'
for i in comic_list:
mylar.IMPORT_STATUS = '[' + str(cnt) + '/' + str(comiccnt) + '] Now parsing individual filenames for metadata if available'
logger.fdebug('Analyzing : ' + i['ComicFilename'])
comfilename = i['ComicFilename']
comlocation = i['ComicLocation']
issueinfo = None
#probably need to zero these issue-related metadata to None so we can pick the best option
issuevolume = None
#Make sure cvinfo is checked for FIRST (so that CID can be attached to all files properly thereafter as they're scanned in)
if os.path.dirname(comlocation) in cv_location and os.path.dirname(comlocation) != cvscanned_loc:
@@ -181,376 +249,236 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
# continue
if mylar.IMP_METADATA:
logger.info('metatagging checking enabled.')
#if read tags is enabled during import, check here.
if i['ComicLocation'].endswith('.cbz'):
logger.info('Attempting to read tags present in filename: ' + i['ComicLocation'])
logger.fdebug('[IMPORT-CBZ] Metatagging checking enabled.')
logger.info('[IMPORT-CBZ] Attempting to read tags present in filename: ' + i['ComicLocation'])
issueinfo = helpers.IssueDetails(i['ComicLocation'])
logger.info('issueinfo: ' + str(issueinfo))
if issueinfo is None:
logger.fdebug('[IMPORT-CBZ] No valid metadata contained within filename. Dropping down to parsing the filename itself.')
pass
else:
issuenotes_id = None
logger.info('Successfully retrieved some tags. Lets see what I can figure out.')
logger.info('[IMPORT-CBZ] Successfully retrieved some tags. Lets see what I can figure out.')
comicname = issueinfo[0]['series']
logger.fdebug('Series Name: ' + comicname)
issue_number = issueinfo[0]['issue_number']
logger.fdebug('Issue Number: ' + str(issue_number))
issuetitle = issueinfo[0]['title']
logger.fdebug('Issue Title: ' + issuetitle)
issueyear = issueinfo[0]['year']
logger.fdebug('Issue Year: ' + str(issueyear))
try:
issuevolume = issueinfo[0]['volume']
except:
issuevolume = None
# if used by ComicTagger, Notes field will have the IssueID.
issuenotes = issueinfo[0]['notes']
logger.fdebug('Notes: ' + issuenotes)
if issuenotes is not None:
if 'Issue ID' in issuenotes:
st_find = issuenotes.find('Issue ID')
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
if tmp_issuenotes_id.isdigit():
issuenotes_id = tmp_issuenotes_id
logger.fdebug('Successfully retrieved CV IssueID for ' + comicname + ' #' + str(issue_number) + ' [' + str(issuenotes_id) + ']')
elif 'CVDB' in issuenotes:
st_find = issuenotes.find('CVDB')
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
if tmp_issuenotes_id.isdigit():
issuenotes_id = tmp_issuenotes_id
logger.fdebug('Successfully retrieved CV IssueID for ' + comicname + ' #' + str(issue_number) + ' [' + str(issuenotes_id) + ']')
else:
logger.fdebug('Unable to retrieve IssueID from meta-tagging. If there is other metadata present I will use that.')
logger.fdebug("adding " + comicname + " to the import-queue!")
impid = comicname + '-' + str(issueyear) + '-' + str(issue_number) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
logger.fdebug("impid: " + str(impid))
#make sure we only add in those issueid's which don't already have a comicid attached via the cvinfo scan above (this is for reverse-lookup of issueids)
if cvinfo_CID is None:
issueid_list.append(issuenotes_id)
if cvscanned_loc == os.path.dirname(comlocation):
cv_cid = cvinfo_CID
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
if comicname is not None:
logger.fdebug('[IMPORT-CBZ] Series Name: ' + comicname)
as_d = filechecker.FileChecker(watchcomic=comicname.decode('utf-8'))
as_dyninfo = as_d.dynamic_replace(comicname)
logger.fdebug('Dynamic-ComicName: ' + as_dyninfo['mod_seriesname'])
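#illustrative sketch of the assumed behaviour (not verified against filechecker internals):
#dynamic_replace() normalizes special characters so variant spellings compare equal,
#e.g. 'Spider-Man' and 'Spider Man' might both yield a mod_seriesname of 'spider|man'.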
else:
cv_cid = None
import_by_comicids.append({
"impid": impid,
"comicid": cv_cid,
"watchmatch": None,
"displayname": helpers.cleanName(comicname),
"comicname": comicname, #com_NAME,
"comicyear": issueyear,
"volume": issuevolume,
"issueid": issuenotes_id,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
logger.fdebug('[IMPORT-CBZ] No series name found within metadata. This is bunk - dropping down to file parsing for usable information.')
issueinfo = None
issue_number = None
if issueinfo is not None:
try:
issueyear = issueinfo[0]['year']
except:
issueyear = None
#if the issue number is a non-numeric unicode string, this will screw up along with impID
issue_number = issueinfo[0]['issue_number']
if issue_number is not None:
logger.fdebug('[IMPORT-CBZ] Issue Number: ' + issue_number)
else:
issue_number = i['parsed']['issue_number']
if 'annual' in comicname.lower() or 'annual' in comfilename.lower():
if issue_number is None or issue_number == 'None':
logger.info('Annual detected with no issue number present within metadata. Assuming year as issue.')
try:
issue_number = 'Annual ' + str(issueyear)
except:
issue_number = 'Annual ' + i['parsed']['issue_year']
else:
logger.info('Annual detected with issue number present within metadata.')
if 'annual' not in issue_number.lower():
issue_number = 'Annual ' + issue_number
mod_series = re.sub('annual', '', comicname, flags=re.I).strip()
else:
mod_series = comicname
logger.fdebug('issue number SHOULD Be: ' + issue_number)
try:
issuetitle = issueinfo[0]['title']
except:
issuetitle = None
try:
issueyear = issueinfo[0]['year']
except:
issueyear = None
try:
issuevolume = str(issueinfo[0]['volume'])
if all([issuevolume is not None, issuevolume != 'None']) and not issuevolume.lower().startswith('v'):
issuevolume = 'v' + str(issuevolume)
logger.fdebug('[TRY]issue volume is: ' + str(issuevolume))
except:
logger.fdebug('[EXCEPT]issue volume is: ' + str(issuevolume))
issuevolume = None
if any([comicname is None, comicname == 'None', issue_number is None, issue_number == 'None']):
logger.fdebug('[IMPORT-CBZ] Improperly tagged file as the metatagging is invalid. Ignoring meta and just parsing the filename.')
issueinfo = None
pass
else:
# if used by ComicTagger, Notes field will have the IssueID.
issuenotes = issueinfo[0]['notes']
logger.fdebug('[IMPORT-CBZ] Notes: ' + issuenotes)
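#hypothetical sample of a ComicTagger-written Notes value:
#  'Tagged with ComicTagger using info from Comic Vine [Issue ID 336009]'
#the re.sub("[^0-9]", " ", ...) calls below blank out everything except the digits,
#so only the CV IssueID ('336009') survives the strip().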
if issuenotes is not None and issuenotes != 'None':
if 'Issue ID' in issuenotes:
st_find = issuenotes.find('Issue ID')
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
if tmp_issuenotes_id.isdigit():
issuenotes_id = tmp_issuenotes_id
logger.fdebug('[IMPORT-CBZ] Successfully retrieved CV IssueID for ' + comicname + ' #' + issue_number + ' [' + str(issuenotes_id) + ']')
elif 'CVDB' in issuenotes:
st_find = issuenotes.find('CVDB')
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
if tmp_issuenotes_id.isdigit():
issuenotes_id = tmp_issuenotes_id
logger.fdebug('[IMPORT-CBZ] Successfully retrieved CV IssueID for ' + comicname + ' #' + issue_number + ' [' + str(issuenotes_id) + ']')
else:
logger.fdebug('[IMPORT-CBZ] Unable to retrieve IssueID from meta-tagging. If there is other metadata present I will use that.')
logger.fdebug('[IMPORT-CBZ] Adding ' + comicname + ' to the import-queue!')
#impid = comicname + '-' + str(issueyear) + '-' + str(issue_number) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
impid = str(random.randint(1000000,99999999))
logger.fdebug('[IMPORT-CBZ] impid: ' + str(impid))
#make sure we only add in those issueid's which don't already have a comicid attached via the cvinfo scan above (this is for reverse-lookup of issueids)
issuepopulated = False
if cvinfo_CID is None:
if issuenotes_id is None:
logger.info('[IMPORT-CBZ] No ComicID detected where it should be. Bypassing this metadata entry and going the parsing route [' + comfilename + ']')
else:
#we need to store the impid here as well so we can look it up.
issueid_list.append({'issueid': issuenotes_id,
'importinfo': {'impid': impid,
'comicid': None,
'comicname': comicname,
'dynamicname': as_dyninfo['mod_seriesname'],
'comicyear': issueyear,
'issuenumber': issue_number,
'volume': issuevolume,
'comfilename': comfilename,
'comlocation': comlocation.decode(mylar.SYS_ENCODING)}
})
mylar.IMPORT_CID_COUNT +=1
issuepopulated = True
if issuepopulated == False:
if cvscanned_loc == os.path.dirname(comlocation):
cv_cid = cvinfo_CID
logger.fdebug('[IMPORT-CBZ] CVINFO_COMICID attached : ' + str(cv_cid))
else:
cv_cid = None
import_by_comicids.append({
"impid": impid,
"comicid": cv_cid,
"watchmatch": None,
"displayname": mod_series,
"comicname": comicname,
"dynamicname": as_dyninfo['mod_seriesname'],
"comicyear": issueyear,
"issuenumber": issue_number,
"volume": issuevolume,
"issueid": issuenotes_id,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
})
else:
logger.info(i['ComicLocation'] + ' is not in a metatagged format (cbz). Bypassing reading of the metatags')
mylar.IMPORT_CID_COUNT +=1
else:
pass
#logger.fdebug(i['ComicFilename'] + ' is not in a metatagged format (cbz). Bypassing reading of the metatags')
if issueinfo is None:
#let's clean up the filename for matching purposes
cfilename = re.sub('[\_\#\,\/\:\;\-\!\$\%\&\+\'\?\@]', ' ', comfilename)
#cfilename = re.sub('\s', '_', str(cfilename))
d_filename = re.sub('[\_\#\,\/\;\!\$\%\&\?\@]', ' ', comfilename)
d_filename = re.sub('[\:\-\+\']', '#', d_filename)
#strip extraspaces
d_filename = re.sub('\s+', ' ', d_filename)
cfilename = re.sub('\s+', ' ', cfilename)
#versioning - remove it
subsplit = cfilename.replace('_', ' ').split()
volno = None
volyr = None
for subit in subsplit:
if subit[0].lower() == 'v':
vfull = 0
if subit[1:].isdigit():
#if in format v1, v2009 etc...
if len(subit) > 3:
# if it's greater than 3 in length, then the format is Vyyyy
vfull = 1 # add on 1 character length to account for extra space
cfilename = re.sub(subit, '', cfilename)
d_filename = re.sub(subit, '', d_filename)
volno = re.sub("[^0-9]", " ", subit)
elif subit.lower()[:3] == 'vol':
#if in format vol.2013 etc
#because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
logger.fdebug('volume indicator detected as version #:' + str(subit))
cfilename = re.sub(subit, '', cfilename)
cfilename = " ".join(cfilename.split())
d_filename = re.sub(subit, '', d_filename)
d_filename = " ".join(d_filename.split())
volyr = re.sub("[^0-9]", " ", subit).strip()
logger.fdebug('volume year set as : ' + str(volyr))
cm_cn = 0
#we need to track the counter to make sure we are comparing the right array parts
#this takes care of the brackets :)
m = re.findall('[^()]+', d_filename) #cfilename)
lenm = len(m)
logger.fdebug("there are " + str(lenm) + " words.")
cnt = 0
yearmatch = "false"
foundonwatch = "False"
issue = 999999
while (cnt < lenm):
if m[cnt] is None: break
if m[cnt] == ' ':
pass
else:
logger.fdebug(str(cnt) + ". Bracket Word: " + m[cnt])
if cnt == 0:
comic_andiss = m[cnt]
logger.fdebug("Comic: " + comic_andiss)
# if it's not in the standard format this will bork.
# let's try to accommodate (somehow).
# first remove the extension (if any)
extensions = ('cbr', 'cbz')
if comic_andiss.lower().endswith(extensions):
comic_andiss = comic_andiss[:-4]
logger.fdebug("removed extension from filename.")
#now we have to break up the string regardless of formatting.
#let's force the spaces.
comic_andiss = re.sub('_', ' ', comic_andiss)
cs = comic_andiss.split()
cs_len = len(cs)
cn = ''
ydetected = 'no'
idetected = 'no'
decimaldetect = 'no'
for i in reversed(xrange(len(cs))):
#start at the end.
logger.fdebug("word: " + str(cs[i]))
#assume once we find issue - everything prior is the actual title
#idetected = no will ignore everything so it will assume all title
if (cs[i][:-2] == '19' or cs[i][:-2] == '20') and idetected == 'no':
logger.fdebug("year detected: " + str(cs[i]))
ydetected = 'yes'
result_comyear = cs[i]
elif cs[i].isdigit() and idetected == 'no' or '.' in cs[i]:
if '.' in cs[i]:
#make sure it's a number on either side of decimal and assume decimal issue.
decst = cs[i].find('.')
dec_st = cs[i][:decst]
dec_en = cs[i][decst +1:]
logger.fdebug("st: " + str(dec_st))
logger.fdebug("en: " + str(dec_en))
if dec_st.isdigit() and dec_en.isdigit():
logger.fdebug("decimal issue detected...adjusting.")
issue = dec_st + "." + dec_en
logger.fdebug("issue detected: " + str(issue))
idetected = 'yes'
else:
logger.fdebug("false decimal represent. Chunking to extra word.")
cn = cn + cs[i] + " "
#break
else:
issue = cs[i]
logger.fdebug("issue detected : " + str(issue))
idetected = 'yes'
elif '\#' in cs[i] or decimaldetect == 'yes':
logger.fdebug("issue detected: " + str(cs[i]))
idetected = 'yes'
else: cn = cn + cs[i] + " "
if ydetected == 'no':
#assume no year given in filename...
result_comyear = "0000"
logger.fdebug("cm?: " + str(cn))
if issue != 999999:
comiss = issue
else:
logger.error("Invalid Issue number (none present) for " + comfilename)
break
cnsplit = cn.split()
cname = ''
findcn = 0
while (findcn < len(cnsplit)):
cname = cname + cs[findcn] + " "
findcn+=1
cname = cname[:len(cname)-1] # drop the end space...
logger.fdebug('assuming name is : ' + cname)
com_NAME = cname
logger.fdebug('com_NAME : ' + com_NAME)
yearmatch = "True"
if i['parsedinfo']['issue_number'] is None:
if 'annual' in i['parsedinfo']['series_name'].lower():
logger.fdebug('Annual detected with no issue number present. Assuming year as issue.')
if i['parsedinfo']['issue_year'] is not None:
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_year'])
else:
logger.fdebug('checking ' + m[cnt])
# we're assuming that the year is in brackets (and it should be damnit)
if m[cnt][:-2] == '19' or m[cnt][:-2] == '20':
logger.fdebug('year detected: ' + str(m[cnt]))
ydetected = 'yes'
result_comyear = m[cnt]
elif m[cnt][:3].lower() in datelist:
logger.fdebug('possible issue date format given - verifying')
#if the date of the issue is given as (Jan 2010) or (January 2010) let's adjust.
#keeping in mind that ',' and '.' are already stripped from the string
if m[cnt][-4:].isdigit():
ydetected = 'yes'
result_comyear = m[cnt][-4:]
logger.fdebug('Valid Issue year of ' + str(result_comyear) + ' detected in format of ' + str(m[cnt]))
cnt+=1
displength = len(cname)
logger.fdebug('cname length : ' + str(displength) + ' --- ' + str(cname))
logger.fdebug('d_filename is : ' + d_filename)
charcount = d_filename.count('#')
logger.fdebug('charcount is : ' + str(charcount))
if charcount > 0:
logger.fdebug('entering loop')
for i, m in enumerate(re.finditer('\#', d_filename)):
if m.end() <= displength:
logger.fdebug(comfilename[m.start():m.end()])
# find the occurrence in comfilename, then replace into d_filename so special characters are brought across
newchar = comfilename[m.start():m.end()]
logger.fdebug('newchar:' + str(newchar))
d_filename = d_filename[:m.start()] + str(newchar) + d_filename[m.end():]
logger.fdebug('d_filename:' + str(d_filename))
dispname = d_filename[:displength]
logger.fdebug('dispname : ' + dispname)
splitit = []
watchcomic_split = []
logger.fdebug("filename comic and issue: " + comic_andiss)
#changed this from '' to ' '
comic_iss_b4 = re.sub('[\-\:\,]', ' ', comic_andiss)
comic_iss = comic_iss_b4.replace('.', ' ')
comic_iss = re.sub('[\s+]', ' ', comic_iss).strip()
logger.fdebug("adjusted comic and issue: " + str(comic_iss))
#remove 'the' from here for proper comparisons.
if ' the ' in comic_iss.lower():
comic_iss = re.sub('\\bthe\\b', '', comic_iss).strip()
splitit = comic_iss.split(None)
logger.fdebug("adjusting from: " + str(comic_iss_b4) + " to: " + str(comic_iss))
#here we cycle through the Watchlist looking for a match.
while (cm_cn < watchcnt):
#setup the watchlist
comname = ComicName[cm_cn]
comyear = ComicYear[cm_cn]
compub = ComicPublisher[cm_cn]
comtotal = ComicTotal[cm_cn]
comicid = ComicID[cm_cn]
watch_location = ComicLocation[cm_cn]
# there shouldn't be an issue in the comic now, so let's just assume it's all gravy.
splitst = len(splitit)
watchcomic_split = helpers.cleanName(comname)
watchcomic_split = re.sub('[\-\:\,\.]', ' ', watchcomic_split).split(None)
logger.fdebug(str(splitit) + " file series word count: " + str(splitst))
logger.fdebug(str(watchcomic_split) + " watchlist word count: " + str(len(watchcomic_split)))
if (splitst) != len(watchcomic_split):
logger.fdebug("incorrect comic lengths...not a match")
# if str(splitit[0]).lower() == "the":
# logger.fdebug("THE word detected...attempting to adjust pattern matching")
# splitit[0] = splitit[4:]
else:
logger.fdebug("length match..proceeding")
n = 0
scount = 0
logger.fdebug("search-length: " + str(splitst))
logger.fdebug("Watchlist-length: " + str(len(watchcomic_split)))
while (n <= (splitst) -1):
logger.fdebug("splitit: " + str(splitit[n]))
if n < (splitst) and n < len(watchcomic_split):
logger.fdebug(str(n) + " Comparing: " + str(watchcomic_split[n]) + " .to. " + str(splitit[n]))
if '+' in watchcomic_split[n]:
watchcomic_split[n] = re.sub('\+', '', str(watchcomic_split[n]))
if str(watchcomic_split[n].lower()) in str(splitit[n].lower()) and len(watchcomic_split[n]) >= len(splitit[n]):
logger.fdebug("word matched on : " + str(splitit[n]))
scount+=1
#elif ':' in splitit[n] or '-' in splitit[n]:
# splitrep = splitit[n].replace('-', '')
# logger.fdebug("non-character keyword...skipped on " + splitit[n])
elif str(splitit[n]).lower().startswith('v'):
logger.fdebug("possible versioning..checking")
#we hit a versioning # - account for it
if splitit[n][1:].isdigit():
comicversion = str(splitit[n])
logger.fdebug("version found: " + str(comicversion))
else:
logger.fdebug("Comic / Issue section")
if splitit[n].isdigit():
logger.fdebug("issue detected")
else:
logger.fdebug("non-match for: "+ str(splitit[n]))
pass
n+=1
#set the match threshold to 80% (for now)
# if it's less than 80% consider it a non-match and discard.
#splitit has to splitit-1 because last position is issue.
wordcnt = int(scount)
logger.fdebug("scount:" + str(wordcnt))
totalcnt = int(splitst)
logger.fdebug("splitit-len:" + str(totalcnt))
spercent = (wordcnt * 100) / totalcnt
logger.fdebug("we got " + str(spercent) + " percent.")
if int(spercent) >= 80:
logger.fdebug("it's a go captain... - we matched " + str(spercent) + "%!")
logger.fdebug("this should be a match!")
logger.fdebug("issue we found for is : " + str(comiss))
#set the year to the series we just found ;)
result_comyear = comyear
#issue comparison now as well
logger.info(u"Found " + comname + " (" + str(comyear) + ") issue: " + str(comiss))
watchmatch = str(comicid)
dispname = DisplayName[cm_cn]
foundonwatch = "True"
break
elif int(spercent) < 80:
logger.fdebug("failure - we only got " + str(spercent) + "% right!")
cm_cn+=1
if foundonwatch == "False":
watchmatch = None
#---if it's not a match - send it to the importer.
n = 0
if volyr is None:
if result_comyear is None:
result_comyear = '0000' #no year in filename basically.
issuenumber = 'Annual 1'
else:
if result_comyear is None:
result_comyear = volyr
if volno is None:
if volyr is None:
vol_label = None
else:
vol_label = volyr
else:
vol_label = volno
issuenumber = i['parsedinfo']['issue_number']
logger.fdebug("adding " + com_NAME + " to the import-queue!")
impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
if 'annual' in i['parsedinfo']['series_name'].lower():
mod_series = re.sub('annual', '', i['parsedinfo']['series_name'], flags=re.I).strip()
logger.fdebug('Annual detected with no issue number present. Assuming year as issue.')
if i['parsedinfo']['issue_number'] is not None:
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_number'])
else:
if i['parsedinfo']['issue_year'] is not None:
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_year'])
else:
issuenumber = 'Annual 1'
else:
mod_series = i['parsedinfo']['series_name']
issuenumber = i['parsedinfo']['issue_number']
logger.fdebug('[' + mod_series + '] Adding to the import-queue!')
isd = filechecker.FileChecker(watchcomic=mod_series.decode('utf-8'))
is_dyninfo = isd.dynamic_replace(mod_series)
logger.fdebug('Dynamic-ComicName: ' + is_dyninfo['mod_seriesname'])
#impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
impid = str(random.randint(1000000,99999999))
logger.fdebug("impid: " + str(impid))
if cvscanned_loc == os.path.dirname(comlocation):
cv_cid = cvinfo_CID
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
logger.fdebug('CVINFO_COMICID attached : ' + str(cv_cid))
else:
cv_cid = None
if issuevolume is None:
logger.fdebug('issue volume is none : ' + str(issuevolume))
if i['parsedinfo']['series_volume'] is None:
issuevolume = None
else:
if str(i['parsedinfo']['series_volume'].lower()).startswith('v'):
issuevolume = i['parsedinfo']['series_volume']
else:
issuevolume = 'v' + str(i['parsedinfo']['series_volume'])
else:
logger.fdebug('issue volume not none : ' + str(issuevolume))
if issuevolume.lower().startswith('v'):
issuevolume = issuevolume
else:
issuevolume = 'v' + str(issuevolume)
logger.fdebug('IssueVolume is : ' + str(issuevolume))
import_by_comicids.append({
"impid": impid,
"comicid": cv_cid,
"issueid": None,
"watchmatch": watchmatch,
"displayname": dispname,
"comicname": dispname, #com_NAME,
"comicyear": result_comyear,
"volume": vol_label,
"watchmatch": None, #watchmatch (should be true/false if it already exists on watchlist)
"displayname": mod_series,
"comicname": i['parsedinfo']['series_name'],
"dynamicname": is_dyninfo['mod_seriesname'].lower(),
"comicyear": i['parsedinfo']['issue_year'],
"issuenumber": issuenumber,
"volume": issuevolume,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
})
})
cnt+=1
#logger.fdebug('import_by_ids: ' + str(import_by_comicids))
#reverse lookup all of the gathered IssueID's in order to get the related ComicID
vals = mylar.cv.getComic(None, 'import', comicidlist=issueid_list)
logger.fdebug('vals returned:' + str(vals))
reverse_issueids = []
for x in issueid_list:
reverse_issueids.append(x['issueid'])
vals = None
if len(reverse_issueids) > 0:
mylar.IMPORT_STATUS = 'Now Reverse looking up ' + str(len(reverse_issueids)) + ' IssueIDs to get the ComicIDs'
vals = mylar.cv.getComic(None, 'import', comicidlist=reverse_issueids)
#logger.fdebug('vals returned:' + str(vals))
if len(watch_kchoice) > 0:
watchchoice['watchlist'] = watch_kchoice
@ -629,79 +557,93 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
if not len(import_by_comicids):
return "Completed"
if len(import_by_comicids) > 0:
import_comicids['comic_info'] = import_by_comicids
#import_comicids['comic_info'] = import_by_comicids
#if vals:
# import_comicids['issueid_info'] = vals
#else:
# import_comicids['issueid_info'] = None
if vals:
import_comicids['issueid_info'] = vals
cvimport_comicids = vals
import_cv_ids = len(vals)
else:
import_comicids['issueid_info'] = None
logger.fdebug('import comicids: ' + str(import_by_comicids))
cvimport_comicids = None
import_cv_ids = 0
#logger.fdebug('import comicids: ' + str(import_by_comicids))
return import_comicids, len(import_by_comicids)
return {'import_by_comicids': import_by_comicids,
'import_count': len(import_by_comicids),
'CV_import_comicids': cvimport_comicids,
'import_cv_ids': import_cv_ids,
'issueid_list': issueid_list,
'failure_list': failure_list}
def scanLibrary(scan=None, queue=None):
valreturn = []
if scan:
try:
soma, noids = libraryScan()
soma = libraryScan(queue=queue)
except Exception, e:
logger.error('Unable to complete the scan: %s' % e)
logger.error('[IMPORT] Unable to complete the scan: %s' % e)
mylar.IMPORT_STATUS = None
return
if soma == "Completed":
logger.info('Successfully completed import.')
logger.info('[IMPORT] Successfully completed import.')
else:
logger.info('Starting mass importing...' + str(noids) + ' records.')
#this is what it should do...
#store soma (the list of comic_details from importing) into sql table so import can be whenever
#display webpage showing results
#allow user to select comic to add (one at a time)
#call addComic off of the webpage to initiate the add.
#return to result page to finish or continue adding.
#....
#threading.Thread(target=self.searchit).start()
#threadthis = threadit.ThreadUrl()
#result = threadthis.main(soma)
mylar.IMPORT_STATUS = 'Now adding the completed results to the DB.'
logger.info('[IMPORT] Parsing/Reading of files completed!')
logger.info('[IMPORT] Attempting to import ' + str(int(soma['import_cv_ids'] + soma['import_count'])) + ' files into your watchlist.')
logger.info('[IMPORT-BREAKDOWN] Files with ComicIDs successfully extracted: ' + str(soma['import_cv_ids']))
logger.info('[IMPORT-BREAKDOWN] Files that had to be parsed: ' + str(soma['import_count']))
logger.info('[IMPORT-BREAKDOWN] Files that were unable to be parsed: ' + str(len(soma['failure_list'])))
logger.info('[IMPORT-BREAKDOWN] Failure Files: ' + str(soma['failure_list']))
myDB = db.DBConnection()
sl = 0
logger.fdebug("number of records: " + str(noids))
while (sl < int(noids)):
soma_sl = soma['comic_info'][sl]
issue_info = soma['issueid_info']
logger.fdebug("soma_sl: " + str(soma_sl))
logger.fdebug("issue_info: " + str(issue_info))
logger.fdebug("comicname: " + soma_sl['comicname'])
logger.fdebug("filename: " + soma_sl['comfilename'])
if issue_info is not None:
for iss in issue_info:
if soma_sl['issueid'] == iss['IssueID']:
logger.info('IssueID match: ' + str(iss['IssueID']))
logger.info('ComicName: ' + str(iss['ComicName'] + '[' + str(iss['ComicID'])+ ']'))
IssID = iss['IssueID']
ComicID = iss['ComicID']
displayname = iss['ComicName']
comicname = iss['ComicName']
break
else:
IssID = None
displayname = soma_sl['displayname'].encode('utf-8')
comicname = soma_sl['comicname'].encode('utf-8')
ComicID = soma_sl['comicid'] #if it's been scanned in for cvinfo, this will be the CID - otherwise it's None
controlValue = {"impID": soma_sl['impid']}
newValue = {"ComicYear": soma_sl['comicyear'],
"Status": "Not Imported",
"ComicName": comicname,
"DisplayName": displayname,
"ComicID": ComicID,
"IssueID": IssID,
"Volume": soma_sl['volume'],
"ComicFilename": soma_sl['comfilename'].encode('utf-8'),
"ComicLocation": soma_sl['comlocation'].encode('utf-8'),
"ImportDate": helpers.today(),
"WatchMatch": soma_sl['watchmatch']}
myDB.upsert("importresults", newValue, controlValue)
sl+=1
#first we do the CV ones.
if int(soma['import_cv_ids']) > 0:
for i in soma['CV_import_comicids']:
#we need to find the impid in the issueid_list as that holds the impid + other info
abc = [x for x in soma['issueid_list'] if x['issueid'] == i['IssueID']]
ghi = abc[0]['importinfo']
nspace_dynamicname = re.sub('[\|\s]', '', ghi['dynamicname'].lower()).strip()
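#illustrative: a stored dynamicname of 'spider|man' (pipes marking replaced characters)
#collapses to 'spiderman' here for space/pipe-insensitive matching.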
#these all have related ComicID/IssueID's...just add them as is.
controlValue = {"impID": ghi['impid']}
newValue = {"Status": "Not Imported",
"ComicName": i['ComicName'],
"DisplayName": i['ComicName'],
"DynamicName": nspace_dynamicname,
"ComicID": i['ComicID'],
"IssueID": i['IssueID'],
"IssueNumber": i['Issue_Number'],
"Volume": ghi['volume'],
"ComicYear": ghi['comicyear'],
"ComicFilename": ghi['comfilename'].decode('utf-8'),
"ComicLocation": ghi['comlocation'],
"ImportDate": helpers.today(),
"WatchMatch": None} #i['watchmatch']}
myDB.upsert("importresults", newValue, controlValue)
if int(soma['import_count']) > 0:
for ss in soma['import_by_comicids']:
nspace_dynamicname = re.sub('[\|\s]', '', ss['dynamicname'].lower()).strip()
controlValue = {"impID": ss['impid']}
newValue = {"ComicYear": ss['comicyear'],
"Status": "Not Imported",
"ComicName": ss['comicname'], #.encode('utf-8'),
"DisplayName": ss['displayname'], #.encode('utf-8'),
"DynamicName": nspace_dynamicname,
"ComicID": ss['comicid'], #if it's been scanned in for cvinfo, this will be the CID - otherwise it's None
"IssueID": None,
"Volume": ss['volume'],
"IssueNumber": ss['issuenumber'].decode('utf-8'),
"ComicFilename": ss['comfilename'].decode('utf-8'), #ss['comfilename'].encode('utf-8'),
"ComicLocation": ss['comlocation'],
"ImportDate": helpers.today(),
"WatchMatch": ss['watchmatch']}
myDB.upsert("importresults", newValue, controlValue)
# because we could be adding volumes/series that span years, we need to account for this
# add the year to the db under the term, valid-years
# add the issue to the db under the term, min-issue
@ -710,6 +652,7 @@ def scanLibrary(scan=None, queue=None):
# unzip -z filename.cbz will show the comment field of the zip which contains the metadata.
#self.importResults()
mylar.IMPORT_STATUS = 'Import completed.'
valreturn.append({"somevalue": 'self.ie',
"result": 'success'})
return queue.put(valreturn)


@ -137,7 +137,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if not totalResults:
return False
if int(totalResults) > 1000:
logger.warn('Search returned more than 1000 hits [' + str(totalResults) + ']. Only displaying first 2000 results - use more specifics or the exact ComicID if required.')
logger.warn('Search returned more than 1000 hits [' + str(totalResults) + ']. Only displaying first 1000 results - use more specifics or the exact ComicID if required.')
totalResults = 1000
countResults = 0
while (countResults < int(totalResults)):
@ -176,7 +176,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
try:
xmlTag = result.getElementsByTagName('name')[n].firstChild.wholeText
xmlTag = xmlTag.rstrip()
logger.fdebug('name: ' + str(xmlTag))
logger.fdebug('name: ' + xmlTag)
except:
logger.error('There was a problem retrieving the given data from ComicVine. Ensure that www.comicvine.com is accessible.')
return
@ -278,7 +278,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if (result.getElementsByTagName('start_year')[0].firstChild) is not None:
xmlYr = result.getElementsByTagName('start_year')[0].firstChild.wholeText
else: xmlYr = "0000"
#logger.info('name:' + str(xmlTag) + ' -- ' + str(xmlYr))
logger.info('name:' + xmlTag + ' -- ' + str(xmlYr) + ' [limityear: ' + str(limityear) + ']')
if xmlYr in limityear or limityear == 'None':
xmlurl = result.getElementsByTagName('site_detail_url')[0].firstChild.wholeText
idl = len (result.getElementsByTagName('id'))
@ -293,7 +293,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if xmlid is None:
logger.error('Unable to figure out the comicid - skipping this : ' + str(xmlurl))
continue
#logger.info('xmlid: ' + str(xmlid))
publishers = result.getElementsByTagName('publisher')
if len(publishers) > 0:
pubnames = publishers[0].getElementsByTagName('name')
@ -303,6 +303,12 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
xmlpub = "Unknown"
else:
xmlpub = "Unknown"
logger.info('publisher: ' + xmlpub)
#ignore specific publishers on a global scale here.
if mylar.BLACKLISTED_PUBLISHERS is not None and any([x for x in mylar.BLACKLISTED_PUBLISHERS if x.lower() == xmlpub.lower()]):
#'panini' in xmlpub.lower() or 'deagostini' in xmlpub.lower() or 'Editorial Televisa' in xmlpub.lower():
logger.fdebug('Blacklisted publisher [' + xmlpub + ']. Ignoring this result.')
continue
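#hedged config.ini sketch for this option (comma-separated list; exact key spelling
#and section placement are assumptions based on the setting name):
#  blacklisted_publishers = Panini, Deagostini, Editorial Televisa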
try:
xmldesc = result.getElementsByTagName('description')[0].firstChild.wholeText


@ -15,20 +15,18 @@ def movefiles(comicid, comlocation, ogcname, imported=None):
for impr in impres:
srcimp = impr['ComicLocation']
orig_filename = impr['ComicFilename']
orig_iss = impr['impID'].rfind('-')
orig_iss = impr['impID'][orig_iss +1:]
logger.fdebug("Issue :" + str(orig_iss))
logger.fdebug("Issue :" + impr['IssueNumber'])
#before moving check to see if Rename to Mylar structure is enabled.
if mylar.IMP_RENAME and mylar.FILE_FORMAT != '':
logger.fdebug("Renaming files according to configuration details : " + str(mylar.FILE_FORMAT))
renameit = helpers.rename_param(comicid, impr['ComicName'], orig_iss, orig_filename)
renameit = helpers.rename_param(comicid, impr['ComicName'], impr['IssueNumber'], orig_filename)
nfilename = renameit['nfilename']
dstimp = os.path.join(comlocation, nfilename)
else:
logger.fdebug("Renaming files not enabled, keeping original filename(s)")
dstimp = os.path.join(comlocation, orig_filename)
logger.info("moving " + str(srcimp) + " ... to " + str(dstimp))
logger.info("moving " + srcimp + " ... to " + dstimp)
try:
shutil.move(srcimp, dstimp)
except (OSError, IOError):


@ -24,7 +24,7 @@ def newpull():
r = requests.get(pagelinks, verify=False)
except Exception, e:
logger.warn('Error fetching data: %s' % (tmpprov, e))
logger.warn('Error fetching data: %s' % e)
soup = BeautifulSoup(r.content)
getthedate = soup.findAll("div", {"class": "Headline"})[0]


@ -224,7 +224,10 @@ def torrents(pickfeed=None, seriesname=None, issue=None, feedinfo=None):
seeddigits = 0
if int(mylar.MINSEEDS) >= int(seeddigits):
#new releases has it as '&id', notification feeds have it as %ampid (possibly even &amp;id
link = feedme.entries[i].link
link = re.sub('&amp','&', link)
link = re.sub('&amp;','&', link)
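#illustrative (made-up id): a link of 'torrents.php?action=download&amp;id=12345'
#becomes 'torrents.php?action=download&id=12345', so the '&id' offset search below lands correctly.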
linkst = link.find('&id')
linken = link.find('&', linkst +1)
if linken == -1:
@ -493,48 +496,47 @@ def torrentdbsearch(seriesname, issue, comicid=None, nzbprov=None):
#cache db that have the incorrect entry, we'll adjust.
torTITLE = re.sub('&amp;', '&', tor['Title']).strip()
torsplit = torTITLE.split('/')
#torsplit = torTITLE.split(' ')
if mylar.PREFERRED_QUALITY == 1:
if 'cbr' in torTITLE:
logger.fdebug('Quality restriction enforced [ cbr only ]. Accepting result.')
else:
logger.fdebug('Quality restriction enforced [ cbr only ]. Rejecting result.')
continue
elif mylar.PREFERRED_QUALITY == 2:
if 'cbz' in torTITLE:
logger.fdebug('Quality restriction enforced [ cbz only ]. Accepting result.')
else:
logger.fdebug('Quality restriction enforced [ cbz only ]. Rejecting result.')
continue
logger.fdebug('tor-Title: ' + torTITLE)
logger.fdebug('there are ' + str(len(torsplit)) + ' sections in this title')
#logger.fdebug('there are ' + str(len(torsplit)) + ' sections in this title')
i=0
if nzbprov is not None:
if nzbprov != tor['Site']:
logger.fdebug('this is a result from ' + str(tor['Site']) + ', not the site I am looking for of ' + str(nzbprov))
continue
#0 holds the title/issue and format-type.
ext_check = True # extension checker to enforce cbr/cbz filetype restrictions.
while (i < len(torsplit)):
#we'll rebuild the string here so that it's formatted accordingly to be passed back to the parser.
logger.fdebug('section(' + str(i) + '): ' + torsplit[i])
#remove extensions
titletemp = torsplit[i]
titletemp = re.sub('cbr', '', titletemp)
titletemp = re.sub('cbz', '', titletemp)
titletemp = re.sub('none', '', titletemp)
if i == 0:
rebuiltline = titletemp
else:
rebuiltline = rebuiltline + ' (' + titletemp + ')'
i+=1
if ext_check == False:
continue
logger.fdebug('rebuiltline is :' + rebuiltline)
#--- this was for old cbt feeds, no longer used for 32p
# while (i < len(torsplit)):
# #we'll rebuild the string here so that it's formatted accordingly to be passed back to the parser.
# logger.fdebug('section(' + str(i) + '): ' + torsplit[i])
# #remove extensions
# titletemp = torsplit[i]
# titletemp = re.sub('cbr', '', titletemp)
# titletemp = re.sub('cbz', '', titletemp)
# titletemp = re.sub('none', '', titletemp)
# if i == 0:
# rebuiltline = titletemp
# else:
# rebuiltline = rebuiltline + ' (' + titletemp + ')'
# i+=1
# logger.fdebug('rebuiltline is :' + rebuiltline)
#----
seriesname_mod = seriesname
foundname_mod = torsplit[0]
foundname_mod = torTITLE #torsplit[0]
seriesname_mod = re.sub("\\band\\b", " ", seriesname_mod.lower())
foundname_mod = re.sub("\\band\\b", " ", foundname_mod.lower())
seriesname_mod = re.sub("\\bthe\\b", " ", seriesname_mod.lower())
@ -571,24 +573,24 @@ def torrentdbsearch(seriesname, issue, comicid=None, nzbprov=None):
extra = ''
#the title on 32P has a mix-mash of crap...ignore everything after cbz/cbr to cleanit
ctitle = torTITLE.find('cbr')
if ctitle == 0:
ctitle = torTITLE.find('cbz')
if ctitle == 0:
ctitle = torTITLE.find('none')
if ctitle == 0:
logger.fdebug('cannot determine title properly - ignoring for now.')
continue
cttitle = torTITLE[:ctitle]
#ctitle = torTITLE.find('cbr')
#if ctitle == 0:
# ctitle = torTITLE.find('cbz')
# if ctitle == 0:
# ctitle = torTITLE.find('none')
# if ctitle == 0:
# logger.fdebug('cannot determine title properly - ignoring for now.')
# continue
#cttitle = torTITLE[:ctitle]
if tor['Site'] == '32P':
st_pub = rebuiltline.find('(')
if st_pub < 2 and st_pub != -1:
st_end = rebuiltline.find(')')
rebuiltline = rebuiltline[st_end +1:]
# if tor['Site'] == '32P':
# st_pub = rebuiltline.find('(')
# if st_pub < 2 and st_pub != -1:
# st_end = rebuiltline.find(')')
# rebuiltline = rebuiltline[st_end +1:]
tortheinfo.append({
'title': rebuiltline, #cttitle,
'title': torTITLE, #cttitle,
'link': tor['Link'],
'pubdate': tor['Pubdate'],
'site': tor['Site'],
@ -889,9 +891,20 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
return "pass"
elif mylar.TORRENT_SEEDBOX:
tssh = ftpsshup.putfile(filepath, filename)
return tssh
if mylar.RTORRENT_HOST:
import test
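#'test' is the new mylar/test.py wrapper added with this change; it connects
#torrent.clients.rtorrent.TorrentClient and loads/verifies the torrent via its info-hash.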
rp = test.RTorrent()
torrent_info = rp.main(filepath=filepath)
logger.info(torrent_info)
if torrent_info:
return "pass"
else:
return "fail"
else:
tssh = ftpsshup.putfile(filepath, filename)
return tssh
if __name__ == '__main__':
#torrents(sys.argv[1])


@ -846,10 +846,11 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
ctchk = cleantitle.split()
ctchk_indexes = []
origvol = None
volfound = False
vol_nono = []
new_cleantitle = []
fndcomicversion = None
for ct in ctchk:
if any([ct.lower().startswith('v') and ct[1:].isdigit(), ct.lower()[:3] == 'vol', volfound == True]):
@ -870,6 +871,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
tmpsplit = ct
if tmpsplit.lower().startswith('vol'):
logger.fdebug('volume detected - stripping and re-analyzing for volume label.')
origvol = tmpsplit
if '.' in tmpsplit:
tmpsplit = re.sub('\.', '', tmpsplit).strip()
tmpsplit = re.sub('vol', '', tmpsplit.lower()).strip()
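#worked example (hypothetical token): 'Vol.2' -> drop the '.' -> 'vol2' -> strip 'vol' -> '2',
#leaving a bare numeric volume for the version checks that follow.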
@ -912,6 +914,8 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
continue
if fndcomicversion:
cleantitle = re.sub(origvol, '', cleantitle).strip()
logger.fdebug('Newly finished reformed cleantitle (with NO volume label): ' + cleantitle)
versionfound = "yes"
break

mylar/test.py Normal file

@ -0,0 +1,75 @@
import os
import sys
import re
import time
import shutil
import traceback
from base64 import b16encode, b32decode
from torrent.helpers.variable import link, symlink, is_rarfile
import lib.requests as requests
from lib.unrar2 import RarFile
import torrent.clients.rtorrent as TorClient
import mylar
from mylar import logger, helpers
class RTorrent(object):
def __init__(self):
self.client = TorClient.TorrentClient()
if not self.client.connect(mylar.RTORRENT_HOST,
mylar.RTORRENT_USERNAME,
mylar.RTORRENT_PASSWORD):
logger.error('could not connect to %s, exiting', mylar.RTORRENT_HOST)
sys.exit(-1)
def main(self, torrent_hash=None, filepath=None):
torrent = self.client.find_torrent(torrent_hash)
if torrent:
logger.warn("%s Torrent already exists. Not downloading at this time.", torrent_hash)
return
if filepath:
loadit = self.client.load_torrent(filepath)
if loadit:
torrent_hash = self.get_the_hash(filepath)
else:
return
torrent = self.client.find_torrent(torrent_hash)
if torrent is None:
logger.warn("Couldn't find torrent with hash: %s", torrent_hash)
sys.exit(-1)
torrent_info = self.client.get_torrent(torrent)
if torrent_info['completed']:
logger.info("Directory: %s", torrent_info['folder'])
logger.info("Name: %s", torrent_info['name'])
logger.info("FileSize: %s", helpers.human_size(torrent_info['total_filesize']))
logger.info("Completed: %s", torrent_info['completed'])
logger.info("Downloaded: %s", helpers.human_size(torrent_info['download_total']))
logger.info("Uploaded: %s", helpers.human_size(torrent_info['upload_total']))
logger.info("Ratio: %s", torrent_info['ratio'])
#logger.info("Time Started: %s", torrent_info['time_started'])
logger.info("Seeding Time: %s", helpers.humanize_time(int(time.time()) - torrent_info['time_started']))
if torrent_info['label']:
logger.info("Torrent Label: %s", torrent_info['label'])
logger.info(torrent_info)
return torrent_info
def get_the_hash(self, filepath):
import hashlib, StringIO
import lib.rtorrent.lib.bencode as bencode
# Open torrent file
torrent_file = open(filepath, "rb")
metainfo = bencode.decode(torrent_file.read())
info = metainfo['info']
thehash = hashlib.sha1(bencode.encode(info)).hexdigest().upper()
logger.info('Hash: ' + thehash)
return thehash
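#the SHA1 of the bencoded 'info' dict is the torrent's info-hash - the same identifier
#rtorrent tracks - which is why find_torrent() can locate the freshly loaded file above.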

mylar/torrent/__init__.py Executable file


mylar/torrent/clients/rtorrent.py Executable file

@ -0,0 +1,116 @@
import os
from lib.rtorrent import RTorrent
import mylar
from mylar import logger, helpers
class TorrentClient(object):
def __init__(self):
self.conn = None
def connect(self, host, username, password):
if self.conn is not None:
return self.conn
if not host:
return False
if username and password:
self.conn = RTorrent(
host,
username,
password
)
else:
self.conn = RTorrent(host)
return self.conn
def find_torrent(self, hash):
return self.conn.find_torrent(hash)
def get_torrent (self, torrent):
torrent_files = []
torrent_directory = os.path.normpath(torrent.directory)
try:
for f in torrent.get_files():
if not os.path.normpath(f.path).startswith(torrent_directory):
file_path = os.path.join(torrent_directory, f.path.lstrip('/'))
else:
file_path = f.path
torrent_files.append(file_path)
torrent_info = {
'hash': torrent.info_hash,
'name': torrent.name,
'label': torrent.get_custom1() if torrent.get_custom1() else '',
'folder': torrent_directory,
'completed': torrent.complete,
'files': torrent_files,
'upload_total': torrent.get_up_total(),
'download_total': torrent.get_down_total(),
'ratio': torrent.get_ratio(),
'total_filesize': torrent.get_size_bytes(),
'time_started': torrent.get_time_started()
}
except Exception:
raise
return torrent_info if torrent_info else False
def load_torrent(self, filepath):
start = bool(mylar.RTORRENT_STARTONLOAD)
logger.info('filepath to torrent file set to : ' + filepath)
torrent = self.conn.load_torrent(filepath, verify_load=True)
if not torrent:
return False
if mylar.RTORRENT_LABEL:
torrent.set_custom(1, mylar.RTORRENT_LABEL)
logger.info('Setting label for torrent to : ' + mylar.RTORRENT_LABEL)
if mylar.RTORRENT_DIRECTORY:
torrent.set_directory(mylar.RTORRENT_DIRECTORY)
logger.info('Setting directory for torrent to : ' + mylar.RTORRENT_DIRECTORY)
logger.info('Successfully loaded torrent.')
#note that if set_directory is enabled, the torrent has to be started AFTER it's loaded or else it will give chunk errors and not seed
if start:
logger.info('[' + str(start) + '] Now starting torrent.')
torrent.start()
else:
logger.info('[' + str(start) + '] Not starting torrent due to configuration setting.')
return True
def start_torrent(self, torrent):
return torrent.start()
def stop_torrent(self, torrent):
return torrent.stop()
def delete_torrent(self, torrent):
deleted = []
try:
for file_item in torrent.get_files():
file_path = os.path.join(torrent.directory, file_item.path)
os.unlink(file_path)
deleted.append(file_item.path)
if torrent.is_multi_file() and torrent.directory.endswith(torrent.name):
try:
for path, _, _ in os.walk(torrent.directory, topdown=False):
os.rmdir(path)
deleted.append(path)
except:
pass
except Exception:
raise
torrent.erase()
return deleted
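A minimal usage sketch for the new rtorrent client (hypothetical host, credentials and paths; assumes Mylar's RTORRENT_* globals have already been loaded from config.ini):

from mylar.torrent.clients.rtorrent import TorrentClient

client = TorrentClient()
#connect() returns the underlying RTorrent connection (falsy on failure)
if client.connect('https://localhost/RPC2', 'rtuser', 'rtpass'):
    #loads the .torrent, applies the configured label + download directory,
    #and starts it afterwards if rtorrent_startonload is enabled
    client.load_torrent('/tmp/Some.Comic.001.torrent')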


@ -0,0 +1,96 @@
import os
from libs.utorrent.client import UTorrentClient
# Only compatible with uTorrent 3.0+
class TorrentClient(object):
def __init__(self):
self.conn = None
def connect(self, host, username, password):
if self.conn is not None:
return self.conn
if not host:
return False
if username and password:
self.conn = UTorrentClient(
host,
username,
password
)
else:
self.conn = UTorrentClient(host)
return self.conn
def find_torrent(self, hash):
torrent = None
try:
torrent_list = self.conn.list()[1]
for t in torrent_list['torrents']:
if t[0] == hash:
torrent = t
except Exception:
raise
return torrent if torrent else False
def get_torrent(self, torrent):
if not torrent[26]:
raise Exception('Only compatible with uTorrent 3.0+')
torrent_files = []
torrent_completed = False
torrent_directory = os.path.normpath(torrent[26])
try:
if torrent[4] == 1000:
torrent_completed = True
files = self.conn.getfiles(torrent[0])[1]['files'][1]
for f in files:
if not os.path.normpath(f[0]).startswith(torrent_directory):
file_path = os.path.join(torrent_directory, f[0].lstrip('/'))
else:
file_path = f[0]
torrent_files.append(file_path)
torrent_info = {
'hash': torrent[0],
'name': torrent[2],
'label': torrent[11] if torrent[11] else '',
'folder': torrent[26],
'completed': torrent_completed,
'files': torrent_files,
}
except Exception:
raise
return torrent_info
def start_torrent(self, torrent_hash):
return self.conn.start(torrent_hash)
def stop_torrent(self, torrent_hash):
return self.conn.stop(torrent_hash)
def delete_torrent(self, torrent):
deleted = []
try:
files = self.conn.getfiles(torrent[0])[1]['files'][1]
for f in files:
deleted.append(os.path.normpath(os.path.join(torrent[26], f[0])))
self.conn.removedata(torrent[0])
except Exception:
raise
return deleted



@ -0,0 +1,31 @@
import os
def link(src, dst):
if os.name == 'nt':
import ctypes
if ctypes.windll.kernel32.CreateHardLinkW(unicode(dst), unicode(src), 0) == 0: raise ctypes.WinError()
else:
os.link(src, dst)
def symlink(src, dst):
if os.name == 'nt':
import ctypes
if ctypes.windll.kernel32.CreateSymbolicLinkW(unicode(dst), unicode(src), 1 if os.path.isdir(src) else 0) in [0, 1280]: raise ctypes.WinError()
else:
os.symlink(src, dst)
def is_rarfile(f):
import binascii
with open(f, "rb") as f:
byte = f.read(12)
spanned = binascii.hexlify(byte[10])
main = binascii.hexlify(byte[11])
if spanned == "01" and main == "01": # main rar archive in a set of archives
return True
elif spanned == "00" and main == "00": # single rar
return True
return False


@ -780,7 +780,9 @@ def forceRescan(ComicID, archive=None, module=None):
logger.info(module + ' Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in ' + rescan['ComicLocation'])
fca = []
if archive is None:
tmpval = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
tval = filechecker.FileChecker(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
tmpval = tval.listFiles()
#tmpval = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
comiccnt = int(tmpval['comiccount'])
logger.fdebug(module + 'comiccnt is:' + str(comiccnt))
fca.append(tmpval)
@ -790,12 +792,16 @@ def forceRescan(ComicID, archive=None, module=None):
logger.fdebug(module + 'os.path.basename: ' + os.path.basename(rescan['ComicLocation']))
pathdir = os.path.join(mylar.MULTIPLE_DEST_DIRS, os.path.basename(rescan['ComicLocation']))
logger.info(module + ' Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in :' + pathdir)
tmpv = filechecker.listFiles(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
mvals = filechecker.FileChecker(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
tmpv = mvals.listFiles()
#tmpv = filechecker.listFiles(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
logger.fdebug(module + 'tmpv filecount: ' + str(tmpv['comiccount']))
comiccnt += int(tmpv['comiccount'])
fca.append(tmpv)
else:
files_arc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
# files_arc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
arcval = filechecker.FileChecker(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
files_arc = arcval.listFiles()
fca.append(files_arc)
comiccnt = int(files_arc['comiccount'])
fcb = []
@ -1001,7 +1007,7 @@ def forceRescan(ComicID, archive=None, module=None):
logger.fdebug(module + ' Matched...issue: ' + rescan['ComicName'] + '#' + reiss['Issue_Number'] + ' --- ' + str(int_iss))
havefiles+=1
haveissue = "yes"
isslocation = tmpfc['ComicFilename']
isslocation = tmpfc['ComicFilename'].decode('utf-8')
issSize = str(tmpfc['ComicSize'])
logger.fdebug(module + ' .......filename: ' + isslocation)
logger.fdebug(module + ' .......filesize: ' + str(tmpfc['ComicSize']))
@ -1141,9 +1147,9 @@ def forceRescan(ComicID, archive=None, module=None):
logger.fdebug(module + ' Matched...annual issue: ' + rescan['ComicName'] + '#' + str(reann['Issue_Number']) + ' --- ' + str(int_iss))
havefiles+=1
haveissue = "yes"
isslocation = str(tmpfc['ComicFilename'])
isslocation = tmpfc['ComicFilename'].decode('utf-8')
issSize = str(tmpfc['ComicSize'])
logger.fdebug(module + ' .......filename: ' + str(isslocation))
logger.fdebug(module + ' .......filename: ' + isslocation)
logger.fdebug(module + ' .......filesize: ' + str(tmpfc['ComicSize']))
# to avoid duplicate issues which screws up the count...let's store the filename issues then
# compare earlier...
@ -1215,6 +1221,7 @@ def forceRescan(ComicID, archive=None, module=None):
myDB.upsert("annuals", newValueDict, controlValueDict)
ANNComicID = None
else:
logger.fdebug(newValueDict)
#issID_to_write.append({"tableName": "issues",
# "valueDict": newValueDict,
# "keyDict": controlValueDict})
@ -1226,7 +1233,7 @@ def forceRescan(ComicID, archive=None, module=None):
# logger.info('writing ' + str(iss))
# writethis = myDB.upsert(iss['tableName'], iss['valueDict'], iss['keyDict'])
logger.fdebug(module + ' IssueID to ignore: ' + str(issID_to_ignore))
#logger.fdebug(module + ' IssueID to ignore: ' + str(issID_to_ignore))
#here we need to change the status of the ones we DIDN'T FIND above since the loop only hits on FOUND issues.
update_iss = []



@ -472,15 +472,7 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
b_list = []
comicid = []
# if it's a one-off check (during an add series), load the comicname here and ignore below.
if comic1off_name:
logger.fdebug("This is a one-off for " + comic1off_name + '[ latest issue: ' + str(issue) + ' ]')
lines.append(comic1off_name.strip())
unlines.append(comic1off_name.strip())
comicid.append(comic1off_id)
latestissue.append(issue)
w = 1
else:
if comic1off_name is None:
#let's read in the comic.watchlist from the db here
#cur.execute("SELECT ComicID, ComicName_Filesafe, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing, AlternateSearch, LatestIssue from comics WHERE Status = 'Active'")
weeklylist = []
@ -577,6 +569,16 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
else:
logger.fdebug("Determined to not be a Continuing series at this time.")
else:
# if it's a one-off check (during an add series), load the comicname here and ignore below.
logger.fdebug("This is a one-off for " + comic1off_name + ' [ latest issue: ' + str(issue) + ' ]')
lines.append(comic1off_name.strip())
unlines.append(comic1off_name.strip())
comicid.append(comic1off_id)
latestissue.append(issue)
w = 1
if w >= 1:
cnt = int(w -1)
cntback = int(w -1)
kp = []