mirror of https://github.com/evilhero/mylar
FIX: When retrieving feeds from 32P in Auth mode, personal notification feeds contained some invalid HTML entries that weren't removed properly, resulting in no results when attempting to match for downloading.
FIX: When searching 32P, if a title contained a '/', Mylar would mistakenly skip it due to some previous exceptions that were made for CBT.
FIX: Main page would quickly display and then hide the have% column instead of keeping it hidden.
FIX: Adjusted some incorrect spacing for non-alphanumeric characters when comparing search results (should result in better matching).
FIX: When adding a series whose most recent issue was on the weekly pull list, it would sometimes not be marked as Wanted and auto-searched (when auto mark Upcoming is enabled).
FIX: Added a Test Connection button for 32P that tests logon credentials as well as whether a captcha is present.
IMP: If a captcha is enabled for 32P and sign-on is required because keys are stale, authentication information will not be sent and 32P will simply be bypassed as a provider.
IMP: Test Connection button added for SABnzbd.
IMP: Added the ability to add torrents directly to rtorrent and apply label + download directory options (config.ini only at the moment).
FIX: If a search result had a 'vol.' label, Mylar would sometimes refuse to remove the volume depending on the label's format, which resulted in failed matches (also fixed a similar failure to remove the volume label when comparing search results).
FIX: When filechecking, a series with a '-' in the title is now accounted for properly.
IMP: Completely redid the filecheck module, which allows integration into other modules as well as more detailed failure logs.
IMP: Added dynamic-handler integration into filechecker and the modules that use it, which allows special characters to be replaced with any other type of character.
IMP: Manual post-processing speed improved greatly due to the new filecheck module.
IMP: Importer backend code redone to use the new filecheck module.
IMP: Added a status/counter to the import process.
IMP: Added a force-unlock option to the importer for failed imports.
IMP: Added a new import status labelled 'Manual Intervention' for imports that need the user to manually select an option from an available list.
FIX: When an import indicated there were search results to view but none were available, the page would go blank.
IMP: Added a failure log entry showing all the files that couldn't be scanned in during an import (will be in the GUI eventually).
IMP: If only partial metadata is available during an import, Mylar will attempt to use what's available from the metatagging instead of picking all of one or the other.
IMP: Better grouping of series/volumes when viewing the import results page, which now also indicates whether annuals are present within the files.
IMP: Added a file icon beside each imported item on the import results page which allows the user to view the files associated with the given series grouping.
IMP: Added a blacklisted_publishers option to config.ini which will blacklist specific publishers from being returned during search / import results.
FIX: If the duplicate dump folder had a value but duplicate dump wasn't enabled, the duplicate dump folder would still be used during post-processing runs.
FIX: (#1194) Patch to allow for fixed H1 elements for the title (thanks chazlarson).
FIX: Removed UnRAR dependency checks in cmtagmylar since they are no longer used.
FIX: Fixed a problem with non-ASCII character recognition during a file check in certain cases.
IMP: Mylar will attempt to grab an alternate JPG from the file when viewing issue details if it complies with the naming conventions.
FIX: Fixed some metatagging issues where ComicBookLover tags weren't handled properly if they didn't exist.
IMP: Dupecheck now has a fallback when comparing cbr/cbr or cbz/cbz files and cbr/cbz priority is enabled.
FIX: Added a quick check when adding/refreshing a comic so that if a cover already exists, it is deleted prior to the attempt to retrieve it.
IMP: Added some additional handling for when searching/adding fails.
FIX: If a story arc didn't have proper (or had invalid) issue dates, the story arc main page would error out on loading, usually when arcs were imported using a CBL file.
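The rtorrent item above is configurable only through config.ini for now. As a rough illustration of how the rtorrent library bundled in this commit could be driven to apply a label and a download directory, here is a minimal, hedged sketch; the import path, the option values, and the set_directory/set_custom1/start calls are assumptions based on upstream rtorrent-python 0.2.9 rather than Mylar's actual code:

    # Sketch only: paths, option values and the per-torrent setters are assumptions
    # (set_custom1 is the ruTorrent-style label field in upstream rtorrent-python).
    from lib.rtorrent import RTorrent

    def add_to_rtorrent(filepath, uri='http://localhost/RPC2',
                        username=None, password=None,
                        label=None, directory=None):
        # connect and sanity-check the XML-RPC interface (http/https/scgi supported)
        client = RTorrent(uri, username=username, password=password, verify=True)

        # load the .torrent without starting it so options can be applied first
        torrent = client.load_torrent(filepath, start=False)

        if directory:
            torrent.set_directory(directory)   # per-torrent download directory
        if label:
            torrent.set_custom1(label)         # label shown by ruTorrent

        torrent.start()
        return torrent.info_hash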
This commit is contained in:
parent 4e56a31f77
commit 6085c9f993
@@ -155,6 +155,7 @@ table#series_table th#name { text-align: left; min-width: 250px; }
table#series_table th#year { text-align: left; max-width: 25px; }
table#series_table th#issue { text-align: left; min-width: 100px; }
table#series_table th#published { vertical-align: middle; text-align: left; min-width:40px; }
table#series_table th#have_percent { text-align: left; max-width: 25px; }
table#series_table th#have { text-align: center; }
table#series_table th#status { vertical-align: middle; text-align: left; min-width: 25px; }
table#series_table th#active { vertical-align: middle; text-align: left; max-width: 20px; }

@@ -164,6 +165,7 @@ table#series_table td#name { text-align: left; max-width: 250px; }
table#series_table td#year { vertical-align: middle; text-align: left; max-width: 30px; }
table#series_table td#issue { vertical-align: middle; text-align: left; max-width: 100px; }
table#series_table td#published { vertical-align: middle; text-align: left; max-width: 40px; }
table#series_table td#have_percent { vertical-align: middle; text-align: left; max-width: 30px; }
table#series_table td#have { text-align: center; }
table#series_table td#status { vertical-align: middle; text-align: left; max-width: 25px; }
table#series_table td#active { vertical-align: middle; text-align: left; max-width: 20px; }
@@ -43,6 +43,7 @@
<div id="paddingheader">
<h1>
</br>
%if comic['Status'] == 'Loading':
<img src="interfaces/default/images/loader_black.gif" alt="loading" style="float:left; margin-right: 5px;"/>
%endif
@@ -448,7 +449,7 @@
%if linky:
<a href="downloadthis?pathfile=${linky |u}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the Issue" class="highqual" /></a>
%if linky.endswith('.cbz'):
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}', '${comic['ComicName']}', '${issue['Issue_Number']}', '${issue['IssueDate']}', '${issue['IssueName'] |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<div id="issue-box" class="issue-popup">
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
<fieldset>
@@ -582,7 +583,7 @@
%if linky:
<a href="downloadthis?pathfile=${linky |u}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the annual" class="highqual" /></a>
%if linky.endswith('.cbz'):
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<a href="#issue-box" onclick="return runMetaIssue('${linky |u}', '${comic['ComicName']}', '${annual['Issue_Number']}', '${annual['IssueDate']}', '${annual['IssueName']}');" class="issue-window"><img src="interfaces/default/images/issueinfo.png" height="25" width="25" title="View Issue Details" class="highqual" /></a>
<div id="issue-box" class="issue-popup">
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
<fieldset>
@@ -700,12 +701,12 @@
</script>

<script>
function runMetaIssue(filelink) {
// alert(filelink);
function runMetaIssue(filelink, comicname, issue, date, title) {
alert(filelink);
$.ajax({
type: "GET",
url: "IssueInfo",
data: { filelocation: filelink },
data: { filelocation: filelink, comicname: comicname, issue: issue, date: date, title: title },
success: function(response) {
var names = response
$('#responsethis').html(response);
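The runMetaIssue() rewrite above forwards the series name, issue number, date and title along with the file location, so the IssueInfo view has something to fall back on when the file's own tags are incomplete (the partial-metadata item in the commit message). A purely hypothetical Python sketch of that merge idea, not Mylar's actual code:

    # Hypothetical helper: prefer values read from the file's tags, fall back to
    # what the web page already knows about the issue.
    def merge_issue_details(tag_meta, comicname, issue, date, title):
        fallback = {
            'series': comicname,
            'issue_number': issue,
            'issue_date': date,
            'title': title,
        }
        merged = {}
        for key, default in fallback.items():
            value = (tag_meta or {}).get(key)
            # treat empty strings and 'None' placeholders as missing
            merged[key] = value if value not in (None, '', 'None') else default
        return merged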
@@ -319,11 +319,11 @@
</div>
</div>

<!--
<div class="row">
<a href="#" style="float:right" type="button" onclick="doAjaxCall('addAction();SABtest',$(this))" data-success="Sucessfully tested SABnzbd connection" data-error="Error testing SABnzbd connection"><span class="ui-icon ui-icon-extlink"></span>Test SABnzbd</a>
<div align="center" class="row">
<input type="button" value="Test SABnzbd" id="test_sab" style="float:center" /></br>
<input type="text" name="sabstatus" style="text-align:center; font-size:11px;" id="sabstatus" size="50" DISABLED />
</div>
-->

</fieldset>
<fieldset id="nzbget_options">
<div class="row">
@@ -542,6 +542,10 @@
<input type="password" name="password_32p" value="${config['password_32p']}" size="36">
<small>( monitor the NEW releases feed & your personal notifications )</small>
</div>
<div align="center" class="row">
<input type="button" value="Test Connection" id="test_32p" style="float:center" /></br>
<input type="text" name="status32p" style="text-align:center; font-size:11px;" id="status32p" size="50" DISABLED />
</div>
</fieldset>
</div>
<div class="row checkbox left clearfix">
@@ -691,6 +695,7 @@
</fieldset>
<fieldset>
<legend>Duplicate Handling</legend>
<small>( if filetypes are identical, will retain larger filesize )</small>
<div class="row">
<label>Retain based on</label>
<select name="dupeconstraint">
@@ -1313,6 +1318,26 @@
$('#api_key').val(data);
});
});
$("#test_32p").click(function(){
$.get('test_32p',
function(data){
if (data.error != undefined) {
alert(data.error);
return;
}
$('#status32p').val(data);
});
});
$("#test_sab").click(function(){
$.get('SABtest',
function(data){
if (data.error != undefined) {
alert(data.error);
return;
}
$('#sabstatus').val(data);
});
});
if ($("#enable_https").is(":checked"))
{
$("#https_options").show();
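The two click handlers above simply GET the SABtest and test_32p endpoints and write whatever text comes back into the status fields. A server-side SABnzbd test can be as small as the sketch below, which uses SABnzbd's standard api?mode=queue call; the function name and the way host/apikey are passed in are illustrative, not the actual Mylar endpoint:

    # Hedged sketch of a SABnzbd connection test; host/apikey handling is illustrative.
    import json
    import urllib
    import urllib2

    def test_sabnzbd(host, apikey):
        # SABnzbd answers mode=queue with JSON, or a top-level "error" field
        # when the API key is wrong or the API is disabled.
        params = urllib.urlencode({'mode': 'queue', 'output': 'json', 'apikey': apikey})
        url = host.rstrip('/') + '/api?' + params
        try:
            response = json.load(urllib2.urlopen(url, timeout=20))
        except Exception as e:
            return 'Unable to reach SABnzbd: %s' % e
        if isinstance(response, dict) and 'error' in response:
            return 'SABnzbd error: %s' % response['error']
        return 'Successfully connected to SABnzbd'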
@@ -182,6 +182,11 @@ img.editArt {
max-height: 300px;
position: relative;
}
.className {
width: 500px;
height: 400px;
overflow: scroll;
}
.title {
margin-bottom: 20px;
margin-top: 10px;
@@ -334,6 +339,9 @@ form fieldset small.heading {
margin-bottom: 10px;
margin-top: -15px;
}
form .fieldset-auto-width {
display: inline-block;
}
form .row {
font-family: Helvetica, Arial;
margin-bottom: 10px;
@@ -861,7 +869,7 @@ div#searchbar .mini-icon {
}
#paddingheader h1 {
line-height: 33px;
width: 450px;
/* width: 450px; */
}
#paddingheader h1 img {
float: left;
@@ -1227,10 +1235,14 @@ div#artistheader h2 a {
min-width: 275px;
text-align: left;
}
#series_table th#year,
#series_table th#year {
max-width: 25px;
text-align: left;
}
#series_table th#have_percent {
max-width: 25px;
text-align: left;
display: none;
}
#series_table th#active {
max-width: 40px;
@@ -1257,10 +1269,15 @@ div#artistheader h2 a {
vertical-align: middle;
}
#series_table td#year,
max-width: 25px;
text-align: left;
vertical-align: middle;
}
#series_table td#have_percent {
max-width: 25px;
text-align: left;
vertical-align: middle;
display: none;
}
#series_table td#active {
max-width: 40px;
@@ -1399,6 +1416,10 @@ div#artistheader h2 a {
min-width: 75px;
text-align: center;
}
#impresults_table th#issues {
min-width: 25px;
text-align: center;
}
#impresults_table th#status {
min-width: 110px;
text-align: center;
@@ -1430,6 +1451,11 @@ div#artistheader h2 a {
text-align: left;
vertical-align: middle;
}
#impresults_table td#issues {
min-width: 25px;
text-align: left;
vertical-align: middle;
}
#impresults_table td#status {
min-width: 110px;
text-align: left;
@@ -1445,7 +1471,6 @@ div#artistheader h2 a {
vertical-align: middle;
text-align: left;
}

#searchmanage_table th#comicname {
min-width: 325px;
text-align: left;
@@ -532,7 +532,7 @@ div#searchbar {
top: -25px;
h1 {
line-height: 33px;
width: 450px;
// width: 450px;
img {
float:left;
margin-right: 5px;
Binary file not shown.
After Width: | Height: | Size: 2.1 KiB |
Binary file not shown.
After Width: | Height: | Size: 10 KiB |
@@ -46,7 +46,11 @@
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_metadata" id="imp_metadata" value="1" ${checked(mylar.IMP_METADATA)}><label>Use Existing Metadata</label>
<small>(Use existing Metadata to better locate series for import)</small>
</div>

%if mylar.IMPORTLOCK:
<div class="row checkbox">
<input type="checkbox" onclick="<%mylar.IMPORTLOCK = False%>" style="vertical-align: middle; margin: 3px; margin-top: -1px;"><label>Existing Import running. Force next run?</label>
</div>
%endif
</fieldset>
</span>
</tr>
@@ -70,7 +74,7 @@
<tr><center><h3>To be Imported</h3></center></tr>
<thead>
<tr>
<th id="select"></th>
<th id="select" align="left"><input type="checkbox" onClick="toggle(this)" name="results" class="checkbox" /></th>
<th id="comicname">Comic Name</th>
<th id="comicyear">Year</th>
<th id="issues">Issues</th>
@ -83,16 +87,36 @@
|
|||
%if results:
|
||||
%for result in results:
|
||||
<%
|
||||
if result['Status'] == 'Imported':
|
||||
grade = 'X'
|
||||
elif result['Status'] == 'Manual Intervention':
|
||||
grade = 'C'
|
||||
else:
|
||||
grade = 'Z'
|
||||
|
||||
if result['DisplayName'] is None:
|
||||
displayname = result['ComicName']
|
||||
else:
|
||||
displayname = result['DisplayName']
|
||||
displayline = displayname
|
||||
if result['AnnualCount'] > 0:
|
||||
displayline += '(+' + str(result['AnnualCount']) + ' Annuals)'
|
||||
if all([result['Volume'] is not None, result['Volume'] != 'None']):
|
||||
displayline += ' (' + str(result['Volume']) + ')'
|
||||
|
||||
%>
|
||||
<tr class="grade${grade}">
|
||||
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${result['ComicName']}" value="${result['ComicName']}" class="checkbox" />
|
||||
<td id="comicname">${displayname}
|
||||
<tr class="${result['Status']} grade${grade}">
|
||||
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${result['ComicName']}[${result['Volume']}]" value="${result['DynamicName']}" class="checkbox" />
|
||||
<td id="comicname">${displayline}
|
||||
%if result['Status'] != 'Imported':
|
||||
<a href="#issue-box" onclick="return runMetaIssue('${result['ComicName'] |u}','${result['DynamicName']}','${result['Volume']}');" class="issue-window"><img src="interfaces/default/images/files.png" height="15 width="15" title="View files in this group" class="highqual" /></a>
|
||||
<div id="issue-box" class="issue-popup">
|
||||
<a href="#" class="close"><img src="interfaces/default/images/close_pop.png" class="btn_close" title="Close Window" alt="Close" class="highqual" /></a>
|
||||
<div id="responsethis">
|
||||
<label><strong>[ NODATA ]</strong></label></br>
|
||||
</div>
|
||||
</div>
|
||||
%endif
|
||||
%if result['ComicID'] is not None:
|
||||
[${result['ComicID']}]
|
||||
%endif
|
||||
|
@ -111,15 +135,12 @@
|
|||
</td>
|
||||
<td id="importdate">${result['ImportDate']}</td>
|
||||
<td id="addcomic">
|
||||
%if result['Status'] == 'Not Imported':
|
||||
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}&comicid=${result['ComicID']}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
|
||||
%endif
|
||||
[<a href="deleteimport?ComicName=${result['ComicName']}">Remove</a>]
|
||||
%if result['implog'] is not None:
|
||||
[<a class="showlog" title="Display the Import log for ${result['ComicName']}" href="importLog?ComicName=${result['ComicName'] |u}&SRID=${result['SRID']}">Log</a>]
|
||||
%if result['Status'] == 'Not Imported' and result['SRID'] is None:
|
||||
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}&comicid=${result['ComicID']}&volume=${result['Volume']}&dynamicname=${result['DynamicName']}&displayline=${displayline}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
|
||||
%endif
|
||||
[<a href="deleteimport?ComicName=${result['ComicName']}&volume=${result['Volume']}&DynamicName=${result['DynamicName']}&Status=${result['Status']}">Remove</a>]
|
||||
%if result['SRID'] is not None and result['Status'] != 'Imported':
|
||||
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}">Select</a>]
|
||||
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}&DynamicName=${result['DynamicName']}">Select</a>]
|
||||
%endif
|
||||
</td>
|
||||
</tr>
|
||||
|
@ -142,7 +163,61 @@
|
|||
|
||||
<%def name="javascriptIncludes()">
|
||||
<script src="js/libs/jquery.dataTables.min.js"></script>
|
||||
<script type="text/javascript">
|
||||
<script>
|
||||
$(document).ready(function() {
|
||||
$('a.issue-window').click(function() {
|
||||
|
||||
//Getting the variable's value from a link
|
||||
var issueBox = $(this).attr('href');
|
||||
|
||||
//Fade in the Popup
|
||||
$(issueBox).fadeIn(300);
|
||||
|
||||
//Set the center alignment padding + border see css style
|
||||
var popMargTop = ($(issueBox).height() + 24) / 2;
|
||||
var popMargLeft = ($(issueBox).width() + 24) / 2;
|
||||
|
||||
$(issueBox).css({
|
||||
'margin-top' : -popMargTop,
|
||||
'margin-left' : -popMargLeft
|
||||
});
|
||||
|
||||
// Add the mask to body
|
||||
$('body').append('<div id="mask"></div>');
|
||||
$('#mask').fadeIn(300);
|
||||
|
||||
return false;
|
||||
});
|
||||
|
||||
// When clicking on the button close or the mask layer the popup closed
|
||||
$('a.close, #mask').on('click', function() {
|
||||
$('#mask , .issue-popup').fadeOut(300 , function() {
|
||||
$('#mask').remove();
|
||||
});
|
||||
return false;
|
||||
});
|
||||
});
|
||||
</script>
|
||||
<script>
|
||||
function runMetaIssue(comicname, dynamicname, volume) {
|
||||
//alert(comicname);
|
||||
$.ajax({
|
||||
type: "GET",
|
||||
url: "ImportFilelisting",
|
||||
data: { comicname: comicname, dynamicname: dynamicname, volume: volume },
|
||||
success: function(response) {
|
||||
//alert(response)
|
||||
var names = response
|
||||
$('#responsethis').html(response);
|
||||
},
|
||||
error: function(data)
|
||||
{
|
||||
alert('ERROR'+data.responseText);
|
||||
},
|
||||
});
|
||||
}
|
||||
</script>
|
||||
<script>
|
||||
$('.showlog').click(function (event) {
|
||||
var width = 575,
|
||||
height = 400,
|
||||
|
|
|
@@ -15,7 +15,7 @@
<th id="year">Year</th>
<th id="issue">Last Issue</th>
<th id="published">Published</th>
<th id="have_percent">Have %</th>
<th class="hidden" id="have_percent">Have %</th>
<th id="have">Have</th>
<th id="status">Status</th>
<th id="active">Active</th>
@@ -47,7 +47,7 @@
<td id="year"><span title="${comic['ComicYear']}"></span>${comic['ComicYear']}</td>
<td id="issue"><span title="${comic['LatestIssue']}"></span># ${comic['LatestIssue']}</td>
<td id="published">${comic['LatestDate']}</td>
<td id="have_percent">${comic['percent']}</td>
<td class="hidden" id="have_percent">${comic['percent']}</td>
<td id="have"><span title="${comic['percent']}"></span>${css}<div style="width:${comic['percent']}%"><span class="progressbar-front-text">${comic['haveissues']}/${comic['totalissues']}</span></div></td>
<td id="status">${comic['recentstatus']}</td>
<td id="active" align="center">
@ -24,15 +24,51 @@
|
|||
<li><a href="#tabs-3">Advanced Options</a></li>
|
||||
</ul>
|
||||
<div id="tabs-1" class="configtable">
|
||||
<div style="float:right;position:absolute;right:0;top;0;margin-right:50px;">
|
||||
<legend>Current Import Status</legend>
|
||||
%if mylar.IMPORT_STATUS:
|
||||
%if mylar.IMPORT_STATUS == 'Import completed.':
|
||||
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" DISABLED /></br>
|
||||
<script>
|
||||
turnitoff();
|
||||
</script>
|
||||
%else:
|
||||
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" DISABLED /></br>
|
||||
<script>
|
||||
turniton();
|
||||
</script>
|
||||
%endif
|
||||
%else:
|
||||
<script>
|
||||
turnitoff();
|
||||
</script>
|
||||
<input type="text" name="importstatus" id="importstatus" style="text-align:center; font-size:11px;" size="60" value="Import is currently not running" DISABLED /></br>
|
||||
%endif
|
||||
<label>Number of valid files to process: </label>${mylar.IMPORT_TOTALFILES}
|
||||
%if int(mylar.IMPORT_FILES) != int(mylar.IMPORT_TOTALFILES):
|
||||
/ ${mylar.IMPORT_FILES}
|
||||
%endif
|
||||
</br>
|
||||
<label>Files with ComicID's present: </label>${mylar.IMPORT_CID_COUNT}</br>
|
||||
<label>Files that were parsed: </label>${mylar.IMPORT_PARSED_COUNT}</br>
|
||||
<label>Files that couldn't be parsed: </label>${mylar.IMPORT_FAILURE_COUNT}</br>
|
||||
|
||||
%if mylar.IMPORTLOCK:
|
||||
</br></br></br></br>
|
||||
<div class="row checkbox">
|
||||
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="forcescan" id="forcescan" value="1"><label>Existing Import Running. Force this import?</label>
|
||||
</div>
|
||||
%endif
|
||||
</div>
|
||||
<form action="comicScan" method="GET" id="comicScan">
|
||||
<fieldset>
|
||||
<legend>Scan Comic Library</legend>
|
||||
<p><strong>Where do you keep your comics?</strong></p>
|
||||
<p>You can put in any directory, and it will scan for comic files in that folder
|
||||
(including all subdirectories). <br/><small>For example: '/Users/name/Comics'</small></p>
|
||||
<p>You can put in any directory, and it will scan for comics</br>
|
||||
in that folder (including all subdirectories). <br/>
|
||||
<small>For example: '/Users/name/Comics'</small></p>
|
||||
<p>
|
||||
It may take a while depending on how many files you have. You can navigate away from the page<br />
|
||||
as soon as you click 'Save changes'
|
||||
It may take a while depending on how many files you have.</br>
|
||||
You can navigate away from the page as soon as you click 'Save changes'
|
||||
</p>
|
||||
<br/>
|
||||
<div class="row">
|
||||
|
@ -60,12 +96,11 @@
|
|||
<small>Rename files to configuration settings</small>
|
||||
</div>
|
||||
<br/>
|
||||
<input type="button" value="Save Changes and Scan" onclick="addScanAction();doAjaxCall('comicScan',$(this),'tabs',true);return true;" data-success="Changes saved. Library will be scanned" data-always="Sucessfully completed scanning library.">
|
||||
<input type="button" value="Save Changes and Scan" onclick="addScanAction();doAjaxCall('comicScan',$(this),'tabs',true);return true;" data-success="Import Scan now submitted.">
|
||||
<input type="button" value="Save Changes without Scanning Library" onclick="doAjaxCall('comicScan',$(this),'tabs',true);return false;" data-success="Changes Saved Successfully">
|
||||
%if mylar.IMPORTBUTTON:
|
||||
<input type="button" value="Import Results Management" style="float: right;" onclick="location.href='importResults';" />
|
||||
%endif
|
||||
</fieldset>
|
||||
</form>
|
||||
</div>
|
||||
<div id="tabs-2" class="configtable">
|
||||
|
@ -139,10 +174,41 @@
|
|||
</%def>
|
||||
<%def name="javascriptIncludes()">
|
||||
<script>
|
||||
var CheckEnabled = true;
|
||||
function addScanAction() {
|
||||
$('#autoadd').append('<input type="hidden" name="scan" value=1 />');
|
||||
CheckEnabled = true;
|
||||
statuscheck();
|
||||
};
|
||||
function statuscheck() {
|
||||
if (CheckEnabled == true){
|
||||
var ImportTimer = setInterval(function(){
|
||||
$.get('Check_ImportStatus',
|
||||
function(data){
|
||||
if (data.error != undefined) {
|
||||
alert(data.error);
|
||||
return;
|
||||
}
|
||||
$('#importstatus').val(data);
|
||||
if (data == 'Import completed.') {
|
||||
CheckEnabled = false;
|
||||
clearInterval(ImportTimer);
|
||||
return;
|
||||
}
|
||||
});
|
||||
}, 5000);
|
||||
};
|
||||
};
|
||||
function turnitoff() {
|
||||
CheckEnabled = false;
|
||||
clearInterval(ImportTimer);
|
||||
};
|
||||
function turniton() {
|
||||
if (CheckEnabled == false) {
|
||||
CheckEnabled = true;
|
||||
statuscheck();
|
||||
}
|
||||
};
|
||||
|
||||
function initThisPage() {
|
||||
jQuery( "#tabs" ).tabs();
|
||||
initActions();
|
||||
|
|
|
@ -0,0 +1,609 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
import urllib
|
||||
import os.path
|
||||
import time
|
||||
import xmlrpclib
|
||||
|
||||
from common import find_torrent, \
|
||||
is_valid_port, convert_version_tuple_to_str
|
||||
from lib.torrentparser import TorrentParser
|
||||
from lib.xmlrpc.http import HTTPServerProxy
|
||||
from lib.xmlrpc.scgi import SCGIServerProxy
|
||||
from rpc import Method
|
||||
from lib.xmlrpc.basic_auth import BasicAuthTransport
|
||||
from torrent import Torrent
|
||||
from group import Group
|
||||
import rpc # @UnresolvedImport
|
||||
|
||||
__version__ = "0.2.9"
|
||||
__author__ = "Chris Lucas"
|
||||
__contact__ = "chris@chrisjlucas.com"
|
||||
__license__ = "MIT"
|
||||
|
||||
MIN_RTORRENT_VERSION = (0, 8, 1)
|
||||
MIN_RTORRENT_VERSION_STR = convert_version_tuple_to_str(MIN_RTORRENT_VERSION)
|
||||
|
||||
|
||||
class RTorrent:
|
||||
""" Create a new rTorrent connection """
|
||||
rpc_prefix = None
|
||||
|
||||
def __init__(self, uri, username=None, password=None,
|
||||
verify=False, sp=None, sp_kwargs=None):
|
||||
self.uri = uri # : From X{__init__(self, url)}
|
||||
|
||||
self.username = username
|
||||
self.password = password
|
||||
|
||||
self.schema = urllib.splittype(uri)[0]
|
||||
|
||||
if sp:
|
||||
self.sp = sp
|
||||
elif self.schema in ['http', 'https']:
|
||||
self.sp = HTTPServerProxy
|
||||
elif self.schema == 'scgi':
|
||||
self.sp = SCGIServerProxy
|
||||
else:
|
||||
raise NotImplementedError()
|
||||
|
||||
self.sp_kwargs = sp_kwargs or {}
|
||||
|
||||
self.torrents = [] # : List of L{Torrent} instances
|
||||
self._rpc_methods = [] # : List of rTorrent RPC methods
|
||||
self._torrent_cache = []
|
||||
self._client_version_tuple = ()
|
||||
|
||||
if verify is True:
|
||||
self._verify_conn()
|
||||
|
||||
def _get_conn(self):
|
||||
"""Get ServerProxy instance"""
|
||||
if self.username is not None and self.password is not None:
|
||||
if self.schema == 'scgi':
|
||||
raise NotImplementedError()
|
||||
|
||||
return self.sp(
|
||||
self.uri,
|
||||
transport=BasicAuthTransport(self.username, self.password),
|
||||
**self.sp_kwargs
|
||||
)
|
||||
|
||||
return self.sp(self.uri, **self.sp_kwargs)
|
||||
|
||||
def _verify_conn(self):
|
||||
# check for rpc methods that should be available
|
||||
assert "system.client_version" in self._get_rpc_methods(), "Required RPC method not available."
|
||||
assert "system.library_version" in self._get_rpc_methods(), "Required RPC method not available."
|
||||
|
||||
# minimum rTorrent version check
|
||||
assert self._meets_version_requirement() is True,\
|
||||
"Error: Minimum rTorrent version required is {0}".format(
|
||||
MIN_RTORRENT_VERSION_STR)
|
||||
|
||||
def _meets_version_requirement(self):
|
||||
return self._get_client_version_tuple() >= MIN_RTORRENT_VERSION
|
||||
|
||||
def _get_client_version_tuple(self):
|
||||
conn = self._get_conn()
|
||||
|
||||
if not self._client_version_tuple:
|
||||
if not hasattr(self, "client_version"):
|
||||
setattr(self, "client_version",
|
||||
conn.system.client_version())
|
||||
|
||||
rtver = getattr(self, "client_version")
|
||||
self._client_version_tuple = tuple([int(i) for i in
|
||||
rtver.split(".")])
|
||||
|
||||
return self._client_version_tuple
|
||||
|
||||
def _update_rpc_methods(self):
|
||||
self._rpc_methods = self._get_conn().system.listMethods()
|
||||
|
||||
return self._rpc_methods
|
||||
|
||||
def _get_rpc_methods(self):
|
||||
""" Get list of raw RPC commands
|
||||
|
||||
@return: raw RPC commands
|
||||
@rtype: list
|
||||
"""
|
||||
|
||||
return(self._rpc_methods or self._update_rpc_methods())
|
||||
|
||||
def get_torrents(self, view="main"):
|
||||
"""Get list of all torrents in specified view
|
||||
|
||||
@return: list of L{Torrent} instances
|
||||
|
||||
@rtype: list
|
||||
|
||||
@todo: add validity check for specified view
|
||||
"""
|
||||
self.torrents = []
|
||||
methods = torrent.methods
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self)]
|
||||
|
||||
m = rpc.Multicall(self)
|
||||
m.add("d.multicall", view, "d.get_hash=",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result[1:]): # result[0] is the info_hash
|
||||
results_dict[m.varname] = rpc.process_result(m, r)
|
||||
|
||||
self.torrents.append(
|
||||
Torrent(self, info_hash=result[0], **results_dict)
|
||||
)
|
||||
|
||||
self._manage_torrent_cache()
|
||||
return(self.torrents)
|
||||
|
||||
def _manage_torrent_cache(self):
|
||||
"""Carry tracker/peer/file lists over to new torrent list"""
|
||||
for torrent in self._torrent_cache:
|
||||
new_torrent = common.find_torrent(torrent.info_hash,
|
||||
self.torrents)
|
||||
if new_torrent is not None:
|
||||
new_torrent.files = torrent.files
|
||||
new_torrent.peers = torrent.peers
|
||||
new_torrent.trackers = torrent.trackers
|
||||
|
||||
self._torrent_cache = self.torrents
|
||||
|
||||
def _get_load_function(self, file_type, start, verbose):
|
||||
"""Determine correct "load torrent" RPC method"""
|
||||
func_name = None
|
||||
if file_type == "url":
|
||||
# url strings can be input directly
|
||||
if start and verbose:
|
||||
func_name = "load_start_verbose"
|
||||
elif start:
|
||||
func_name = "load_start"
|
||||
elif verbose:
|
||||
func_name = "load_verbose"
|
||||
else:
|
||||
func_name = "load"
|
||||
elif file_type in ["file", "raw"]:
|
||||
if start and verbose:
|
||||
func_name = "load_raw_start_verbose"
|
||||
elif start:
|
||||
func_name = "load_raw_start"
|
||||
elif verbose:
|
||||
func_name = "load_raw_verbose"
|
||||
else:
|
||||
func_name = "load_raw"
|
||||
|
||||
return(func_name)
|
||||
|
||||
def load_torrent(self, torrent, start=False, verbose=False, verify_load=True):
|
||||
"""
|
||||
Loads torrent into rTorrent (with various enhancements)
|
||||
|
||||
@param torrent: can be a url, a path to a local file, or the raw data
|
||||
of a torrent file
|
||||
@type torrent: str
|
||||
|
||||
@param start: start torrent when loaded
|
||||
@type start: bool
|
||||
|
||||
@param verbose: print error messages to rTorrent log
|
||||
@type verbose: bool
|
||||
|
||||
@param verify_load: verify that torrent was added to rTorrent successfully
|
||||
@type verify_load: bool
|
||||
|
||||
@return: Depends on verify_load:
|
||||
- if verify_load is True, (and the torrent was
|
||||
loaded successfully), it'll return a L{Torrent} instance
|
||||
- if verify_load is False, it'll return None
|
||||
|
||||
@rtype: L{Torrent} instance or None
|
||||
|
||||
@raise AssertionError: If the torrent wasn't successfully added to rTorrent
|
||||
- Check L{TorrentParser} for the AssertionError's
|
||||
it raises
|
||||
|
||||
|
||||
@note: Because this function includes url verification (if a url was input)
|
||||
as well as verification as to whether the torrent was successfully added,
|
||||
this function doesn't execute instantaneously. If that's what you're
|
||||
looking for, use load_torrent_simple() instead.
|
||||
"""
|
||||
p = self._get_conn()
|
||||
tp = TorrentParser(torrent)
|
||||
torrent = xmlrpclib.Binary(tp._raw_torrent)
|
||||
info_hash = tp.info_hash
|
||||
|
||||
func_name = self._get_load_function("raw", start, verbose)
|
||||
|
||||
# load torrent
|
||||
getattr(p, func_name)(torrent)
|
||||
|
||||
if verify_load:
|
||||
MAX_RETRIES = 3
|
||||
i = 0
|
||||
while i < MAX_RETRIES:
|
||||
self.get_torrents()
|
||||
if info_hash in [t.info_hash for t in self.torrents]:
|
||||
break
|
||||
|
||||
# was still getting AssertionErrors, delay should help
|
||||
time.sleep(1)
|
||||
i += 1
|
||||
|
||||
assert info_hash in [t.info_hash for t in self.torrents],\
|
||||
"Adding torrent was unsuccessful."
|
||||
|
||||
return(find_torrent(info_hash, self.torrents))
|
||||
|
||||
def load_torrent_simple(self, torrent, file_type,
|
||||
start=False, verbose=False):
|
||||
"""Loads torrent into rTorrent
|
||||
|
||||
@param torrent: can be a url, a path to a local file, or the raw data
|
||||
of a torrent file
|
||||
@type torrent: str
|
||||
|
||||
@param file_type: valid options: "url", "file", or "raw"
|
||||
@type file_type: str
|
||||
|
||||
@param start: start torrent when loaded
|
||||
@type start: bool
|
||||
|
||||
@param verbose: print error messages to rTorrent log
|
||||
@type verbose: bool
|
||||
|
||||
@return: None
|
||||
|
||||
@raise AssertionError: if incorrect file_type is specified
|
||||
|
||||
@note: This function was written for speed, it includes no enhancements.
|
||||
If you input a url, it won't check if it's valid. You also can't get
|
||||
verification that the torrent was successfully added to rTorrent.
|
||||
Use load_torrent() if you would like these features.
|
||||
"""
|
||||
p = self._get_conn()
|
||||
|
||||
assert file_type in ["raw", "file", "url"], \
|
||||
"Invalid file_type, options are: 'url', 'file', 'raw'."
|
||||
func_name = self._get_load_function(file_type, start, verbose)
|
||||
|
||||
if file_type == "file":
|
||||
# since we have to assume we're connected to a remote rTorrent
|
||||
# client, we have to read the file and send it to rT as raw
|
||||
assert os.path.isfile(torrent), \
|
||||
"Invalid path: \"{0}\"".format(torrent)
|
||||
torrent = open(torrent, "rb").read()
|
||||
|
||||
if file_type in ["raw", "file"]:
|
||||
finput = xmlrpclib.Binary(torrent)
|
||||
elif file_type == "url":
|
||||
finput = torrent
|
||||
|
||||
getattr(p, func_name)(finput)
|
||||
|
||||
def get_views(self):
|
||||
p = self._get_conn()
|
||||
return p.view_list()
|
||||
|
||||
def create_group(self, name, persistent=True, view=None):
|
||||
p = self._get_conn()
|
||||
|
||||
if persistent is True:
|
||||
p.group.insert_persistent_view('', name)
|
||||
else:
|
||||
assert view is not None, "view parameter required on non-persistent groups"
|
||||
p.group.insert('', name, view)
|
||||
|
||||
self._update_rpc_methods()
|
||||
|
||||
def get_group(self, name):
|
||||
assert name is not None, "group name required"
|
||||
|
||||
group = Group(self, name)
|
||||
group.update()
|
||||
return group
|
||||
|
||||
def set_dht_port(self, port):
|
||||
"""Set DHT port
|
||||
|
||||
@param port: port
|
||||
@type port: int
|
||||
|
||||
@raise AssertionError: if invalid port is given
|
||||
"""
|
||||
assert is_valid_port(port), "Valid port range is 0-65535"
|
||||
self.dht_port = self._p.set_dht_port(port)
|
||||
|
||||
def enable_check_hash(self):
|
||||
"""Alias for set_check_hash(True)"""
|
||||
self.set_check_hash(True)
|
||||
|
||||
def disable_check_hash(self):
|
||||
"""Alias for set_check_hash(False)"""
|
||||
self.set_check_hash(False)
|
||||
|
||||
def find_torrent(self, info_hash):
|
||||
"""Frontend for rtorrent.common.find_torrent"""
|
||||
return(common.find_torrent(info_hash, self.get_torrents()))
|
||||
|
||||
def poll(self):
|
||||
""" poll rTorrent to get latest torrent/peer/tracker/file information
|
||||
|
||||
@note: This essentially refreshes every aspect of the rTorrent
|
||||
connection, so it can be very slow if working with a remote
|
||||
connection that has a lot of torrents loaded.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self.update()
|
||||
torrents = self.get_torrents()
|
||||
for t in torrents:
|
||||
t.poll()
|
||||
|
||||
def update(self):
|
||||
"""Refresh rTorrent client info
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method)
|
||||
|
||||
multicall.call()
|
||||
|
||||
|
||||
def _build_class_methods(class_obj):
|
||||
# multicall add class
|
||||
caller = lambda self, multicall, method, *args:\
|
||||
multicall.add(method, self.rpc_id, *args)
|
||||
|
||||
caller.__doc__ = """Same as Multicall.add(), but with automatic inclusion
|
||||
of the rpc_id
|
||||
|
||||
@param multicall: A L{Multicall} instance
|
||||
@type: multicall: Multicall
|
||||
|
||||
@param method: L{Method} instance or raw rpc method
|
||||
@type: Method or str
|
||||
|
||||
@param args: optional arguments to pass
|
||||
"""
|
||||
setattr(class_obj, "multicall_add", caller)
|
||||
|
||||
|
||||
def __compare_rpc_methods(rt_new, rt_old):
|
||||
from pprint import pprint
|
||||
rt_new_methods = set(rt_new._get_rpc_methods())
|
||||
rt_old_methods = set(rt_old._get_rpc_methods())
|
||||
print("New Methods:")
|
||||
pprint(rt_new_methods - rt_old_methods)
|
||||
print("Methods not in new rTorrent:")
|
||||
pprint(rt_old_methods - rt_new_methods)
|
||||
|
||||
|
||||
def __check_supported_methods(rt):
|
||||
from pprint import pprint
|
||||
supported_methods = set([m.rpc_call for m in
|
||||
methods +
|
||||
file.methods +
|
||||
torrent.methods +
|
||||
tracker.methods +
|
||||
peer.methods])
|
||||
all_methods = set(rt._get_rpc_methods())
|
||||
|
||||
print("Methods NOT in supported methods")
|
||||
pprint(all_methods - supported_methods)
|
||||
print("Supported methods NOT in all methods")
|
||||
pprint(supported_methods - all_methods)
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(RTorrent, 'get_xmlrpc_size_limit', 'get_xmlrpc_size_limit'),
|
||||
Method(RTorrent, 'get_proxy_address', 'get_proxy_address'),
|
||||
Method(RTorrent, 'get_split_suffix', 'get_split_suffix'),
|
||||
Method(RTorrent, 'get_up_limit', 'get_upload_rate'),
|
||||
Method(RTorrent, 'get_max_memory_usage', 'get_max_memory_usage'),
|
||||
Method(RTorrent, 'get_max_open_files', 'get_max_open_files'),
|
||||
Method(RTorrent, 'get_min_peers_seed', 'get_min_peers_seed'),
|
||||
Method(RTorrent, 'get_use_udp_trackers', 'get_use_udp_trackers'),
|
||||
Method(RTorrent, 'get_preload_min_size', 'get_preload_min_size'),
|
||||
Method(RTorrent, 'get_max_uploads', 'get_max_uploads'),
|
||||
Method(RTorrent, 'get_max_peers', 'get_max_peers'),
|
||||
Method(RTorrent, 'get_timeout_sync', 'get_timeout_sync'),
|
||||
Method(RTorrent, 'get_receive_buffer_size', 'get_receive_buffer_size'),
|
||||
Method(RTorrent, 'get_split_file_size', 'get_split_file_size'),
|
||||
Method(RTorrent, 'get_dht_throttle', 'get_dht_throttle'),
|
||||
Method(RTorrent, 'get_max_peers_seed', 'get_max_peers_seed'),
|
||||
Method(RTorrent, 'get_min_peers', 'get_min_peers'),
|
||||
Method(RTorrent, 'get_tracker_numwant', 'get_tracker_numwant'),
|
||||
Method(RTorrent, 'get_max_open_sockets', 'get_max_open_sockets'),
|
||||
Method(RTorrent, 'get_session', 'get_session'),
|
||||
Method(RTorrent, 'get_ip', 'get_ip'),
|
||||
Method(RTorrent, 'get_scgi_dont_route', 'get_scgi_dont_route'),
|
||||
Method(RTorrent, 'get_hash_read_ahead', 'get_hash_read_ahead'),
|
||||
Method(RTorrent, 'get_http_cacert', 'get_http_cacert'),
|
||||
Method(RTorrent, 'get_dht_port', 'get_dht_port'),
|
||||
Method(RTorrent, 'get_handshake_log', 'get_handshake_log'),
|
||||
Method(RTorrent, 'get_preload_type', 'get_preload_type'),
|
||||
Method(RTorrent, 'get_max_open_http', 'get_max_open_http'),
|
||||
Method(RTorrent, 'get_http_capath', 'get_http_capath'),
|
||||
Method(RTorrent, 'get_max_downloads_global', 'get_max_downloads_global'),
|
||||
Method(RTorrent, 'get_name', 'get_name'),
|
||||
Method(RTorrent, 'get_session_on_completion', 'get_session_on_completion'),
|
||||
Method(RTorrent, 'get_down_limit', 'get_download_rate'),
|
||||
Method(RTorrent, 'get_down_total', 'get_down_total'),
|
||||
Method(RTorrent, 'get_up_rate', 'get_up_rate'),
|
||||
Method(RTorrent, 'get_hash_max_tries', 'get_hash_max_tries'),
|
||||
Method(RTorrent, 'get_peer_exchange', 'get_peer_exchange'),
|
||||
Method(RTorrent, 'get_down_rate', 'get_down_rate'),
|
||||
Method(RTorrent, 'get_connection_seed', 'get_connection_seed'),
|
||||
Method(RTorrent, 'get_http_proxy', 'get_http_proxy'),
|
||||
Method(RTorrent, 'get_stats_preloaded', 'get_stats_preloaded'),
|
||||
Method(RTorrent, 'get_timeout_safe_sync', 'get_timeout_safe_sync'),
|
||||
Method(RTorrent, 'get_hash_interval', 'get_hash_interval'),
|
||||
Method(RTorrent, 'get_port_random', 'get_port_random'),
|
||||
Method(RTorrent, 'get_directory', 'get_directory'),
|
||||
Method(RTorrent, 'get_port_open', 'get_port_open'),
|
||||
Method(RTorrent, 'get_max_file_size', 'get_max_file_size'),
|
||||
Method(RTorrent, 'get_stats_not_preloaded', 'get_stats_not_preloaded'),
|
||||
Method(RTorrent, 'get_memory_usage', 'get_memory_usage'),
|
||||
Method(RTorrent, 'get_connection_leech', 'get_connection_leech'),
|
||||
Method(RTorrent, 'get_check_hash', 'get_check_hash',
|
||||
boolean=True,
|
||||
),
|
||||
Method(RTorrent, 'get_session_lock', 'get_session_lock'),
|
||||
Method(RTorrent, 'get_preload_required_rate', 'get_preload_required_rate'),
|
||||
Method(RTorrent, 'get_max_uploads_global', 'get_max_uploads_global'),
|
||||
Method(RTorrent, 'get_send_buffer_size', 'get_send_buffer_size'),
|
||||
Method(RTorrent, 'get_port_range', 'get_port_range'),
|
||||
Method(RTorrent, 'get_max_downloads_div', 'get_max_downloads_div'),
|
||||
Method(RTorrent, 'get_max_uploads_div', 'get_max_uploads_div'),
|
||||
Method(RTorrent, 'get_safe_sync', 'get_safe_sync'),
|
||||
Method(RTorrent, 'get_bind', 'get_bind'),
|
||||
Method(RTorrent, 'get_up_total', 'get_up_total'),
|
||||
Method(RTorrent, 'get_client_version', 'system.client_version'),
|
||||
Method(RTorrent, 'get_library_version', 'system.library_version'),
|
||||
Method(RTorrent, 'get_api_version', 'system.api_version',
|
||||
min_version=(0, 9, 1)
|
||||
),
|
||||
Method(RTorrent, "get_system_time", "system.time",
|
||||
docstring="""Get the current time of the system rTorrent is running on
|
||||
|
||||
@return: time (posix)
|
||||
@rtype: int""",
|
||||
),
|
||||
|
||||
# MODIFIERS
|
||||
Method(RTorrent, 'set_http_proxy', 'set_http_proxy'),
|
||||
Method(RTorrent, 'set_max_memory_usage', 'set_max_memory_usage'),
|
||||
Method(RTorrent, 'set_max_file_size', 'set_max_file_size'),
|
||||
Method(RTorrent, 'set_bind', 'set_bind',
|
||||
docstring="""Set address bind
|
||||
|
||||
@param arg: ip address
|
||||
@type arg: str
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_up_limit', 'set_upload_rate',
|
||||
docstring="""Set global upload limit (in bytes)
|
||||
|
||||
@param arg: speed limit
|
||||
@type arg: int
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_port_random', 'set_port_random'),
|
||||
Method(RTorrent, 'set_connection_leech', 'set_connection_leech'),
|
||||
Method(RTorrent, 'set_tracker_numwant', 'set_tracker_numwant'),
|
||||
Method(RTorrent, 'set_max_peers', 'set_max_peers'),
|
||||
Method(RTorrent, 'set_min_peers', 'set_min_peers'),
|
||||
Method(RTorrent, 'set_max_uploads_div', 'set_max_uploads_div'),
|
||||
Method(RTorrent, 'set_max_open_files', 'set_max_open_files'),
|
||||
Method(RTorrent, 'set_max_downloads_global', 'set_max_downloads_global'),
|
||||
Method(RTorrent, 'set_session_lock', 'set_session_lock'),
|
||||
Method(RTorrent, 'set_session', 'set_session'),
|
||||
Method(RTorrent, 'set_split_suffix', 'set_split_suffix'),
|
||||
Method(RTorrent, 'set_hash_interval', 'set_hash_interval'),
|
||||
Method(RTorrent, 'set_handshake_log', 'set_handshake_log'),
|
||||
Method(RTorrent, 'set_port_range', 'set_port_range'),
|
||||
Method(RTorrent, 'set_min_peers_seed', 'set_min_peers_seed'),
|
||||
Method(RTorrent, 'set_scgi_dont_route', 'set_scgi_dont_route'),
|
||||
Method(RTorrent, 'set_preload_min_size', 'set_preload_min_size'),
|
||||
Method(RTorrent, 'set_log.tracker', 'set_log.tracker'),
|
||||
Method(RTorrent, 'set_max_uploads_global', 'set_max_uploads_global'),
|
||||
Method(RTorrent, 'set_down_limit', 'set_download_rate',
|
||||
docstring="""Set global download limit (in bytes)
|
||||
|
||||
@param arg: speed limit
|
||||
@type arg: int
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_preload_required_rate', 'set_preload_required_rate'),
|
||||
Method(RTorrent, 'set_hash_read_ahead', 'set_hash_read_ahead'),
|
||||
Method(RTorrent, 'set_max_peers_seed', 'set_max_peers_seed'),
|
||||
Method(RTorrent, 'set_max_uploads', 'set_max_uploads'),
|
||||
Method(RTorrent, 'set_session_on_completion', 'set_session_on_completion'),
|
||||
Method(RTorrent, 'set_max_open_http', 'set_max_open_http'),
|
||||
Method(RTorrent, 'set_directory', 'set_directory'),
|
||||
Method(RTorrent, 'set_http_cacert', 'set_http_cacert'),
|
||||
Method(RTorrent, 'set_dht_throttle', 'set_dht_throttle'),
|
||||
Method(RTorrent, 'set_hash_max_tries', 'set_hash_max_tries'),
|
||||
Method(RTorrent, 'set_proxy_address', 'set_proxy_address'),
|
||||
Method(RTorrent, 'set_split_file_size', 'set_split_file_size'),
|
||||
Method(RTorrent, 'set_receive_buffer_size', 'set_receive_buffer_size'),
|
||||
Method(RTorrent, 'set_use_udp_trackers', 'set_use_udp_trackers'),
|
||||
Method(RTorrent, 'set_connection_seed', 'set_connection_seed'),
|
||||
Method(RTorrent, 'set_xmlrpc_size_limit', 'set_xmlrpc_size_limit'),
|
||||
Method(RTorrent, 'set_xmlrpc_dialect', 'set_xmlrpc_dialect'),
|
||||
Method(RTorrent, 'set_safe_sync', 'set_safe_sync'),
|
||||
Method(RTorrent, 'set_http_capath', 'set_http_capath'),
|
||||
Method(RTorrent, 'set_send_buffer_size', 'set_send_buffer_size'),
|
||||
Method(RTorrent, 'set_max_downloads_div', 'set_max_downloads_div'),
|
||||
Method(RTorrent, 'set_name', 'set_name'),
|
||||
Method(RTorrent, 'set_port_open', 'set_port_open'),
|
||||
Method(RTorrent, 'set_timeout_sync', 'set_timeout_sync'),
|
||||
Method(RTorrent, 'set_peer_exchange', 'set_peer_exchange'),
|
||||
Method(RTorrent, 'set_ip', 'set_ip',
|
||||
docstring="""Set IP
|
||||
|
||||
@param arg: ip address
|
||||
@type arg: str
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_timeout_safe_sync', 'set_timeout_safe_sync'),
|
||||
Method(RTorrent, 'set_preload_type', 'set_preload_type'),
|
||||
Method(RTorrent, 'set_check_hash', 'set_check_hash',
|
||||
docstring="""Enable/Disable hash checking on finished torrents
|
||||
|
||||
@param arg: True to enable, False to disable
|
||||
@type arg: bool
|
||||
""",
|
||||
boolean=True,
|
||||
),
|
||||
]
|
||||
|
||||
_all_methods_list = [methods,
|
||||
file.methods,
|
||||
torrent.methods,
|
||||
tracker.methods,
|
||||
peer.methods,
|
||||
]
|
||||
|
||||
class_methods_pair = {
|
||||
RTorrent: methods,
|
||||
file.File: file.methods,
|
||||
torrent.Torrent: torrent.methods,
|
||||
tracker.Tracker: tracker.methods,
|
||||
peer.Peer: peer.methods,
|
||||
}
|
||||
for c in class_methods_pair.keys():
|
||||
rpc._build_rpc_methods(c, class_methods_pair[c])
|
||||
_build_class_methods(c)
|
|
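For reference, a minimal usage sketch of the RTorrent class defined above, restricted to methods shown in this file; the import path, host and credentials are placeholders:

    # Placeholder connection details; assumes the package is importable as lib.rtorrent.
    from lib.rtorrent import RTorrent

    client = RTorrent('http://localhost:8080/RPC2',
                      username='rtorrent', password='secret',
                      verify=True)   # verify=True checks required RPC methods + version

    # queue a .torrent from a URL without load_torrent()'s extra verification
    client.load_torrent_simple('http://example.com/some.torrent', 'url', start=True)

    # list everything in the default "main" view
    for t in client.get_torrents():
        print(t.info_hash)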
@ -0,0 +1,609 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
import urllib
|
||||
import os.path
|
||||
import time
|
||||
import xmlrpclib
|
||||
|
||||
from rtorrent.common import find_torrent, \
|
||||
is_valid_port, convert_version_tuple_to_str
|
||||
from rtorrent.lib.torrentparser import TorrentParser
|
||||
from rtorrent.lib.xmlrpc.http import HTTPServerProxy
|
||||
from rtorrent.lib.xmlrpc.scgi import SCGIServerProxy
|
||||
from rtorrent.rpc import Method
|
||||
from rtorrent.lib.xmlrpc.basic_auth import BasicAuthTransport
|
||||
from rtorrent.torrent import Torrent
|
||||
from rtorrent.group import Group
|
||||
import rtorrent.rpc # @UnresolvedImport
|
||||
|
||||
__version__ = "0.2.9"
|
||||
__author__ = "Chris Lucas"
|
||||
__contact__ = "chris@chrisjlucas.com"
|
||||
__license__ = "MIT"
|
||||
|
||||
MIN_RTORRENT_VERSION = (0, 8, 1)
|
||||
MIN_RTORRENT_VERSION_STR = convert_version_tuple_to_str(MIN_RTORRENT_VERSION)
|
||||
|
||||
|
||||
class RTorrent:
|
||||
""" Create a new rTorrent connection """
|
||||
rpc_prefix = None
|
||||
|
||||
def __init__(self, uri, username=None, password=None,
|
||||
verify=False, sp=None, sp_kwargs=None):
|
||||
self.uri = uri # : From X{__init__(self, url)}
|
||||
|
||||
self.username = username
|
||||
self.password = password
|
||||
|
||||
self.schema = urllib.splittype(uri)[0]
|
||||
|
||||
if sp:
|
||||
self.sp = sp
|
||||
elif self.schema in ['http', 'https']:
|
||||
self.sp = HTTPServerProxy
|
||||
elif self.schema == 'scgi':
|
||||
self.sp = SCGIServerProxy
|
||||
else:
|
||||
raise NotImplementedError()
|
||||
|
||||
self.sp_kwargs = sp_kwargs or {}
|
||||
|
||||
self.torrents = [] # : List of L{Torrent} instances
|
||||
self._rpc_methods = [] # : List of rTorrent RPC methods
|
||||
self._torrent_cache = []
|
||||
self._client_version_tuple = ()
|
||||
|
||||
if verify is True:
|
||||
self._verify_conn()
|
||||
|
||||
def _get_conn(self):
|
||||
"""Get ServerProxy instance"""
|
||||
if self.username is not None and self.password is not None:
|
||||
if self.schema == 'scgi':
|
||||
raise NotImplementedError()
|
||||
|
||||
return self.sp(
|
||||
self.uri,
|
||||
transport=BasicAuthTransport(self.username, self.password),
|
||||
**self.sp_kwargs
|
||||
)
|
||||
|
||||
return self.sp(self.uri, **self.sp_kwargs)
|
||||
|
||||
def _verify_conn(self):
|
||||
# check for rpc methods that should be available
|
||||
assert "system.client_version" in self._get_rpc_methods(), "Required RPC method not available."
|
||||
assert "system.library_version" in self._get_rpc_methods(), "Required RPC method not available."
|
||||
|
||||
# minimum rTorrent version check
|
||||
assert self._meets_version_requirement() is True,\
|
||||
"Error: Minimum rTorrent version required is {0}".format(
|
||||
MIN_RTORRENT_VERSION_STR)
|
||||
|
||||
def _meets_version_requirement(self):
|
||||
return self._get_client_version_tuple() >= MIN_RTORRENT_VERSION
|
||||
|
||||
def _get_client_version_tuple(self):
|
||||
conn = self._get_conn()
|
||||
|
||||
if not self._client_version_tuple:
|
||||
if not hasattr(self, "client_version"):
|
||||
setattr(self, "client_version",
|
||||
conn.system.client_version())
|
||||
|
||||
rtver = getattr(self, "client_version")
|
||||
self._client_version_tuple = tuple([int(i) for i in
|
||||
rtver.split(".")])
|
||||
|
||||
return self._client_version_tuple
|
||||
|
||||
def _update_rpc_methods(self):
|
||||
self._rpc_methods = self._get_conn().system.listMethods()
|
||||
|
||||
return self._rpc_methods
|
||||
|
||||
def _get_rpc_methods(self):
|
||||
""" Get list of raw RPC commands
|
||||
|
||||
@return: raw RPC commands
|
||||
@rtype: list
|
||||
"""
|
||||
|
||||
return(self._rpc_methods or self._update_rpc_methods())
|
||||
|
||||
def get_torrents(self, view="main"):
|
||||
"""Get list of all torrents in specified view
|
||||
|
||||
@return: list of L{Torrent} instances
|
||||
|
||||
@rtype: list
|
||||
|
||||
@todo: add validity check for specified view
|
||||
"""
|
||||
self.torrents = []
|
||||
methods = rtorrent.torrent.methods
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self)]
|
||||
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
m.add("d.multicall", view, "d.get_hash=",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result[1:]): # result[0] is the info_hash
|
||||
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
|
||||
|
||||
self.torrents.append(
|
||||
Torrent(self, info_hash=result[0], **results_dict)
|
||||
)
|
||||
|
||||
self._manage_torrent_cache()
|
||||
return(self.torrents)
|
||||
|
||||
def _manage_torrent_cache(self):
|
||||
"""Carry tracker/peer/file lists over to new torrent list"""
|
||||
for torrent in self._torrent_cache:
|
||||
new_torrent = rtorrent.common.find_torrent(torrent.info_hash,
|
||||
self.torrents)
|
||||
if new_torrent is not None:
|
||||
new_torrent.files = torrent.files
|
||||
new_torrent.peers = torrent.peers
|
||||
new_torrent.trackers = torrent.trackers
|
||||
|
||||
self._torrent_cache = self.torrents
|
||||
|
||||
def _get_load_function(self, file_type, start, verbose):
|
||||
"""Determine correct "load torrent" RPC method"""
|
||||
func_name = None
|
||||
if file_type == "url":
|
||||
# url strings can be input directly
|
||||
if start and verbose:
|
||||
func_name = "load_start_verbose"
|
||||
elif start:
|
||||
func_name = "load_start"
|
||||
elif verbose:
|
||||
func_name = "load_verbose"
|
||||
else:
|
||||
func_name = "load"
|
||||
elif file_type in ["file", "raw"]:
|
||||
if start and verbose:
|
||||
func_name = "load_raw_start_verbose"
|
||||
elif start:
|
||||
func_name = "load_raw_start"
|
||||
elif verbose:
|
||||
func_name = "load_raw_verbose"
|
||||
else:
|
||||
func_name = "load_raw"
|
||||
|
||||
return(func_name)
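# For reference (derived from the mapping above), a few resolved names:
#
#   self._get_load_function("url", start=True, verbose=False)   # -> "load_start"
#   self._get_load_function("raw", start=False, verbose=True)   # -> "load_raw_verbose"
#   self._get_load_function("file", start=False, verbose=False) # -> "load_raw"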
|
||||
|
||||
def load_torrent(self, torrent, start=False, verbose=False, verify_load=True):
|
||||
"""
|
||||
Loads torrent into rTorrent (with various enhancements)
|
||||
|
||||
@param torrent: can be a url, a path to a local file, or the raw data
|
||||
of a torrent file
|
||||
@type torrent: str
|
||||
|
||||
@param start: start torrent when loaded
|
||||
@type start: bool
|
||||
|
||||
@param verbose: print error messages to rTorrent log
|
||||
@type verbose: bool
|
||||
|
||||
@param verify_load: verify that torrent was added to rTorrent successfully
|
||||
@type verify_load: bool
|
||||
|
||||
@return: Depends on verify_load:
|
||||
- if verify_load is True, (and the torrent was
|
||||
loaded successfully), it'll return a L{Torrent} instance
|
||||
- if verify_load is False, it'll return None
|
||||
|
||||
@rtype: L{Torrent} instance or None
|
||||
|
||||
@raise AssertionError: If the torrent wasn't successfully added to rTorrent
|
||||
- Check L{TorrentParser} for the AssertionError's
|
||||
it raises
|
||||
|
||||
|
||||
@note: Because this function includes url verification (if a url was input)
|
||||
as well as verification as to whether the torrent was successfully added,
|
||||
this function doesn't execute instantaneously. If that's what you're
|
||||
looking for, use load_torrent_simple() instead.
|
||||
"""
|
||||
p = self._get_conn()
|
||||
tp = TorrentParser(torrent)
|
||||
torrent = xmlrpclib.Binary(tp._raw_torrent)
|
||||
info_hash = tp.info_hash
|
||||
|
||||
func_name = self._get_load_function("raw", start, verbose)
|
||||
|
||||
# load torrent
|
||||
getattr(p, func_name)(torrent)
|
||||
|
||||
if verify_load:
|
||||
MAX_RETRIES = 3
|
||||
i = 0
|
||||
while i < MAX_RETRIES:
|
||||
self.get_torrents()
|
||||
if info_hash in [t.info_hash for t in self.torrents]:
|
||||
break
|
||||
|
||||
# was still getting AssertionErrors, delay should help
|
||||
time.sleep(1)
|
||||
i += 1
|
||||
|
||||
assert info_hash in [t.info_hash for t in self.torrents],\
|
||||
"Adding torrent was unsuccessful."
|
||||
|
||||
return(find_torrent(info_hash, self.torrents))
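# Illustrative usage sketch (not part of the original module); the path below
# is hypothetical and `rt` is assumed to be a connected RTorrent instance:
#
#   torrent = rt.load_torrent("/tmp/example.torrent", start=True)
#   if torrent is not None:
#       print(torrent.info_hash)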
|
||||
|
||||
def load_torrent_simple(self, torrent, file_type,
|
||||
start=False, verbose=False):
|
||||
"""Loads torrent into rTorrent
|
||||
|
||||
@param torrent: can be a url, a path to a local file, or the raw data
|
||||
of a torrent file
|
||||
@type torrent: str
|
||||
|
||||
@param file_type: valid options: "url", "file", or "raw"
|
||||
@type file_type: str
|
||||
|
||||
@param start: start torrent when loaded
|
||||
@type start: bool
|
||||
|
||||
@param verbose: print error messages to rTorrent log
|
||||
@type verbose: bool
|
||||
|
||||
@return: None
|
||||
|
||||
@raise AssertionError: if incorrect file_type is specified
|
||||
|
||||
@note: This function was written for speed, it includes no enhancements.
|
||||
If you input a url, it won't check if it's valid. You also can't get
|
||||
verification that the torrent was successfully added to rTorrent.
|
||||
Use load_torrent() if you would like these features.
|
||||
"""
|
||||
p = self._get_conn()
|
||||
|
||||
assert file_type in ["raw", "file", "url"], \
|
||||
"Invalid file_type, options are: 'url', 'file', 'raw'."
|
||||
func_name = self._get_load_function(file_type, start, verbose)
|
||||
|
||||
if file_type == "file":
|
||||
# since we have to assume we're connected to a remote rTorrent
|
||||
# client, we have to read the file and send it to rT as raw
|
||||
assert os.path.isfile(torrent), \
|
||||
"Invalid path: \"{0}\"".format(torrent)
|
||||
torrent = open(torrent, "rb").read()
|
||||
|
||||
if file_type in ["raw", "file"]:
|
||||
finput = xmlrpclib.Binary(torrent)
|
||||
elif file_type == "url":
|
||||
finput = torrent
|
||||
|
||||
getattr(p, func_name)(finput)
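# Illustrative usage sketch (not part of the original module); the URL and
# path are hypothetical. Unlike load_torrent(), no verification is performed:
#
#   rt.load_torrent_simple("http://example.com/file.torrent", "url", start=True)
#   rt.load_torrent_simple("/tmp/example.torrent", "file")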
|
||||
|
||||
def get_views(self):
|
||||
p = self._get_conn()
|
||||
return p.view_list()
|
||||
|
||||
def create_group(self, name, persistent=True, view=None):
|
||||
p = self._get_conn()
|
||||
|
||||
if persistent is True:
|
||||
p.group.insert_persistent_view('', name)
|
||||
else:
|
||||
assert view is not None, "view parameter required on non-persistent groups"
|
||||
p.group.insert('', name, view)
|
||||
|
||||
self._update_rpc_methods()
|
||||
|
||||
def get_group(self, name):
|
||||
assert name is not None, "group name required"
|
||||
|
||||
group = Group(self, name)
|
||||
group.update()
|
||||
return group
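# Illustrative usage sketch (not part of the original module); the group name
# is hypothetical. Group.enable() is defined in rtorrent/group.py below:
#
#   rt.create_group("seedgroup")
#   g = rt.get_group("seedgroup")
#   g.enable()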
|
||||
|
||||
def set_dht_port(self, port):
|
||||
"""Set DHT port
|
||||
|
||||
@param port: port
|
||||
@type port: int
|
||||
|
||||
@raise AssertionError: if invalid port is given
|
||||
"""
|
||||
assert is_valid_port(port), "Valid port range is 0-65535"
|
||||
self.dht_port = self._get_conn().set_dht_port(port)

|
||||
|
||||
def enable_check_hash(self):
|
||||
"""Alias for set_check_hash(True)"""
|
||||
self.set_check_hash(True)
|
||||
|
||||
def disable_check_hash(self):
|
||||
"""Alias for set_check_hash(False)"""
|
||||
self.set_check_hash(False)
|
||||
|
||||
def find_torrent(self, info_hash):
|
||||
"""Frontend for rtorrent.common.find_torrent"""
|
||||
return(rtorrent.common.find_torrent(info_hash, self.get_torrents()))
|
||||
|
||||
def poll(self):
|
||||
""" poll rTorrent to get latest torrent/peer/tracker/file information
|
||||
|
||||
@note: This essentially refreshes every aspect of the rTorrent
|
||||
connection, so it can be very slow if working with a remote
|
||||
connection that has a lot of torrents loaded.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
self.update()
|
||||
torrents = self.get_torrents()
|
||||
for t in torrents:
|
||||
t.poll()
|
||||
|
||||
def update(self):
|
||||
"""Refresh rTorrent client info
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method)
|
||||
|
||||
multicall.call()
|
||||
|
||||
|
||||
def _build_class_methods(class_obj):
|
||||
# multicall add class
|
||||
caller = lambda self, multicall, method, *args:\
|
||||
multicall.add(method, self.rpc_id, *args)
|
||||
|
||||
caller.__doc__ = """Same as Multicall.add(), but with automatic inclusion
|
||||
of the rpc_id
|
||||
|
||||
@param multicall: A L{Multicall} instance
|
||||
@type multicall: Multicall
|
||||
|
||||
@param method: L{Method} instance or raw rpc method
|
||||
@type method: Method or str
|
||||
|
||||
@param args: optional arguments to pass
|
||||
"""
|
||||
setattr(class_obj, "multicall_add", caller)
|
||||
|
||||
|
||||
def __compare_rpc_methods(rt_new, rt_old):
|
||||
from pprint import pprint
|
||||
rt_new_methods = set(rt_new._get_rpc_methods())
|
||||
rt_old_methods = set(rt_old._get_rpc_methods())
|
||||
print("New Methods:")
|
||||
pprint(rt_new_methods - rt_old_methods)
|
||||
print("Methods not in new rTorrent:")
|
||||
pprint(rt_old_methods - rt_new_methods)
|
||||
|
||||
|
||||
def __check_supported_methods(rt):
|
||||
from pprint import pprint
|
||||
supported_methods = set([m.rpc_call for m in
|
||||
methods +
|
||||
rtorrent.file.methods +
|
||||
rtorrent.torrent.methods +
|
||||
rtorrent.tracker.methods +
|
||||
rtorrent.peer.methods])
|
||||
all_methods = set(rt._get_rpc_methods())
|
||||
|
||||
print("Methods NOT in supported methods")
|
||||
pprint(all_methods - supported_methods)
|
||||
print("Supported methods NOT in all methods")
|
||||
pprint(supported_methods - all_methods)
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(RTorrent, 'get_xmlrpc_size_limit', 'get_xmlrpc_size_limit'),
|
||||
Method(RTorrent, 'get_proxy_address', 'get_proxy_address'),
|
||||
Method(RTorrent, 'get_split_suffix', 'get_split_suffix'),
|
||||
Method(RTorrent, 'get_up_limit', 'get_upload_rate'),
|
||||
Method(RTorrent, 'get_max_memory_usage', 'get_max_memory_usage'),
|
||||
Method(RTorrent, 'get_max_open_files', 'get_max_open_files'),
|
||||
Method(RTorrent, 'get_min_peers_seed', 'get_min_peers_seed'),
|
||||
Method(RTorrent, 'get_use_udp_trackers', 'get_use_udp_trackers'),
|
||||
Method(RTorrent, 'get_preload_min_size', 'get_preload_min_size'),
|
||||
Method(RTorrent, 'get_max_uploads', 'get_max_uploads'),
|
||||
Method(RTorrent, 'get_max_peers', 'get_max_peers'),
|
||||
Method(RTorrent, 'get_timeout_sync', 'get_timeout_sync'),
|
||||
Method(RTorrent, 'get_receive_buffer_size', 'get_receive_buffer_size'),
|
||||
Method(RTorrent, 'get_split_file_size', 'get_split_file_size'),
|
||||
Method(RTorrent, 'get_dht_throttle', 'get_dht_throttle'),
|
||||
Method(RTorrent, 'get_max_peers_seed', 'get_max_peers_seed'),
|
||||
Method(RTorrent, 'get_min_peers', 'get_min_peers'),
|
||||
Method(RTorrent, 'get_tracker_numwant', 'get_tracker_numwant'),
|
||||
Method(RTorrent, 'get_max_open_sockets', 'get_max_open_sockets'),
|
||||
Method(RTorrent, 'get_session', 'get_session'),
|
||||
Method(RTorrent, 'get_ip', 'get_ip'),
|
||||
Method(RTorrent, 'get_scgi_dont_route', 'get_scgi_dont_route'),
|
||||
Method(RTorrent, 'get_hash_read_ahead', 'get_hash_read_ahead'),
|
||||
Method(RTorrent, 'get_http_cacert', 'get_http_cacert'),
|
||||
Method(RTorrent, 'get_dht_port', 'get_dht_port'),
|
||||
Method(RTorrent, 'get_handshake_log', 'get_handshake_log'),
|
||||
Method(RTorrent, 'get_preload_type', 'get_preload_type'),
|
||||
Method(RTorrent, 'get_max_open_http', 'get_max_open_http'),
|
||||
Method(RTorrent, 'get_http_capath', 'get_http_capath'),
|
||||
Method(RTorrent, 'get_max_downloads_global', 'get_max_downloads_global'),
|
||||
Method(RTorrent, 'get_name', 'get_name'),
|
||||
Method(RTorrent, 'get_session_on_completion', 'get_session_on_completion'),
|
||||
Method(RTorrent, 'get_down_limit', 'get_download_rate'),
|
||||
Method(RTorrent, 'get_down_total', 'get_down_total'),
|
||||
Method(RTorrent, 'get_up_rate', 'get_up_rate'),
|
||||
Method(RTorrent, 'get_hash_max_tries', 'get_hash_max_tries'),
|
||||
Method(RTorrent, 'get_peer_exchange', 'get_peer_exchange'),
|
||||
Method(RTorrent, 'get_down_rate', 'get_down_rate'),
|
||||
Method(RTorrent, 'get_connection_seed', 'get_connection_seed'),
|
||||
Method(RTorrent, 'get_http_proxy', 'get_http_proxy'),
|
||||
Method(RTorrent, 'get_stats_preloaded', 'get_stats_preloaded'),
|
||||
Method(RTorrent, 'get_timeout_safe_sync', 'get_timeout_safe_sync'),
|
||||
Method(RTorrent, 'get_hash_interval', 'get_hash_interval'),
|
||||
Method(RTorrent, 'get_port_random', 'get_port_random'),
|
||||
Method(RTorrent, 'get_directory', 'get_directory'),
|
||||
Method(RTorrent, 'get_port_open', 'get_port_open'),
|
||||
Method(RTorrent, 'get_max_file_size', 'get_max_file_size'),
|
||||
Method(RTorrent, 'get_stats_not_preloaded', 'get_stats_not_preloaded'),
|
||||
Method(RTorrent, 'get_memory_usage', 'get_memory_usage'),
|
||||
Method(RTorrent, 'get_connection_leech', 'get_connection_leech'),
|
||||
Method(RTorrent, 'get_check_hash', 'get_check_hash',
|
||||
boolean=True,
|
||||
),
|
||||
Method(RTorrent, 'get_session_lock', 'get_session_lock'),
|
||||
Method(RTorrent, 'get_preload_required_rate', 'get_preload_required_rate'),
|
||||
Method(RTorrent, 'get_max_uploads_global', 'get_max_uploads_global'),
|
||||
Method(RTorrent, 'get_send_buffer_size', 'get_send_buffer_size'),
|
||||
Method(RTorrent, 'get_port_range', 'get_port_range'),
|
||||
Method(RTorrent, 'get_max_downloads_div', 'get_max_downloads_div'),
|
||||
Method(RTorrent, 'get_max_uploads_div', 'get_max_uploads_div'),
|
||||
Method(RTorrent, 'get_safe_sync', 'get_safe_sync'),
|
||||
Method(RTorrent, 'get_bind', 'get_bind'),
|
||||
Method(RTorrent, 'get_up_total', 'get_up_total'),
|
||||
Method(RTorrent, 'get_client_version', 'system.client_version'),
|
||||
Method(RTorrent, 'get_library_version', 'system.library_version'),
|
||||
Method(RTorrent, 'get_api_version', 'system.api_version',
|
||||
min_version=(0, 9, 1)
|
||||
),
|
||||
Method(RTorrent, "get_system_time", "system.time",
|
||||
docstring="""Get the current time of the system rTorrent is running on
|
||||
|
||||
@return: time (posix)
|
||||
@rtype: int""",
|
||||
),
|
||||
|
||||
# MODIFIERS
|
||||
Method(RTorrent, 'set_http_proxy', 'set_http_proxy'),
|
||||
Method(RTorrent, 'set_max_memory_usage', 'set_max_memory_usage'),
|
||||
Method(RTorrent, 'set_max_file_size', 'set_max_file_size'),
|
||||
Method(RTorrent, 'set_bind', 'set_bind',
|
||||
docstring="""Set address bind
|
||||
|
||||
@param arg: ip address
|
||||
@type arg: str
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_up_limit', 'set_upload_rate',
|
||||
docstring="""Set global upload limit (in bytes)
|
||||
|
||||
@param arg: speed limit
|
||||
@type arg: int
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_port_random', 'set_port_random'),
|
||||
Method(RTorrent, 'set_connection_leech', 'set_connection_leech'),
|
||||
Method(RTorrent, 'set_tracker_numwant', 'set_tracker_numwant'),
|
||||
Method(RTorrent, 'set_max_peers', 'set_max_peers'),
|
||||
Method(RTorrent, 'set_min_peers', 'set_min_peers'),
|
||||
Method(RTorrent, 'set_max_uploads_div', 'set_max_uploads_div'),
|
||||
Method(RTorrent, 'set_max_open_files', 'set_max_open_files'),
|
||||
Method(RTorrent, 'set_max_downloads_global', 'set_max_downloads_global'),
|
||||
Method(RTorrent, 'set_session_lock', 'set_session_lock'),
|
||||
Method(RTorrent, 'set_session', 'set_session'),
|
||||
Method(RTorrent, 'set_split_suffix', 'set_split_suffix'),
|
||||
Method(RTorrent, 'set_hash_interval', 'set_hash_interval'),
|
||||
Method(RTorrent, 'set_handshake_log', 'set_handshake_log'),
|
||||
Method(RTorrent, 'set_port_range', 'set_port_range'),
|
||||
Method(RTorrent, 'set_min_peers_seed', 'set_min_peers_seed'),
|
||||
Method(RTorrent, 'set_scgi_dont_route', 'set_scgi_dont_route'),
|
||||
Method(RTorrent, 'set_preload_min_size', 'set_preload_min_size'),
|
||||
Method(RTorrent, 'set_log.tracker', 'set_log.tracker'),
|
||||
Method(RTorrent, 'set_max_uploads_global', 'set_max_uploads_global'),
|
||||
Method(RTorrent, 'set_down_limit', 'set_download_rate',
|
||||
docstring="""Set global download limit (in bytes)
|
||||
|
||||
@param arg: speed limit
|
||||
@type arg: int
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_preload_required_rate', 'set_preload_required_rate'),
|
||||
Method(RTorrent, 'set_hash_read_ahead', 'set_hash_read_ahead'),
|
||||
Method(RTorrent, 'set_max_peers_seed', 'set_max_peers_seed'),
|
||||
Method(RTorrent, 'set_max_uploads', 'set_max_uploads'),
|
||||
Method(RTorrent, 'set_session_on_completion', 'set_session_on_completion'),
|
||||
Method(RTorrent, 'set_max_open_http', 'set_max_open_http'),
|
||||
Method(RTorrent, 'set_directory', 'set_directory'),
|
||||
Method(RTorrent, 'set_http_cacert', 'set_http_cacert'),
|
||||
Method(RTorrent, 'set_dht_throttle', 'set_dht_throttle'),
|
||||
Method(RTorrent, 'set_hash_max_tries', 'set_hash_max_tries'),
|
||||
Method(RTorrent, 'set_proxy_address', 'set_proxy_address'),
|
||||
Method(RTorrent, 'set_split_file_size', 'set_split_file_size'),
|
||||
Method(RTorrent, 'set_receive_buffer_size', 'set_receive_buffer_size'),
|
||||
Method(RTorrent, 'set_use_udp_trackers', 'set_use_udp_trackers'),
|
||||
Method(RTorrent, 'set_connection_seed', 'set_connection_seed'),
|
||||
Method(RTorrent, 'set_xmlrpc_size_limit', 'set_xmlrpc_size_limit'),
|
||||
Method(RTorrent, 'set_xmlrpc_dialect', 'set_xmlrpc_dialect'),
|
||||
Method(RTorrent, 'set_safe_sync', 'set_safe_sync'),
|
||||
Method(RTorrent, 'set_http_capath', 'set_http_capath'),
|
||||
Method(RTorrent, 'set_send_buffer_size', 'set_send_buffer_size'),
|
||||
Method(RTorrent, 'set_max_downloads_div', 'set_max_downloads_div'),
|
||||
Method(RTorrent, 'set_name', 'set_name'),
|
||||
Method(RTorrent, 'set_port_open', 'set_port_open'),
|
||||
Method(RTorrent, 'set_timeout_sync', 'set_timeout_sync'),
|
||||
Method(RTorrent, 'set_peer_exchange', 'set_peer_exchange'),
|
||||
Method(RTorrent, 'set_ip', 'set_ip',
|
||||
docstring="""Set IP
|
||||
|
||||
@param arg: ip address
|
||||
@type arg: str
|
||||
""",
|
||||
),
|
||||
Method(RTorrent, 'set_timeout_safe_sync', 'set_timeout_safe_sync'),
|
||||
Method(RTorrent, 'set_preload_type', 'set_preload_type'),
|
||||
Method(RTorrent, 'set_check_hash', 'set_check_hash',
|
||||
docstring="""Enable/Disable hash checking on finished torrents
|
||||
|
||||
@param arg: True to enable, False to disable
|
||||
@type arg: bool
|
||||
""",
|
||||
boolean=True,
|
||||
),
|
||||
]
|
||||
|
||||
_all_methods_list = [methods,
|
||||
rtorrent.file.methods,
|
||||
rtorrent.torrent.methods,
|
||||
rtorrent.tracker.methods,
|
||||
rtorrent.peer.methods,
|
||||
]
|
||||
|
||||
class_methods_pair = {
|
||||
RTorrent: methods,
|
||||
rtorrent.file.File: rtorrent.file.methods,
|
||||
rtorrent.torrent.Torrent: rtorrent.torrent.methods,
|
||||
rtorrent.tracker.Tracker: rtorrent.tracker.methods,
|
||||
rtorrent.peer.Peer: rtorrent.peer.methods,
|
||||
}
|
||||
for c in class_methods_pair.keys():
|
||||
rtorrent.rpc._build_rpc_methods(c, class_methods_pair[c])
|
||||
_build_class_methods(c)
|
|
@@ -0,0 +1,86 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
|
||||
from rtorrent.compat import is_py3
|
||||
|
||||
|
||||
def bool_to_int(value):
|
||||
"""Translates python booleans to RPC-safe integers"""
|
||||
if value is True:
|
||||
return("1")
|
||||
elif value is False:
|
||||
return("0")
|
||||
else:
|
||||
return(value)
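# For reference: bool_to_int(True) -> "1", bool_to_int(False) -> "0"; any
# other value is passed through unchanged, e.g. bool_to_int(5) -> 5.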
|
||||
|
||||
|
||||
def cmd_exists(cmds_list, cmd):
|
||||
"""Check if given command is in list of available commands
|
||||
|
||||
@param cmds_list: see L{RTorrent._rpc_methods}
|
||||
@type cmds_list: list
|
||||
|
||||
@param cmd: name of command to be checked
|
||||
@type cmd: str
|
||||
|
||||
@return: bool
|
||||
"""
|
||||
|
||||
return(cmd in cmds_list)
|
||||
|
||||
|
||||
def find_torrent(info_hash, torrent_list):
|
||||
"""Find torrent file in given list of Torrent classes
|
||||
|
||||
@param info_hash: info hash of torrent
|
||||
@type info_hash: str
|
||||
|
||||
@param torrent_list: list of L{Torrent} instances (see L{RTorrent.get_torrents})
|
||||
@type torrent_list: list
|
||||
|
||||
@return: L{Torrent} instance, or None if not found
|
||||
"""
|
||||
for t in torrent_list:
|
||||
if t.info_hash == info_hash:
|
||||
return t
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def is_valid_port(port):
|
||||
"""Check if given port is valid"""
|
||||
return(0 <= int(port) <= 65535)
|
||||
|
||||
|
||||
def convert_version_tuple_to_str(t):
|
||||
return(".".join([str(n) for n in t]))
|
||||
|
||||
|
||||
def safe_repr(fmt, *args, **kwargs):
|
||||
""" Formatter that handles unicode arguments """
|
||||
|
||||
if not is_py3():
|
||||
# unicode fmt can take str args, str fmt cannot take unicode args
|
||||
fmt = fmt.decode("utf-8")
|
||||
out = fmt.format(*args, **kwargs)
|
||||
return out.encode("utf-8")
|
||||
else:
|
||||
return fmt.format(*args, **kwargs)
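# Illustrative example (not part of the original module): on Python 2 the
# format string is decoded to unicode before formatting so unicode arguments
# don't raise, and the result is returned as a UTF-8 encoded str; on Python 3
# a plain str is returned:
#
#   safe_repr("File(index={0} path=\"{1}\")", 0, u"caf\u00e9")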
|
|
@@ -0,0 +1,30 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import sys
|
||||
|
||||
|
||||
def is_py3():
|
||||
return sys.version_info[0] == 3
|
||||
|
||||
if is_py3():
|
||||
import xmlrpc.client as xmlrpclib
|
||||
else:
|
||||
import xmlrpclib
|
|
@@ -0,0 +1,40 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from rtorrent.common import convert_version_tuple_to_str
|
||||
|
||||
|
||||
class RTorrentVersionError(Exception):
|
||||
def __init__(self, min_version, cur_version):
|
||||
self.min_version = min_version
|
||||
self.cur_version = cur_version
|
||||
self.msg = "Minimum version required: {0}".format(
|
||||
convert_version_tuple_to_str(min_version))
|
||||
|
||||
def __str__(self):
|
||||
return(self.msg)
|
||||
|
||||
|
||||
class MethodError(Exception):
|
||||
def __init__(self, msg):
|
||||
self.msg = msg
|
||||
|
||||
def __str__(self):
|
||||
return(self.msg)
|
|
@@ -0,0 +1,91 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
# from rtorrent.rpc import Method
|
||||
import rtorrent.rpc
|
||||
|
||||
from rtorrent.common import safe_repr
|
||||
|
||||
Method = rtorrent.rpc.Method
|
||||
|
||||
|
||||
class File:
|
||||
"""Represents an individual file within a L{Torrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, index, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent the file is associated with
|
||||
self.index = index # : The position of the file within the file list
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
self.rpc_id = "{0}:f{1}".format(
|
||||
self.info_hash, self.index) # : unique id to pass to rTorrent
|
||||
|
||||
def update(self):
|
||||
"""Refresh file data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("File(index={0} path=\"{1}\")", self.index, self.path)
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(File, 'get_last_touched', 'f.get_last_touched'),
|
||||
Method(File, 'get_range_second', 'f.get_range_second'),
|
||||
Method(File, 'get_size_bytes', 'f.get_size_bytes'),
|
||||
Method(File, 'get_priority', 'f.get_priority'),
|
||||
Method(File, 'get_match_depth_next', 'f.get_match_depth_next'),
|
||||
Method(File, 'is_resize_queued', 'f.is_resize_queued',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'get_range_first', 'f.get_range_first'),
|
||||
Method(File, 'get_match_depth_prev', 'f.get_match_depth_prev'),
|
||||
Method(File, 'get_path', 'f.get_path'),
|
||||
Method(File, 'get_completed_chunks', 'f.get_completed_chunks'),
|
||||
Method(File, 'get_path_components', 'f.get_path_components'),
|
||||
Method(File, 'is_created', 'f.is_created',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'is_open', 'f.is_open',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'get_size_chunks', 'f.get_size_chunks'),
|
||||
Method(File, 'get_offset', 'f.get_offset'),
|
||||
Method(File, 'get_frozen_path', 'f.get_frozen_path'),
|
||||
Method(File, 'get_path_depth', 'f.get_path_depth'),
|
||||
Method(File, 'is_create_queued', 'f.is_create_queued',
|
||||
boolean=True,
|
||||
),
|
||||
|
||||
|
||||
# MODIFIERS
|
||||
]
|
|
@@ -0,0 +1,84 @@
|
|||
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import rtorrent.rpc
|
||||
|
||||
Method = rtorrent.rpc.Method
|
||||
|
||||
|
||||
class Group:
|
||||
__name__ = 'Group'
|
||||
|
||||
def __init__(self, _rt_obj, name):
|
||||
self._rt_obj = _rt_obj
|
||||
self.name = name
|
||||
|
||||
self.methods = [
|
||||
# RETRIEVERS
|
||||
Method(Group, 'get_max', 'group.' + self.name + '.ratio.max', varname='max'),
|
||||
Method(Group, 'get_min', 'group.' + self.name + '.ratio.min', varname='min'),
|
||||
Method(Group, 'get_upload', 'group.' + self.name + '.ratio.upload', varname='upload'),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Group, 'set_max', 'group.' + self.name + '.ratio.max.set', varname='max'),
|
||||
Method(Group, 'set_min', 'group.' + self.name + '.ratio.min.set', varname='min'),
|
||||
Method(Group, 'set_upload', 'group.' + self.name + '.ratio.upload.set', varname='upload')
|
||||
]
|
||||
|
||||
rtorrent.rpc._build_rpc_methods(self, self.methods)
|
||||
|
||||
# Setup multicall_add method
|
||||
caller = lambda multicall, method, *args: \
|
||||
multicall.add(method, *args)
|
||||
setattr(self, "multicall_add", caller)
|
||||
|
||||
def _get_prefix(self):
|
||||
return 'group.' + self.name + '.ratio.'
|
||||
|
||||
def update(self):
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
|
||||
retriever_methods = [m for m in self.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
|
||||
for method in retriever_methods:
|
||||
multicall.add(method)
|
||||
|
||||
multicall.call()
|
||||
|
||||
def enable(self):
|
||||
p = self._rt_obj._get_conn()
|
||||
return getattr(p, self._get_prefix() + 'enable')()
|
||||
|
||||
def disable(self):
|
||||
p = self._rt_obj._get_conn()
|
||||
return getattr(p, self._get_prefix() + 'disable')()
|
||||
|
||||
def set_command(self, *methods):
|
||||
methods = [m + '=' for m in methods]
|
||||
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(
|
||||
m, 'system.method.set',
|
||||
self._get_prefix() + 'command',
|
||||
*methods
|
||||
)
|
||||
|
||||
return(m.call()[-1])
|
|
@@ -0,0 +1,281 @@
|
|||
# Copyright (C) 2011 by clueless <clueless.nospam ! mail.com>
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in
|
||||
# all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
# THE SOFTWARE.
|
||||
#
|
||||
# Version: 20111107
|
||||
#
|
||||
# Changelog
|
||||
# ---------
|
||||
# 2011-11-07 - Added support for Python2 (tested on 2.6)
|
||||
# 2011-10-03 - Fixed: moved check for end of list at the top of the while loop
|
||||
# in _decode_list (in case the list is empty) (Chris Lucas)
|
||||
# - Converted dictionary keys to str
|
||||
# 2011-04-24 - Changed date format to YYYY-MM-DD for versioning, bigger
|
||||
# integer denotes a newer version
|
||||
# - Fixed a bug that would treat False as an integral type but
|
||||
# encode it using the 'False' string, attempting to encode a
|
||||
# boolean now results in an error
|
||||
# - Fixed a bug where an integer value of 0 in a list or
|
||||
# dictionary resulted in a parse error while decoding
|
||||
#
|
||||
# 2011-04-03 - Original release
|
||||
|
||||
import sys
|
||||
|
||||
_py3 = sys.version_info[0] == 3
|
||||
|
||||
if _py3:
|
||||
_VALID_STRING_TYPES = (str,)
|
||||
else:
|
||||
_VALID_STRING_TYPES = (str, unicode) # @UndefinedVariable
|
||||
|
||||
_TYPE_INT = 1
|
||||
_TYPE_STRING = 2
|
||||
_TYPE_LIST = 3
|
||||
_TYPE_DICTIONARY = 4
|
||||
_TYPE_END = 5
|
||||
_TYPE_INVALID = 6
|
||||
|
||||
# Function to determine the type of the next value/item
|
||||
# Arguments:
|
||||
# char First character of the string that is to be decoded
|
||||
# Return value:
|
||||
# Returns an integer that describes what type the next value/item is
|
||||
|
||||
|
||||
def _gettype(char):
|
||||
if not isinstance(char, int):
|
||||
char = ord(char)
|
||||
if char == 0x6C: # 'l'
|
||||
return _TYPE_LIST
|
||||
elif char == 0x64: # 'd'
|
||||
return _TYPE_DICTIONARY
|
||||
elif char == 0x69: # 'i'
|
||||
return _TYPE_INT
|
||||
elif char == 0x65: # 'e'
|
||||
return _TYPE_END
|
||||
elif char >= 0x30 and char <= 0x39: # '0' '9'
|
||||
return _TYPE_STRING
|
||||
else:
|
||||
return _TYPE_INVALID
|
||||
|
||||
# Function to parse a string from the bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be a string
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed string
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_string(data):
|
||||
end = 1
|
||||
# if py3, data[end] is going to be an int
|
||||
# if py2, data[end] will be a string
|
||||
if _py3:
|
||||
char = 0x3A
|
||||
else:
|
||||
char = chr(0x3A)
|
||||
|
||||
while data[end] != char: # ':'
|
||||
end = end + 1
|
||||
strlen = int(data[:end])
|
||||
return (data[end + 1:strlen + end + 1], data[strlen + end + 1:])
|
||||
|
||||
# Function to parse an integer from the bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be an integer
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed integer
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_int(data):
|
||||
end = 1
|
||||
# if py3, data[end] is going to be an int
|
||||
# if py2, data[end] will be a string
|
||||
if _py3:
|
||||
char = 0x65
|
||||
else:
|
||||
char = chr(0x65)
|
||||
|
||||
while data[end] != char: # 'e'
|
||||
end = end + 1
|
||||
return (int(data[1:end]), data[end + 1:])
|
||||
|
||||
# Function to parse a bencoded list
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be the start of a list
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed list
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_list(data):
|
||||
x = []
|
||||
overflow = data[1:]
|
||||
while True: # Loop over the data
|
||||
if _gettype(overflow[0]) == _TYPE_END: # - Break if we reach the end of the list
|
||||
return (x, overflow[1:]) # and return the list and overflow
|
||||
|
||||
value, overflow = _decode(overflow) #
|
||||
if isinstance(value, bool) or overflow == '': # - if we have a parse error
|
||||
return (False, False) # Die with error
|
||||
else: # - Otherwise
|
||||
x.append(value) # add the value to the list
|
||||
|
||||
|
||||
# Function to parse a bencoded dictionary
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be the start of a dictionary
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed dictionary
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
def _decode_dict(data):
|
||||
x = {}
|
||||
overflow = data[1:]
|
||||
while True: # Loop over the data
|
||||
if _gettype(overflow[0]) != _TYPE_STRING: # - If the key is not a string
|
||||
return (False, False) # Die with error
|
||||
key, overflow = _decode(overflow) #
|
||||
if key == False or overflow == '': # - If parse error
|
||||
return (False, False) # Die with error
|
||||
value, overflow = _decode(overflow) #
|
||||
if isinstance(value, bool) or overflow == '': # - If parse error
|
||||
print("Error parsing value")
|
||||
print(value)
|
||||
print(overflow)
|
||||
return (False, False) # Die with error
|
||||
else:
|
||||
# don't use bytes for the key
|
||||
key = key.decode()
|
||||
x[key] = value
|
||||
if _gettype(overflow[0]) == _TYPE_END:
|
||||
return (x, overflow[1:])
|
||||
|
||||
# Arguments:
|
||||
# data bencoded data in bytes format
|
||||
# Return Values:
|
||||
# Returns a tuple, the first member is the parsed data, could be a string,
|
||||
# an integer, a list or a dictionary, or a combination of those
|
||||
# The second member is the leftover of parsing, if everything parses correctly this
|
||||
# should be an empty byte string
|
||||
|
||||
|
||||
def _decode(data):
|
||||
btype = _gettype(data[0])
|
||||
if btype == _TYPE_INT:
|
||||
return _decode_int(data)
|
||||
elif btype == _TYPE_STRING:
|
||||
return _decode_string(data)
|
||||
elif btype == _TYPE_LIST:
|
||||
return _decode_list(data)
|
||||
elif btype == _TYPE_DICTIONARY:
|
||||
return _decode_dict(data)
|
||||
else:
|
||||
return (False, False)
|
||||
|
||||
# Function to decode bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, can be str or bytes
|
||||
# Return Values:
|
||||
# Returns the decoded data on success, this could be bytes, int, dict or list
|
||||
# or a combination of those
|
||||
# If an error occurs the return value is False
|
||||
|
||||
|
||||
def decode(data):
|
||||
# if isinstance(data, str):
|
||||
# data = data.encode()
|
||||
decoded, overflow = _decode(data)
|
||||
return decoded
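# Illustrative example (not part of the original module):
#
#   decode(b"d3:foo3:bar3:numi42ee")   # -> {'foo': b'bar', 'num': 42} (str values on Python 2)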
|
||||
|
||||
# Args: data as integer
|
||||
# return: encoded byte string
|
||||
|
||||
|
||||
def _encode_int(data):
|
||||
return b'i' + str(data).encode() + b'e'
|
||||
|
||||
# Args: data as string or bytes
|
||||
# Return: encoded byte string
|
||||
|
||||
|
||||
def _encode_string(data):
|
||||
return str(len(data)).encode() + b':' + data
|
||||
|
||||
# Args: data as list
|
||||
# Return: Encoded byte string, false on error
|
||||
|
||||
|
||||
def _encode_list(data):
|
||||
elist = b'l'
|
||||
for item in data:
|
||||
eitem = encode(item)
|
||||
if eitem == False:
|
||||
return False
|
||||
elist += eitem
|
||||
return elist + b'e'
|
||||
|
||||
# Args: data as dict
|
||||
# Return: encoded byte string, false on error
|
||||
|
||||
|
||||
def _encode_dict(data):
|
||||
edict = b'd'
|
||||
keys = []
|
||||
for key in data:
|
||||
if not isinstance(key, _VALID_STRING_TYPES) and not isinstance(key, bytes):
|
||||
return False
|
||||
keys.append(key)
|
||||
keys.sort()
|
||||
for key in keys:
|
||||
ekey = encode(key)
|
||||
eitem = encode(data[key])
|
||||
if ekey == False or eitem == False:
|
||||
return False
|
||||
edict += ekey + eitem
|
||||
return edict + b'e'
|
||||
|
||||
# Function to encode a variable in bencoding
|
||||
# Arguments:
|
||||
# data Variable to be encoded, can be a list, dict, str, bytes, int or a combination of those
|
||||
# Return Values:
|
||||
# Returns the encoded data as a byte string when successful
|
||||
# If an error occurs the return value is False
|
||||
|
||||
|
||||
def encode(data):
|
||||
if isinstance(data, bool):
|
||||
return False
|
||||
elif isinstance(data, int):
|
||||
return _encode_int(data)
|
||||
elif isinstance(data, bytes):
|
||||
return _encode_string(data)
|
||||
elif isinstance(data, _VALID_STRING_TYPES):
|
||||
return _encode_string(data.encode())
|
||||
elif isinstance(data, list):
|
||||
return _encode_list(data)
|
||||
elif isinstance(data, dict):
|
||||
return _encode_dict(data)
|
||||
else:
|
||||
return False
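# Illustrative example (not part of the original module): encode() is the
# inverse of decode() for supported types; booleans are rejected:
#
#   encode({"num": 42, "foo": b"bar"})   # -> b'd3:foo3:bar3:numi42ee'
#   encode(True)                         # -> False (booleans are not encodable)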
|
|
@@ -0,0 +1,160 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from rtorrent.compat import is_py3
|
||||
import os.path
|
||||
import re
|
||||
import rtorrent.lib.bencode as bencode
|
||||
import hashlib
|
||||
|
||||
if is_py3():
|
||||
from urllib.request import urlopen # @UnresolvedImport @UnusedImport
|
||||
else:
|
||||
from urllib2 import urlopen # @UnresolvedImport @Reimport
|
||||
|
||||
|
||||
class TorrentParser():
|
||||
def __init__(self, torrent):
|
||||
"""Decode and parse given torrent
|
||||
|
||||
@param torrent: handles: urls, file paths, string of torrent data
|
||||
@type torrent: str
|
||||
|
||||
@raise AssertionError: Can be raised for a couple reasons:
|
||||
- If _get_raw_torrent() couldn't figure out
|
||||
what X{torrent} is
|
||||
- if X{torrent} isn't a valid bencoded torrent file
|
||||
"""
|
||||
self.torrent = torrent
|
||||
self._raw_torrent = None  # : raw bencoded torrent data
|
||||
self._torrent_decoded = None  # : decoded torrent dictionary
|
||||
self.file_type = None
|
||||
|
||||
self._get_raw_torrent()
|
||||
assert self._raw_torrent is not None, "Couldn't get raw_torrent."
|
||||
if self._torrent_decoded is None:
|
||||
self._decode_torrent()
|
||||
assert isinstance(self._torrent_decoded, dict), "Invalid torrent file."
|
||||
self._parse_torrent()
|
||||
|
||||
def _is_raw(self):
|
||||
raw = False
|
||||
if isinstance(self.torrent, (str, bytes)):
|
||||
if isinstance(self._decode_torrent(self.torrent), dict):
|
||||
raw = True
|
||||
else:
|
||||
# reset self._torrent_decoded (currently equals False)
|
||||
self._torrent_decoded = None
|
||||
|
||||
return(raw)
|
||||
|
||||
def _get_raw_torrent(self):
|
||||
"""Get raw torrent data by determining what self.torrent is"""
|
||||
# already raw?
|
||||
if self._is_raw():
|
||||
self.file_type = "raw"
|
||||
self._raw_torrent = self.torrent
|
||||
return
|
||||
# local file?
|
||||
if os.path.isfile(self.torrent):
|
||||
self.file_type = "file"
|
||||
self._raw_torrent = open(self.torrent, "rb").read()
|
||||
# url?
|
||||
elif re.search("^(http|ftp):\/\/", self.torrent, re.I):
|
||||
self.file_type = "url"
|
||||
self._raw_torrent = urlopen(self.torrent).read()
|
||||
|
||||
def _decode_torrent(self, raw_torrent=None):
|
||||
if raw_torrent is None:
|
||||
raw_torrent = self._raw_torrent
|
||||
self._torrent_decoded = bencode.decode(raw_torrent)
|
||||
return(self._torrent_decoded)
|
||||
|
||||
def _calc_info_hash(self):
|
||||
self.info_hash = None
|
||||
if "info" in self._torrent_decoded.keys():
|
||||
info_encoded = bencode.encode(self._torrent_decoded["info"])
|
||||
|
||||
if info_encoded:
|
||||
self.info_hash = hashlib.sha1(info_encoded).hexdigest().upper()
|
||||
|
||||
return(self.info_hash)
|
||||
|
||||
def _parse_torrent(self):
|
||||
for k in self._torrent_decoded:
|
||||
key = k.replace(" ", "_").lower()
|
||||
setattr(self, key, self._torrent_decoded[k])
|
||||
|
||||
self._calc_info_hash()
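# Illustrative usage sketch (not part of the original module); the path is
# hypothetical:
#
#   tp = TorrentParser("/tmp/example.torrent")
#   print(tp.info_hash)   # SHA1 of the bencoded info dict, upper-cased hex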
|
||||
|
||||
|
||||
class NewTorrentParser(object):
|
||||
@staticmethod
|
||||
def _read_file(fp):
|
||||
return fp.read()
|
||||
|
||||
@staticmethod
|
||||
def _write_file(fp, data):
|
||||
fp.write(data)
|
||||
return fp
|
||||
|
||||
@staticmethod
|
||||
def _decode_torrent(data):
|
||||
return bencode.decode(data)
|
||||
|
||||
def __init__(self, input):
|
||||
self.input = input
|
||||
self._raw_torrent = None
|
||||
self._decoded_torrent = None
|
||||
self._hash_outdated = False
|
||||
|
||||
if isinstance(self.input, (str, bytes)):
|
||||
# path to file?
|
||||
if os.path.isfile(self.input):
|
||||
self._raw_torrent = self._read_file(open(self.input, "rb"))
|
||||
else:
|
||||
# assume input was the raw torrent data (do we really want
|
||||
# this?)
|
||||
self._raw_torrent = self.input
|
||||
|
||||
# file-like object?
|
||||
elif hasattr(self.input, "read"):
|
||||
self._raw_torrent = self._read_file(self.input)
|
||||
|
||||
assert self._raw_torrent is not None, "Invalid input: input must be a path or a file-like object"
|
||||
|
||||
self._decoded_torrent = self._decode_torrent(self._raw_torrent)
|
||||
|
||||
assert isinstance(
|
||||
self._decoded_torrent, dict), "File could not be decoded"
|
||||
|
||||
def _calc_info_hash(self):
|
||||
self.info_hash = None
|
||||
info_dict = self._decoded_torrent["info"]
|
||||
self.info_hash = hashlib.sha1(bencode.encode(
|
||||
info_dict)).hexdigest().upper()
|
||||
|
||||
return(self.info_hash)
|
||||
|
||||
def set_tracker(self, tracker):
|
||||
self._decoded_torrent["announce"] = tracker
|
||||
|
||||
def get_tracker(self):
|
||||
return self._decoded_torrent.get("announce")
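# Illustrative usage sketch (not part of the original module); the path and
# tracker URL are hypothetical:
#
#   ntp = NewTorrentParser("/tmp/example.torrent")
#   ntp.set_tracker("http://tracker.example.com/announce")
#   print(ntp.get_tracker())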
|
|
@@ -0,0 +1,73 @@
|
|||
#
|
||||
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from base64 import encodestring
|
||||
import string
|
||||
import xmlrpclib
|
||||
|
||||
|
||||
class BasicAuthTransport(xmlrpclib.Transport):
|
||||
def __init__(self, username=None, password=None):
|
||||
xmlrpclib.Transport.__init__(self)
|
||||
|
||||
self.username = username
|
||||
self.password = password
|
||||
|
||||
def send_auth(self, h):
|
||||
if self.username is not None and self.password is not None:
|
||||
h.putheader('AUTHORIZATION', "Basic %s" % string.replace(
|
||||
encodestring("%s:%s" % (self.username, self.password)),
|
||||
"\012", ""
|
||||
))
|
||||
|
||||
def single_request(self, host, handler, request_body, verbose=0):
|
||||
# issue XML-RPC request
|
||||
|
||||
h = self.make_connection(host)
|
||||
if verbose:
|
||||
h.set_debuglevel(1)
|
||||
|
||||
try:
|
||||
self.send_request(h, handler, request_body)
|
||||
self.send_host(h, host)
|
||||
self.send_user_agent(h)
|
||||
self.send_auth(h)
|
||||
self.send_content(h, request_body)
|
||||
|
||||
response = h.getresponse(buffering=True)
|
||||
if response.status == 200:
|
||||
self.verbose = verbose
|
||||
return self.parse_response(response)
|
||||
except xmlrpclib.Fault:
|
||||
raise
|
||||
except Exception:
|
||||
self.close()
|
||||
raise
|
||||
|
||||
#discard any response data and raise exception
|
||||
if response.getheader("content-length", 0):
|
||||
response.read()
|
||||
raise xmlrpclib.ProtocolError(
|
||||
host + handler,
|
||||
response.status, response.reason,
|
||||
response.msg,
|
||||
)
|
|
@@ -0,0 +1,23 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from rtorrent.compat import xmlrpclib
|
||||
|
||||
HTTPServerProxy = xmlrpclib.ServerProxy
|
|
@@ -0,0 +1,219 @@
|
|||
#!/usr/bin/python
|
||||
|
||||
# rtorrent_xmlrpc
|
||||
# (c) 2011 Roger Que <alerante@bellsouth.net>
|
||||
#
|
||||
# Modified portions:
|
||||
# (c) 2013 Dean Gardiner <gardiner91@gmail.com>
|
||||
#
|
||||
# Python module for interacting with rtorrent's XML-RPC interface
|
||||
# directly over SCGI, instead of through an HTTP server intermediary.
|
||||
# Inspired by Glenn Washburn's xmlrpc2scgi.py [1], but subclasses the
|
||||
# built-in xmlrpclib classes so that it is compatible with features
|
||||
# such as MultiCall objects.
|
||||
#
|
||||
# [1] <http://libtorrent.rakshasa.no/wiki/UtilsXmlrpc2scgi>
|
||||
#
|
||||
# Usage: server = SCGIServerProxy('scgi://localhost:7000/')
|
||||
# server = SCGIServerProxy('scgi:///path/to/scgi.sock')
|
||||
# print server.system.listMethods()
|
||||
# mc = xmlrpclib.MultiCall(server)
|
||||
# mc.get_up_rate()
|
||||
# mc.get_down_rate()
|
||||
# print mc()
|
||||
#
|
||||
#
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation; either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program; if not, write to the Free Software
|
||||
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
|
||||
#
|
||||
# In addition, as a special exception, the copyright holders give
|
||||
# permission to link the code of portions of this program with the
# OpenSSL library under certain conditions as described in each
# individual source file, and distribute linked combinations
# including the two.
#
# You must obey the GNU General Public License in all respects for
# all of the code used other than OpenSSL. If you modify file(s)
# with this exception, you may extend this exception to your version
# of the file(s), but you are not obligated to do so. If you do not
# wish to do so, delete this exception statement from your version.
# If you delete this exception statement from all source files in the
# program, then also delete it here.
#
#
#
# Portions based on Python's xmlrpclib:
#
# Copyright (c) 1999-2002 by Secret Labs AB
# Copyright (c) 1999-2002 by Fredrik Lundh
#
# By obtaining, using, and/or copying this software and/or its
# associated documentation, you agree that you have read, understood,
# and will comply with the following terms and conditions:
#
# Permission to use, copy, modify, and distribute this software and
# its associated documentation for any purpose and without fee is
# hereby granted, provided that the above copyright notice appears in
# all copies, and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of
# Secret Labs AB or the author not be used in advertising or publicity
# pertaining to distribution of the software without specific, written
# prior permission.
#
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.

import httplib
import re
import socket
import urllib
import xmlrpclib
import errno


class SCGITransport(xmlrpclib.Transport):
    # Added request() from Python 2.7 xmlrpclib here to backport to Python 2.6
    def request(self, host, handler, request_body, verbose=0):
        #retry request once if cached connection has gone cold
        for i in (0, 1):
            try:
                return self.single_request(host, handler, request_body, verbose)
            except socket.error, e:
                if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
                    raise
            except httplib.BadStatusLine: #close after we sent request
                if i:
                    raise

    def single_request(self, host, handler, request_body, verbose=0):
        # Add SCGI headers to the request.
        headers = {'CONTENT_LENGTH': str(len(request_body)), 'SCGI': '1'}
        header = '\x00'.join(('%s\x00%s' % item for item in headers.iteritems())) + '\x00'
        header = '%d:%s' % (len(header), header)
        request_body = '%s,%s' % (header, request_body)

        sock = None

        try:
            if host:
                host, port = urllib.splitport(host)
                addrinfo = socket.getaddrinfo(host, int(port), socket.AF_INET,
                                              socket.SOCK_STREAM)
                sock = socket.socket(*addrinfo[0][:3])
                sock.connect(addrinfo[0][4])
            else:
                sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
                sock.connect(handler)

            self.verbose = verbose

            sock.send(request_body)
            return self.parse_response(sock.makefile())
        finally:
            if sock:
                sock.close()

    def parse_response(self, response):
        p, u = self.getparser()

        response_body = ''
        while True:
            data = response.read(1024)
            if not data:
                break
            response_body += data

        # Remove SCGI headers from the response.
        response_header, response_body = re.split(r'\n\s*?\n', response_body,
                                                  maxsplit=1)

        if self.verbose:
            print 'body:', repr(response_body)

        p.feed(response_body)
        p.close()

        return u.close()


class SCGIServerProxy(xmlrpclib.ServerProxy):
    def __init__(self, uri, transport=None, encoding=None, verbose=False,
                 allow_none=False, use_datetime=False):
        type, uri = urllib.splittype(uri)
        if type not in ('scgi'):
            raise IOError('unsupported XML-RPC protocol')
        self.__host, self.__handler = urllib.splithost(uri)
        if not self.__handler:
            self.__handler = '/'

        if transport is None:
            transport = SCGITransport(use_datetime=use_datetime)
        self.__transport = transport

        self.__encoding = encoding
        self.__verbose = verbose
        self.__allow_none = allow_none

    def __close(self):
        self.__transport.close()

    def __request(self, methodname, params):
        # call a method on the remote server

        request = xmlrpclib.dumps(params, methodname, encoding=self.__encoding,
                                  allow_none=self.__allow_none)

        response = self.__transport.request(
            self.__host,
            self.__handler,
            request,
            verbose=self.__verbose
        )

        if len(response) == 1:
            response = response[0]

        return response

    def __repr__(self):
        return (
            "<SCGIServerProxy for %s%s>" %
            (self.__host, self.__handler)
        )

    __str__ = __repr__

    def __getattr__(self, name):
        # magic method dispatcher
        return xmlrpclib._Method(self.__request, name)

    # note: to call a remote object with a non-standard name, use
    # result getattr(server, "strange-python-name")(args)

    def __call__(self, attr):
        """A workaround to get special attributes on the ServerProxy
        without interfering with the magic __getattr__
        """
        if attr == "close":
            return self.__close
        elif attr == "transport":
            return self.__transport
        raise AttributeError("Attribute %r not found" % (attr,))
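# ---------------------------------------------------------------------------
# Editor's note: minimal usage sketch, not part of the upstream module. The
# endpoint below is an assumption -- point it at rTorrent's scgi_port (TCP)
# or scgi_local (UNIX socket) setting.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    # TCP form: 'scgi://host:port'; UNIX-socket form: 'scgi:///path/to/socket'
    server = SCGIServerProxy('scgi://localhost:5000')
    # Any rTorrent XML-RPC method can be called through the magic __getattr__,
    # with the request tunnelled over SCGI by SCGITransport.
    print server.system.client_version()
    # Special attributes are reached via __call__ so __getattr__ stays magic:
    print server('transport')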
@@ -0,0 +1,98 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

# from rtorrent.rpc import Method
import rtorrent.rpc

from rtorrent.common import safe_repr

Method = rtorrent.rpc.Method


class Peer:
    """Represents an individual peer within a L{Torrent} instance."""
    def __init__(self, _rt_obj, info_hash, **kwargs):
        self._rt_obj = _rt_obj
        self.info_hash = info_hash  # : info hash for the torrent the peer is associated with
        for k in kwargs.keys():
            setattr(self, k, kwargs.get(k, None))

        self.rpc_id = "{0}:p{1}".format(
            self.info_hash, self.id)  # : unique id to pass to rTorrent

    def __repr__(self):
        return safe_repr("Peer(id={0})", self.id)

    def update(self):
        """Refresh peer data

        @note: All fields are stored as attributes to self.

        @return: None
        """
        multicall = rtorrent.rpc.Multicall(self)
        retriever_methods = [m for m in methods
                             if m.is_retriever() and m.is_available(self._rt_obj)]
        for method in retriever_methods:
            multicall.add(method, self.rpc_id)

        multicall.call()

methods = [
    # RETRIEVERS
    Method(Peer, 'is_preferred', 'p.is_preferred',
           boolean=True,
           ),
    Method(Peer, 'get_down_rate', 'p.get_down_rate'),
    Method(Peer, 'is_unwanted', 'p.is_unwanted',
           boolean=True,
           ),
    Method(Peer, 'get_peer_total', 'p.get_peer_total'),
    Method(Peer, 'get_peer_rate', 'p.get_peer_rate'),
    Method(Peer, 'get_port', 'p.get_port'),
    Method(Peer, 'is_snubbed', 'p.is_snubbed',
           boolean=True,
           ),
    Method(Peer, 'get_id_html', 'p.get_id_html'),
    Method(Peer, 'get_up_rate', 'p.get_up_rate'),
    Method(Peer, 'is_banned', 'p.banned',
           boolean=True,
           ),
    Method(Peer, 'get_completed_percent', 'p.get_completed_percent'),
    Method(Peer, 'completed_percent', 'p.completed_percent'),
    Method(Peer, 'get_id', 'p.get_id'),
    Method(Peer, 'is_obfuscated', 'p.is_obfuscated',
           boolean=True,
           ),
    Method(Peer, 'get_down_total', 'p.get_down_total'),
    Method(Peer, 'get_client_version', 'p.get_client_version'),
    Method(Peer, 'get_address', 'p.get_address'),
    Method(Peer, 'is_incoming', 'p.is_incoming',
           boolean=True,
           ),
    Method(Peer, 'is_encrypted', 'p.is_encrypted',
           boolean=True,
           ),
    Method(Peer, 'get_options_str', 'p.get_options_str'),
    Method(Peer, 'get_client_version', 'p.client_version'),
    Method(Peer, 'get_up_total', 'p.get_up_total'),

    # MODIFIERS
]
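# ---------------------------------------------------------------------------
# Editor's note: hedged sketch, not part of the upstream file. Peer objects are
# normally built by Torrent.get_peers() (see torrent.py below); assuming 't' is
# a Torrent obtained from a connected RTorrent client:
#
#   for p in t.get_peers():
#       print p.address, p.port, p.down_rate
#
# The attribute names (address, port, down_rate, ...) are derived from each
# Method's rpc_call by rtorrent.rpc.get_varname() and set by Multicall.call().
# ---------------------------------------------------------------------------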
@@ -0,0 +1,319 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import inspect
|
||||
import rtorrent
|
||||
import re
|
||||
from rtorrent.common import bool_to_int, convert_version_tuple_to_str,\
|
||||
safe_repr
|
||||
from rtorrent.err import MethodError
|
||||
from rtorrent.compat import xmlrpclib
|
||||
|
||||
|
||||
def get_varname(rpc_call):
|
||||
"""Transform rpc method into variable name.
|
||||
|
||||
@newfield example: Example
|
||||
@example: if the name of the rpc method is 'p.get_down_rate', the variable
|
||||
name will be 'down_rate'
|
||||
"""
|
||||
# extract variable name from xmlrpc func name
|
||||
r = re.search(
|
||||
"([ptdf]\.|system\.|get\_|is\_|set\_)+([^=]*)", rpc_call, re.I)
|
||||
if r:
|
||||
return(r.groups()[-1])
|
||||
else:
|
||||
return(None)
|
||||
|
||||
|
||||
def _handle_unavailable_rpc_method(method, rt_obj):
|
||||
msg = "Method isn't available."
|
||||
if rt_obj._get_client_version_tuple() < method.min_version:
|
||||
msg = "This method is only available in " \
|
||||
"RTorrent version v{0} or later".format(
|
||||
convert_version_tuple_to_str(method.min_version))
|
||||
|
||||
raise MethodError(msg)
|
||||
|
||||
|
||||
class DummyClass:
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
|
||||
class Method:
|
||||
"""Represents an individual RPC method"""
|
||||
|
||||
def __init__(self, _class, method_name,
|
||||
rpc_call, docstring=None, varname=None, **kwargs):
|
||||
self._class = _class # : Class this method is associated with
|
||||
self.class_name = _class.__name__
|
||||
self.method_name = method_name # : name of public-facing method
|
||||
self.rpc_call = rpc_call # : name of rpc method
|
||||
self.docstring = docstring # : docstring for rpc method (optional)
|
||||
self.varname = varname # : variable for the result of the method call, usually set to self.varname
|
||||
self.min_version = kwargs.get("min_version", (
|
||||
0, 0, 0)) # : Minimum version of rTorrent required
|
||||
self.boolean = kwargs.get("boolean", False) # : returns boolean value?
|
||||
self.post_process_func = kwargs.get(
|
||||
"post_process_func", None) # : custom post process function
|
||||
self.aliases = kwargs.get(
|
||||
"aliases", []) # : aliases for method (optional)
|
||||
self.required_args = []
|
||||
#: Arguments required when calling the method (not utilized)
|
||||
|
||||
self.method_type = self._get_method_type()
|
||||
|
||||
if self.varname is None:
|
||||
self.varname = get_varname(self.rpc_call)
|
||||
assert self.varname is not None, "Couldn't get variable name."
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Method(method_name='{0}', rpc_call='{1}')",
|
||||
self.method_name, self.rpc_call)
|
||||
|
||||
def _get_method_type(self):
|
||||
"""Determine whether method is a modifier or a retriever"""
|
||||
if self.method_name[:4] == "set_": return('m') # modifier
|
||||
else:
|
||||
return('r') # retriever
|
||||
|
||||
def is_modifier(self):
|
||||
if self.method_type == 'm':
|
||||
return(True)
|
||||
else:
|
||||
return(False)
|
||||
|
||||
def is_retriever(self):
|
||||
if self.method_type == 'r':
|
||||
return(True)
|
||||
else:
|
||||
return(False)
|
||||
|
||||
def is_available(self, rt_obj):
|
||||
if rt_obj._get_client_version_tuple() < self.min_version or \
|
||||
self.rpc_call not in rt_obj._get_rpc_methods():
|
||||
return(False)
|
||||
else:
|
||||
return(True)
|
||||
|
||||
|
||||
class Multicall:
|
||||
def __init__(self, class_obj, **kwargs):
|
||||
self.class_obj = class_obj
|
||||
if class_obj.__class__.__name__ == "RTorrent":
|
||||
self.rt_obj = class_obj
|
||||
else:
|
||||
self.rt_obj = class_obj._rt_obj
|
||||
self.calls = []
|
||||
|
||||
def add(self, method, *args):
|
||||
"""Add call to multicall
|
||||
|
||||
@param method: L{Method} instance or name of raw RPC method
|
||||
@type method: Method or str
|
||||
|
||||
@param args: call arguments
|
||||
"""
|
||||
# if a raw rpc method was given instead of a Method instance,
|
||||
# try and find the instance for it. And if all else fails, create a
|
||||
# dummy Method instance
|
||||
if isinstance(method, str):
|
||||
result = find_method(method)
|
||||
# if result not found
|
||||
if result == -1:
|
||||
method = Method(DummyClass, method, method)
|
||||
else:
|
||||
method = result
|
||||
|
||||
# ensure method is available before adding
|
||||
if not method.is_available(self.rt_obj):
|
||||
_handle_unavailable_rpc_method(method, self.rt_obj)
|
||||
|
||||
self.calls.append((method, args))
|
||||
|
||||
def list_calls(self):
|
||||
for c in self.calls:
|
||||
print(c)
|
||||
|
||||
def call(self):
|
||||
"""Execute added multicall calls
|
||||
|
||||
@return: the results (post-processed), in the order they were added
|
||||
@rtype: tuple
|
||||
"""
|
||||
m = xmlrpclib.MultiCall(self.rt_obj._get_conn())
|
||||
for call in self.calls:
|
||||
method, args = call
|
||||
rpc_call = getattr(method, "rpc_call")
|
||||
getattr(m, rpc_call)(*args)
|
||||
|
||||
results = m()
|
||||
results = tuple(results)
|
||||
results_processed = []
|
||||
|
||||
for r, c in zip(results, self.calls):
|
||||
method = c[0] # Method instance
|
||||
result = process_result(method, r)
|
||||
results_processed.append(result)
|
||||
# assign result to class_obj
|
||||
exists = hasattr(self.class_obj, method.varname)
|
||||
if not exists or not inspect.ismethod(getattr(self.class_obj, method.varname)):
|
||||
setattr(self.class_obj, method.varname, result)
|
||||
|
||||
return(tuple(results_processed))
|
||||
|
||||
|
||||
def call_method(class_obj, method, *args):
|
||||
"""Handles single RPC calls
|
||||
|
||||
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
|
||||
@type class_obj: object
|
||||
|
||||
@param method: L{Method} instance or name of raw RPC method
|
||||
@type method: Method or str
|
||||
"""
|
||||
if method.is_retriever():
|
||||
args = args[:-1]
|
||||
else:
|
||||
assert args[-1] is not None, "No argument given."
|
||||
|
||||
if class_obj.__class__.__name__ == "RTorrent":
|
||||
rt_obj = class_obj
|
||||
else:
|
||||
rt_obj = class_obj._rt_obj
|
||||
|
||||
# check if rpc method is even available
|
||||
if not method.is_available(rt_obj):
|
||||
_handle_unavailable_rpc_method(method, rt_obj)
|
||||
|
||||
m = Multicall(class_obj)
|
||||
m.add(method, *args)
|
||||
# only added one method, only getting one result back
|
||||
ret_value = m.call()[0]
|
||||
|
||||
####### OBSOLETE ##########################################################
|
||||
# if method.is_retriever():
|
||||
# #value = process_result(method, ret_value)
|
||||
# value = ret_value #MultiCall already processed the result
|
||||
# else:
|
||||
# # we're setting the user's input to method.varname
|
||||
# # but we'll return the value that xmlrpc gives us
|
||||
# value = process_result(method, args[-1])
|
||||
##########################################################################
|
||||
|
||||
return(ret_value)
|
||||
|
||||
|
||||
def find_method(rpc_call):
|
||||
"""Return L{Method} instance associated with given RPC call"""
|
||||
method_lists = [
|
||||
rtorrent.methods,
|
||||
rtorrent.file.methods,
|
||||
rtorrent.tracker.methods,
|
||||
rtorrent.peer.methods,
|
||||
rtorrent.torrent.methods,
|
||||
]
|
||||
|
||||
for l in method_lists:
|
||||
for m in l:
|
||||
if m.rpc_call.lower() == rpc_call.lower():
|
||||
return(m)
|
||||
|
||||
return(-1)
|
||||
|
||||
|
||||
def process_result(method, result):
|
||||
"""Process given C{B{result}} based on flags set in C{B{method}}
|
||||
|
||||
@param method: L{Method} instance
|
||||
@type method: Method
|
||||
|
||||
@param result: result to be processed (the result of given L{Method} instance)
|
||||
|
||||
@note: Supported Processing:
|
||||
- boolean - convert ones and zeros returned by rTorrent and
|
||||
convert to python boolean values
|
||||
"""
|
||||
# handle custom post processing function
|
||||
if method.post_process_func is not None:
|
||||
result = method.post_process_func(result)
|
||||
|
||||
# is boolean?
|
||||
if method.boolean:
|
||||
if result in [1, '1']:
|
||||
result = True
|
||||
elif result in [0, '0']:
|
||||
result = False
|
||||
|
||||
return(result)
|
||||
|
||||
|
||||
def _build_rpc_methods(class_, method_list):
|
||||
"""Build glorified aliases to raw RPC methods"""
|
||||
instance = None
|
||||
if not inspect.isclass(class_):
|
||||
instance = class_
|
||||
class_ = instance.__class__
|
||||
|
||||
for m in method_list:
|
||||
class_name = m.class_name
|
||||
if class_name != class_.__name__:
|
||||
continue
|
||||
|
||||
if class_name == "RTorrent":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, bool_to_int(arg))
|
||||
elif class_name == "Torrent":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
elif class_name in ["Tracker", "File"]:
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
|
||||
elif class_name == "Peer":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
|
||||
elif class_name == "Group":
|
||||
caller = lambda arg = None, method = m: \
|
||||
call_method(instance, method, bool_to_int(arg))
|
||||
|
||||
if m.docstring is None:
|
||||
m.docstring = ""
|
||||
|
||||
# print(m)
|
||||
docstring = """{0}
|
||||
|
||||
@note: Variable where the result for this method is stored: {1}.{2}""".format(
|
||||
m.docstring,
|
||||
class_name,
|
||||
m.varname)
|
||||
|
||||
caller.__doc__ = docstring
|
||||
|
||||
for method_name in [m.method_name] + list(m.aliases):
|
||||
if instance is None:
|
||||
setattr(class_, method_name, caller)
|
||||
else:
|
||||
setattr(instance, method_name, caller)
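# ---------------------------------------------------------------------------
# Editor's note: quick illustration (not upstream code) of the mapping that
# get_varname() above performs; these names become instance attributes when
# Multicall.call() assigns results back onto the owning object:
#
#   get_varname('p.get_down_rate')  ->  'down_rate'
#   get_varname('d.is_active')      ->  'active'
#   get_varname('d.set_peers_max')  ->  'peers_max'
# ---------------------------------------------------------------------------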
|
|
@@ -0,0 +1,517 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import rtorrent.rpc
|
||||
# from rtorrent.rpc import Method
|
||||
import rtorrent.peer
|
||||
import rtorrent.tracker
|
||||
import rtorrent.file
|
||||
import rtorrent.compat
|
||||
|
||||
from rtorrent.common import safe_repr
|
||||
|
||||
Peer = rtorrent.peer.Peer
|
||||
Tracker = rtorrent.tracker.Tracker
|
||||
File = rtorrent.file.File
|
||||
Method = rtorrent.rpc.Method
|
||||
|
||||
|
||||
class Torrent:
|
||||
"""Represents an individual torrent within a L{RTorrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent
|
||||
self.rpc_id = self.info_hash # : unique id to pass to rTorrent
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
self.peers = []
|
||||
self.trackers = []
|
||||
self.files = []
|
||||
|
||||
self._call_custom_methods()
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Torrent(info_hash=\"{0}\" name=\"{1}\")",
|
||||
self.info_hash, self.name)
|
||||
|
||||
def _call_custom_methods(self):
|
||||
"""only calls methods that check instance variables."""
|
||||
self._is_hash_checking_queued()
|
||||
self._is_started()
|
||||
self._is_paused()
|
||||
|
||||
def get_peers(self):
|
||||
"""Get list of Peer instances for given torrent.
|
||||
|
||||
@return: L{Peer} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.peers
|
||||
"""
|
||||
self.peers = []
|
||||
retriever_methods = [m for m in rtorrent.peer.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
# need to leave 2nd arg empty (dunno why)
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
m.add("p.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
|
||||
|
||||
self.peers.append(Peer(
|
||||
self._rt_obj, self.info_hash, **results_dict))
|
||||
|
||||
return(self.peers)
|
||||
|
||||
def get_trackers(self):
|
||||
"""Get list of Tracker instances for given torrent.
|
||||
|
||||
@return: L{Tracker} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.trackers
|
||||
"""
|
||||
self.trackers = []
|
||||
retriever_methods = [m for m in rtorrent.tracker.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
|
||||
# need to leave 2nd arg empty (dunno why)
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
m.add("t.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
|
||||
|
||||
self.trackers.append(Tracker(
|
||||
self._rt_obj, self.info_hash, **results_dict))
|
||||
|
||||
return(self.trackers)
|
||||
|
||||
def get_files(self):
|
||||
"""Get list of File instances for given torrent.
|
||||
|
||||
@return: L{File} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.files
|
||||
"""
|
||||
|
||||
self.files = []
|
||||
retriever_methods = [m for m in rtorrent.file.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
# 2nd arg can be anything, but it'll return all files in torrent
|
||||
# regardless
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
m.add("f.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
offset_method_index = retriever_methods.index(
|
||||
rtorrent.rpc.find_method("f.get_offset"))
|
||||
|
||||
# make a list of the offsets of all the files, sort appropriately
|
||||
offset_list = sorted([r[offset_method_index] for r in results])
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rtorrent.rpc.process_result(m, r)
|
||||
|
||||
# get proper index positions for each file (based on the file
|
||||
# offset)
|
||||
f_index = offset_list.index(results_dict["offset"])
|
||||
|
||||
self.files.append(File(self._rt_obj, self.info_hash,
|
||||
f_index, **results_dict))
|
||||
|
||||
return(self.files)
|
||||
|
||||
def set_directory(self, d):
|
||||
"""Modify download directory
|
||||
|
||||
@note: Needs to stop torrent in order to change the directory.
|
||||
Also doesn't restart after directory is set, that must be called
|
||||
separately.
|
||||
"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.set_directory", d)
|
||||
|
||||
self.directory = m.call()[-1]
|
||||
|
||||
def set_directory_base(self, d):
|
||||
"""Modify base download directory
|
||||
|
||||
@note: Needs to stop torrent in order to change the directory.
|
||||
Also doesn't restart after directory is set, that must be called
|
||||
separately.
|
||||
"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.set_directory_base", d)
|
||||
|
||||
def start(self):
|
||||
"""Start the torrent"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_start")
|
||||
self.multicall_add(m, "d.is_active")
|
||||
|
||||
self.active = m.call()[-1]
|
||||
return(self.active)
|
||||
|
||||
def stop(self):
|
||||
""""Stop the torrent"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.is_active")
|
||||
|
||||
self.active = m.call()[-1]
|
||||
return(self.active)
|
||||
|
||||
def pause(self):
|
||||
"""Pause the torrent"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.pause")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def resume(self):
|
||||
"""Resume the torrent"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.resume")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def close(self):
|
||||
"""Close the torrent and it's files"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.close")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def erase(self):
|
||||
"""Delete the torrent
|
||||
|
||||
@note: doesn't delete the downloaded files"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.erase")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def check_hash(self):
|
||||
"""(Re)hash check the torrent"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.check_hash")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def poll(self):
|
||||
"""poll rTorrent to get latest peer/tracker/file information"""
|
||||
self.get_peers()
|
||||
self.get_trackers()
|
||||
self.get_files()
|
||||
|
||||
def update(self):
|
||||
"""Refresh torrent data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
# custom functions (only call private methods, since they only check
|
||||
# local variables and are therefore faster)
|
||||
self._call_custom_methods()
|
||||
|
||||
def accept_seeders(self, accept_seeds):
|
||||
"""Enable/disable whether the torrent connects to seeders
|
||||
|
||||
@param accept_seeds: enable/disable accepting seeders
|
||||
@type accept_seeds: bool"""
|
||||
if accept_seeds:
|
||||
call = "d.accepting_seeders.enable"
|
||||
else:
|
||||
call = "d.accepting_seeders.disable"
|
||||
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, call)
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def announce(self):
|
||||
"""Announce torrent info to tracker(s)"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.tracker_announce")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
@staticmethod
|
||||
def _assert_custom_key_valid(key):
|
||||
assert type(key) == int and key > 0 and key < 6, \
|
||||
"key must be an integer between 1-5"
|
||||
|
||||
def get_custom(self, key):
|
||||
"""
|
||||
Get custom value
|
||||
|
||||
@param key: the index for the custom field (between 1-5)
|
||||
@type key: int
|
||||
|
||||
@rtype: str
|
||||
"""
|
||||
|
||||
self._assert_custom_key_valid(key)
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
|
||||
field = "custom{0}".format(key)
|
||||
self.multicall_add(m, "d.get_{0}".format(field))
|
||||
setattr(self, field, m.call()[-1])
|
||||
|
||||
return (getattr(self, field))
|
||||
|
||||
def set_custom(self, key, value):
|
||||
"""
|
||||
Set custom value
|
||||
|
||||
@param key: the index for the custom field (between 1-5)
|
||||
@type key: int
|
||||
|
||||
@param value: the value to be stored
|
||||
@type value: str
|
||||
|
||||
@return: if successful, value will be returned
|
||||
@rtype: str
|
||||
"""
|
||||
|
||||
self._assert_custom_key_valid(key)
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
|
||||
self.multicall_add(m, "d.set_custom{0}".format(key), value)
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def set_visible(self, view, visible=True):
|
||||
p = self._rt_obj._get_conn()
|
||||
|
||||
if visible:
|
||||
return p.view.set_visible(self.info_hash, view)
|
||||
else:
|
||||
return p.view.set_not_visible(self.info_hash, view)
|
||||
|
||||
############################################################################
|
||||
# CUSTOM METHODS (Not part of the official rTorrent API)
|
||||
##########################################################################
|
||||
def _is_hash_checking_queued(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
# if hashing == 3, then torrent is marked for hash checking
|
||||
# if hash_checking == False, then torrent is waiting to be checked
|
||||
self.hash_checking_queued = (self.hashing == 3 and
|
||||
self.hash_checking is False)
|
||||
|
||||
return(self.hash_checking_queued)
|
||||
|
||||
def is_hash_checking_queued(self):
|
||||
"""Check if torrent is waiting to be hash checked
|
||||
|
||||
@note: Variable where the result for this method is stored Torrent.hash_checking_queued"""
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.get_hashing")
|
||||
self.multicall_add(m, "d.is_hash_checking")
|
||||
results = m.call()
|
||||
|
||||
setattr(self, "hashing", results[0])
|
||||
setattr(self, "hash_checking", results[1])
|
||||
|
||||
return(self._is_hash_checking_queued())
|
||||
|
||||
def _is_paused(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
self.paused = (self.state == 0)
|
||||
return(self.paused)
|
||||
|
||||
def is_paused(self):
|
||||
"""Check if torrent is paused
|
||||
|
||||
@note: Variable where the result for this method is stored: Torrent.paused"""
|
||||
self.get_state()
|
||||
return(self._is_paused())
|
||||
|
||||
def _is_started(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
self.started = (self.state == 1)
|
||||
return(self.started)
|
||||
|
||||
def is_started(self):
|
||||
"""Check if torrent is started
|
||||
|
||||
@note: Variable where the result for this method is stored: Torrent.started"""
|
||||
self.get_state()
|
||||
return(self._is_started())
|
||||
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(Torrent, 'is_hash_checked', 'd.is_hash_checked',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'is_hash_checking', 'd.is_hash_checking',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_max', 'd.get_peers_max'),
|
||||
Method(Torrent, 'get_tracker_focus', 'd.get_tracker_focus'),
|
||||
Method(Torrent, 'get_skip_total', 'd.get_skip_total'),
|
||||
Method(Torrent, 'get_state', 'd.get_state'),
|
||||
Method(Torrent, 'get_peer_exchange', 'd.get_peer_exchange'),
|
||||
Method(Torrent, 'get_down_rate', 'd.get_down_rate'),
|
||||
Method(Torrent, 'get_connection_seed', 'd.get_connection_seed'),
|
||||
Method(Torrent, 'get_uploads_max', 'd.get_uploads_max'),
|
||||
Method(Torrent, 'get_priority_str', 'd.get_priority_str'),
|
||||
Method(Torrent, 'is_open', 'd.is_open',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_min', 'd.get_peers_min'),
|
||||
Method(Torrent, 'get_peers_complete', 'd.get_peers_complete'),
|
||||
Method(Torrent, 'get_tracker_numwant', 'd.get_tracker_numwant'),
|
||||
Method(Torrent, 'get_connection_current', 'd.get_connection_current'),
|
||||
Method(Torrent, 'is_complete', 'd.get_complete',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_connected', 'd.get_peers_connected'),
|
||||
Method(Torrent, 'get_chunk_size', 'd.get_chunk_size'),
|
||||
Method(Torrent, 'get_state_counter', 'd.get_state_counter'),
|
||||
Method(Torrent, 'get_base_filename', 'd.get_base_filename'),
|
||||
Method(Torrent, 'get_state_changed', 'd.get_state_changed'),
|
||||
Method(Torrent, 'get_peers_not_connected', 'd.get_peers_not_connected'),
|
||||
Method(Torrent, 'get_directory', 'd.get_directory'),
|
||||
Method(Torrent, 'is_incomplete', 'd.incomplete',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_tracker_size', 'd.get_tracker_size'),
|
||||
Method(Torrent, 'is_multi_file', 'd.is_multi_file',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_local_id', 'd.get_local_id'),
|
||||
Method(Torrent, 'get_ratio', 'd.get_ratio',
|
||||
post_process_func=lambda x: x / 1000.0,
|
||||
),
|
||||
Method(Torrent, 'get_loaded_file', 'd.get_loaded_file'),
|
||||
Method(Torrent, 'get_max_file_size', 'd.get_max_file_size'),
|
||||
Method(Torrent, 'get_size_chunks', 'd.get_size_chunks'),
|
||||
Method(Torrent, 'is_pex_active', 'd.is_pex_active',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_hashing', 'd.get_hashing'),
|
||||
Method(Torrent, 'get_bitfield', 'd.get_bitfield'),
|
||||
Method(Torrent, 'get_local_id_html', 'd.get_local_id_html'),
|
||||
Method(Torrent, 'get_connection_leech', 'd.get_connection_leech'),
|
||||
Method(Torrent, 'get_peers_accounted', 'd.get_peers_accounted'),
|
||||
Method(Torrent, 'get_message', 'd.get_message'),
|
||||
Method(Torrent, 'is_active', 'd.is_active',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_size_bytes', 'd.get_size_bytes'),
|
||||
Method(Torrent, 'get_ignore_commands', 'd.get_ignore_commands'),
|
||||
Method(Torrent, 'get_creation_date', 'd.get_creation_date'),
|
||||
Method(Torrent, 'get_base_path', 'd.get_base_path'),
|
||||
Method(Torrent, 'get_left_bytes', 'd.get_left_bytes'),
|
||||
Method(Torrent, 'get_size_files', 'd.get_size_files'),
|
||||
Method(Torrent, 'get_size_pex', 'd.get_size_pex'),
|
||||
Method(Torrent, 'is_private', 'd.is_private',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_max_size_pex', 'd.get_max_size_pex'),
|
||||
Method(Torrent, 'get_num_chunks_hashed', 'd.get_chunks_hashed',
|
||||
aliases=("get_chunks_hashed",)),
|
||||
Method(Torrent, 'get_num_chunks_wanted', 'd.wanted_chunks'),
|
||||
Method(Torrent, 'get_priority', 'd.get_priority'),
|
||||
Method(Torrent, 'get_skip_rate', 'd.get_skip_rate'),
|
||||
Method(Torrent, 'get_completed_bytes', 'd.get_completed_bytes'),
|
||||
Method(Torrent, 'get_name', 'd.get_name'),
|
||||
Method(Torrent, 'get_completed_chunks', 'd.get_completed_chunks'),
|
||||
Method(Torrent, 'get_throttle_name', 'd.get_throttle_name'),
|
||||
Method(Torrent, 'get_free_diskspace', 'd.get_free_diskspace'),
|
||||
Method(Torrent, 'get_directory_base', 'd.get_directory_base'),
|
||||
Method(Torrent, 'get_hashing_failed', 'd.get_hashing_failed'),
|
||||
Method(Torrent, 'get_tied_to_file', 'd.get_tied_to_file'),
|
||||
Method(Torrent, 'get_down_total', 'd.get_down_total'),
|
||||
Method(Torrent, 'get_bytes_done', 'd.get_bytes_done'),
|
||||
Method(Torrent, 'get_up_rate', 'd.get_up_rate'),
|
||||
Method(Torrent, 'get_up_total', 'd.get_up_total'),
|
||||
Method(Torrent, 'is_accepting_seeders', 'd.accepting_seeders',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "get_chunks_seen", "d.chunks_seen",
|
||||
min_version=(0, 9, 1),
|
||||
),
|
||||
Method(Torrent, "is_partially_done", "d.is_partially_done",
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "is_not_partially_done", "d.is_not_partially_done",
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "get_time_started", "d.timestamp.started"),
|
||||
Method(Torrent, "get_custom1", "d.get_custom1"),
|
||||
Method(Torrent, "get_custom2", "d.get_custom2"),
|
||||
Method(Torrent, "get_custom3", "d.get_custom3"),
|
||||
Method(Torrent, "get_custom4", "d.get_custom4"),
|
||||
Method(Torrent, "get_custom5", "d.get_custom5"),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Torrent, 'set_uploads_max', 'd.set_uploads_max'),
|
||||
Method(Torrent, 'set_tied_to_file', 'd.set_tied_to_file'),
|
||||
Method(Torrent, 'set_tracker_numwant', 'd.set_tracker_numwant'),
|
||||
Method(Torrent, 'set_priority', 'd.set_priority'),
|
||||
Method(Torrent, 'set_peers_max', 'd.set_peers_max'),
|
||||
Method(Torrent, 'set_hashing_failed', 'd.set_hashing_failed'),
|
||||
Method(Torrent, 'set_message', 'd.set_message'),
|
||||
Method(Torrent, 'set_throttle_name', 'd.set_throttle_name'),
|
||||
Method(Torrent, 'set_peers_min', 'd.set_peers_min'),
|
||||
Method(Torrent, 'set_ignore_commands', 'd.set_ignore_commands'),
|
||||
Method(Torrent, 'set_max_file_size', 'd.set_max_file_size'),
|
||||
Method(Torrent, 'set_custom5', 'd.set_custom5'),
|
||||
Method(Torrent, 'set_custom4', 'd.set_custom4'),
|
||||
Method(Torrent, 'set_custom2', 'd.set_custom2'),
|
||||
Method(Torrent, 'set_custom1', 'd.set_custom1'),
|
||||
Method(Torrent, 'set_custom3', 'd.set_custom3'),
|
||||
Method(Torrent, 'set_connection_current', 'd.set_connection_current'),
|
||||
]
|
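# ---------------------------------------------------------------------------
# Editor's note: hedged sketch, not upstream code. The modifiers above are what
# allow a label and download directory to be applied after a torrent is added
# (the label name and path below are illustrative values only):
#
#   t.set_custom1('comics')               # ruTorrent-style label convention
#   t.set_directory('/downloads/comics')  # set_directory() stops the torrent first
#   t.start()                             # and does not restart it automatically
# ---------------------------------------------------------------------------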
|
@@ -0,0 +1,138 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
# from rtorrent.rpc import Method
|
||||
import rtorrent.rpc
|
||||
|
||||
from rtorrent.common import safe_repr
|
||||
|
||||
Method = rtorrent.rpc.Method
|
||||
|
||||
|
||||
class Tracker:
|
||||
"""Represents an individual tracker within a L{Torrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent using this tracker
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
# for clarity's sake...
|
||||
self.index = self.group # : position of tracker within the torrent's tracker list
|
||||
self.rpc_id = "{0}:t{1}".format(
|
||||
self.info_hash, self.index) # : unique id to pass to rTorrent
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Tracker(index={0}, url=\"{1}\")",
|
||||
self.index, self.url)
|
||||
|
||||
def enable(self):
|
||||
"""Alias for set_enabled("yes")"""
|
||||
self.set_enabled("yes")
|
||||
|
||||
def disable(self):
|
||||
"""Alias for set_enabled("no")"""
|
||||
self.set_enabled("no")
|
||||
|
||||
def update(self):
|
||||
"""Refresh tracker data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(Tracker, 'is_enabled', 't.is_enabled', boolean=True),
|
||||
Method(Tracker, 'get_id', 't.get_id'),
|
||||
Method(Tracker, 'get_scrape_incomplete', 't.get_scrape_incomplete'),
|
||||
Method(Tracker, 'is_open', 't.is_open', boolean=True),
|
||||
Method(Tracker, 'get_min_interval', 't.get_min_interval'),
|
||||
Method(Tracker, 'get_scrape_downloaded', 't.get_scrape_downloaded'),
|
||||
Method(Tracker, 'get_group', 't.get_group'),
|
||||
Method(Tracker, 'get_scrape_time_last', 't.get_scrape_time_last'),
|
||||
Method(Tracker, 'get_type', 't.get_type'),
|
||||
Method(Tracker, 'get_normal_interval', 't.get_normal_interval'),
|
||||
Method(Tracker, 'get_url', 't.get_url'),
|
||||
Method(Tracker, 'get_scrape_complete', 't.get_scrape_complete',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_activity_time_last', 't.activity_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_activity_time_next', 't.activity_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_failed_time_last', 't.failed_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_failed_time_next', 't.failed_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_success_time_last', 't.success_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_success_time_next', 't.success_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'can_scrape', 't.can_scrape',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'get_failed_counter', 't.failed_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'get_scrape_counter', 't.scrape_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'get_success_counter', 't.success_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'is_usable', 't.is_usable',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'is_busy', 't.is_busy',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'is_extra_tracker', 't.is_extra_tracker',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True,
|
||||
),
|
||||
Method(Tracker, "get_latest_sum_peers", "t.latest_sum_peers",
|
||||
min_version=(0, 9, 0)
|
||||
),
|
||||
Method(Tracker, "get_latest_new_peers", "t.latest_new_peers",
|
||||
min_version=(0, 9, 0)
|
||||
),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Tracker, 'set_enabled', 't.set_enabled'),
|
||||
]
|
|
@@ -0,0 +1,86 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


from compat import is_py3


def bool_to_int(value):
    """Translates python booleans to RPC-safe integers"""
    if value is True:
        return("1")
    elif value is False:
        return("0")
    else:
        return(value)


def cmd_exists(cmds_list, cmd):
    """Check if given command is in list of available commands

    @param cmds_list: see L{RTorrent._rpc_methods}
    @type cmds_list: list

    @param cmd: name of command to be checked
    @type cmd: str

    @return: bool
    """

    return(cmd in cmds_list)


def find_torrent(info_hash, torrent_list):
    """Find torrent file in given list of Torrent classes

    @param info_hash: info hash of torrent
    @type info_hash: str

    @param torrent_list: list of L{Torrent} instances (see L{RTorrent.get_torrents})
    @type torrent_list: list

    @return: L{Torrent} instance, or None if not found
    """
    for t in torrent_list:
        if t.info_hash == info_hash:
            return t

    return None


def is_valid_port(port):
    """Check if given port is valid"""
    return(0 <= int(port) <= 65535)


def convert_version_tuple_to_str(t):
    return(".".join([str(n) for n in t]))


def safe_repr(fmt, *args, **kwargs):
    """ Formatter that handles unicode arguments """

    if not is_py3():
        # unicode fmt can take str args, str fmt cannot take unicode args
        fmt = fmt.decode("utf-8")
        out = fmt.format(*args, **kwargs)
        return out.encode("utf-8")
    else:
        return fmt.format(*args, **kwargs)
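# ---------------------------------------------------------------------------
# Editor's note: tiny illustration (not upstream code) of the helpers above:
#
#   bool_to_int(True)    -> "1"     # rTorrent wants 0/1 flags, not Python bools
#   bool_to_int('high')  -> 'high'  # non-boolean values pass through unchanged
#   is_valid_port(65536) -> False
# ---------------------------------------------------------------------------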
@@ -0,0 +1,30 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

import sys


def is_py3():
    return sys.version_info[0] == 3

if is_py3():
    import xmlrpc.client as xmlrpclib
else:
    import xmlrpclib
@@ -0,0 +1,40 @@
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

from common import convert_version_tuple_to_str


class RTorrentVersionError(Exception):
    def __init__(self, min_version, cur_version):
        self.min_version = min_version
        self.cur_version = cur_version
        self.msg = "Minimum version required: {0}".format(
            convert_version_tuple_to_str(min_version))

    def __str__(self):
        return(self.msg)


class MethodError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return(self.msg)
@@ -0,0 +1,91 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
# from rtorrent.rpc import Method
|
||||
import rpc
|
||||
|
||||
from common import safe_repr
|
||||
|
||||
Method = rpc.Method
|
||||
|
||||
|
||||
class File:
|
||||
"""Represents an individual file within a L{Torrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, index, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent the file is associated with
|
||||
self.index = index # : The position of the file within the file list
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
self.rpc_id = "{0}:f{1}".format(
|
||||
self.info_hash, self.index) # : unique id to pass to rTorrent
|
||||
|
||||
def update(self):
|
||||
"""Refresh file data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("File(index={0} path=\"{1}\")", self.index, self.path)
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(File, 'get_last_touched', 'f.get_last_touched'),
|
||||
Method(File, 'get_range_second', 'f.get_range_second'),
|
||||
Method(File, 'get_size_bytes', 'f.get_size_bytes'),
|
||||
Method(File, 'get_priority', 'f.get_priority'),
|
||||
Method(File, 'get_match_depth_next', 'f.get_match_depth_next'),
|
||||
Method(File, 'is_resize_queued', 'f.is_resize_queued',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'get_range_first', 'f.get_range_first'),
|
||||
Method(File, 'get_match_depth_prev', 'f.get_match_depth_prev'),
|
||||
Method(File, 'get_path', 'f.get_path'),
|
||||
Method(File, 'get_completed_chunks', 'f.get_completed_chunks'),
|
||||
Method(File, 'get_path_components', 'f.get_path_components'),
|
||||
Method(File, 'is_created', 'f.is_created',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'is_open', 'f.is_open',
|
||||
boolean=True,
|
||||
),
|
||||
Method(File, 'get_size_chunks', 'f.get_size_chunks'),
|
||||
Method(File, 'get_offset', 'f.get_offset'),
|
||||
Method(File, 'get_frozen_path', 'f.get_frozen_path'),
|
||||
Method(File, 'get_path_depth', 'f.get_path_depth'),
|
||||
Method(File, 'is_create_queued', 'f.is_create_queued',
|
||||
boolean=True,
|
||||
),
|
||||
|
||||
|
||||
# MODIFIERS
|
||||
]
|
|
@@ -0,0 +1,84 @@
|
|||
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import rpc
|
||||
|
||||
Method = rpc.Method
|
||||
|
||||
|
||||
class Group:
|
||||
__name__ = 'Group'
|
||||
|
||||
def __init__(self, _rt_obj, name):
|
||||
self._rt_obj = _rt_obj
|
||||
self.name = name
|
||||
|
||||
self.methods = [
|
||||
# RETRIEVERS
|
||||
Method(Group, 'get_max', 'group.' + self.name + '.ratio.max', varname='max'),
|
||||
Method(Group, 'get_min', 'group.' + self.name + '.ratio.min', varname='min'),
|
||||
Method(Group, 'get_upload', 'group.' + self.name + '.ratio.upload', varname='upload'),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Group, 'set_max', 'group.' + self.name + '.ratio.max.set', varname='max'),
|
||||
Method(Group, 'set_min', 'group.' + self.name + '.ratio.min.set', varname='min'),
|
||||
Method(Group, 'set_upload', 'group.' + self.name + '.ratio.upload.set', varname='upload')
|
||||
]
|
||||
|
||||
rtorrent.rpc._build_rpc_methods(self, self.methods)
|
||||
|
||||
# Setup multicall_add method
|
||||
caller = lambda multicall, method, *args: \
|
||||
multicall.add(method, *args)
|
||||
setattr(self, "multicall_add", caller)
|
||||
|
||||
def _get_prefix(self):
|
||||
return 'group.' + self.name + '.ratio.'
|
||||
|
||||
def update(self):
|
||||
multicall = rtorrent.rpc.Multicall(self)
|
||||
|
||||
retriever_methods = [m for m in self.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
|
||||
for method in retriever_methods:
|
||||
multicall.add(method)
|
||||
|
||||
multicall.call()
|
||||
|
||||
def enable(self):
|
||||
p = self._rt_obj._get_conn()
|
||||
return getattr(p, self._get_prefix() + 'enable')()
|
||||
|
||||
def disable(self):
|
||||
p = self._rt_obj._get_conn()
|
||||
return getattr(p, self._get_prefix() + 'disable')()
|
||||
|
||||
def set_command(self, *methods):
|
||||
methods = [m + '=' for m in methods]
|
||||
|
||||
m = rtorrent.rpc.Multicall(self)
|
||||
self.multicall_add(
|
||||
m, 'system.method.set',
|
||||
self._get_prefix() + 'command',
|
||||
*methods
|
||||
)
|
||||
|
||||
return(m.call()[-1])
|
|
@@ -0,0 +1,281 @@
|
|||
# Copyright (C) 2011 by clueless <clueless.nospam ! mail.com>
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
# of this software and associated documentation files (the "Software"), to deal
|
||||
# in the Software without restriction, including without limitation the rights
|
||||
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
# copies of the Software, and to permit persons to whom the Software is
|
||||
# furnished to do so, subject to the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be included in
|
||||
# all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
# THE SOFTWARE.
|
||||
#
|
||||
# Version: 20111107
|
||||
#
|
||||
# Changelog
|
||||
# ---------
|
||||
# 2011-11-07 - Added support for Python2 (tested on 2.6)
|
||||
# 2011-10-03 - Fixed: moved check for end of list at the top of the while loop
|
||||
# in _decode_list (in case the list is empty) (Chris Lucas)
|
||||
# - Converted dictionary keys to str
|
||||
# 2011-04-24 - Changed date format to YYYY-MM-DD for versioning, bigger
|
||||
# integer denotes a newer version
|
||||
# - Fixed a bug that would treat False as an integral type but
|
||||
# encode it using the 'False' string, attempting to encode a
|
||||
# boolean now results in an error
|
||||
# - Fixed a bug where an integer value of 0 in a list or
|
||||
# dictionary resulted in a parse error while decoding
|
||||
#
|
||||
# 2011-04-03 - Original release
|
||||
|
||||
import sys
|
||||
|
||||
_py3 = sys.version_info[0] == 3
|
||||
|
||||
if _py3:
|
||||
_VALID_STRING_TYPES = (str,)
|
||||
else:
|
||||
_VALID_STRING_TYPES = (str, unicode) # @UndefinedVariable
|
||||
|
||||
_TYPE_INT = 1
|
||||
_TYPE_STRING = 2
|
||||
_TYPE_LIST = 3
|
||||
_TYPE_DICTIONARY = 4
|
||||
_TYPE_END = 5
|
||||
_TYPE_INVALID = 6
|
||||
|
||||
# Function to determine the type of the next value/item
|
||||
# Arguments:
|
||||
# char First character of the string that is to be decoded
|
||||
# Return value:
|
||||
# Returns an integer that describes what type the next value/item is
|
||||
|
||||
|
||||
def _gettype(char):
|
||||
if not isinstance(char, int):
|
||||
char = ord(char)
|
||||
if char == 0x6C: # 'l'
|
||||
return _TYPE_LIST
|
||||
elif char == 0x64: # 'd'
|
||||
return _TYPE_DICTIONARY
|
||||
elif char == 0x69: # 'i'
|
||||
return _TYPE_INT
|
||||
elif char == 0x65: # 'e'
|
||||
return _TYPE_END
|
||||
elif char >= 0x30 and char <= 0x39: # '0' '9'
|
||||
return _TYPE_STRING
|
||||
else:
|
||||
return _TYPE_INVALID
|
||||
|
||||
# Function to parse a string from the bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be a string
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed string
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_string(data):
|
||||
end = 1
|
||||
# if py3, data[end] is going to be an int
|
||||
# if py2, data[end] will be a string
|
||||
if _py3:
|
||||
char = 0x3A
|
||||
else:
|
||||
char = chr(0x3A)
|
||||
|
||||
while data[end] != char: # ':'
|
||||
end = end + 1
|
||||
strlen = int(data[:end])
|
||||
return (data[end + 1:strlen + end + 1], data[strlen + end + 1:])
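
# For example (illustrative input): _decode_string(b'4:spam3:egg') returns
# (b'spam', b'3:egg') -- the parsed string plus the still-unparsed remainder.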
|
||||
|
||||
# Function to parse an integer from the bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be an integer
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed string
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_int(data):
|
||||
end = 1
|
||||
# if py3, data[end] is going to be an int
|
||||
# if py2, data[end] will be a string
|
||||
if _py3:
|
||||
char = 0x65
|
||||
else:
|
||||
char = chr(0x65)
|
||||
|
||||
while data[end] != char: # 'e'
|
||||
end = end + 1
|
||||
return (int(data[1:end]), data[end + 1:])
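
# For example (illustrative input): _decode_int(b'i42e3:foo') returns
# (42, b'3:foo').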
|
||||
|
||||
# Function to parse a bencoded list
|
||||
# Arguments:
|
||||
# data bencoded data, must be guaranteed to be the start of a list
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed list
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
|
||||
|
||||
def _decode_list(data):
|
||||
x = []
|
||||
overflow = data[1:]
|
||||
while True: # Loop over the data
|
||||
if _gettype(overflow[0]) == _TYPE_END: # - Break if we reach the end of the list
|
||||
return (x, overflow[1:]) # and return the list and overflow
|
||||
|
||||
value, overflow = _decode(overflow) #
|
||||
if isinstance(value, bool) or overflow == '': # - if we have a parse error
|
||||
return (False, False) # Die with error
|
||||
else: # - Otherwise
|
||||
x.append(value) # add the value to the list
|
||||
|
||||
|
||||
# Function to parse a bencoded dictionary
# Arguments:
# data bencoded data, must be guaranteed to be the start of a dictionary
|
||||
# Return Value:
|
||||
# Returns a tuple, the first member of the tuple is the parsed dictionary
|
||||
# The second member is whatever remains of the bencoded data so it can
|
||||
# be used to parse the next part of the data
|
||||
def _decode_dict(data):
|
||||
x = {}
|
||||
overflow = data[1:]
|
||||
while True: # Loop over the data
|
||||
if _gettype(overflow[0]) != _TYPE_STRING: # - If the key is not a string
|
||||
return (False, False) # Die with error
|
||||
key, overflow = _decode(overflow) #
|
||||
if key == False or overflow == '': # - If parse error
|
||||
return (False, False) # Die with error
|
||||
value, overflow = _decode(overflow) #
|
||||
if isinstance(value, bool) or overflow == '': # - If parse error
|
||||
print("Error parsing value")
|
||||
print(value)
|
||||
print(overflow)
|
||||
return (False, False) # Die with error
|
||||
else:
|
||||
# don't use bytes for the key
|
||||
key = key.decode()
|
||||
x[key] = value
|
||||
if _gettype(overflow[0]) == _TYPE_END:
|
||||
return (x, overflow[1:])
|
||||
|
||||
# Arguments:
|
||||
# data bencoded data in bytes format
|
||||
# Return Values:
|
||||
# Returns a tuple, the first member is the parsed data, could be a string,
|
||||
# an integer, a list or a dictionary, or a combination of those
|
||||
# The second member is the leftover of parsing, if everything parses correctly this
|
||||
# should be an empty byte string
|
||||
|
||||
|
||||
def _decode(data):
|
||||
btype = _gettype(data[0])
|
||||
if btype == _TYPE_INT:
|
||||
return _decode_int(data)
|
||||
elif btype == _TYPE_STRING:
|
||||
return _decode_string(data)
|
||||
elif btype == _TYPE_LIST:
|
||||
return _decode_list(data)
|
||||
elif btype == _TYPE_DICTIONARY:
|
||||
return _decode_dict(data)
|
||||
else:
|
||||
return (False, False)
|
||||
|
||||
# Function to decode bencoded data
|
||||
# Arguments:
|
||||
# data bencoded data, can be str or bytes
|
||||
# Return Values:
|
||||
# Returns the decoded data on success; this could be bytes, int, dict or list,
# or a combination of those
|
||||
# If an error occurs the return value is False
|
||||
|
||||
|
||||
def decode(data):
|
||||
# if isinstance(data, str):
|
||||
# data = data.encode()
|
||||
decoded, overflow = _decode(data)
|
||||
return decoded
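
# Example (illustrative input; under Python 3 string values come back as bytes
# while dictionary keys are converted to str):
#
#   decode(b'd3:bari1e3:foo4:spame')  ==  {'bar': 1, 'foo': b'spam'}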
|
||||
|
||||
# Args: data as integer
|
||||
# return: encoded byte string
|
||||
|
||||
|
||||
def _encode_int(data):
|
||||
return b'i' + str(data).encode() + b'e'
|
||||
|
||||
# Args: data as string or bytes
|
||||
# Return: encoded byte string
|
||||
|
||||
|
||||
def _encode_string(data):
|
||||
return str(len(data)).encode() + b':' + data
|
||||
|
||||
# Args: data as list
|
||||
# Return: Encoded byte string, false on error
|
||||
|
||||
|
||||
def _encode_list(data):
|
||||
elist = b'l'
|
||||
for item in data:
|
||||
eitem = encode(item)
|
||||
if eitem == False:
|
||||
return False
|
||||
elist += eitem
|
||||
return elist + b'e'
|
||||
|
||||
# Args: data as dict
|
||||
# Return: encoded byte string, false on error
|
||||
|
||||
|
||||
def _encode_dict(data):
|
||||
edict = b'd'
|
||||
keys = []
|
||||
for key in data:
|
||||
if not isinstance(key, _VALID_STRING_TYPES) and not isinstance(key, bytes):
|
||||
return False
|
||||
keys.append(key)
|
||||
keys.sort()
|
||||
for key in keys:
|
||||
ekey = encode(key)
|
||||
eitem = encode(data[key])
|
||||
if ekey == False or eitem == False:
|
||||
return False
|
||||
edict += ekey + eitem
|
||||
return edict + b'e'
|
||||
|
||||
# Function to encode a variable in bencoding
|
||||
# Arguments:
|
||||
# data Variable to be encoded, can be a list, dict, str, bytes, int or a combination of those
|
||||
# Return Values:
|
||||
# Returns the encoded data as a byte string when successful
|
||||
# If an error occurs the return value is False
|
||||
|
||||
|
||||
def encode(data):
|
||||
if isinstance(data, bool):
|
||||
return False
|
||||
elif isinstance(data, int):
|
||||
return _encode_int(data)
|
||||
elif isinstance(data, bytes):
|
||||
return _encode_string(data)
|
||||
elif isinstance(data, _VALID_STRING_TYPES):
|
||||
return _encode_string(data.encode())
|
||||
elif isinstance(data, list):
|
||||
return _encode_list(data)
|
||||
elif isinstance(data, dict):
|
||||
return _encode_dict(data)
|
||||
else:
|
||||
return False
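
# Example (illustrative input; dictionary keys are emitted in sorted order, as
# the bencoding format requires):
#
#   encode({'bar': 1, 'foo': b'spam'})  ==  b'd3:bari1e3:foo4:spame'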
|
|
@ -0,0 +1,161 @@
|
|||
|
||||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from lib.rtorrent.compat import is_py3
|
||||
import os.path
|
||||
import re
|
||||
import bencode as bencode
|
||||
import hashlib
|
||||
|
||||
if is_py3():
|
||||
from urllib.request import urlopen # @UnresolvedImport @UnusedImport
|
||||
else:
|
||||
from urllib2 import urlopen # @UnresolvedImport @Reimport
|
||||
|
||||
|
||||
class TorrentParser():
|
||||
def __init__(self, torrent):
|
||||
"""Decode and parse given torrent
|
||||
|
||||
@param torrent: handles: urls, file paths, string of torrent data
|
||||
@type torrent: str
|
||||
|
||||
@raise AssertionError: Can be raised for a couple reasons:
|
||||
- If _get_raw_torrent() couldn't figure out
|
||||
what X{torrent} is
|
||||
- if X{torrent} isn't a valid bencoded torrent file
|
||||
"""
|
||||
self.torrent = torrent
|
||||
        self._raw_torrent = None  # : raw bencoded torrent data
        self._torrent_decoded = None  # : decoded torrent data (dict)
|
||||
self.file_type = None
|
||||
|
||||
self._get_raw_torrent()
|
||||
assert self._raw_torrent is not None, "Couldn't get raw_torrent."
|
||||
if self._torrent_decoded is None:
|
||||
self._decode_torrent()
|
||||
assert isinstance(self._torrent_decoded, dict), "Invalid torrent file."
|
||||
self._parse_torrent()
|
||||
|
||||
def _is_raw(self):
|
||||
raw = False
|
||||
if isinstance(self.torrent, (str, bytes)):
|
||||
if isinstance(self._decode_torrent(self.torrent), dict):
|
||||
raw = True
|
||||
else:
|
||||
# reset self._torrent_decoded (currently equals False)
|
||||
self._torrent_decoded = None
|
||||
|
||||
return(raw)
|
||||
|
||||
def _get_raw_torrent(self):
|
||||
"""Get raw torrent data by determining what self.torrent is"""
|
||||
# already raw?
|
||||
if self._is_raw():
|
||||
self.file_type = "raw"
|
||||
self._raw_torrent = self.torrent
|
||||
return
|
||||
# local file?
|
||||
if os.path.isfile(self.torrent):
|
||||
self.file_type = "file"
|
||||
self._raw_torrent = open(self.torrent, "rb").read()
|
||||
# url?
|
||||
elif re.search("^(http|ftp):\/\/", self.torrent, re.I):
|
||||
self.file_type = "url"
|
||||
self._raw_torrent = urlopen(self.torrent).read()
|
||||
|
||||
def _decode_torrent(self, raw_torrent=None):
|
||||
if raw_torrent is None:
|
||||
raw_torrent = self._raw_torrent
|
||||
self._torrent_decoded = bencode.decode(raw_torrent)
|
||||
return(self._torrent_decoded)
|
||||
|
||||
def _calc_info_hash(self):
|
||||
self.info_hash = None
|
||||
if "info" in self._torrent_decoded.keys():
|
||||
info_encoded = bencode.encode(self._torrent_decoded["info"])
|
||||
|
||||
if info_encoded:
|
||||
self.info_hash = hashlib.sha1(info_encoded).hexdigest().upper()
|
||||
|
||||
return(self.info_hash)
|
||||
|
||||
def _parse_torrent(self):
|
||||
for k in self._torrent_decoded:
|
||||
key = k.replace(" ", "_").lower()
|
||||
setattr(self, key, self._torrent_decoded[k])
|
||||
|
||||
self._calc_info_hash()
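
# Example usage (a sketch; the path is illustrative):
#
#   tp = TorrentParser("/path/to/file.torrent")
#   print(tp.info_hash)   # upper-case hex SHA1 of the bencoded 'info' dict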
|
||||
|
||||
|
||||
class NewTorrentParser(object):
|
||||
@staticmethod
|
||||
def _read_file(fp):
|
||||
return fp.read()
|
||||
|
||||
@staticmethod
|
||||
    def _write_file(fp, data):
        # write the supplied data to the file-like object and hand it back
        fp.write(data)
        return fp
|
||||
|
||||
@staticmethod
|
||||
def _decode_torrent(data):
|
||||
return bencode.decode(data)
|
||||
|
||||
def __init__(self, input):
|
||||
self.input = input
|
||||
self._raw_torrent = None
|
||||
self._decoded_torrent = None
|
||||
self._hash_outdated = False
|
||||
|
||||
if isinstance(self.input, (str, bytes)):
|
||||
# path to file?
|
||||
if os.path.isfile(self.input):
|
||||
self._raw_torrent = self._read_file(open(self.input, "rb"))
|
||||
else:
|
||||
# assume input was the raw torrent data (do we really want
|
||||
# this?)
|
||||
self._raw_torrent = self.input
|
||||
|
||||
# file-like object?
|
||||
        elif hasattr(self.input, "read"):
|
||||
self._raw_torrent = self._read_file(self.input)
|
||||
|
||||
assert self._raw_torrent is not None, "Invalid input: input must be a path or a file-like object"
|
||||
|
||||
self._decoded_torrent = self._decode_torrent(self._raw_torrent)
|
||||
|
||||
assert isinstance(
|
||||
self._decoded_torrent, dict), "File could not be decoded"
|
||||
|
||||
def _calc_info_hash(self):
|
||||
self.info_hash = None
|
||||
        info_dict = self._decoded_torrent["info"]
|
||||
self.info_hash = hashlib.sha1(bencode.encode(
|
||||
info_dict)).hexdigest().upper()
|
||||
|
||||
return(self.info_hash)
|
||||
|
||||
def set_tracker(self, tracker):
|
||||
self._decoded_torrent["announce"] = tracker
|
||||
|
||||
def get_tracker(self):
|
||||
return self._decoded_torrent.get("announce")
|
|
@ -0,0 +1,73 @@
|
|||
#
|
||||
# Copyright (c) 2013 Dean Gardiner, <gardiner91@gmail.com>
|
||||
#
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from base64 import encodestring
|
||||
import string
|
||||
import xmlrpclib
|
||||
|
||||
|
||||
class BasicAuthTransport(xmlrpclib.Transport):
|
||||
def __init__(self, username=None, password=None):
|
||||
xmlrpclib.Transport.__init__(self)
|
||||
|
||||
self.username = username
|
||||
self.password = password
|
||||
|
||||
def send_auth(self, h):
|
||||
if self.username is not None and self.password is not None:
|
||||
h.putheader('AUTHORIZATION', "Basic %s" % string.replace(
|
||||
encodestring("%s:%s" % (self.username, self.password)),
|
||||
"\012", ""
|
||||
))
|
||||
|
||||
def single_request(self, host, handler, request_body, verbose=0):
|
||||
# issue XML-RPC request
|
||||
|
||||
h = self.make_connection(host)
|
||||
if verbose:
|
||||
h.set_debuglevel(1)
|
||||
|
||||
try:
|
||||
self.send_request(h, handler, request_body)
|
||||
self.send_host(h, host)
|
||||
self.send_user_agent(h)
|
||||
self.send_auth(h)
|
||||
self.send_content(h, request_body)
|
||||
|
||||
response = h.getresponse(buffering=True)
|
||||
if response.status == 200:
|
||||
self.verbose = verbose
|
||||
return self.parse_response(response)
|
||||
except xmlrpclib.Fault:
|
||||
raise
|
||||
except Exception:
|
||||
self.close()
|
||||
raise
|
||||
|
||||
#discard any response data and raise exception
|
||||
if response.getheader("content-length", 0):
|
||||
response.read()
|
||||
raise xmlrpclib.ProtocolError(
|
||||
host + handler,
|
||||
response.status, response.reason,
|
||||
response.msg,
|
||||
)
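
# Example usage (a sketch; URL and credentials are illustrative):
#
#   transport = BasicAuthTransport("user", "password")
#   server = xmlrpclib.ServerProxy("http://localhost:8080/RPC2",
#                                  transport=transport)
#   print(server.system.listMethods())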
|
|
@ -0,0 +1,23 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
from lib.rtorrent.compat import xmlrpclib
|
||||
|
||||
HTTPServerProxy = xmlrpclib.ServerProxy
|
|
@ -0,0 +1,219 @@
|
|||
#!/usr/bin/python
|
||||
|
||||
# rtorrent_xmlrpc
|
||||
# (c) 2011 Roger Que <alerante@bellsouth.net>
|
||||
#
|
||||
# Modified portions:
|
||||
# (c) 2013 Dean Gardiner <gardiner91@gmail.com>
|
||||
#
|
||||
# Python module for interacting with rtorrent's XML-RPC interface
|
||||
# directly over SCGI, instead of through an HTTP server intermediary.
|
||||
# Inspired by Glenn Washburn's xmlrpc2scgi.py [1], but subclasses the
|
||||
# built-in xmlrpclib classes so that it is compatible with features
|
||||
# such as MultiCall objects.
|
||||
#
|
||||
# [1] <http://libtorrent.rakshasa.no/wiki/UtilsXmlrpc2scgi>
|
||||
#
|
||||
# Usage: server = SCGIServerProxy('scgi://localhost:7000/')
|
||||
# server = SCGIServerProxy('scgi:///path/to/scgi.sock')
|
||||
# print server.system.listMethods()
|
||||
# mc = xmlrpclib.MultiCall(server)
|
||||
# mc.get_up_rate()
|
||||
# mc.get_down_rate()
|
||||
# print mc()
|
||||
#
|
||||
#
|
||||
#
|
||||
# This program is free software; you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation; either version 2 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# This program is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with this program; if not, write to the Free Software
|
||||
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
|
||||
#
|
||||
# In addition, as a special exception, the copyright holders give
|
||||
# permission to link the code of portions of this program with the
|
||||
# OpenSSL library under certain conditions as described in each
|
||||
# individual source file, and distribute linked combinations
|
||||
# including the two.
|
||||
#
|
||||
# You must obey the GNU General Public License in all respects for
|
||||
# all of the code used other than OpenSSL. If you modify file(s)
|
||||
# with this exception, you may extend this exception to your version
|
||||
# of the file(s), but you are not obligated to do so. If you do not
|
||||
# wish to do so, delete this exception statement from your version.
|
||||
# If you delete this exception statement from all source files in the
|
||||
# program, then also delete it here.
|
||||
#
|
||||
#
|
||||
#
|
||||
# Portions based on Python's xmlrpclib:
|
||||
#
|
||||
# Copyright (c) 1999-2002 by Secret Labs AB
|
||||
# Copyright (c) 1999-2002 by Fredrik Lundh
|
||||
#
|
||||
# By obtaining, using, and/or copying this software and/or its
|
||||
# associated documentation, you agree that you have read, understood,
|
||||
# and will comply with the following terms and conditions:
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software and
|
||||
# its associated documentation for any purpose and without fee is
|
||||
# hereby granted, provided that the above copyright notice appears in
|
||||
# all copies, and that both that copyright notice and this permission
|
||||
# notice appear in supporting documentation, and that the name of
|
||||
# Secret Labs AB or the author not be used in advertising or publicity
|
||||
# pertaining to distribution of the software without specific, written
|
||||
# prior permission.
|
||||
#
|
||||
# SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
|
||||
# TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
|
||||
# ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
|
||||
# BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
|
||||
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
|
||||
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
|
||||
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
|
||||
# OF THIS SOFTWARE.
|
||||
|
||||
import httplib
|
||||
import re
|
||||
import socket
|
||||
import urllib
|
||||
import xmlrpclib
|
||||
import errno
|
||||
|
||||
|
||||
class SCGITransport(xmlrpclib.Transport):
|
||||
# Added request() from Python 2.7 xmlrpclib here to backport to Python 2.6
|
||||
def request(self, host, handler, request_body, verbose=0):
|
||||
#retry request once if cached connection has gone cold
|
||||
for i in (0, 1):
|
||||
try:
|
||||
return self.single_request(host, handler, request_body, verbose)
|
||||
except socket.error, e:
|
||||
if i or e.errno not in (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE):
|
||||
raise
|
||||
except httplib.BadStatusLine: #close after we sent request
|
||||
if i:
|
||||
raise
|
||||
|
||||
def single_request(self, host, handler, request_body, verbose=0):
|
||||
# Add SCGI headers to the request.
|
||||
headers = {'CONTENT_LENGTH': str(len(request_body)), 'SCGI': '1'}
|
||||
header = '\x00'.join(('%s\x00%s' % item for item in headers.iteritems())) + '\x00'
|
||||
header = '%d:%s' % (len(header), header)
|
||||
request_body = '%s,%s' % (header, request_body)
|
||||
|
||||
sock = None
|
||||
|
||||
try:
|
||||
if host:
|
||||
host, port = urllib.splitport(host)
|
||||
addrinfo = socket.getaddrinfo(host, int(port), socket.AF_INET,
|
||||
socket.SOCK_STREAM)
|
||||
sock = socket.socket(*addrinfo[0][:3])
|
||||
sock.connect(addrinfo[0][4])
|
||||
else:
|
||||
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
|
||||
sock.connect(handler)
|
||||
|
||||
self.verbose = verbose
|
||||
|
||||
sock.send(request_body)
|
||||
return self.parse_response(sock.makefile())
|
||||
finally:
|
||||
if sock:
|
||||
sock.close()
|
||||
|
||||
def parse_response(self, response):
|
||||
p, u = self.getparser()
|
||||
|
||||
response_body = ''
|
||||
while True:
|
||||
data = response.read(1024)
|
||||
if not data:
|
||||
break
|
||||
response_body += data
|
||||
|
||||
# Remove SCGI headers from the response.
|
||||
response_header, response_body = re.split(r'\n\s*?\n', response_body,
|
||||
maxsplit=1)
|
||||
|
||||
if self.verbose:
|
||||
print 'body:', repr(response_body)
|
||||
|
||||
p.feed(response_body)
|
||||
p.close()
|
||||
|
||||
return u.close()
|
||||
|
||||
|
||||
class SCGIServerProxy(xmlrpclib.ServerProxy):
|
||||
def __init__(self, uri, transport=None, encoding=None, verbose=False,
|
||||
allow_none=False, use_datetime=False):
|
||||
type, uri = urllib.splittype(uri)
|
||||
        if type not in ('scgi',):
|
||||
raise IOError('unsupported XML-RPC protocol')
|
||||
self.__host, self.__handler = urllib.splithost(uri)
|
||||
if not self.__handler:
|
||||
self.__handler = '/'
|
||||
|
||||
if transport is None:
|
||||
transport = SCGITransport(use_datetime=use_datetime)
|
||||
self.__transport = transport
|
||||
|
||||
self.__encoding = encoding
|
||||
self.__verbose = verbose
|
||||
self.__allow_none = allow_none
|
||||
|
||||
def __close(self):
|
||||
self.__transport.close()
|
||||
|
||||
def __request(self, methodname, params):
|
||||
# call a method on the remote server
|
||||
|
||||
request = xmlrpclib.dumps(params, methodname, encoding=self.__encoding,
|
||||
allow_none=self.__allow_none)
|
||||
|
||||
response = self.__transport.request(
|
||||
self.__host,
|
||||
self.__handler,
|
||||
request,
|
||||
verbose=self.__verbose
|
||||
)
|
||||
|
||||
if len(response) == 1:
|
||||
response = response[0]
|
||||
|
||||
return response
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
"<SCGIServerProxy for %s%s>" %
|
||||
(self.__host, self.__handler)
|
||||
)
|
||||
|
||||
__str__ = __repr__
|
||||
|
||||
def __getattr__(self, name):
|
||||
# magic method dispatcher
|
||||
return xmlrpclib._Method(self.__request, name)
|
||||
|
||||
# note: to call a remote object with an non-standard name, use
|
||||
# result getattr(server, "strange-python-name")(args)
|
||||
|
||||
def __call__(self, attr):
|
||||
"""A workaround to get special attributes on the ServerProxy
|
||||
without interfering with the magic __getattr__
|
||||
"""
|
||||
if attr == "close":
|
||||
return self.__close
|
||||
elif attr == "transport":
|
||||
return self.__transport
|
||||
raise AttributeError("Attribute %r not found" % (attr,))
|
|
@ -0,0 +1,98 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
# from rtorrent.rpc import Method
|
||||
import rpc
|
||||
|
||||
from common import safe_repr
|
||||
|
||||
Method = rpc.Method
|
||||
|
||||
|
||||
class Peer:
|
||||
"""Represents an individual peer within a L{Torrent} instance."""
|
||||
def __init__(self, _rt_obj, info_hash, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent the peer is associated with
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
self.rpc_id = "{0}:p{1}".format(
|
||||
self.info_hash, self.id) # : unique id to pass to rTorrent
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Peer(id={0})", self.id)
|
||||
|
||||
def update(self):
|
||||
"""Refresh peer data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
        multicall = rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
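
# Example usage (a sketch): Peer objects are normally built by
# Torrent.get_peers() rather than created directly, e.g.
#
#   for p in torrent.get_peers():
#       print(p.address, p.down_rate)   # attribute names come from the
#                                       # retriever methods listed below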
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(Peer, 'is_preferred', 'p.is_preferred',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_down_rate', 'p.get_down_rate'),
|
||||
Method(Peer, 'is_unwanted', 'p.is_unwanted',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_peer_total', 'p.get_peer_total'),
|
||||
Method(Peer, 'get_peer_rate', 'p.get_peer_rate'),
|
||||
Method(Peer, 'get_port', 'p.get_port'),
|
||||
Method(Peer, 'is_snubbed', 'p.is_snubbed',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_id_html', 'p.get_id_html'),
|
||||
Method(Peer, 'get_up_rate', 'p.get_up_rate'),
|
||||
Method(Peer, 'is_banned', 'p.banned',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_completed_percent', 'p.get_completed_percent'),
|
||||
Method(Peer, 'completed_percent', 'p.completed_percent'),
|
||||
Method(Peer, 'get_id', 'p.get_id'),
|
||||
Method(Peer, 'is_obfuscated', 'p.is_obfuscated',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_down_total', 'p.get_down_total'),
|
||||
Method(Peer, 'get_client_version', 'p.get_client_version'),
|
||||
Method(Peer, 'get_address', 'p.get_address'),
|
||||
Method(Peer, 'is_incoming', 'p.is_incoming',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'is_encrypted', 'p.is_encrypted',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Peer, 'get_options_str', 'p.get_options_str'),
|
||||
Method(Peer, 'get_client_version', 'p.client_version'),
|
||||
Method(Peer, 'get_up_total', 'p.get_up_total'),
|
||||
|
||||
# MODIFIERS
|
||||
]
|
|
@ -0,0 +1,319 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import inspect
|
||||
import lib.rtorrent
|
||||
import re
|
||||
from lib.rtorrent.common import bool_to_int, convert_version_tuple_to_str,\
|
||||
safe_repr
|
||||
from lib.rtorrent.err import MethodError
|
||||
from lib.rtorrent.compat import xmlrpclib
|
||||
|
||||
|
||||
def get_varname(rpc_call):
|
||||
"""Transform rpc method into variable name.
|
||||
|
||||
@newfield example: Example
|
||||
@example: if the name of the rpc method is 'p.get_down_rate', the variable
|
||||
name will be 'down_rate'
|
||||
"""
|
||||
# extract variable name from xmlrpc func name
|
||||
r = re.search(
|
||||
"([ptdf]\.|system\.|get\_|is\_|set\_)+([^=]*)", rpc_call, re.I)
|
||||
if r:
|
||||
return(r.groups()[-1])
|
||||
else:
|
||||
return(None)
|
||||
|
||||
|
||||
def _handle_unavailable_rpc_method(method, rt_obj):
|
||||
msg = "Method isn't available."
|
||||
if rt_obj._get_client_version_tuple() < method.min_version:
|
||||
msg = "This method is only available in " \
|
||||
"RTorrent version v{0} or later".format(
|
||||
convert_version_tuple_to_str(method.min_version))
|
||||
|
||||
raise MethodError(msg)
|
||||
|
||||
|
||||
class DummyClass:
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
|
||||
class Method:
|
||||
"""Represents an individual RPC method"""
|
||||
|
||||
def __init__(self, _class, method_name,
|
||||
rpc_call, docstring=None, varname=None, **kwargs):
|
||||
self._class = _class # : Class this method is associated with
|
||||
self.class_name = _class.__name__
|
||||
self.method_name = method_name # : name of public-facing method
|
||||
self.rpc_call = rpc_call # : name of rpc method
|
||||
self.docstring = docstring # : docstring for rpc method (optional)
|
||||
self.varname = varname # : variable for the result of the method call, usually set to self.varname
|
||||
self.min_version = kwargs.get("min_version", (
|
||||
0, 0, 0)) # : Minimum version of rTorrent required
|
||||
self.boolean = kwargs.get("boolean", False) # : returns boolean value?
|
||||
self.post_process_func = kwargs.get(
|
||||
"post_process_func", None) # : custom post process function
|
||||
self.aliases = kwargs.get(
|
||||
"aliases", []) # : aliases for method (optional)
|
||||
self.required_args = []
|
||||
#: Arguments required when calling the method (not utilized)
|
||||
|
||||
self.method_type = self._get_method_type()
|
||||
|
||||
if self.varname is None:
|
||||
self.varname = get_varname(self.rpc_call)
|
||||
assert self.varname is not None, "Couldn't get variable name."
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Method(method_name='{0}', rpc_call='{1}')",
|
||||
self.method_name, self.rpc_call)
|
||||
|
||||
def _get_method_type(self):
|
||||
"""Determine whether method is a modifier or a retriever"""
|
||||
if self.method_name[:4] == "set_": return('m') # modifier
|
||||
else:
|
||||
return('r') # retriever
|
||||
|
||||
def is_modifier(self):
|
||||
if self.method_type == 'm':
|
||||
return(True)
|
||||
else:
|
||||
return(False)
|
||||
|
||||
def is_retriever(self):
|
||||
if self.method_type == 'r':
|
||||
return(True)
|
||||
else:
|
||||
return(False)
|
||||
|
||||
def is_available(self, rt_obj):
|
||||
if rt_obj._get_client_version_tuple() < self.min_version or \
|
||||
self.rpc_call not in rt_obj._get_rpc_methods():
|
||||
return(False)
|
||||
else:
|
||||
return(True)
|
||||
|
||||
|
||||
class Multicall:
|
||||
def __init__(self, class_obj, **kwargs):
|
||||
self.class_obj = class_obj
|
||||
if class_obj.__class__.__name__ == "RTorrent":
|
||||
self.rt_obj = class_obj
|
||||
else:
|
||||
self.rt_obj = class_obj._rt_obj
|
||||
self.calls = []
|
||||
|
||||
def add(self, method, *args):
|
||||
"""Add call to multicall
|
||||
|
||||
@param method: L{Method} instance or name of raw RPC method
|
||||
@type method: Method or str
|
||||
|
||||
@param args: call arguments
|
||||
"""
|
||||
# if a raw rpc method was given instead of a Method instance,
|
||||
# try and find the instance for it. And if all else fails, create a
|
||||
# dummy Method instance
|
||||
if isinstance(method, str):
|
||||
result = find_method(method)
|
||||
# if result not found
|
||||
if result == -1:
|
||||
method = Method(DummyClass, method, method)
|
||||
else:
|
||||
method = result
|
||||
|
||||
# ensure method is available before adding
|
||||
if not method.is_available(self.rt_obj):
|
||||
_handle_unavailable_rpc_method(method, self.rt_obj)
|
||||
|
||||
self.calls.append((method, args))
|
||||
|
||||
def list_calls(self):
|
||||
for c in self.calls:
|
||||
print(c)
|
||||
|
||||
def call(self):
|
||||
"""Execute added multicall calls
|
||||
|
||||
@return: the results (post-processed), in the order they were added
|
||||
@rtype: tuple
|
||||
"""
|
||||
m = xmlrpclib.MultiCall(self.rt_obj._get_conn())
|
||||
for call in self.calls:
|
||||
method, args = call
|
||||
rpc_call = getattr(method, "rpc_call")
|
||||
getattr(m, rpc_call)(*args)
|
||||
|
||||
results = m()
|
||||
results = tuple(results)
|
||||
results_processed = []
|
||||
|
||||
for r, c in zip(results, self.calls):
|
||||
method = c[0] # Method instance
|
||||
result = process_result(method, r)
|
||||
results_processed.append(result)
|
||||
# assign result to class_obj
|
||||
exists = hasattr(self.class_obj, method.varname)
|
||||
if not exists or not inspect.ismethod(getattr(self.class_obj, method.varname)):
|
||||
setattr(self.class_obj, method.varname, result)
|
||||
|
||||
return(tuple(results_processed))
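
# Example usage (a sketch, assuming 'rt' is a connected RTorrent instance and
# that the raw calls below are supported by the connected rTorrent build):
#
#   m = Multicall(rt)
#   m.add("get_up_rate")
#   m.add("get_down_rate")
#   up_rate, down_rate = m.call()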
|
||||
|
||||
|
||||
def call_method(class_obj, method, *args):
|
||||
"""Handles single RPC calls
|
||||
|
||||
@param class_obj: Peer/File/Torrent/Tracker/RTorrent instance
|
||||
@type class_obj: object
|
||||
|
||||
@param method: L{Method} instance or name of raw RPC method
|
||||
@type method: Method or str
|
||||
"""
|
||||
if method.is_retriever():
|
||||
args = args[:-1]
|
||||
else:
|
||||
assert args[-1] is not None, "No argument given."
|
||||
|
||||
if class_obj.__class__.__name__ == "RTorrent":
|
||||
rt_obj = class_obj
|
||||
else:
|
||||
rt_obj = class_obj._rt_obj
|
||||
|
||||
# check if rpc method is even available
|
||||
if not method.is_available(rt_obj):
|
||||
_handle_unavailable_rpc_method(method, rt_obj)
|
||||
|
||||
m = Multicall(class_obj)
|
||||
m.add(method, *args)
|
||||
# only added one method, only getting one result back
|
||||
ret_value = m.call()[0]
|
||||
|
||||
####### OBSOLETE ##########################################################
|
||||
# if method.is_retriever():
|
||||
# #value = process_result(method, ret_value)
|
||||
# value = ret_value #MultiCall already processed the result
|
||||
# else:
|
||||
# # we're setting the user's input to method.varname
|
||||
# # but we'll return the value that xmlrpc gives us
|
||||
# value = process_result(method, args[-1])
|
||||
##########################################################################
|
||||
|
||||
return(ret_value)
|
||||
|
||||
|
||||
def find_method(rpc_call):
|
||||
"""Return L{Method} instance associated with given RPC call"""
|
||||
method_lists = [
|
||||
lib.rtorrent.methods,
|
||||
lib.rtorrent.file.methods,
|
||||
lib.rtorrent.tracker.methods,
|
||||
lib.rtorrent.peer.methods,
|
||||
lib.rtorrent.torrent.methods,
|
||||
]
|
||||
|
||||
for l in method_lists:
|
||||
for m in l:
|
||||
if m.rpc_call.lower() == rpc_call.lower():
|
||||
return(m)
|
||||
|
||||
return(-1)
|
||||
|
||||
|
||||
def process_result(method, result):
|
||||
"""Process given C{B{result}} based on flags set in C{B{method}}
|
||||
|
||||
@param method: L{Method} instance
|
||||
@type method: Method
|
||||
|
||||
@param result: result to be processed (the result of given L{Method} instance)
|
||||
|
||||
@note: Supported Processing:
|
||||
- boolean - convert ones and zeros returned by rTorrent and
|
||||
convert to python boolean values
|
||||
"""
|
||||
# handle custom post processing function
|
||||
if method.post_process_func is not None:
|
||||
result = method.post_process_func(result)
|
||||
|
||||
# is boolean?
|
||||
if method.boolean:
|
||||
if result in [1, '1']:
|
||||
result = True
|
||||
elif result in [0, '0']:
|
||||
result = False
|
||||
|
||||
return(result)
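
# For example, a Method declared with boolean=True maps rTorrent's '1'/'0'
# replies onto Python booleans: process_result(method, '1') -> True.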
|
||||
|
||||
|
||||
def _build_rpc_methods(class_, method_list):
|
||||
"""Build glorified aliases to raw RPC methods"""
|
||||
instance = None
|
||||
if not inspect.isclass(class_):
|
||||
instance = class_
|
||||
class_ = instance.__class__
|
||||
|
||||
for m in method_list:
|
||||
class_name = m.class_name
|
||||
if class_name != class_.__name__:
|
||||
continue
|
||||
|
||||
if class_name == "RTorrent":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, bool_to_int(arg))
|
||||
elif class_name == "Torrent":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
elif class_name in ["Tracker", "File"]:
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
|
||||
elif class_name == "Peer":
|
||||
caller = lambda self, arg = None, method = m:\
|
||||
call_method(self, method, self.rpc_id,
|
||||
bool_to_int(arg))
|
||||
|
||||
elif class_name == "Group":
|
||||
caller = lambda arg = None, method = m: \
|
||||
call_method(instance, method, bool_to_int(arg))
|
||||
|
||||
if m.docstring is None:
|
||||
m.docstring = ""
|
||||
|
||||
# print(m)
|
||||
docstring = """{0}
|
||||
|
||||
@note: Variable where the result for this method is stored: {1}.{2}""".format(
|
||||
m.docstring,
|
||||
class_name,
|
||||
m.varname)
|
||||
|
||||
caller.__doc__ = docstring
|
||||
|
||||
for method_name in [m.method_name] + list(m.aliases):
|
||||
if instance is None:
|
||||
setattr(class_, method_name, caller)
|
||||
else:
|
||||
setattr(instance, method_name, caller)
|
|
@ -0,0 +1,517 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
import rpc
|
||||
# from rtorrent.rpc import Method
|
||||
import peer
|
||||
import tracker
|
||||
import file
|
||||
import compat
|
||||
|
||||
from common import safe_repr
|
||||
|
||||
Peer = peer.Peer
|
||||
Tracker = tracker.Tracker
|
||||
File = file.File
|
||||
Method = rpc.Method
|
||||
|
||||
|
||||
class Torrent:
|
||||
"""Represents an individual torrent within a L{RTorrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent
|
||||
self.rpc_id = self.info_hash # : unique id to pass to rTorrent
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
self.peers = []
|
||||
self.trackers = []
|
||||
self.files = []
|
||||
|
||||
self._call_custom_methods()
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Torrent(info_hash=\"{0}\" name=\"{1}\")",
|
||||
self.info_hash, self.name)
|
||||
|
||||
def _call_custom_methods(self):
|
||||
"""only calls methods that check instance variables."""
|
||||
self._is_hash_checking_queued()
|
||||
self._is_started()
|
||||
self._is_paused()
|
||||
|
||||
def get_peers(self):
|
||||
"""Get list of Peer instances for given torrent.
|
||||
|
||||
@return: L{Peer} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.peers
|
||||
"""
|
||||
self.peers = []
|
||||
retriever_methods = [m for m in peer.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
# need to leave 2nd arg empty (dunno why)
|
||||
m = rpc.Multicall(self)
|
||||
m.add("p.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rpc.process_result(m, r)
|
||||
|
||||
self.peers.append(Peer(
|
||||
self._rt_obj, self.info_hash, **results_dict))
|
||||
|
||||
return(self.peers)
|
||||
|
||||
def get_trackers(self):
|
||||
"""Get list of Tracker instances for given torrent.
|
||||
|
||||
@return: L{Tracker} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.trackers
|
||||
"""
|
||||
self.trackers = []
|
||||
retriever_methods = [m for m in tracker.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
|
||||
# need to leave 2nd arg empty (dunno why)
|
||||
m = rpc.Multicall(self)
|
||||
m.add("t.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rpc.process_result(m, r)
|
||||
|
||||
self.trackers.append(Tracker(
|
||||
self._rt_obj, self.info_hash, **results_dict))
|
||||
|
||||
return(self.trackers)
|
||||
|
||||
def get_files(self):
|
||||
"""Get list of File instances for given torrent.
|
||||
|
||||
@return: L{File} instances
|
||||
@rtype: list
|
||||
|
||||
@note: also assigns return value to self.files
|
||||
"""
|
||||
|
||||
self.files = []
|
||||
retriever_methods = [m for m in file.methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
# 2nd arg can be anything, but it'll return all files in torrent
|
||||
# regardless
|
||||
m = rpc.Multicall(self)
|
||||
m.add("f.multicall", self.info_hash, "",
|
||||
*[method.rpc_call + "=" for method in retriever_methods])
|
||||
|
||||
results = m.call()[0] # only sent one call, only need first result
|
||||
|
||||
offset_method_index = retriever_methods.index(
|
||||
rpc.find_method("f.get_offset"))
|
||||
|
||||
# make a list of the offsets of all the files, sort appropriately
|
||||
offset_list = sorted([r[offset_method_index] for r in results])
|
||||
|
||||
for result in results:
|
||||
results_dict = {}
|
||||
# build results_dict
|
||||
for m, r in zip(retriever_methods, result):
|
||||
results_dict[m.varname] = rpc.process_result(m, r)
|
||||
|
||||
# get proper index positions for each file (based on the file
|
||||
# offset)
|
||||
f_index = offset_list.index(results_dict["offset"])
|
||||
|
||||
self.files.append(File(self._rt_obj, self.info_hash,
|
||||
f_index, **results_dict))
|
||||
|
||||
return(self.files)
|
||||
|
||||
def set_directory(self, d):
|
||||
"""Modify download directory
|
||||
|
||||
@note: Needs to stop torrent in order to change the directory.
|
||||
Also doesn't restart after directory is set, that must be called
|
||||
separately.
|
||||
"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.set_directory", d)
|
||||
|
||||
self.directory = m.call()[-1]
|
||||
|
||||
def set_directory_base(self, d):
|
||||
"""Modify base download directory
|
||||
|
||||
@note: Needs to stop torrent in order to change the directory.
|
||||
Also doesn't restart after directory is set, that must be called
|
||||
separately.
|
||||
"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.set_directory_base", d)
|
||||
|
||||
def start(self):
|
||||
"""Start the torrent"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_start")
|
||||
self.multicall_add(m, "d.is_active")
|
||||
|
||||
self.active = m.call()[-1]
|
||||
return(self.active)
|
||||
|
||||
def stop(self):
|
||||
""""Stop the torrent"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.try_stop")
|
||||
self.multicall_add(m, "d.is_active")
|
||||
|
||||
self.active = m.call()[-1]
|
||||
return(self.active)
|
||||
|
||||
def pause(self):
|
||||
"""Pause the torrent"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.pause")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def resume(self):
|
||||
"""Resume the torrent"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.resume")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def close(self):
|
||||
"""Close the torrent and it's files"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.close")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def erase(self):
|
||||
"""Delete the torrent
|
||||
|
||||
@note: doesn't delete the downloaded files"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.erase")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def check_hash(self):
|
||||
"""(Re)hash check the torrent"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.check_hash")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def poll(self):
|
||||
"""poll rTorrent to get latest peer/tracker/file information"""
|
||||
self.get_peers()
|
||||
self.get_trackers()
|
||||
self.get_files()
|
||||
|
||||
def update(self):
|
||||
"""Refresh torrent data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
# custom functions (only call private methods, since they only check
|
||||
# local variables and are therefore faster)
|
||||
self._call_custom_methods()
|
||||
|
||||
def accept_seeders(self, accept_seeds):
|
||||
"""Enable/disable whether the torrent connects to seeders
|
||||
|
||||
@param accept_seeds: enable/disable accepting seeders
|
||||
@type accept_seeds: bool"""
|
||||
if accept_seeds:
|
||||
call = "d.accepting_seeders.enable"
|
||||
else:
|
||||
call = "d.accepting_seeders.disable"
|
||||
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, call)
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def announce(self):
|
||||
"""Announce torrent info to tracker(s)"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.tracker_announce")
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
@staticmethod
|
||||
def _assert_custom_key_valid(key):
|
||||
assert type(key) == int and key > 0 and key < 6, \
|
||||
"key must be an integer between 1-5"
|
||||
|
||||
def get_custom(self, key):
|
||||
"""
|
||||
Get custom value
|
||||
|
||||
@param key: the index for the custom field (between 1-5)
|
||||
@type key: int
|
||||
|
||||
@rtype: str
|
||||
"""
|
||||
|
||||
self._assert_custom_key_valid(key)
|
||||
m = rpc.Multicall(self)
|
||||
|
||||
field = "custom{0}".format(key)
|
||||
self.multicall_add(m, "d.get_{0}".format(field))
|
||||
setattr(self, field, m.call()[-1])
|
||||
|
||||
return (getattr(self, field))
|
||||
|
||||
def set_custom(self, key, value):
|
||||
"""
|
||||
Set custom value
|
||||
|
||||
@param key: the index for the custom field (between 1-5)
|
||||
@type key: int
|
||||
|
||||
@param value: the value to be stored
|
||||
@type value: str
|
||||
|
||||
@return: if successful, value will be returned
|
||||
@rtype: str
|
||||
"""
|
||||
|
||||
self._assert_custom_key_valid(key)
|
||||
m = rpc.Multicall(self)
|
||||
|
||||
self.multicall_add(m, "d.set_custom{0}".format(key), value)
|
||||
|
||||
return(m.call()[-1])
|
||||
|
||||
def set_visible(self, view, visible=True):
|
||||
p = self._rt_obj._get_conn()
|
||||
|
||||
if visible:
|
||||
return p.view.set_visible(self.info_hash, view)
|
||||
else:
|
||||
return p.view.set_not_visible(self.info_hash, view)
|
||||
|
||||
############################################################################
|
||||
# CUSTOM METHODS (Not part of the official rTorrent API)
|
||||
##########################################################################
|
||||
def _is_hash_checking_queued(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
# if hashing == 3, then torrent is marked for hash checking
|
||||
# if hash_checking == False, then torrent is waiting to be checked
|
||||
self.hash_checking_queued = (self.hashing == 3 and
|
||||
self.hash_checking is False)
|
||||
|
||||
return(self.hash_checking_queued)
|
||||
|
||||
def is_hash_checking_queued(self):
|
||||
"""Check if torrent is waiting to be hash checked
|
||||
|
||||
@note: Variable where the result for this method is stored Torrent.hash_checking_queued"""
|
||||
m = rpc.Multicall(self)
|
||||
self.multicall_add(m, "d.get_hashing")
|
||||
self.multicall_add(m, "d.is_hash_checking")
|
||||
results = m.call()
|
||||
|
||||
setattr(self, "hashing", results[0])
|
||||
setattr(self, "hash_checking", results[1])
|
||||
|
||||
return(self._is_hash_checking_queued())
|
||||
|
||||
def _is_paused(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
self.paused = (self.state == 0)
|
||||
return(self.paused)
|
||||
|
||||
def is_paused(self):
|
||||
"""Check if torrent is paused
|
||||
|
||||
@note: Variable where the result for this method is stored: Torrent.paused"""
|
||||
self.get_state()
|
||||
return(self._is_paused())
|
||||
|
||||
def _is_started(self):
|
||||
"""Only checks instance variables, shouldn't be called directly"""
|
||||
self.started = (self.state == 1)
|
||||
return(self.started)
|
||||
|
||||
def is_started(self):
|
||||
"""Check if torrent is started
|
||||
|
||||
@note: Variable where the result for this method is stored: Torrent.started"""
|
||||
self.get_state()
|
||||
return(self._is_started())
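
# Example usage (a sketch, assuming 'rt' is a connected RTorrent instance;
# get_torrents() lives on the RTorrent class, not in this file):
#
#   for t in rt.get_torrents():
#       t.update()
#       print(t.name, t.is_complete())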
|
||||
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(Torrent, 'is_hash_checked', 'd.is_hash_checked',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'is_hash_checking', 'd.is_hash_checking',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_max', 'd.get_peers_max'),
|
||||
Method(Torrent, 'get_tracker_focus', 'd.get_tracker_focus'),
|
||||
Method(Torrent, 'get_skip_total', 'd.get_skip_total'),
|
||||
Method(Torrent, 'get_state', 'd.get_state'),
|
||||
Method(Torrent, 'get_peer_exchange', 'd.get_peer_exchange'),
|
||||
Method(Torrent, 'get_down_rate', 'd.get_down_rate'),
|
||||
Method(Torrent, 'get_connection_seed', 'd.get_connection_seed'),
|
||||
Method(Torrent, 'get_uploads_max', 'd.get_uploads_max'),
|
||||
Method(Torrent, 'get_priority_str', 'd.get_priority_str'),
|
||||
Method(Torrent, 'is_open', 'd.is_open',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_min', 'd.get_peers_min'),
|
||||
Method(Torrent, 'get_peers_complete', 'd.get_peers_complete'),
|
||||
Method(Torrent, 'get_tracker_numwant', 'd.get_tracker_numwant'),
|
||||
Method(Torrent, 'get_connection_current', 'd.get_connection_current'),
|
||||
Method(Torrent, 'is_complete', 'd.get_complete',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_peers_connected', 'd.get_peers_connected'),
|
||||
Method(Torrent, 'get_chunk_size', 'd.get_chunk_size'),
|
||||
Method(Torrent, 'get_state_counter', 'd.get_state_counter'),
|
||||
Method(Torrent, 'get_base_filename', 'd.get_base_filename'),
|
||||
Method(Torrent, 'get_state_changed', 'd.get_state_changed'),
|
||||
Method(Torrent, 'get_peers_not_connected', 'd.get_peers_not_connected'),
|
||||
Method(Torrent, 'get_directory', 'd.get_directory'),
|
||||
Method(Torrent, 'is_incomplete', 'd.incomplete',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_tracker_size', 'd.get_tracker_size'),
|
||||
Method(Torrent, 'is_multi_file', 'd.is_multi_file',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_local_id', 'd.get_local_id'),
|
||||
Method(Torrent, 'get_ratio', 'd.get_ratio',
|
||||
post_process_func=lambda x: x / 1000.0,
|
||||
),
|
||||
Method(Torrent, 'get_loaded_file', 'd.get_loaded_file'),
|
||||
Method(Torrent, 'get_max_file_size', 'd.get_max_file_size'),
|
||||
Method(Torrent, 'get_size_chunks', 'd.get_size_chunks'),
|
||||
Method(Torrent, 'is_pex_active', 'd.is_pex_active',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_hashing', 'd.get_hashing'),
|
||||
Method(Torrent, 'get_bitfield', 'd.get_bitfield'),
|
||||
Method(Torrent, 'get_local_id_html', 'd.get_local_id_html'),
|
||||
Method(Torrent, 'get_connection_leech', 'd.get_connection_leech'),
|
||||
Method(Torrent, 'get_peers_accounted', 'd.get_peers_accounted'),
|
||||
Method(Torrent, 'get_message', 'd.get_message'),
|
||||
Method(Torrent, 'is_active', 'd.is_active',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_size_bytes', 'd.get_size_bytes'),
|
||||
Method(Torrent, 'get_ignore_commands', 'd.get_ignore_commands'),
|
||||
Method(Torrent, 'get_creation_date', 'd.get_creation_date'),
|
||||
Method(Torrent, 'get_base_path', 'd.get_base_path'),
|
||||
Method(Torrent, 'get_left_bytes', 'd.get_left_bytes'),
|
||||
Method(Torrent, 'get_size_files', 'd.get_size_files'),
|
||||
Method(Torrent, 'get_size_pex', 'd.get_size_pex'),
|
||||
Method(Torrent, 'is_private', 'd.is_private',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, 'get_max_size_pex', 'd.get_max_size_pex'),
|
||||
Method(Torrent, 'get_num_chunks_hashed', 'd.get_chunks_hashed',
|
||||
aliases=("get_chunks_hashed",)),
|
||||
Method(Torrent, 'get_num_chunks_wanted', 'd.wanted_chunks'),
|
||||
Method(Torrent, 'get_priority', 'd.get_priority'),
|
||||
Method(Torrent, 'get_skip_rate', 'd.get_skip_rate'),
|
||||
Method(Torrent, 'get_completed_bytes', 'd.get_completed_bytes'),
|
||||
Method(Torrent, 'get_name', 'd.get_name'),
|
||||
Method(Torrent, 'get_completed_chunks', 'd.get_completed_chunks'),
|
||||
Method(Torrent, 'get_throttle_name', 'd.get_throttle_name'),
|
||||
Method(Torrent, 'get_free_diskspace', 'd.get_free_diskspace'),
|
||||
Method(Torrent, 'get_directory_base', 'd.get_directory_base'),
|
||||
Method(Torrent, 'get_hashing_failed', 'd.get_hashing_failed'),
|
||||
Method(Torrent, 'get_tied_to_file', 'd.get_tied_to_file'),
|
||||
Method(Torrent, 'get_down_total', 'd.get_down_total'),
|
||||
Method(Torrent, 'get_bytes_done', 'd.get_bytes_done'),
|
||||
Method(Torrent, 'get_up_rate', 'd.get_up_rate'),
|
||||
Method(Torrent, 'get_up_total', 'd.get_up_total'),
|
||||
Method(Torrent, 'is_accepting_seeders', 'd.accepting_seeders',
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "get_chunks_seen", "d.chunks_seen",
|
||||
min_version=(0, 9, 1),
|
||||
),
|
||||
Method(Torrent, "is_partially_done", "d.is_partially_done",
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "is_not_partially_done", "d.is_not_partially_done",
|
||||
boolean=True,
|
||||
),
|
||||
Method(Torrent, "get_time_started", "d.timestamp.started"),
|
||||
Method(Torrent, "get_custom1", "d.get_custom1"),
|
||||
Method(Torrent, "get_custom2", "d.get_custom2"),
|
||||
Method(Torrent, "get_custom3", "d.get_custom3"),
|
||||
Method(Torrent, "get_custom4", "d.get_custom4"),
|
||||
Method(Torrent, "get_custom5", "d.get_custom5"),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Torrent, 'set_uploads_max', 'd.set_uploads_max'),
|
||||
Method(Torrent, 'set_tied_to_file', 'd.set_tied_to_file'),
|
||||
Method(Torrent, 'set_tracker_numwant', 'd.set_tracker_numwant'),
|
||||
Method(Torrent, 'set_priority', 'd.set_priority'),
|
||||
Method(Torrent, 'set_peers_max', 'd.set_peers_max'),
|
||||
Method(Torrent, 'set_hashing_failed', 'd.set_hashing_failed'),
|
||||
Method(Torrent, 'set_message', 'd.set_message'),
|
||||
Method(Torrent, 'set_throttle_name', 'd.set_throttle_name'),
|
||||
Method(Torrent, 'set_peers_min', 'd.set_peers_min'),
|
||||
Method(Torrent, 'set_ignore_commands', 'd.set_ignore_commands'),
|
||||
Method(Torrent, 'set_max_file_size', 'd.set_max_file_size'),
|
||||
Method(Torrent, 'set_custom5', 'd.set_custom5'),
|
||||
Method(Torrent, 'set_custom4', 'd.set_custom4'),
|
||||
Method(Torrent, 'set_custom2', 'd.set_custom2'),
|
||||
Method(Torrent, 'set_custom1', 'd.set_custom1'),
|
||||
Method(Torrent, 'set_custom3', 'd.set_custom3'),
|
||||
Method(Torrent, 'set_connection_current', 'd.set_connection_current'),
|
||||
]
|
|
@@ -0,0 +1,138 @@
|
|||
# Copyright (c) 2013 Chris Lucas, <chris@chrisjlucas.com>
|
||||
# Permission is hereby granted, free of charge, to any person obtaining
|
||||
# a copy of this software and associated documentation files (the
|
||||
# "Software"), to deal in the Software without restriction, including
|
||||
# without limitation the rights to use, copy, modify, merge, publish,
|
||||
# distribute, sublicense, and/or sell copies of the Software, and to
|
||||
# permit persons to whom the Software is furnished to do so, subject to
|
||||
# the following conditions:
|
||||
#
|
||||
# The above copyright notice and this permission notice shall be
|
||||
# included in all copies or substantial portions of the Software.
|
||||
#
|
||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
|
||||
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
|
||||
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
|
||||
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
# from rtorrent.rpc import Method
|
||||
import rpc
|
||||
|
||||
from common import safe_repr
|
||||
|
||||
Method = rpc.Method
|
||||
|
||||
|
||||
class Tracker:
|
||||
"""Represents an individual tracker within a L{Torrent} instance."""
|
||||
|
||||
def __init__(self, _rt_obj, info_hash, **kwargs):
|
||||
self._rt_obj = _rt_obj
|
||||
self.info_hash = info_hash # : info hash for the torrent using this tracker
|
||||
for k in kwargs.keys():
|
||||
setattr(self, k, kwargs.get(k, None))
|
||||
|
||||
# for clarity's sake...
|
||||
self.index = self.group # : position of tracker within the torrent's tracker list
|
||||
self.rpc_id = "{0}:t{1}".format(
|
||||
self.info_hash, self.index) # : unique id to pass to rTorrent
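# (explanatory note, not part of the original file) rpc_id ends up looking like
# '<infohash>:t0', '<infohash>:t1', ..., which is the target form rTorrent's
# t.* XMLRPC calls expect when addressing an individual tracker slot.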
|
||||
|
||||
def __repr__(self):
|
||||
return safe_repr("Tracker(index={0}, url=\"{1}\")",
|
||||
self.index, self.url)
|
||||
|
||||
def enable(self):
|
||||
"""Alias for set_enabled("yes")"""
|
||||
self.set_enabled("yes")
|
||||
|
||||
def disable(self):
|
||||
"""Alias for set_enabled("no")"""
|
||||
self.set_enabled("no")
|
||||
|
||||
def update(self):
|
||||
"""Refresh tracker data
|
||||
|
||||
@note: All fields are stored as attributes to self.
|
||||
|
||||
@return: None
|
||||
"""
|
||||
multicall = rpc.Multicall(self)
|
||||
retriever_methods = [m for m in methods
|
||||
if m.is_retriever() and m.is_available(self._rt_obj)]
|
||||
for method in retriever_methods:
|
||||
multicall.add(method, self.rpc_id)
|
||||
|
||||
multicall.call()
|
||||
|
||||
methods = [
|
||||
# RETRIEVERS
|
||||
Method(Tracker, 'is_enabled', 't.is_enabled', boolean=True),
|
||||
Method(Tracker, 'get_id', 't.get_id'),
|
||||
Method(Tracker, 'get_scrape_incomplete', 't.get_scrape_incomplete'),
|
||||
Method(Tracker, 'is_open', 't.is_open', boolean=True),
|
||||
Method(Tracker, 'get_min_interval', 't.get_min_interval'),
|
||||
Method(Tracker, 'get_scrape_downloaded', 't.get_scrape_downloaded'),
|
||||
Method(Tracker, 'get_group', 't.get_group'),
|
||||
Method(Tracker, 'get_scrape_time_last', 't.get_scrape_time_last'),
|
||||
Method(Tracker, 'get_type', 't.get_type'),
|
||||
Method(Tracker, 'get_normal_interval', 't.get_normal_interval'),
|
||||
Method(Tracker, 'get_url', 't.get_url'),
|
||||
Method(Tracker, 'get_scrape_complete', 't.get_scrape_complete',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_activity_time_last', 't.activity_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_activity_time_next', 't.activity_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_failed_time_last', 't.failed_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_failed_time_next', 't.failed_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_success_time_last', 't.success_time_last',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'get_success_time_next', 't.success_time_next',
|
||||
min_version=(0, 8, 9),
|
||||
),
|
||||
Method(Tracker, 'can_scrape', 't.can_scrape',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'get_failed_counter', 't.failed_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'get_scrape_counter', 't.scrape_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'get_success_counter', 't.success_counter',
|
||||
min_version=(0, 8, 9)
|
||||
),
|
||||
Method(Tracker, 'is_usable', 't.is_usable',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'is_busy', 't.is_busy',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True
|
||||
),
|
||||
Method(Tracker, 'is_extra_tracker', 't.is_extra_tracker',
|
||||
min_version=(0, 9, 1),
|
||||
boolean=True,
|
||||
),
|
||||
Method(Tracker, "get_latest_sum_peers", "t.latest_sum_peers",
|
||||
min_version=(0, 9, 0)
|
||||
),
|
||||
Method(Tracker, "get_latest_new_peers", "t.latest_new_peers",
|
||||
min_version=(0, 9, 0)
|
||||
),
|
||||
|
||||
# MODIFIERS
|
||||
Method(Tracker, 'set_enabled', 't.set_enabled'),
|
||||
]
|
|
@@ -0,0 +1,142 @@
|
|||
#coding=utf8
|
||||
import urllib
|
||||
import urllib2
|
||||
import urlparse
|
||||
import cookielib
|
||||
import re
|
||||
import StringIO
|
||||
try:
|
||||
import json
|
||||
except ImportError:
|
||||
import simplejson as json
|
||||
|
||||
from upload import MultiPartForm
|
||||
|
||||
class UTorrentClient(object):
|
||||
|
||||
def __init__(self, base_url, username, password):
|
||||
self.base_url = base_url
|
||||
self.username = username
|
||||
self.password = password
|
||||
self.opener = self._make_opener('uTorrent', base_url, username, password)
|
||||
self.token = self._get_token()
|
||||
#TODO refresh token, when necessary
|
||||
|
||||
def _make_opener(self, realm, base_url, username, password):
|
||||
'''The uTorrent API needs HTTP Basic Auth and cookie support for token verification.'''
|
||||
|
||||
auth_handler = urllib2.HTTPBasicAuthHandler()
|
||||
auth_handler.add_password(realm=realm,
|
||||
uri=base_url,
|
||||
user=username,
|
||||
passwd=password)
|
||||
opener = urllib2.build_opener(auth_handler)
|
||||
urllib2.install_opener(opener)
|
||||
|
||||
cookie_jar = cookielib.CookieJar()
|
||||
cookie_handler = urllib2.HTTPCookieProcessor(cookie_jar)
|
||||
|
||||
handlers = [auth_handler, cookie_handler]
|
||||
opener = urllib2.build_opener(*handlers)
|
||||
return opener
|
||||
|
||||
def _get_token(self):
|
||||
url = urlparse.urljoin(self.base_url, 'token.html')
|
||||
response = self.opener.open(url)
|
||||
token_re = "<div id='token' style='display:none;'>([^<>]+)</div>"
|
||||
match = re.search(token_re, response.read())
|
||||
return match.group(1)
|
||||
|
||||
|
||||
def list(self, **kwargs):
|
||||
params = [('list', '1')]
|
||||
params += kwargs.items()
|
||||
return self._action(params)
|
||||
|
||||
def start(self, *hashes):
|
||||
params = [('action', 'start'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def stop(self, *hashes):
|
||||
params = [('action', 'stop'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def pause(self, *hashes):
|
||||
params = [('action', 'pause'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def forcestart(self, *hashes):
|
||||
params = [('action', 'forcestart'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def remove(self, *hashes):
|
||||
params = [('action', 'remove'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def removedata(self, *hashes):
|
||||
params = [('action', 'removedata'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def recheck(self, *hashes):
|
||||
params = [('action', 'recheck'),]
|
||||
for hash in hashes:
|
||||
params.append(('hash', hash))
|
||||
return self._action(params)
|
||||
|
||||
def getfiles(self, hash):
|
||||
params = [('action', 'getfiles'), ('hash', hash)]
|
||||
return self._action(params)
|
||||
|
||||
def getprops(self, hash):
|
||||
params = [('action', 'getprops'), ('hash', hash)]
|
||||
return self._action(params)
|
||||
|
||||
def setprio(self, hash, priority, *files):
|
||||
params = [('action', 'setprio'), ('hash', hash), ('p', str(priority))]
|
||||
for file_index in files:
|
||||
params.append(('f', str(file_index)))
|
||||
|
||||
return self._action(params)
|
||||
|
||||
def addfile(self, filename, filepath=None, bytes=None):
|
||||
params = [('action', 'add-file')]
|
||||
|
||||
form = MultiPartForm()
|
||||
if filepath is not None:
|
||||
file_handler = open(filepath)
|
||||
else:
|
||||
file_handler = StringIO.StringIO(bytes)
|
||||
|
||||
form.add_file('torrent_file', filename.encode('utf-8'), file_handler)
|
||||
|
||||
return self._action(params, str(form), form.get_content_type())
|
||||
|
||||
def _action(self, params, body=None, content_type=None):
|
||||
#about token, see https://github.com/bittorrent/webui/wiki/TokenSystem
|
||||
url = self.base_url + '?token=' + self.token + '&' + urllib.urlencode(params)
|
||||
request = urllib2.Request(url)
|
||||
|
||||
if body:
|
||||
request.add_data(body)
|
||||
request.add_header('Content-length', len(body))
|
||||
if content_type:
|
||||
request.add_header('Content-type', content_type)
|
||||
|
||||
try:
|
||||
response = self.opener.open(request)
|
||||
return response.code, json.loads(response.read())
|
||||
except urllib2.HTTPError,e:
|
||||
raise
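# --- usage sketch (not part of the original file) ---
# Minimal, hedged example of driving UTorrentClient; the URL, credentials and
# torrent hash below are placeholders and error handling is omitted.
#
#   client = UTorrentClient('http://localhost:8080/gui/', 'admin', 'secret')
#   code, data = client.list()          # returns (http status, decoded JSON)
#   for t in data.get('torrents', []):  # each entry is a list; [0] is the hash
#       print t[0]
#   client.stop('ABCDEF0123456789ABCDEF0123456789ABCDEF01')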
|
||||
|
|
@@ -0,0 +1,71 @@
|
|||
#code copied from http://www.doughellmann.com/PyMOTW/urllib2/
|
||||
|
||||
import itertools
|
||||
import mimetools
|
||||
import mimetypes
|
||||
from cStringIO import StringIO
|
||||
import urllib
|
||||
import urllib2
|
||||
|
||||
class MultiPartForm(object):
|
||||
"""Accumulate the data to be used when posting a form."""
|
||||
|
||||
def __init__(self):
|
||||
self.form_fields = []
|
||||
self.files = []
|
||||
self.boundary = mimetools.choose_boundary()
|
||||
return
|
||||
|
||||
def get_content_type(self):
|
||||
return 'multipart/form-data; boundary=%s' % self.boundary
|
||||
|
||||
def add_field(self, name, value):
|
||||
"""Add a simple field to the form data."""
|
||||
self.form_fields.append((name, value))
|
||||
return
|
||||
|
||||
def add_file(self, fieldname, filename, fileHandle, mimetype=None):
|
||||
"""Add a file to be uploaded."""
|
||||
body = fileHandle.read()
|
||||
if mimetype is None:
|
||||
mimetype = mimetypes.guess_type(filename)[0] or 'application/octet-stream'
|
||||
self.files.append((fieldname, filename, mimetype, body))
|
||||
return
|
||||
|
||||
def __str__(self):
|
||||
"""Return a string representing the form data, including attached files."""
|
||||
# Build a list of lists, each containing "lines" of the
|
||||
# request. Each part is separated by a boundary string.
|
||||
# Once the list is built, return a string where each
|
||||
# line is separated by '\r\n'.
|
||||
parts = []
|
||||
part_boundary = '--' + self.boundary
|
||||
|
||||
# Add the form fields
|
||||
parts.extend(
|
||||
[ part_boundary,
|
||||
'Content-Disposition: form-data; name="%s"' % name,
|
||||
'',
|
||||
value,
|
||||
]
|
||||
for name, value in self.form_fields
|
||||
)
|
||||
|
||||
# Add the files to upload
|
||||
parts.extend(
|
||||
[ part_boundary,
|
||||
'Content-Disposition: file; name="%s"; filename="%s"' % \
|
||||
(field_name, filename),
|
||||
'Content-Type: %s' % content_type,
|
||||
'',
|
||||
body,
|
||||
]
|
||||
for field_name, filename, content_type, body in self.files
|
||||
)
|
||||
|
||||
# Flatten the list and add closing boundary marker,
|
||||
# then return CR+LF separated data
|
||||
flattened = list(itertools.chain(*parts))
|
||||
flattened.append('--' + self.boundary + '--')
|
||||
flattened.append('')
|
||||
return '\r\n'.join(flattened)
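# --- usage sketch (not part of the original file) ---
# Hedged example of posting a MultiPartForm with urllib2; the URL and field
# names are placeholders.
#
#   form = MultiPartForm()
#   form.add_field('comment', 'uploaded via script')
#   form.add_file('torrent_file', 'example.torrent', open('example.torrent', 'rb'))
#   body = str(form)
#   request = urllib2.Request('http://example.com/upload')
#   request.add_header('Content-type', form.get_content_type())
#   request.add_header('Content-length', len(body))
#   request.add_data(body)
#   urllib2.urlopen(request).read()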
|
|
@@ -243,10 +243,27 @@ class PostProcessor(object):
|
|||
#once a series name and issue are matched,
|
||||
#write the series/issue/filename to a tuple
|
||||
#when all done, iterate over the tuple until completion...
|
||||
comicseries = myDB.select("SELECT * FROM comics")
|
||||
#first we get a parsed results list of the files being processed, and then poll against the sql to get a short list of hits.
|
||||
fl = filechecker.FileChecker(self.nzb_folder, justparse=True)
|
||||
filelist = fl.listFiles()
|
||||
if filelist['comiccount'] == 0: # is None:
|
||||
logger.warn('There were no files located - check the debugging logs if you think this is in error.')
|
||||
return
|
||||
logger.info(filelist)
|
||||
logger.info('I have located ' + str(filelist['comiccount']) + ' files that I should be able to post-process. Continuing...')
|
||||
|
||||
manual_list = []
|
||||
|
||||
for fl in filelist['comiclist']:
|
||||
#mod_seriesname = '%' + re.sub(' ', '%', fl['series_name']).strip() + '%'
|
||||
as_d = filechecker.FileChecker(watchcomic=fl['series_name'].decode('utf-8'))
|
||||
as_dinfo = as_d.dynamic_replace(fl['series_name'])
|
||||
mod_seriesname = as_dinfo['mod_seriesname']
|
||||
logger.fdebug('Dynamic-ComicName: ' + mod_seriesname)
|
||||
comicseries = myDB.select('SELECT * FROM comics Where DynamicComicName=?', [mod_seriesname])
|
||||
if comicseries is None:
|
||||
logger.error(module + ' No Series in Watchlist - checking against Story Arcs (just in case). If I do not find anything, maybe you should be running Import?')
|
||||
break
|
||||
else:
|
||||
watchvals = []
|
||||
for wv in comicseries:
|
||||
|
@@ -316,20 +333,13 @@ class PostProcessor(object):
|
|||
ccnt=0
|
||||
nm=0
|
||||
for cs in watchvals:
|
||||
watchmatch = filechecker.listFiles(self.nzb_folder, cs['ComicName'], cs['ComicPublisher'], cs['AlternateSearch'], manual=cs['WatchValues'])
|
||||
if watchmatch['comiccount'] == 0: # is None:
|
||||
wm = filechecker.FileChecker(watchcomic=cs['ComicName'], Publisher=cs['ComicPublisher'], AlternateSearch=cs['AlternateSearch'], manual=cs['WatchValues'])
|
||||
watchmatch = wm.matchIT(fl)
|
||||
if watchmatch['process_status'] == 'fail':
|
||||
nm+=1
|
||||
continue
|
||||
else:
|
||||
fn = 0
|
||||
fccnt = int(watchmatch['comiccount'])
|
||||
if len(watchmatch) == 1: continue
|
||||
while (fn < fccnt):
|
||||
try:
|
||||
tmpfc = watchmatch['comiclist'][fn]
|
||||
except (IndexError, KeyError):
|
||||
break
|
||||
temploc= tmpfc['JusttheDigits'].replace('_', ' ')
|
||||
temploc= watchmatch['justthedigits'].replace('_', ' ')
|
||||
temploc = re.sub('[\#\']', '', temploc)
|
||||
|
||||
if 'annual' in temploc.lower():
|
||||
|
@@ -348,9 +358,10 @@ class PostProcessor(object):
|
|||
|
||||
if issuechk is None:
|
||||
logger.fdebug(module + ' No corresponding issue # found for ' + str(cs['ComicID']))
|
||||
continue
|
||||
else:
|
||||
datematch = "True"
|
||||
if len(watchmatch) >= 1 and tmpfc['ComicYear'] is not None:
|
||||
if len(watchmatch) >= 1 and watchmatch['issue_year'] is not None:
|
||||
#if the # of matches is more than 1, we need to make sure we get the right series
|
||||
#compare the ReleaseDate for the issue, to the found issue date in the filename.
|
||||
#if ReleaseDate doesn't exist, use IssueDate
|
||||
|
@@ -363,14 +374,13 @@ class PostProcessor(object):
|
|||
#logger.info('IssueDate: ' + str(issuechk['IssueDate']))
|
||||
if issuechk['ReleaseDate'] is not None and issuechk['ReleaseDate'] != '0000-00-00':
|
||||
monthval = issuechk['ReleaseDate']
|
||||
if int(issuechk['ReleaseDate'][:4]) < int(tmpfc['ComicYear']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
|
||||
if int(issuechk['ReleaseDate'][:4]) < int(watchmatch['issue_year']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(watchmatch['issue_year']) + ' that was discovered in the filename')
|
||||
datematch = "False"
|
||||
|
||||
else:
|
||||
monthval = issuechk['IssueDate']
|
||||
if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
|
||||
if int(issuechk['IssueDate'][:4]) < int(watchmatch['issue_year']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(watchmatch['issue_year']) + ' that was discovered in the filename')
|
||||
datematch = "False"
|
||||
|
||||
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
|
||||
|
@@ -379,12 +389,10 @@ class PostProcessor(object):
|
|||
elif int(monthval[5:7]) == 1 or int(monthval[5:7]) == 2 or int(monthval[5:7]) == 3:
|
||||
issyr = int(monthval[:4]) - 1
|
||||
|
||||
|
||||
|
||||
if datematch == "False" and issyr is not None:
|
||||
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + ' : rechecking by month-check versus year.')
|
||||
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(watchmatch['issue_year']) + ' : rechecking by month-check versus year.')
|
||||
datematch = "True"
|
||||
if int(issyr) != int(tmpfc['ComicYear']):
|
||||
if int(issyr) != int(watchmatch['issue_year']):
|
||||
logger.fdebug(module + '[.:FAIL:.] Issue is before the modified issue year of ' + str(issyr))
|
||||
datematch = "False"
|
||||
|
||||
|
@@ -392,23 +400,29 @@ class PostProcessor(object):
|
|||
logger.info(module + ' Found matching issue # ' + str(fcdigit) + ' for ComicID: ' + str(cs['ComicID']) + ' / IssueID: ' + str(issuechk['IssueID']))
|
||||
|
||||
if datematch == "True":
|
||||
manual_list.append({"ComicLocation": tmpfc['ComicLocation'],
|
||||
manual_list.append({"ComicLocation": os.path.join(watchmatch['comiclocation'],watchmatch['comicfilename']),
|
||||
"ComicID": cs['ComicID'],
|
||||
"IssueID": issuechk['IssueID'],
|
||||
"IssueNumber": issuechk['Issue_Number'],
|
||||
"ComicName": cs['ComicName']})
|
||||
else:
|
||||
logger.fdebug(module + ' Incorrect series - not populating..continuing post-processing')
|
||||
logger.fdebug(module + '[NON-MATCH: ' + cs['ComicName'] + '-' + cs['ComicID'] + '] Incorrect series - not populating..continuing post-processing')
|
||||
continue
|
||||
#ccnt+=1
|
||||
logger.fdebug(module + '[SUCCESSFUL MATCH: ' + cs['ComicName'] + '-' + cs['ComicID'] + '] Match verified for ' + fl['comicfilename'])
|
||||
break
|
||||
|
||||
fn+=1
|
||||
logger.fdebug(module + ' There are ' + str(len(manual_list)) + ' files found that match on your watchlist, ' + str(nm) + ' do not match anything and will be ignored.')
|
||||
logger.fdebug(module + ' There are ' + str(len(manual_list)) + ' files found that match on your watchlist, ' + str(int(filelist['comiccount'] - len(manual_list))) + ' do not match anything and will be ignored.')
|
||||
|
||||
#we should setup for manual post-processing of story-arc issues here
|
||||
arc_series = myDB.select("SELECT * FROM readinglist order by ComicName") # by StoryArcID")
|
||||
#we can also search by ComicID to just grab those particular arcs as an alternative as well (not done)
|
||||
logger.fdebug(module + ' Now Checking if the issue also resides in one of the storyarc\'s that I am watching.')
|
||||
for fl in filelist['comiclist']:
|
||||
mod_seriesname = '%' + re.sub(' ', '%', fl['series_name']).strip() + '%'
|
||||
arc_series = myDB.select("SELECT * FROM readinglist WHERE ComicName LIKE?", [fl['series_name']]) # by StoryArcID")
|
||||
manual_arclist = []
|
||||
if arc_series is None:
|
||||
logger.error(module + ' No Story Arcs in Watchlist - aborting Manual Post Processing. Maybe you should be running Import?')
|
||||
logger.error(module + ' No Story Arcs in Watchlist that contain that particular series - aborting Manual Post Processing. Maybe you should be running Import?')
|
||||
return
|
||||
else:
|
||||
arcvals = []
|
||||
|
@@ -445,35 +459,32 @@ class PostProcessor(object):
|
|||
|
||||
for k,v in res.items():
|
||||
i = 0
|
||||
while i < len(v):
|
||||
#k is ComicName
|
||||
#v is ArcValues and WatchValues
|
||||
while i < len(v):
|
||||
if k is None or k == 'None':
|
||||
pass
|
||||
else:
|
||||
arcmatch = filechecker.listFiles(self.nzb_folder, k, v[i]['ArcValues']['ComicPublisher'], manual=v[i]['WatchValues'])
|
||||
if arcmatch['comiccount'] == 0:
|
||||
pass
|
||||
arcm = filechecker.FileChecker(watchcomic=k, Publisher=v[i]['ArcValues']['ComicPublisher'], manual=v[i]['WatchValues'])
|
||||
arcmatch = arcm.matchIT(fl)
|
||||
logger.info('arcmatch: ' + str(arcmatch))
|
||||
if arcmatch['process_status'] == 'fail':
|
||||
nm+=1
|
||||
else:
|
||||
fn = 0
|
||||
fccnt = int(arcmatch['comiccount'])
|
||||
if len(arcmatch) == 1: break
|
||||
while (fn < fccnt):
|
||||
try:
|
||||
tmpfc = arcmatch['comiclist'][fn]
|
||||
except (IndexError, KeyError):
|
||||
break
|
||||
temploc= tmpfc['JusttheDigits'].replace('_', ' ')
|
||||
temploc= arcmatch['justthedigits'].replace('_', ' ')
|
||||
temploc = re.sub('[\#\']', '', temploc)
|
||||
|
||||
if helpers.issuedigits(temploc) != helpers.issuedigits(v[i]['ArcValues']['IssueNumber']):
|
||||
logger.info('issues dont match. Skipping')
|
||||
i+=1
|
||||
continue
|
||||
if 'annual' in temploc.lower():
|
||||
biannchk = re.sub('-', '', temploc.lower()).strip()
|
||||
if 'biannual' in biannchk:
|
||||
logger.fdebug(module + ' Bi-Annual detected.')
|
||||
fcdigit = helpers.issuedigits(re.sub('biannual', '', str(biannchk)).strip())
|
||||
else:
|
||||
logger.fdebug(module + ' Annual detected.')
|
||||
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
|
||||
logger.fdebug(module + ' Annual detected [' + str(fcdigit) +']. ComicID assigned as ' + str(v[i]['WatchValues']['ComicID']))
|
||||
annchk = "yes"
|
||||
issuechk = myDB.selectone("SELECT * from readinglist WHERE ComicID=? AND Int_IssueNumber=?", [v[i]['WatchValues']['ComicID'], fcdigit]).fetchone()
|
||||
else:
|
||||
|
@@ -484,7 +495,7 @@ class PostProcessor(object):
|
|||
logger.fdebug(module + ' No corresponding issue # found for ' + str(v[i]['WatchValues']['ComicID']))
|
||||
else:
|
||||
datematch = "True"
|
||||
if len(arcmatch) >= 1 and tmpfc['ComicYear'] is not None:
|
||||
if len(arcmatch) >= 1 and arcmatch['issue_year'] is not None:
|
||||
#if the # of matches is more than 1, we need to make sure we get the right series
|
||||
#compare the ReleaseDate for the issue, to the found issue date in the filename.
|
||||
#if ReleaseDate doesn't exist, use IssueDate
|
||||
|
@@ -493,18 +504,18 @@ class PostProcessor(object):
|
|||
logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
|
||||
logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
|
||||
|
||||
logger.info('ReleaseDate: ' + str(issuechk['StoreDate']))
|
||||
logger.info('StoreDate ' + str(issuechk['StoreDate']))
|
||||
logger.info('IssueDate: ' + str(issuechk['IssueDate']))
|
||||
if issuechk['StoreDate'] is not None and issuechk['StoreDate'] != '0000-00-00':
|
||||
monthval = issuechk['StoreDate']
|
||||
if int(issuechk['StoreDate'][:4]) < int(tmpfc['ComicYear']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['StoreDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
|
||||
if int(issuechk['StoreDate'][:4]) < int(arcmatch['issue_year']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['StoreDate']) + ' is before the issue year of ' + str(arcmatch['issue_year']) + ' that was discovered in the filename')
|
||||
datematch = "False"
|
||||
|
||||
else:
|
||||
monthval = issuechk['IssueDate']
|
||||
if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
|
||||
if int(issuechk['IssueDate'][:4]) < int(arcmatch['issue_year']):
|
||||
logger.fdebug(module + ' ' + str(issuechk['IssueDate']) + ' is before the issue year ' + str(arcmatch['issue_year']) + ' that was discovered in the filename')
|
||||
datematch = "False"
|
||||
|
||||
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
|
||||
|
@@ -514,15 +525,18 @@ class PostProcessor(object):
|
|||
issyr = int(monthval[:4]) - 1
|
||||
|
||||
if datematch == "False" and issyr is not None:
|
||||
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + ' : rechecking by month-check versus year.')
|
||||
logger.fdebug(module + ' ' + str(issyr) + ' comparing to ' + str(arcmatch['issue_year']) + ' : rechecking by month-check versus year.')
|
||||
datematch = "True"
|
||||
if int(issyr) != int(tmpfc['ComicYear']):
|
||||
if int(issyr) != int(arcmatch['issue_year']):
|
||||
logger.fdebug(module + '[.:FAIL:.] Issue is before the modified issue year of ' + str(issyr))
|
||||
datematch = "False"
|
||||
|
||||
else:
|
||||
logger.info(module + ' Found matching issue # ' + str(fcdigit) + ' for ComicID: ' + str(v[i]['WatchValues']['ComicID']) + ' / IssueID: ' + str(issuechk['IssueID']))
|
||||
|
||||
logger.info('datematch: ' + str(datematch))
|
||||
logger.info('temploc: ' + str(helpers.issuedigits(temploc)))
|
||||
logger.info('arcissue: ' + str(helpers.issuedigits(v[i]['ArcValues']['IssueNumber'])))
|
||||
if datematch == "True" and helpers.issuedigits(temploc) == helpers.issuedigits(v[i]['ArcValues']['IssueNumber']):
|
||||
passit = False
|
||||
if len(manual_list) > 0:
|
||||
|
@@ -536,7 +550,7 @@ class PostProcessor(object):
|
|||
passit = True
|
||||
if passit == False:
|
||||
logger.info('[' + k + ' #' + str(issuechk['IssueNumber']) + '] MATCH: ' + tmpfc['ComicLocation'] + ' / ' + str(issuechk['IssueID']) + ' / ' + str(v[i]['ArcValues']['IssueID']))
|
||||
manual_arclist.append({"ComicLocation": tmpfc['ComicLocation'],
|
||||
manual_arclist.append({"ComicLocation": arcmatch['comiclocation'],
|
||||
"ComicID": v[i]['WatchValues']['ComicID'],
|
||||
"IssueID": v[i]['ArcValues']['IssueID'],
|
||||
"IssueNumber": v[i]['ArcValues']['IssueNumber'],
|
||||
|
@@ -544,11 +558,15 @@ class PostProcessor(object):
|
|||
"IssueArcID": v[i]['ArcValues']['IssueArcID'],
|
||||
"ReadingOrder": v[i]['ArcValues']['ReadingOrder'],
|
||||
"ComicName": k})
|
||||
logger.fdebug(module + '[SUCCESSFUL MATCH: ' + k + '-' + v[i]['WatchValues']['ComicID'] + '] Match verified for ' + arcmatch['comicfilename'])
|
||||
break
|
||||
else:
|
||||
logger.fdebug(module + ' Incorrect series - not populating..continuing post-processing')
|
||||
fn+=1
|
||||
logger.fdebug(module + '[NON-MATCH: ' + k + '-' + v[i]['WatchValues']['ComicID'] + '] Incorrect series - not populating..continuing post-processing')
|
||||
|
||||
i+=1
|
||||
|
||||
|
||||
|
||||
if len(manual_arclist) > 0:
|
||||
logger.info('[STORY-ARC MANUAL POST-PROCESSING] I have found ' + str(len(manual_arclist)) + ' issues that belong to Story Arcs. Flinging them into the correct directories.')
|
||||
for ml in manual_arclist:
|
||||
|
@@ -957,7 +975,7 @@ class PostProcessor(object):
|
|||
#check if duplicate dump folder is enabled and if so move duplicate file in there for manual intervention.
|
||||
#'dupe_file' - do not write new file as existing file is better quality
|
||||
#'dupe_src' - write new file, as existing file is a lesser quality (dupe)
|
||||
if mylar.DUPLICATE_DUMP:
|
||||
if mylar.DDUMP and not all([mylar.DUPLICATE_DUMP is None, mylar.DUPLICATE_DUMP == '']): #DUPLICATE_DUMP
|
||||
dupchkit = self.duplicate_process(dupthis)
|
||||
if dupchkit == False:
|
||||
logger.warn('Unable to move duplicate file - skipping post-processing of this file.')
|
||||
|
|
|
@@ -55,6 +55,15 @@ PIDFILE= None
|
|||
CREATEPID = False
|
||||
SAFESTART = False
|
||||
AUTO_UPDATE = False
|
||||
NOWEEKLY = False
|
||||
|
||||
IMPORT_STATUS = None
|
||||
IMPORT_FILES = 0
|
||||
IMPORT_TOTALFILES = 0
|
||||
IMPORT_CID_COUNT = 0
|
||||
IMPORT_PARSED_COUNT = 0
|
||||
IMPORT_FAILURE_COUNT = 0
|
||||
CHECKENABLED = False
|
||||
|
||||
SCHED = Scheduler()
|
||||
|
||||
|
@@ -136,6 +145,7 @@ COMICVINE_API = None
|
|||
DEFAULT_CVAPI = '583939a3df0a25fc4e8b7a29934a13078002dc27'
|
||||
CVAPI_RATE = 2
|
||||
CV_HEADERS = None
|
||||
BLACKLISTED_PUBLISHERS = None
|
||||
|
||||
CHECK_GITHUB = False
|
||||
CHECK_GITHUB_ON_STARTUP = False
|
||||
|
@@ -365,6 +375,13 @@ FEEDINFO_32P = None
|
|||
VERIFY_32P = 1
|
||||
SNATCHEDTORRENT_NOTIFY = 0
|
||||
|
||||
RTORRENT_HOST = None
|
||||
RTORRENT_USERNAME = None
|
||||
RTORRENT_PASSWORD = None
|
||||
RTORRENT_STARTONLOAD = 0
|
||||
RTORRENT_LABEL = None
|
||||
RTORRENT_DIRECTORY = None
|
||||
|
||||
def CheckSection(sec):
|
||||
""" Check if INI section exists, if not create it """
|
||||
try:
|
||||
|
@@ -414,15 +431,16 @@ def check_setting_str(config, cfg_name, item_name, def_val, log=True):
|
|||
def initialize():
|
||||
|
||||
with INIT_LOCK:
|
||||
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, CV_HEADERS, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
|
||||
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, AUTO_UPDATE, \
|
||||
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, CV_HEADERS, BLACKLISTED_PUBLISHERS, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
|
||||
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, NOWEEKLY, AUTO_UPDATE, \
|
||||
IMPORT_STATUS, IMPORT_FILES, IMPORT_TOTALFILES, IMPORT_CID_COUNT, IMPORT_PARSED_COUNT, IMPORT_FAILURE_COUNT, CHECKENABLED, \
|
||||
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, GIT_USER, GIT_BRANCH, USER_AGENT, DESTINATION_DIR, MULTIPLE_DEST_DIRS, CREATE_FOLDERS, DELETE_REMOVE_DIR, \
|
||||
DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, DUPECONSTRAINT, DDUMP, DUPLICATE_DUMP, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
|
||||
DOWNLOAD_SCAN_INTERVAL, FOLDER_SCAN_LOG_VERBOSE, IMPORTLOCK, NZB_DOWNLOADER, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_TO_MYLAR, SAB_DIRECTORY, USE_BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
|
||||
USE_NZBGET, NZBGET_HOST, NZBGET_PORT, NZBGET_USERNAME, NZBGET_PASSWORD, NZBGET_CATEGORY, NZBGET_PRIORITY, NZBGET_DIRECTORY, NZBSU, NZBSU_UID, NZBSU_APIKEY, NZBSU_VERIFY, DOGNZB, DOGNZB_APIKEY, DOGNZB_VERIFY, \
|
||||
NEWZNAB, NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_VERIFY, NEWZNAB_UID, NEWZNAB_ENABLED, EXTRA_NEWZNABS, NEWZNAB_EXTRA, \
|
||||
ENABLE_TORZNAB, TORZNAB_NAME, TORZNAB_HOST, TORZNAB_APIKEY, TORZNAB_CATEGORY, TORZNAB_VERIFY, \
|
||||
EXPERIMENTAL, ALTEXPERIMENTAL, \
|
||||
EXPERIMENTAL, ALTEXPERIMENTAL, RTORRENT_HOST, RTORRENT_USERNAME, RTORRENT_PASSWORD, RTORRENT_STARTONLOAD, RTORRENT_LABEL, RTORRENT_DIRECTORY, \
|
||||
ENABLE_META, CMTAGGER_PATH, CBR2CBZ_ONLY, CT_TAG_CR, CT_TAG_CBL, CT_CBZ_OVERWRITE, UNRAR_CMD, CT_SETTINGSPATH, UPDATE_ENDED, INDIE_PUB, BIGGIE_PUB, IGNORE_HAVETOTAL, SNATCHED_HAVETOTAL, PROVIDER_ORDER, \
|
||||
dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, VersionScheduler, FolderMonitorScheduler, \
|
||||
ENABLE_TORRENTS, MINSEEDS, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
|
||||
|
@@ -626,6 +644,12 @@ def initialize():
|
|||
INDIE_PUB = check_setting_str(CFG, 'General', 'indie_pub', '75')
|
||||
BIGGIE_PUB = check_setting_str(CFG, 'General', 'biggie_pub', '55')
|
||||
|
||||
flattened_blacklisted_pub = check_setting_str(CFG, 'General', 'blacklisted_publishers', [], log=False)
|
||||
if len(flattened_blacklisted_pub) == 0:
|
||||
BLACKLISTED_PUBLISHERS = None
|
||||
else:
|
||||
BLACKLISTED_PUBLISHERS = list(itertools.izip(*[itertools.islice(flattened_blacklisted_pub, i, None, 1) for i in range(1)]))
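# (explanatory note, not part of the original patch) with range(1) the
# izip/islice expression simply wraps each configured publisher name in a
# 1-tuple, e.g. ['Pub A', 'Pub B'] becomes [('Pub A',), ('Pub B',)].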
|
||||
|
||||
ENABLE_RSS = bool(check_setting_int(CFG, 'General', 'enable_rss', 1))
|
||||
RSS_CHECKINTERVAL = check_setting_str(CFG, 'General', 'rss_checkinterval', '20')
|
||||
RSS_LASTRUN = check_setting_str(CFG, 'General', 'rss_lastrun', '')
|
||||
|
@@ -676,6 +700,13 @@ def initialize():
|
|||
VERIFY_32P = bool(check_setting_int(CFG, 'Torrents', 'verify_32p', 1))
|
||||
SNATCHEDTORRENT_NOTIFY = bool(check_setting_int(CFG, 'Torrents', 'snatchedtorrent_notify', 0))
|
||||
|
||||
RTORRENT_HOST = check_setting_str(CFG, 'Torrents', 'rtorrent_host', '')
|
||||
RTORRENT_USERNAME = check_setting_str(CFG, 'Torrents', 'rtorrent_username', '')
|
||||
RTORRENT_PASSWORD = check_setting_str(CFG, 'Torrents', 'rtorrent_password', '')
|
||||
RTORRENT_STARTONLOAD = bool(check_setting_int(CFG, 'Torrents', 'rtorrent_startonload', 0))
|
||||
RTORRENT_LABEL = check_setting_str(CFG, 'Torrents', 'rtorrent_label', '')
|
||||
RTORRENT_DIRECTORY = check_setting_str(CFG, 'Torrents', 'rtorrent_directory', '')
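# Example config.ini entries consumed by the block above (the values are
# placeholders; only the section and key names come from the calls shown):
#
#   [Torrents]
#   rtorrent_host = https://my.seedbox.example:443
#   rtorrent_username = user
#   rtorrent_password = pass
#   rtorrent_startonload = 1
#   rtorrent_label = mylar
#   rtorrent_directory = /downloads/comics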
|
||||
|
||||
#this needs to have its own category - for now General will do.
|
||||
NZB_DOWNLOADER = check_setting_int(CFG, 'General', 'nzb_downloader', 0)
|
||||
#legacy support of older config - reload into old values for consistency.
|
||||
|
@@ -1251,6 +1282,16 @@ def config_write():
|
|||
new_config['General']['nzb_startup_search'] = int(NZB_STARTUP_SEARCH)
|
||||
new_config['General']['add_comics'] = int(ADD_COMICS)
|
||||
new_config['General']['comic_dir'] = COMIC_DIR
|
||||
if BLACKLISTED_PUBLISHERS is None:
|
||||
flattened_blacklisted_pub = None
|
||||
else:
|
||||
flattened_blacklisted_pub = []
|
||||
for bpub in BLACKLISTED_PUBLISHERS:
|
||||
#for key, value in pro.items():
|
||||
for item in bpub:
|
||||
flattened_blacklisted_pub.append(item)
|
||||
#flattened_providers.append(str(value))
|
||||
new_config['General']['blacklisted_publishers'] = flattened_blacklisted_pub
|
||||
new_config['General']['imp_move'] = int(IMP_MOVE)
|
||||
new_config['General']['imp_rename'] = int(IMP_RENAME)
|
||||
new_config['General']['imp_metadata'] = int(IMP_METADATA)
|
||||
|
@@ -1366,6 +1407,13 @@ def config_write():
|
|||
new_config['Torrents']['password_32p'] = PASSWORD_32P
|
||||
new_config['Torrents']['verify_32p'] = int(VERIFY_32P)
|
||||
new_config['Torrents']['snatchedtorrent_notify'] = int(SNATCHEDTORRENT_NOTIFY)
|
||||
new_config['Torrents']['rtorrent_host'] = RTORRENT_HOST
|
||||
new_config['Torrents']['rtorrent_username'] = RTORRENT_USERNAME
|
||||
new_config['Torrents']['rtorrent_password'] = RTORRENT_PASSWORD
|
||||
new_config['Torrents']['rtorrent_startonload'] = int(RTORRENT_STARTONLOAD)
|
||||
new_config['Torrents']['rtorrent_label'] = RTORRENT_LABEL
|
||||
new_config['Torrents']['rtorrent_directory'] = RTORRENT_DIRECTORY
|
||||
|
||||
new_config['SABnzbd'] = {}
|
||||
#new_config['SABnzbd']['use_sabnzbd'] = int(USE_SABNZBD)
|
||||
new_config['SABnzbd']['sab_host'] = SAB_HOST
|
||||
|
@@ -1492,6 +1540,7 @@ def start():
|
|||
#threading.Thread(target=weeklypull.pullit).start()
|
||||
#now the scheduler (check every 24 hours)
|
||||
#SCHED.add_interval_job(weeklypull.pullit, hours=24)
|
||||
if not NOWEEKLY:
|
||||
WeeklyScheduler.thread.start()
|
||||
|
||||
#let's do a run at the Wanted issues here (on startup) if enabled.
|
||||
|
@@ -1525,14 +1574,14 @@ def dbcheck():
|
|||
c_error = 'sqlite3.OperationalError'
|
||||
c=conn.cursor()
|
||||
|
||||
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT, DynamicComicName TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS issues (IssueID TEXT, ComicName TEXT, IssueName TEXT, Issue_Number TEXT, DateAdded TEXT, Status TEXT, Type TEXT, ComicID TEXT, ArtworkURL Text, ReleaseDate TEXT, Location TEXT, IssueDate TEXT, Int_IssueNumber INT, ComicSize TEXT, AltIssueNumber TEXT, IssueDate_Edit TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS snatched (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Size INTEGER, DateAdded TEXT, Status TEXT, FolderName TEXT, ComicID TEXT, Provider TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS upcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Status TEXT, DisplayComicName TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE TEXT, PUBLISHER TEXT, ISSUE TEXT, COMIC VARCHAR(150), EXTRA TEXT, STATUS TEXT, ComicID TEXT, IssueID TEXT)')
|
||||
# c.execute('CREATE TABLE IF NOT EXISTS sablog (nzo_id TEXT, ComicName TEXT, ComicYEAR TEXT, ComicIssue TEXT, name TEXT, nzo_complete TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT, IssueNumber TEXT, DynamicName TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT, StatusChange TEXT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS readinglist(StoryArcID TEXT, ComicName TEXT, IssueNumber TEXT, SeriesYear TEXT, IssueYEAR TEXT, StoryArc TEXT, TotalIssues TEXT, Status TEXT, inCacheDir TEXT, Location TEXT, IssueArcID TEXT, ReadingOrder INT, IssueID TEXT, ComicID TEXT, StoreDate TEXT, IssueDate TEXT, Publisher TEXT, IssuePublisher TEXT, IssueName TEXT, CV_ArcID TEXT, Int_IssueNumber INT)')
|
||||
c.execute('CREATE TABLE IF NOT EXISTS annuals (IssueID TEXT, Issue_Number TEXT, IssueName TEXT, IssueDate TEXT, Status TEXT, ComicID TEXT, GCDComicID TEXT, Location TEXT, ComicSize TEXT, Int_IssueNumber INT, ComicName TEXT, ReleaseDate TEXT, ReleaseComicID TEXT, ReleaseComicName TEXT, IssueDate_Edit TEXT)')
|
||||
|
@@ -1630,6 +1679,13 @@ def dbcheck():
|
|||
except sqlite3.OperationalError:
|
||||
c.execute('ALTER TABLE comics ADD COLUMN NewPublish TEXT')
|
||||
|
||||
try:
|
||||
c.execute('SELECT DynamicComicName from comics')
|
||||
dynamic_upgrade = False
|
||||
except sqlite3.OperationalError:
|
||||
c.execute('ALTER TABLE comics ADD COLUMN DynamicComicName TEXT')
|
||||
dynamic_upgrade = True
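# (explanatory note, not part of the original patch) dynamic_upgrade is checked
# again at the end of dbcheck(); when the column had to be added here,
# helpers.upgrade_dynamic() is called to backfill DynamicComicName for existing series.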
|
||||
|
||||
# -- Issues Table --
|
||||
|
||||
try:
|
||||
|
@@ -1709,6 +1765,17 @@ def dbcheck():
|
|||
c.execute('SELECT Volume from importresults')
|
||||
except sqlite3.OperationalError:
|
||||
c.execute('ALTER TABLE importresults ADD COLUMN Volume TEXT')
|
||||
|
||||
try:
|
||||
c.execute('SELECT IssueNumber from importresults')
|
||||
except sqlite3.OperationalError:
|
||||
c.execute('ALTER TABLE importresults ADD COLUMN IssueNumber TEXT')
|
||||
|
||||
try:
|
||||
c.execute('SELECT DynamicName from importresults')
|
||||
except sqlite3.OperationalError:
|
||||
c.execute('ALTER TABLE importresults ADD COLUMN DynamicName TEXT')
|
||||
|
||||
## -- Readlist Table --
|
||||
|
||||
try:
|
||||
|
@@ -1946,6 +2013,10 @@ def dbcheck():
|
|||
conn.commit()
|
||||
c.close()
|
||||
|
||||
if dynamic_upgrade:
|
||||
logger.info('Updating db to include some important changes.')
|
||||
helpers.upgrade_dynamic()
|
||||
|
||||
def csv_load():
|
||||
# for redundant module calls, include this.
|
||||
conn = sqlite3.connect(DB_FILE)
|
||||
|
@@ -2002,7 +2073,7 @@ def csv_load():
|
|||
c.close()
|
||||
|
||||
def halt():
|
||||
global __INITIALIZED__, dbUpdateScheduler, seachScheduler, RSSScheduler, WeeklyScheduler, \
|
||||
global __INITIALIZED__, dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, \
|
||||
VersionScheduler, FolderMonitorScheduler, started
|
||||
|
||||
with INIT_LOCK:
|
||||
|
|
|
@@ -8,7 +8,7 @@ from mylar import logger
|
|||
|
||||
class info32p(object):
|
||||
|
||||
def __init__(self, reauthenticate=False, searchterm=None):
|
||||
def __init__(self, reauthenticate=False, searchterm=None, test=False):
|
||||
|
||||
self.module = '[32P-AUTHENTICATION]'
|
||||
self.url = 'https://32pag.es/login.php'
|
||||
|
@@ -19,6 +19,7 @@ class info32p(object):
|
|||
'User-Agent': 'Mozilla/5.0'}
|
||||
self.reauthenticate = reauthenticate
|
||||
self.searchterm = searchterm
|
||||
self.test = test
|
||||
|
||||
def authenticate(self):
|
||||
|
||||
|
@@ -43,11 +44,56 @@ class info32p(object):
|
|||
|
||||
s.headers = self.headers
|
||||
try:
|
||||
s.get(self.url, verify=verify, timeout=30)
|
||||
t = s.get(self.url, verify=verify, timeout=30)
|
||||
except (requests.exceptions.SSLError, requests.exceptions.Timeout) as e:
|
||||
logger.error(self.module + ' Unable to establish connection to 32P: ' + str(e))
|
||||
return
|
||||
|
||||
chksoup = BeautifulSoup(t.content)
|
||||
chksoup.prettify()
|
||||
chk_login = chksoup.find_all("form", {"id":"loginform"})
|
||||
if not chk_login:
|
||||
logger.warn(self.module + ' Something is wrong - either 32p is offline, or your account has been temporarily banned (possibly).')
|
||||
logger.warn(self.module + ' Disabling provider until this gets addressed by manual intervention.')
|
||||
return "disable"
|
||||
|
||||
for ck in chk_login:
|
||||
#<div><div id='recaptchadiv'></div><input type='hidden' id='recaptchainp' value='' name='recaptchainp' /></div>
|
||||
captcha = ck.find("div", {"id":"recaptchadiv"})
|
||||
capt_error = ck.find("span", {"class":"notice hidden","id":"formnotice"})
|
||||
error_msg = ck.find("span", {"id":"formerror"})
|
||||
if error_msg:
|
||||
loginerror = " ".join(list(error_msg.stripped_strings))
|
||||
logger.warn(self.module + ' Warning: ' + loginerror)
|
||||
|
||||
if capt_error:
|
||||
aleft = ck.find("span", {"class":"info"})
|
||||
attemptsleft = " ".join(list(aleft.stripped_strings))
|
||||
if int(attemptsleft) < 6:
|
||||
logger.warn(self.module + ' ' + str(attemptsleft) + ' sign-on attempts left.')
|
||||
|
||||
if captcha:
|
||||
logger.warn(self.module + ' Captcha detected. Temporarily disabling 32p (to re-enable, answer the captcha manually in a normal browser or wait ~10 minutes).')
|
||||
return "disable"
|
||||
else:
|
||||
logger.fdebug(self.module + ' Captcha currently not present - continuing to signon...')
|
||||
|
||||
if self.test:
|
||||
rtnmsg = ''
|
||||
if (not capt_error and not error_msg) or (capt_error and int(attemptsleft) == 6):
|
||||
rtnmsg += '[No Warnings/Errors]'
|
||||
else:
|
||||
if capt_error and int(attemptsleft) < 6:
|
||||
rtnmsg = '[' + str(attemptsleft) + ' sign-on attempts left]'
|
||||
if error_msg:
|
||||
rtnmsg += '[' + loginerror + ']'
|
||||
if not captcha:
|
||||
rtnmsg += '[No Captcha]'
|
||||
else:
|
||||
rtnmsg += '[Captcha Present!]'
|
||||
|
||||
return rtnmsg
|
||||
|
||||
# post to the login form
|
||||
r = s.post(self.url, data=self.payload, verify=verify)
|
||||
|
||||
|
@@ -57,15 +103,23 @@ class info32p(object):
|
|||
soup.prettify()
|
||||
#check for invalid username/password and if it's invalid - disable provider so we don't autoban (manual intervention is required after).
|
||||
chk_login = soup.find_all("form", {"id":"loginform"})
|
||||
|
||||
for ck in chk_login:
|
||||
captcha = ck.find("div", {"id":"recaptchadiv"})
|
||||
errorlog = ck.find("span", {"id":"formerror"})
|
||||
errornot = ck.find("span", {"class":"notice hidden","id":"formnotice"})
|
||||
loginerror = " ".join(list(errorlog.stripped_strings)) #login_error.findNext(text=True)
|
||||
errornot = ck.find("span", {"class":"notice"})
|
||||
noticeerror = " ".join(list(errornot.stripped_strings)) #notice_error.findNext(text=True)
|
||||
if captcha:
|
||||
logger.warn(self.module + ' Captcha detected. Temporarily disabling 32p (to re-enable, answer the captcha manually in a normal browser or wait ~10 minutes).')
|
||||
if errorlog:
|
||||
logger.error(self.module + ' Error: ' + loginerror)
|
||||
if noticeerror:
|
||||
logger.error(self.module + ' Warning: ' + noticeerror)
|
||||
logger.error(self.module + ' Disabling 32P provider until username/password can be corrected / verified.')
|
||||
if errornot:
|
||||
aleft = ck.find("span", {"class":"info"})
|
||||
attemptsleft = " ".join(list(aleft.stripped_strings))
|
||||
if int(attemptsleft) < 6:
|
||||
logger.warn(self.module + ' ' + str(attemptsleft) + ' sign-on attempts left.')
|
||||
logger.error(self.module + ' Disabling 32P provider until errors can be fixed in order to avoid temporary bans.')
|
||||
return "disable"
|
||||
|
||||
|
||||
|
|
|
@@ -32,52 +32,6 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
|
|||
|
||||
# Force mylar to use cmtagger_path = mylar.PROG_DIR to force the use of the included lib.
|
||||
|
||||
if platform.system() == "Windows":
|
||||
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
|
||||
unrar_cmd = "C:\Program Files\WinRAR\UnRAR.exe"
|
||||
else:
|
||||
unrar_cmd = mylar.UNRAR_CMD.strip()
|
||||
|
||||
# test for UnRAR
|
||||
if not os.path.isfile(unrar_cmd):
|
||||
unrar_cmd = "C:\Program Files (x86)\WinRAR\UnRAR.exe"
|
||||
if not os.path.isfile(unrar_cmd):
|
||||
logger.fdebug(module + ' Unable to locate UnRAR.exe - make sure it is installed.')
|
||||
logger.fdebug(module + ' Aborting meta-tagging.')
|
||||
return "fail"
|
||||
|
||||
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
|
||||
|
||||
elif platform.system() == "Darwin":
|
||||
#Mac OS X
|
||||
sys_type = 'mac'
|
||||
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
|
||||
unrar_cmd = "/usr/local/bin/unrar"
|
||||
else:
|
||||
unrar_cmd = mylar.UNRAR_CMD.strip()
|
||||
|
||||
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
|
||||
|
||||
else:
|
||||
#for the 'nix
|
||||
sys_type = 'linux'
|
||||
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
|
||||
if 'freebsd' in platform.linux_distribution()[0].lower():
|
||||
unrar_cmd = "/usr/local/bin/unrar"
|
||||
else:
|
||||
unrar_cmd = "/usr/bin/unrar"
|
||||
else:
|
||||
unrar_cmd = mylar.UNRAR_CMD.strip()
|
||||
|
||||
logger.fdebug(module + ' UNRAR path set to : ' + unrar_cmd)
|
||||
|
||||
|
||||
if not os.path.exists(unrar_cmd):
|
||||
logger.fdebug(module + ' WARNING: cannot find the unrar command.')
|
||||
logger.fdebug(module + ' File conversion and extension fixing not available')
|
||||
logger.fdebug(module + ' You probably need to edit this script, or install the missing tool, or both!')
|
||||
return "fail"
|
||||
|
||||
logger.fdebug(module + ' Filename is : ' + str(filename))
|
||||
|
||||
filepath = filename
|
||||
|
@@ -107,14 +61,12 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
|
|||
downloadpath = os.path.abspath(dirName)
|
||||
sabnzbdscriptpath = os.path.dirname(sys.argv[0])
|
||||
comicpath = new_folder
|
||||
unrar_folder = os.path.join(comicpath, "unrard")
|
||||
|
||||
logger.fdebug(module + ' Paths / Locations:')
|
||||
logger.fdebug(module + ' scriptname : ' + scriptname)
|
||||
logger.fdebug(module + ' downloadpath : ' + downloadpath)
|
||||
logger.fdebug(module + ' sabnzbdscriptpath : ' + sabnzbdscriptpath)
|
||||
logger.fdebug(module + ' comicpath : ' + comicpath)
|
||||
logger.fdebug(module + ' unrar_folder : ' + unrar_folder)
|
||||
logger.fdebug(module + ' Running the ComicTagger Add-on for Mylar')
|
||||
|
||||
|
||||
|
@@ -139,7 +91,7 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
|
|||
logger.warn(module + '[WARNING] Make sure that you are using the comictagger included with Mylar.')
|
||||
return "fail"
|
||||
|
||||
ctend = ctversion.find('\]')
|
||||
ctend = ctversion.find('\n')
|
||||
ctcheck = re.sub("[^0-9]", "", ctversion[:ctend])
|
||||
ctcheck = re.sub('\.', '', ctcheck).strip()
|
||||
if int(ctcheck) >= int('1115'): # (v1.1.15)
|
||||
|
@@ -223,8 +175,8 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
|
|||
try:
|
||||
p = subprocess.Popen(script_cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
|
||||
out, err = p.communicate()
|
||||
logger.info(out)
|
||||
logger.info(err)
|
||||
#logger.info(out)
|
||||
#logger.info(err)
|
||||
if initial_ctrun and 'exported successfully' in out:
|
||||
logger.fdebug(module + '[COMIC-TAGGER] : ' +str(out))
|
||||
#Archive exported successfully to: X-Men v4 008 (2014) (Digital) (Nahga-Empire).cbz (Original deleted)
|
||||
|
|
mylar/cv.py (18 changed lines)
|
@@ -94,13 +94,8 @@ def pulldetails(comicid, type, issueid=None, offset=1, arclist=None, comicidlist
|
|||
logger.warn('Error fetching data from ComicVine: %s' % (e))
|
||||
return
|
||||
|
||||
#file = urllib2.urlopen(PULLURL)
|
||||
#convert to string:
|
||||
#data = file.read()
|
||||
#close file because we dont need it anymore:
|
||||
#file.close()
|
||||
#parse the xml you downloaded
|
||||
dom = parseString(r.content) #(data)
|
||||
logger.fdebug('cv status code : ' + str(r.status_code))
|
||||
dom = parseString(r.content)
|
||||
|
||||
return dom
|
||||
|
||||
|
@@ -563,7 +558,6 @@ def GetSeriesYears(dom):
|
|||
return serieslist
|
||||
|
||||
def GetImportList(results):
|
||||
logger.info('booyah')
|
||||
importlist = results.getElementsByTagName('issue')
|
||||
serieslist = []
|
||||
importids = {}
|
||||
|
@@ -596,11 +590,17 @@ def GetImportList(results):
|
|||
except:
|
||||
tempseries['ComicName'] = 'None'
|
||||
|
||||
try:
|
||||
tempseries['Issue_Number'] = implist.getElementsByTagName('issue_number')[0].firstChild.wholeText
|
||||
except:
|
||||
logger.fdebug('No Issue Number available - Trade Paperbacks, Graphic Novels and Compendiums are not supported as of yet.')
|
||||
|
||||
logger.info('tempseries:' + str(tempseries))
|
||||
serieslist.append({"ComicID": tempseries['ComicID'],
|
||||
"IssueID": tempseries['IssueID'],
|
||||
"ComicName": tempseries['ComicName'],
|
||||
"Issue_Name": tempseries['Issue_Name']})
|
||||
"Issue_Name": tempseries['Issue_Name'],
|
||||
"Issue_Number": tempseries['Issue_Number']})
|
||||
|
||||
|
||||
return serieslist
|
||||
|
|
mylar/filechecker.py (2236 changes) - file diff suppressed because it is too large
mylar/helpers.py (133 changes)
@@ -909,7 +909,20 @@ def issuedigits(issnum):
if int_issnum is not None:
    return int_issnum

elif u'\xbd' in issnum:
    #try:
    # issnum.decode('ascii')
    # logger.fdebug('ascii character.')
    #except:
    # logger.fdebug('Unicode character detected: ' + issnum)
    #else: issnum.decode(mylar.SYS_ENCODING).decode('utf-8')

    if type(issnum) == str:
        try:
            issnum = issnum.decode('utf-8')
        except:
            issnum = issnum.decode('windows-1252')

    if u'\xbd' in issnum:
        int_issnum = .5 * 1000
    elif u'\xbc' in issnum:
        int_issnum = .25 * 1000
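Issue numbers are held internally as integers scaled by 1000, which is what the .5 * 1000 and .25 * 1000 assignments above encode for the unicode half (u'\xbd') and quarter (u'\xbc') characters. A tiny illustration of where those values land in the sort order (self-contained, for clarity only):

    # '#1/2' and '#1/4' issues under the scaled-integer scheme used above.
    print(int(.5 * 1000))    # 500 -> issue 1/2 sorts between issue 0 (0) and issue 1 (1000)
    print(int(.25 * 1000))   # 250 -> issue 1/4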
@@ -1120,6 +1133,26 @@ def latestdate_fix():

return

def upgrade_dynamic():
    import db, logger
    dynamic_list = []
    myDB = db.DBConnection()
    clist = myDB.select('SELECT * FROM Comics')
    for cl in clist:
        cl_d = mylar.filechecker.FileChecker(watchcomic=cl['ComicName'])
        cl_dyninfo = cl_d.dynamic_replace(cl['ComicName'])
        dynamic_list.append({'DynamicComicName': cl_dyninfo['mod_seriesname'],
                             'ComicID': cl['ComicID']})

    if len(dynamic_list) > 0:
        for dl in dynamic_list:
            CtrlVal = {"ComicID": dl['ComicID']}
            newVal = {"DynamicComicName": dl['DynamicComicName']}
            myDB.upsert("Comics", newVal, CtrlVal)

    logger.info('Finshed updating ' + str(len(dynamic_list)) + ' entries within the db.')
    return

def checkFolder():
    from mylar import PostProcessor, logger
    import Queue

@@ -1298,9 +1331,17 @@ def IssueDetails(filelocation, IssueID=None):
issuetag = None
pic_extensions = ('.jpg','.png','.webp')
modtime = os.path.getmtime(dstlocation)
low_infile = 999999

try:
    with zipfile.ZipFile(dstlocation, 'r') as inzipfile:
        for infile in inzipfile.namelist():
            tmp_infile = re.sub("[^0-9]","", infile).strip()
            if tmp_infile == '':
                pass
            elif int(tmp_infile) < int(low_infile):
                low_infile = tmp_infile
                low_infile_name = infile
            if infile == 'ComicInfo.xml':
                logger.fdebug('Extracting ComicInfo.xml to display.')
                dst = os.path.join(mylar.CACHE_DIR, 'ComicInfo.xml')

@@ -1332,6 +1373,17 @@ def IssueDetails(filelocation, IssueID=None):
local_file.close
cover = "found"

if cover != "found":
    logger.fdebug('Invalid naming sequence for jpgs discovered. Attempting to find the lowest sequence and will use as cover (it might not work). Currently : ' + str(low_infile))
    local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
    local_file.write(inzipfile.read(low_infile_name))
    local_file.close
    cover = "found"

except:
    logger.info('ERROR. Unable to properly retrieve the cover for displaying. It\'s probably best to re-tag this file.')
    return

ComicImage = os.path.join('cache', 'temp.jpg?' +str(modtime))
IssueImage = replacetheslash(ComicImage)

@@ -1438,7 +1490,7 @@ def IssueDetails(filelocation, IssueID=None):
pagecount = result.getElementsByTagName('PageCount')[0].firstChild.wholeText
except:
    pagecount = 0
logger.fdebug("number of pages I counted: " + str(pagecount))

i = 0

try:

@@ -1451,14 +1503,15 @@ def IssueDetails(filelocation, IssueID=None):
while (i < int(pagecount)):
    pageinfo = result.getElementsByTagName('Page')[i].attributes
    attrib = pageinfo.getNamedItem('Image')
    logger.fdebug('Frontcover validated as being image #: ' + str(attrib.value))
    #logger.fdebug('Frontcover validated as being image #: ' + str(attrib.value))
    att = pageinfo.getNamedItem('Type')
    logger.fdebug('pageinfo: ' + str(pageinfo))
    if att.value == 'FrontCover':
        logger.fdebug('FrontCover detected. Extracting.')
        #logger.fdebug('FrontCover detected. Extracting.')
        break
    i+=1
elif issuetag == 'comment':
    logger.info('CBL Tagging.')
    stripline = 'Archive: ' + dstlocation
    data = re.sub(stripline, '', data.encode("utf-8")).strip()
    if data is None or data == '':

@@ -1468,17 +1521,39 @@ def IssueDetails(filelocation, IssueID=None):
lastmodified = ast_data['lastModified']

dt = ast_data['ComicBookInfo/1.0']
try:
    publisher = dt['publisher']
except:
    publisher = None
try:
    year = dt['publicationYear']
except:
    year = None
try:
    month = dt['publicationMonth']
except:
    month = None
try:
    day = dt['publicationDay']
except:
    day = None
try:
    issue_title = dt['title']
except:
    issue_title = None
try:
    series_title = dt['series']
except:
    series_title = None
try:
    issue_number = dt['issue']
except:
    issue_number = None
try:
    summary = dt['comments']
except:
    summary = "None"

editor = "None"
colorist = "None"
artist = "None"

@@ -1487,10 +1562,25 @@ def IssueDetails(filelocation, IssueID=None):
cover_artist = "None"
penciller = "None"
inker = "None"

try:
    series_volume = dt['volume']
except:
    series_volume = None

try:
    t = dt['credits']
except:
    editor = None
    colorist = None
    artist = None
    writer = None
    letterer = None
    cover_artist = None
    penciller = None
    inker = None

else:
    for cl in dt['credits']:
        if cl['role'] == 'Editor':
            if editor == "None": editor = cl['person']

@@ -1680,33 +1770,52 @@ def duplicate_filecheck(filename, ComicID=None, IssueID=None, StoryArcID=None):
logger.info('[DUPECHECK] Assuming 0-byte file - this one is gonna get hammered.')

logger.fdebug('[DUPECHECK] Based on duplication preferences I will retain based on : ' + mylar.DUPECONSTRAINT)
if 'cbr' in mylar.DUPECONSTRAINT or 'cbz' in mylar.DUPECONSTRAINT:

tmp_dupeconstraint = mylar.DUPECONSTRAINT

if any(['cbr' in mylar.DUPECONSTRAINT, 'cbz' in mylar.DUPECONSTRAINT]):
    if 'cbr' in mylar.DUPECONSTRAINT:
        if filename.endswith('.cbr'):
            #this has to be configured in config - either retain cbr or cbz.
            if dupchk['Location'].endswith('.cbz'):
                #keep dupechk['Location']
                logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in file : ' + dupchk['Location'])
                rtnval.append({'action': "dupe_file",
                               'to_dupe': filename})
            if dupchk['Location'].endswith('.cbr'):
                logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbr format. Retaining the larger filesize of the two.')
                tmp_dupeconstraint = 'filesize'
            else:
                #keep filename
                logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + filename)
                rtnval.append({'action': "dupe_src",
                               'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})
        else:
            if dupchk['Location'].endswith('.cbz'):
                logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbz format. Retaining the larger filesize of the two.')
                tmp_dupeconstraint = 'filesize'
            else:
                #keep filename
                logger.info('[DUPECHECK-CBR PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in file : ' + dupchk['Location'])
                rtnval.append({'action': "dupe_file",
                               'to_dupe': filename})

    elif 'cbz' in mylar.DUPECONSTRAINT:
        if filename.endswith('.cbr'):
            if dupchk['Location'].endswith('.cbr'):
                #keep dupchk['Location']
                logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbr format. Retaining the larger filesize of the two.')
                tmp_dupeconstraint = 'filesize'
            else:
                #keep filename
                logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
                rtnval.append({'action': "dupe_file",
                               'to_dupe': filename})
        else:
            if dupchk['Location'].endswith('.cbz'):
                logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] BOTH files are in cbz format. Retaining the larger filesize of the two.')
                tmp_dupeconstraint = 'filesize'
            else:
                #keep filename
                logger.info('[DUPECHECK-CBZ PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining newly scanned in filename : ' + filename)
                rtnval.append({'action': "dupe_src",
                               'to_dupe': os.path.join(series['ComicLocation'], dupchk['Location'])})

if mylar.DUPECONSTRAINT == 'filesize':
if mylar.DUPECONSTRAINT == 'filesize' or tmp_dupeconstraint == 'filesize':
    if filesz <= int(dupsize) and int(dupsize) != 0:
        logger.info('[DUPECHECK-FILESIZE PRIORITY] [#' + dupchk['Issue_Number'] + '] Retaining currently scanned in filename : ' + dupchk['Location'])
        rtnval.append({'action': "dupe_file",
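The retention logic above boils down to: honour the preferred format when the two files differ, and fall back to a filesize comparison when they share the same extension. A condensed, self-contained sketch of that decision (the function and argument names are illustrative, not the module's API):

    # Condensed sketch of the duplicate-retention decision (illustrative names only).
    def keep_which(existing, candidate, dupeconstraint):
        ext_e, ext_c = existing.rsplit('.', 1)[-1], candidate.rsplit('.', 1)[-1]
        if dupeconstraint in ('cbr', 'cbz') and ext_e != ext_c:
            # formats differ: keep whichever file matches the preferred format
            return existing if ext_e == dupeconstraint else candidate
        # same format (or constraint is 'filesize'): defer to a filesize comparison
        return 'filesize'

    print(keep_which('issue.cbz', 'issue.cbr', 'cbr'))   # issue.cbr
    print(keep_which('issue.cbr', 'issue.cbr', 'cbr'))   # filesize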
@@ -447,12 +447,14 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
statinfo = os.stat(coverfile)
coversize = statinfo.st_size

if int(coversize) < 35000 or str(r.status_code) != '200':
if int(coversize) < 30000 or str(r.status_code) != '200':
    if str(r.status_code) != '200':
        logger.info('Trying to grab an alternate cover due to problems trying to retrieve the main cover image.')
    else:
        logger.info('Image size invalid [' + str(coversize) + ' bytes] - trying to get alternate cover image.')
        logger.fdebug('invalid image link is here: ' + comic['ComicImage'])

    if os.path.exists(coverfile):
        os.remove(coverfile)

    logger.info('Attempting to retrieve alternate comic image for the series.')

@@ -673,7 +675,11 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
logger.info('Returning to Future-Check module to complete the add & remove entry.')
return

if imported == 'yes':
if calledfrom == 'addbyid':
    logger.info('Sucessfully added ' + comic['ComicName'] + ' (' + str(SeriesYear) + ') by directly using the ComicVine ID')
    return

if imported:
    logger.info('Successfully imported : ' + comic['ComicName'])
    #now that it's moved / renamed ... we remove it from importResults or mark as completed.

@@ -686,9 +692,6 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
"ComicID": comicid}
myDB.upsert("importresults", newValue, controlValue)

if calledfrom == 'addbyid':
    logger.info('Sucessfully added ' + comic['ComicName'] + ' (' + str(SeriesYear) + ') by directly using the ComicVine ID')
    return

def GCDimport(gcomicid, pullupd=None, imported=None, ogcname=None):
    # this is for importing via GCD only and not using CV.
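The cover check in the first hunk above treats anything under roughly 30 KB, or any non-200 response, as a bad cover and falls back to an alternate image. A compressed sketch of just that check (the helper below is illustrative, not part of the module):

    # Sketch only: decide whether the downloaded series cover should be retried.
    def cover_needs_retry(coversize, status_code, min_bytes=30000):
        # a tiny file is almost always a broken or placeholder image
        return int(coversize) < min_bytes or str(status_code) != '200'

    print(cover_needs_retry(coversize=12000, status_code=200))   # True  -> fetch alternate cover
    print(cover_needs_retry(coversize=85000, status_code=200))   # False -> keep main cover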
@@ -19,12 +19,13 @@ import os
import glob
import re
import shutil
import random

import mylar
from mylar import db, logger, helpers, importer, updater
from mylar import db, logger, helpers, importer, updater, filechecker

# You can scan a single directory and append it to the current library by specifying append=True
def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None):
def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None, queue=None):

    if cron and not mylar.LIBRARYSCAN:
        return

@@ -47,12 +48,18 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
basedir = dir

comic_list = []
failure_list = []
comiccnt = 0
extensions = ('cbr','cbz')
cv_location = []
cbz_retry = 0

mylar.IMPORT_STATUS = 'Now attempting to parse files for additional information'

#mylar.IMPORT_PARSED_COUNT #used to count what #/totalfiles the filename parser is currently on
for r, d, f in os.walk(dir):
    for files in f:
        mylar.IMPORT_FILES +=1
        if 'cvinfo' in files:
            cv_location.append(r)
            logger.fdebug('CVINFO found: ' + os.path.join(r))
@ -60,20 +67,77 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
|
|||
comic = files
|
||||
comicpath = os.path.join(r, files)
|
||||
comicsize = os.path.getsize(comicpath)
|
||||
t = filechecker.FileChecker(dir=r, file=comic)
|
||||
results = t.listFiles()
|
||||
#logger.info(results)
|
||||
#'type': re.sub('\.','', filetype).strip(),
|
||||
#'sub': path_list,
|
||||
#'volume': volume,
|
||||
#'match_type': match_type,
|
||||
#'comicfilename': filename,
|
||||
#'comiclocation': clocation,
|
||||
#'series_name': series_name,
|
||||
#'series_volume': issue_volume,
|
||||
#'series_year': issue_year,
|
||||
#'justthedigits': issue_number,
|
||||
#'annualcomicid': annual_comicid,
|
||||
#'scangroup': scangroup}
|
||||
|
||||
logger.fdebug('Comic: ' + comic + ' [' + comicpath + '] - ' + str(comicsize) + ' bytes')
|
||||
comiccnt+=1
|
||||
|
||||
if results:
|
||||
resultline = '[PARSE-' + results['parse_status'].upper() + ']'
|
||||
resultline += '[SERIES: ' + results['series_name'] + ']'
|
||||
if results['series_volume'] is not None:
|
||||
resultline += '[VOLUME: ' + results['series_volume'] + ']'
|
||||
if results['issue_year'] is not None:
|
||||
resultline += '[ISSUE YEAR: ' + str(results['issue_year']) + ']'
|
||||
if results['issue_number'] is not None:
|
||||
resultline += '[ISSUE #: ' + results['issue_number'] + ']'
|
||||
logger.fdebug(resultline)
|
||||
else:
|
||||
logger.fdebug('[PARSED] FAILURE.')
|
||||
continue
|
||||
|
||||
# We need the unicode path to use for logging, inserting into database
|
||||
unicode_comic_path = comicpath.decode(mylar.SYS_ENCODING, 'replace')
|
||||
|
||||
comic_dict = {'ComicFilename': comic,
|
||||
if results['parse_status'] == 'success':
|
||||
comic_list.append({'ComicFilename': comic,
|
||||
'ComicLocation': comicpath,
|
||||
'ComicSize': comicsize,
|
||||
'Unicode_ComicLocation': unicode_comic_path}
|
||||
comic_list.append(comic_dict)
|
||||
'Unicode_ComicLocation': unicode_comic_path,
|
||||
'parsedinfo': {'series_name': results['series_name'],
|
||||
'series_volume': results['series_volume'],
|
||||
'issue_year': results['issue_year'],
|
||||
'issue_number': results['issue_number']}
|
||||
})
|
||||
comiccnt +=1
|
||||
mylar.IMPORT_PARSED_COUNT +=1
|
||||
else:
|
||||
failure_list.append({'ComicFilename': comic,
|
||||
'ComicLocation': comicpath,
|
||||
'ComicSize': comicsize,
|
||||
'Unicode_ComicLocation': unicode_comic_path,
|
||||
'parsedinfo': {'series_name': results['series_name'],
|
||||
'series_volume': results['series_volume'],
|
||||
'issue_year': results['issue_year'],
|
||||
'issue_number': results['issue_number']}
|
||||
})
|
||||
mylar.IMPORT_FAILURE_COUNT +=1
|
||||
if comic.endswith('.cbz'):
|
||||
cbz_retry +=1
|
||||
|
||||
|
||||
mylar.IMPORT_TOTALFILES = comiccnt
|
||||
logger.info('I have successfully discovered & parsed a total of ' + str(comiccnt) + ' files....analyzing now')
|
||||
logger.info('I have not been able to determine what ' + str(len(failure_list)) + ' files are')
|
||||
logger.info('However, ' + str(cbz_retry) + ' files are in a cbz format, which may contain metadata.')
|
||||
|
||||
mylar.IMPORT_STATUS = 'Successfully parsed ' + str(comiccnt) + ' files'
|
||||
|
||||
#return queue.put(valreturn)
|
||||
|
||||
logger.info("I've found a total of " + str(comiccnt) + " comics....analyzing now")
|
||||
#logger.info("comiclist: " + str(comic_list))
|
||||
myDB = db.DBConnection()
|
||||
|
||||
#let's load in the watchlist to see if we have any matches.
|
||||
|
@ -144,13 +208,17 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
|
|||
issueid_list = []
|
||||
cvscanned_loc = None
|
||||
cvinfo_CID = None
|
||||
cnt = 0
|
||||
mylar.IMPORT_STATUS = '[0%] Now parsing individual filenames for metadata if available'
|
||||
|
||||
for i in comic_list:
|
||||
mylar.IMPORT_STATUS = '[' + str(cnt) + '/' + str(comiccnt) + '] Now parsing individual filenames for metadata if available'
|
||||
logger.fdebug('Analyzing : ' + i['ComicFilename'])
|
||||
comfilename = i['ComicFilename']
|
||||
comlocation = i['ComicLocation']
|
||||
issueinfo = None
|
||||
|
||||
#probably need to zero these issue-related metadata to None so we can pick the best option
|
||||
issuevolume = None
|
||||
|
||||
#Make sure cvinfo is checked for FIRST (so that CID can be attached to all files properly thereafter as they're scanned in)
|
||||
if os.path.dirname(comlocation) in cv_location and os.path.dirname(comlocation) != cvscanned_loc:
|
||||
|
@ -181,376 +249,236 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
|
|||
# continue
|
||||
|
||||
if mylar.IMP_METADATA:
|
||||
logger.info('metatagging checking enabled.')
|
||||
#if read tags is enabled during import, check here.
|
||||
if i['ComicLocation'].endswith('.cbz'):
|
||||
logger.info('Attempting to read tags present in filename: ' + i['ComicLocation'])
|
||||
logger.fdebug('[IMPORT-CBZ] Metatagging checking enabled.')
|
||||
logger.info('[IMPORT-CBZ} Attempting to read tags present in filename: ' + i['ComicLocation'])
|
||||
issueinfo = helpers.IssueDetails(i['ComicLocation'])
|
||||
logger.info('issueinfo: ' + str(issueinfo))
|
||||
if issueinfo is None:
|
||||
logger.fdebug('[IMPORT-CBZ] No valid metadata contained within filename. Dropping down to parsing the filename itself.')
|
||||
pass
|
||||
else:
|
||||
issuenotes_id = None
|
||||
logger.info('Successfully retrieved some tags. Lets see what I can figure out.')
|
||||
logger.info('[IMPORT-CBZ] Successfully retrieved some tags. Lets see what I can figure out.')
|
||||
comicname = issueinfo[0]['series']
|
||||
logger.fdebug('Series Name: ' + comicname)
|
||||
issue_number = issueinfo[0]['issue_number']
|
||||
logger.fdebug('Issue Number: ' + str(issue_number))
|
||||
issuetitle = issueinfo[0]['title']
|
||||
logger.fdebug('Issue Title: ' + issuetitle)
|
||||
issueyear = issueinfo[0]['year']
|
||||
logger.fdebug('Issue Year: ' + str(issueyear))
|
||||
if comicname is not None:
|
||||
logger.fdebug('[IMPORT-CBZ] Series Name: ' + comicname)
|
||||
as_d = filechecker.FileChecker(watchcomic=comicname.decode('utf-8'))
|
||||
as_dyninfo = as_d.dynamic_replace(comicname)
|
||||
logger.fdebug('Dynamic-ComicName: ' + as_dyninfo['mod_seriesname'])
|
||||
else:
|
||||
logger.fdebug('[IMPORT-CBZ] No series name found within metadata. This is bunk - dropping down to file parsing for usable information.')
|
||||
issueinfo = None
|
||||
issue_number = None
|
||||
|
||||
if issueinfo is not None:
|
||||
try:
|
||||
issuevolume = issueinfo[0]['volume']
|
||||
issueyear = issueinfo[0]['year']
|
||||
except:
|
||||
issueyear = None
|
||||
|
||||
#if the issue number is a non-numeric unicode string, this will screw up along with impID
|
||||
issue_number = issueinfo[0]['issue_number']
|
||||
if issue_number is not None:
|
||||
logger.fdebug('[IMPORT-CBZ] Issue Number: ' + issue_number)
|
||||
else:
|
||||
issue_number = i['parsed']['issue_number']
|
||||
|
||||
if 'annual' in comicname.lower() or 'annual' in comfilename.lower():
|
||||
if issue_number is None or issue_number == 'None':
|
||||
logger.info('Annual detected with no issue number present within metadata. Assuming year as issue.')
|
||||
try:
|
||||
issue_number = 'Annual ' + str(issueyear)
|
||||
except:
|
||||
issue_number = 'Annual ' + i['parsed']['issue_year']
|
||||
else:
|
||||
logger.info('Annual detected with issue number present within metadata.')
|
||||
if 'annual' not in issue_number.lower():
|
||||
issue_number = 'Annual ' + issue_number
|
||||
mod_series = re.sub('annual', '', comicname, flags=re.I).strip()
|
||||
else:
|
||||
mod_series = comicname
|
||||
|
||||
logger.fdebug('issue number SHOULD Be: ' + issue_number)
|
||||
|
||||
try:
|
||||
issuetitle = issueinfo[0]['title']
|
||||
except:
|
||||
issuetitle = None
|
||||
try:
|
||||
issueyear = issueinfo[0]['year']
|
||||
except:
|
||||
issueyear = None
|
||||
try:
|
||||
issuevolume = str(issueinfo[0]['volume'])
|
||||
if all([issuevolume is not None, issuevolume != 'None']) and not issuevolume.lower().startswith('v'):
|
||||
issuevolume = 'v' + str(issuevolume)
|
||||
logger.fdebug('[TRY]issue volume is: ' + str(issuevolume))
|
||||
except:
|
||||
logger.fdebug('[EXCEPT]issue volume is: ' + str(issuevolume))
|
||||
issuevolume = None
|
||||
|
||||
if any([comicname is None, comicname == 'None', issue_number is None, issue_number == 'None']):
|
||||
logger.fdebug('[IMPORT-CBZ] Improperly tagged file as the metatagging is invalid. Ignoring meta and just parsing the filename.')
|
||||
issueinfo = None
|
||||
pass
|
||||
else:
|
||||
# if used by ComicTagger, Notes field will have the IssueID.
|
||||
issuenotes = issueinfo[0]['notes']
|
||||
logger.fdebug('Notes: ' + issuenotes)
|
||||
if issuenotes is not None:
|
||||
logger.fdebug('[IMPORT-CBZ] Notes: ' + issuenotes)
|
||||
if issuenotes is not None and issuenotes != 'None':
|
||||
if 'Issue ID' in issuenotes:
|
||||
st_find = issuenotes.find('Issue ID')
|
||||
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
|
||||
if tmp_issuenotes_id.isdigit():
|
||||
issuenotes_id = tmp_issuenotes_id
|
||||
logger.fdebug('Successfully retrieved CV IssueID for ' + comicname + ' #' + str(issue_number) + ' [' + str(issuenotes_id) + ']')
|
||||
logger.fdebug('[IMPORT-CBZ] Successfully retrieved CV IssueID for ' + comicname + ' #' + issue_number + ' [' + str(issuenotes_id) + ']')
|
||||
elif 'CVDB' in issuenotes:
|
||||
st_find = issuenotes.find('CVDB')
|
||||
tmp_issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
|
||||
if tmp_issuenotes_id.isdigit():
|
||||
issuenotes_id = tmp_issuenotes_id
|
||||
logger.fdebug('Successfully retrieved CV IssueID for ' + comicname + ' #' + str(issue_number) + ' [' + str(issuenotes_id) + ']')
|
||||
logger.fdebug('[IMPORT-CBZ] Successfully retrieved CV IssueID for ' + comicname + ' #' + issue_number + ' [' + str(issuenotes_id) + ']')
|
||||
else:
|
||||
logger.fdebug('Unable to retrieve IssueID from meta-tagging. If there is other metadata present I will use that.')
|
||||
logger.fdebug('[IMPORT-CBZ] Unable to retrieve IssueID from meta-tagging. If there is other metadata present I will use that.')
|
||||
|
||||
logger.fdebug("adding " + comicname + " to the import-queue!")
|
||||
impid = comicname + '-' + str(issueyear) + '-' + str(issue_number) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
|
||||
logger.fdebug("impid: " + str(impid))
|
||||
logger.fdebug('[IMPORT-CBZ] Adding ' + comicname + ' to the import-queue!')
|
||||
#impid = comicname + '-' + str(issueyear) + '-' + str(issue_number) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
|
||||
impid = str(random.randint(1000000,99999999))
|
||||
logger.fdebug('[IMPORT-CBZ] impid: ' + str(impid))
|
||||
#make sure we only add in those issueid's which don't already have a comicid attached via the cvinfo scan above (this is for reverse-lookup of issueids)
|
||||
issuepopulated = False
|
||||
if cvinfo_CID is None:
|
||||
issueid_list.append(issuenotes_id)
|
||||
if issuenotes_id is None:
|
||||
logger.info('[IMPORT-CBZ] No ComicID detected where it should be. Bypassing this metadata entry and going the parsing route [' + comfilename + ']')
|
||||
else:
|
||||
#we need to store the impid here as well so we can look it up.
|
||||
issueid_list.append({'issueid': issuenotes_id,
|
||||
'importinfo': {'impid': impid,
|
||||
'comicid': None,
|
||||
'comicname': comicname,
|
||||
'dynamicname': as_dyninfo['mod_seriesname'],
|
||||
'comicyear': issueyear,
|
||||
'issuenumber': issue_number,
|
||||
'volume': issuevolume,
|
||||
'comfilename': comfilename,
|
||||
'comlocation': comlocation.decode(mylar.SYS_ENCODING)}
|
||||
})
|
||||
mylar.IMPORT_CID_COUNT +=1
|
||||
issuepopulated = True
|
||||
|
||||
if issuepopulated == False:
|
||||
if cvscanned_loc == os.path.dirname(comlocation):
|
||||
cv_cid = cvinfo_CID
|
||||
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
|
||||
logger.fdebug('[IMPORT-CBZ] CVINFO_COMICID attached : ' + str(cv_cid))
|
||||
else:
|
||||
cv_cid = None
|
||||
import_by_comicids.append({
|
||||
"impid": impid,
|
||||
"comicid": cv_cid,
|
||||
"watchmatch": None,
|
||||
"displayname": helpers.cleanName(comicname),
|
||||
"comicname": comicname, #com_NAME,
|
||||
"displayname": mod_series,
|
||||
"comicname": comicname,
|
||||
"dynamicname": as_dyninfo['mod_seriesname'],
|
||||
"comicyear": issueyear,
|
||||
"issuenumber": issue_number,
|
||||
"volume": issuevolume,
|
||||
"issueid": issuenotes_id,
|
||||
"comfilename": comfilename,
|
||||
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
|
||||
})
|
||||
|
||||
mylar.IMPORT_CID_COUNT +=1
|
||||
else:
|
||||
logger.info(i['ComicLocation'] + ' is not in a metatagged format (cbz). Bypassing reading of the metatags')
|
||||
pass
|
||||
#logger.fdebug(i['ComicFilename'] + ' is not in a metatagged format (cbz). Bypassing reading of the metatags')
|
||||
|
||||
if issueinfo is None:
|
||||
#let's clean up the filename for matching purposes
|
||||
|
||||
cfilename = re.sub('[\_\#\,\/\:\;\-\!\$\%\&\+\'\?\@]', ' ', comfilename)
|
||||
#cfilename = re.sub('\s', '_', str(cfilename))
|
||||
d_filename = re.sub('[\_\#\,\/\;\!\$\%\&\?\@]', ' ', comfilename)
|
||||
d_filename = re.sub('[\:\-\+\']', '#', d_filename)
|
||||
|
||||
#strip extraspaces
|
||||
d_filename = re.sub('\s+', ' ', d_filename)
|
||||
cfilename = re.sub('\s+', ' ', cfilename)
|
||||
|
||||
#versioning - remove it
|
||||
subsplit = cfilename.replace('_', ' ').split()
|
||||
volno = None
|
||||
volyr = None
|
||||
for subit in subsplit:
|
||||
if subit[0].lower() == 'v':
|
||||
vfull = 0
|
||||
if subit[1:].isdigit():
|
||||
#if in format v1, v2009 etc...
|
||||
if len(subit) > 3:
|
||||
# if it's greater than 3 in length, then the format is Vyyyy
|
||||
vfull = 1 # add on 1 character length to account for extra space
|
||||
cfilename = re.sub(subit, '', cfilename)
|
||||
d_filename = re.sub(subit, '', d_filename)
|
||||
volno = re.sub("[^0-9]", " ", subit)
|
||||
elif subit.lower()[:3] == 'vol':
|
||||
#if in format vol.2013 etc
|
||||
#because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
|
||||
logger.fdebug('volume indicator detected as version #:' + str(subit))
|
||||
cfilename = re.sub(subit, '', cfilename)
|
||||
cfilename = " ".join(cfilename.split())
|
||||
d_filename = re.sub(subit, '', d_filename)
|
||||
d_filename = " ".join(d_filename.split())
|
||||
volyr = re.sub("[^0-9]", " ", subit).strip()
|
||||
logger.fdebug('volume year set as : ' + str(volyr))
|
||||
cm_cn = 0
|
||||
|
||||
#we need to track the counter to make sure we are comparing the right array parts
|
||||
#this takes care of the brackets :)
|
||||
m = re.findall('[^()]+', d_filename) #cfilename)
|
||||
lenm = len(m)
|
||||
logger.fdebug("there are " + str(lenm) + " words.")
|
||||
cnt = 0
|
||||
yearmatch = "false"
|
||||
foundonwatch = "False"
|
||||
issue = 999999
|
||||
|
||||
|
||||
while (cnt < lenm):
|
||||
if m[cnt] is None: break
|
||||
if m[cnt] == ' ':
|
||||
pass
|
||||
if i['parsedinfo']['issue_number'] is None:
|
||||
if 'annual' in i['parsedinfo']['series_name'].lower():
|
||||
logger.fdebug('Annual detected with no issue number present. Assuming year as issue.')##1 issue')
|
||||
if i['parsedinfo']['issue_year'] is not None:
|
||||
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_year'])
|
||||
else:
|
||||
logger.fdebug(str(cnt) + ". Bracket Word: " + m[cnt])
|
||||
if cnt == 0:
|
||||
comic_andiss = m[cnt]
|
||||
logger.fdebug("Comic: " + comic_andiss)
|
||||
# if it's not in the standard format this will bork.
|
||||
# let's try to accomodate (somehow).
|
||||
# first remove the extension (if any)
|
||||
extensions = ('cbr', 'cbz')
|
||||
if comic_andiss.lower().endswith(extensions):
|
||||
comic_andiss = comic_andiss[:-4]
|
||||
logger.fdebug("removed extension from filename.")
|
||||
#now we have to break up the string regardless of formatting.
|
||||
#let's force the spaces.
|
||||
comic_andiss = re.sub('_', ' ', comic_andiss)
|
||||
cs = comic_andiss.split()
|
||||
cs_len = len(cs)
|
||||
cn = ''
|
||||
ydetected = 'no'
|
||||
idetected = 'no'
|
||||
decimaldetect = 'no'
|
||||
for i in reversed(xrange(len(cs))):
|
||||
#start at the end.
|
||||
logger.fdebug("word: " + str(cs[i]))
|
||||
#assume once we find issue - everything prior is the actual title
|
||||
#idetected = no will ignore everything so it will assume all title
|
||||
if cs[i][:-2] == '19' or cs[i][:-2] == '20' and idetected == 'no':
|
||||
logger.fdebug("year detected: " + str(cs[i]))
|
||||
ydetected = 'yes'
|
||||
result_comyear = cs[i]
|
||||
elif cs[i].isdigit() and idetected == 'no' or '.' in cs[i]:
|
||||
if '.' in cs[i]:
|
||||
#make sure it's a number on either side of decimal and assume decimal issue.
|
||||
decst = cs[i].find('.')
|
||||
dec_st = cs[i][:decst]
|
||||
dec_en = cs[i][decst +1:]
|
||||
logger.fdebug("st: " + str(dec_st))
|
||||
logger.fdebug("en: " + str(dec_en))
|
||||
if dec_st.isdigit() and dec_en.isdigit():
|
||||
logger.fdebug("decimal issue detected...adjusting.")
|
||||
issue = dec_st + "." + dec_en
|
||||
logger.fdebug("issue detected: " + str(issue))
|
||||
idetected = 'yes'
|
||||
issuenumber = 'Annual 1'
|
||||
else:
|
||||
logger.fdebug("false decimal represent. Chunking to extra word.")
|
||||
cn = cn + cs[i] + " "
|
||||
#break
|
||||
issuenumber = i['parsedinfo']['issue_number']
|
||||
|
||||
if 'annual' in i['parsedinfo']['series_name'].lower():
|
||||
mod_series = re.sub('annual', '', i['parsedinfo']['series_name'], flags=re.I).strip()
|
||||
logger.fdebug('Annual detected with no issue number present. Assuming year as issue.')##1 issue')
|
||||
if i['parsedinfo']['issue_number'] is not None:
|
||||
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_number'])
|
||||
else:
|
||||
issue = cs[i]
|
||||
logger.fdebug("issue detected : " + str(issue))
|
||||
idetected = 'yes'
|
||||
|
||||
elif '\#' in cs[i] or decimaldetect == 'yes':
|
||||
logger.fdebug("issue detected: " + str(cs[i]))
|
||||
idetected = 'yes'
|
||||
else: cn = cn + cs[i] + " "
|
||||
if ydetected == 'no':
|
||||
#assume no year given in filename...
|
||||
result_comyear = "0000"
|
||||
logger.fdebug("cm?: " + str(cn))
|
||||
if issue is not '999999':
|
||||
comiss = issue
|
||||
if i['parsedinfo']['issue_year'] is not None:
|
||||
issuenumber = 'Annual ' + str(i['parsedinfo']['issue_year'])
|
||||
else:
|
||||
logger.ERROR("Invalid Issue number (none present) for " + comfilename)
|
||||
break
|
||||
cnsplit = cn.split()
|
||||
cname = ''
|
||||
findcn = 0
|
||||
while (findcn < len(cnsplit)):
|
||||
cname = cname + cs[findcn] + " "
|
||||
findcn+=1
|
||||
cname = cname[:len(cname)-1] # drop the end space...
|
||||
logger.fdebug('assuming name is : ' + cname)
|
||||
com_NAME = cname
|
||||
logger.fdebug('com_NAME : ' + com_NAME)
|
||||
yearmatch = "True"
|
||||
issuenumber = 'Annual 1'
|
||||
else:
|
||||
logger.fdebug('checking ' + m[cnt])
|
||||
# we're assuming that the year is in brackets (and it should be damnit)
|
||||
if m[cnt][:-2] == '19' or m[cnt][:-2] == '20':
|
||||
logger.fdebug('year detected: ' + str(m[cnt]))
|
||||
ydetected = 'yes'
|
||||
result_comyear = m[cnt]
|
||||
elif m[cnt][:3].lower() in datelist:
|
||||
logger.fdebug('possible issue date format given - verifying')
|
||||
#if the date of the issue is given as (Jan 2010) or (January 2010) let's adjust.
|
||||
#keeping in mind that ',' and '.' are already stripped from the string
|
||||
if m[cnt][-4:].isdigit():
|
||||
ydetected = 'yes'
|
||||
result_comyear = m[cnt][-4:]
|
||||
logger.fdebug('Valid Issue year of ' + str(result_comyear) + 'detected in format of ' + str(m[cnt]))
|
||||
cnt+=1
|
||||
mod_series = i['parsedinfo']['series_name']
|
||||
issuenumber = i['parsedinfo']['issue_number']
|
||||
|
||||
displength = len(cname)
|
||||
logger.fdebug('cname length : ' + str(displength) + ' --- ' + str(cname))
|
||||
logger.fdebug('d_filename is : ' + d_filename)
|
||||
charcount = d_filename.count('#')
|
||||
logger.fdebug('charcount is : ' + str(charcount))
|
||||
if charcount > 0:
|
||||
logger.fdebug('entering loop')
|
||||
for i, m in enumerate(re.finditer('\#', d_filename)):
|
||||
if m.end() <= displength:
|
||||
logger.fdebug(comfilename[m.start():m.end()])
|
||||
# find occurance in c_filename, then replace into d_filname so special characters are brought across
|
||||
newchar = comfilename[m.start():m.end()]
|
||||
logger.fdebug('newchar:' + str(newchar))
|
||||
d_filename = d_filename[:m.start()] + str(newchar) + d_filename[m.end():]
|
||||
logger.fdebug('d_filename:' + str(d_filename))
|
||||
|
||||
dispname = d_filename[:displength]
|
||||
logger.fdebug('dispname : ' + dispname)
|
||||
logger.fdebug('[' + mod_series + '] Adding to the import-queue!')
|
||||
isd = filechecker.FileChecker(watchcomic=mod_series.decode('utf-8'))
|
||||
is_dyninfo = isd.dynamic_replace(mod_series)
|
||||
logger.fdebug('Dynamic-ComicName: ' + is_dyninfo['mod_seriesname'])
|
||||
|
||||
splitit = []
|
||||
watchcomic_split = []
|
||||
logger.fdebug("filename comic and issue: " + comic_andiss)
|
||||
|
||||
#changed this from '' to ' '
|
||||
comic_iss_b4 = re.sub('[\-\:\,]', ' ', comic_andiss)
|
||||
comic_iss = comic_iss_b4.replace('.', ' ')
|
||||
comic_iss = re.sub('[\s+]', ' ', comic_iss).strip()
|
||||
logger.fdebug("adjusted comic and issue: " + str(comic_iss))
|
||||
#remove 'the' from here for proper comparisons.
|
||||
if ' the ' in comic_iss.lower():
|
||||
comic_iss = re.sub('\\bthe\\b', '', comic_iss).strip()
|
||||
splitit = comic_iss.split(None)
|
||||
logger.fdebug("adjusting from: " + str(comic_iss_b4) + " to: " + str(comic_iss))
|
||||
#here we cycle through the Watchlist looking for a match.
|
||||
while (cm_cn < watchcnt):
|
||||
#setup the watchlist
|
||||
comname = ComicName[cm_cn]
|
||||
comyear = ComicYear[cm_cn]
|
||||
compub = ComicPublisher[cm_cn]
|
||||
comtotal = ComicTotal[cm_cn]
|
||||
comicid = ComicID[cm_cn]
|
||||
watch_location = ComicLocation[cm_cn]
|
||||
|
||||
# there shouldn't be an issue in the comic now, so let's just assume it's all gravy.
|
||||
splitst = len(splitit)
|
||||
watchcomic_split = helpers.cleanName(comname)
|
||||
watchcomic_split = re.sub('[\-\:\,\.]', ' ', watchcomic_split).split(None)
|
||||
|
||||
logger.fdebug(str(splitit) + " file series word count: " + str(splitst))
|
||||
logger.fdebug(str(watchcomic_split) + " watchlist word count: " + str(len(watchcomic_split)))
|
||||
if (splitst) != len(watchcomic_split):
|
||||
logger.fdebug("incorrect comic lengths...not a match")
|
||||
# if str(splitit[0]).lower() == "the":
|
||||
# logger.fdebug("THE word detected...attempting to adjust pattern matching")
|
||||
# splitit[0] = splitit[4:]
|
||||
else:
|
||||
logger.fdebug("length match..proceeding")
|
||||
n = 0
|
||||
scount = 0
|
||||
logger.fdebug("search-length: " + str(splitst))
|
||||
logger.fdebug("Watchlist-length: " + str(len(watchcomic_split)))
|
||||
while (n <= (splitst) -1):
|
||||
logger.fdebug("splitit: " + str(splitit[n]))
|
||||
if n < (splitst) and n < len(watchcomic_split):
|
||||
logger.fdebug(str(n) + " Comparing: " + str(watchcomic_split[n]) + " .to. " + str(splitit[n]))
|
||||
if '+' in watchcomic_split[n]:
|
||||
watchcomic_split[n] = re.sub('+', '', str(watchcomic_split[n]))
|
||||
if str(watchcomic_split[n].lower()) in str(splitit[n].lower()) and len(watchcomic_split[n]) >= len(splitit[n]):
|
||||
logger.fdebug("word matched on : " + str(splitit[n]))
|
||||
scount+=1
|
||||
#elif ':' in splitit[n] or '-' in splitit[n]:
|
||||
# splitrep = splitit[n].replace('-', '')
|
||||
# logger.fdebug("non-character keyword...skipped on " + splitit[n])
|
||||
elif str(splitit[n]).lower().startswith('v'):
|
||||
logger.fdebug("possible versioning..checking")
|
||||
#we hit a versioning # - account for it
|
||||
if splitit[n][1:].isdigit():
|
||||
comicversion = str(splitit[n])
|
||||
logger.fdebug("version found: " + str(comicversion))
|
||||
else:
|
||||
logger.fdebug("Comic / Issue section")
|
||||
if splitit[n].isdigit():
|
||||
logger.fdebug("issue detected")
|
||||
else:
|
||||
logger.fdebug("non-match for: "+ str(splitit[n]))
|
||||
pass
|
||||
n+=1
|
||||
#set the match threshold to 80% (for now)
|
||||
# if it's less than 80% consider it a non-match and discard.
|
||||
#splitit has to splitit-1 because last position is issue.
|
||||
wordcnt = int(scount)
|
||||
logger.fdebug("scount:" + str(wordcnt))
|
||||
totalcnt = int(splitst)
|
||||
logger.fdebug("splitit-len:" + str(totalcnt))
|
||||
spercent = (wordcnt /totalcnt) * 100
|
||||
logger.fdebug("we got " + str(spercent) + " percent.")
|
||||
if int(spercent) >= 80:
|
||||
logger.fdebug("it's a go captain... - we matched " + str(spercent) + "%!")
|
||||
logger.fdebug("this should be a match!")
|
||||
logger.fdebug("issue we found for is : " + str(comiss))
|
||||
#set the year to the series we just found ;)
|
||||
result_comyear = comyear
|
||||
#issue comparison now as well
|
||||
logger.info(u"Found " + comname + " (" + str(comyear) + ") issue: " + str(comiss))
|
||||
watchmatch = str(comicid)
|
||||
dispname = DisplayName[cm_cn]
|
||||
foundonwatch = "True"
|
||||
break
|
||||
elif int(spercent) < 80:
|
||||
logger.fdebug("failure - we only got " + str(spercent) + "% right!")
|
||||
cm_cn+=1
|
||||
|
||||
if foundonwatch == "False":
|
||||
watchmatch = None
|
||||
#---if it's not a match - send it to the importer.
|
||||
n = 0
|
||||
|
||||
if volyr is None:
|
||||
if result_comyear is None:
|
||||
result_comyear = '0000' #no year in filename basically.
|
||||
else:
|
||||
if result_comyear is None:
|
||||
result_comyear = volyr
|
||||
if volno is None:
|
||||
if volyr is None:
|
||||
vol_label = None
|
||||
else:
|
||||
vol_label = volyr
|
||||
else:
|
||||
vol_label = volno
|
||||
|
||||
logger.fdebug("adding " + com_NAME + " to the import-queue!")
|
||||
impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
|
||||
#impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
|
||||
impid = str(random.randint(1000000,99999999))
|
||||
logger.fdebug("impid: " + str(impid))
|
||||
if cvscanned_loc == os.path.dirname(comlocation):
|
||||
cv_cid = cvinfo_CID
|
||||
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
|
||||
logger.fdebug('CVINFO_COMICID attached : ' + str(cv_cid))
|
||||
else:
|
||||
cv_cid = None
|
||||
|
||||
if issuevolume is None:
|
||||
logger.fdebug('issue volume is none : ' + str(issuevolume))
|
||||
if i['parsedinfo']['series_volume'] is None:
|
||||
issuevolume = None
|
||||
else:
|
||||
if str(i['parsedinfo']['series_volume'].lower()).startswith('v'):
|
||||
issuevolume = i['parsedinfo']['series_volume']
|
||||
else:
|
||||
issuevolume = 'v' + str(i['parsedinfo']['series_volume'])
|
||||
else:
|
||||
logger.fdebug('issue volume not none : ' + str(issuevolume))
|
||||
if issuevolume.lower().startswith('v'):
|
||||
issuevolume = issuevolume
|
||||
else:
|
||||
issuevolume = 'v' + str(issuevolume)
|
||||
|
||||
logger.fdebug('IssueVolume is : ' + str(issuevolume))
|
||||
|
||||
import_by_comicids.append({
|
||||
"impid": impid,
|
||||
"comicid": cv_cid,
|
||||
"issueid": None,
|
||||
"watchmatch": watchmatch,
|
||||
"displayname": dispname,
|
||||
"comicname": dispname, #com_NAME,
|
||||
"comicyear": result_comyear,
|
||||
"volume": vol_label,
|
||||
"watchmatch": None, #watchmatch (should be true/false if it already exists on watchlist)
|
||||
"displayname": mod_series,
|
||||
"comicname": i['parsedinfo']['series_name'],
|
||||
"dynamicname": is_dyninfo['mod_seriesname'].lower(),
|
||||
"comicyear": i['parsedinfo']['issue_year'],
|
||||
"issuenumber": issuenumber,
|
||||
"volume": issuevolume,
|
||||
"comfilename": comfilename,
|
||||
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
|
||||
})
|
||||
cnt+=1
|
||||
#logger.fdebug('import_by_ids: ' + str(import_by_comicids))
|
||||
|
||||
#reverse lookup all of the gathered IssueID's in order to get the related ComicID
|
||||
vals = mylar.cv.getComic(None, 'import', comicidlist=issueid_list)
|
||||
logger.fdebug('vals returned:' + str(vals))
|
||||
reverse_issueids = []
|
||||
for x in issueid_list:
|
||||
reverse_issueids.append(x['issueid'])
|
||||
|
||||
vals = None
|
||||
if len(reverse_issueids) > 0:
|
||||
mylar.IMPORT_STATUS = 'Now Reverse looking up ' + str(len(reverse_issueids)) + ' IssueIDs to get the ComicIDs'
|
||||
vals = mylar.cv.getComic(None, 'import', comicidlist=reverse_issueids)
|
||||
#logger.fdebug('vals returned:' + str(vals))
|
||||
|
||||
if len(watch_kchoice) > 0:
|
||||
watchchoice['watchlist'] = watch_kchoice
|
||||
|
@ -629,79 +557,93 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
|
|||
if not len(import_by_comicids):
|
||||
return "Completed"
|
||||
if len(import_by_comicids) > 0:
|
||||
import_comicids['comic_info'] = import_by_comicids
|
||||
#import_comicids['comic_info'] = import_by_comicids
|
||||
#if vals:
|
||||
# import_comicids['issueid_info'] = vals
|
||||
#else:
|
||||
# import_comicids['issueid_info'] = None
|
||||
if vals:
|
||||
import_comicids['issueid_info'] = vals
|
||||
cvimport_comicids = vals
|
||||
import_cv_ids = len(vals)
|
||||
else:
|
||||
import_comicids['issueid_info'] = None
|
||||
cvimport_comicids = None
|
||||
import_cv_ids = 0
|
||||
#logger.fdebug('import comicids: ' + str(import_by_comicids))
|
||||
|
||||
logger.fdebug('import comicids: ' + str(import_by_comicids))
|
||||
|
||||
return import_comicids, len(import_by_comicids)
|
||||
return {'import_by_comicids': import_by_comicids,
|
||||
'import_count': len(import_by_comicids),
|
||||
'CV_import_comicids': cvimport_comicids,
|
||||
'import_cv_ids': import_cv_ids,
|
||||
'issueid_list': issueid_list,
|
||||
'failure_list': failure_list}
|
||||
|
||||
|
||||
def scanLibrary(scan=None, queue=None):
|
||||
valreturn = []
|
||||
if scan:
|
||||
try:
|
||||
soma, noids = libraryScan()
|
||||
soma = libraryScan(queue=queue)
|
||||
except Exception, e:
|
||||
logger.error('Unable to complete the scan: %s' % e)
|
||||
logger.error('[IMPORT] Unable to complete the scan: %s' % e)
|
||||
mylar.IMPORT_STATUS = None
|
||||
return
|
||||
if soma == "Completed":
|
||||
logger.info('Sucessfully completed import.')
|
||||
logger.info('[IMPORT] Sucessfully completed import.')
|
||||
else:
|
||||
logger.info('Starting mass importing...' + str(noids) + ' records.')
|
||||
#this is what it should do...
|
||||
#store soma (the list of comic_details from importing) into sql table so import can be whenever
|
||||
#display webpage showing results
|
||||
#allow user to select comic to add (one at a time)
|
||||
#call addComic off of the webpage to initiate the add.
|
||||
#return to result page to finish or continue adding.
|
||||
#....
|
||||
#threading.Thread(target=self.searchit).start()
|
||||
#threadthis = threadit.ThreadUrl()
|
||||
#result = threadthis.main(soma)
|
||||
myDB = db.DBConnection()
|
||||
sl = 0
|
||||
logger.fdebug("number of records: " + str(noids))
|
||||
while (sl < int(noids)):
|
||||
soma_sl = soma['comic_info'][sl]
|
||||
issue_info = soma['issueid_info']
|
||||
logger.fdebug("soma_sl: " + str(soma_sl))
|
||||
logger.fdebug("issue_info: " + str(issue_info))
|
||||
logger.fdebug("comicname: " + soma_sl['comicname'])
|
||||
logger.fdebug("filename: " + soma_sl['comfilename'])
|
||||
if issue_info is not None:
|
||||
for iss in issue_info:
|
||||
if soma_sl['issueid'] == iss['IssueID']:
|
||||
logger.info('IssueID match: ' + str(iss['IssueID']))
|
||||
logger.info('ComicName: ' + str(iss['ComicName'] + '[' + str(iss['ComicID'])+ ']'))
|
||||
IssID = iss['IssueID']
|
||||
ComicID = iss['ComicID']
|
||||
displayname = iss['ComicName']
|
||||
comicname = iss['ComicName']
|
||||
break
|
||||
else:
|
||||
IssID = None
|
||||
displayname = soma_sl['displayname'].encode('utf-8')
|
||||
comicname = soma_sl['comicname'].encode('utf-8')
|
||||
ComicID = soma_sl['comicid'] #if it's been scanned in for cvinfo, this will be the CID - otherwise it's None
|
||||
mylar.IMPORT_STATUS = 'Now adding the completed results to the DB.'
|
||||
logger.info('[IMPORT] Parsing/Reading of files completed!')
|
||||
logger.info('[IMPORT] Attempting to import ' + str(int(soma['import_cv_ids'] + soma['import_count'])) + ' files into your watchlist.')
|
||||
logger.info('[IMPORT-BREAKDOWN] Files with ComicIDs successfully extracted: ' + str(soma['import_cv_ids']))
|
||||
logger.info('[IMPORT-BREAKDOWN] Files that had to be parsed: ' + str(soma['import_count']))
|
||||
logger.info('[IMPORT-BREAKDOWN] Files that were unable to be parsed: ' + str(len(soma['failure_list'])))
|
||||
logger.info('[IMPORT-BREAKDOWN] Failure Files: ' + str(soma['failure_list']))
|
||||
|
||||
controlValue = {"impID": soma_sl['impid']}
|
||||
newValue = {"ComicYear": soma_sl['comicyear'],
|
||||
"Status": "Not Imported",
|
||||
"ComicName": comicname,
|
||||
"DisplayName": displayname,
|
||||
"ComicID": ComicID,
|
||||
"IssueID": IssID,
|
||||
"Volume": soma_sl['volume'],
|
||||
"ComicFilename": soma_sl['comfilename'].encode('utf-8'),
|
||||
"ComicLocation": soma_sl['comlocation'].encode('utf-8'),
|
||||
myDB = db.DBConnection()
|
||||
|
||||
#first we do the CV ones.
|
||||
if int(soma['import_cv_ids']) > 0:
|
||||
for i in soma['CV_import_comicids']:
|
||||
#we need to find the impid in the issueid_list as that holds the impid + other info
|
||||
abc = [x for x in soma['issueid_list'] if x['issueid'] == i['IssueID']]
|
||||
ghi = abc[0]['importinfo']
|
||||
|
||||
nspace_dynamicname = re.sub('[\|\s]', '', ghi['dynamicname'].lower()).strip()
|
||||
#these all have related ComicID/IssueID's...just add them as is.
|
||||
controlValue = {"impID": ghi['impid']}
|
||||
newValue = {"Status": "Not Imported",
|
||||
"ComicName": i['ComicName'],
|
||||
"DisplayName": i['ComicName'],
|
||||
"DynamicName": nspace_dynamicname,
|
||||
"ComicID": i['ComicID'],
|
||||
"IssueID": i['IssueID'],
|
||||
"IssueNumber": i['Issue_Number'],
|
||||
"Volume": ghi['volume'],
|
||||
"ComicYear": ghi['comicyear'],
|
||||
"ComicFilename": ghi['comfilename'].decode('utf-8'),
|
||||
"ComicLocation": ghi['comlocation'],
|
||||
"ImportDate": helpers.today(),
|
||||
"WatchMatch": soma_sl['watchmatch']}
|
||||
"WatchMatch": None} #i['watchmatch']}
|
||||
myDB.upsert("importresults", newValue, controlValue)
|
||||
sl+=1
|
||||
|
||||
if int(soma['import_count']) > 0:
|
||||
for ss in soma['import_by_comicids']:
|
||||
nspace_dynamicname = re.sub('[\|\s]', '', ss['dynamicname'].lower()).strip()
|
||||
controlValue = {"impID": ss['impid']}
|
||||
newValue = {"ComicYear": ss['comicyear'],
|
||||
"Status": "Not Imported",
|
||||
"ComicName": ss['comicname'], #.encode('utf-8'),
|
||||
"DisplayName": ss['displayname'], #.encode('utf-8'),
|
||||
"DynamicName": nspace_dynamicname,
|
||||
"ComicID": ss['comicid'], #if it's been scanned in for cvinfo, this will be the CID - otherwise it's None
|
||||
"IssueID": None,
|
||||
"Volume": ss['volume'],
|
||||
"IssueNumber": ss['issuenumber'].decode('utf-8'),
|
||||
"ComicFilename": ss['comfilename'].decode('utf-8'), #ss['comfilename'].encode('utf-8'),
|
||||
"ComicLocation": ss['comlocation'],
|
||||
"ImportDate": helpers.today(),
|
||||
"WatchMatch": ss['watchmatch']}
|
||||
myDB.upsert("importresults", newValue, controlValue)
|
||||
|
||||
# because we could be adding volumes/series that span years, we need to account for this
|
||||
# add the year to the db under the term, valid-years
|
||||
# add the issue to the db under the term, min-issue
|
||||
|
@@ -710,6 +652,7 @@ def scanLibrary(scan=None, queue=None):
# unzip -z filename.cbz will show the comment field of the zip which contains the metadata.

#self.importResults()
mylar.IMPORT_STATUS = 'Import completed.'
valreturn.append({"somevalue": 'self.ie',
                  "result": 'success'})
return queue.put(valreturn)
mylar/mb.py (14 changes)

@@ -137,7 +137,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if not totalResults:
    return False
if int(totalResults) > 1000:
    logger.warn('Search returned more than 1000 hits [' + str(totalResults) + ']. Only displaying first 2000 results - use more specifics or the exact ComicID if required.')
    logger.warn('Search returned more than 1000 hits [' + str(totalResults) + ']. Only displaying first 1000 results - use more specifics or the exact ComicID if required.')
    totalResults = 1000
countResults = 0
while (countResults < int(totalResults)):

@@ -176,7 +176,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
try:
    xmlTag = result.getElementsByTagName('name')[n].firstChild.wholeText
    xmlTag = xmlTag.rstrip()
    logger.fdebug('name: ' + str(xmlTag))
    logger.fdebug('name: ' + xmlTag)
except:
    logger.error('There was a problem retrieving the given data from ComicVine. Ensure that www.comicvine.com is accessible.')
    return

@@ -278,7 +278,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if (result.getElementsByTagName('start_year')[0].firstChild) is not None:
    xmlYr = result.getElementsByTagName('start_year')[0].firstChild.wholeText
else: xmlYr = "0000"
#logger.info('name:' + str(xmlTag) + ' -- ' + str(xmlYr))
logger.info('name:' + xmlTag + ' -- ' + str(xmlYr) + ' [limityear: ' + str(limityear) + ']')
if xmlYr in limityear or limityear == 'None':
    xmlurl = result.getElementsByTagName('site_detail_url')[0].firstChild.wholeText
    idl = len (result.getElementsByTagName('id'))

@@ -293,7 +293,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if xmlid is None:
    logger.error('Unable to figure out the comicid - skipping this : ' + str(xmlurl))
    continue
#logger.info('xmlid: ' + str(xmlid))

publishers = result.getElementsByTagName('publisher')
if len(publishers) > 0:
    pubnames = publishers[0].getElementsByTagName('name')

@@ -303,6 +303,12 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
    xmlpub = "Unknown"
else:
    xmlpub = "Unknown"
logger.info('publisher: ' + xmlpub)
#ignore specific publishers on a global scale here.
if mylar.BLACKLISTED_PUBLISHERS is not None and any([x for x in mylar.BLACKLISTED_PUBLISHERS if x.lower() == xmlpub.lower()]):
    #'panini' in xmlpub.lower() or 'deagostini' in xmlpub.lower() or 'Editorial Televisa' in xmlpub.lower():
    logger.fdebug('Blacklisted publisher [' + xmlpub + ']. Ignoring this result.')
    continue

try:
    xmldesc = result.getElementsByTagName('description')[0].firstChild.wholeText
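The new publisher filter above compares each ComicVine result's publisher, case-insensitively, against whatever list is loaded into mylar.BLACKLISTED_PUBLISHERS. A small self-contained illustration of that comparison (the sample blacklist values are examples only, not defaults):

    # Sketch: case-insensitive publisher blacklist check (sample data only).
    blacklisted_publishers = ['Panini', 'DeAgostini']   # example values

    def is_blacklisted(xmlpub, blacklist):
        return blacklist is not None and any(x.lower() == xmlpub.lower() for x in blacklist)

    print(is_blacklisted('panini', blacklisted_publishers))        # True  -> result skipped
    print(is_blacklisted('Image Comics', blacklisted_publishers))  # False -> result kept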
@@ -15,20 +15,18 @@ def movefiles(comicid, comlocation, ogcname, imported=None):
for impr in impres:
    srcimp = impr['ComicLocation']
    orig_filename = impr['ComicFilename']
    orig_iss = impr['impID'].rfind('-')
    orig_iss = impr['impID'][orig_iss +1:]
    logger.fdebug("Issue :" + str(orig_iss))
    logger.fdebug("Issue :" + impr['IssueNumber'])
    #before moving check to see if Rename to Mylar structure is enabled.
    if mylar.IMP_RENAME and mylar.FILE_FORMAT != '':
        logger.fdebug("Renaming files according to configuration details : " + str(mylar.FILE_FORMAT))
        renameit = helpers.rename_param(comicid, impr['ComicName'], orig_iss, orig_filename)
        renameit = helpers.rename_param(comicid, impr['ComicName'], impr['IssueNumber'], orig_filename)
        nfilename = renameit['nfilename']
        dstimp = os.path.join(comlocation, nfilename)
    else:
        logger.fdebug("Renaming files not enabled, keeping original filename(s)")
        dstimp = os.path.join(comlocation, orig_filename)

    logger.info("moving " + str(srcimp) + " ... to " + str(dstimp))
    logger.info("moving " + srcimp + " ... to " + dstimp)
    try:
        shutil.move(srcimp, dstimp)
    except (OSError, IOError):
@@ -24,7 +24,7 @@ def newpull():
r = requests.get(pagelinks, verify=False)

except Exception, e:
    logger.warn('Error fetching data: %s' % (tmpprov, e))
    logger.warn('Error fetching data: %s' % e)

soup = BeautifulSoup(r.content)
getthedate = soup.findAll("div", {"class": "Headline"})[0]
@ -224,7 +224,10 @@ def torrents(pickfeed=None, seriesname=None, issue=None, feedinfo=None):
|
|||
seeddigits = 0
|
||||
|
||||
if int(mylar.MINSEEDS) >= int(seeddigits):
|
||||
#new releases has it as '&id', notification feeds have it as %ampid (possibly even &id
|
||||
link = feedme.entries[i].link
|
||||
link = re.sub('&','&', link)
|
||||
link = re.sub('&','&', link)
|
||||
linkst = link.find('&id')
|
||||
linken = link.find('&', linkst +1)
|
||||
if linken == -1:
|
||||
|
@ -493,48 +496,47 @@ def torrentdbsearch(seriesname, issue, comicid=None, nzbprov=None):
|
|||
#cache db that have the incorrect entry, we'll adjust.
|
||||
torTITLE = re.sub('&', '&', tor['Title']).strip()
|
||||
|
||||
torsplit = torTITLE.split('/')
|
||||
#torsplit = torTITLE.split(' ')
|
||||
if mylar.PREFERRED_QUALITY == 1:
|
||||
if 'cbr' in torTITLE:
|
||||
logger.fdebug('Quality restriction enforced [ cbr only ]. Accepting result.')
|
||||
else:
|
||||
logger.fdebug('Quality restriction enforced [ cbr only ]. Rejecting result.')
|
||||
continue
|
||||
elif mylar.PREFERRED_QUALITY == 2:
|
||||
if 'cbz' in torTITLE:
|
||||
logger.fdebug('Quality restriction enforced [ cbz only ]. Accepting result.')
|
||||
else:
|
||||
logger.fdebug('Quality restriction enforced [ cbz only ]. Rejecting result.')
|
||||
|
||||
continue
|
||||
logger.fdebug('tor-Title: ' + torTITLE)
|
||||
logger.fdebug('there are ' + str(len(torsplit)) + ' sections in this title')
|
||||
#logger.fdebug('there are ' + str(len(torsplit)) + ' sections in this title')
|
||||
i=0
|
||||
if nzbprov is not None:
|
||||
if nzbprov != tor['Site']:
|
||||
logger.fdebug('this is a result from ' + str(tor['Site']) + ', not the site I am looking for of ' + str(nzbprov))
|
||||
continue
|
||||
#0 holds the title/issue and format-type.
|
||||
ext_check = True # extension checker to enforce cbr/cbz filetype restrictions.
|
||||
while (i < len(torsplit)):
|
||||
#we'll rebuild the string here so that it's formatted accordingly to be passed back to the parser.
|
||||
logger.fdebug('section(' + str(i) + '): ' + torsplit[i])
|
||||
#remove extensions
|
||||
titletemp = torsplit[i]
|
||||
titletemp = re.sub('cbr', '', titletemp)
|
||||
titletemp = re.sub('cbz', '', titletemp)
|
||||
titletemp = re.sub('none', '', titletemp)
|
||||
|
||||
if i == 0:
|
||||
rebuiltline = titletemp
|
||||
else:
|
||||
rebuiltline = rebuiltline + ' (' + titletemp + ')'
|
||||
i+=1
|
||||
|
||||
if ext_check == False:
|
||||
continue
|
||||
logger.fdebug('rebuiltline is :' + rebuiltline)
|
||||
#--- this was for old cbt feeds, no longer used for 32p
|
||||
# while (i < len(torsplit)):
|
||||
# #we'll rebuild the string here so that it's formatted accordingly to be passed back to the parser.
|
||||
# logger.fdebug('section(' + str(i) + '): ' + torsplit[i])
|
||||
# #remove extensions
|
||||
# titletemp = torsplit[i]
|
||||
# titletemp = re.sub('cbr', '', titletemp)
|
||||
# titletemp = re.sub('cbz', '', titletemp)
|
||||
# titletemp = re.sub('none', '', titletemp)
|
||||
|
||||
# if i == 0:
|
||||
# rebuiltline = titletemp
|
||||
# else:
|
||||
# rebuiltline = rebuiltline + ' (' + titletemp + ')'
|
||||
# i+=1
|
||||
# logger.fdebug('rebuiltline is :' + rebuiltline)
|
||||
#----
|
||||
seriesname_mod = seriesname
|
||||
foundname_mod = torsplit[0]
|
||||
foundname_mod = torTITLE #torsplit[0]
|
||||
seriesname_mod = re.sub("\\band\\b", " ", seriesname_mod.lower())
|
||||
foundname_mod = re.sub("\\band\\b", " ", foundname_mod.lower())
|
||||
seriesname_mod = re.sub("\\bthe\\b", " ", seriesname_mod.lower())
|
||||
|
@ -571,24 +573,24 @@ def torrentdbsearch(seriesname, issue, comicid=None, nzbprov=None):
|
|||
extra = ''
|
||||
|
||||
#the title on 32P is a mish-mash of crap...ignore everything after cbz/cbr to clean it
|
||||
ctitle = torTITLE.find('cbr')
|
||||
if ctitle == 0:
|
||||
ctitle = torTITLE.find('cbz')
|
||||
if ctitle == 0:
|
||||
ctitle = torTITLE.find('none')
|
||||
if ctitle == 0:
|
||||
logger.fdebug('cannot determine title properly - ignoring for now.')
|
||||
continue
|
||||
cttitle = torTITLE[:ctitle]
|
||||
#ctitle = torTITLE.find('cbr')
|
||||
#if ctitle == 0:
|
||||
# ctitle = torTITLE.find('cbz')
|
||||
# if ctitle == 0:
|
||||
# ctitle = torTITLE.find('none')
|
||||
# if ctitle == 0:
|
||||
# logger.fdebug('cannot determine title properly - ignoring for now.')
|
||||
# continue
|
||||
#cttitle = torTITLE[:ctitle]
|
||||
|
||||
if tor['Site'] == '32P':
|
||||
st_pub = rebuiltline.find('(')
|
||||
if st_pub < 2 and st_pub != -1:
|
||||
st_end = rebuiltline.find(')')
|
||||
rebuiltline = rebuiltline[st_end +1:]
|
||||
# if tor['Site'] == '32P':
|
||||
# st_pub = rebuiltline.find('(')
|
||||
# if st_pub < 2 and st_pub != -1:
|
||||
# st_end = rebuiltline.find(')')
|
||||
# rebuiltline = rebuiltline[st_end +1:]
|
||||
|
||||
tortheinfo.append({
|
||||
'title': rebuiltline, #cttitle,
|
||||
'title': torTITLE, #cttitle,
|
||||
'link': tor['Link'],
|
||||
'pubdate': tor['Pubdate'],
|
||||
'site': tor['Site'],
|
||||
|
@ -889,6 +891,17 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
|
|||
return "pass"
|
||||
|
||||
elif mylar.TORRENT_SEEDBOX:
|
||||
if mylar.RTORRENT_HOST:
|
||||
import test
|
||||
rp = test.RTorrent()
|
||||
torrent_info = rp.main(filepath=filepath)
|
||||
|
||||
logger.info(torrent_info)
|
||||
if torrent_info:
|
||||
return "pass"
|
||||
else:
|
||||
return "fail"
|
||||
else:
|
||||
tssh = ftpsshup.putfile(filepath, filename)
|
||||
return tssh
|
||||
|
||||
|
|
|
@ -846,6 +846,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
|
|||
|
||||
ctchk = cleantitle.split()
|
||||
ctchk_indexes = []
|
||||
origvol = None
|
||||
volfound = False
|
||||
vol_nono = []
|
||||
new_cleantitle = []
|
||||
|
@ -870,6 +871,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
|
|||
tmpsplit = ct
|
||||
if tmpsplit.lower().startswith('vol'):
|
||||
logger.fdebug('volume detected - stripping and re-analyzing for volume label.')
|
||||
origvol = tmpsplit
|
||||
if '.' in tmpsplit:
|
||||
tmpsplit = re.sub('\.', '', tmpsplit).strip()  #escape the dot so only periods are stripped, not every character
|
||||
tmpsplit = re.sub('vol', '', tmpsplit.lower()).strip()
|
||||
|
@ -912,6 +914,8 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
|
|||
continue
|
||||
|
||||
if fndcomicversion:
|
||||
cleantitle = re.sub(origvol, '', cleantitle).strip()
|
||||
logger.fdebug('Newly finished reformed cleantitle (with NO volume label): ' + cleantitle)
|
||||
versionfound = "yes"
|
||||
break
|
||||
|
||||
|
|
|
@ -0,0 +1,75 @@
|
|||
import os
|
||||
import sys
|
||||
import re
|
||||
import time
|
||||
import shutil
|
||||
import traceback
|
||||
from base64 import b16encode, b32decode
|
||||
|
||||
from torrent.helpers.variable import link, symlink, is_rarfile
|
||||
|
||||
import lib.requests as requests
|
||||
from lib.unrar2 import RarFile
|
||||
|
||||
import torrent.clients.rtorrent as TorClient
|
||||
|
||||
import mylar
|
||||
from mylar import logger, helpers
|
||||
|
||||
class RTorrent(object):
|
||||
def __init__(self):
|
||||
self.client = TorClient.TorrentClient()
|
||||
if not self.client.connect(mylar.RTORRENT_HOST,
|
||||
mylar.RTORRENT_USERNAME,
|
||||
mylar.RTORRENT_PASSWORD):
|
||||
logger.error('could not connect to %s, exiting', mylar.RTORRENT_HOST)
|
||||
sys.exit(-1)
|
||||
|
||||
def main(self, torrent_hash=None, filepath=None):
|
||||
|
||||
torrent = self.client.find_torrent(torrent_hash)
|
||||
if torrent:
|
||||
logger.warn("%s Torrent already exists. Not downloading at this time.", torrent_hash)
|
||||
return
|
||||
|
||||
if filepath:
|
||||
loadit = self.client.load_torrent(filepath)
|
||||
if loadit:
|
||||
torrent_hash = self.get_the_hash(filepath)
|
||||
else:
|
||||
return
|
||||
|
||||
torrent = self.client.find_torrent(torrent_hash)
|
||||
if torrent is None:
|
||||
logger.warn("Couldn't find torrent with hash: %s", torrent_hash)
|
||||
sys.exit(-1)
|
||||
|
||||
torrent_info = self.client.get_torrent(torrent)
|
||||
if torrent_info['completed']:
|
||||
logger.info("Directory: %s", torrent_info['folder'])
|
||||
logger.info("Name: %s", torrent_info['name'])
|
||||
logger.info("FileSize: %s", helpers.human_size(torrent_info['total_filesize']))
|
||||
logger.info("Completed: %s", torrent_info['completed'])
|
||||
logger.info("Downloaded: %s", helpers.human_size(torrent_info['download_total']))
|
||||
logger.info("Uploaded: %s", helpers.human_size(torrent_info['upload_total']))
|
||||
logger.info("Ratio: %s", torrent_info['ratio'])
|
||||
#logger.info("Time Started: %s", torrent_info['time_started'])
|
||||
logger.info("Seeding Time: %s", helpers.humanize_time(int(time.time()) - torrent_info['time_started']))
|
||||
|
||||
if torrent_info['label']:
|
||||
logger.info("Torrent Label: %s", torrent_info['label'])
|
||||
|
||||
logger.info(torrent_info)
|
||||
return torrent_info
|
||||
|
||||
def get_the_hash(self, filepath):
|
||||
import hashlib, StringIO
|
||||
import lib.rtorrent.lib.bencode as bencode
|
||||
|
||||
# Open torrent file
|
||||
torrent_file = open(filepath, "rb")
|
||||
metainfo = bencode.decode(torrent_file.read())
|
||||
info = metainfo['info']
|
||||
thehash = hashlib.sha1(bencode.encode(info)).hexdigest().upper()
|
||||
logger.info('Hash: ' + thehash)
|
||||
return thehash
|
|
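A minimal stand-alone sketch of what get_the_hash() above computes, using the same bundled bencode module; the torrent path is a placeholder and the snippet assumes it runs from Mylar's root so the lib.* import resolves.

import hashlib

import lib.rtorrent.lib.bencode as bencode  # same module get_the_hash() imports

def info_hash(torrent_path):
    # the hash rtorrent tracks is the SHA1 of the bencoded 'info' dict, uppercased
    with open(torrent_path, 'rb') as fp:
        metainfo = bencode.decode(fp.read())
    return hashlib.sha1(bencode.encode(metainfo['info'])).hexdigest().upper()

print info_hash('/tmp/example.torrent')  # placeholder path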
@ -0,0 +1,116 @@
|
|||
import os
|
||||
|
||||
from lib.rtorrent import RTorrent
|
||||
|
||||
import mylar
|
||||
from mylar import logger, helpers
|
||||
|
||||
class TorrentClient(object):
|
||||
def __init__(self):
|
||||
self.conn = None
|
||||
|
||||
def connect(self, host, username, password):
|
||||
if self.conn is not None:
|
||||
return self.conn
|
||||
|
||||
if not host:
|
||||
return False
|
||||
|
||||
if username and password:
|
||||
self.conn = RTorrent(
|
||||
host,
|
||||
username,
|
||||
password
|
||||
)
|
||||
else:
|
||||
self.conn = RTorrent(host)
|
||||
|
||||
return self.conn
|
||||
|
||||
def find_torrent(self, hash):
|
||||
return self.conn.find_torrent(hash)
|
||||
|
||||
def get_torrent (self, torrent):
|
||||
torrent_files = []
|
||||
torrent_directory = os.path.normpath(torrent.directory)
|
||||
try:
|
||||
for f in torrent.get_files():
|
||||
if not os.path.normpath(f.path).startswith(torrent_directory):
|
||||
file_path = os.path.join(torrent_directory, f.path.lstrip('/'))
|
||||
else:
|
||||
file_path = f.path
|
||||
|
||||
torrent_files.append(file_path)
|
||||
torrent_info = {
|
||||
'hash': torrent.info_hash,
|
||||
'name': torrent.name,
|
||||
'label': torrent.get_custom1() if torrent.get_custom1() else '',
|
||||
'folder': torrent_directory,
|
||||
'completed': torrent.complete,
|
||||
'files': torrent_files,
|
||||
'upload_total': torrent.get_up_total(),
|
||||
'download_total': torrent.get_down_total(),
|
||||
'ratio': torrent.get_ratio(),
|
||||
'total_filesize': torrent.get_size_bytes(),
|
||||
'time_started': torrent.get_time_started()
|
||||
}
|
||||
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
return torrent_info if torrent_info else False
|
||||
|
||||
def load_torrent(self, filepath):
|
||||
start = bool(mylar.RTORRENT_STARTONLOAD)
|
||||
|
||||
logger.info('filepath to torrent file set to : ' + filepath)
|
||||
|
||||
torrent = self.conn.load_torrent(filepath, verify_load=True)
|
||||
if not torrent:
|
||||
return False
|
||||
|
||||
if mylar.RTORRENT_LABEL:
|
||||
torrent.set_custom(1, mylar.RTORRENT_LABEL)
|
||||
logger.info('Setting label for torrent to : ' + mylar.RTORRENT_LABEL)
|
||||
|
||||
if mylar.RTORRENT_DIRECTORY:
|
||||
torrent.set_directory(mylar.RTORRENT_DIRECTORY)
|
||||
logger.info('Setting directory for torrent to : ' + mylar.RTORRENT_DIRECTORY)
|
||||
|
||||
logger.info('Successfully loaded torrent.')
|
||||
|
||||
#note that if set_directory is enabled, the torrent has to be started AFTER it's loaded or else it will give chunk errors and not seed
|
||||
if start:
|
||||
logger.info('[' + str(start) + '] Now starting torrent.')
|
||||
torrent.start()
|
||||
else:
|
||||
logger.info('[' + str(start) + '] Not starting torrent due to configuration setting.')
|
||||
return True
|
||||
|
||||
def start_torrent(self, torrent):
|
||||
return torrent.start()
|
||||
|
||||
def stop_torrent(self, torrent):
|
||||
return torrent.stop()
|
||||
|
||||
def delete_torrent(self, torrent):
|
||||
deleted = []
|
||||
try:
|
||||
for file_item in torrent.get_files():
|
||||
file_path = os.path.join(torrent.directory, file_item.path)
|
||||
os.unlink(file_path)
|
||||
deleted.append(file_item.path)
|
||||
|
||||
if torrent.is_multi_file() and torrent.directory.endswith(torrent.name):
|
||||
try:
|
||||
for path, _, _ in os.walk(torrent.directory, topdown=False):
|
||||
os.rmdir(path)
|
||||
deleted.append(path)
|
||||
except:
|
||||
pass
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
torrent.erase()
|
||||
|
||||
return deleted
|
|
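A hedged usage sketch of the TorrentClient wrapper above, mirroring how the torsend2client() branch earlier in this commit drives it; the module path and the .torrent path are assumptions based on the imports shown in this diff.

import mylar
from torrent.clients.rtorrent import TorrentClient  # path assumed from 'import torrent.clients.rtorrent' above

client = TorrentClient()
# connect() returns the live RTorrent connection (or False when no host is configured)
if client.connect(mylar.RTORRENT_HOST, mylar.RTORRENT_USERNAME, mylar.RTORRENT_PASSWORD):
    # load_torrent() applies RTORRENT_LABEL / RTORRENT_DIRECTORY and optionally starts the torrent
    client.load_torrent('/tmp/example.torrent')  # placeholder path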
@ -0,0 +1,96 @@
|
|||
import os
|
||||
|
||||
from libs.utorrent.client import UTorrentClient
|
||||
|
||||
# Only compatible with uTorrent 3.0+
|
||||
|
||||
class TorrentClient(object):
|
||||
def __init__(self):
|
||||
self.conn = None
|
||||
|
||||
def connect(self, host, username, password):
|
||||
if self.conn is not None:
|
||||
return self.conn
|
||||
|
||||
if not host:
|
||||
return False
|
||||
|
||||
if username and password:
|
||||
self.conn = UTorrentClient(
|
||||
host,
|
||||
username,
|
||||
password
|
||||
)
|
||||
else:
|
||||
self.conn = UTorrentClient(host)
|
||||
|
||||
return self.conn
|
||||
|
||||
def find_torrent(self, hash):
|
||||
torrent = False  #guard against an unbound name if no entry matches the hash below
try:
|
||||
torrent_list = self.conn.list()[1]
|
||||
|
||||
for t in torrent_list['torrents']:
|
||||
if t[0] == hash:
|
||||
torrent = t
|
||||
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
return torrent if torrent else False
|
||||
|
||||
def get_torrent(self, torrent):
|
||||
if not torrent[26]:
|
||||
raise Exception('Only compatible with uTorrent 3.0+')
|
||||
|
||||
torrent_files = []
|
||||
torrent_completed = False
|
||||
torrent_directory = os.path.normpath(torrent[26])
|
||||
try:
|
||||
|
||||
if torrent[4] == 1000:
|
||||
torrent_completed = True
|
||||
|
||||
files = self.conn.getfiles(torrent[0])[1]['files'][1]
|
||||
|
||||
for f in files:
|
||||
if not os.path.normpath(f[0]).startswith(torrent_directory):
|
||||
file_path = os.path.join(torrent_directory, f[0].lstrip('/'))
|
||||
else:
|
||||
file_path = f[0]
|
||||
|
||||
torrent_files.append(file_path)
|
||||
|
||||
torrent_info = {
|
||||
'hash': torrent[0],
|
||||
'name': torrent[2],
|
||||
'label': torrent[11] if torrent[11] else '',
|
||||
'folder': torrent[26],
|
||||
'completed': torrent_completed,
|
||||
'files': torrent_files,
|
||||
}
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
return torrent_info
|
||||
|
||||
def start_torrent(self, torrent_hash):
|
||||
return self.conn.start(torrent_hash)
|
||||
|
||||
def stop_torrent(self, torrent_hash):
|
||||
return self.conn.stop(torrent_hash)
|
||||
|
||||
def delete_torrent(self, torrent):
|
||||
deleted = []
|
||||
try:
|
||||
files = self.conn.getfiles(torrent[0])[1]['files'][1]
|
||||
|
||||
for f in files:
|
||||
deleted.append(os.path.normpath(os.path.join(torrent[26], f[0])))
|
||||
|
||||
self.conn.removedata(torrent[0])
|
||||
|
||||
except Exception:
|
||||
raise
|
||||
|
||||
return deleted
|
|
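The uTorrent wrapper above reads positional fields out of the WebUI torrent list (the code treats index 0 as the hash, index 4 reaching 1000 as completed, 11 as the label and 26 as the download directory, hence the 3.0+ requirement). A hedged usage sketch, with host, credentials and hash as placeholders:

from torrent.clients.utorrent import TorrentClient  # module path assumed, mirroring the rtorrent client

ut = TorrentClient()
if ut.connect('http://localhost:8080/gui', 'admin', 'secret'):  # placeholder WebUI endpoint
    t = ut.find_torrent('0123456789ABCDEF0123456789ABCDEF01234567')  # placeholder info-hash
    if t:
        print ut.get_torrent(t)['folder']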
@ -0,0 +1,31 @@
|
|||
import os
|
||||
|
||||
def link(src, dst):
|
||||
if os.name == 'nt':
|
||||
import ctypes
|
||||
if ctypes.windll.kernel32.CreateHardLinkW(unicode(dst), unicode(src), 0) == 0: raise ctypes.WinError()
|
||||
else:
|
||||
os.link(src, dst)
|
||||
|
||||
def symlink(src, dst):
|
||||
if os.name == 'nt':
|
||||
import ctypes
|
||||
if ctypes.windll.kernel32.CreateSymbolicLinkW(unicode(dst), unicode(src), 1 if os.path.isdir(src) else 0) in [0, 1280]: raise ctypes.WinError()
|
||||
else:
|
||||
os.symlink(src, dst)
|
||||
|
||||
def is_rarfile(f):
|
||||
import binascii
|
||||
|
||||
with open(f, "rb") as f:
|
||||
byte = f.read(12)
|
||||
|
||||
spanned = binascii.hexlify(byte[10])
|
||||
main = binascii.hexlify(byte[11])
|
||||
|
||||
if spanned == "01" and main == "01": # main rar archive in a set of archives
|
||||
return True
|
||||
elif spanned == "00" and main == "00": # single rar
|
||||
return True
|
||||
|
||||
return False
|
|
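is_rarfile() above keys on the two flag bytes at offsets 10-11 of the archive header to tell a single RAR from the first volume of a spanned set. A hedged stand-alone sketch that additionally checks the RAR 4.x magic bytes ('Rar!\x1a\x07\x00', the format cbr files normally use) before trusting those offsets; the path is a placeholder.

import binascii

RAR4_MAGIC = 'Rar!\x1a\x07\x00'  # 7-byte RAR 4.x signature

def looks_like_rar(path):
    with open(path, 'rb') as fh:
        header = fh.read(12)
    if not header.startswith(RAR4_MAGIC):
        return False
    spanned = binascii.hexlify(header[10])  # same offsets is_rarfile() inspects
    main = binascii.hexlify(header[11])
    return (spanned, main) in (('00', '00'), ('01', '01'))

print looks_like_rar('/tmp/example.cbr')  # placeholder path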
@ -780,7 +780,9 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
logger.info(module + ' Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in ' + rescan['ComicLocation'])
|
||||
fca = []
|
||||
if archive is None:
|
||||
tmpval = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
tval = filechecker.FileChecker(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
tmpval = tval.listFiles()
|
||||
#tmpval = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
comiccnt = int(tmpval['comiccount'])
|
||||
logger.fdebug(module + 'comiccnt is:' + str(comiccnt))
|
||||
fca.append(tmpval)
|
||||
|
@ -790,12 +792,16 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
logger.fdebug(module + 'os.path.basename: ' + os.path.basename(rescan['ComicLocation']))
|
||||
pathdir = os.path.join(mylar.MULTIPLE_DEST_DIRS, os.path.basename(rescan['ComicLocation']))
|
||||
logger.info(module + ' Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in :' + pathdir)
|
||||
tmpv = filechecker.listFiles(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
mvals = filechecker.FileChecker(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
tmpv = mvals.listFiles()
|
||||
#tmpv = filechecker.listFiles(dir=pathdir, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=altnames)
|
||||
logger.fdebug(module + 'tmpv filecount: ' + str(tmpv['comiccount']))
|
||||
comiccnt += int(tmpv['comiccount'])
|
||||
fca.append(tmpv)
|
||||
else:
|
||||
files_arc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
|
||||
# files_arc = filechecker.listFiles(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
|
||||
arcval = filechecker.FileChecker(dir=archive, watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
|
||||
files_arc = arcval.listFiles()
|
||||
fca.append(files_arc)
|
||||
comiccnt = int(files_arc['comiccount'])
|
||||
fcb = []
|
||||
|
@ -1001,7 +1007,7 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
logger.fdebug(module + ' Matched...issue: ' + rescan['ComicName'] + '#' + reiss['Issue_Number'] + ' --- ' + str(int_iss))
|
||||
havefiles+=1
|
||||
haveissue = "yes"
|
||||
isslocation = tmpfc['ComicFilename']
|
||||
isslocation = tmpfc['ComicFilename'].decode('utf-8')
|
||||
issSize = str(tmpfc['ComicSize'])
|
||||
logger.fdebug(module + ' .......filename: ' + isslocation)
|
||||
logger.fdebug(module + ' .......filesize: ' + str(tmpfc['ComicSize']))
|
||||
|
@ -1141,9 +1147,9 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
logger.fdebug(module + ' Matched...annual issue: ' + rescan['ComicName'] + '#' + str(reann['Issue_Number']) + ' --- ' + str(int_iss))
|
||||
havefiles+=1
|
||||
haveissue = "yes"
|
||||
isslocation = str(tmpfc['ComicFilename'])
|
||||
isslocation = tmpfc['ComicFilename'].decode('utf-8')
|
||||
issSize = str(tmpfc['ComicSize'])
|
||||
logger.fdebug(module + ' .......filename: ' + str(isslocation))
|
||||
logger.fdebug(module + ' .......filename: ' + isslocation)
|
||||
logger.fdebug(module + ' .......filesize: ' + str(tmpfc['ComicSize']))
|
||||
# to avoid duplicate issues which screws up the count...let's store the filename issues then
|
||||
# compare earlier...
|
||||
|
@ -1215,6 +1221,7 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
myDB.upsert("annuals", newValueDict, controlValueDict)
|
||||
ANNComicID = None
|
||||
else:
|
||||
logger.fdebug(newValueDict)
|
||||
#issID_to_write.append({"tableName": "issues",
|
||||
# "valueDict": newValueDict,
|
||||
# "keyDict": controlValueDict})
|
||||
|
@ -1226,7 +1233,7 @@ def forceRescan(ComicID, archive=None, module=None):
|
|||
# logger.info('writing ' + str(iss))
|
||||
# writethis = myDB.upsert(iss['tableName'], iss['valueDict'], iss['keyDict'])
|
||||
|
||||
logger.fdebug(module + ' IssueID to ignore: ' + str(issID_to_ignore))
|
||||
#logger.fdebug(module + ' IssueID to ignore: ' + str(issID_to_ignore))
|
||||
|
||||
#here we need to change the status of the ones we DIDN'T FIND above since the loop only hits on FOUND issues.
|
||||
update_iss = []
|
||||
|
|
|
@ -1,4 +1,6 @@
|
|||
|
||||
# This file is part of Mylar.
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# Mylar is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
|
@ -16,6 +18,7 @@
|
|||
from __future__ import with_statement
|
||||
|
||||
import os
|
||||
import sys
|
||||
import cherrypy
|
||||
import datetime
|
||||
import re
|
||||
|
@ -214,7 +217,11 @@ class WebInterface(object):
|
|||
if issue == 0:
|
||||
#if it's an issue 0, CV doesn't have any data populated yet - so bump it up one to at least get the current results.
|
||||
issue = 1
|
||||
try:
|
||||
searchresults, explicit = mb.findComic(name, mode, issue=issue)
|
||||
except TypeError:
|
||||
logger.error('Unable to perform required pull-list search for : [name: ' + name + '][issue: ' + issue + '][mode: ' + mode + ']')
|
||||
return
|
||||
elif type == 'comic' and mode == 'series':
|
||||
if name.startswith('4050-'):
|
||||
mismatch = "no"
|
||||
|
@ -222,11 +229,23 @@ class WebInterface(object):
|
|||
logger.info('Attempting to add directly by ComicVineID: ' + str(comicid) + '. I sure hope you know what you are doing.')
|
||||
threading.Thread(target=importer.addComictoDB, args=[comicid, mismatch, None]).start()
|
||||
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
|
||||
try:
|
||||
searchresults, explicit = mb.findComic(name, mode, issue=None, explicit=explicit)
|
||||
except TypeError:
|
||||
logger.error('Unable to perform required pull-list search for : [name: ' + name + '][issue: ' + issue + '][mode: ' + mode + '][explicitsearch:' + explicit + ']')
|
||||
return
|
||||
elif type == 'comic' and mode == 'want':
|
||||
try:
|
||||
searchresults, explicit = mb.findComic(name, mode, issue)
|
||||
except TypeError:
|
||||
logger.error('Unable to perform required one-off pull-list search for : [name: ' + name + '][issue: ' + issue + '][mode: ' + mode + ']')
|
||||
return
|
||||
elif type == 'story_arc':
|
||||
try:
|
||||
searchresults, explicit = mb.findComic(name, mode=None, issue=None, explicit='explicit', type='story_arc')
|
||||
except TypeError:
|
||||
logger.error('Unable to perform required story-arc search for : [arc: ' + name + '][mode: ' + mode + '][explicitsearch: explicit]')
|
||||
return
|
||||
|
||||
searchresults = sorted(searchresults, key=itemgetter('comicyear', 'issues'), reverse=True)
|
||||
#print ("Results: " + str(searchresults))
|
||||
|
@ -1955,23 +1974,62 @@ class WebInterface(object):
|
|||
myDB = db.DBConnection()
|
||||
comicstoimport = []
|
||||
if action == 'massimport':
|
||||
logger.info('initiating mass import.')
|
||||
cnames = myDB.select("SELECT ComicName from importresults WHERE Status='Not Imported' GROUP BY ComicName")
|
||||
logger.info('Initiating mass import.')
|
||||
cnames = myDB.select("SELECT ComicName, ComicID, Volume, DynamicName from importresults WHERE Status='Not Imported' GROUP BY DynamicName, Volume")
|
||||
for cname in cnames:
|
||||
comicstoimport.append(cname['ComicName'].decode('utf-8', 'replace'))
|
||||
if cname['ComicID']:
|
||||
comicid = cname['ComicID']
|
||||
else:
|
||||
comicid = None
|
||||
comicstoimport.append({'ComicName': cname['ComicName'].decode('utf-8', 'replace'),
|
||||
'DynamicName': cname['DynamicName'],
|
||||
'Volume': cname['Volume'],
|
||||
'ComicID': comicid})
|
||||
logger.info(str(len(comicstoimport)) + ' series will be attempted to be imported.')
|
||||
else:
|
||||
for ComicName in args:
|
||||
if action == 'importselected':
|
||||
logger.info("initiating mass import mode for " + ComicName)
|
||||
logger.info('importing selected series.')
|
||||
logger.info(args)
|
||||
for k,v in args.items():
|
||||
#k = Comicname[Volume]
|
||||
#v = DynamicName
|
||||
logger.info('k: ' + k)
|
||||
logger.info('v: ' + v)
|
||||
Volst = k.find('[')
|
||||
logger.info('volst: ' + str(Volst))
|
||||
volume = re.sub('[\[\]]', '', k[Volst:]).strip()
|
||||
logger.info('volume: ' + str(volume))
|
||||
ComicName = k[:Volst].strip()
|
||||
logger.info('comicname: ' + ComicName)
|
||||
DynamicName = v
|
||||
logger.info('dynamicname: ' + DynamicName)
|
||||
cid = ComicName.decode('utf-8', 'replace')
|
||||
comicstoimport.append(cid)
|
||||
logger.info('cid: ' + cid)
|
||||
comicstoimport.append({'ComicName': cid,
|
||||
'DynamicName': DynamicName,
|
||||
'Volume': volume,
|
||||
'ComicID': None})
|
||||
|
||||
elif action == 'removeimport':
|
||||
logger.info("removing " + ComicName + " from the Import list")
|
||||
myDB.action('DELETE from importresults WHERE ComicName=?', [ComicName])
|
||||
for k,v in args.items():
|
||||
logger.info('k: ' + k)
|
||||
logger.info('v: ' + v)
|
||||
Volst = k.find('[')
|
||||
volume = re.sub('[\[\]]', '', k[Volst:]).strip()
|
||||
ComicName = k[:Volst].strip()
|
||||
DynamicName = v
|
||||
if volume is None:
|
||||
logger.info('Removing ' + ComicName + ' from the Import list')
|
||||
myDB.action('DELETE from importresults WHERE DynamicName=? AND Volume is NULL', [DynamicName])
|
||||
else:
|
||||
logger.info('Removing ' + ComicName + ' [' + str(volume) + '] from the Import list')
|
||||
myDB.action('DELETE from importresults WHERE DynamicName=? AND Volume=?', [DynamicName, volume])
|
||||
|
||||
if len(comicstoimport) > 0:
|
||||
logger.debug("Mass importing the following series: %s" % comicstoimport)
|
||||
logger.info('Initiating selected import mode for ' + str(len(comicstoimport)) + ' series.')
|
||||
|
||||
if len(comicstoimport) > 0:
|
||||
logger.debug('The following series will now be attempted to be imported: %s' % comicstoimport)
|
||||
threading.Thread(target=self.preSearchit, args=[None, comicstoimport, len(comicstoimport)]).start()
|
||||
raise cherrypy.HTTPRedirect("importResults")
|
||||
|
||||
|
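A hedged stand-alone sketch of the key parsing the 'importselected' branch above performs: each selected series posts a 'ComicName [volume]' style key, and the bracketed part is split off as the volume (names below are illustrative).

import re

def split_series_key(key):
    # 'Invincible [v1]' -> ('Invincible', 'v1'); no brackets means no volume
    start = key.find('[')
    if start == -1:
        return key.strip(), None
    volume = re.sub(r'[\[\]]', '', key[start:]).strip()
    return key[:start].strip(), volume

print split_series_key('Invincible [v1]')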
@ -2085,6 +2143,9 @@ class WebInterface(object):
|
|||
lowyear = 9999
|
||||
maxyear = 0
|
||||
for la in totalcnt:
|
||||
if la['IssueDate'] is None:
|
||||
continue
|
||||
else:
|
||||
if int(la['IssueDate'][:4]) > maxyear:
|
||||
maxyear = int(la['IssueDate'][:4])
|
||||
if int(la['IssueDate'][:4]) < lowyear:
|
||||
|
@ -2392,7 +2453,6 @@ class WebInterface(object):
|
|||
if ArcWatch is None:
|
||||
logger.info("No Story Arcs to search")
|
||||
else:
|
||||
Comics = myDB.select("SELECT * FROM comics")
|
||||
|
||||
arc_match = []
|
||||
wantedlist = []
|
||||
|
@ -2421,17 +2481,16 @@ class WebInterface(object):
|
|||
sarc_title = arc['StoryArc']
|
||||
logger.fdebug("arc: " + arc['StoryArc'] + " : " + arc['ComicName'] + " : " + arc['IssueNumber'])
|
||||
|
||||
mod_arc = re.sub('[\:/,\'\/\-\&\%\$\#\@\!\*\+\.]', '', arc['ComicName'])
|
||||
mod_arc = re.sub('\\bthe\\b', '', mod_arc.lower())
|
||||
mod_arc = re.sub('\\band\\b', '', mod_arc.lower())
|
||||
mod_arc = re.sub(r'\s', '', mod_arc)
|
||||
matcheroso = "no"
|
||||
for comic in Comics:
|
||||
#logger.fdebug("comic: " + comic['ComicName'])
|
||||
mod_watch = re.sub('[\:\,\'\/\-\&\%\$\#\@\!\*\+\.]', '', comic['ComicName'])
|
||||
mod_watch = re.sub('\\bthe\\b', '', mod_watch.lower())
|
||||
mod_watch = re.sub('\\band\\b', '', mod_watch.lower())
|
||||
mod_watch = re.sub(r'\s', '', mod_watch)
|
||||
mod_seriesname = '%' + re.sub(' ', '%', arc['ComicName']).strip() + '%'
|
||||
comics = myDB.select('SELECT * FROM comics Where ComicName LIKE ?', [mod_seriesname])
|
||||
|
||||
for comic in comics:
|
||||
fc = filechecker.FileChecker(watchcomic=arc['ComicName'])
|
||||
modi_names = fc.dynamic_replace(comic['ComicName'])
|
||||
mod_arc = modi_names['mod_watchcomic'] #is from the arc db
|
||||
mod_watch = modi_names['mod_seriesname'] #is from the comics db
|
||||
|
||||
if mod_watch == mod_arc:# and arc['SeriesYear'] == comic['ComicYear']:
|
||||
logger.fdebug("initial name match - confirming issue # is present in series")
|
||||
if comic['ComicID'][:1] == 'G':
|
||||
|
@ -2483,18 +2542,15 @@ class WebInterface(object):
|
|||
|
||||
logger.fdebug('destination location set to : ' + dstloc)
|
||||
|
||||
filechk = filechecker.listFiles(dstloc, arc['ComicName'], Publisher=None, sarc='true')
|
||||
fchk = filechecker.FileChecker(dir=dstloc, watchcomic=arc['ComicName'], Publisher=None, sarc='true', justparse=True)
|
||||
filechk = fchk.listFiles()
|
||||
fn = 0
|
||||
fccnt = filechk['comiccount']
|
||||
logger.fdebug('files in directory: ' + str(fccnt))
|
||||
while (fn < fccnt) and fccnt != 0:
|
||||
for tmpfc in filechk['comiclist']:
|
||||
haveissue = "no"
|
||||
issuedupe = "no"
|
||||
try:
|
||||
tmpfc = filechk['comiclist'][fn]
|
||||
except IndexError:
|
||||
break
|
||||
temploc = tmpfc['JusttheDigits'].replace('_', ' ')
|
||||
temploc = tmpfc['issue_number'].replace('_', ' ')
|
||||
fcdigit = helpers.issuedigits(arc['IssueNumber'])
|
||||
int_iss = helpers.issuedigits(temploc)
|
||||
if int_iss == fcdigit:
|
||||
|
@ -2502,9 +2558,9 @@ class WebInterface(object):
|
|||
#update readinglist db to reflect status.
|
||||
if mylar.READ2FILENAME:
|
||||
readorder = helpers.renamefile_readingorder(arc['ReadingOrder'])
|
||||
dfilename = str(readorder) + "-" + tmpfc['ComicFilename']
|
||||
dfilename = str(readorder) + "-" + tmpfc['comicfilename']
|
||||
else:
|
||||
dfilename = tmpfc['ComicFilename']
|
||||
dfilename = tmpfc['comicfilename']
|
||||
|
||||
newVal = {"Status": "Downloaded",
|
||||
"Location": dfilename} #tmpfc['ComicFilename']}
|
||||
|
@ -2726,24 +2782,6 @@ class WebInterface(object):
|
|||
|
||||
ReadMassCopy.exposed = True
|
||||
|
||||
def importLog(self, ComicName, SRID=None):
|
||||
myDB = db.DBConnection()
|
||||
impchk = None
|
||||
if SRID != 'None':
|
||||
impchk = myDB.selectone("SELECT * FROM importresults WHERE SRID=?", [SRID]).fetchone()
|
||||
if impchk is None:
|
||||
logger.error('No associated log found for this ID : ' + SRID)
|
||||
if impchk is None:
|
||||
impchk = myDB.selectone("SELECT * FROM importresults WHERE ComicName=?", [ComicName]).fetchone()
|
||||
if impchk is None:
|
||||
logger.error('No associated log found for this ComicName : ' + ComicName)
|
||||
return
|
||||
|
||||
implog = impchk['implog'].replace("\n", "<br />\n")
|
||||
return implog
|
||||
# return serve_template(templatename="importlog.html", title="Log", implog=implog)
|
||||
importLog.exposed = True
|
||||
|
||||
def logs(self):
|
||||
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST)
|
||||
logs.exposed = True
|
||||
|
@ -2937,7 +2975,12 @@ class WebInterface(object):
|
|||
return serve_template(templatename="searchresults.html", title='Import Results for: "' + comicname + '"', searchresults=sresults, type=type, imported='confirm', ogcname=comicid, explicit=explicit)
|
||||
confirmResult.exposed = True
|
||||
|
||||
def comicScan(self, path, scan=0, libraryscan=0, redirect=None, autoadd=0, imp_move=0, imp_rename=0, imp_metadata=0):
|
||||
def Check_ImportStatus(self):
|
||||
logger.info('import_status: ' + mylar.IMPORT_STATUS)
|
||||
return mylar.IMPORT_STATUS
|
||||
Check_ImportStatus.exposed = True
|
||||
|
||||
def comicScan(self, path, scan=0, libraryscan=0, redirect=None, autoadd=0, imp_move=0, imp_rename=0, imp_metadata=0, forcescan=0):
|
||||
import Queue
|
||||
queue = Queue.Queue()
|
||||
|
||||
|
@ -2948,13 +2991,23 @@ class WebInterface(object):
|
|||
mylar.IMP_RENAME = imp_rename
|
||||
mylar.IMP_METADATA = imp_metadata
|
||||
mylar.config_write()
|
||||
|
||||
logger.info('forcescan is: ' + str(forcescan))
|
||||
if mylar.IMPORTLOCK and forcescan == 1:
|
||||
logger.info('Removing current lock on import - if you do this AND another process is legitimately running, you are causing your own problems.')
|
||||
mylar.IMPORTLOCK = False
|
||||
|
||||
#thread the scan.
|
||||
if scan == '1':
|
||||
scan = True
|
||||
mylar.IMPORT_STATUS = 'Now starting the import'
|
||||
return self.ThreadcomicScan(scan, queue)
|
||||
else:
|
||||
scan = False
|
||||
return
|
||||
comicScan.exposed = True
|
||||
|
||||
def ThreadcomicScan(self, scan, queue):
|
||||
thread_ = threading.Thread(target=librarysync.scanLibrary, name="LibraryScan", args=[scan, queue])
|
||||
thread_.start()
|
||||
thread_.join()
|
||||
|
@ -2964,41 +3017,95 @@ class WebInterface(object):
|
|||
yield chk[0]['result']
|
||||
logger.info('Successfully scanned in directory. Enabling the importResults button now.')
|
||||
mylar.IMPORTBUTTON = True #globally set it to ON after the scan so that it will be picked up.
|
||||
mylar.IMPORT_STATUS = 'Import completed.'
|
||||
break
|
||||
return
|
||||
comicScan.exposed = True
|
||||
ThreadcomicScan.exposed = True
|
||||
|
||||
def importResults(self):
|
||||
myDB = db.DBConnection()
|
||||
results = myDB.select("SELECT * FROM importresults WHERE WatchMatch is Null OR WatchMatch LIKE 'C%' group by ComicName COLLATE NOCASE")
|
||||
results = myDB.select("SELECT * FROM importresults WHERE WatchMatch is Null OR WatchMatch LIKE 'C%' group by DynamicName, Volume, Status COLLATE NOCASE")
|
||||
#this is to get the count of issues;
|
||||
res = []
|
||||
countit = []
|
||||
ann_cnt = 0
|
||||
for result in results:
|
||||
res.append(result)
|
||||
for x in res:
|
||||
countthis = myDB.select("SELECT count(*) FROM importresults WHERE ComicName=?", [x['ComicName']])
|
||||
countit.append({"ComicName": x['ComicName'],
|
||||
"IssueCount": countthis[0][0]})
|
||||
for ct in countit:
|
||||
ctrlVal = {"ComicName": ct['ComicName']}
|
||||
newVal = {"IssueCount": ct['IssueCount']}
|
||||
myDB.upsert("importresults", newVal, ctrlVal)
|
||||
if x['Volume']:
|
||||
#because Volume gets stored as NULL in the db, we need to account for it coming into here as a possible None value.
|
||||
countthis = myDB.select("SELECT count(*) FROM importresults WHERE DynamicName=? AND Volume=? AND Status=?", [x['DynamicName'],x['Volume'],x['Status']])
|
||||
countannuals = myDB.select("SELECT count(*) FROM importresults WHERE DynamicName=? AND Volume=? AND IssueNumber LIKE 'Annual%' AND Status=?", [x['DynamicName'],x['Volume'],x['Status']])
|
||||
else:
|
||||
countthis = myDB.select("SELECT count(*) FROM importresults WHERE DynamicName=? AND Volume IS NULL AND Status=?", [x['DynamicName'],x['Status']])
|
||||
countannuals = myDB.select("SELECT count(*) FROM importresults WHERE DynamicName=? AND Volume IS NULL AND IssueNumber LIKE 'Annual%' AND Status=?", [x['DynamicName'],x['Status']])
|
||||
countit.append({"DynamicName": x['DynamicName'],
|
||||
"Volume": x['Volume'],
|
||||
"IssueCount": countthis[0][0],
|
||||
"AnnualCount": countannuals[0][0],
|
||||
"ComicName": x['ComicName'],
|
||||
"DisplayName": x['DisplayName'],
|
||||
"Volume": x['Volume'],
|
||||
"ComicYear": x['ComicYear'],
|
||||
"Status": x['Status'],
|
||||
"ComicID": x['ComicID'],
|
||||
"WatchMatch": x['WatchMatch'],
|
||||
"ImportDate": x['ImportDate'],
|
||||
"SRID": x['SRID']})
|
||||
|
||||
#for ct in countit:
|
||||
# ctrlVal = {"DynamicName": ct['DynamicName'],
|
||||
# "Volume": ct['Volume']}
|
||||
# newVal = {"IssueCount": ct['IssueCount']}
|
||||
# myDB.upsert("importresults", newVal, ctrlVal)
|
||||
#logger.info("counted " + str(countit) + " issues for " + str(result['ComicName']))
|
||||
#need to reload results now
|
||||
results = myDB.select("SELECT * FROM importresults WHERE WatchMatch is Null OR WatchMatch LIKE 'C%' group by ComicName COLLATE NOCASE")
|
||||
watchresults = myDB.select("SELECT * FROM importresults WHERE WatchMatch is not Null AND WatchMatch NOT LIKE 'C%' group by ComicName COLLATE NOCASE")
|
||||
return serve_template(templatename="importresults.html", title="Import Results", results=results, watchresults=watchresults)
|
||||
#results = myDB.select("SELECT * FROM importresults WHERE WatchMatch is Null OR WatchMatch LIKE 'C%' group by DynamicName, Volume COLLATE NOCASE")
|
||||
#watchresults = myDB.select("SELECT * FROM importresults WHERE WatchMatch is not Null AND WatchMatch NOT LIKE 'C%' group by DynamicName, Volume COLLATE NOCASE")
|
||||
return serve_template(templatename="importresults.html", title="Import Results", results=countit) #results, watchresults=watchresults)
|
||||
importResults.exposed = True
|
||||
|
||||
def deleteimport(self, ComicName):
|
||||
def ImportFilelisting(self, comicname, dynamicname, volume):
|
||||
myDB = db.DBConnection()
|
||||
logger.info("Removing import data for Comic: " + ComicName)
|
||||
myDB.action('DELETE from importresults WHERE ComicName=?', [ComicName])
|
||||
if volume is None or volume == 'None':
|
||||
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume IS NULL",[dynamicname])
|
||||
else:
|
||||
if not volume.lower().startswith('v'):
|
||||
volume = 'v' + str(volume)
|
||||
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume=?",[dynamicname,volume])
|
||||
|
||||
filelisting = '<table width="500"><tr><td>'
|
||||
filelisting += '<center><b>Files that have been scanned in for:</b></center>'
|
||||
if volume is None or volume == 'None':
|
||||
filelisting += '<center><b>' + re.sub('\+', ' ', comicname) + '</b></center></td></tr><tr><td>'
|
||||
else:
|
||||
filelisting += '<center><b>' + re.sub('\+', ' ', comicname) + ' [' + str(volume) + ']</b></center></td></tr><tr><td>'
|
||||
#filelisting += '<div style="height:300px;overflow:scroll;overflow-x:hidden;">'
|
||||
filelisting += '<div style="display:inline-block;overflow-y:auto;overflow-x:hidden;">'
|
||||
cnt = 0
|
||||
for result in results:
|
||||
filelisting += result['ComicFilename'] + '</br>'
|
||||
filelisting += '</div></td></tr>'
|
||||
filelisting += '<tr><td align="right">' + str(len(results)) + ' Files.</td></tr>'
|
||||
filelisting += '</table>'
|
||||
return filelisting
|
||||
ImportFilelisting.exposed = True
|
||||
|
||||
def deleteimport(self, ComicName, volume, DynamicName, Status):
|
||||
myDB = db.DBConnection()
|
||||
if volume is None or volume == 'None':
|
||||
logname = ComicName
|
||||
else:
|
||||
logname = ComicName + '[' + str(volume) + ']'
|
||||
logger.info("Removing import data for Comic: " + logname)
|
||||
if volume is None or volume == 'None':
|
||||
myDB.action('DELETE from importresults WHERE DynamicName=? AND Volume is NULL AND Status=?', [DynamicName, Status])
|
||||
else:
|
||||
myDB.action('DELETE from importresults WHERE DynamicName=? AND Volume=? AND Status=?', [DynamicName, volume, Status])
|
||||
raise cherrypy.HTTPRedirect("importResults")
|
||||
deleteimport.exposed = True
|
||||
|
||||
def preSearchit(self, ComicName, comiclist=None, mimp=0, displaycomic=None, comicid=None):
|
||||
def preSearchit(self, ComicName, comiclist=None, mimp=0, volume=None, displaycomic=None, comicid=None, dynamicname=None, displayline=None):
|
||||
if mylar.IMPORTLOCK:
|
||||
logger.info('There is an import already running. Please wait for it to finish, and then you can resubmit this import.')
|
||||
return
|
||||
|
@ -3008,6 +3115,8 @@ class WebInterface(object):
|
|||
if mimp == 0:
|
||||
comiclist = []
|
||||
comiclist.append({"ComicName": ComicName,
|
||||
"DynamicName": dynamicname,
|
||||
"Volume": volume,
|
||||
"ComicID": comicid})
|
||||
|
||||
with importlock:
|
||||
|
@ -3018,8 +3127,8 @@ class WebInterface(object):
|
|||
#otherwise, comicID present by itself indicates a watch match that already exists and is done below this sequence.
|
||||
RemoveIDS = []
|
||||
for comicinfo in comiclist:
|
||||
logger.info('Checking for any valid metatagging already present.')
|
||||
logger.info(comicinfo['ComicID'])
|
||||
logger.info('Checking for any valid ComicIDs already present within filenames.')
|
||||
logger.info(comicinfo)
|
||||
if comicinfo['ComicID'] is None or comicinfo['ComicID'] == 'None':
|
||||
continue
|
||||
else:
|
||||
|
@ -3039,20 +3148,34 @@ class WebInterface(object):
|
|||
#we need to remove these items from the comiclist now, so they don't get processed again
|
||||
if len(RemoveIDS) > 0:
|
||||
for RID in RemoveIDS:
|
||||
newlist = {k:comiclist[k] for k in comiclist if comiclist[k]['ComicID'] != RID}
|
||||
newlist = {k:comiclist[k] for k in comiclist if k['ComicID'] != RID}
|
||||
comiclist = newlist
|
||||
logger.info('newlist: ' + str(newlist))
|
||||
|
||||
for cl in comiclist:
|
||||
implog = ''
|
||||
implog = implog + "imp_rename:" + str(mylar.IMP_RENAME) + "\n"
|
||||
implog = implog + "imp_move:" + str(mylar.IMP_MOVE) + "\n"
|
||||
ComicName = cl['ComicName']
|
||||
logger.info('comicname is :' + ComicName)
|
||||
implog = implog + "comicName: " + str(ComicName) + "\n"
|
||||
results = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
|
||||
volume = cl['Volume']
|
||||
DynamicName = cl['DynamicName']
|
||||
logger.fdebug('comicname: ' + ComicName)
|
||||
logger.fdebug('dyn: ' + DynamicName)
|
||||
|
||||
if volume is None or volume == 'None':
|
||||
comic_and_vol = ComicName
|
||||
else:
|
||||
comic_and_vol = ComicName + ' (' + str(volume) + ')'
|
||||
logger.info('[' + comic_and_vol + '] Now preparing to import. First I need to determine the highest issue, and possible year(s) of the series.')
|
||||
if volume is None or volume == 'None':
|
||||
logger.info('[none] dynamicname: ' + DynamicName)
|
||||
logger.info('[none] volume: None')
|
||||
|
||||
results = myDB.select("SELECT * FROM importresults WHERE DynamicName=? AND Volume IS NULL AND Status='Not Imported'", [DynamicName])
|
||||
else:
|
||||
logger.info('[!none] dynamicname: ' + DynamicName)
|
||||
logger.info('[!none] volume: ' + volume)
|
||||
results = myDB.select("SELECT * FROM importresults WHERE DynamicName=? AND Volume=? AND Status='Not Imported'", [DynamicName,volume])
|
||||
|
||||
if not results:
|
||||
logger.info('I cannot find any results.')
|
||||
logger.info('I cannot find any results for the given series. I should remove this from the list.')
|
||||
continue
|
||||
#if results > 0:
|
||||
# print ("There are " + str(results[7]) + " issues to import of " + str(ComicName))
|
||||
|
@ -3077,13 +3200,10 @@ class WebInterface(object):
|
|||
watchmatched = ''
|
||||
|
||||
if watchmatched.startswith('C'):
|
||||
implog = implog + "Confirmed. ComicID already provided - initiating auto-magik mode for import.\n"
|
||||
comicid = result['WatchMatch'][1:]
|
||||
implog = implog + result['WatchMatch'] + " .to. " + str(comicid) + "\n"
|
||||
#since it's already in the watchlist, we just need to move the files and re-run the filechecker.
|
||||
#self.refreshArtist(comicid=comicid,imported='yes')
|
||||
if mylar.IMP_MOVE:
|
||||
implog = implog + "Mass import - Move files\n"
|
||||
comloc = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
|
||||
|
||||
movedata_comicid = comicid
|
||||
|
@ -3094,30 +3214,31 @@ class WebInterface(object):
|
|||
#check for existing files... (this is already called after move files in importer)
|
||||
#updater.forceRescan(comicid)
|
||||
else:
|
||||
implog = implog + "nothing to do if I'm not moving.\n"
|
||||
raise cherrypy.HTTPRedirect("importResults")
|
||||
else:
|
||||
comicstoIMP.append(result['ComicLocation'].decode(mylar.SYS_ENCODING, 'replace'))
|
||||
getiss = result['impID'].rfind('-')
|
||||
getiss = result['impID'][getiss +1:]
|
||||
imlog = implog + "figured issue is : " + str(getiss) + "\n"
|
||||
if (result['ComicYear'] not in yearRANGE) or (yearRANGE is None):
|
||||
if result['ComicYear'] <> "0000":
|
||||
implog = implog + "adding..." + str(result['ComicYear']) + "\n"
|
||||
comicstoIMP.append(result['ComicLocation'])#.decode(mylar.SYS_ENCODING, 'replace'))
|
||||
getiss = result['IssueNumber']
|
||||
logger.info('getiss:' + str(getiss))
|
||||
if 'annual' in getiss.lower():
|
||||
tmpiss = re.sub('[^0-9]','', getiss).strip()
|
||||
if any([tmpiss.startswith('19'), tmpiss.startswith('20')]) and len(tmpiss) == 4:
|
||||
logger.fdebug('annual detected with no issue [' + getiss + ']. Skipping this entry for determining series length.')
|
||||
continue
|
||||
else:
|
||||
if (result['ComicYear'] not in yearRANGE) or all([yearRANGE is None, yearRANGE == 'None']):
|
||||
if result['ComicYear'] <> "0000" and result['ComicYear'] is not None:
|
||||
yearRANGE.append(str(result['ComicYear']))
|
||||
yearTOP = str(result['ComicYear'])
|
||||
getiss_num = helpers.issuedigits(getiss)
|
||||
miniss_num = helpers.issuedigits(str(minISSUE))
|
||||
startiss_num = helpers.issuedigits(str(startISSUE))
|
||||
if int(getiss_num) > int(miniss_num):
|
||||
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(minISSUE) + "\n"
|
||||
logger.fdebug('Minimum issue now set to : ' + str(getiss) + ' - it was : ' + str(minISSUE))
|
||||
minISSUE = str(getiss)
|
||||
#logger.fdebug('Minimum issue now set to : ' + getiss + ' - it was : ' + minISSUE)
|
||||
minISSUE = getiss
|
||||
if int(getiss_num) < int(startiss_num):
|
||||
implog = implog + "issue now set to : " + str(getiss) + " ... it was : " + str(startISSUE) + "\n"
|
||||
logger.fdebug('Start issue now set to : ' + str(getiss) + ' - it was : ' + str(startISSUE))
|
||||
#logger.fdebug('Start issue now set to : ' + getiss + ' - it was : ' + startISSUE)
|
||||
startISSUE = str(getiss)
|
||||
if helpers.issuedigits(startISSUE) == 1000: # if it's an issue #1, get the year and assume that's the start.
|
||||
if helpers.issuedigits(startISSUE) == 1000 and result['ComicYear'] is not None: # if it's an issue #1, get the year and assume that's the start.
|
||||
startyear = result['ComicYear']
|
||||
|
||||
#taking this outside of the transaction in an attempt to stop db locking.
|
||||
|
@ -3129,45 +3250,37 @@ class WebInterface(object):
|
|||
raise cherrypy.HTTPRedirect("importResults")
|
||||
|
||||
#figure out # of issues and the year range allowable
|
||||
logger.info('yearTOP: ' + str(yearTOP))
|
||||
logger.info('minISSUE: ' + str(minISSUE))
|
||||
logger.info('yearRANGE: ' + str(yearRANGE))
|
||||
if starttheyear is None:
|
||||
if yearTOP > 0:
|
||||
if helpers.int_num(minISSUE) < 1000:
|
||||
maxyear = int(yearTOP)
|
||||
if all([yearTOP != None, yearTOP != 'None']):
|
||||
if int(str(yearTOP)) > 0:
|
||||
minni = helpers.int_num(minISSUE)
|
||||
if minni < 1:
|
||||
maxyear = int(str(yearTOP))
|
||||
else:
|
||||
maxyear = int(yearTOP) - (int(minISSUE) / 12)
|
||||
maxyear = int(str(yearTOP)) - (minni / 12)
|
||||
if str(maxyear) not in yearRANGE:
|
||||
yearRANGE.append(str(maxyear))
|
||||
implog = implog + "there is a " + str(maxyear) + " year variation based on the 12 issues/year\n"
|
||||
for i in range(maxyear, int(yearTOP),1):
|
||||
if not any(int(x) == int(i) for x in yearRANGE):
|
||||
yearRANGE.append(str(i))
|
||||
else:
|
||||
yearRANGE = None
|
||||
else:
|
||||
implog = implog + "no year detected in any issues...Nulling the value\n"
|
||||
yearRANGE = None
|
||||
else:
|
||||
implog = implog + "First issue detected as starting in " + str(starttheyear) + ". Setting start range to that.\n"
|
||||
yearRANGE.append(starttheyear)
|
||||
|
||||
if yearRANGE is not None:
|
||||
yearRANGE = sorted(yearRANGE, reverse=True)
|
||||
#determine a best-guess to # of issues in series
|
||||
#this needs to be reworked / refined ALOT more.
|
||||
#minISSUE = highest issue #, startISSUE = lowest issue #
|
||||
numissues = helpers.int_num(minISSUE) - helpers.int_num(startISSUE) +1 # add 1 to account for one issue itself.
|
||||
numissues = len(comicstoIMP)
|
||||
#numissues = helpers.int_num(minISSUE) - helpers.int_num(startISSUE) +1 # add 1 to account for one issue itself.
|
||||
#normally minissue would work if the issue #'s started at #1.
|
||||
implog = implog + "the years involved are : " + str(yearRANGE) + "\n"
|
||||
implog = implog + "highest issue # is : " + str(minISSUE) + "\n"
|
||||
implog = implog + "lowest issue # is : " + str(startISSUE) + "\n"
|
||||
implog = implog + "approximate number of issues : " + str(numissues) + "\n"
|
||||
implog = implog + "issues present on system : " + str(len(comicstoIMP)) + "\n"
|
||||
implog = implog + "versioning checking on filenames: \n"
|
||||
cnsplit = ComicName.split()
|
||||
#cnwords = len(cnsplit)
|
||||
#cnvers = cnsplit[cnwords-1]
|
||||
ogcname = ComicName
|
||||
for splitt in cnsplit:
|
||||
if 'v' in str(splitt):
|
||||
implog = implog + "possible versioning detected.\n"
|
||||
if splitt[1:].isdigit():
|
||||
implog = implog + splitt + " - assuming versioning. Removing from initial search pattern.\n"
|
||||
ComicName = re.sub(str(splitt), '', ComicName)
|
||||
implog = implog + "new comicname is : " + ComicName + "\n"
|
||||
# we need to pass the original comicname here into the entire importer module
|
||||
# so that we can reference the correct issues later.
|
||||
|
||||
mode='series'
|
||||
displaycomic = helpers.filesafe(ComicName)
|
||||
|
@ -3175,65 +3288,108 @@ class WebInterface(object):
|
|||
displaycomic = re.sub('\s+', ' ', displaycomic).strip()
|
||||
logger.fdebug('displaycomic : ' + displaycomic)
|
||||
logger.fdebug('comicname : ' + ComicName)
|
||||
searchterm = '"' + displaycomic + '"'
|
||||
if yearRANGE is None:
|
||||
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, explicit='all') #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
|
||||
sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, explicit='all') #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
|
||||
else:
|
||||
sresults, explicit = mb.findComic(displaycomic, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
|
||||
sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
|
||||
type='comic'
|
||||
|
||||
if len(sresults) == 1:
|
||||
sr = sresults[0]
|
||||
implog = implog + "only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']) + "\n"
|
||||
logger.fdebug("only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']))
|
||||
#we now need to cycle through the results until we get a hit on both dynamicname AND year (~count of issues possibly).
|
||||
logger.fdebug('[' + str(len(sresults)) + '] search results')
|
||||
search_matches = []
|
||||
|
||||
for results in sresults:
|
||||
rsn = filechecker.FileChecker()
|
||||
rsn_run = rsn.dynamic_replace(results['name'])
|
||||
result_name = rsn_run['mod_seriesname']
|
||||
result_comicid = results['comicid']
|
||||
result_year = results['comicyear']
|
||||
logger.fdebug('Comparing: ' + re.sub('[\|\s]', '', DynamicName.lower()).strip() + ' - TO - ' + re.sub('[\|\s]', '', result_name.lower()).strip())
|
||||
if re.sub('[\|\s]', '', DynamicName.lower()).strip() == re.sub('[\|\s]', '', result_name.lower()).strip():
|
||||
logger.info('[IMPORT MATCH] ' + result_name + ' (' + str(result_comicid) + ')')
|
||||
search_matches.append({'comicid': results['comicid'],
|
||||
'series': results['name'],
|
||||
'dynamicseries': result_name,
|
||||
'seriesyear': result_year})
|
||||
|
||||
|
||||
if len(search_matches) == 1:
|
||||
sr = search_matches[0]
|
||||
logger.info("There is only one result...automagik-mode enabled for " + sr['series'] + " :: " + str(sr['comicid']))
|
||||
resultset = 1
|
||||
# #need to move the files here.
|
||||
elif len(sresults) == 0 or len(sresults) is None:
|
||||
implog = implog + "no results, removing the year from the agenda and re-querying.\n"
|
||||
else:
|
||||
if len(search_matches) == 0:
|
||||
logger.fdebug("no results, removing the year from the agenda and re-querying.")
|
||||
sresults, explicit = mb.findComic(ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
|
||||
if len(sresults) == 1:
|
||||
sr = sresults[0]
|
||||
implog = implog + "only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']) + "\n"
|
||||
logger.fdebug("only one result...automagik-mode enabled for " + displaycomic + " :: " + str(sr['comicid']))
|
||||
sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
|
||||
logger.fdebug('[' + str(len(sresults)) + '] search results')
|
||||
for results in sresults:
|
||||
rsn = filechecker.FileChecker()
|
||||
rsn_run = rsn.dynamic_replace(results['name'])
|
||||
result_name = rsn_run['mod_seriesname']
|
||||
result_comicid = results['comicid']
|
||||
result_year = results['comicyear']
|
||||
logger.fdebug('Comparing: ' + re.sub('[\|\s]', '', DynamicName.lower()).strip() + ' - TO - ' + re.sub('[\|\s]', '', result_name.lower()).strip())
|
||||
if re.sub('[\|\s]', '', DynamicName.lower()).strip() == re.sub('[\|\s]', '', result_name.lower()).strip():
|
||||
logger.info('[IMPORT MATCH] ' + result_name + ' (' + str(result_comicid) + ')')
|
||||
search_matches.append({'comicid': results['comicid'],
|
||||
'series': results['name'],
|
||||
'dynamicseries': result_name,
|
||||
'seriesyear': result_year})
|
||||
|
||||
if len(search_matches) == 1:
|
||||
sr = search_matches[0]
|
||||
logger.info("There is only one result...automagik-mode enabled for " + sr['series'] + " :: " + str(sr['comicid']))
|
||||
resultset = 1
|
||||
else:
|
||||
resultset = 0
|
||||
else:
|
||||
implog = implog + "returning results to screen - more than one possibility.\n"
|
||||
logger.fdebug("Returning results to Select option - more than one possibility, manual intervention required.")
|
||||
logger.info('Returning results to Select option - there are ' + str(len(search_matches)) + ' possibilities, manual intervention required.')
|
||||
resultset = 0
|
||||
|
||||
#generate random Search Results ID to allow for easier access for viewing logs / search results.
|
||||
|
||||
import random
|
||||
SRID = str(random.randint(100000, 999999))
|
||||
|
||||
#write implog to db here.
|
||||
ctrlVal = {"ComicName": ogcname} #{"ComicName": ComicName}
|
||||
newVal = {"implog": implog,
|
||||
"SRID": SRID}
|
||||
if len(sresults) > 1:
|
||||
#link the SRID to the series that was just imported so that it can reference the search results when requested.
|
||||
|
||||
if volume is None or volume == 'None':
|
||||
ctrlVal = {"DynamicName": DynamicName,
|
||||
"ComicName": ComicName}
|
||||
else:
|
||||
ctrlVal = {"DynamicName": DynamicName,
|
||||
"ComicName": ComicName,
|
||||
"Volume": volume}
|
||||
|
||||
newVal = {"SRID": SRID,
|
||||
"Status": 'Manual Intervention'}
|
||||
|
||||
myDB.upsert("importresults", newVal, ctrlVal)
|
||||
|
||||
# store the search results for series that returned more than one result for user to select later / when they want.
|
||||
# should probably assign some random numeric for an id to reference back at some point.
|
||||
for sr in sresults:
|
||||
for sres in sresults:
|
||||
cVal = {"SRID": SRID,
|
||||
"comicid": sr['comicid']}
|
||||
"comicid": sres['comicid']}
|
||||
#should store ogcname in here somewhere to account for naming conversions above.
|
||||
nVal = {"Series": ComicName,
|
||||
"results": len(sresults),
|
||||
"publisher": sr['publisher'],
|
||||
"haveit": sr['haveit'],
|
||||
"name": sr['name'],
|
||||
"deck": sr['deck'],
|
||||
"url": sr['url'],
|
||||
"description": sr['description'],
|
||||
"comicimage": sr['comicimage'],
|
||||
"issues": sr['issues'],
|
||||
"publisher": sres['publisher'],
|
||||
"haveit": sres['haveit'],
|
||||
"name": sres['name'],
|
||||
"deck": sres['deck'],
|
||||
"url": sres['url'],
|
||||
"description": sres['description'],
|
||||
"comicimage": sres['comicimage'],
|
||||
"issues": sres['issues'],
|
||||
"ogcname": ogcname,
|
||||
"comicyear": sr['comicyear']}
|
||||
"comicyear": sres['comicyear']}
|
||||
myDB.upsert("searchresults", nVal, cVal)
|
||||
|
||||
if resultset == 1:
|
||||
logger.info('now adding...')
|
||||
self.addbyid(sr['comicid'], calledby=True, imported='yes', ogcname=ogcname)
|
||||
#implog = implog + "ogcname -- " + str(ogcname) + "\n"
|
||||
#cresults = self.addComic(comicid=sr['comicid'],comicname=sr['name'],comicyear=sr['comicyear'],comicpublisher=sr['publisher'],comicimage=sr['comicimage'],comicissues=sr['issues'],imported='yes',ogcname=ogcname) #imported=comicstoIMP,ogcname=ogcname)
|
||||
|
@ -3241,24 +3397,32 @@ class WebInterface(object):
|
|||
#else:
|
||||
#return serve_template(templatename="searchresults.html", title='Import Results for: "' + displaycomic + '"',searchresults=sresults, type=type, imported='yes', ogcname=ogcname, name=ogcname, explicit=explicit, serinfo=None) #imported=comicstoIMP, ogcname=ogcname)
|
||||
#status update.
|
||||
ctrlVal = {"ComicName": ComicName}
|
||||
if volume is None or volume == 'None':
|
||||
ctrlVal = {"DynamicName": DynamicName,
|
||||
"ComicName": ComicName}
|
||||
else:
|
||||
ctrlVal = {"DynamicName": DynamicName,
|
||||
"ComicName": ComicName,
|
||||
"Volume": volume}
|
||||
|
||||
newVal = {"Status": 'Imported',
|
||||
"SRID": SRID,
|
||||
"ComicID": sr['comicid']}
|
||||
myDB.upsert("importresults", newVal, ctrlVal)
|
||||
|
||||
mylar.IMPORTLOCK = False
|
||||
logger.info('Importing finished.')
|
||||
|
||||
preSearchit.exposed = True
|
||||
|
||||
def importresults_popup(self, SRID, ComicName, imported=None, ogcname=None):
|
||||
def importresults_popup(self, SRID, ComicName, imported=None, ogcname=None, DynamicName=None):
|
||||
myDB = db.DBConnection()
|
||||
results = myDB.select("SELECT * FROM searchresults WHERE SRID=?", [SRID])
|
||||
if results:
|
||||
return serve_template(templatename="importresults_popup.html", title="results", searchtext=ComicName, searchresults=results)
|
||||
else:
|
||||
logger.warn('There are no search results to view for this entry ' + ComicName + ' [' + str(SRID) + ']. Something is probably wrong.')
|
||||
return
|
||||
raise cherrypy.HTTPRedirect("importResults")
|
||||
importresults_popup.exposed = True
|
||||
|
||||
def pretty_git(self, br_history):
|
||||
|
@@ -3937,39 +4101,92 @@ class WebInterface(object):
logger.fdebug('sab_password: ' + str(sab_password))
logger.fdebug('sab_apikey: ' + str(sab_apikey))
if mylar.USE_SABNZBD:
- import urllib2
- from xml.dom.minidom import parseString
+ import lib.requests as requests
+ from xml.dom.minidom import parseString, Element

#if user/pass given, we can auto-fill the API ;)
if sab_username is None or sab_password is None:
logger.error('No Username / Password provided for SABnzbd credentials. Unable to test API key')
- return
+ return "Invalid Username/Password provided"
logger.fdebug('testing connection to SABnzbd @ ' + sab_host)
logger.fdebug('SAB API Key :' + sab_apikey)
if sab_host.endswith('/'):
sabhost = sab_host
else:
sabhost = sab_host + '/'
- querysab = sabhost + "api?mode=get_config&section=misc&output=xml&apikey=" + sab_apikey
- file = urllib2.urlopen(querysab)
- data = file.read()
- file.close()
- dom = parseString(data)

+ querysab = sabhost + 'api'
+ payload = {'mode': 'get_config',
+ 'section': 'misc',
+ 'output': 'xml',
+ 'apikey': sab_apikey}

+ if sabhost.startswith('https'):
+ verify = True
+ else:
+ verify = False

+ try:
+ r = requests.get(querysab, params=payload, verify=verify)
+ except Exception, e:
+ logger.warn('Error fetching data from %s: %s' % (sab_host, e))
+ if requests.exceptions.SSLError:
+ logger.warn('Cannot verify ssl certificate. Attempting to authenticate with no ssl-certificate verification.')
+ try:
+ from lib.requests.packages.urllib3 import disable_warnings
+ disable_warnings()
+ except:
+ logger.warn('Unable to disable https warnings. Expect some spam if using https nzb providers.')

+ verify = False

+ try:
+ r = requests.get(querysab, params=payload, verify=verify)
+ except Exception, e:
+ logger.warn('Error fetching data from %s: %s' % (sab_host, e))
+ return 'Unable to retrieve data from SABnzbd'
+ else:
+ return 'Unable to retrieve data from SABnzbd'


+ logger.info('status code: ' + str(r.status_code))

+ if str(r.status_code) != '200':
+ logger.warn('Unable to properly query SABnzbd @' + sabhost + ' [Status Code returned: ' + str(r.status_code) + ']')
+ data = False
+ else:
+ data = r.content

+ if data:
+ dom = parseString(data)
+ else:
+ return 'Unable to reach SABnzbd'

try:
if dom.getElementsByTagName('status')[0].firstChild.wholeText == 'True':
q_sabhost = dom.getElementsByTagName('host')[0].firstChild.wholeText
q_nzbkey = dom.getElementsByTagName('nzb_key')[0].firstChild.wholeText
q_apikey = dom.getElementsByTagName('api_key')[0].firstChild.wholeText
else:
raise ValueError
except:
errorm = dom.getElementsByTagName('error')[0].firstChild.wholeText
logger.error(u"Error detected attempting to retrieve SAB data using FULL APIKey: " + errorm)
if errorm == 'API Key Incorrect':
logger.fdebug('You may have given me just the right amount of power (NZBKey), will test SABnzbd against the NZBkey now')
- querysab = sabhost + "api?mode=addurl&name=http://www.example.com/example.nzb&nzbname=NiceName&output=xml&apikey=" + mylar.SAB_APIKEY
- file = urllib2.urlopen(querysab)
- data = file.read()
- file.close()
- dom = parseString(data)
+ querysab = sabhost + 'api'
+ payload = {'mode': 'addurl',
+ 'name': 'http://www.example.com/example.nzb',
+ 'nzbname': 'NiceName',
+ 'output': 'xml',
+ 'apikey': sab_apikey}
+ try:
+ r = requests.get(querysab, params=payload, verify=verify)
+ except Exception, e:
+ logger.warn('Error fetching data from %s: %s' % (sab_host, e))
+ return 'Unable to retrieve data from SABnzbd'

+ dom = parseString(r.content)
qdata = dom.getElementsByTagName('status')[0].firstChild.wholeText

if str(qdata) == 'True':
@@ -3981,13 +4198,13 @@ class WebInterface(object):
logger.error(str(qerror) + ' - check that the API (NZBkey) is correct, use the auto-detect option AND/OR check host:port settings')
qd = False

- if qd == False: return
+ if qd == False: return "Invalid APIKey provided."

#test which apikey provided
- if q_nzbkey != sab_apikey:
+ if q_apikey != sab_apikey:
logger.error('APIKey provided does not match with SABnzbd')
- return
+ return "Invalid APIKey provided"
else:
logger.info('APIKey provided is FULL APIKey which is too much power - changing to NZBKey')
mylar.SAB_APIKEY = q_nzbkey
@@ -3997,9 +4214,10 @@ class WebInterface(object):
logger.info('APIKey provided is NZBKey which is the correct key.')

logger.info('Connection to SABnzbd tested sucessfully')
+ return "Successfully verified APIkey"
else:
logger.error('You do not have anything stated for SAB Host. Please correct and try again.')
- return
+ return "Invalid SABnzbd host specified"
SABtest.exposed = True

def shutdown(self):
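The SABnzbd hunks above replace the old urllib2 calls with requests, retry without certificate verification when an SSL error occurs, and read nzb_key / api_key out of the get_config XML so the limited NZB key can be stored instead of the full API key. A standalone sketch of that flow, with a placeholder host and key; this is an approximation of the approach rather than Mylar's exact SABtest helper:

import requests
import urllib3
from xml.dom.minidom import parseString

def test_sab_connection(sab_host, sab_apikey):
    # Mirror of the flow above: query api?mode=get_config and pull the keys from the XML.
    host = sab_host if sab_host.endswith('/') else sab_host + '/'
    payload = {'mode': 'get_config', 'section': 'misc',
               'output': 'xml', 'apikey': sab_apikey}
    try:
        r = requests.get(host + 'api', params=payload, verify=host.startswith('https'))
    except requests.exceptions.SSLError:
        # Certificate could not be verified - retry once without verification.
        urllib3.disable_warnings()
        r = requests.get(host + 'api', params=payload, verify=False)
    except requests.exceptions.RequestException as e:
        return 'Unable to retrieve data from SABnzbd: %s' % e
    if r.status_code != 200:
        return 'Unable to properly query SABnzbd [status code: %s]' % r.status_code
    dom = parseString(r.content)
    try:
        nzb_key = dom.getElementsByTagName('nzb_key')[0].firstChild.wholeText
        api_key = dom.getElementsByTagName('api_key')[0].firstChild.wholeText
    except IndexError:
        return 'API key rejected or config section unavailable'
    # nzb_key is the limited-permission key that actually gets stored.
    return {'nzb_key': nzb_key, 'api_key': api_key}

# Example with placeholder values:
# print(test_sab_connection('http://localhost:8080/', '0123456789abcdef'))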
@@ -4082,35 +4300,50 @@ class WebInterface(object):

downloadthis.exposed = True

- def IssueInfo(self, filelocation):
+ def IssueInfo(self, filelocation, comicname=None, issue=None, date=None, title=None):
filelocation = filelocation.encode('ASCII')
filelocation = urllib.unquote_plus(filelocation).decode('utf8')
issuedetails = helpers.IssueDetails(filelocation)
if issuedetails:
#print str(issuedetails)
issueinfo = '<table width="500"><tr><td>'
issueinfo += '<img style="float: left; padding-right: 10px" src=' + issuedetails[0]['IssueImage'] + ' height="400" width="263">'
- issueinfo += '<h1><center><b>' + issuedetails[0]['series'] + '</br>[#' + issuedetails[0]['issue_number'] + ']</b></center></h1>'
- issueinfo += '<center>"' + issuedetails[0]['title'] + '"</center></br>'
+ seriestitle = issuedetails[0]['series']
+ if any([seriestitle == 'None', seriestitle is None]):
+ seriestitle = comicname
+
+ issuenumber = issuedetails[0]['issue_number']
+ if any([issuenumber == 'None', issuenumber is None]):
+ issuenumber = issue
+
+ issuetitle = issuedetails[0]['title']
+ if any([issuetitle == 'None', issuetitle is None]):
+ issuetitle = title
+
+ issueinfo += '<h1><center><b>' + seriestitle + '</br>[#' + issuenumber + ']</b></center></h1>'
+ issueinfo += '<center>"' + issuetitle + '"</center></br>'
issueinfo += '</br><p class="alignleft">' + str(issuedetails[0]['pagecount']) + ' pages</p>'
- if issuedetails[0]['day'] is None:
- issueinfo += '<p class="alignright">(' + str(issuedetails[0]['year']) + '-' + str(issuedetails[0]['month']) + ')</p></br>'
+ if all([issuedetails[0]['day'] is None, issuedetails[0]['month'] is None, issuedetails[0]['year'] is None]):
+ issueinfo += '<p class="alignright">(' + str(date) + ')</p></br>'
else:
issueinfo += '<p class="alignright">(' + str(issuedetails[0]['year']) + '-' + str(issuedetails[0]['month']) + '-' + str(issuedetails[0]['day']) + ')</p></br>'
- if not issuedetails[0]['writer'] == 'None':
+ if not any([issuedetails[0]['writer'] == 'None', issuedetails[0]['writer'] is None]):
issueinfo += 'Writer: ' + issuedetails[0]['writer'] + '</br>'
- if not issuedetails[0]['penciller'] == 'None':
+ if not any([issuedetails[0]['penciller'] == 'None', issuedetails[0]['penciller'] is None]):
issueinfo += 'Penciller: ' + issuedetails[0]['penciller'] + '</br>'
- if not issuedetails[0]['inker'] == 'None':
+ if not any([issuedetails[0]['inker'] == 'None', issuedetails[0]['inker'] is None]):
issueinfo += 'Inker: ' + issuedetails[0]['inker'] + '</br>'
- if not issuedetails[0]['colorist'] == 'None':
+ if not any([issuedetails[0]['colorist'] == 'None', issuedetails[0]['colorist'] is None]):
issueinfo += 'Colorist: ' + issuedetails[0]['colorist'] + '</br>'
- if not issuedetails[0]['letterer'] == 'None':
+ if not any([issuedetails[0]['letterer'] == 'None', issuedetails[0]['letterer'] is None]):
issueinfo += 'Letterer: ' + issuedetails[0]['letterer'] + '</br>'
- if not issuedetails[0]['editor'] == 'None':
+ if not any([issuedetails[0]['editor'] == 'None', issuedetails[0]['editor'] is None]):
issueinfo += 'Editor: ' + issuedetails[0]['editor'] + '</br>'
issueinfo += '</td></tr>'
#issueinfo += '<img src="interfaces/default/images/rename.png" height="25" width="25"></td></tr>'
issuesumm = None
if all([issuedetails[0]['summary'] == 'None', issuedetails[0]['summary'] is None]):
issuesumm = 'No summary available within metatagging.'
else:
if len(issuedetails[0]['summary']) > 1000:
issuesumm = issuedetails[0]['summary'][:1000] + '...'
else:
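The repeated any([value == 'None', value is None]) checks in the IssueInfo hunk above all express the same fallback: prefer the tagged metadata field, and otherwise use the value passed in from the issue table (the metadata can come back as the literal string 'None' when a field is untagged, hence the double check). A tiny hypothetical helper, not present in the codebase, that captures the pattern:

def meta_or(value, fallback):
    # Treat the literal string 'None' the same as a real None: both mean "not tagged".
    if value is None or value == 'None':
        return fallback
    return value

# Usage mirroring the block above:
# seriestitle = meta_or(issuedetails[0]['series'], comicname)
# issuenumber = meta_or(issuedetails[0]['issue_number'], issue)
# issuetitle  = meta_or(issuedetails[0]['title'], title)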
@@ -4249,3 +4482,56 @@ class WebInterface(object):
logger.info('here')
return
orderThis.exposed = True
+
+ def torrentit(self, torrent_hash):
+ import test
+ #import lib.torrent.libs.rtorrent as rTorrent
+ from base64 import b16encode, b32decode
+ #torrent_hash # Hash of the torrent
+ logger.fdebug("Working on torrent: " + torrent_hash)
+
+ if len(torrent_hash) == 32:
+ torrent_hash = b16encode(b32decode(torrent_hash))
+
+ if not len(torrent_hash) == 40:
+ logger.error("Torrent hash is missing, or an invalid hash value has been passed")
+ return
+ else:
+ rp = test.RTorrent()
+ torrent_info = rp.main(torrent_hash)
+
+ if torrent_info['completed']:
+ logger.info("Client: %s", mylar.RTORRENT_HOST)
+ logger.info("Directory: %s", torrent_info['folder'])
+ logger.info("Name: %s", torrent_info['name'])
+ logger.info("Hash: %s", torrent_info['hash'])
+ logger.info("FileSize: %s", helpers.human_size(torrent_info['total_filesize']))
+ logger.info("Completed: %s", torrent_info['completed'])
+ logger.info("Downloaded: %s", helpers.human_size(torrent_info['download_total']))
+ logger.info("Uploaded: %s", helpers.human_size(torrent_info['upload_total']))
+ logger.info("Ratio: %s", torrent_info['ratio'])
+
+ if torrent_info['label']:
+ logger.info("Torrent Label: %s", torrent_info['label'])
+
+ torrentit.exposed = True
+
+ def get_the_hash(self, filepath):
+ import hashlib, StringIO
+ import lib.rtorrent.lib.bencode as bencode
+
+ # Open torrent file
+ torrent_file = open(os.path.join('/home/hero/mylar/cache', filepath), "rb")
+ metainfo = bencode.decode(torrent_file.read())
+ info = metainfo['info']
+ thehash = hashlib.sha1(bencode.encode(info)).hexdigest().upper()
+ logger.info('Hash: ' + thehash)
+
+ get_the_hash.exposed = True
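get_the_hash above computes a torrent's info-hash (bencode-decode the .torrent, re-encode only the info dictionary, then SHA1 it), and torrentit normalizes 32-character base32 hashes to the 40-character hex form rtorrent expects. A sketch of both steps using the standalone bencodepy package, which is an assumption here since Mylar bundles its own bencode module under lib.rtorrent; the file path is a placeholder:

import hashlib
from base64 import b16encode, b32decode

import bencodepy  # assumption: any bencode codec with decode()/encode() will do

def info_hash_from_file(filepath):
    # The info-hash is the SHA1 digest of the bencoded 'info' dictionary only.
    with open(filepath, 'rb') as f:
        metainfo = bencodepy.decode(f.read())
    return hashlib.sha1(bencodepy.encode(metainfo[b'info'])).hexdigest().upper()

def normalize_hash(torrent_hash):
    # Magnet links sometimes carry a 32-char base32 hash; clients want 40-char hex.
    if len(torrent_hash) == 32:
        torrent_hash = b16encode(b32decode(torrent_hash)).decode('ascii')
    if len(torrent_hash) != 40:
        raise ValueError('invalid torrent hash: %r' % torrent_hash)
    return torrent_hash.upper()

# Example with a placeholder path:
# print(info_hash_from_file('/tmp/example.torrent'))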
+
+ def test_32p(self):
+ import auth32p
+ p = auth32p.info32p(test=True)
+ rtnvalues = p.authenticate()
+ return rtnvalues
+ test_32p.exposed = True

@@ -472,15 +472,7 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
b_list = []
comicid = []

- # if it's a one-off check (during an add series), load the comicname here and ignore below.
- if comic1off_name:
- logger.fdebug("This is a one-off for " + comic1off_name + '[ latest issue: ' + str(issue) + ' ]')
- lines.append(comic1off_name.strip())
- unlines.append(comic1off_name.strip())
- comicid.append(comic1off_id)
- latestissue.append(issue)
- w = 1
- else:
+ if comic1off_name is None:
#let's read in the comic.watchlist from the db here
#cur.execute("SELECT ComicID, ComicName_Filesafe, ComicYear, ComicPublisher, ComicPublished, LatestDate, ForceContinuing, AlternateSearch, LatestIssue from comics WHERE Status = 'Active'")
weeklylist = []

@@ -577,6 +569,16 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep

else:
logger.fdebug("Determined to not be a Continuing series at this time.")
+ else:
+ # if it's a one-off check (during an add series), load the comicname here and ignore below.
+ logger.fdebug("This is a one-off for " + comic1off_name + ' [ latest issue: ' + str(issue) + ' ]')
+ lines.append(comic1off_name.strip())
+ unlines.append(comic1off_name.strip())
+ comicid.append(comic1off_id)
+ latestissue.append(issue)
+ w = 1

if w >= 1:
cnt = int(w -1)
cntback = int(w -1)
kp = []
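The two pullitcheck hunks above move the one-off handling (the single-series check made while adding a series) from the top of the function into the else branch of the comic1off_name is None test, so the watchlist path and the one-off path no longer overlap. A rough outline of the resulting shape, with the watchlist read elided:

def pullitcheck(comic1off_name=None, comic1off_id=None, issue=None):
    # Rough outline of the restructured flow; list-building details omitted.
    lines, unlines, comicid, latestissue = [], [], [], []
    w = 0
    if comic1off_name is None:
        # Normal weekly-pull run: read the active watchlist from the db and
        # append every watched series (plus alternate names) to the lists.
        pass  # ... db read / weeklylist population elided ...
    else:
        # One-off check during a series add: seed the lists with just this series.
        lines.append(comic1off_name.strip())
        unlines.append(comic1off_name.strip())
        comicid.append(comic1off_id)
        latestissue.append(issue)
        w = 1
    if w >= 1:
        cnt = int(w - 1)
        # ... comparison against the weekly pull list continues from here ...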