FIX: Included version of comictagger now works with both Windows and *nix-based OSes again
IMP: Global Copy/Move option available when performing post-processing
IMP: Added a verbose file-checking option (FOLDER_SCAN_LOG_VERBOSE) - when enabled, logs as it currently does during manual post-processing/file-checking runs; when disabled, it will not spam the log nearly as much, resulting in more readable log files
IMP: Added verbose debug logging via a startup option (-v), a toggle button in the Log GUI (from headphones), and per-page loading of log file(s) in the GUI
FIX: When doing manual post-processing on issues that were in story arcs, will now indicate X story-arc issues were post-processed for better visibility
FIX: Fixed an issue with deleting from the nzblog table when story-arc issues were post-processed
IMP: Added WEEKFOLDER_LOC to config.ini to allow specifying where the weekly download directories will default to (as opposed to off of the ComicLocation root)
IMP: Better handling of some special-character references in series titles when looking for series on the auto-wanted list
IMP: 32P will now auto-disable the provider if logon returns invalid credentials
FIX: When using alt_pull on the weekly pull list, the xA0 unicode character caused an error
FIX: If a title had an invalid character in its filename that was replaced with a character already present in the title, it would not scan in during file-checking
FIX: When searching for a series (weekly pull-list / add a series), titles containing 'and' or '&' would return really mixed-up results
FIX: When post-processing, if the filename being processed had special characters (ie. comma) and differed from the nzbname, in some cases it would fail to find/move issues
IMP: Utilize internal comictagger to convert from cbr/cbz
IMP: Added more checks when post-processing to ensure files are handled correctly
IMP: Added meta-tag reading when importing series/issues - if previously tagged with CT, will reverse look-up the provided IssueID to reference the correct ComicID
IMP: If a scanned directory during import contains a cvinfo file, use it and force the ComicID onto the entire directory when importing a series
IMP: Manual meta-tagging of issues will no longer create temporary directories and/or files in the ComicLocation root, which caused problems for some users
FIX: Annuals weren't properly sorted upon loading of the comic details page for some series
IMP: Added some extra checks when validating/creating directories
FIX: Fixed a problem displaying some covers of .cbz files on the comic details page
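The global Copy/Move option works by mapping the configured file operation onto a single callable used for every post-processing transfer (the post-processor diff further down assigns `self.fileop` the same way). A minimal sketch, with `file_opts` standing in for the `mylar.FILE_OPTS` config value:

```python
import shutil

def resolve_fileop(file_opts):
    """Map the Copy/Move config value onto the shutil function used
    for every post-processing file operation."""
    if file_opts == 'copy':
        return shutil.copy
    # anything else falls back to the historical behaviour: move
    return shutil.move
```

With this in place, call sites need only `fileop(src, dst)` and stay agnostic about whether files are copied or moved.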

This commit is contained in:
evilhero 2016-01-26 02:49:56 -05:00
parent efe447f517
commit d182321d9b
41 changed files with 1732 additions and 10157 deletions

1
.gitignore vendored
View File

@ -10,3 +10,4 @@ Thumbs.db
*.torrent
ehtumbs.db
Thumbs.db
lib/comictaggerlib/ct_settings/

View File

@ -79,9 +79,13 @@ def main():
args = parser.parse_args()
if args.verbose:
mylar.VERBOSE = 2
elif args.quiet:
mylar.VERBOSE = 0
mylar.VERBOSE = True
if args.quiet:
mylar.QUIET = True
# Do an intial setup of the logger.
logger.initLogger(console=not mylar.QUIET, log_dir=False,
verbose=mylar.VERBOSE)
#if args.update:
# print('Attempting to update Mylar so things can work again...')

View File

@ -25,7 +25,7 @@
<link rel="shortcut icon" href="images/favicon.ico" type="image/x-icon">
${next.headIncludes()}
<script src="js/libs/modernizr-1.7.min.js"></script>
<script src="js/libs/modernizr-2.8.3.min.js"></script>
</head>
<body>
<%
@ -105,11 +105,10 @@
</footer>
<a href="#main" id="toTop"><span>Back to top</span></a>
</div>
<script src="//code.jquery.com/jquery-1.9.1.js"></script>
<!--<script src="http://code.jquery.com/ui/1.10.3/jquery-ui.js"></script> -->
<script src="js/libs/jquery-1.7.2.min.js"></script>
<script src="js/libs/jquery-ui.min.js"></script>
<script src="js/common.js"></script>
${next.javascriptIncludes()}

6
data/interfaces/default/comicdetails.html Executable file → Normal file
View File

@ -574,7 +574,7 @@
%elif (annual['Status'] == 'Wanted'):
<a href="#" title="Mark annual as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${annual['IssueID']}&ComicID=${annual['ComicID']}&ReleaseComicID=${annual['ReleaseComicID']}',$(this),'table')" data-success="'${annual['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>
%elif (annual['Status'] == 'Snatched'):
<a href="#" onclick="doAjaxCall('retryit?ComicName=${annual['ComicName'] |u}&ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&IssueNumber=${annual['Issue_Number']}&ComicYear=${annual['IssueDate']}&ReleaseComicID=${annual['ReleaseComicID']}', $(this),'table')" data-success="Retrying the same version of '${issue['ComicName']}' '${issue['Issue_Number']}'" title="Retry the same download again"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
<a href="#" onclick="doAjaxCall('retryit?ComicName=${annual['ComicName'] |u}&ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&IssueNumber=${annual['Issue_Number']}&ComicYear=${annual['IssueDate']}&ReleaseComicID=${annual['ReleaseComicID']}', $(this),'table')" data-success="Retrying the same version of '${annual['ComicName']}' '${annual['Issue_Number']}'" title="Retry the same download again"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
<a href="#" title="Mark annual as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${annual['IssueID']}&ComicID=${annual['ComicID']}&ReleaseComicID=${annual['ReleaseComicID']}',$(this),'table')" data-success="'${annual['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>
%if mylar.FAILED_DOWNLOAD_HANDLING:
<a href="#" title="Mark annual as Failed" onclick="doAjaxCall('unqueueissue?IssueID=${issue['IssueID']}&ComicID=${issue['ComicID']}&mode="failed"',$(this),'table')" data-success="'${issue['Issue_Number']}' has been marked as Failed"><img src="interfaces/default/images/failed.png" height="25" width="25" class="highqual" /></a>
@ -610,7 +610,7 @@
<a href="#" title="Archive" onclick="doAjaxCall('archiveissue?IssueID=${annual['IssueID']}',$(this),'table')"><img src="interfaces/default/images/archive_icon.png" height="25" width="25" title="Mark issue as Archived" class="highqual" /></a>
<a href="#" title="Add to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" class="highqual" /></a>
-->
<a href="#" onclick="doAjaxCall('retryit?ComicName=${annual['ComicName'] |u}&ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&IssueNumber=${annual['Issue_Number']}&ComicYear=${annual['IssueDate']}&ReleaseComicID=${annual['ReleaseComicID']}', $(this),'table')" data-success="Retrying the same version of '${issue['ComicName']}' '${issue['Issue_Number']}'" title="Retry the same download again"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
<a href="#" onclick="doAjaxCall('retryit?ComicName=${annual['ComicName'] |u}&ComicID=${annual['ComicID']}&IssueID=${annual['IssueID']}&IssueNumber=${annual['Issue_Number']}&ComicYear=${annual['IssueDate']}&ReleaseComicID=${annual['ReleaseComicID']}', $(this),'table')" data-success="Retrying the same version of '${annual['ComicName']}' '${annual['Issue_Number']}'" title="Retry the same download again"><img src="interfaces/default/images/retry_icon.png" height="25" width="25" class="highqual" /></a>
<a href="#" title="Mark annual as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${annual['IssueID']}&ComicID=${annual['ComicID']}',$(this),'table')" data-success="'${annual['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" class="highqual" /></a>
%else:
@ -822,7 +822,7 @@
"iDisplayLength": 10
});
resetFilters("issue");
resetFilters("issue", "annual");
setTimeout(function(){
initFancybox();
},1500);

View File

@ -156,12 +156,6 @@
<input type="checkbox" name="launch_browser" value="1" ${config['launch_browser']} /> <label>Launch Browser on Startup</label>
</div>
<!--
<div class="row checkbox">
<input type="checkbox" name="logverbose" value="2" ${config['logverbose']} /> <label>Verbose Logging</label>
<br/><small>*Use this only when experiencing problems*</small>
</div>
-->
<div class="row checkbox">
<input type="checkbox" name="syno_fix" value="1" ${config['syno_fix']} /> <label>Synology Fix</label>
<br/><small>*Use this if experiencing parsing problems*</small>
@ -244,7 +238,7 @@
<div class="row">
<label>File CHMOD</label>
<input type="text" name="chmod_file" value="${config['chmod_file']}" size="50">
<small>Permissions on created/moved directories</small>
<small>Permissions on created/moved files</small>
</div>
%if 'windows' not in mylar.OS_DETECT:
<div class="row">
@ -738,6 +732,21 @@
<input type="checkbox" id="post_processing" onclick="initConfigCheckbox($this));" name="post_processing" value="1" ${config['post_processing']} /><label>Enable Post-Processing<small> (not checked = NO post-processing/post-management)</small></label>
</div>
<div class="config">
<div class="row left">
<label>When Post-Processing
<select name="file_opts clearfix">
%for x in ['move', 'copy']:
<%
if config['file_opts'] == x:
outputselect = 'selected'
else:
outputselect = ''
%>
<option value=${x} ${outputselect}>${x}</option>
%endfor
</select> the files</label>
</div>
<br/>
<div class="row checkbox left clearfix">
<input type="checkbox" id="enable_check_folder" onclick="initConfigCheckbox($this));" name="enable_check_folder" value="1" ${config['enable_check_folder']} /><label>Enable Folder Monitoring<small></label>
</div>
@ -777,7 +786,7 @@
</div>
</fieldset>
<fieldset>
<legend>Metadata Tagging</legend><small class="heading"><span style="float: left; margin-right: .3em; margin-top: 4px;" class="ui-icon ui-icon-info"></span>ComicTagger is inclucded but configparser is required</small>
<legend>Metadata Tagging</legend><small class="heading"><span style="float: left; margin-right: .3em; margin-top: 4px;" class="ui-icon ui-icon-info"></span>ComicTagger is included but configparser is required</small>
<div class="row checkbox left clearfix">
<input id="enable_meta" type="checkbox" onclick="initConfigCheckbox($this));" name="enable_meta" value="1" ${config['enable_meta']} /><label>Enable Metadata Tagging</label>
</div>

View File

@ -171,13 +171,60 @@ table.display_no_select tr.heading2 td {
table.display_no_select td {
padding: 8px 10px;
font-size: 16px;
font-size: 12px;
}
table
table.display_no_select td.center {
text-align: center;
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* DataTables display for log screen
*/
table.display_log {
margin: 20px auto;
clear: both;
border:1px solid #EEE;
width: 100%;
/* Note Firefox 3.5 and before have a bug with border-collapse
* ( https://bugzilla.mozilla.org/show%5Fbug.cgi?id=155955 )
* border-spacing: 0; is one possible option. Conditional-css.com is
* useful for this kind of thing
*
* Further note IE 6/7 has problems when calculating widths with border width.
* It subtracts one px relative to the other browsers from the first column, and
* adds one to the end...
*
* If you want that effect I'd suggest setting a border-top/left on th/td's and
* then filling in the gaps with other borders.
*/
}
table.display_log thead th {
padding: 3px 18px 3px 10px;
background-color: white;
font-weight: bold;
}
table.display_log tfoot th {
padding: 3px 18px 3px 10px;
border-top: 1px solid black;
font-weight: bold;
}
table.display_log tr.heading2 td {
border-bottom: 1px solid #aaa;
}
table.display_log td {
padding: 4px 10px;
font-size: 12px;
}
table
table.display_log td.center {
text-align: center;
}
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* DataTables sorting

View File

@ -1304,21 +1304,17 @@ div#artistheader h2 a {
background-color: #FFF;
}
#log_table th#timestamp {
min-width: 150px;
min-width: 125px;
text-align: left;
}
#log_table th#level {
min-width: 60px;
max-width: 60px;
text-align: left;
}
#log_table th#message {
min-width: 500px;
min-width: 600px;
text-align: left;
}
#log_table td {
font-size: 12px;
padding: 2px 10px;
}
#searchresults_table th#name {
min-width: 525px;
text-align: left;

View File

@ -52,6 +52,9 @@
</tr>
</table>
</div>
</div>
<div class="table_wrapper">
<form action="markImports" method="get" id="markImports">
<div id="markcomic">
<select name="action" onChange="doAjaxCall('markImports',$(this),'table',true);" data-error="You didn't select any comics">
@ -61,6 +64,7 @@
</select>
<input type="hidden" value="Go">
</div>
<table class="display" id="impresults_table">
<tr/><tr/>
<tr><center><h3>To be Imported</h3></center></tr>
@ -79,19 +83,24 @@
%if results:
%for result in results:
<%
grade = 'X'
if result['DisplayName'] is None:
displayname = result['ComicName']
else:
displayname = result['DisplayName']
endif
%>
<tr>
<tr class="grade${grade}">
<td id="select"><input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="${result['ComicName']}" value="${result['ComicName']}" class="checkbox" />
<td id="comicname">${displayname}</td>
<td id="comicname">${displayname}
%if result['ComicID'] is not None:
[${result['ComicID']}]
%endif
</td>
<td id="comicyear"><title="${result['ComicYear']}">${result['ComicYear']}</td>
<td id="comicissues"><title="${result['IssueCount']}">${result['IssueCount']}</td>
<td id="status">
%if result['ComicID']:
%if result['ComicID'] and result['Status'] == 'Imported':
<a href="comicDetails?ComicID=${result['ComicID']}">${result['Status']}</a>
%else:
${result['Status']}
@ -103,7 +112,7 @@
<td id="importdate">${result['ImportDate']}</td>
<td id="addcomic">
%if result['Status'] == 'Not Imported':
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
[<a href="#" title="Import ${result['ComicName']} into your watchlist" onclick="doAjaxCall('preSearchit?ComicName=${result['ComicName']| u}&displaycomic=${displayname}| u}&comicid=${result['ComicID']}',$(this),'table')" data-success="Imported ${result['ComicName']}">Import</a>]
%endif
[<a href="deleteimport?ComicName=${result['ComicName']}">Remove</a>]
%if result['implog'] is not None:
@ -125,39 +134,67 @@
</tbody>
</table>
</form>
</div>
</div>
</%def>
<%def name="headIncludes()">
<link rel="stylesheet" href="interfaces/default/css/data_table.css">
</%def>
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script type="text/javascript">
$('.showlog').click(function (event) {
var width = 575,
height = 400,
left = ($(window).width() - width) / 2,
top = ($(window).height() - height) / 2,
url = this.href,
opts = 'status=1' +
',width=' + width +
',height=' + height +
',top=' + top +
',left=' + left;
<script type="text/javascript">
$('.showlog').click(function (event) {
var width = 575,
height = 400,
left = ($(window).width() - width) / 2,
top = ($(window).height() - height) / 2,
url = this.href,
opts = 'status=1' +
',width=' + width +
',height=' + height +
',top=' + top +
',left=' + left;
window.open(url, 'twitte', opts);
window.open(url, 'twitte', opts);
return false;
});
</script>
<script>
function initThisPage() {
jQuery( "#tabs" ).tabs();
initActions();
$('#impresults_table').dataTable(
{
"bDestroy": true,
//"aoColumnDefs": [
// { 'bSortable': false, 'aTargets': [ 0 , 2 ] },
// { 'bVisible': false, 'aTargets': [2] },
// { 'sType': 'numeric', 'aTargets': [2] },
// { 'iDataSort': [2], 'aTargets': [3] }
//],
"aLengthMenu": [[10, 25, 50, -1], [10, 25, 50, 'All' ]],
"oLanguage": {
"sLengthMenu":"Show _MENU_ results per page",
"sEmptyTable": "No results",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ results",
"sInfoEmpty":"Showing 0 to 0 of 0 results",
"sInfoFiltered":"(filtered from _MAX_ total results)",
"sSearch" : ""},
"bStateSave": true,
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": [2, 'desc']
});
resetFilters("result");
}
return false;
});
</script>
<script>
function initThisPage() {
jQuery( "#tabs" ).tabs();
initActions();
};
$(document).ready(function() {
initThisPage();
});
</script>
$(document).ready(function(){
initThisPage();
});
$(window).load(function(){
initFancybox();
});
</script>
</%def>

View File

@ -4,59 +4,46 @@
from mylar import helpers
%>
<%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
<a id="menu_link_delete" href="#" onclick="doAjaxCall('clearLogs',$(this),'table')" data-success="All logs cleared">Clear Log</a>
<a id="menu_link_edit" href="toggleVerbose">Toggle Debug Log
%if mylar.VERBOSE:
ON
%endif
</a>
</div>
</div>
</%def>
<%def name="body()">
<div class="title">
<h1 class="clearfix"><img src="interfaces/default/images/icon_logs.png" alt="Logs"/>Logs</h1>
</div>
<table class="display" id="log_table">
<!-- <form action="log_change" method="GET">
<div class="row">
<label>Interface</label>
<select name="log_level">
%for loglevel in ['Info', 'Warning', 'Debug']:
<%
if loglevel == mylar.LOG_LEVEL:
selected = 'selected'
else:
selected = ''
%>
<option value="${loglevel}" ${selected}>${loglevel}</option>
%endfor
</select>
</div>
<input type="button" value="Go!" onclick="doAjaxCall('log_change?log_level=${loglevel}',$(this),'table',true)" data-success="Log level changed to ${loglevel}">
</form>
-->
<table class="display_log" id="log_table">
<thead>
<tr>
<th id="timestamp">Timestamp</th>
<th id="level">Level</th>
<!-- <th id="thread">Thread</th> -->
<th id="message">Message</th>
</tr>
</thead>
<tbody>
%for line in lineList:
<%
timestamp, message, level, threadname = line
if level == 'WARNING' or level == 'ERROR':
grade = 'X'
else:
grade = 'Z'
if threadname is None:
threadname = ''
%>
<tr class="grade${grade}">
<td id="timestamp">${timestamp}</td>
<td id="level">${level}</td>
<!-- <td id="thread">${threadname}</td> -->
<td id="message">${message}</td>
</tr>
%endfor
</tbody>
</table>
</tbody>
</table>
<br>
<div align="center">Refresh rate:
<select id="refreshrate" onchange="setRefresh()">
<option value="0" selected="selected">No Refresh</option>
<option value="5">5 Seconds</option>
<option value="15">15 Seconds</option>
<option value="30">30 Seconds</option>
<option value="60">60 Seconds</option>
<option value="300">5 Minutes</option>
<option value="600">10 Minutes</option>
</select></div>
</%def>
<%def name="headIncludes()">
@ -65,23 +52,65 @@
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script>
$(document).ready(function()
{
$('#log_table').dataTable(
{
"oLanguage": {
"sLengthMenu":"Show _MENU_ lines per page",
"sEmptyTable": "No log information available",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ lines",
"sInfoEmpty":"Showing 0 to 0 of 0 lines",
"sInfoFiltered":"(filtered from _MAX_ total lines)"},
"bStateSave": true,
"iDisplayLength": 100,
"sPaginationType": "full_numbers",
"aaSorting": []
<script>
$(document).ready(function() {
initActions();
});
});
</script>
$('#log_table').dataTable( {
"bProcessing": true,
"bServerSide": true,
"sAjaxSource": 'getLog',
"sPaginationType": "full_numbers",
"aaSorting": [[0, 'desc']],
"iDisplayLength": 25,
"bStateSave": true,
"oLanguage": {
"sSearch":"Filter:",
"sLengthMenu":"Show _MENU_ lines per page",
"sEmptyTable": "No log information available",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ lines",
"sInfoEmpty":"Showing 0 to 0 of 0 lines",
"sInfoFiltered":"(filtered from _MAX_ total lines)"},
"fnRowCallback": function (nRow, aData, iDisplayIndex, iDisplayIndexFull) {
if (aData[1] === "ERROR") {
$('td', nRow).closest('tr').addClass("gradeX");
} else if (aData[1] === "WARNING") {
$('td', nRow).closest('tr').addClass("gradeW");
} else {
$('td', nRow).closest('tr').addClass("gradeZ");
}
return nRow;
},
"fnDrawCallback": function (o) {
// Jump to top of page
$('html,body').scrollTop(0);
},
"fnServerData": function ( sSource, aoData, fnCallback ) {
/* Add some extra data to the sender */
$.getJSON(sSource, aoData, function (json) {
fnCallback(json)
});
}
});
});
</script>
<script>
var timer;
function setRefresh()
{
refreshrate = document.getElementById('refreshrate');
if(refreshrate != null)
{
if(timer)
{
clearInterval(timer);
}
if(refreshrate.value != 0)
{
timer = setInterval("$('#log_table').dataTable().fnDraw()",1000*refreshrate.value);
}
}
}
</script>
</%def>
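The DataTables call above switches the log view from loading the whole file to server-side paging against a `getLog` source. A minimal sketch of the JSON contract that client expects (field names follow the DataTables 1.9 server-side protocol; the handler itself is an illustration, not Mylar's actual implementation):

```python
def get_log_page(lines, start=0, length=25, search=''):
    """Return one page of [timestamp, level, message] log rows in the
    shape a DataTables 1.9 server-side source must produce."""
    if search:
        needle = search.lower()
        filtered = [l for l in lines if needle in ' '.join(l).lower()]
    else:
        filtered = list(lines)
    return {
        'iTotalRecords': len(lines),            # rows before filtering
        'iTotalDisplayRecords': len(filtered),  # rows after filtering
        'aaData': filtered[start:start + length],
    }
```

Paging server-side is what makes the per-page log loading noted in the commit message cheap even for very large log files.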

View File

@ -51,12 +51,10 @@
<div class="row checkbox">
<input type="checkbox" name="autoadd" id="autoadd" value="1" ${checked(mylar.ADD_COMICS)}><label>Auto-add new series</label>
</div>
<!--
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_metadata" id="imp_metadata" value="1" ${checked(mylar.IMP_METADATA)}><label>Use existing Metadata</label>
<small>Use existing Metadata to better locate series for import</small>
</div>
-->
<div class="row checkbox">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="imp_move" onclick="initConfigCheckbox($this));" id="imp_move" value="1" ${checked(mylar.IMP_MOVE)}><label>Move files into corresponding Series directory</label>
<small>Leaving this unchecked will not move anything, but will mark the issues as Archived</small>

View File

@ -96,7 +96,7 @@
<td id="years">${item['SpanYears']}</td>
<td id="have"><span title="${item['percent']}"></span>${css}<div style="width:${item['percent']}%"><span class="progressbar-front-text">${item['Have']}/${item['Total']}</span></div></td>
<td id="options">
<a title="Remove from Story Arc Watchlist" onclick="doAjaxCall('removefromreadlist?StoryArcID=${item['StoryArcID']}',$(this),'table')" data-success="Sucessfully removed ${item['StoryArc']} from list."><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
<a title="Remove from Story Arc Watchlist" onclick="doAjaxCall('removefromreadlist?StoryArcID=${item['StoryArcID']}&ArcName=${item['StoryArc']}',$(this),'table')" data-success="Sucessfully removed ${item['StoryArc']} from list."><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
%if item['CV_ArcID']:
<a title="Refresh Series" onclick="doAjaxCall('addStoryArc_thread?arcid=${item['StoryArcID']}&cvarcid=${item['CV_ArcID']}&storyarcname=${item['StoryArc']}&arcrefresh=True',$(this),'table')" data-success="Now refreshing ${item['StoryArc']}."><img src="interfaces/default/images/refresh.png" height="25" width="25" /></a>
%endif

17
data/js/common.js Normal file
View File

@ -0,0 +1,17 @@
window.log = function(){
log.history = log.history || [];
log.history.push(arguments);
arguments.callee = arguments.callee.caller;
if (this.console) {
console.log(Array.prototype.slice.call(arguments));
}
};
function toggle(source) {
checkboxes = document.getElementsByClassName('checkbox');
for (var i in checkboxes) {
checkboxes[i].checked = source.checked;
}
}

4
data/js/libs/jquery-1.11.1.min.js vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

0
data/js/libs/jquery-1.7.2.min.js vendored Executable file → Normal file
View File

0
data/js/libs/jquery.dataTables.min.js vendored Executable file → Normal file
View File

File diff suppressed because one or more lines are too long

4
data/js/libs/modernizr-2.8.3.min.js vendored Executable file

File diff suppressed because one or more lines are too long

0
lib/comictaggerlib/cli.py Normal file → Executable file
View File

View File

@ -18,7 +18,6 @@ See the License for the specific language governing permissions and
limitations under the License.
"""
#import sys
import os
import sys
import configparser
@ -33,10 +32,7 @@ class ComicTaggerSettings:
@staticmethod
def getSettingsFolder():
filename_encoding = sys.getfilesystemencoding()
if platform.system() == "Windows":
folder = os.path.join( os.environ['APPDATA'], 'ComicTagger' )
else:
folder = os.path.join( os.path.expanduser('~') , '.ComicTagger')
folder = os.path.join(ComicTaggerSettings.baseDir(), 'ct_settings')
if folder is not None:
folder = folder.decode(filename_encoding)
return folder
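The hunk above is what makes the bundled ComicTagger portable again: instead of branching on the platform for APPDATA versus ~/.ComicTagger, settings now live in a `ct_settings` folder beside the library itself. A sketch of the new resolution, with `base_dir` standing in for `ComicTaggerSettings.baseDir()`:

```python
import os

def settings_folder(base_dir):
    """Resolve the ComicTagger settings folder relative to the install
    directory, so one path rule covers Windows and *nix alike."""
    return os.path.join(base_dir, 'ct_settings')
```

This also explains the new `lib/comictaggerlib/ct_settings/` entry in .gitignore at the top of the commit: per-install settings no longer belong in the repository.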

View File

@ -40,8 +40,6 @@ class PostProcessor(object):
EXISTS_SMALLER = 3
DOESNT_EXIST = 4
# IGNORED_FILESTRINGS = [ "" ]
NZB_NAME = 1
FOLDER_NAME = 2
FILE_NAME = 3
@ -53,18 +51,6 @@ class PostProcessor(object):
file_path: The path to the file to be processed
nzb_name: The name of the NZB which resulted in this file being downloaded (optional)
"""
# absolute path to the folder that is being processed
#self.folder_path = ek.ek(os.path.dirname, ek.ek(os.path.abspath, file_path))
# full path to file
#self.file_path = file_path
# file name only
#self.file_name = ek.ek(os.path.basename, file_path)
# the name of the folder only
#self.folder_name = ek.ek(os.path.basename, self.folder_path)
# name of the NZB that resulted in this folder
self.nzb_name = nzb_name
self.nzb_folder = nzb_folder
@ -72,10 +58,15 @@ class PostProcessor(object):
self.module = module + '[POST-PROCESSING]'
else:
self.module = '[POST-PROCESSING]'
if queue: self.queue = queue
#self.in_history = False
#self.release_group = None
#self.is_proper = False
if queue:
self.queue = queue
if mylar.FILE_OPTS == 'copy':
self.fileop = shutil.copy
else:
self.fileop = shutil.move
self.valreturn = []
self.log = ''
@ -246,8 +237,8 @@ class PostProcessor(object):
wv_comicversion = wv['ComicVersion']
wv_publisher = wv['ComicPublisher']
wv_total = wv['Total']
logger.fdebug('Checking ' + wv['ComicName'] + ' [' + str(wv['ComicYear']) + '] -- ' + str(wv['ComicID']))
if mylar.FOLDER_SCAN_LOG_VERBOSE:
logger.fdebug('Checking ' + wv['ComicName'] + ' [' + str(wv['ComicYear']) + '] -- ' + str(wv['ComicID']))
#force it to use the Publication Date of the latest issue instead of the Latest Date (which could be anything)
latestdate = myDB.select('SELECT IssueDate from issues WHERE ComicID=? order by ReleaseDate DESC', [wv['ComicID']])
@ -565,16 +556,27 @@ class PostProcessor(object):
metaresponse = "fail"
if metaresponse == "fail":
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file.')
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file. Attempting to continue without metatagging...')
elif metaresponse == "unrar error":
logger.error(module + ' This is a corrupt archive - whether CRC errors or it is incomplete. Marking as BAD, and retrying it.')
continue
#launch failed download handling here.
elif metaresponse.startswith('file not found'):
filename_in_error = os.path.split(metaresponse, '||')[1]
self._log("The file cannot be found in the location provided for metatagging to be used [" + filename_in_error + "]. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...")
logger.error(module + ' The file cannot be found in the location provided for metatagging to be used [' + filename_in_error + ']. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...')
else:
ofilename = os.path.split(metaresponse)[1]
logger.info(module + ' Sucessfully wrote metadata to .cbz (' + ofilename + ') - Continuing..')
self._log('Sucessfully wrote metadata to .cbz (' + ofilename + ') - proceeding...')
filechecker.validateAndCreateDirectory(grdst, True, module=module)
checkdirectory = filechecker.validateAndCreateDirectory(grdst, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
dfilename = ofilename
@ -602,12 +604,11 @@ class PostProcessor(object):
grab_src = os.path.join(self.nzb_folder, ofilename)
logger.fdebug(module + ' Source Path : ' + grab_src)
logger.info(module + ' Moving ' + str(ofilename) + ' into directory : ' + str(grab_dst))
logger.info(module + ' ' + mylar.FILE_OPTS + 'ing ' + str(ofilename) + ' into directory : ' + str(grab_dst))
try:
shutil.move(grab_src, grab_dst)
self.fileop(grab_src, grab_dst)
except (OSError, IOError):
logger.warn(module + ' Failed to move directory - check directories and manually re-run.')
logger.warn(module + ' Failed to ' + mylar.FILE_OPTS + ' directory - check directories and manually re-run.')
return
#tidyup old path
@ -621,16 +622,18 @@ class PostProcessor(object):
logger.fdebug(module + ' Removed temporary directory : ' + self.nzb_folder)
#delete entry from nzblog table
if 'S' in sandwich:
IssArcID = 'S' + str(ml['IssueArcID'])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', [IssArcID,ml['StoryArc']])
logger.fdebug(module + ' IssueArcID: ' + str(ml['IssueArcID']))
ctrlVal = {"IssueArcID": ml['IssueArcID']}
newVal = {"Status": "Downloaded",
"Location": grab_dst}
logger.fdebug('writing: ' + str(newVal) + ' -- ' + str(ctrlVal))
myDB.upsert("readinglist", newVal, ctrlVal)
#if it was downloaded via mylar from the storyarc section, it will have an 'S' in the nzblog
#if it was downloaded outside of mylar and/or not from the storyarc section, it will be a normal issueid in the nzblog
#IssArcID = 'S' + str(ml['IssueArcID'])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', ['S' + str(ml['IssueArcID']),ml['StoryArc']])
myDB.action('DELETE from nzblog WHERE IssueID=? AND SARC=?', [ml['IssueArcID'],ml['StoryArc']])
logger.fdebug(module + ' IssueArcID: ' + str(ml['IssueArcID']))
ctrlVal = {"IssueArcID": ml['IssueArcID']}
newVal = {"Status": "Downloaded",
"Location": grab_dst}
logger.fdebug('writing: ' + str(newVal) + ' -- ' + str(ctrlVal))
myDB.upsert("readinglist", newVal, ctrlVal)
logger.fdebug(module + ' [' + ml['StoryArc'] + '] Post-Processing completed for: ' + grab_dst)
@ -797,10 +800,14 @@ class PostProcessor(object):
metaresponse = "fail"
if metaresponse == "fail":
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file.')
logger.fdebug(module + ' Unable to write metadata successfully - check mylar.log file. Attempting to continue without metatagging...')
elif metaresponse == "unrar error":
logger.error(module + ' This is a corrupt archive - whether CRC errors or it is incomplete. Marking as BAD, and retrying it.')
#launch failed download handling here.
elif metaresponse.startswith('file not found'):
filename_in_error = os.path.split(metaresponse, '||')[1]
self._log("The file cannot be found in the location provided for metatagging [" + filename_in_error + "]. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...")
logger.error(module + ' The file cannot be found in the location provided for metagging [' + filename_in_error + ']. Please verify it exists, and re-run if necessary. Attempting to continue without metatagging...')
else:
ofilename = os.path.split(metaresponse)[1]
logger.info(module + ' Sucessfully wrote metadata to .cbz (' + ofilename + ') - Continuing..')
@ -817,7 +824,12 @@ class PostProcessor(object):
else:
grdst = mylar.DESTINATION_DIR
filechecker.validateAndCreateDirectory(grdst, True, module=module)
checkdirectory = filechecker.validateAndCreateDirectory(grdst, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
if sandwich is not None and 'S' in sandwich:
#if from a StoryArc, check to see if we're appending the ReadingOrder to the filename
@ -840,24 +852,26 @@ class PostProcessor(object):
self._log("Source Path : " + grab_src)
logger.info(module + ' Source Path : ' + grab_src)
logger.info(module + ' Moving ' + str(ofilename) + ' into directory : ' + str(grab_dst))
logger.info(module + ' ' + mylar.FILE_OPTS + 'ing ' + str(ofilename) + ' into directory : ' + str(grab_dst))
try:
shutil.move(grab_src, grab_dst)
self.fileop(grab_src, grab_dst)
except (OSError, IOError):
self._log("Failed to move directory - check directories and manually re-run.")
logger.debug(module + ' Failed to move directory - check directories and manually re-run.')
self._log("Failed to " + mylar.FILE_OPTS + " directory - check directories and manually re-run.")
logger.debug(module + ' Failed to ' + mylar.FILE_OPTS + ' directory - check directories and manually re-run.')
return
#tidyup old path
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory.")
logger.debug(module + ' Failed to remove temporary directory - check directory and manually re-run.')
return
if mylar.FILE_OPTS == 'move':
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory.")
logger.debug(module + ' Failed to remove temporary directory - check directory and manually re-run.')
return
logger.debug(module + ' Removed temporary directory : ' + self.nzb_folder)
self._log("Removed temporary directory : " + self.nzb_folder)
logger.debug(module + ' Removed temporary directory : ' + self.nzb_folder)
self._log("Removed temporary directory : " + self.nzb_folder)
#delete entry from nzblog table
myDB.action('DELETE from nzblog WHERE issueid=?', [issueid])
@@ -882,10 +896,15 @@ class PostProcessor(object):
if self.nzb_name == 'Manual Run':
#loop through the hits here.
if len(manual_list) == 0:
if len(manual_list) == 0 and len(manual_arclist) == 0:
logger.info(module + ' No matches for Manual Run ... exiting.')
return
elif len(manual_arclist) > 0 and len(manual_list) == 0:
logger.info(module + ' Manual post-processing completed for ' + str(len(manual_arclist)) + ' story-arc issues.')
return
elif len(manual_arclist) > 0:
logger.info(module + ' Manual post-processing completed for ' + str(len(manual_arclist)) + ' story-arc issues.')
i = 0
for ml in manual_list:
i+=1
@@ -1149,19 +1168,49 @@ class PostProcessor(object):
ofilename = None
#if it's a Manual Run, use the ml['ComicLocation'] for the exact filename.
if ml is None:
ofilename = None
for root, dirnames, filenames in os.walk(self.nzb_folder, followlinks=True):
for filename in filenames:
if filename.lower().endswith(extensions):
odir = root
logger.fdebug(module + ' odir (root): ' + odir)
ofilename = filename
logger.fdebug(module + ' ofilename: ' + ofilename)
path, ext = os.path.splitext(ofilename)
try:
if odir is None:
logger.fdebug(module + ' No root folder set.')
odir = self.nzb_folder
except:
logger.error(module + ' Unable to determine root folder due to a preceding error - defaulting to the download folder.')
odir = self.nzb_folder
if ofilename is None:
self._log("Unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.")
logger.error(module + ' unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
logger.fdebug(module + ' odir: ' + odir)
logger.fdebug(module + ' ofilename: ' + ofilename)
#if meta-tagging is not enabled, we need to declare the check as being fail
#if meta-tagging is enabled, it gets changed just below to a default of pass
pcheck = "fail"
#tag the meta.
if mylar.ENABLE_META:
self._log("Metatagging enabled - proceeding...")
logger.fdebug(module + ' Metatagging enabled - proceeding...')
pcheck = "pass"
try:
import cmtagmylar
if ml is None:
pcheck = cmtagmylar.run(self.nzb_folder, issueid=issueid, comversion=comversion)
pcheck = cmtagmylar.run(self.nzb_folder, issueid=issueid, comversion=comversion, filename=os.path.join(odir, ofilename))
else:
pcheck = cmtagmylar.run(self.nzb_folder, issueid=issueid, comversion=comversion, manual="yes", filename=ml['ComicLocation'])
@@ -1186,10 +1235,23 @@ class PostProcessor(object):
"issuenumber": issuenzb['Issue_Number'],
"annchk": annchk})
return self.queue.put(self.valreturn)
elif pcheck.startswith('file not found'):
filename_in_error = pcheck.split('||')[1]
self._log("The file cannot be found in the location provided [" + filename_in_error + "]. Please verify it exists, and re-run if necessary. Aborting.")
logger.error(module + ' The file cannot be found in the location provided [' + filename_in_error + ']. Please verify it exists, and re-run if necessary. Aborting')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
else:
otofilename = pcheck
#need to set the filename source as the new name of the file returned from comictagger.
ofilename = os.path.split(pcheck)[1]
ext = os.path.splitext(ofilename)[1]
self._log("Successfully wrote metadata to .cbz - Continuing..")
logger.info(module + ' Successfully wrote metadata to .cbz (' + os.path.split(otofilename)[1] + ') - Continuing..')
logger.info(module + ' Successfully wrote metadata to .cbz (' + ofilename + ') - Continuing..')
#if this is successful, and we're copying to dst then set the file op to move this cbz so we
#don't leave a cbr/cbz in the original directory.
#self.fileop = shutil.move
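The `pcheck` value returned by `cmtagmylar.run` doubles as a status channel: `"fail"`, `"unrar error"`, a `'file not found||<path>'` marker, or the path of the tagged file. A minimal sketch of parsing the marker on its `||` delimiter (the marker format is taken from the surrounding code; the helper name is an assumption):

```python
def filename_from_error(pcheck):
    """Extract the offending filename from a 'file not found||<path>'
    status string returned by the tagger."""
    return pcheck.split('||')[1]

result = filename_from_error('file not found||/downloads/issue #1.cbz')
```

Note that a plain `str.split` is required here; `os.path.split` takes a single path argument and cannot split on an arbitrary delimiter.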
#Run Pre-script
if mylar.ENABLE_PRE_SCRIPTS:
@@ -1229,45 +1291,53 @@ class PostProcessor(object):
#if it's a Manual Run, use the ml['ComicLocation'] for the exact filename.
if ml is None:
ofilename = None
for root, dirnames, filenames in os.walk(self.nzb_folder, followlinks=True):
for filename in filenames:
if filename.lower().endswith(extensions):
odir = root
logger.fdebug(module + ' odir (root): ' + odir)
ofilename = filename
logger.fdebug(module + ' ofilename: ' + ofilename)
path, ext = os.path.splitext(ofilename)
try:
if odir is None:
logger.fdebug(module + ' No root folder set.')
odir = self.nzb_folder
except:
logger.error(module + ' unable to set root folder. Forcing it due to some error above most likely.')
odir = self.nzb_folder
# if ml is None:
# ofilename = None
# for root, dirnames, filenames in os.walk(self.nzb_folder, followlinks=True):
# for filename in filenames:
# if filename.lower().endswith(extensions):
# odir = root
# logger.fdebug(module + ' odir (root): ' + odir)
# ofilename = filename
# logger.fdebug(module + ' ofilename: ' + ofilename)
# path, ext = os.path.splitext(ofilename)
# try:
# if odir is None:
# logger.fdebug(module + ' No root folder set.')
# odir = self.nzb_folder
# except:
# logger.error(module + ' unable to set root folder. Forcing it due to some error above most likely.')
# odir = self.nzb_folder
#
# if ofilename is None:
# self._log("Unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.")
# logger.error(module + ' unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.')
# self.valreturn.append({"self.log": self.log,
# "mode": 'stop'})
# return self.queue.put(self.valreturn)
# logger.fdebug(module + ' odir: ' + odir)
# logger.fdebug(module + ' ofilename: ' + ofilename)
if ofilename is None:
self._log("Unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.")
logger.error(module + ' unable to locate a valid cbr/cbz file. Aborting post-processing for this filename.')
if ml:
# else:
if pcheck == "fail":
odir, ofilename = os.path.split(ml['ComicLocation'])
else:
odir = os.path.split(ml['ComicLocation'])[0]
logger.fdebug(module + ' ofilename:' + ofilename)
#ofilename = otofilename
if any([ofilename == odir, ofilename == odir[:-1], ofilename == '']):
self._log("There was a problem deciphering the filename/directory - please verify that the filename : [" + ofilename + "] exists in location [" + odir + "]. Aborting.")
logger.error(module + ' There was a problem deciphering the filename/directory - please verify that the filename : [' + ofilename + '] exists in location [' + odir + ']. Aborting.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
logger.fdebug(module + ' odir: ' + odir)
logger.fdebug(module + ' ofilename: ' + ofilename)
else:
if pcheck == "fail":
otofilename = ml['ComicLocation']
logger.fdebug(module + ' otofilename:' + otofilename)
odir, ofilename = os.path.split(otofilename)
logger.fdebug(module + ' odir: ' + odir)
logger.fdebug(module + ' ofilename: ' + ofilename)
path, ext = os.path.splitext(ofilename)
logger.fdebug(module + ' path: ' + path)
ext = os.path.splitext(ofilename)[1]
logger.fdebug(module + ' ext:' + ext)
if ofilename is None:
if ofilename is None or ofilename == '':
logger.error(module + ' Aborting PostProcessing - the filename does not exist in the location given. Make sure that ' + self.nzb_folder + ' exists and is the correct location.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
@@ -1297,7 +1367,13 @@ class PostProcessor(object):
#src = os.path.join(self.nzb_folder, ofilename)
src = os.path.join(odir, ofilename)
filechecker.validateAndCreateDirectory(comlocation, True, module=module)
checkdirectory = filechecker.validateAndCreateDirectory(comlocation, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
if mylar.LOWERCASE_FILENAMES:
dst = os.path.join(comlocation, (nfilename + ext).lower())
@@ -1319,36 +1395,40 @@ class PostProcessor(object):
if mylar.RENAME_FILES:
if str(ofilename) != str(nfilename + ext):
logger.fdebug(module + ' Renaming ' + os.path.join(odir, ofilename) + ' ..to.. ' + os.path.join(odir, nfilename + ext))
os.rename(os.path.join(odir, ofilename), os.path.join(odir, nfilename + ext))
#if mylar.FILE_OPTS == 'move':
# os.rename(os.path.join(odir, ofilename), os.path.join(odir, nfilename + ext))
# else:
# self.fileop(os.path.join(odir, ofilename), os.path.join(odir, nfilename + ext))
else:
logger.fdebug(module + ' Filename is identical to the original - not renaming.')
#src = os.path.join(self.nzb_folder, str(nfilename + ext))
src = os.path.join(odir, nfilename + ext)
src = os.path.join(odir, ofilename)
try:
shutil.move(src, dst)
self.fileop(src, dst)
except (OSError, IOError):
self._log("Failed to move directory - check directories and manually re-run.")
self._log("Failed to " + mylar.FILE_OPTS + " directory - check directories and manually re-run.")
self._log("Post-Processing ABORTED.")
logger.warn(module + ' Failed to move directory : ' + src + ' to ' + dst + ' - check directory and manually re-run')
logger.warn(module + ' Failed to ' + mylar.FILE_OPTS + ' directory : ' + src + ' to ' + dst + ' - check directory and manually re-run')
logger.warn(module + ' Post-Processing ABORTED')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
#tidyup old path
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory - check directory and manually re-run.")
self._log("Post-Processing ABORTED.")
logger.warn(module + ' Failed to remove temporary directory : ' + self.nzb_folder)
logger.warn(module + ' Post-Processing ABORTED')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
self._log("Removed temporary directory : " + self.nzb_folder)
logger.fdebug(module + ' Removed temporary directory : ' + self.nzb_folder)
if mylar.FILE_OPTS == 'move':
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory - check directory and manually re-run.")
self._log("Post-Processing ABORTED.")
logger.warn(module + ' Failed to remove temporary directory : ' + self.nzb_folder)
logger.warn(module + ' Post-Processing ABORTED')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
self._log("Removed temporary directory : " + self.nzb_folder)
logger.fdebug(module + ' Removed temporary directory : ' + self.nzb_folder)
else:
#downtype = for use with updater on history table to set status to 'Post-Processed'
downtype = 'PP'
@@ -1356,39 +1436,40 @@ class PostProcessor(object):
src = os.path.join(odir, ofilename)
if mylar.RENAME_FILES:
if str(ofilename) != str(nfilename + ext):
logger.fdebug(module + ' Renaming ' + os.path.join(odir, str(ofilename)) + ' ..to.. ' + os.path.join(odir, self.nzb_folder, str(nfilename + ext)))
os.rename(os.path.join(odir, str(ofilename)), os.path.join(odir, str(nfilename + ext)))
src = os.path.join(odir, str(nfilename + ext))
logger.fdebug(module + ' Renaming ' + os.path.join(odir, str(ofilename))) #' ..to.. ' + os.path.join(odir, self.nzb_folder, str(nfilename + ext)))
#os.rename(os.path.join(odir, str(ofilename)), os.path.join(odir, str(nfilename + ext)))
#src = os.path.join(odir, str(nfilename + ext))
else:
logger.fdebug(module + ' Filename is identical to the original - not renaming.')
logger.fdebug(module + ' odir src : ' + src)
logger.fdebug(module + ' Moving ' + src + ' ... to ... ' + dst)
logger.fdebug(module + ' ' + mylar.FILE_OPTS + 'ing ' + src + ' ... to ... ' + dst)
try:
shutil.move(src, dst)
self.fileop(src, dst)
except (OSError, IOError):
logger.fdebug(module + ' Failed to move directory - check directories and manually re-run.')
logger.fdebug(module + ' Failed to ' + mylar.FILE_OPTS + ' directory - check directories and manually re-run.')
logger.fdebug(module + ' Post-Processing ABORTED.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
logger.fdebug(module + ' Successfully moved to : ' + dst)
logger.info(module + ' ' + mylar.FILE_OPTS + ' successful to : ' + dst)
#tidyup old path
try:
if os.path.isdir(odir) and odir != self.nzb_folder:
logger.fdebug(module + 'self.nzb_folder: ' + self.nzb_folder)
# check to see if the directory is empty or not.
if not os.listdir(odir):
logger.fdebug(module + ' Tidying up. Deleting folder : ' + odir)
shutil.rmtree(odir)
if mylar.FILE_OPTS == 'move':
#tidyup old path
try:
if os.path.isdir(odir) and odir != self.nzb_folder:
logger.fdebug(module + ' self.nzb_folder: ' + self.nzb_folder)
# check to see if the directory is empty or not.
if not os.listdir(odir):
logger.fdebug(module + ' Tidying up. Deleting folder : ' + odir)
shutil.rmtree(odir)
else:
raise OSError(module + ' ' + odir + ' not empty. Skipping removal of directory - this will either be caught in further post-processing or it will have to be removed manually.')
else:
raise OSError(module + ' ' + odir + ' not empty. Skipping removal of directory - this will either be caught in further post-processing or it will have to be removed manually.')
else:
raise OSError(module + ' ' + odir + ' unable to remove at this time.')
except (OSError, IOError):
logger.fdebug(module + ' Failed to remove temporary directory (' + odir + ') - Processing will continue, but manual removal is necessary')
raise OSError(module + ' ' + odir + ' unable to remove at this time.')
except (OSError, IOError):
logger.fdebug(module + ' Failed to remove temporary directory (' + odir + ') - Processing will continue, but manual removal is necessary')
#Hopefully set permissions on downloaded file
if mylar.OS_DETECT != 'windows':
@@ -1402,6 +1483,12 @@ class PostProcessor(object):
logger.error(module + ' Failed to change file permissions. Ensure that the user running Mylar has proper permissions to change permissions in : ' + dst)
logger.fdebug(module + ' Continuing post-processing but unable to change file permissions in ' + dst)
#let's reset the fileop to the original setting just in case it's a manual pp run
if mylar.FILE_OPTS == 'copy':
self.fileop = shutil.copy
else:
self.fileop = shutil.move
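The global Copy/Move option works by binding `self.fileop` to either `shutil.copy` or `shutil.move` depending on `mylar.FILE_OPTS`, then calling `self.fileop(src, dst)` everywhere a plain `shutil.move` used to be. A standalone sketch of that dispatch (the helper name is an assumption):

```python
import os
import shutil
import tempfile

def pick_fileop(file_opts):
    """Map the FILE_OPTS setting to the shutil function used during
    post-processing: 'copy' keeps the source file, anything else moves it."""
    return shutil.copy if file_opts == 'copy' else shutil.move

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, 'issue.cbz')
open(src, 'w').close()

# Copying leaves the source in place...
pick_fileop('copy')(src, os.path.join(dst_dir, 'issue.cbz'))
copied_kept_source = os.path.exists(src)

# ...while moving removes it.
pick_fileop('move')(src, os.path.join(dst_dir, 'issue2.cbz'))
moved_removed_source = os.path.exists(src)
```

This is also why the temporary-directory cleanup above is now wrapped in `if mylar.FILE_OPTS == 'move':` - when copying, the source folder must be left intact.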
#delete entry from nzblog table
myDB.action('DELETE from nzblog WHERE issueid=?', [issueid])
@@ -1439,7 +1526,14 @@ class PostProcessor(object):
storyarcd = os.path.join(mylar.DESTINATION_DIR, mylar.GRABBAG_DIR)
grdst = mylar.DESTINATION_DIR
filechecker.validateAndCreateDirectory(grdst, True, module=module)
checkdirectory = filechecker.validateAndCreateDirectory(grdst, True, module=module)
if not checkdirectory:
logger.warn(module + ' Error trying to validate/create directory. Aborting this process at this time.')
self.valreturn.append({"self.log": self.log,
"mode": 'stop'})
return self.queue.put(self.valreturn)
if mylar.READ2FILENAME:
logger.fdebug(module + ' readingorder#: ' + str(arcinfo['ReadingOrder']))
@@ -1573,6 +1667,9 @@ class FolderCheck():
self.queue = Queue.Queue()
def run(self):
if mylar.IMPORTLOCK:
logger.info('There is an import currently running. To ensure the import completes successfully, deferring folder monitoring until it has finished.')
return
#monitor a selected folder for 'snatched' files that haven't been processed
#junk the queue as it's not needed for folder monitoring, but needed for post-processing to run without error.
logger.info(self.module + ' Checking folder ' + mylar.CHECK_FOLDER + ' for newly snatched downloads')


@@ -49,7 +49,7 @@ SIGNAL = None
SYS_ENCODING = None
OS_DETECT = platform.system()
VERBOSE = 1
VERBOSE = False
DAEMON = False
PIDFILE= None
CREATEPID = False
@@ -63,6 +63,7 @@ __INITIALIZED__ = False
started = False
WRITELOCK = False
LOGTYPE = None
IMPORTLOCK = False
## for use with updated scheduler (not working atm)
INIT_LOCK = Lock()
@@ -95,6 +96,7 @@ DBNAME = None
LOG_DIR = None
LOG_LIST = []
MAX_LOGSIZE = None
QUIET = False
CACHE_DIR = None
SYNO_FIX = False
@@ -120,7 +122,6 @@ API_ENABLED = False
API_KEY = None
DOWNLOAD_APIKEY = None
LAUNCH_BROWSER = False
LOGVERBOSE = None
GIT_PATH = None
INSTALL_TYPE = None
CURRENT_VERSION = None
@@ -154,12 +155,13 @@ COMIC_DIR = None
LIBRARYSCAN = False
IMP_MOVE = False
IMP_RENAME = False
IMP_METADATA = False # should default to False - this is enabled for testing only.
IMP_METADATA = True # should default to False - this is enabled for testing only.
SEARCH_INTERVAL = 360
NZB_STARTUP_SEARCH = False
LIBRARYSCAN_INTERVAL = 300
DOWNLOAD_SCAN_INTERVAL = 5
FOLDER_SCAN_LOG_VERBOSE = 0
CHECK_FOLDER = None
ENABLE_CHECK_FOLDER = False
INTERFACE = None
@@ -211,6 +213,7 @@ CVINFO = False
LOG_LEVEL = None
POST_PROCESSING = 1
POST_PROCESSING_SCRIPT = None
FILE_OPTS = None
NZB_DOWNLOADER = None #0 = sabnzbd, #1 = nzbget, #2 = blackhole
@@ -301,6 +304,7 @@ COPY2ARCDIR = 0
CVURL = None
WEEKFOLDER = 0
WEEKFOLDER_LOC = None
LOCMOVE = 0
NEWCOM_DIR = None
FFTONEWCOM_DIR = 0
@@ -406,11 +410,11 @@ def check_setting_str(config, cfg_name, item_name, def_val, log=True):
def initialize():
with INIT_LOCK:
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, LOGVERBOSE, OLDCONFIG_VERSION, OS_DETECT, \
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, AUTO_UPDATE, \
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, GIT_USER, GIT_BRANCH, USER_AGENT, DESTINATION_DIR, MULTIPLE_DEST_DIRS, CREATE_FOLDERS, DELETE_REMOVE_DIR, \
DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, DUPECONSTRAINT, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
LIBRARYSCAN, LIBRARYSCAN_INTERVAL, DOWNLOAD_SCAN_INTERVAL, NZB_DOWNLOADER, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_TO_MYLAR, SAB_DIRECTORY, USE_BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
LIBRARYSCAN, LIBRARYSCAN_INTERVAL, DOWNLOAD_SCAN_INTERVAL, FOLDER_SCAN_LOG_VERBOSE, IMPORTLOCK, NZB_DOWNLOADER, USE_SABNZBD, SAB_HOST, SAB_USERNAME, SAB_PASSWORD, SAB_APIKEY, SAB_CATEGORY, SAB_PRIORITY, SAB_TO_MYLAR, SAB_DIRECTORY, USE_BLACKHOLE, BLACKHOLE_DIR, ADD_COMICS, COMIC_DIR, IMP_MOVE, IMP_RENAME, IMP_METADATA, \
USE_NZBGET, NZBGET_HOST, NZBGET_PORT, NZBGET_USERNAME, NZBGET_PASSWORD, NZBGET_CATEGORY, NZBGET_PRIORITY, NZBGET_DIRECTORY, NZBSU, NZBSU_UID, NZBSU_APIKEY, DOGNZB, DOGNZB_APIKEY, OMGWTFNZBS, OMGWTFNZBS_USERNAME, OMGWTFNZBS_APIKEY, \
NEWZNAB, NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_UID, NEWZNAB_ENABLED, EXTRA_NEWZNABS, NEWZNAB_EXTRA, \
ENABLE_TORZNAB, TORZNAB_NAME, TORZNAB_HOST, TORZNAB_APIKEY, TORZNAB_CATEGORY, \
@@ -421,8 +425,8 @@ def initialize():
ENABLE_RSS, RSS_CHECKINTERVAL, RSS_LASTRUN, FAILED_DOWNLOAD_HANDLING, FAILED_AUTO, ENABLE_TORRENT_SEARCH, ENABLE_KAT, KAT_PROXY, ENABLE_32P, MODE_32P, KEYS_32P, RSSFEED_32P, USERNAME_32P, PASSWORD_32P, AUTHKEY_32P, PASSKEY_32P, FEEDINFO_32P, VERIFY_32P, SNATCHEDTORRENT_NOTIFY, \
PROWL_ENABLED, PROWL_PRIORITY, PROWL_KEYS, PROWL_ONSNATCH, NMA_ENABLED, NMA_APIKEY, NMA_PRIORITY, NMA_ONSNATCH, PUSHOVER_ENABLED, PUSHOVER_PRIORITY, PUSHOVER_APIKEY, PUSHOVER_USERKEY, PUSHOVER_ONSNATCH, BOXCAR_ENABLED, BOXCAR_ONSNATCH, BOXCAR_TOKEN, \
PUSHBULLET_ENABLED, PUSHBULLET_APIKEY, PUSHBULLET_DEVICEID, PUSHBULLET_ONSNATCH, LOCMOVE, NEWCOM_DIR, FFTONEWCOM_DIR, \
PREFERRED_QUALITY, MOVE_FILES, RENAME_FILES, LOWERCASE_FILENAMES, USE_MINSIZE, MINSIZE, USE_MAXSIZE, MAXSIZE, CORRECT_METADATA, FOLDER_FORMAT, FILE_FORMAT, REPLACE_CHAR, REPLACE_SPACES, ADD_TO_CSV, CVINFO, LOG_LEVEL, POST_PROCESSING, POST_PROCESSING_SCRIPT, SEARCH_DELAY, GRABBAG_DIR, READ2FILENAME, SEND2READ, TAB_ENABLE, TAB_HOST, TAB_USER, TAB_PASS, TAB_DIRECTORY, STORYARCDIR, COPY2ARCDIR, CVURL, CHECK_FOLDER, ENABLE_CHECK_FOLDER, \
COMIC_LOCATION, QUAL_ALTVERS, QUAL_SCANNER, QUAL_TYPE, QUAL_QUALITY, ENABLE_EXTRA_SCRIPTS, EXTRA_SCRIPTS, ENABLE_PRE_SCRIPTS, PRE_SCRIPTS, PULLNEW, ALT_PULL, COUNT_ISSUES, COUNT_HAVES, COUNT_COMICS, SYNO_FIX, CHMOD_FILE, CHMOD_DIR, CHOWNER, CHGROUP, ANNUALS_ON, CV_ONLY, CV_ONETIMER, WEEKFOLDER, UMASK
PREFERRED_QUALITY, MOVE_FILES, RENAME_FILES, LOWERCASE_FILENAMES, USE_MINSIZE, MINSIZE, USE_MAXSIZE, MAXSIZE, CORRECT_METADATA, FOLDER_FORMAT, FILE_FORMAT, REPLACE_CHAR, REPLACE_SPACES, ADD_TO_CSV, CVINFO, LOG_LEVEL, POST_PROCESSING, POST_PROCESSING_SCRIPT, FILE_OPTS, SEARCH_DELAY, GRABBAG_DIR, READ2FILENAME, SEND2READ, TAB_ENABLE, TAB_HOST, TAB_USER, TAB_PASS, TAB_DIRECTORY, STORYARCDIR, COPY2ARCDIR, CVURL, CHECK_FOLDER, ENABLE_CHECK_FOLDER, \
COMIC_LOCATION, QUAL_ALTVERS, QUAL_SCANNER, QUAL_TYPE, QUAL_QUALITY, ENABLE_EXTRA_SCRIPTS, EXTRA_SCRIPTS, ENABLE_PRE_SCRIPTS, PRE_SCRIPTS, PULLNEW, ALT_PULL, COUNT_ISSUES, COUNT_HAVES, COUNT_COMICS, SYNO_FIX, CHMOD_FILE, CHMOD_DIR, CHOWNER, CHGROUP, ANNUALS_ON, CV_ONLY, CV_ONETIMER, WEEKFOLDER, WEEKFOLDER_LOC, UMASK
if __INITIALIZED__:
return False
@@ -475,11 +479,6 @@ def initialize():
API_KEY = check_setting_str(CFG, 'General', 'api_key', '')
LAUNCH_BROWSER = bool(check_setting_int(CFG, 'General', 'launch_browser', 1))
AUTO_UPDATE = bool(check_setting_int(CFG, 'General', 'auto_update', 0))
LOGVERBOSE = bool(check_setting_int(CFG, 'General', 'logverbose', 0))
if LOGVERBOSE:
VERBOSE = 2
else:
VERBOSE = 1
MAX_LOGSIZE = check_setting_int(CFG, 'General', 'max_logsize', 1000000)
if not MAX_LOGSIZE:
MAX_LOGSIZE = 1000000
@@ -513,6 +512,7 @@ def initialize():
IMP_RENAME = bool(check_setting_int(CFG, 'General', 'imp_rename', 0))
IMP_METADATA = bool(check_setting_int(CFG, 'General', 'imp_metadata', 0))
DOWNLOAD_SCAN_INTERVAL = check_setting_int(CFG, 'General', 'download_scan_interval', 5)
FOLDER_SCAN_LOG_VERBOSE = check_setting_int(CFG, 'General', 'folder_scan_log_verbose', 0)
CHECK_FOLDER = check_setting_str(CFG, 'General', 'check_folder', '')
ENABLE_CHECK_FOLDER = bool(check_setting_int(CFG, 'General', 'enable_check_folder', 0))
INTERFACE = check_setting_str(CFG, 'General', 'interface', 'default')
@@ -542,6 +542,7 @@ def initialize():
#default to ComicLocation
GRABBAG_DIR = DESTINATION_DIR
WEEKFOLDER = bool(check_setting_int(CFG, 'General', 'weekfolder', 0))
WEEKFOLDER_LOC = check_setting_str(CFG, 'General', 'weekfolder_loc', '')
LOCMOVE = bool(check_setting_int(CFG, 'General', 'locmove', 0))
if LOCMOVE is None:
LOCMOVE = 0
@@ -610,9 +611,8 @@ def initialize():
PRE_SCRIPTS = check_setting_str(CFG, 'General', 'pre_scripts', '')
POST_PROCESSING = bool(check_setting_int(CFG, 'General', 'post_processing', 1))
POST_PROCESSING_SCRIPT = check_setting_str(CFG, 'General', 'post_processing_script', '')
FILE_OPTS = check_setting_str(CFG, 'General', 'file_opts', 'move')
ENABLE_META = bool(check_setting_int(CFG, 'General', 'enable_meta', 0))
CMTAGGER_PATH = check_setting_str(CFG, 'General', 'cmtagger_path', '')
CT_TAG_CR = bool(check_setting_int(CFG, 'General', 'ct_tag_cr', 1))
CT_TAG_CBL = bool(check_setting_int(CFG, 'General', 'ct_tag_cbl', 1))
CT_CBZ_OVERWRITE = bool(check_setting_int(CFG, 'General', 'ct_cbz_overwrite', 0))
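The new `FILE_OPTS` and `WEEKFOLDER_LOC` settings follow the existing `check_setting_*` accessor pattern: return the stored value when present, otherwise fall back to (and persist) the default. A simplified dict-backed sketch of that behaviour (the real accessors read a ConfigObj section and can log each lookup):

```python
def check_setting_str(config, section, item, default):
    """Simplified sketch of the config accessor: return the stored value
    when present, otherwise record and return the default."""
    try:
        return config[section][item]
    except KeyError:
        config.setdefault(section, {})[item] = default
        return default

cfg = {'General': {'file_opts': 'copy'}}
file_opts = check_setting_str(cfg, 'General', 'file_opts', 'move')
# Missing keys fall back to the default ('' here, mirroring weekfolder_loc).
week_loc = check_setting_str(cfg, 'General', 'weekfolder_loc', '')
```

The defaults chosen in this commit ('move' for `file_opts`, '' for `weekfolder_loc`) keep the pre-existing behaviour for users upgrading with an old config.ini.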
@@ -933,11 +933,11 @@ def initialize():
try:
os.makedirs(LOG_DIR)
except OSError:
if VERBOSE:
if not QUIET:
print 'Unable to create the log directory. Logging to screen only.'
# Start the logger, silence console logging if we need to
logger.initLogger(verbose=VERBOSE) #logger.mylar_log.initLogger(verbose=VERBOSE)
logger.initLogger(console=not QUIET, log_dir=LOG_DIR, verbose=VERBOSE) #logger.mylar_log.initLogger(verbose=VERBOSE)
#try to get the local IP using socket. Get this on every startup so it's at least current for existing session.
import socket
@@ -1044,6 +1044,24 @@ def initialize():
#set the default URL for ComicVine API here.
CVURL = 'http://www.comicvine.com/api/'
#comictagger - force to use included version if option is enabled.
if ENABLE_META:
CMTAGGER_PATH = PROG_DIR
logger.info('Setting ComicTagger default path to : ' + PROG_DIR)
#we need to make sure the default folder setting for the comictagger settings exists so things don't error out
CT_SETTINGSPATH = os.path.join(PROG_DIR, 'lib', 'comictaggerlib', 'ct_settings')
if os.path.exists(os.path.join(PROG_DIR, 'lib', 'comictaggerlib', 'ct_settings')):
logger.info('ComicTagger settings location exists.')
else:
try:
os.mkdir(os.path.join(PROG_DIR, 'lib', 'comictaggerlib', 'ct_settings'))
except OSError,e:
if e.errno != errno.EEXIST:
logger.error('Unable to create setting directory for ComicTagger. This WILL cause problems when tagging.')
else:
logger.info('Successfully created ComicTagger Settings location.')
if LOCMOVE:
helpers.updateComicLocation()
@@ -1198,7 +1216,6 @@ def config_write():
new_config['General']['auto_update'] = int(AUTO_UPDATE)
new_config['General']['log_dir'] = LOG_DIR
new_config['General']['max_logsize'] = MAX_LOGSIZE
new_config['General']['logverbose'] = int(LOGVERBOSE)
new_config['General']['git_path'] = GIT_PATH
new_config['General']['cache_dir'] = CACHE_DIR
new_config['General']['annuals_on'] = int(ANNUALS_ON)
@@ -1230,6 +1247,7 @@ def config_write():
new_config['General']['imp_metadata'] = int(IMP_METADATA)
new_config['General']['enable_check_folder'] = int(ENABLE_CHECK_FOLDER)
new_config['General']['download_scan_interval'] = DOWNLOAD_SCAN_INTERVAL
new_config['General']['folder_scan_log_verbose'] = FOLDER_SCAN_LOG_VERBOSE
new_config['General']['check_folder'] = CHECK_FOLDER
new_config['General']['interface'] = INTERFACE
new_config['General']['dupeconstraint'] = DUPECONSTRAINT
@@ -1277,12 +1295,13 @@ def config_write():
new_config['General']['pre_scripts'] = PRE_SCRIPTS
new_config['General']['post_processing'] = int(POST_PROCESSING)
new_config['General']['post_processing_script'] = POST_PROCESSING_SCRIPT
new_config['General']['file_opts'] = FILE_OPTS
new_config['General']['weekfolder'] = int(WEEKFOLDER)
new_config['General']['weekfolder_loc'] = WEEKFOLDER_LOC
new_config['General']['locmove'] = int(LOCMOVE)
new_config['General']['newcom_dir'] = NEWCOM_DIR
new_config['General']['fftonewcom_dir'] = int(FFTONEWCOM_DIR)
new_config['General']['enable_meta'] = int(ENABLE_META)
new_config['General']['cmtagger_path'] = CMTAGGER_PATH
new_config['General']['ct_tag_cr'] = int(CT_TAG_CR)
new_config['General']['ct_tag_cbl'] = int(CT_TAG_CBL)
new_config['General']['ct_cbz_overwrite'] = int(CT_CBZ_OVERWRITE)
@@ -1502,7 +1521,7 @@ def dbcheck():
c.execute('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE TEXT, PUBLISHER TEXT, ISSUE TEXT, COMIC VARCHAR(150), EXTRA TEXT, STATUS TEXT, ComicID TEXT, IssueID TEXT)')
# c.execute('CREATE TABLE IF NOT EXISTS sablog (nzo_id TEXT, ComicName TEXT, ComicYEAR TEXT, ComicIssue TEXT, name TEXT, nzo_complete TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT, StatusChange TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readinglist(StoryArcID TEXT, ComicName TEXT, IssueNumber TEXT, SeriesYear TEXT, IssueYEAR TEXT, StoryArc TEXT, TotalIssues TEXT, Status TEXT, inCacheDir TEXT, Location TEXT, IssueArcID TEXT, ReadingOrder INT, IssueID TEXT, ComicID TEXT, StoreDate TEXT, IssueDate TEXT, Publisher TEXT, IssuePublisher TEXT, IssueName TEXT, CV_ArcID TEXT, Int_IssueNumber INT)')
c.execute('CREATE TABLE IF NOT EXISTS annuals (IssueID TEXT, Issue_Number TEXT, IssueName TEXT, IssueDate TEXT, Status TEXT, ComicID TEXT, GCDComicID TEXT, Location TEXT, ComicSize TEXT, Int_IssueNumber INT, ComicName TEXT, ReleaseDate TEXT, ReleaseComicID TEXT, ReleaseComicName TEXT, IssueDate_Edit TEXT)')
@@ -1675,6 +1694,10 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE importresults ADD COLUMN IssueID TEXT')
try:
c.execute('SELECT Volume from importresults')
except sqlite3.OperationalError:
c.execute('ALTER TABLE importresults ADD COLUMN Volume TEXT')
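The `Volume` column migration uses the same probe-then-alter idiom as the rest of `dbcheck()`: probe with a `SELECT`, and add the column only when the probe fails. A self-contained sketch against an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
# Simulate a pre-upgrade schema that lacks the Volume column.
c.execute('CREATE TABLE importresults (impID TEXT, ComicName TEXT)')

# Probe-then-alter: the SELECT raises OperationalError when the column
# is missing, and the ALTER TABLE adds it. Re-running is a no-op.
try:
    c.execute('SELECT Volume from importresults')
except sqlite3.OperationalError:
    c.execute('ALTER TABLE importresults ADD COLUMN Volume TEXT')

cols = [row[1] for row in c.execute('PRAGMA table_info(importresults)')]
```

SQLite's `ALTER TABLE ... ADD COLUMN` is cheap (it only rewrites the schema, not the rows), which is why this idiom is safe to run on every startup.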
## -- Readlist Table --
try:
@@ -1903,6 +1926,7 @@ def dbcheck():
c.execute("DELETE from issues WHERE ComicName='None' OR ComicName LIKE 'Comic ID%' OR ComicName is NULL")
c.execute("DELETE from annuals WHERE ComicName='None' OR ComicName is NULL or Issue_Number is NULL")
c.execute("DELETE from upcoming WHERE ComicName='None' OR ComicName is NULL or IssueNumber is NULL")
c.execute("DELETE from importresults WHERE ComicName='None' OR ComicName is NULL")
logger.info('Ensuring DB integrity - Removing all Erroneous Comics (ie. named None)')
logger.info('Correcting Null entries that make the main page break on startup.')


@@ -53,11 +53,28 @@ class info32p(object):
#need a way to find response code (200=OK), but returns 200 for everything even failed signons (returns a blank page)
#logger.info('[32P] response: ' + str(r.content))
soup = BeautifulSoup(r.content)
soup.prettify()
#check for invalid username/password and if it's invalid - disable provider so we don't autoban (manual intervention is required after).
chk_login = soup.find_all("form", {"id":"loginform"})
for ck in chk_login:
errorlog = ck.find("span", {"id":"formerror"})
loginerror = " ".join(list(errorlog.stripped_strings)) #login_error.findNext(text=True)
errornot = ck.find("span", {"class":"notice"})
noticeerror = " ".join(list(errornot.stripped_strings)) #notice_error.findNext(text=True)
logger.error(self.module + ' Error: ' + loginerror)
if noticeerror:
logger.error(self.module + ' Warning: ' + noticeerror)
logger.error(self.module + ' Disabling 32P provider until username/password can be corrected / verified.')
return "disable"
if self.searchterm:
if not self.searchterm:
logger.info('[32P] Successfully authenticated. Verifying authentication & passkeys for usage.')
else:
logger.info('[32P] Successfully authenticated. Initiating search for : ' + self.searchterm)
return self.search32p(s)
soup = BeautifulSoup(r.content, "html.parser")
all_script = soup.find_all("script", {"src": False})
all_script2 = soup.find_all("link", {"rel": "alternate"})


@@ -25,26 +25,14 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
logger.fdebug(module + ' dirName:' + dirName)
## Set the directory in which comictagger and other external commands are located - IMPORTANT - ##
# ( User may have to modify, depending on their setup, but these are some guesses for now )
# 2015-11-23: Recent CV API changes restrict the rate-limit to 1 api request / second.
# ComicTagger has to be included now with the install as a timer had to be added to allow for the 1/second rule.
# The below is pretty much outdated - so will force mylar to use cmtagger_path = mylar.PROG_DIR to force the use of the included lib.
comictagger_cmd = os.path.join(mylar.CMTAGGER_PATH, 'comictagger.py')
logger.fdebug('ComicTagger Path location for internal comictagger.py set to : ' + comictagger_cmd)
# Force mylar to use cmtagger_path = mylar.PROG_DIR to force the use of the included lib.
if platform.system() == "Windows":
#if it's a source install.
sys_type = 'windows'
if os.path.isdir(os.path.join(mylar.CMTAGGER_PATH, '.git')):
comictagger_cmd = os.path.join(mylar.CMTAGGER_PATH, 'comictagger.py')
else:
#regardless of 32/64 bit install
if 'comictagger.exe' in mylar.CMTAGGER_PATH:
comictagger_cmd = mylar.CMTAGGER_PATH
else:
comictagger_cmd = os.path.join(mylar.CMTAGGER_PATH, 'comictagger.exe')
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
unrar_cmd = r"C:\Program Files\WinRAR\UnRAR.exe"
else:
@@ -63,7 +51,6 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
elif platform.system() == "Darwin":
#Mac OS X
sys_type = 'mac'
comictagger_cmd = os.path.join(mylar.CMTAGGER_PATH, 'comictagger.py')
if mylar.UNRAR_CMD == 'None' or mylar.UNRAR_CMD == '' or mylar.UNRAR_CMD is None:
unrar_cmd = "/usr/local/bin/unrar"
else:
@@ -92,39 +79,39 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
logger.fdebug(module + ' continuing with PostProcessing, but I am not using metadata.')
return "fail"
#set this to the lib path (ie. '<root of mylar>/lib')
comictagger_cmd = os.path.join(mylar.CMTAGGER_PATH, 'comictagger.py')
# if not os.path.exists( comictagger_cmd ):
# print "ERROR: can't find the ComicTagger program: {0}".format( comictagger_cmd )
# print " You probably need to edit this script!"
# sys.exit( 1 )
file_conversion = True
file_extension_fixing = True
if not os.path.exists(unrar_cmd):
logger.fdebug(module + ' WARNING: cannot find the unrar command.')
logger.fdebug(module + ' File conversion and extension fixing not available')
logger.fdebug(module + ' You probably need to edit this script, or install the missing tool, or both!')
return "fail"
#file_conversion = False
#file_extension_fixing = False
logger.fdebug(module + ' Filename is : ' + str(filename))
filepath = filename
try:
filename = os.path.split(filename)[1] # just the filename itself
except:
logger.warn('Unable to detect filename within directory - I am aborting the tagging. You best check things out.')
return "fail"
#make use of temporary file location in order to post-process this to ensure that things don't get hammered when converting
try:
import tempfile
new_folder = os.path.join(tempfile.mkdtemp(prefix='mylar_', dir=mylar.CACHE_DIR)) #prefix, suffix, dir
new_filepath = os.path.join(new_folder, filename)
shutil.copy(filepath, new_filepath)
filepath = new_filepath
except:
logger.warn(module + ' Unable to create temporary directory to perform meta-tagging. Processing without metatagging.')
return "fail"
## Sets up other directories ##
scriptname = os.path.basename(sys.argv[0])
downloadpath = os.path.abspath(dirName)
sabnzbdscriptpath = os.path.dirname(sys.argv[0])
if manual is None:
comicpath = os.path.join(downloadpath, "temp")
else:
chkpath, chkfile = os.path.split(filename)
logger.fdebug(module + ' chkpath: ' + chkpath)
logger.fdebug(module + ' chkfile: ' + chkfile)
extensions = ('.cbr', '.cbz')
if os.path.isdir(chkpath) and chkpath != downloadpath:
logger.fdebug(module + ' Changing ' + downloadpath + ' location to ' + chkpath + ' as it is a directory.')
downloadpath = chkpath
comicpath = os.path.join(downloadpath, issueid)
comicpath = new_folder
unrar_folder = os.path.join(comicpath, "unrard")
logger.fdebug(module + ' Paths / Locations:')
@@ -135,201 +122,22 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
logger.fdebug(module + ' unrar_folder : ' + unrar_folder)
logger.fdebug(module + ' Running the ComicTagger Add-on for Mylar')
if os.path.exists(comicpath):
shutil.rmtree(comicpath)
logger.fdebug(module + ' Attempting to create directory @: ' + str(comicpath))
try:
os.makedirs(comicpath)
except OSError:
raise
logger.fdebug(module + ' Created directory @ : ' + str(comicpath))
logger.fdebug(module + ' Filename is : ' + str(filename))
if filename is None:
filename_list = glob.glob(os.path.join(downloadpath, "*.cbz"))
filename_list.extend(glob.glob(os.path.join(downloadpath, "*.cbr")))
fcount = 1
for f in filename_list:
if fcount > 1:
logger.fdebug(module + ' More than one cbr/cbz within path, performing Post-Process on first file detected: ' + f)
break
if f.endswith('.cbz'):
logger.fdebug(module + ' .cbz file detected. Excluding from temporary directory move at this time.')
comicpath = downloadpath
else:
shutil.move(f, comicpath)
filename = f # just the filename itself
fcount += 1
else:
# if the filename is identical to the parent folder, the entire subfolder gets copied since it's the first match, instead of just the file
#if os.path.isfile(filename):
#if the filename doesn't exist - force the path assuming it's the 'download path'
filename = os.path.join(downloadpath, filename)
logger.fdebug(module + ' The path where the file is that I was provided is probably wrong - modifying it to : ' + filename)
shutil.move(filename, os.path.join(comicpath, os.path.split(filename)[1]))
logger.fdebug(module + ' moving : ' + filename + ' to ' + os.path.join(comicpath, os.path.split(filename)[1]))
try:
filename = os.path.split(filename)[1] # just the filename itself
except:
logger.warn('Unable to detect filename within directory - I am aborting the tagging. You best check things out.')
return "fail"
#print comicpath
#print os.path.join(comicpath, filename)
if filename.endswith('.cbr'):
f = os.path.join(comicpath, filename)
if zipfile.is_zipfile(f):
logger.fdebug(module + ' zipfile detected')
base = os.path.splitext(f)[0]
shutil.move(f, base + ".cbz")
logger.fdebug(module + ' {0}: renaming {1} to be a cbz'.format(scriptname, os.path.basename(f)))
filename = base + '.cbz'
if file_extension_fixing:
if filename.endswith('.cbz'):
logger.info(module + ' Filename detected as a .cbz file.')
f = os.path.join(comicpath, filename)
logger.fdebug(module + ' filename : ' + f)
if os.path.isfile(f):
try:
rar_test_cmd_output = "is not RAR archive" # default, in case of error
rar_test_cmd_output = subprocess.check_output([unrar_cmd, "t", f])
except:
logger.fdebug(module + ' This is a zipfile. Unable to test rar.')
if not "is not RAR archive" in rar_test_cmd_output:
base = os.path.splitext(f)[0]
shutil.move(f, base + ".cbr")
logger.fdebug(module + ' {0}: renaming {1} to be a cbr'.format(scriptname, os.path.basename(f)))
else:
try:
with open(f): pass
except:
logger.warn(module + ' No zip file present')
return "fail"
#if the temp directory is the LAST directory in the path, it's part of the CT logic path above
#and can be removed to allow a copy back to the original path to work.
if 'temp' in os.path.basename(os.path.normpath(comicpath)):
pathbase = os.path.dirname(os.path.dirname(comicpath))
base = os.path.join(pathbase, filename)
else:
base = os.path.join(re.sub(issueid, '', comicpath), filename) #extension is already .cbz
logger.fdebug(module + ' Base set to : ' + base)
logger.fdebug(module + ' Moving : ' + f + ' - to - ' + base)
shutil.move(f, base)
try:
with open(base):
logger.fdebug(module + ' Verified file exists in location: ' + base)
removetemp = True
except:
logger.fdebug(module + ' Cannot verify file exist in location: ' + base)
removetemp = False
if removetemp == True:
if comicpath != downloadpath:
shutil.rmtree(comicpath)
logger.fdebug(module + ' Successfully removed temporary directory: ' + comicpath)
else:
logger.fdebug(module + ' Unable to remove temporary directory since it is identical to the download location : ' + comicpath)
logger.fdebug(module + ' new filename : ' + base)
nfilename = base
# Now rename all CBR files to RAR
if filename.endswith('.cbr'):
#logger.fdebug('renaming .cbr to .rar')
f = os.path.join(comicpath, filename)
base = os.path.splitext(f)[0]
baserar = base + ".rar"
shutil.move(f, baserar)
## Changes any cbr files to cbz files for insertion of metadata ##
if file_conversion:
f = os.path.join(comicpath, filename)
logger.fdebug(module + ' {0}: converting {1} to be zip format'.format(scriptname, os.path.basename(f)))
basename = os.path.splitext(f)[0]
zipname = basename + ".cbz"
# Move into the folder where we will be unrar-ing things
os.makedirs(unrar_folder)
os.chdir(unrar_folder)
# Extract and zip up
logger.fdebug(module + ' Comicpath is ' + baserar) # os.path.join(comicpath,basename))
logger.fdebug(module + ' Unrar is ' + unrar_folder)
try:
#subprocess.Popen( [ unrar_cmd, "x", os.path.join(comicpath,basename) ] ).communicate()
output = subprocess.check_output([unrar_cmd, 'x', baserar])
except subprocess.CalledProcessError as e:
if e.returncode == 3:
logger.warn(module + ' [Unrar Error 3] - Broken Archive.')
elif e.returncode == 1:
logger.warn(module + ' [Unrar Error 1] - No files to extract.')
logger.warn(module + ' Marking this as an incomplete download.')
return "unrar error"
shutil.make_archive(basename, "zip", unrar_folder)
# get out of unrar folder and clean up
os.chdir(comicpath)
shutil.rmtree(unrar_folder)
## Changes zip to cbz
f = os.path.join(comicpath, os.path.splitext(filename)[0] + ".zip")
#print "zipfile" + f
try:
with open(f): pass
except:
logger.warn(module + ' No zip file present:' + f)
return "fail"
base = os.path.splitext(f)[0]
shutil.move(f, base + ".cbz")
nfilename = base + ".cbz"
#else:
# logger.fdebug(module + ' Filename:' + filename)
# nfilename = filename
#if os.path.isfile( nfilename ):
# logger.fdebug(module + ' File exists in given location already : ' + nfilename)
# file_dir, file_n = os.path.split(nfilename)
#else:
# #remove the IssueID from the path
# file_dir = re.sub(issueid, '', comicpath)
# file_n = os.path.split(nfilename)[1]
if manual is None:
file_dir = downloadpath
else:
file_dir = re.sub(issueid, '', comicpath)
try:
file_n = os.path.split(nfilename)[1]
except:
logger.error(module + ' unable to retrieve filename properly. Check your logs as there is probably an error or misconfiguration indicated (such as unable to locate unrar or configparser)')
return "fail"
logger.fdebug(module + ' Converted directory: ' + str(file_dir))
logger.fdebug(module + ' Converted filename: ' + str(file_n))
logger.fdebug(module + ' Destination path: ' + os.path.join(file_dir, file_n)) #dirName,file_n))
logger.fdebug(module + ' dirName: ' + dirName)
logger.fdebug(module + ' absDirName: ' + os.path.abspath(dirName))
##set up default comictagger options here.
#used for cbr - to - cbz conversion
#depending on copy/move - either we retain the rar or we don't.
if mylar.FILE_OPTS == 'move':
cbr2cbzoptions = ["-e", "--delete-rar"]
else:
cbr2cbzoptions = ["-e"]
if comversion is None or comversion == '':
comversion = '1'
comversion = re.sub('[^0-9]', '', comversion).strip()
cvers = 'volume=' + str(comversion)
tagoptions = ["-s", "-m", cvers] #"--verbose"
## check comictagger version - less than 1.15.beta - take your chances.
if sys_type == 'windows':
ctversion = subprocess.check_output([comictagger_cmd, "--version"])
else:
ctversion = subprocess.check_output([sys.executable, comictagger_cmd, "--version"])
ctversion = subprocess.check_output([sys.executable, comictagger_cmd, "--version"])
ctend = ctversion.find(':')
ctcheck = re.sub("[^0-9]", "", ctversion[:ctend])
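The version parsing above can be sketched as a standalone helper. The banner format used here ('ComicTagger 1.15.beta: ...') is an assumption inferred from the `find(':')` logic, not a confirmed ComicTagger output:

```python
import re

def ct_version_number(ctversion):
    # Take everything up to the first ':' of the --version banner and
    # keep only the digits, mirroring the ctcheck computation above.
    ctend = ctversion.find(':')
    if ctend == -1:
        ctend = len(ctversion)  # banner without ':' - use the whole string
    return re.sub("[^0-9]", "", ctversion[:ctend])
```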
@@ -363,7 +171,7 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
return "fail"
#if it's a cbz file - check if no-overwrite existing tags is enabled / disabled in config.
if nfilename.endswith('.cbz'):
if filename.endswith('.cbz'):
if mylar.CT_CBZ_OVERWRITE:
logger.fdebug(module + ' Will modify existing tag blocks even if it exists.')
else:
@@ -377,38 +185,34 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
original_tagoptions = tagoptions
og_tagtype = None
initial_ctrun = True
while (i <= tagcnt):
if i == 1:
tagtype = 'cr' # CR meta-tagging cycle.
tagdisp = 'ComicRack tagging'
elif i == 2:
tagtype = 'cbl' # Cbl meta-tagging cycle
tagdisp = 'Comicbooklover tagging'
f_tagoptions = original_tagoptions
if og_tagtype is not None:
for index, item in enumerate(f_tagoptions):
if item == og_tagtype:
f_tagoptions[index] = tagtype
if initial_ctrun:
f_tagoptions = cbr2cbzoptions
f_tagoptions.extend([filepath])
else:
f_tagoptions.extend(["--type", tagtype, nfilename])
if i == 1:
tagtype = 'cr' # CR meta-tagging cycle.
tagdisp = 'ComicRack tagging'
elif i == 2:
tagtype = 'cbl' # Cbl meta-tagging cycle
tagdisp = 'Comicbooklover tagging'
og_tagtype = tagtype
f_tagoptions = original_tagoptions
logger.info(module + ' ' + tagdisp + ' meta-tagging processing started.')
if og_tagtype is not None:
for index, item in enumerate(f_tagoptions):
if item == og_tagtype:
f_tagoptions[index] = tagtype
else:
f_tagoptions.extend(["--type", tagtype, filepath])
#new CV API restriction - one api request / second (redundant here).
#if mylar.CVAPI_RATE is None or mylar.CVAPI_RATE < 2:
# time.sleep(2)
#else:
# time.sleep(mylar.CVAPI_RATE)
og_tagtype = tagtype
if sys_type == 'windows':
currentScriptName = str(comictagger_cmd).decode("string_escape")
else:
currentScriptName = sys.executable + ' ' + str(comictagger_cmd).decode("string_escape")
logger.info(module + ' ' + tagdisp + ' meta-tagging processing started.')
currentScriptName = sys.executable + ' ' + str(comictagger_cmd).decode("string_escape")
logger.fdebug(module + ' Enabling ComicTagger script: ' + str(currentScriptName) + ' with options: ' + str(f_tagoptions))
# generate a safe command line string to execute the script and provide all the parameters
script_cmd = shlex.split(currentScriptName, posix=False) + f_tagoptions
@@ -417,57 +221,36 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
logger.fdebug(module + ' Executing command: ' +str(script_cmd))
logger.fdebug(module + ' Absolute path to script: ' +script_cmd[0])
try:
p = subprocess.Popen(script_cmd)
p = subprocess.Popen(script_cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
out, err = p.communicate() # @UnusedVariable
logger.fdebug(module + '[COMIC-TAGGER] : ' +str(out))
logger.info(module + '[COMIC-TAGGER] Successfully wrote ' + tagdisp)
except OSError, e:
logger.warn(module + '[COMIC-TAGGER] Unable to run comictagger with the options provided: ' + str(script_cmd))
## Tag each CBZ, and move it back to original directory ##
#if use_cvapi == "True":
# if issueid is None:
# subprocess.Popen( [ comictagger_cmd, "-s", "-t", tagtype, "--cv-api-key", mylar.COMICVINE_API, "-f", "-o", "--verbose", "--nooverwrite", nfilename ] ).communicate()
# else:
# subprocess.Popen( [ comictagger_cmd, "-s", "-t", tagtype, "--cv-api-key", mylar.COMICVINE_API, "-o", "--id", issueid, "--verbose", nfilename ] ).communicate()
# logger.info(module + ' ' + tagdisp + ' meta-tagging complete')
# #increment CV API counter.
# mylar.CVAPI_COUNT +=1
#else:
# if issueid is None:
# subprocess.Popen( [ comictagger_cmd, "-s", "-t", tagtype, "-f", "-o", "--verbose", "--nooverwrite", nfilename ] ).communicate()
# else:
# subprocess.Popen( [ comictagger_cmd, "-s", "-t", tagtype, "-o", "--id", issueid, "--verbose", "--nooverwrite", nfilename ] ).communicate()
# #increment CV API counter.
# mylar.CVAPI_COUNT +=1
i+=1
if os.path.exists(os.path.join(os.path.abspath(file_dir), file_n)): # (os.path.abspath(dirName),file_n)):
logger.fdebug(module + ' Unable to move from temporary directory - file already exists in destination: ' + os.path.join(os.path.abspath(file_dir), file_n))
else:
try:
shutil.move(os.path.join(comicpath, nfilename), os.path.join(os.path.abspath(file_dir), file_n)) #os.path.abspath(dirName),file_n))
#shutil.move( nfilename, os.path.join(os.path.abspath(dirName),file_n))
logger.fdebug(module + ' Successfully moved file from temporary path.')
except:
logger.error(module + ' Unable to move file from temporary path [' + os.path.join(comicpath, nfilename) + ']. Deletion of temporary path halted.')
logger.error(module + ' attempt to move: ' + os.path.join(comicpath, nfilename) + ' to ' + os.path.join(os.path.abspath(file_dir), file_n))
return os.path.join(os.path.abspath(file_dir), file_n) # os.path.join(comicpath, nfilename)
i = 0
os.chdir(mylar.PROG_DIR)
while i < 10:
try:
logger.fdebug(module + ' Attempting to remove: ' + comicpath)
shutil.rmtree(comicpath)
except:
time.sleep(.1)
#logger.info('out:' + str(out))
#logger.info('err:' + str(err))
if initial_ctrun and 'exported successfully' in out:
logger.fdebug(module + '[COMIC-TAGGER] : ' +str(out))
#Archive exported successfully to: X-Men v4 008 (2014) (Digital) (Nahga-Empire).cbz (Original deleted)
tmpfilename = re.sub('Archive exported successfully to: ', '', out.rstrip())
if mylar.FILE_OPTS == 'move':
tmpfilename = re.sub(r'\(Original deleted\)', '', tmpfilename).strip()
filepath = os.path.join(comicpath, tmpfilename)
logger.fdebug(module + '[COMIC-TAGGER][CBR-TO-CBZ] New filename: ' + filepath)
initial_ctrun = False
elif initial_ctrun and 'Archive is not a RAR' in out:
initial_ctrun = False
elif 'Cannot find' in out:
logger.warn(module + '[COMIC-TAGGER] Unable to locate file: ' + filename)
file_error = 'file not found||' + filename
return file_error
elif 'not a comic archive!' in out:
logger.warn(module + '[COMIC-TAGGER] Unable to locate file: ' + filename)
file_error = 'file not found||' + filename
return file_error
else:
return os.path.join(os.path.abspath(file_dir), file_n) # dirName), file_n)
i += 1
logger.info(module + '[COMIC-TAGGER] Successfully wrote ' + tagdisp)
i+=1
except OSError, e:
#Cannot find The Walking Dead 150 (2016) (Digital) (Zone-Empire).cbr
logger.warn(module + '[COMIC-TAGGER] Unable to run comictagger with the options provided: ' + str(script_cmd))
return "fail"
logger.fdebug(module + ' Failed to remove temporary path : ' + str(comicpath))
return os.path.join(os.path.abspath(file_dir), file_n) # dirName),file_n)
return filepath
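The parsing of ComicTagger's export message above can be isolated into one helper; `parse_ct_export` is a hypothetical name, and the sample line comes from the comment in the code:

```python
import os
import re

def parse_ct_export(out, comicpath, file_opts='move'):
    # e.g. out = 'Archive exported successfully to: X-Men v4 008 (2014).cbz (Original deleted)'
    tmpfilename = re.sub('Archive exported successfully to: ', '', out.rstrip())
    if file_opts == 'move':
        # with a move, CT deletes the original and appends this marker
        tmpfilename = re.sub(r'\(Original deleted\)', '', tmpfilename).strip()
    return os.path.join(comicpath, tmpfilename)
```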


@@ -73,6 +73,8 @@ def pulldetails(comicid, type, issueid=None, offset=1, arclist=None, comicidlist
PULLURL = mylar.CVURL + 'story_arcs/?api_key=' + str(comicapi) + '&format=xml&filter=name:' + str(issueid) + '&field_list=cover_date'
elif type == 'comicyears':
PULLURL = mylar.CVURL + 'volumes/?api_key=' + str(comicapi) + '&format=xml&filter=id:' + str(comicidlist) + '&field_list=name,id,start_year,publisher&offset=' + str(offset)
elif type == 'import':
PULLURL = mylar.CVURL + 'issues/?api_key=' + str(comicapi) + '&format=xml&filter=id:' + (comicidlist) + '&field_list=cover_date,id,issue_number,name,date_last_updated,store_date,volume' + '&offset=' + str(offset)
#logger.info('CV.PULLURL: ' + PULLURL)
#new CV API restriction - one api request / second.
@@ -155,6 +157,45 @@ def getComic(comicid, type, issueid=None, arc=None, arcid=None, arclist=None, co
#set the offset to 0, since we're doing a filter.
dom = pulldetails(arcid, 'comicyears', offset=0, comicidlist=comicidlist)
return GetSeriesYears(dom)
elif type == 'import':
#used by the importer when doing a scan with metatagging enabled. If metatagging comes back true, then there's an IssueID present
#within the tagging (with CT). This compiles all of the IssueID's during a scan (in 100's), and returns the corresponding CV data
#related to the given IssueID's - namely ComicID, Name, Volume (more at some point, but those are the important ones).
offset = 1
if len(comicidlist) <= 100:
endcnt = len(comicidlist)
else:
endcnt = 100
id_count = 0
import_list = []
logger.fdebug('comicidlist:' + str(comicidlist))
while id_count < len(comicidlist):
#break it up by 100 per api hit
#do the first 100 regardless
in_cnt = 0
for i in range(id_count, endcnt):
if in_cnt == 0:
tmpidlist = str(comicidlist[i])
else:
tmpidlist += '|' + str(comicidlist[i])
in_cnt +=1
logger.info('tmpidlist: ' + str(tmpidlist))
searched = pulldetails(None, 'import', offset=0, comicidlist=tmpidlist)
if searched is None:
break
else:
tGIL = GetImportList(searched)
import_list += tGIL
endcnt +=100
id_count +=100
return import_list
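The 100-per-request batching above can be expressed more compactly; `chunk_idlist` is a hypothetical helper, and its pipe-delimited output matches the `filter=id:` format used by `pulldetails`:

```python
def chunk_idlist(comicidlist, size=100):
    # CV's filter=id: parameter accepts pipe-delimited IDs; batch them
    # so each API hit carries at most `size` IssueIDs.
    chunks = []
    for start in range(0, len(comicidlist), size):
        chunks.append('|'.join(str(c) for c in comicidlist[start:start + size]))
    return chunks
```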
def GetComicInfo(comicid, dom, safechk=None):
if safechk is None:
@@ -511,6 +552,48 @@ def GetSeriesYears(dom):
return serieslist
def GetImportList(results):
importlist = results.getElementsByTagName('issue')
serieslist = []
importids = {}
tempseries = {}
for implist in importlist:
try:
totids = len(implist.getElementsByTagName('id'))
idt = 0
while (idt < totids):
if implist.getElementsByTagName('id')[idt].parentNode.nodeName == 'volume':
tempseries['ComicID'] = implist.getElementsByTagName('id')[idt].firstChild.wholeText
elif implist.getElementsByTagName('id')[idt].parentNode.nodeName == 'issue':
tempseries['IssueID'] = implist.getElementsByTagName('id')[idt].firstChild.wholeText
idt += 1
except:
tempseries['ComicID'] = None
tempseries['IssueID'] = None
try:
totnames = len(implist.getElementsByTagName('name'))
tot = 0
while (tot < totnames):
if implist.getElementsByTagName('name')[tot].parentNode.nodeName == 'volume':
tempseries['ComicName'] = implist.getElementsByTagName('name')[tot].firstChild.wholeText
elif implist.getElementsByTagName('name')[tot].parentNode.nodeName == 'issue':
try:
tempseries['Issue_Name'] = implist.getElementsByTagName('name')[tot].firstChild.wholeText
except:
tempseries['Issue_Name'] = None
tot += 1
except:
tempseries['ComicName'] = 'None'
tempseries['Issue_Name'] = None
logger.info('tempseries:' + str(tempseries))
serieslist.append({"ComicID": tempseries['ComicID'],
"IssueID": tempseries['IssueID'],
"ComicName": tempseries['ComicName'],
"Issue_Name": tempseries['Issue_Name']})
return serieslist
def drophtml(html):
from bs4 import BeautifulSoup



@@ -745,8 +745,10 @@ def updateComicLocation():
if mylar.NEWCOM_DIR is not None:
logger.info('Performing a one-time mass update to Comic Location')
#create the root dir if it doesn't exist
mylar.filechecker.validateAndCreateDirectory(mylar.NEWCOM_DIR, create=True)
checkdirectory = mylar.filechecker.validateAndCreateDirectory(mylar.NEWCOM_DIR, create=True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
dirlist = myDB.select("SELECT * FROM comics")
comloc = []
@@ -1316,7 +1318,7 @@ def IssueDetails(filelocation, IssueID=None):
cover = "found"
break
elif ('001.jpg' in infile or '001.png' in infile or '001.webp' in infile) and cover == "notfound":
elif any(['001.jpg' in infile, '001.png' in infile, '001.webp' in infile, '01.jpg' in infile, '01.png' in infile, '01.webp' in infile]) and cover == "notfound":
logger.fdebug('Extracting primary image ' + infile + ' as coverfile for display.')
local_file = open(os.path.join(mylar.CACHE_DIR, 'temp.jpg'), "wb")
local_file.write(inzipfile.read(infile))
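The widened fallback-cover check above boils down to an `any()` over candidate image names; a minimal sketch (`is_cover_candidate` is a hypothetical name):

```python
def is_cover_candidate(infile):
    # '001.jpg' also contains '01.jpg' as a substring, so the shorter
    # patterns make this a strict superset of the old three-name check.
    names = ('001.jpg', '001.png', '001.webp', '01.jpg', '01.png', '01.webp')
    return any(n in infile for n in names)
```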
@@ -1355,6 +1357,10 @@ def IssueDetails(filelocation, IssueID=None):
series_title = result.getElementsByTagName('Series')[0].firstChild.wholeText
except:
series_title = "None"
try:
series_volume = result.getElementsByTagName('Volume')[0].firstChild.wholeText
except:
series_volume = "None"
try:
issue_number = result.getElementsByTagName('Number')[0].firstChild.wholeText
except:
@@ -1466,6 +1472,10 @@ def IssueDetails(filelocation, IssueID=None):
cover_artist = "None"
penciller = "None"
inker = "None"
try:
series_volume = dt['volume']
except:
series_volume = None
for cl in dt['credits']:
if cl['role'] == 'Editor':
if editor == "None": editor = cl['person']
@@ -1507,6 +1517,7 @@ def IssueDetails(filelocation, IssueID=None):
issuedetails.append({"title": issue_title,
"series": series_title,
"volume": series_volume,
"issue_number": issue_number,
"summary": summary,
"notes": notes,


@@ -83,7 +83,10 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
latestissueinfo.append({"latestiss": dbcomic['LatestIssue'],
"latestdate": dbcomic['LatestDate']})
filechecker.validateAndCreateDirectory(comlocation, True)
checkdirectory = filechecker.validateAndCreateDirectory(comlocation, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
oldcomversion = dbcomic['ComicVersion'] #store the comicversion and chk if it exists before hammering.
myDB.upsert("comics", newValueDict, controlValueDict)
@@ -213,6 +216,8 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
#else:
#sresults = mb.findComic(annComicName, mode, issue=annissues, limityear=annualval['AnnualYear'])
#print "annualyear: " + str(annualval['AnnualYear'])
annual_types_ignore = {'paperback', 'collecting', 'reprints', 'collected', 'print edition', 'tpb', 'available in print'}
logger.fdebug('[IMPORTER-ANNUAL] - Annual Year:' + str(annualyear))
sresults, explicit = mb.findComic(annComicName, mode, issue=None, explicit='all')#,explicit=True)
type='comic'
@@ -225,7 +230,7 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
while (num_res < len(sresults)):
sr = sresults[num_res]
logger.fdebug("description:" + sr['description'])
if 'paperback' in sr['description'] or 'collecting' in sr['description'] or 'reprints' in sr['description'] or 'collected' in sr['description']:
if any(x in sr['description'].lower() for x in annual_types_ignore):
logger.fdebug('[IMPORTER-ANNUAL] - tradeback/collected edition detected - skipping ' + str(sr['comicid']))
else:
if comicid in sr['description']:
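The `any()` rewrite above centralizes the trade-paperback filter; as a sketch (`is_collected_edition` is a hypothetical helper name):

```python
ANNUAL_TYPES_IGNORE = {'paperback', 'collecting', 'reprints', 'collected',
                       'print edition', 'tpb', 'available in print'}

def is_collected_edition(description):
    # Lower-casing first makes the substring checks case-insensitive,
    # which the old chained 'or' comparison was not.
    return any(x in description.lower() for x in ANNUAL_TYPES_IGNORE)
```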
@@ -377,7 +382,10 @@ def addComictoDB(comicid, mismatch=None, pullupd=None, imported=None, ogcname=No
# logger.info(u"Directory successfully created at: " + str(comlocation))
#except OSError:
# logger.error(u"Could not create comicdir : " + str(comlocation))
filechecker.validateAndCreateDirectory(comlocation, True)
checkdirectory = filechecker.validateAndCreateDirectory(comlocation, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
#try to account for CV not updating new issues as fast as GCD
#seems CV doesn't update total counts
@@ -786,7 +794,10 @@ def GCDimport(gcomicid, pullupd=None, imported=None, ogcname=None):
# logger.info(u"Directory successfully created at: " + str(comlocation))
#except OSError:
# logger.error(u"Could not create comicdir : " + str(comlocation))
filechecker.validateAndCreateDirectory(comlocation, True)
checkdirectory = filechecker.validateAndCreateDirectory(comlocation, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
comicIssues = gcdinfo['totalissues']
@@ -1475,6 +1486,8 @@ def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, weeklyissu
sresults, explicit = mb.findComic(annComicName, mode, issue=None, explicit='all')#,explicit=True)
type='comic'
annual_types_ignore = {'paperback', 'collecting', 'reprints', 'collected', 'print edition', 'tpb', 'available in print'}
if len(sresults) == 1:
logger.fdebug('[IMPORTER-ANNUAL] - 1 result')
if len(sresults) > 0:
@@ -1483,7 +1496,7 @@ def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, weeklyissu
while (num_res < len(sresults)):
sr = sresults[num_res]
logger.fdebug("description:" + sr['description'])
if 'paperback' in sr['description'] or 'collecting' in sr['description'] or 'reprints' in sr['description'] or 'collected' in sr['description']:
if any(x in sr['description'].lower() for x in annual_types_ignore):
logger.fdebug('[IMPORTER-ANNUAL] - tradeback/collected edition detected - skipping ' + str(sr['comicid']))
else:
if comicid in sr['description']:


@@ -49,24 +49,29 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
comic_list = []
comiccnt = 0
extensions = ('cbr','cbz')
cv_location = []
for r, d, f in os.walk(dir):
for files in f:
if 'cvinfo' in files:
cv_location.append(r)
logger.fdebug('CVINFO found: ' + os.path.join(r, files))
if any(files.lower().endswith('.' + x.lower()) for x in extensions):
comic = files
comicpath = os.path.join(r, files)
comicsize = os.path.getsize(comicpath)
logger.fdebug('Comic: ' + comic + ' [' + comicpath + '] - ' + str(comicsize) + ' bytes')
comiccnt+=1
# We need the unicode path to use for logging, inserting into database
unicode_comic_path = comicpath.decode(mylar.SYS_ENCODING, 'replace')
comiccnt+=1
comic_dict = {'ComicFilename': comic,
'ComicLocation': comicpath,
'ComicSize': comicsize,
'Unicode_ComicLocation': unicode_comic_path}
'ComicLocation': comicpath,
'ComicSize': comicsize,
'Unicode_ComicLocation': unicode_comic_path}
comic_list.append(comic_dict)
logger.info("I've found a total of " + str(comiccnt) + " comics....analyzing now")
#logger.info("comiclist: " + str(comic_list))
myDB = db.DBConnection()
@@ -136,9 +141,44 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
# numconv = basnumbs[numbs]
# #logger.fdebug("numconv: " + str(numconv))
issueid_list = []
cvscanned_loc = None
cvinfo_CID = None
for i in comic_list:
logger.fdebug('Analyzing : ' + i['ComicFilename'])
comfilename = i['ComicFilename']
comlocation = i['ComicLocation']
issueinfo = None
#Make sure cvinfo is checked for FIRST (so that CID can be attached to all files properly thereafter as they're scanned in)
if os.path.dirname(comlocation) in cv_location and os.path.dirname(comlocation) != cvscanned_loc:
#if comfilename == 'cvinfo':
logger.info('comfilename: ' + comfilename)
logger.info('cv_location: ' + str(cv_location))
logger.info('comlocation: ' + os.path.dirname(comlocation))
#if cvscanned_loc != comlocation:
try:
with open(os.path.join(os.path.dirname(comlocation), 'cvinfo')) as f:
urllink = f.readline()
logger.fdebug('urllink: ' + str(urllink))
if urllink:
cid = urllink.split('/')
if '4050-' in cid[-2]:
cvinfo_CID = re.sub('4050-', '', cid[-2]).strip()
logger.info('CVINFO file located within directory. Attaching everything in directory that is valid to ComicID: ' + str(cvinfo_CID))
#store the location of the cvinfo so it's applied to the correct directory (since we're scanning multiple directories usually)
cvscanned_loc = os.path.dirname(comlocation)
else:
logger.error("Could not read cvinfo file properly (or it does not contain any data)")
except (OSError, IOError):
logger.error("Could not read cvinfo file properly (or it does not contain any data)")
#else:
# don't scan in it again if it's already been done initially
# continue
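The cvinfo parsing above (read the saved ComicVine URL, strip the '4050-' volume prefix from the second-to-last path segment) as a standalone sketch; the sample URL is hypothetical:

```python
import re

def comicid_from_cvinfo(urllink):
    # cvinfo files store a volume URL ending in '/4050-<ComicID>/';
    # the '4050-' prefix marks a CV volume resource.
    cid = urllink.strip().split('/')
    if len(cid) >= 2 and '4050-' in cid[-2]:
        return re.sub('4050-', '', cid[-2]).strip()
    return None
```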
if mylar.IMP_METADATA:
logger.info('metatagging checking enabled.')
@@ -158,305 +198,349 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
logger.fdebug('Issue Title: ' + issuetitle)
issueyear = issueinfo[0]['year']
logger.fdebug('Issue Year: ' + str(issueyear))
try:
issuevolume = issueinfo[0]['volume']
except:
issuevolume = None
# if used by ComicTagger, Notes field will have the IssueID.
issuenotes = issueinfo[0]['notes']
logger.fdebug('Notes: ' + issuenotes)
if issuenotes is not None:
if 'Issue ID' in issuenotes:
st_find = issuenotes.find('Issue ID')
issuenotes_id = re.sub("[^0-9]", " ", issuenotes[st_find:]).strip()
if issuenotes_id.isdigit():
logger.fdebug('Successfully retrieved CV IssueID for ' + comicname + ' #' + str(issue_number) + ' [' + str(issuenotes_id) + ']')
logger.fdebug("adding " + comicname + " to the import-queue!")
impid = comicname + '-' + str(issueyear) + '-' + str(issue_number) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
logger.fdebug("impid: " + str(impid))
#make sure we only add in those issueid's which don't already have a comicid attached via the cvinfo scan above (this is for reverse-lookup of issueids)
if cvinfo_CID is None:
issueid_list.append(issuenotes_id)
if cvscanned_loc == os.path.dirname(comlocation):
cv_cid = cvinfo_CID
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
else:
cv_cid = None
import_by_comicids.append({
"impid": impid,
"comicid": cv_cid,
"watchmatch": None,
"displayname": helpers.cleanName(comicname),
"comicname": comicname, #com_NAME,
"comicyear": issueyear,
"volume": issuevolume,
"issueid": issuenotes_id,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
})
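The reverse IssueID lookup above leans on ComicTagger writing the ComicVine IssueID into the Notes metadata field. A hedged sketch of that extraction (helper name and sample note text are illustrative):

```python
import re

def issueid_from_notes(issuenotes):
    """Pull the numeric ComicVine IssueID out of a ComicTagger Notes
    field such as 'Tagged with ComicTagger ... [Issue ID 123456]'.
    Returns the ID as a string, or None when no tag is present."""
    if issuenotes and 'Issue ID' in issuenotes:
        st_find = issuenotes.find('Issue ID')
        # blank out everything that is not a digit after the marker
        candidate = re.sub(r'[^0-9]', ' ', issuenotes[st_find:]).strip()
        if candidate.isdigit():
            return candidate
    return None
```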
else:
logger.info(i['ComicLocation'] + ' is not in a metatagged format (cbz). Bypassing reading of the metatags')
comfilename = i['ComicFilename']
comlocation = i['ComicLocation']
#let's clean up the filename for matching purposes
cfilename = re.sub('[\_\#\,\/\:\;\-\!\$\%\&\+\'\?\@]', ' ', comfilename)
#cfilename = re.sub('\s', '_', str(cfilename))
d_filename = re.sub('[\_\#\,\/\;\!\$\%\&\?\@]', ' ', comfilename)
d_filename = re.sub('[\:\-\+\']', '#', d_filename)
if issueinfo is None:
#let's clean up the filename for matching purposes
#strip extraspaces
d_filename = re.sub('\s+', ' ', d_filename)
cfilename = re.sub('\s+', ' ', cfilename)
cfilename = re.sub('[\_\#\,\/\:\;\-\!\$\%\&\+\'\?\@]', ' ', comfilename)
#cfilename = re.sub('\s', '_', str(cfilename))
d_filename = re.sub('[\_\#\,\/\;\!\$\%\&\?\@]', ' ', comfilename)
d_filename = re.sub('[\:\-\+\']', '#', d_filename)
#versioning - remove it
subsplit = cfilename.replace('_', ' ').split()
volno = None
volyr = None
for subit in subsplit:
if subit[0].lower() == 'v':
vfull = 0
if subit[1:].isdigit():
#if in format v1, v2009 etc...
if len(subit) > 3:
# if it's greater than 3 in length, then the format is Vyyyy
vfull = 1 # add on 1 character length to account for extra space
cfilename = re.sub(subit, '', cfilename)
d_filename = re.sub(subit, '', d_filename)
volno = re.sub("[^0-9]", " ", subit)
elif subit.lower()[:3] == 'vol':
#if in format vol.2013 etc
#because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
logger.fdebug('volume indicator detected as version #:' + str(subit))
cfilename = re.sub(subit, '', cfilename)
cfilename = " ".join(cfilename.split())
d_filename = re.sub(subit, '', d_filename)
d_filename = " ".join(d_filename.split())
volyr = re.sub("[^0-9]", " ", subit).strip()
logger.fdebug('volume year set as : ' + str(volyr))
cm_cn = 0
#strip extraspaces
d_filename = re.sub('\s+', ' ', d_filename)
cfilename = re.sub('\s+', ' ', cfilename)
#we need to track the counter to make sure we are comparing the right array parts
#this takes care of the brackets :)
m = re.findall('[^()]+', d_filename) #cfilename)
lenm = len(m)
logger.fdebug("there are " + str(lenm) + " words.")
cnt = 0
yearmatch = "false"
foundonwatch = "False"
issue = 999999
#versioning - remove it
subsplit = cfilename.replace('_', ' ').split()
volno = None
volyr = None
for subit in subsplit:
if subit[0].lower() == 'v':
vfull = 0
if subit[1:].isdigit():
#if in format v1, v2009 etc...
if len(subit) > 3:
# if it's greater than 3 in length, then the format is Vyyyy
vfull = 1 # add on 1 character length to account for extra space
cfilename = re.sub(subit, '', cfilename)
d_filename = re.sub(subit, '', d_filename)
volno = re.sub("[^0-9]", " ", subit)
elif subit.lower()[:3] == 'vol':
#if in format vol.2013 etc
#because the '.' in Vol. gets removed, let's loop thru again after the Vol hit to remove it entirely
logger.fdebug('volume indicator detected as version #:' + str(subit))
cfilename = re.sub(subit, '', cfilename)
cfilename = " ".join(cfilename.split())
d_filename = re.sub(subit, '', d_filename)
d_filename = " ".join(d_filename.split())
volyr = re.sub("[^0-9]", " ", subit).strip()
logger.fdebug('volume year set as : ' + str(volyr))
cm_cn = 0
#we need to track the counter to make sure we are comparing the right array parts
#this takes care of the brackets :)
m = re.findall('[^()]+', d_filename) #cfilename)
lenm = len(m)
logger.fdebug("there are " + str(lenm) + " words.")
cnt = 0
yearmatch = "false"
foundonwatch = "False"
issue = 999999
while (cnt < lenm):
if m[cnt] is None: break
if m[cnt] == ' ':
pass
else:
logger.fdebug(str(cnt) + ". Bracket Word: " + m[cnt])
if cnt == 0:
comic_andiss = m[cnt]
logger.fdebug("Comic: " + comic_andiss)
# if it's not in the standard format this will bork.
# let's try to accommodate (somehow).
# first remove the extension (if any)
extensions = ('cbr', 'cbz')
if comic_andiss.lower().endswith(extensions):
comic_andiss = comic_andiss[:-4]
logger.fdebug("removed extension from filename.")
#now we have to break up the string regardless of formatting.
#let's force the spaces.
comic_andiss = re.sub('_', ' ', comic_andiss)
cs = comic_andiss.split()
cs_len = len(cs)
cn = ''
ydetected = 'no'
idetected = 'no'
decimaldetect = 'no'
for i in reversed(xrange(len(cs))):
#start at the end.
logger.fdebug("word: " + str(cs[i]))
#assume once we find issue - everything prior is the actual title
#idetected = no will ignore everything so it will assume all title
if (cs[i][:-2] == '19' or cs[i][:-2] == '20') and idetected == 'no':
logger.fdebug("year detected: " + str(cs[i]))
ydetected = 'yes'
result_comyear = cs[i]
elif cs[i].isdigit() and idetected == 'no' or '.' in cs[i]:
if '.' in cs[i]:
#make sure it's a number on either side of decimal and assume decimal issue.
decst = cs[i].find('.')
dec_st = cs[i][:decst]
dec_en = cs[i][decst +1:]
logger.fdebug("st: " + str(dec_st))
logger.fdebug("en: " + str(dec_en))
if dec_st.isdigit() and dec_en.isdigit():
logger.fdebug("decimal issue detected...adjusting.")
issue = dec_st + "." + dec_en
logger.fdebug("issue detected: " + str(issue))
idetected = 'yes'
else:
logger.fdebug("false decimal representation - chunking into extra word.")
cn = cn + cs[i] + " "
#break
else:
issue = cs[i]
logger.fdebug("issue detected : " + str(issue))
idetected = 'yes'
elif '\#' in cs[i] or decimaldetect == 'yes':
logger.fdebug("issue detected: " + str(cs[i]))
idetected = 'yes'
else: cn = cn + cs[i] + " "
if ydetected == 'no':
#assume no year given in filename...
result_comyear = "0000"
logger.fdebug("cm?: " + str(cn))
if issue != 999999:
comiss = issue
else:
logger.error("Invalid Issue number (none present) for " + comfilename)
break
cnsplit = cn.split()
cname = ''
findcn = 0
while (findcn < len(cnsplit)):
cname = cname + cs[findcn] + " "
findcn+=1
cname = cname[:len(cname)-1] # drop the end space...
logger.fdebug('assuming name is : ' + cname)
com_NAME = cname
logger.fdebug('com_NAME : ' + com_NAME)
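The word scanner above walks the filename tokens backwards, treating the first numeric token as the issue while allowing decimal issues and rejecting false decimals. A standalone sketch of that token test (the helper name is illustrative):

```python
def parse_issue_token(tok):
    """Decide whether a filename token is an issue number, allowing
    decimal issues such as '7.1' while rejecting false decimals like
    'v2.cbz'. Returns the issue string, or None for a title word."""
    if tok.isdigit():
        return tok
    if '.' in tok:
        # a decimal issue needs digits on both sides of the point
        dec_st, _, dec_en = tok.partition('.')
        if dec_st.isdigit() and dec_en.isdigit():
            return dec_st + '.' + dec_en
    return None
```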
yearmatch = "True"
while (cnt < lenm):
if m[cnt] is None: break
if m[cnt] == ' ':
pass
else:
logger.fdebug('checking ' + m[cnt])
# we're assuming that the year is in brackets (and it should be damnit)
if m[cnt][:-2] == '19' or m[cnt][:-2] == '20':
logger.fdebug('year detected: ' + str(m[cnt]))
ydetected = 'yes'
result_comyear = m[cnt]
elif m[cnt][:3].lower() in datelist:
logger.fdebug('possible issue date format given - verifying')
#if the date of the issue is given as (Jan 2010) or (January 2010) let's adjust.
#keeping in mind that ',' and '.' are already stripped from the string
if m[cnt][-4:].isdigit():
ydetected = 'yes'
result_comyear = m[cnt][-4:]
logger.fdebug('Valid Issue year of ' + str(result_comyear) + ' detected in format of ' + str(m[cnt]))
cnt+=1
logger.fdebug(str(cnt) + ". Bracket Word: " + m[cnt])
if cnt == 0:
comic_andiss = m[cnt]
logger.fdebug("Comic: " + comic_andiss)
# if it's not in the standard format this will bork.
# let's try to accommodate (somehow).
# first remove the extension (if any)
extensions = ('cbr', 'cbz')
if comic_andiss.lower().endswith(extensions):
comic_andiss = comic_andiss[:-4]
logger.fdebug("removed extension from filename.")
#now we have to break up the string regardless of formatting.
#let's force the spaces.
comic_andiss = re.sub('_', ' ', comic_andiss)
cs = comic_andiss.split()
cs_len = len(cs)
cn = ''
ydetected = 'no'
idetected = 'no'
decimaldetect = 'no'
for i in reversed(xrange(len(cs))):
#start at the end.
logger.fdebug("word: " + str(cs[i]))
#assume once we find issue - everything prior is the actual title
#idetected = no will ignore everything so it will assume all title
if (cs[i][:-2] == '19' or cs[i][:-2] == '20') and idetected == 'no':
logger.fdebug("year detected: " + str(cs[i]))
ydetected = 'yes'
result_comyear = cs[i]
elif cs[i].isdigit() and idetected == 'no' or '.' in cs[i]:
if '.' in cs[i]:
#make sure it's a number on either side of decimal and assume decimal issue.
decst = cs[i].find('.')
dec_st = cs[i][:decst]
dec_en = cs[i][decst +1:]
logger.fdebug("st: " + str(dec_st))
logger.fdebug("en: " + str(dec_en))
if dec_st.isdigit() and dec_en.isdigit():
logger.fdebug("decimal issue detected...adjusting.")
issue = dec_st + "." + dec_en
logger.fdebug("issue detected: " + str(issue))
idetected = 'yes'
else:
logger.fdebug("false decimal representation - chunking into extra word.")
cn = cn + cs[i] + " "
#break
else:
issue = cs[i]
logger.fdebug("issue detected : " + str(issue))
idetected = 'yes'
displength = len(cname)
logger.fdebug('cname length : ' + str(displength) + ' --- ' + str(cname))
logger.fdebug('d_filename is : ' + d_filename)
charcount = d_filename.count('#')
logger.fdebug('charcount is : ' + str(charcount))
if charcount > 0:
logger.fdebug('entering loop')
for i, m in enumerate(re.finditer('\#', d_filename)):
if m.end() <= displength:
logger.fdebug(comfilename[m.start():m.end()])
# find occurrence in comfilename, then replace into d_filename so special characters are brought across
newchar = comfilename[m.start():m.end()]
logger.fdebug('newchar:' + str(newchar))
d_filename = d_filename[:m.start()] + str(newchar) + d_filename[m.end():]
logger.fdebug('d_filename:' + str(d_filename))
dispname = d_filename[:displength]
logger.fdebug('dispname : ' + dispname)
splitit = []
watchcomic_split = []
logger.fdebug("filename comic and issue: " + comic_andiss)
#changed this from '' to ' '
comic_iss_b4 = re.sub('[\-\:\,]', ' ', comic_andiss)
comic_iss = comic_iss_b4.replace('.', ' ')
comic_iss = re.sub('[\s+]', ' ', comic_iss).strip()
logger.fdebug("adjusted comic and issue: " + str(comic_iss))
#remove 'the' from here for proper comparisons.
if ' the ' in comic_iss.lower():
comic_iss = re.sub('\\bthe\\b', '', comic_iss).strip()
splitit = comic_iss.split(None)
logger.fdebug("adjusting from: " + str(comic_iss_b4) + " to: " + str(comic_iss))
#here we cycle through the Watchlist looking for a match.
while (cm_cn < watchcnt):
#setup the watchlist
comname = ComicName[cm_cn]
comyear = ComicYear[cm_cn]
compub = ComicPublisher[cm_cn]
comtotal = ComicTotal[cm_cn]
comicid = ComicID[cm_cn]
watch_location = ComicLocation[cm_cn]
# there shouldn't be an issue in the comic now, so let's just assume it's all gravy.
splitst = len(splitit)
watchcomic_split = helpers.cleanName(comname)
watchcomic_split = re.sub('[\-\:\,\.]', ' ', watchcomic_split).split(None)
logger.fdebug(str(splitit) + " file series word count: " + str(splitst))
logger.fdebug(str(watchcomic_split) + " watchlist word count: " + str(len(watchcomic_split)))
if (splitst) != len(watchcomic_split):
logger.fdebug("incorrect comic lengths...not a match")
# if str(splitit[0]).lower() == "the":
# logger.fdebug("THE word detected...attempting to adjust pattern matching")
# splitit[0] = splitit[4:]
else:
logger.fdebug("length match..proceeding")
n = 0
scount = 0
logger.fdebug("search-length: " + str(splitst))
logger.fdebug("Watchlist-length: " + str(len(watchcomic_split)))
while (n <= (splitst) -1):
logger.fdebug("splitit: " + str(splitit[n]))
if n < (splitst) and n < len(watchcomic_split):
logger.fdebug(str(n) + " Comparing: " + str(watchcomic_split[n]) + " .to. " + str(splitit[n]))
if '+' in watchcomic_split[n]:
watchcomic_split[n] = re.sub('\+', '', str(watchcomic_split[n]))
if str(watchcomic_split[n].lower()) in str(splitit[n].lower()) and len(watchcomic_split[n]) >= len(splitit[n]):
logger.fdebug("word matched on : " + str(splitit[n]))
scount+=1
#elif ':' in splitit[n] or '-' in splitit[n]:
# splitrep = splitit[n].replace('-', '')
# logger.fdebug("non-character keyword...skipped on " + splitit[n])
elif str(splitit[n]).lower().startswith('v'):
logger.fdebug("possible versioning..checking")
#we hit a versioning # - account for it
if splitit[n][1:].isdigit():
comicversion = str(splitit[n])
logger.fdebug("version found: " + str(comicversion))
else:
logger.fdebug("Comic / Issue section")
if splitit[n].isdigit():
logger.fdebug("issue detected")
elif '\#' in cs[i] or decimaldetect == 'yes':
logger.fdebug("issue detected: " + str(cs[i]))
idetected = 'yes'
else: cn = cn + cs[i] + " "
if ydetected == 'no':
#assume no year given in filename...
result_comyear = "0000"
logger.fdebug("cm?: " + str(cn))
if issue != 999999:
comiss = issue
else:
logger.fdebug("non-match for: "+ str(splitit[n]))
pass
n+=1
#set the match threshold to 80% (for now)
# if it's less than 80% consider it a non-match and discard.
#splitit has to splitit-1 because last position is issue.
wordcnt = int(scount)
logger.fdebug("scount:" + str(wordcnt))
totalcnt = int(splitst)
logger.fdebug("splitit-len:" + str(totalcnt))
spercent = (wordcnt / float(totalcnt)) * 100
logger.fdebug("we got " + str(spercent) + " percent.")
if int(spercent) >= 80:
logger.fdebug("it's a go captain... - we matched " + str(spercent) + "%!")
logger.fdebug("this should be a match!")
logger.fdebug("issue we found for is : " + str(comiss))
#set the year to the series we just found ;)
result_comyear = comyear
#issue comparison now as well
logger.info(u"Found " + comname + " (" + str(comyear) + ") issue: " + str(comiss))
watchmatch = str(comicid)
dispname = DisplayName[cm_cn]
foundonwatch = "True"
break
elif int(spercent) < 80:
logger.fdebug("failure - we only got " + str(spercent) + "% right!")
cm_cn+=1
logger.error("Invalid Issue number (none present) for " + comfilename)
break
cnsplit = cn.split()
cname = ''
findcn = 0
while (findcn < len(cnsplit)):
cname = cname + cs[findcn] + " "
findcn+=1
cname = cname[:len(cname)-1] # drop the end space...
logger.fdebug('assuming name is : ' + cname)
com_NAME = cname
logger.fdebug('com_NAME : ' + com_NAME)
yearmatch = "True"
else:
logger.fdebug('checking ' + m[cnt])
# we're assuming that the year is in brackets (and it should be damnit)
if m[cnt][:-2] == '19' or m[cnt][:-2] == '20':
logger.fdebug('year detected: ' + str(m[cnt]))
ydetected = 'yes'
result_comyear = m[cnt]
elif m[cnt][:3].lower() in datelist:
logger.fdebug('possible issue date format given - verifying')
#if the date of the issue is given as (Jan 2010) or (January 2010) let's adjust.
#keeping in mind that ',' and '.' are already stripped from the string
if m[cnt][-4:].isdigit():
ydetected = 'yes'
result_comyear = m[cnt][-4:]
logger.fdebug('Valid Issue year of ' + str(result_comyear) + ' detected in format of ' + str(m[cnt]))
cnt+=1
if foundonwatch == "False":
watchmatch = None
#---if it's not a match - send it to the importer.
n = 0
displength = len(cname)
logger.fdebug('cname length : ' + str(displength) + ' --- ' + str(cname))
logger.fdebug('d_filename is : ' + d_filename)
charcount = d_filename.count('#')
logger.fdebug('charcount is : ' + str(charcount))
if charcount > 0:
logger.fdebug('entering loop')
for i, m in enumerate(re.finditer('\#', d_filename)):
if m.end() <= displength:
logger.fdebug(comfilename[m.start():m.end()])
# find occurrence in comfilename, then replace into d_filename so special characters are brought across
newchar = comfilename[m.start():m.end()]
logger.fdebug('newchar:' + str(newchar))
d_filename = d_filename[:m.start()] + str(newchar) + d_filename[m.end():]
logger.fdebug('d_filename:' + str(d_filename))
dispname = d_filename[:displength]
logger.fdebug('dispname : ' + dispname)
splitit = []
watchcomic_split = []
logger.fdebug("filename comic and issue: " + comic_andiss)
#changed this from '' to ' '
comic_iss_b4 = re.sub('[\-\:\,]', ' ', comic_andiss)
comic_iss = comic_iss_b4.replace('.', ' ')
comic_iss = re.sub('[\s+]', ' ', comic_iss).strip()
logger.fdebug("adjusted comic and issue: " + str(comic_iss))
#remove 'the' from here for proper comparisons.
if ' the ' in comic_iss.lower():
comic_iss = re.sub('\\bthe\\b', '', comic_iss).strip()
splitit = comic_iss.split(None)
logger.fdebug("adjusting from: " + str(comic_iss_b4) + " to: " + str(comic_iss))
#here we cycle through the Watchlist looking for a match.
while (cm_cn < watchcnt):
#setup the watchlist
comname = ComicName[cm_cn]
comyear = ComicYear[cm_cn]
compub = ComicPublisher[cm_cn]
comtotal = ComicTotal[cm_cn]
comicid = ComicID[cm_cn]
watch_location = ComicLocation[cm_cn]
# there shouldn't be an issue in the comic now, so let's just assume it's all gravy.
splitst = len(splitit)
watchcomic_split = helpers.cleanName(comname)
watchcomic_split = re.sub('[\-\:\,\.]', ' ', watchcomic_split).split(None)
logger.fdebug(str(splitit) + " file series word count: " + str(splitst))
logger.fdebug(str(watchcomic_split) + " watchlist word count: " + str(len(watchcomic_split)))
if (splitst) != len(watchcomic_split):
logger.fdebug("incorrect comic lengths...not a match")
# if str(splitit[0]).lower() == "the":
# logger.fdebug("THE word detected...attempting to adjust pattern matching")
# splitit[0] = splitit[4:]
else:
logger.fdebug("length match..proceeding")
n = 0
scount = 0
logger.fdebug("search-length: " + str(splitst))
logger.fdebug("Watchlist-length: " + str(len(watchcomic_split)))
while (n <= (splitst) -1):
logger.fdebug("splitit: " + str(splitit[n]))
if n < (splitst) and n < len(watchcomic_split):
logger.fdebug(str(n) + " Comparing: " + str(watchcomic_split[n]) + " .to. " + str(splitit[n]))
if '+' in watchcomic_split[n]:
watchcomic_split[n] = re.sub('\+', '', str(watchcomic_split[n]))
if str(watchcomic_split[n].lower()) in str(splitit[n].lower()) and len(watchcomic_split[n]) >= len(splitit[n]):
logger.fdebug("word matched on : " + str(splitit[n]))
scount+=1
#elif ':' in splitit[n] or '-' in splitit[n]:
# splitrep = splitit[n].replace('-', '')
# logger.fdebug("non-character keyword...skipped on " + splitit[n])
elif str(splitit[n]).lower().startswith('v'):
logger.fdebug("possible versioning..checking")
#we hit a versioning # - account for it
if splitit[n][1:].isdigit():
comicversion = str(splitit[n])
logger.fdebug("version found: " + str(comicversion))
else:
logger.fdebug("Comic / Issue section")
if splitit[n].isdigit():
logger.fdebug("issue detected")
else:
logger.fdebug("non-match for: "+ str(splitit[n]))
pass
n+=1
#set the match threshold to 80% (for now)
# if it's less than 80% consider it a non-match and discard.
#splitit has to splitit-1 because last position is issue.
wordcnt = int(scount)
logger.fdebug("scount:" + str(wordcnt))
totalcnt = int(splitst)
logger.fdebug("splitit-len:" + str(totalcnt))
spercent = (wordcnt / float(totalcnt)) * 100
logger.fdebug("we got " + str(spercent) + " percent.")
if int(spercent) >= 80:
logger.fdebug("it's a go captain... - we matched " + str(spercent) + "%!")
logger.fdebug("this should be a match!")
logger.fdebug("issue we found for is : " + str(comiss))
#set the year to the series we just found ;)
result_comyear = comyear
#issue comparison now as well
logger.info(u"Found " + comname + " (" + str(comyear) + ") issue: " + str(comiss))
watchmatch = str(comicid)
dispname = DisplayName[cm_cn]
foundonwatch = "True"
break
elif int(spercent) < 80:
logger.fdebug("failure - we only got " + str(spercent) + "% right!")
cm_cn+=1
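The watchlist comparison above scores matched words against total words and applies an 80% threshold; with Python 2 style integer division the score collapses to 0 or 100 and the threshold misfires, hence the float division. A standalone sketch (the helper name is illustrative):

```python
def match_percent(matched_words, total_words):
    """Word-match score between a filename title and a watchlist
    title, as a percentage. Float division keeps partial matches
    (e.g. 4 of 5 words) from truncating to zero."""
    if total_words == 0:
        return 0.0
    return (matched_words / float(total_words)) * 100
```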
if foundonwatch == "False":
watchmatch = None
#---if it's not a match - send it to the importer.
n = 0
if volyr is None:
if result_comyear is None:
result_comyear = '0000' #no year in filename basically.
else:
if result_comyear is None:
result_comyear = volyr
if volno is None:
if volyr is None:
vol_label = None
if result_comyear is None:
result_comyear = '0000' #no year in filename basically.
else:
vol_label = volyr
else:
vol_label = volno
if result_comyear is None:
result_comyear = volyr
if volno is None:
if volyr is None:
vol_label = None
else:
vol_label = volyr
else:
vol_label = volno
logger.fdebug("adding " + com_NAME + " to the import-queue!")
impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
logger.fdebug("impid: " + str(impid))
import_by_comicids.append({
"impid": impid,
"watchmatch": watchmatch,
"displayname": dispname,
"comicname": dispname, #com_NAME,
"comicyear": result_comyear,
"volume": vol_label,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
})
logger.fdebug("adding " + com_NAME + " to the import-queue!")
impid = dispname + '-' + str(result_comyear) + '-' + str(comiss) #com_NAME + "-" + str(result_comyear) + "-" + str(comiss)
logger.fdebug("impid: " + str(impid))
if cvscanned_loc == os.path.dirname(comlocation):
cv_cid = cvinfo_CID
logger.info('CVINFO_COMICID attached : ' + str(cv_cid))
else:
cv_cid = None
import_by_comicids.append({
"impid": impid,
"comicid": cv_cid,
"issueid": None,
"watchmatch": watchmatch,
"displayname": dispname,
"comicname": dispname, #com_NAME,
"comicyear": result_comyear,
"volume": vol_label,
"comfilename": comfilename,
"comlocation": comlocation.decode(mylar.SYS_ENCODING)
})
#logger.fdebug('import_by_ids: ' + str(import_by_comicids))
#reverse lookup all of the gathered IssueID's in order to get the related ComicID
vals = mylar.cv.getComic(None, 'import', comicidlist=issueid_list)
logger.fdebug('vals returned:' + str(vals))
if len(watch_kchoice) > 0:
watchchoice['watchlist'] = watch_kchoice
#logger.fdebug("watchchoice: " + str(watchchoice))
@@ -535,7 +619,13 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
return "Completed"
if len(import_by_comicids) > 0:
import_comicids['comic_info'] = import_by_comicids
if vals:
import_comicids['issueid_info'] = vals
else:
import_comicids['issueid_info'] = None
logger.fdebug('import comicids: ' + str(import_by_comicids))
return import_comicids, len(import_by_comicids)
@@ -566,14 +656,35 @@ def scanLibrary(scan=None, queue=None):
logger.fdebug("number of records: " + str(noids))
while (sl < int(noids)):
soma_sl = soma['comic_info'][sl]
issue_info = soma['issueid_info']
logger.fdebug("soma_sl: " + str(soma_sl))
logger.fdebug("comicname: " + soma_sl['comicname'].encode('utf-8'))
logger.fdebug("filename: " + soma_sl['comfilename'].encode('utf-8'))
logger.fdebug("issue_info: " + str(issue_info))
logger.fdebug("comicname: " + soma_sl['comicname'])
logger.fdebug("filename: " + soma_sl['comfilename'])
if issue_info is not None:
for iss in issue_info:
if soma_sl['issueid'] == iss['IssueID']:
logger.info('IssueID match: ' + str(iss['IssueID']))
logger.info('ComicName: ' + str(iss['ComicName'] + '[' + str(iss['ComicID'])+ ']'))
IssID = iss['IssueID']
ComicID = iss['ComicID']
displayname = iss['ComicName']
comicname = iss['ComicName']
break
else:
IssID = None
displayname = soma_sl['displayname'].encode('utf-8')
comicname = soma_sl['comicname'].encode('utf-8')
ComicID = soma_sl['comicid'] #if it's been scanned in for cvinfo, this will be the CID - otherwise it's None
controlValue = {"impID": soma_sl['impid']}
newValue = {"ComicYear": soma_sl['comicyear'],
"Status": "Not Imported",
"ComicName": soma_sl['comicname'].encode('utf-8'),
"DisplayName": soma_sl['displayname'].encode('utf-8'),
"ComicName": comicname,
"DisplayName": displayname,
"ComicID": ComicID,
"IssueID": IssID,
"Volume": soma_sl['volume'],
"ComicFilename": soma_sl['comfilename'].encode('utf-8'),
"ComicLocation": soma_sl['comlocation'].encode('utf-8'),
"ImportDate": helpers.today(),


@@ -15,7 +15,7 @@
import os
import sys
#import logging
import logging
import traceback
import threading
import platform
@@ -30,9 +30,9 @@ FILENAME = 'mylar.log'
MAX_FILES = 5
# Mylar logger
logger = getLogger('mylar')
logger = logging.getLogger('mylar')
class LogListHandler(Handler):
class LogListHandler(logging.Handler):
"""
Log handler for Web UI.
"""
@@ -42,7 +42,7 @@ class LogListHandler(Handler):
message = message.replace("\n", "<br />")
mylar.LOG_LIST.insert(0, (helpers.now(), message, record.levelname, record.threadName))
def initLogger(verbose=1):
def initLogger(console=False, log_dir=False, verbose=False):
#concurrentLogHandler/0.8.7 (to deal with windows locks)
#since this only happens on windows boxes, if it's nix/mac use the default logger.
if platform.system() == 'Windows':
@@ -72,46 +72,45 @@ def initLogger(verbose=1):
* RotatingFileHandler: for the file Mylar.log
* LogListHandler: for Web UI
* StreamHandler: for console (if verbose > 0)
* StreamHandler: for console
"""
# Close and remove old handlers. This is required to reinit the loggers
# at runtime
for handler in logger.handlers[:]:
# Just make sure it is cleaned up.
if isinstance(handler, RFHandler):
handler.close()
elif isinstance(handler, logging.StreamHandler):
handler.flush()
logger.removeHandler(handler)
# Configure the logger to accept all messages
logger.propagate = False
logger.setLevel(DEBUG)# if verbose == 2 else logging.INFO)
# Setup file logger
filename = os.path.join(mylar.LOG_DIR, FILENAME)
file_formatter = Formatter('%(asctime)s - %(levelname)-7s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
file_handler = RFHandler(filename, "a", maxBytes=MAX_SIZE, backupCount=MAX_FILES)
file_handler.setLevel(DEBUG)
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
logger.setLevel(logging.DEBUG if verbose else logging.INFO)
# Add list logger
loglist_handler = LogListHandler()
#-- this needs to get enabled and logging changed everywhere so the accessing the log GUI won't hang the system.
#-- right now leave it set to INFO only, everything else will still get logged to the mylar.log file.
#if verbose == 2:
# loglist_handler.setLevel(logging.DEBUG)
#else:
# loglist_handler.setLevel(logging.INFO)
#--
loglist_handler.setLevel(INFO)
loglist_handler.setLevel(logging.DEBUG)
logger.addHandler(loglist_handler)
# Setup file logger
if log_dir:
filename = os.path.join(mylar.LOG_DIR, FILENAME)
file_formatter = Formatter('%(asctime)s - %(levelname)-7s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
file_handler = RFHandler(filename, "a", maxBytes=MAX_SIZE, backupCount=MAX_FILES)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
# Setup console logger
if verbose:
console_formatter = Formatter('%(asctime)s - %(levelname)s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
console_handler = StreamHandler()
if console:
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
console_handler = logging.StreamHandler()
console_handler.setFormatter(console_formatter)
#print 'verbose is ' + str(verbose)
#if verbose == 2:
# console_handler.setLevel(logging.DEBUG)
#else:
# console_handler.setLevel(logging.INFO)
console_handler.setLevel(INFO)
console_handler.setLevel(logging.DEBUG)
logger.addHandler(console_handler)
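The reworked `initLogger` above opens the root logger wide (or narrows it via `verbose`) and lets each handler filter independently. A self-contained sketch of that layout, with illustrative names rather than Mylar's actual API:

```python
import logging
import logging.handlers

def init_logger(console=False, log_file=None, verbose=False):
    """Minimal sketch: the logger's own level is set from 'verbose',
    while file/console handlers each accept DEBUG and up. Re-running
    the function replaces old handlers so it is safe at runtime."""
    log = logging.getLogger('demo')
    # close and remove old handlers so the logger can be re-initialised
    for handler in log.handlers[:]:
        handler.close()
        log.removeHandler(handler)
    log.propagate = False
    log.setLevel(logging.DEBUG if verbose else logging.INFO)
    if log_file:
        fh = logging.handlers.RotatingFileHandler(
            log_file, maxBytes=1000000, backupCount=5)
        fh.setLevel(logging.DEBUG)
        log.addHandler(fh)
    if console:
        ch = logging.StreamHandler()
        ch.setLevel(logging.DEBUG)
        log.addHandler(ch)
    return log
```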


@@ -107,7 +107,10 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
explicit = 'all'
#OR
if explicit == 'loose':
if ' and ' in comicquery.lower() or ' & ' in comicquery:
logger.fdebug('Enforcing exact naming match due to operator in title (and)')
explicit = 'all'
elif explicit == 'loose':
logger.fdebug('Changing to loose mode - this will match ANY of the search words')
comicquery = name.replace(" ", " OR ")
elif explicit == 'explicit':
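The change above forces exact matching when a title contains 'and' or '&', because the loose mode rewrites spaces to ' OR ' and would match any single word of the title. A standalone sketch of that decision (helper name is illustrative):

```python
def build_query(name, explicit):
    """Sketch of the search-mode adjustment: titles containing
    'and' / '&' force exact ('all') matching, since loose mode
    would turn 'Cloak and Dagger' into a query matching any of
    the three words and return badly mixed results."""
    if ' and ' in name.lower() or ' & ' in name:
        explicit = 'all'
    if explicit == 'loose':
        return name.replace(' ', ' OR ')
    return name
```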


@@ -114,9 +114,9 @@ def newpull():
f.write('%s\n' % (newdates))
for pl in pull_list:
if pl['publisher'] == oldpub:
exceptln = str(pl['ID']) + "\t" + str(pl['name']) + "\t" + str(pl['price'])
exceptln = str(pl['ID']) + "\t" + pl['name'].replace(u"\xA0", u" ") + "\t" + str(pl['price'])
else:
exceptln = pl['publisher'] + "\n" + str(pl['ID']) + "\t" + str(pl['name']) + "\t" + str(pl['price'])
exceptln = pl['publisher'] + "\n" + str(pl['ID']) + "\t" + pl['name'].replace(u"\xA0", u" ") + "\t" + str(pl['price'])
for lb in breakhtml:
exceptln = re.sub(lb, '', exceptln).strip()
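The `\xA0` replacement above strips the non-breaking space that the alternate pull-list source embeds in some titles; left in place it breaks the tab-delimited pull file. A one-liner sketch (helper name is illustrative):

```python
def clean_pull_name(name):
    """Replace the non-breaking space (U+00A0) with a plain space
    so pull-list names survive the tab-delimited write-out."""
    return name.replace(u'\xa0', u' ')
```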


@@ -231,6 +231,8 @@ class Readinglist(object):
logger.info(module + ' The host {0} is Reachable. Preparing to send files.'.format(cmd[-1]))
success = mylar.ftpsshup.sendfiles(sendlist)
if success == 'fail':
return
if len(success) > 0:
for succ in success:


@@ -740,6 +740,10 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
logger.fdebug('[32P-AUTHENTICATION] 32P (Auth Mode) Authentication enabled. Keys have not been established yet, attempting to gather.')
feed32p = auth32p.info32p(reauthenticate=True)
feedinfo = feed32p.authenticate()
if feedinfo == "disable":
mylar.ENABLE_32P = 0
mylar.config_write()
return "fail"
if mylar.PASSKEY_32P is None or mylar.AUTHKEY_32P is None or mylar.KEYS_32P is None:
logger.error('[RSS] Unable to sign-on to 32P to validate settings and initiate download sequence. Please enter/check your username password in the configuration.')
return "fail"
@@ -816,6 +820,10 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
logger.info('Attempting to re-authenticate against 32P and poll new keys as required.')
feed32p = auth32p.info32p(reauthenticate=True)
feedinfo = feed32p.authenticate()
if feedinfo == "disable":
mylar.ENABLE_32P = 0
mylar.config_write()
return "fail"
try:
r = requests.get(url, params=payload, verify=verify, stream=True, headers=headers)
except Exception, e:


@@ -76,10 +76,13 @@ class tehMain():
if mylar.KEYS_32P is None:
feed32p = auth32p.info32p()
feedinfo = feed32p.authenticate()
if feedinfo == "disable":
mylar.ENABLE_32P = 0
mylar.config_write()
else:
feedinfo = mylar.FEEDINFO_32P
if feedinfo is None or len(feedinfo) == 0:
if feedinfo is None or len(feedinfo) == 0 or feedinfo == "disable":
logger.error('[RSS] Unable to retrieve any information from 32P for RSS Feeds. Skipping for now.')
else:
rsscheck.torrents(pickfeed='1', feedinfo=feedinfo[0])


@@ -72,7 +72,7 @@ class Scheduler:
logger.fdebug("Starting new thread: " + self.threadName)
if self.delay:
logger.info('delaying startup thread for ' + str(self.delay) + ' seconds to avoid locks.')
logger.info('delaying thread for ' + str(self.delay) + ' seconds to avoid locks.')
time.sleep(self.delay)
self.action.run()


@@ -26,7 +26,9 @@ import mylar
from mylar import db, logger, helpers, filechecker
def dbUpdate(ComicIDList=None, calledfrom=None):
if mylar.IMPORTLOCK:
logger.info('Import is currently running - deferring this until the next scheduled run sequence.')
return
myDB = db.DBConnection()
#print "comicidlist:" + str(ComicIDList)
if ComicIDList is None:


@@ -1,4 +1,3 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
@@ -20,6 +19,7 @@ import os
import cherrypy
import datetime
import re
import json
from mako.template import Template
from mako.lookup import TemplateLookup
@@ -162,24 +162,42 @@ class WebInterface(object):
"delete_dir": helpers.checked(mylar.DELETE_REMOVE_DIR)
}
if mylar.ANNUALS_ON:
annuals = myDB.select("SELECT * FROM annuals WHERE ComicID=?", [ComicID])
annuals = myDB.select("SELECT * FROM annuals WHERE ComicID=? ORDER BY ComicID, Int_IssueNumber DESC", [ComicID])
#we need to load in the annual['ReleaseComicName'] and annual['ReleaseComicID']
#then group by ReleaseComicID, in an attempt to create separate tables for each different annual series.
#this should allow for annuals, specials, one-shots, etc all to be included if desired.
acnt = 0
aName = []
annuals_list = []
annualinfo = {}
prevcomicid = None
for ann in annuals:
if not any(d.get('annualComicID', None) == str(ann['ReleaseComicID']) for d in aName):
aName.append({"annualComicName": ann['ReleaseComicName'],
"annualComicID": ann['ReleaseComicID']})
"annualComicID": ann['ReleaseComicID']})
annuals_list.append({"Issue_Number": ann['Issue_Number'],
"Int_IssueNumber": ann['Int_IssueNumber'],
"IssueName": ann['IssueName'],
"IssueDate": ann['IssueDate'],
"Status": ann['Status'],
"Location": ann['Location'],
"ComicID": ann['ComicID'],
"IssueID": ann['IssueID'],
"ReleaseComicID": ann['ReleaseComicID'],
"ComicName": ann['ComicName'],
"ComicSize": ann['ComicSize'],
"ReleaseComicName": ann['ReleaseComicName'],
"PrevComicID": prevcomicid})
prevcomicid = ann['ReleaseComicID']
acnt+=1
annualinfo = aName
#annualinfo['count'] = acnt
else:
annuals = None
annuals_list = None
aName = None
return serve_template(templatename="comicdetails.html", title=comic['ComicName'], comic=comic, issues=issues, comicConfig=comicConfig, isCounts=isCounts, series=series, annuals=annuals, annualinfo=aName)
return serve_template(templatename="comicdetails.html", title=comic['ComicName'], comic=comic, issues=issues, comicConfig=comicConfig, isCounts=isCounts, series=series, annuals=annuals_list, annualinfo=aName)
comicDetails.exposed = True
def searchit(self, name, issue=None, mode=None, type=None, explicit=None, serinfo=None):
@@ -945,7 +963,6 @@ class WebInterface(object):
else:
newaction = action
for IssueID in args:
logger.info(IssueID)
if any([IssueID is None, 'issue_table' in IssueID, 'history_table' in IssueID, 'manage_issues' in IssueID, 'issue_table_length' in IssueID, 'issues' in IssueID, 'annuals' in IssueID]):
continue
else:
@@ -1487,7 +1504,10 @@ class WebInterface(object):
#raise cherrypy.HTTPRedirect("home")
else:
return self.manualpull()
weekfold = os.path.join(mylar.DESTINATION_DIR, pulldate['SHIPDATE'])
if mylar.WEEKFOLDER_LOC is not None:
weekfold = os.path.join(mylar.WEEKFOLDER_LOC, pulldate['SHIPDATE'])
else:
weekfold = os.path.join(mylar.DESTINATION_DIR, pulldate['SHIPDATE'])
return serve_template(templatename="weeklypull.html", title="Weekly Pull", weeklyresults=weeklyresults, pulldate=pulldate['SHIPDATE'], pullfilter=True, weekfold=weekfold, wantedcount=wantedcount)
pullist.exposed = True
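Both pullist above and weekly_singlecopy further down now resolve the weekly folder the same way: use WEEKFOLDER_LOC when it is configured, otherwise fall back to the ComicLocation root. That selection is small enough to sketch as a helper (function name and defaults are illustrative):

```python
import os


def weekly_folder(shipdate, weekfolder_loc=None, destination_dir='/comics'):
    """Resolve the weekly pull folder: prefer WEEKFOLDER_LOC when set,
    otherwise fall back to the ComicLocation root (DESTINATION_DIR)."""
    base = weekfolder_loc if weekfolder_loc else destination_dir
    return os.path.join(base, shipdate)
```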
@@ -2157,13 +2177,16 @@ class WebInterface(object):
markreads.exposed = True
def removefromreadlist(self, IssueID=None, StoryArcID=None, IssueArcID=None, AllRead=None):
def removefromreadlist(self, IssueID=None, StoryArcID=None, IssueArcID=None, AllRead=None, ArcName=None):
myDB = db.DBConnection()
if IssueID:
myDB.action('DELETE from readlist WHERE IssueID=?', [IssueID])
logger.info("Removed " + str(IssueID) + " from Reading List")
elif StoryArcID:
myDB.action('DELETE from readinglist WHERE StoryArcID=?', [StoryArcID])
#ArcName should be an optional flag so that it doesn't remove arcs that have identical naming (ie. Secret Wars)
#if ArcName:
# myDB.action('DELETE from readinglist WHERE StoryArc=?', [ArcName])
stid = 'S' + str(StoryArcID) + '_%'
#delete from the nzblog so it will always find the most current downloads. Nzblog has issueid, but starts with ArcID
myDB.action('DELETE from nzblog WHERE IssueID LIKE ?', [stid])
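The nzblog cleanup keys story-arc rows by an IssueID that starts with 'S' plus the StoryArcID, so a single parameterized LIKE delete clears every issue of the arc while leaving regular issues untouched. An in-memory sqlite3 sketch of that delete (table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE nzblog (IssueID TEXT, NZBName TEXT)')
conn.executemany('INSERT INTO nzblog VALUES (?, ?)', [
    ('S1234_77', 'arc-issue-1'),   # story-arc entries start with S<arcid>_
    ('S1234_78', 'arc-issue-2'),
    ('S9999_10', 'other-arc'),
    ('55501', 'regular-issue'),    # plain issue, must survive the delete
])


def remove_arc_entries(conn, story_arc_id):
    # same pattern as the hunk above
    stid = 'S' + str(story_arc_id) + '_%'
    conn.execute('DELETE FROM nzblog WHERE IssueID LIKE ?', [stid])


remove_arc_entries(conn, 1234)
remaining = [r[0] for r in conn.execute('SELECT IssueID FROM nzblog ORDER BY IssueID')]
```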
@@ -2395,7 +2418,10 @@ class WebInterface(object):
if not os.path.isdir(dstloc):
logger.info('Story Arc Directory [' + dstloc + '] does not exist! - attempting to create now.')
filechecker.validateAndCreateDirectory(dstloc, True)
checkdirectory = filechecker.validateAndCreateDirectory(dstloc, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
sarc_title = arc['StoryArc']
logger.fdebug("arc: " + arc['StoryArc'] + " : " + arc['ComicName'] + " : " + arc['IssueNumber'])
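Three hunks in this change replace fire-and-forget calls to filechecker.validateAndCreateDirectory with a checked return value, aborting the operation when validation fails. A stdlib-only sketch of a helper with that contract (the real helper in filechecker does more than this):

```python
import os
import tempfile


def validate_and_create_directory(path, create=False):
    """Return True if the directory exists or was created, False on failure."""
    if os.path.isdir(path):
        return True
    if not create:
        return False
    try:
        os.makedirs(path)
        return True
    except OSError:
        return False


# usage: create a story-arc folder under a scratch directory
scratch = tempfile.mkdtemp()
arc_dir = os.path.join(scratch, 'story-arc')
created = validate_and_create_directory(arc_dir, create=True)
```

Returning a boolean instead of raising lets each caller decide whether to warn and abort, which is exactly what the new `if not checkdirectory:` blocks do.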
@@ -2723,21 +2749,57 @@ class WebInterface(object):
# return serve_template(templatename="importlog.html", title="Log", implog=implog)
importLog.exposed = True
def logs(self, log_level=None):
# def logs(self, log_level=None):
#if mylar.LOG_LEVEL is None or mylar.LOG_LEVEL == '' or log_level is None:
# mylar.LOG_LEVEL = 'INFO'
#else:
# mylar.LOG_LEVEL = log_level
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST, loglevel=mylar.LOG_LEVEL)
# return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST, loglevel=mylar.LOG_LEVEL)
# logs.exposed = True
def logs(self):
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST)
logs.exposed = True
def log_change(self, log_level):
if log_level is not None:
logger.info('changing logger to ' + str(log_level))
raise cherrypy.HTTPRedirect("logs?log_level=%s" % log_level)
#return serve_template(templatename="logs.html", title="Log", lineList=log_list, log_level=loglevel) #lineList=mylar.LOG_LIST, log_level=log_level)
def clearLogs(self):
mylar.LOG_LIST = []
logger.info("Web logs cleared")
raise cherrypy.HTTPRedirect("logs")
clearLogs.exposed = True
log_change.exposed = True
def toggleVerbose(self):
mylar.VERBOSE = not mylar.VERBOSE
logger.initLogger(console=not mylar.QUIET,
log_dir=mylar.LOG_DIR, verbose=mylar.VERBOSE)
logger.info("Verbose toggled, set to %s", mylar.VERBOSE)
logger.debug("If you read this message, debug logging is available")
raise cherrypy.HTTPRedirect("logs")
toggleVerbose.exposed = True
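toggleVerbose flips the flag and re-initializes logging so DEBUG output starts or stops immediately; the follow-up debug line only appears when the toggle landed on verbose. With the stdlib logging module the same round-trip looks like this (a sketch, not Mylar's logger module):

```python
import logging


def toggle_verbose(logger, verbose):
    """Flip verbosity and reconfigure the logger level accordingly."""
    verbose = not verbose
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    logger.info('Verbose toggled, set to %s', verbose)
    logger.debug('If you read this message, debug logging is available')
    return verbose
```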
def getLog(self, iDisplayStart=0, iDisplayLength=100, iSortCol_0=0, sSortDir_0="desc", sSearch="", **kwargs):
iDisplayStart = int(iDisplayStart)
iDisplayLength = int(iDisplayLength)
filtered = []
if sSearch == "" or sSearch is None:
filtered = mylar.LOG_LIST[::]
else:
filtered = [row for row in mylar.LOG_LIST if any(sSearch.lower() in column.lower() for column in row)]
sortcolumn = 0
if iSortCol_0 == '1':
sortcolumn = 2
elif iSortCol_0 == '2':
sortcolumn = 1
filtered.sort(key=lambda x: x[sortcolumn], reverse=sSortDir_0 == "desc")
rows = filtered[iDisplayStart:(iDisplayStart + iDisplayLength)]
rows = [[row[0], row[2], row[1]] for row in rows]
return json.dumps({
'iTotalDisplayRecords': len(filtered),
'iTotalRecords': len(mylar.LOG_LIST),
'aaData': rows,
})
getLog.exposed = True
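getLog is standard DataTables server-side processing: filter rows on the search term, sort by the mapped column, slice out one page, and return the record counts DataTables expects. A simplified standalone sketch (it skips the original's column remapping and assumes rows of (timestamp, level, message)):

```python
import json


def get_log_page(log_list, start=0, length=100, sort_col=0,
                 sort_dir='desc', search=''):
    """Filter, sort, and page a list of (timestamp, level, message) rows."""
    if not search:
        filtered = log_list[:]
    else:
        filtered = [row for row in log_list
                    if any(search.lower() in col.lower() for col in row)]
    filtered.sort(key=lambda x: x[sort_col], reverse=(sort_dir == 'desc'))
    rows = filtered[start:start + length]
    return json.dumps({
        'iTotalDisplayRecords': len(filtered),  # rows after filtering
        'iTotalRecords': len(log_list),         # rows before filtering
        'aaData': rows,
    })
```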
def clearhistory(self, type=None):
myDB = db.DBConnection()
@@ -2836,7 +2898,10 @@ class WebInterface(object):
# into a 'weekly' pull folder for those wanting to transfer directly to a 3rd party device.
myDB = db.DBConnection()
if mylar.WEEKFOLDER:
desdir = os.path.join(mylar.DESTINATION_DIR, pulldate)
if mylar.WEEKFOLDER_LOC:
desdir = os.path.join(mylar.WEEKFOLDER_LOC, pulldate)
else:
desdir = os.path.join(mylar.DESTINATION_DIR, pulldate)
if os.path.isdir(desdir):
logger.info(u"Directory (" + desdir + ") already exists! Continuing...")
else:
@@ -2947,20 +3012,56 @@ class WebInterface(object):
raise cherrypy.HTTPRedirect("importResults")
deleteimport.exposed = True
def preSearchit(self, ComicName, comiclist=None, mimp=0, displaycomic=None):
def preSearchit(self, ComicName, comiclist=None, mimp=0, displaycomic=None, comicid=None):
if mylar.IMPORTLOCK:
logger.info('There is an import already running. Please wait for it to finish, and then you can resubmit this import.')
return
importlock = threading.Lock()
myDB = db.DBConnection()
if mimp == 0:
comiclist = []
comiclist.append(ComicName)
comiclist.append({"ComicName": ComicName,
"ComicID": comicid})
with importlock:
#set the global importlock here so that nothing runs and tries to refresh things simultaneously...
mylar.IMPORTLOCK = True
#do imports that have the comicID already present (ie. metatagging has returned valid hits).
#if a comicID is present along with an IssueID - then we have valid metadata.
#otherwise, comicID present by itself indicates a watch match that already exists and is done below this sequence.
RemoveIDS = []
for comicinfo in comiclist:
logger.info('Checking for any valid metatagging already present.')
logger.info(comicinfo['ComicID'])
if comicinfo['ComicID'] is None or comicinfo['ComicID'] == 'None':
continue
else:
#issue_count = Counter(im['ComicID'])
logger.info('Issues found with valid ComicID information for : ' + comicinfo['ComicName'] + ' [' + str(comicinfo['ComicID']) + ']')
self.addbyid(comicinfo['ComicID'], calledby=True, imported='yes', ogcname=comicinfo['ComicName'])
#status update.
import random
SRID = str(random.randint(100000, 999999))
ctrlVal = {"ComicID": comicinfo['ComicID']}
newVal = {"Status": 'Imported',
"SRID": SRID}
myDB.upsert("importresults", newVal, ctrlVal)
logger.info('Successfully imported :' + comicinfo['ComicName'])
RemoveIDS.append(comicinfo['ComicID'])
#we need to remove these items from the comiclist now, so they don't get processed again
if len(RemoveIDS) > 0:
comiclist = [cl for cl in comiclist if cl['ComicID'] not in RemoveIDS]
logger.info('newlist: ' + str(comiclist))
for cl in comiclist:
implog = ''
implog = implog + "imp_rename:" + str(mylar.IMP_RENAME) + "\n"
implog = implog + "imp_move:" + str(mylar.IMP_MOVE) + "\n"
ComicName = cl
ComicName = cl['ComicName']
logger.info('comicname is :' + ComicName)
implog = implog + "comicName: " + str(ComicName) + "\n"
results = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
@@ -3160,6 +3261,8 @@ class WebInterface(object):
"ComicID": sr['comicid']}
myDB.upsert("importresults", newVal, ctrlVal)
mylar.IMPORTLOCK = False
preSearchit.exposed = True
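preSearchit first fast-tracks any entry whose metatag lookup already yielded a ComicID, then prunes those entries from the remaining work list so they are not processed twice. Pruning a list of dicts against the collected IDs is a one-line filter (sketch):

```python
def prune_imported(comiclist, remove_ids):
    """Drop entries whose ComicID was already imported via metatag lookup."""
    return [cl for cl in comiclist if cl['ComicID'] not in remove_ids]
```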
def importresults_popup(self, SRID, ComicName, imported=None, ogcname=None):
@@ -3250,7 +3353,6 @@ class WebInterface(object):
"api_key": mylar.API_KEY,
"launch_browser": helpers.checked(mylar.LAUNCH_BROWSER),
"auto_update": helpers.checked(mylar.AUTO_UPDATE),
"logverbose": helpers.checked(mylar.LOGVERBOSE),
"max_logsize": mylar.MAX_LOGSIZE,
"annuals_on": helpers.checked(mylar.ANNUALS_ON),
"enable_check_folder": helpers.checked(mylar.ENABLE_CHECK_FOLDER),
@@ -3377,6 +3479,7 @@ class WebInterface(object):
"enable_extra_scripts": helpers.checked(mylar.ENABLE_EXTRA_SCRIPTS),
"extra_scripts": mylar.EXTRA_SCRIPTS,
"post_processing": helpers.checked(mylar.POST_PROCESSING),
"file_opts": mylar.FILE_OPTS,
"enable_meta": helpers.checked(mylar.ENABLE_META),
"cmtagger_path": mylar.CMTAGGER_PATH,
"ct_tag_cr": helpers.checked(mylar.CT_TAG_CR),
@@ -3544,7 +3647,10 @@ class WebInterface(object):
# logger.info(u"Directory successfully created at: " + str(com_location))
#except OSError:
# logger.error(u"Could not create comicdir : " + str(com_location))
filechecker.validateAndCreateDirectory(com_location, True)
checkdirectory = filechecker.validateAndCreateDirectory(com_location, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
myDB.upsert("comics", newValues, controlValueDict)
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % ComicID)
@@ -3576,7 +3682,10 @@ class WebInterface(object):
logger.info(u"Validating Directory (" + str(arcdir) + "). Already exists! Continuing...")
else:
logger.fdebug("Updated Directory doesn't exist! - attempting to create now.")
filechecker.validateAndCreateDirectory(arcdir, True)
checkdirectory = filechecker.validateAndCreateDirectory(arcdir, True)
if not checkdirectory:
logger.warn('Error trying to validate/create directory. Aborting this process at this time.')
return
if StoryArcID is not None:
raise cherrypy.HTTPRedirect("detailStoryArc?StoryArcID=%s&StoryArcName=%s" % (StoryArcID, StoryArcName))
else:
@@ -3584,7 +3693,7 @@ class WebInterface(object):
readOptions.exposed = True
def configUpdate(self, comicvine_api=None, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, enable_https=0, https_cert=None, https_key=None, api_enabled=0, api_key=None, launch_browser=0, auto_update=0, logverbose=0, annuals_on=0, max_logsize=None, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
def configUpdate(self, comicvine_api=None, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, enable_https=0, https_cert=None, https_key=None, api_enabled=0, api_key=None, launch_browser=0, auto_update=0, annuals_on=0, max_logsize=None, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
nzb_downloader=0, sab_host=None, sab_username=None, sab_apikey=None, sab_password=None, sab_category=None, sab_priority=None, sab_directory=None, sab_to_mylar=0, log_dir=None, log_level=0, blackhole_dir=None,
nzbget_host=None, nzbget_port=None, nzbget_username=None, nzbget_password=None, nzbget_category=None, nzbget_priority=None, nzbget_directory=None,
usenet_retention=None, nzbsu=0, nzbsu_uid=None, nzbsu_apikey=None, dognzb=0, dognzb_apikey=None, omgwtfnzbs=0, omgwtfnzbs_username=None, omgwtfnzbs_apikey=None, newznab=0, newznab_host=None, newznab_name=None, newznab_apikey=None, newznab_uid=None, newznab_enabled=0,
@@ -3593,7 +3702,7 @@ class WebInterface(object):
enable_torrents=0, minseeds=0, torrent_local=0, local_watchdir=None, torrent_seedbox=0, seedbox_watchdir=None, seedbox_user=None, seedbox_pass=None, seedbox_host=None, seedbox_port=None,
prowl_enabled=0, prowl_onsnatch=0, prowl_keys=None, prowl_priority=None, nma_enabled=0, nma_apikey=None, nma_priority=0, nma_onsnatch=0, pushover_enabled=0, pushover_onsnatch=0, pushover_apikey=None, pushover_userkey=None, pushover_priority=None, boxcar_enabled=0, boxcar_onsnatch=0, boxcar_token=None,
pushbullet_enabled=0, pushbullet_apikey=None, pushbullet_deviceid=None, pushbullet_onsnatch=0,
preferred_quality=0, move_files=0, rename_files=0, add_to_csv=1, cvinfo=0, lowercase_filenames=0, folder_format=None, file_format=None, enable_extra_scripts=0, extra_scripts=None, enable_pre_scripts=0, pre_scripts=None, post_processing=0, syno_fix=0, search_delay=None, chmod_dir=0777, chmod_file=0660, chowner=None, chgroup=None,
preferred_quality=0, move_files=0, rename_files=0, add_to_csv=1, cvinfo=0, lowercase_filenames=0, folder_format=None, file_format=None, enable_extra_scripts=0, extra_scripts=None, enable_pre_scripts=0, pre_scripts=None, post_processing=0, file_opts=None, syno_fix=0, search_delay=None, chmod_dir=0777, chmod_file=0660, chowner=None, chgroup=None,
tsab=None, destination_dir=None, create_folders=1, replace_spaces=0, replace_char=None, use_minsize=0, minsize=None, use_maxsize=0, maxsize=None, autowant_all=0, autowant_upcoming=0, comic_cover_local=0, zero_level=0, zero_level_n=None, interface=None, dupeconstraint=None, **kwargs):
mylar.COMICVINE_API = comicvine_api
mylar.HTTP_HOST = http_host
@@ -3607,7 +3716,6 @@ class WebInterface(object):
mylar.API_KEY = api_key
mylar.LAUNCH_BROWSER = launch_browser
mylar.AUTO_UPDATE = auto_update
mylar.LOGVERBOSE = logverbose
mylar.ANNUALS_ON = int(annuals_on)
mylar.MAX_LOGSIZE = max_logsize
mylar.ENABLE_CHECK_FOLDER = enable_check_folder
@@ -3727,6 +3835,7 @@ class WebInterface(object):
mylar.EXTRA_SCRIPTS = extra_scripts
mylar.ENABLE_PRE_SCRIPTS = enable_pre_scripts
mylar.POST_PROCESSING = post_processing
mylar.FILE_OPTS = file_opts
mylar.PRE_SCRIPTS = pre_scripts
mylar.ENABLE_META = enable_meta
mylar.CMTAGGER_PATH = cmtagger_path
@@ -3798,6 +3907,9 @@ class WebInterface(object):
logger.info("Auto-correcting trailing slash in SABnzbd url (not required)")
mylar.SAB_HOST = mylar.SAB_HOST[:-1]
if mylar.FILE_OPTS is None:
mylar.FILE_OPTS = 'move'
if mylar.ENABLE_META:
#force it to use comictagger in lib vs. outside in order to ensure 1/api second CV rate limit isn't broken.
logger.fdebug("ComicTagger Path enforced to use local library : " + mylar.PROG_DIR)

View File

@@ -957,7 +957,10 @@ def weekly_singlecopy(comicid, issuenum, file, path, pulldate):
module = '[WEEKLY-PULL COPY]'
if mylar.WEEKFOLDER:
desdir = os.path.join(mylar.DESTINATION_DIR, pulldate)
if mylar.WEEKFOLDER_LOC:
desdir = os.path.join(mylar.WEEKFOLDER_LOC, pulldate)
else:
desdir = os.path.join(mylar.DESTINATION_DIR, pulldate)
dircheck = mylar.filechecker.validateAndCreateDirectory(desdir, True, module=module)
if dircheck:
pass
@@ -1067,17 +1070,25 @@ def future_check():
logger.fdebug('Comparing ' + sr['name'] + ' - to - ' + ser['ComicName'])
tmpsername = re.sub('[\'\*\^\%\$\#\@\!\/\,\.\:\(\)]', '', ser['ComicName']).strip()
tmpsrname = re.sub('[\'\*\^\%\$\#\@\!\/\,\.\:\(\)]', '', sr['name']).strip()
tmpsername = re.sub('\-', ' ', tmpsername)
tmpsername = re.sub('\-', '', tmpsername)
if tmpsername.lower().startswith('the '):
tmpsername = re.sub('the ', ' ', tmpsername.lower()).strip()
tmpsername = re.sub('the ', '', tmpsername.lower()).strip()
else:
tmpsername = re.sub(' the ', ' ', tmpsername.lower()).strip()
tmpsrname = re.sub('\-', ' ', tmpsrname)
tmpsername = re.sub(' the ', '', tmpsername.lower()).strip()
tmpsrname = re.sub('\-', '', tmpsrname)
if tmpsrname.lower().startswith('the '):
tmpsrname = re.sub('the ', ' ', tmpsrname.lower()).strip()
tmpsrname = re.sub('the ', '', tmpsrname.lower()).strip()
else:
tmpsrname = re.sub(' the ', ' ', tmpsrname.lower()).strip()
logger.fdebug('Comparing ' + tmpsrname + ' - to - ' + tmpsername)
tmpsrname = re.sub(' the ', '', tmpsrname.lower()).strip()
tmpsername = re.sub(' and ', '', tmpsername.lower()).strip()
tmpsername = re.sub(' & ', '', tmpsername.lower()).strip()
tmpsrname = re.sub(' and ', '', tmpsrname.lower()).strip()
tmpsrname = re.sub(' & ', '', tmpsrname.lower()).strip()
tmpsername = re.sub('\s', '', tmpsername).strip()
tmpsrname = re.sub('\s', '', tmpsrname).strip()
logger.fdebug('Comparing modified names: ' + tmpsrname + ' - to - ' + tmpsername)
if tmpsername.lower() == tmpsrname.lower():
logger.fdebug('Name matched successful: ' + sr['name'])
if str(sr['comicyear']) == str(theissdate):
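The comparison above normalizes both names by stripping punctuation and hyphens, dropping a leading or embedded 'the', removing 'and'/'&', and finally collapsing all whitespace before an exact compare, which is what fixes the mixed-up 'and'/'&' search results noted in the changelog. A sketch of that normalization gathered into one function:

```python
import re


def normalize_title(name):
    """Reduce a series title to a comparable key (sketch of the
    normalization steps performed in future_check)."""
    s = re.sub(r"['\*\^%\$#@!/,\.:\(\)]", '', name).strip()
    s = s.replace('-', '').lower()
    if s.startswith('the '):
        s = s[4:]
    s = s.replace(' the ', '')
    s = s.replace(' and ', '').replace(' & ', '')
    return re.sub(r'\s', '', s)
```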