IMP: Ability to specify search provider order (regardless of torrents or nzb) within config.ini
IMP: (#667) Changed the db module to try to accommodate db locking errors, lowering the number of write transactions actually committed, along with a new scheduler system
IMP: Changed the sabnzbd directory to post-processing, and included subdirs for the sabnzbd & nzbget ComicRN scripts
IMP: NZBGet post-processing ComicRN.py script (updated for use with nzbget v11.0+) added & updated in the post-processing/nzbget directory (thnx ministoat)
FIX: If Issue Location was None and status was Downloaded, it would cause an error in the GUI and break the series
IMP: (#689) Minimum # of seeders added (will work with KAT)
IMP: (#680) Added Boxcar 2 IO notifications
IMP: Added PushBullet notifications
IMP: Cleaned up some notification messages so they're not so cluttered
IMP: Added clickable series link to the History tab
IMP: Added Post-Processed as a status on the History tab to show manually post-processed items
IMP: Removed the log level dropdown from the Logs page & added 'ThreadName' as a column
IMP: Added Force Check Availability & View Future Pull-list to the Upcoming sub-tabs
IMP: Added a '--safe' startup option which redirects directly to the Manage Comics screen in case things are broken
FIX: Added proper month conversions for manual post-processing when doing comparative issue analysis for matches
FIX: (#613) Allow for negative issue numbers in post-processing when renaming and issue padding is enabled
FIX: File permission errors during post-processing would stop post-processing; now the error is just logged and processing continues
IMP: Added Scheduler (from sickbeard) to allow for thread naming and better scheduling
IMP: Filenames in the format of ' () ' will now get scanned in
IMP: During manual post-processing, will now stop looking for matches upon a successful match
IMP: A Refresh/Weeklypull series check will now just scan in issue data, instead of series info, etc.
IMP: Removed some legacy GCD code that is no longer in use
IMP: Exception/traceback handling will now be logged
FIX: Unable to grab torrents from KAT due to content-encoding detection failing
IMP: Added universal date-time conversion to allow non-English dates to be properly compared when checking search results against publication dates
FIX: Annuals will now get proper notification (previously the word 'annual' was left out of notifications/logs)
IMP: Improved future pull-list detection and increased retention (now ~5 months)
IMP: Will now mark new issues as Wanted on a Refresh Series if autowant upcoming is enabled (previously reverted to a status of None)
IMP: Cannot change status to Downloaded if the current status is Skipped or Wanted
FIX: (#704) UnSkipped will now work (X in the options column on the comic details page)
IMP: future_check will check upcoming future issues (future pull-list) that have no series data yet (i.e. #1's), auto-add them to the watchlist when the data is available, and auto-want accordingly
IMP: (#706) Downloading issues to the local machine (via the comicdetails screen) with special characters in the filename now works
IMP: Improved comparison checks during the weekly pull list, and improved speed a bit since only issue data is refreshed now instead of the entire series
Other referenced issues: (#670)(#690) and some others...
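A note on the first item: the chosen order is persisted to config.ini as a flat list of position/provider pairs (see the PROVIDER_ORDER handling in the initialize() and config_write() changes further down). A hypothetical [General] entry, assuming nzb.su, dognzb and KAT happen to be the enabled providers, might look like:

    [General]
    provider_order = 0, nzb.su, 1, dognzb, 2, KAT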

evilhero 2014-05-25 14:32:11 -04:00
parent 326cf60295
commit ea02cd83f2
47 changed files with 2159 additions and 1319 deletions


@ -16,6 +16,7 @@
import os, sys, locale
import time
import threading
from lib.configobj import ConfigObj
@ -64,6 +65,7 @@ def main():
parser.add_argument('--config', help='Specify a config file to use')
parser.add_argument('--nolaunch', action='store_true', help='Prevent browser from launching on startup')
parser.add_argument('--pidfile', help='Create a pid file (only relevant when running as a daemon)')
parser.add_argument('--safe', action='store_true', help='redirect the startup page to point to the Manage Comics screen on startup')
args = parser.parse_args()
@ -73,10 +75,28 @@ def main():
mylar.VERBOSE = 0
if args.daemon:
mylar.DAEMON=True
mylar.VERBOSE = 0
if args.pidfile :
mylar.PIDFILE = args.pidfile
if sys.platform == 'win32':
print "Daemonize not supported under Windows, starting normally"
else:
mylar.DAEMON=True
mylar.VERBOSE=0
if args.pidfile :
mylar.PIDFILE = args.pidfile
# If the pidfile already exists, mylar may still be running, so exit
if os.path.exists(mylar.PIDFILE):
sys.exit("PID file '" + mylar.PIDFILE + "' already exists. Exiting.")
# The pidfile is only useful in daemon mode, make sure we can write the file properly
if mylar.DAEMON:
try:
file(mylar.PIDFILE, 'w').write("pid\n")
except IOError, e:
raise SystemExit("Unable to write PID file: %s [%d]" % (e.strerror, e.errno))
else:
logger.warn("Not running in daemon mode. PID file creation disabled.")
if args.datadir:
mylar.DATA_DIR = args.datadir
@ -88,6 +108,11 @@ def main():
else:
mylar.CONFIG_FILE = os.path.join(mylar.DATA_DIR, 'config.ini')
if args.safe:
mylar.SAFESTART = True
else:
mylar.SAFESTART = False
# Try to create the DATA_DIR if it doesn't exist
#if not os.path.exists(mylar.DATA_DIR):
# try:
@ -105,6 +130,9 @@ def main():
mylar.DB_FILE = os.path.join(mylar.DATA_DIR, 'mylar.db')
mylar.CFG = ConfigObj(mylar.CONFIG_FILE, encoding='utf-8')
# Rename the main thread
threading.currentThread().name = "MAIN"
# Read config & start logging
mylar.initialize()
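With the new flag in place, a hypothetical daemonized start (assuming the usual Mylar.py entry point) would be: python Mylar.py --daemon --pidfile /var/run/mylar.pid --safe. The --safe flag only changes the landing page to Manage Comics, so it can be dropped once things are working again.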


@ -109,6 +109,7 @@
<script src="js/plugins.js"></script>
<script src="interfaces/default/js/script.js"></script>
<script src="interfaces/default/js/browser.js"></script>
<!--[if lt IE 7 ]>
<script src="js/libs/dd_belatedpng.js"></script>
<script> DD_belatedPNG.fix('img, .png_bg');</script>


@ -372,12 +372,15 @@
<a href="#" title="Mark issue as Skipped" onclick="doAjaxCall('unqueueissue?IssueID=${issue['IssueID']}&ComicID=${issue['ComicID']}',$(this),'table')" data-success="'${issue['Issue_Number']}' has been marked as skipped"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
%elif (issue['Status'] == 'Downloaded'):
<%
linky = os.path.join(comic['ComicLocation'],issue['Location'])
if not os.path.isfile(linky):
if issue['Location'] is not None:
linky = os.path.join(comic['ComicLocation'],issue['Location'])
if not os.path.isfile(linky):
linky = None
else:
linky = None
%>
%if linky:
<a href="downloadthis?pathfile=${linky}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the Issue" /></a>
<a href="downloadthis?pathfile=${linky |u}"><img src="interfaces/default/images/download_icon.png" height="25" width="25" title="Download the Issue" /></a>
%endif
<a href="#" title="Add to Reading List" onclick="doAjaxCall('addtoreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="${issue['Issue_Number']} added to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" /></a>
%else:


@ -119,10 +119,12 @@
<div class="row checkbox">
<input type="checkbox" name="launch_browser" value="1" ${config['launch_browser']} /> <label>Launch Browser on Startup</label>
</div>
<!--
<div class="row checkbox">
<input type="checkbox" name="logverbose" value="1" ${config['logverbose']} /> <label>Verbose Logging</label>
<input type="checkbox" name="logverbose" value="2" ${config['logverbose']} /> <label>Verbose Logging</label>
<br/><small>*Use this only when experiencing problems*</small>
</div>
-->
<div class="row checkbox">
<input type="checkbox" name="syno_fix" value="1" ${config['syno_fix']} /> <label>Synology Fix</label>
<br/><small>*Use this if experiencing parsing problems*</small>
@ -331,6 +333,10 @@
<input id="enable_torrents" type="checkbox" onclick="initConfigCheckbox($(this));" name="enable_torrents" value=1 ${config['enable_torrents']} /><label>Use Torrents</label>
</div>
<div class="config">
<div class="row">
<label>Minimum # of seeders</label>
<input type="text" name="minseeds" value="${config['minseeds']}" size="10">
</div>
<div class="row checkbox left clearfix">
<input id="torrent_local" type="checkbox" onclick="initConfigCheckbox($(this));" name="torrent_local" value=1 ${config['torrent_local']} /><label>Local Torrent Client</label>
</div>
@ -386,6 +392,7 @@
<label>RSS Interval Feed Check</label>
<input type="text" name="rss_checkinterval" value="${config['rss_checkinterval']}" size="6" /><small>(Mins)</small>
<a href="#" style="float:right" type="button" onclick="doAjaxCall('force_rss',$(this))" data-success="RSS Force now running" data-error="Error trying to retrieve RSS Feeds"><span class="ui-icon ui-icon-extlink"></span>Force RSS</a>
<br/><small><% rss_last=mylar.RSS_LASTRUN %>last run: ${rss_last}</small>
</div>
</fieldset>
<fieldset>
@ -502,6 +509,7 @@
<input type="button" value="Add Newznab" class="add_newznab" id="add_newznab" />
</div>
</fieldset>
<!--
<fieldset>
<div id="newznab providers">
@ -722,6 +730,10 @@
<label>Log Directory:</label>
<input type="text" name="log_dir" value="${config['log_dir']}" size="50">
</div>
<div class="row">
<label>Maximum Log Size (bytes):</label>
<input type="text" name="max_logsize" value="${config['max_logsize']}" size="20">
</div>
</fieldset>
<h2>Notifications</h2>
@ -792,7 +804,7 @@
</div>
<div id="pushoveroptions">
<div class="row">
<label>API key</label><input type="text" name="pushover_apikey" value="${config['pushover_apikey']}" size="50">
<label>API key</label><input type="text" title="Leave blank if you don't have your own API (recommended to get your own)" name="pushover_apikey" value="${config['pushover_apikey']}" size="50">
</div>
<div class="row">
<label>User key</label><input type="text" name="pushover_userkey" value="${config['pushover_userkey']}" size="50">
@ -809,18 +821,35 @@
<fieldset>
<h3><img src="interfaces/default/images/boxcar_logo.png" style="vertical-align: middle; margin: 3px; margin-top: -1px;" height="30" width="30"/>Boxcar.IO</h3>
<div class="checkbox row">
<input type="checkbox" name="boxcar_enabled" id="boxcar" value="1" ${config['boxcar_enabled']} /><label>Enable Boxcar.IO</label>
<input type="checkbox" name="boxcar_enabled" id="boxcar" value="1" ${config['boxcar_enabled']} /><label>Enable Boxcar.IO Notifications</label>
</div>
<div id="boxcaroptions">
<div class="row">
<div class="row checkbox">
<input type="checkbox" name="boxcar_onsnatch" value="1" ${config['boxcar_onsnatch']} /><label>Notify on snatch?</label>
</div>
<label>Boxcar Username (email)</label>
<input type="text" name="boxcar_username" value="${config['boxcar_username']}" size="30">
<label>Boxcar Token</label>
<input type="text" name="boxcar_token" value="${config['boxcar_token']}" size="30">
</div>
</div>
</fieldset>
<fieldset>
<h3><img src="interfaces/default/images/pushbullet_logo.png" style="vertical-align: middle; margin: 3px; margin-top: -1px;" height="30" width="30"/>Pushbullet</h3>
<div class="row checkbox">
<input type="checkbox" name="pushbullet_enabled" id="pushbullet" value="1" ${config['pushbullet_enabled']} /><label>Enable PushBullet Notifications</label>
</div>
<div id="pushbulletoptions">
<div class="row checkbox">
<input type="checkbox" name="pushbullet_onsnatch" value="1" ${config['pushbullet_onsnatch']} /><label>Notify on snatch?</label>
</div>
<div class="row">
<label>API Key</label><input type="text" name="pushbullet_apikey" value="${config['pushbullet_apikey']}" size="50">
</div>
<div class="row">
<label>Device ID</label><input type="text" name="pushbullet_deviceid" value="${config['pushbullet_deviceid']}" size="50">
</div>
</div>
</fieldset>
</td>
</tr>
</table>
@ -958,6 +987,27 @@
{
$("#boxcaroptions").slideUp();
}
});
if ($("#pushbullet").is(":checked"))
{
$("#pushbulletoptions").show();
}
else
{
$("#pushbulletoptions").hide();
}
$("#pushbullet").click(function(){
if ($("#pushbullet").is(":checked"))
{
$("#pushbulletoptions").slideDown();
}
else
{
$("#pushbulletoptions").slideUp();
}
if ($("#nzb_downloader_sabnzbd").is(":checked"))
{


@ -892,7 +892,6 @@ div#artistheader #artistDetails {
line-height: 16px;
margin-top: 5px;
}
div#artistheader h1 a {
font-size: 32px;
margin-bottom: 5px;
@ -937,14 +936,11 @@ div#artistheader h2 a {
text-align: left;
vertical-align: middle;
}
#read_detail td#options {
min-width: 150px;
text-align: left;
vertical-align: middle;
}
#weekly_pull th#publisher {
min-width: 150px;
text-align: left;
@ -977,6 +973,56 @@ div#artistheader h2 a {
text-align: left;
vertical-align: middle;
}
#pull_table th#publishdate {
min-width: 50px;
text-align: left;
vertical-align: middle;
}
#pull_table th#publisher {
min-width: 100px;
text-align: left;
vertical-align: middle;
}
#pull_table th#comicname {
min-width: 300px;
text-align: left;
vertical-align: middle;
}
#pull_table th#comicnumber {
max-width: 45px;
text-align: left;
vertical-align: middle;
}
#pull_table th#status {
min-width: 100px;
text-align: left;
vertical-align: middle;
}
#pull_table td#publishdate {
max-width: 50px;
text-align: left;
vertical-align: middle;
}
#pull_table td#publisher {
min-width: 100px;
text-align: left;
vertical-align: middle;
}
#pull_table td#comicname {
min-width: 300px;
text-align: left;
vertical-align: middle;
}
#pull_table td#comicnumber {
max-width: 45px;
text-align: left;
vertical-align: middle;
}
#pull_table td#status {
min-width: 100px;
text-align: left;
vertical-align: middle;
}
#manage_comic th#name {
min-width: 275px;
text-align: left;


@ -36,15 +36,15 @@
%for future in futureresults:
<tr>
%if pullfilter is True:
<td class="publishdate">${future['SHIPDATE']}</td>
<td class="publisher">${future['PUBLISHER']}</td>
<td class="comicname">${future['COMIC']}
<td id="publishdate">${future['SHIPDATE']}</td>
<td id="publisher">${future['PUBLISHER']}</td>
<td id="comicname">${future['COMIC']}
%if future['EXTRA'] != '':
(${future['EXTRA']})
%endif
</td>
<td class="comicnumber">${future['ISSUE']}</td>
<td class="status">${future['STATUS']}
<td id="comicnumber">${future['ISSUE']}</td>
<td id="status">${future['STATUS']}
%if future['STATUS'] == 'Wanted':
<a href="unqueueissue?IssueID=None&ComicID=${future['COMICID']}&ComicName=${future['COMIC'] | u}&Issue=${future['ISSUE']}&FutureID=${future['FUTUREID']}"><span class="ui-icon ui-icon-plus"></span>UnWant</a>
%elif future['STATUS'] == 'Skipped':


@ -60,18 +60,18 @@
<tr class="grade${grade}">
<td id="select"><input type="checkbox" name="${item['IssueID']}" class="checkbox" /></td>
<td id="dateadded">${item['DateAdded']}</td>
<td id="filename">${item['ComicName']}</td>
<td id="filename"><a href="comicDetails?ComicID=${item['ComicID']}">${item['ComicName']}</a></td>
<td id="issue">${item['Issue_Number']}</td>
<td id="status">${item['Status']}
%if item['Provider'] == 'CBT' or item['Provider'] == 'KAT':
<img src="interfaces/default/images/torrent-icon.png" height="20" width="20" title="${item['Provider']}" />
%else:
%if item['Status'] != 'Downloaded':
%if item['Status'] != 'Downloaded' and item['Status'] != 'Post-Processed':
(${item['Provider']})
%endif
%endif
</td>
<td id="action">[<a href="#" onclick="doAjaxCall('queueissue?IssueID=${item['IssueID']}&ComicName=${item['ComicName']}&ComicID=${item['ComicID']}&ComicIssue=${item['Issue_Number']}&mode=want&redirect=history', $(this),'table')" data-success="Retrying download of '${item['ComicName']}' '${item['Issue_Number']}'">retry</a>]</td>
<td id="action">[<a href="#" onclick="doAjaxCall('queueissue?IssueID=${item['IssueID']}&ComicName=${item['ComicName']}&ComicID=${item['ComicID']}&ComicIssue=${item['Issue_Number']}&mode=want&redirect=history', $(this),'table')" data-success="Successful re-download of '${item['ComicName']}' '${item['Issue_Number']}'">retry</a>]</td>
</tr>
%endfor
</tbody>

data/interfaces/default/images/boxcar_logo.png: Normal file → Executable file (92 KiB, unchanged)
data/interfaces/default/images/prowl_logo.png: Normal file → Executable file (40 KiB, unchanged)
(new binary image added, 23 KiB; file not shown in this view)
data/interfaces/default/images/pushover_logo.png: Normal file → Executable file (49 KiB, unchanged)
data/interfaces/default/images/submit.png: Normal file → Executable file (43 KiB, unchanged)
data/interfaces/default/images/x.png: Normal file → Executable file (18 KiB, unchanged)


@ -9,27 +9,29 @@
<h1 class="clearfix"><img src="interfaces/default/images/icon_logs.png" alt="Logs"/>Logs</h1>
</div>
<table class="display" id="log_table">
<form action="log_change" method="GET">
<!-- <form action="log_change" method="GET">
<div class="row">
<label>Interface</label>
<select name="loglevel">
<select name="log_level">
%for loglevel in ['Info', 'Warning', 'Debug']:
<%
if loglevel == mylar.LOG_LEVEL:
selected = 'selected="selected"'
selected = 'selected'
else:
selected = ''
%>
%>
<option value="${loglevel}" ${selected}>${loglevel}</option>
%endfor
</select>
</div>
<input type="button" value="Go!" onclick="doAjaxCall('log_change?log_level=${loglevel}',$(this),'table',true)" data-success="Log level changed to ${loglevel}">
</form>
<input type="submit" value="Go"/>
-->
<thead>
<tr>
<th id="timestamp">Timestamp</th>
<th id="level">Level</th>
<th id="thread">Thread</th>
<th id="message">Message</th>
</tr>
</thead>
@ -37,15 +39,19 @@
%for line in lineList:
<%
timestamp, message, level, threadname = line
if level == 'WARNING' or level == 'ERROR':
grade = 'X'
else:
grade = 'Z'
if threadname is None:
threadname = ''
%>
<tr class="grade${grade}">
<td id="timestamp">${timestamp}</td>
<td id="level">${level}</td>
<td id="thread">${threadname}</td>
<td id="message">${message}</td>
</tr>
%endfor


@ -134,6 +134,7 @@
</%def>
<%def name="javascriptIncludes()">
<script>
function addScanAction() {
$('#autoadd').append('<input type="hidden" name="scan" value=1 />');
};


@ -98,30 +98,36 @@
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<!--
<script src="js/libs/jquery.dataTables.rowReordering.js"></script>
-->
<script>
function initThisPage() {
$('#read_detail').dataTable({
"bDestroy": true,
"oLanguage": {
"sLengthMenu":"Show _MENU_ items per page",
"sEmptyTable": "<em>No History to Display</em>",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ items",
"sInfoEmpty":"Showing 0 to 0 of 0 items",
"sInfoFiltered":"(filtered from _MAX_ total items)"},
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": []
}).rowReordering({
sAjax: "reOrder",
fnAlert: function(text){
alert("Order cannot be changed.\n" + text);
}
});
resetFilters("item");
$('#read_detail').dataTable(
{
"bDestroy": true,
"oLanguage": {
"sLengthMenu":"Show _MENU_ items per page",
"sEmptyTable": "<em>No History to Display</em>",
"sInfo":"Showing _START_ to _END_ of _TOTAL_ items",
"sInfoEmpty":"Showing 0 to 0 of 0 items",
"sInfoFiltered":"(filtered from _MAX_ total items)"},
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": []
});
//
// }).rowReordering({
// sAjax: "reOrder",
// fnAlert: function(text){
// alert("Order cannot be changed.\n" + text);
// }
// });
//
resetFilters("item");
}
$(document).ready(function() {
initThisPage();
initActions();


@ -98,6 +98,7 @@
</div>
</div>
<div id="tabs-2">
<a id="menu_link_edit" href="force_check">Force Check Availability</a>
<div class="table_wrapper">
<table class="display_no_select" id="upcoming_table">
%if future_nodata_upcoming:
@ -126,6 +127,7 @@
</div>
</div>
<div id="tabs-3">
<a id="menu_link_edit" href="futurepulllist">View Future Pull-list</a>
<div class="table_wrapper">
<table class="display_no_select" id="upcoming_table">
%if futureupcoming:


@ -74,7 +74,7 @@ class PostProcessor(object):
self.log = ''
def _log(self, message, level=logger.MESSAGE):
def _log(self, message, level=logger.message): #level=logger.MESSAGE):
"""
A wrapper for the internal logger which also keeps track of messages and saves them to a string for $
@ -136,8 +136,8 @@ class PostProcessor(object):
def Process(self):
self._log("nzb name: " + str(self.nzb_name), logger.DEBUG)
self._log("nzb folder: " + str(self.nzb_folder), logger.DEBUG)
self._log("nzb name: " + str(self.nzb_name))#, logger.DEBUG)
self._log("nzb folder: " + str(self.nzb_folder))#, logger.DEBUG)
logger.fdebug("nzb name: " + str(self.nzb_name))
logger.fdebug("nzb folder: " + str(self.nzb_folder))
if mylar.USE_SABNZBD==0:
@ -172,10 +172,13 @@ class PostProcessor(object):
# -- end. not used.
if mylar.USE_NZBGET==1:
logger.fdebug("Using NZBGET")
if self.nzb_name != 'Manual Run':
logger.fdebug("Using NZBGET")
logger.fdebug("NZB name as passed from NZBGet: " + self.nzb_name)
# if the NZBGet Directory option is enabled, let's use that folder name and append the jobname.
if mylar.NZBGET_DIRECTORY is not None and mylar.NZBGET_DIRECTORY is not 'None' and len(mylar.NZBGET_DIRECTORY) > 4:
if self.nzb_name == 'Manual Run':
logger.fdebug('Manual Run Post-Processing enabled.')
elif mylar.NZBGET_DIRECTORY is not None and mylar.NZBGET_DIRECTORY is not 'None' and len(mylar.NZBGET_DIRECTORY) > 4:
self.nzb_folder = os.path.join(mylar.NZBGET_DIRECTORY, self.nzb_name).encode(mylar.SYS_ENCODING)
logger.fdebug('NZBGET Download folder option enabled. Directory set to : ' + self.nzb_folder)
myDB = db.DBConnection()
@ -188,7 +191,7 @@ class PostProcessor(object):
#once a series name and issue are matched,
#write the series/issue/filename to a tuple
#when all done, iterate over the tuple until completion...
comicseries = myDB.action("SELECT * FROM comics")
comicseries = myDB.select("SELECT * FROM comics")
manual_list = []
if comicseries is None:
logger.error(u"No Series in Watchlist - aborting Manual Post Processing. Maybe you should be running Import?")
@ -223,39 +226,49 @@ class PostProcessor(object):
logger.info("annual detected.")
annchk = "yes"
fcdigit = helpers.issuedigits(re.sub('annual', '', str(temploc.lower())).strip())
issuechk = myDB.action("SELECT * from annuals WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'],fcdigit]).fetchone()
issuechk = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'],fcdigit]).fetchone()
else:
fcdigit = helpers.issuedigits(temploc)
issuechk = myDB.action("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'],fcdigit]).fetchone()
issuechk = myDB.selectone("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [cs['ComicID'],fcdigit]).fetchone()
if issuechk is None:
logger.info("No corresponding issue # found for " + str(cs['ComicID']))
logger.fdebug("No corresponding issue # found for " + str(cs['ComicID']))
else:
datematch = "True"
if len(watchmatch) > 1 and tmpfc['ComicYear'] is not None:
if len(watchmatch) >= 1 and tmpfc['ComicYear'] is not None:
#if the # of matches is more than 1, we need to make sure we get the right series
#compare the ReleaseDate for the issue, to the found issue date in the filename.
#if ReleaseDate doesn't exist, use IssueDate
#if no issue date was found, then ignore.
issyr = None
if int(issuechk['IssueDate'][5:7]) == 11 or issuechk['IssueDate'][5:7] == 12:
issyr = int(issuechk['IssueDate'][:4])
elif int(issuechk['IssueDate'][5:7]) == 1 or int(issuechk['IssueDate'][5:7]) == 2:
issyr = int(issuechk['IssueDate'][:4])
logger.fdebug('issuedate:' + str(issuechk['IssueDate']))
logger.fdebug('issuechk: ' + str(issuechk['IssueDate'][5:7]))
if issuechk['ReleaseDate'] is not None:
#logger.info('ReleaseDate: ' + str(issuechk['ReleaseDate']))
#logger.info('IssueDate: ' + str(issuechk['IssueDate']))
if issuechk['ReleaseDate'] is not None and issuechk['ReleaseDate'] != '0000-00-00':
monthval = issuechk['ReleaseDate']
if int(issuechk['ReleaseDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(str(issuechk['ReleaseDate']) + ' is before the issue year of ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
datematch = "False"
else:
monthval = issuechk['IssueDate']
if int(issuechk['IssueDate'][:4]) < int(tmpfc['ComicYear']):
logger.fdebug(str(issuechk['IssueDate']) + ' is before the issue year ' + str(tmpfc['ComicYear']) + ' that was discovered in the filename')
datematch = "False"
if int(monthval[5:7]) == 11 or int(monthval[5:7]) == 12:
issyr = int(monthval[:4]) + 1
logger.fdebug('issyr is ' + str(issyr))
elif int(monthval[5:7]) == 1 or int(monthval[5:7]) == 2:
issyr = int(monthval[:4]) - 1
if datematch == "False" and issyr is not None:
logger.fdebug(str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + 'rechecking by month-check versus year.')
datematch == "True"
logger.fdebug(str(issyr) + ' comparing to ' + str(tmpfc['ComicYear']) + ' : rechecking by month-check versus year.')
datematch = "True"
if int(issyr) != int(tmpfc['ComicYear']):
logger.fdebug('[fail] Issue is before the modified issue year of ' + str(issyr))
datematch = "False"
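Condensing the intent of the month-window check above (an illustrative helper, not code from this commit): a filename year is accepted when it matches the store date's year, or that year shifted by one for issues shipping in Nov/Dec (cover-dated into the next year) or Jan/Feb (cover-dated into the previous year).

    # Illustrative sketch only -- mirrors the datematch/issyr intent above.
    def year_matches(monthval, file_year):
        # monthval: 'YYYY-MM-DD' release (or issue) date pulled from the db
        year, month = int(monthval[:4]), int(monthval[5:7])
        issyr = year
        if month in (11, 12):
            issyr = year + 1    # Nov/Dec ship dates often carry next year's cover date
        elif month in (1, 2):
            issyr = year - 1    # Jan/Feb ship dates may carry the previous year's cover date
        return int(file_year) in (year, issyr)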
@ -284,7 +297,7 @@ class PostProcessor(object):
if nzbname.lower().endswith(extensions):
fd, ext = os.path.splitext(nzbname)
self._log("Removed extension from nzb: " + ext, logger.DEBUG)
self._log("Removed extension from nzb: " + ext)
nzbname = re.sub(str(ext), '', str(nzbname))
#replace spaces
@ -295,18 +308,18 @@ class PostProcessor(object):
logger.fdebug("After conversions, nzbname is : " + str(nzbname))
# if mylar.USE_NZBGET==1:
# nzbname=self.nzb_name
self._log("nzbname: " + str(nzbname), logger.DEBUG)
self._log("nzbname: " + str(nzbname))#, logger.DEBUG)
nzbiss = myDB.action("SELECT * from nzblog WHERE nzbname=?", [nzbname]).fetchone()
nzbiss = myDB.selectone("SELECT * from nzblog WHERE nzbname=?", [nzbname]).fetchone()
if nzbiss is None:
self._log("Failure - could not initially locate nzbfile in my database to rename.", logger.DEBUG)
self._log("Failure - could not initially locate nzbfile in my database to rename.")#, logger.DEBUG)
logger.fdebug("Failure - could not locate nzbfile initially.")
# if failed on spaces, change it all to decimals and try again.
nzbname = re.sub('_', '.', str(nzbname))
self._log("trying again with this nzbname: " + str(nzbname), logger.DEBUG)
self._log("trying again with this nzbname: " + str(nzbname))#, logger.DEBUG)
logger.fdebug("trying again with nzbname of : " + str(nzbname))
nzbiss = myDB.action("SELECT * from nzblog WHERE nzbname=?", [nzbname]).fetchone()
nzbiss = myDB.selectone("SELECT * from nzblog WHERE nzbname=?", [nzbname]).fetchone()
if nzbiss is None:
logger.error(u"Unable to locate downloaded file to rename. PostProcessing aborted.")
return
@ -324,9 +337,9 @@ class PostProcessor(object):
if 'annual' in nzbname.lower():
logger.info("annual detected.")
annchk = "yes"
issuenzb = myDB.action("SELECT * from annuals WHERE IssueID=? AND ComicName NOT NULL", [issueid]).fetchone()
issuenzb = myDB.selectone("SELECT * from annuals WHERE IssueID=? AND ComicName NOT NULL", [issueid]).fetchone()
else:
issuenzb = myDB.action("SELECT * from issues WHERE IssueID=? AND ComicName NOT NULL", [issueid]).fetchone()
issuenzb = myDB.selectone("SELECT * from issues WHERE IssueID=? AND ComicName NOT NULL", [issueid]).fetchone()
if issuenzb is not None:
logger.info("issuenzb found.")
@ -353,14 +366,14 @@ class PostProcessor(object):
logger.info("One-off STORYARC mode enabled for Post-Processing for " + str(sarc))
if mylar.STORYARCDIR:
storyarcd = os.path.join(mylar.DESTINATION_DIR, "StoryArcs", sarc)
self._log("StoryArc Directory set to : " + storyarcd, logger.DEBUG)
self._log("StoryArc Directory set to : " + storyarcd)#, logger.DEBUG)
else:
self._log("Grab-Bag Directory set to : " + mylar.GRABBAG_DIR, logger.DEBUG)
self._log("Grab-Bag Directory set to : " + mylar.GRABBAG_DIR)#, logger.DEBUG)
else:
self._log("One-off mode enabled for Post-Processing. All I'm doing is moving the file untouched into the Grab-bag directory.", logger.DEBUG)
self._log("One-off mode enabled for Post-Processing. All I'm doing is moving the file untouched into the Grab-bag directory.")#, logger.DEBUG)
logger.info("One-off mode enabled for Post-Processing. Will move into Grab-bag directory.")
self._log("Grab-Bag Directory set to : " + mylar.GRABBAG_DIR, logger.DEBUG)
self._log("Grab-Bag Directory set to : " + mylar.GRABBAG_DIR)#, logger.DEBUG)
for root, dirnames, filenames in os.walk(self.nzb_folder):
for filename in filenames:
@ -386,7 +399,7 @@ class PostProcessor(object):
if mylar.READ2FILENAME:
issuearcid = re.sub('S', '', issueid)
logger.fdebug('issuearcid:' + str(issuearcid))
arcdata = myDB.action("SELECT * FROM readinglist WHERE IssueArcID=?",[issuearcid]).fetchone()
arcdata = myDB.selectone("SELECT * FROM readinglist WHERE IssueArcID=?",[issuearcid]).fetchone()
logger.fdebug('readingorder#: ' + str(arcdata['ReadingOrder']))
if int(arcdata['ReadingOrder']) < 10: readord = "00" + str(arcdata['ReadingOrder'])
elif int(arcdata['ReadingOrder']) > 10 and int(arcdata['ReadingOrder']) < 99: readord = "0" + str(arcdata['ReadingOrder'])
@ -398,10 +411,10 @@ class PostProcessor(object):
else:
grab_dst = os.path.join(grdst, ofilename)
self._log("Destination Path : " + grab_dst, logger.DEBUG)
self._log("Destination Path : " + grab_dst)#, logger.DEBUG)
logger.info("Destination Path : " + grab_dst)
grab_src = os.path.join(self.nzb_folder, ofilename)
self._log("Source Path : " + grab_src, logger.DEBUG)
self._log("Source Path : " + grab_src)#, logger.DEBUG)
logger.info("Source Path : " + grab_src)
logger.info("Moving " + str(ofilename) + " into directory : " + str(grdst))
@ -409,19 +422,19 @@ class PostProcessor(object):
try:
shutil.move(grab_src, grab_dst)
except (OSError, IOError):
self._log("Failed to move directory - check directories and manually re-run.", logger.DEBUG)
self._log("Failed to move directory - check directories and manually re-run.")#, logger.DEBUG)
logger.debug("Failed to move directory - check directories and manually re-run.")
return
#tidyup old path
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory.", logger.DEBUG)
self._log("Failed to remove temporary directory.")#, logger.DEBUG)
logger.debug("Failed to remove temporary directory - check directory and manually re-run.")
return
logger.debug("Removed temporary directory : " + str(self.nzb_folder))
self._log("Removed temporary directory : " + self.nzb_folder, logger.DEBUG)
self._log("Removed temporary directory : " + self.nzb_folder)#, logger.DEBUG)
#delete entry from nzblog table
myDB.action('DELETE from nzblog WHERE issueid=?', [issueid])
@ -457,12 +470,12 @@ class PostProcessor(object):
annchk = "no"
extensions = ('.cbr', '.cbz')
myDB = db.DBConnection()
comicnzb = myDB.action("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
issuenzb = myDB.action("SELECT * from issues WHERE issueid=? AND comicid=? AND ComicName NOT NULL", [issueid,comicid]).fetchone()
comicnzb = myDB.selectone("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
issuenzb = myDB.selectone("SELECT * from issues WHERE issueid=? AND comicid=? AND ComicName NOT NULL", [issueid,comicid]).fetchone()
logger.fdebug('issueid: ' + str(issueid))
logger.fdebug('issuenumOG: ' + str(issuenumOG))
if issuenzb is None:
issuenzb = myDB.action("SELECT * from annuals WHERE issueid=? and comicid=?", [issueid,comicid]).fetchone()
issuenzb = myDB.selectone("SELECT * from annuals WHERE issueid=? and comicid=?", [issueid,comicid]).fetchone()
annchk = "yes"
#issueno = str(issuenum).split('.')[0]
#new CV API - removed all decimals...here we go AGAIN!
@ -491,7 +504,7 @@ class PostProcessor(object):
iss = iss_b4dec
issdec = int(iss_decval)
issueno = str(iss)
self._log("Issue Number: " + str(issueno), logger.DEBUG)
self._log("Issue Number: " + str(issueno))#, logger.DEBUG)
logger.fdebug("Issue Number: " + str(issueno))
else:
if len(iss_decval) == 1:
@ -501,7 +514,7 @@ class PostProcessor(object):
iss = iss_b4dec + "." + iss_decval.rstrip('0')
issdec = int(iss_decval.rstrip('0')) * 10
issueno = iss_b4dec
self._log("Issue Number: " + str(iss), logger.DEBUG)
self._log("Issue Number: " + str(iss))#, logger.DEBUG)
logger.fdebug("Issue Number: " + str(iss))
else:
iss = issuenum
@ -518,8 +531,11 @@ class PostProcessor(object):
logger.fdebug("Zero Suppression set to : " + str(mylar.ZERO_LEVEL_N))
if str(len(issueno)) > 1:
if int(issueno) < 10:
self._log("issue detected less than 10", logger.DEBUG)
if int(issueno) < 0:
self._log("issue detected is a negative")
prettycomiss = '-' + str(zeroadd) + str(abs(int(issueno)))
elif int(issueno) < 10:
self._log("issue detected less than 10")#, logger.DEBUG)
if '.' in iss:
if int(iss_decval) > 0:
issueno = str(iss)
@ -530,9 +546,9 @@ class PostProcessor(object):
prettycomiss = str(zeroadd) + str(iss)
if issue_except != 'None':
prettycomiss = str(prettycomiss) + issue_except
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ". Issue will be set as : " + str(prettycomiss), logger.DEBUG)
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ". Issue will be set as : " + str(prettycomiss))#, logger.DEBUG)
elif int(issueno) >= 10 and int(issueno) < 100:
self._log("issue detected greater than 10, but less than 100", logger.DEBUG)
self._log("issue detected greater than 10, but less than 100")#, logger.DEBUG)
if mylar.ZERO_LEVEL_N == "none":
zeroadd = ""
else:
@ -547,44 +563,44 @@ class PostProcessor(object):
prettycomiss = str(zeroadd) + str(iss)
if issue_except != 'None':
prettycomiss = str(prettycomiss) + issue_except
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ".Issue will be set as : " + str(prettycomiss), logger.DEBUG)
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ".Issue will be set as : " + str(prettycomiss))#, logger.DEBUG)
else:
self._log("issue detected greater than 100", logger.DEBUG)
self._log("issue detected greater than 100")#, logger.DEBUG)
if '.' in iss:
if int(iss_decval) > 0:
issueno = str(iss)
prettycomiss = str(issueno)
if issue_except != 'None':
prettycomiss = str(prettycomiss) + issue_except
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ". Issue will be set as : " + str(prettycomiss), logger.DEBUG)
self._log("Zero level supplement set to " + str(mylar.ZERO_LEVEL_N) + ". Issue will be set as : " + str(prettycomiss))#, logger.DEBUG)
else:
prettycomiss = str(issueno)
self._log("issue length error - cannot determine length. Defaulting to None: " + str(prettycomiss), logger.DEBUG)
self._log("issue length error - cannot determine length. Defaulting to None: " + str(prettycomiss))#, logger.DEBUG)
if annchk == "yes":
self._log("Annual detected.")
logger.fdebug("Pretty Comic Issue is : " + str(prettycomiss))
issueyear = issuenzb['IssueDate'][:4]
self._log("Issue Year: " + str(issueyear), logger.DEBUG)
self._log("Issue Year: " + str(issueyear))#, logger.DEBUG)
logger.fdebug("Issue Year : " + str(issueyear))
month = issuenzb['IssueDate'][5:7].replace('-','').strip()
month_name = helpers.fullmonth(month)
# comicnzb= myDB.action("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
publisher = comicnzb['ComicPublisher']
self._log("Publisher: " + publisher, logger.DEBUG)
self._log("Publisher: " + publisher)#, logger.DEBUG)
logger.fdebug("Publisher: " + str(publisher))
#we need to un-unicode this to make sure we can write the filenames properly for spec.chars
series = comicnzb['ComicName'].encode('ascii', 'ignore').strip()
self._log("Series: " + series, logger.DEBUG)
self._log("Series: " + series)#, logger.DEBUG)
logger.fdebug("Series: " + str(series))
seriesyear = comicnzb['ComicYear']
self._log("Year: " + seriesyear, logger.DEBUG)
self._log("Year: " + seriesyear)#, logger.DEBUG)
logger.fdebug("Year: " + str(seriesyear))
comlocation = comicnzb['ComicLocation']
self._log("Comic Location: " + comlocation, logger.DEBUG)
self._log("Comic Location: " + comlocation)#, logger.DEBUG)
logger.fdebug("Comic Location: " + str(comlocation))
comversion = comicnzb['ComicVersion']
self._log("Comic Version: " + str(comversion), logger.DEBUG)
self._log("Comic Version: " + str(comversion))#, logger.DEBUG)
logger.fdebug("Comic Version: " + str(comversion))
if comversion is None:
comversion = 'None'
@ -593,7 +609,7 @@ class PostProcessor(object):
chunk_f_f = re.sub('\$VolumeN','',mylar.FILE_FORMAT)
chunk_f = re.compile(r'\s+')
chunk_file_format = chunk_f.sub(' ', chunk_f_f)
self._log("No version # found for series - tag will not be available for renaming.", logger.DEBUG)
self._log("No version # found for series - tag will not be available for renaming.")#, logger.DEBUG)
logger.fdebug("No version # found for series, removing from filename")
logger.fdebug("new format is now: " + str(chunk_file_format))
else:
@ -708,13 +724,13 @@ class PostProcessor(object):
if ofilename is None:
logger.error(u"Aborting PostProcessing - the filename doesn't exist in the location given. Make sure that " + str(self.nzb_folder) + " exists and is the correct location.")
return
self._log("Original Filename: " + ofilename, logger.DEBUG)
self._log("Original Extension: " + ext, logger.DEBUG)
self._log("Original Filename: " + ofilename)#, logger.DEBUG)
self._log("Original Extension: " + ext)#, logger.DEBUG)
logger.fdebug("Original Filname: " + str(ofilename))
logger.fdebug("Original Extension: " + str(ext))
if mylar.FILE_FORMAT == '' or not mylar.RENAME_FILES:
self._log("Rename Files isn't enabled...keeping original filename.", logger.DEBUG)
self._log("Rename Files isn't enabled...keeping original filename.")#, logger.DEBUG)
logger.fdebug("Rename Files isn't enabled - keeping original filename.")
#check if extension is in nzb_name - will screw up otherwise
if ofilename.lower().endswith(extensions):
@ -728,7 +744,7 @@ class PostProcessor(object):
nfilename = nfilename.replace(' ', mylar.REPLACE_CHAR)
nfilename = re.sub('[\,\:\?]', '', nfilename)
nfilename = re.sub('[\/]', '-', nfilename)
self._log("New Filename: " + nfilename, logger.DEBUG)
self._log("New Filename: " + nfilename)#, logger.DEBUG)
logger.fdebug("New Filename: " + str(nfilename))
src = os.path.join(self.nzb_folder, ofilename)
@ -739,12 +755,14 @@ class PostProcessor(object):
dst = (comlocation + "/" + nfilename + ext).lower()
else:
dst = comlocation + "/" + nfilename + ext.lower()
self._log("Source:" + src, logger.DEBUG)
self._log("Destination:" + dst, logger.DEBUG)
self._log("Source:" + src)#, logger.DEBUG)
self._log("Destination:" + dst)#, logger.DEBUG)
logger.fdebug("Source: " + str(src))
logger.fdebug("Destination: " + str(dst))
if ml is None:
#downtype = for use with updater on history table to set status to 'Downloaded'
downtype = 'True'
#non-manual run moving/deleting...
logger.fdebug('self.nzb_folder: ' + self.nzb_folder)
logger.fdebug('ofilename:' + str(ofilename))
@ -754,19 +772,21 @@ class PostProcessor(object):
try:
shutil.move(src, dst)
except (OSError, IOError):
self._log("Failed to move directory - check directories and manually re-run.", logger.DEBUG)
self._log("Post-Processing ABORTED.", logger.DEBUG)
self._log("Failed to move directory - check directories and manually re-run.")#, logger.DEBUG)
self._log("Post-Processing ABORTED.")#, logger.DEBUG)
return
#tidyup old path
try:
shutil.rmtree(self.nzb_folder)
except (OSError, IOError):
self._log("Failed to remove temporary directory - check directory and manually re-run.", logger.DEBUG)
self._log("Post-Processing ABORTED.", logger.DEBUG)
self._log("Failed to remove temporary directory - check directory and manually re-run.")#, logger.DEBUG)
self._log("Post-Processing ABORTED.")#, logger.DEBUG)
return
self._log("Removed temporary directory : " + str(self.nzb_folder), logger.DEBUG)
self._log("Removed temporary directory : " + str(self.nzb_folder))#, logger.DEBUG)
else:
#downtype = for use with updater on history table to set status to 'Post-Processed'
downtype = 'PP'
#Manual Run, this is the portion.
logger.fdebug("Renaming " + os.path.join(self.nzb_folder, str(ofilename)) + " ..to.. " + os.path.join(self.nzb_folder,str(nfilename + ext)))
os.rename(os.path.join(self.nzb_folder, str(ofilename)), os.path.join(self.nzb_folder,str(nfilename + ext)))
@ -791,24 +811,27 @@ class PostProcessor(object):
#Hopefully set permissions on downloaded file
try:
os.chmod( dst, int(mylar.CHMOD_FILE,8) )
except (OSError, IOError):
return
permission = int(mylar.CHMOD_FILE, 8)
os.umask(0)
os.chmod(dst.rstrip(), permission)
except OSError:
logger.error('Could not change file permissions for : ' + dst)
#delete entry from nzblog table
myDB.action('DELETE from nzblog WHERE issueid=?', [issueid])
#update snatched table to change status to Downloaded
if annchk == "no":
updater.foundsearch(comicid, issueid, down='True')
updater.foundsearch(comicid, issueid, down=downtype)
dispiss = 'issue: ' + str(issuenumOG)
else:
updater.foundsearch(comicid, issueid, mode='want_ann', down='True')
updater.foundsearch(comicid, issueid, mode='want_ann', down=downtype)
dispiss = 'annual issue: ' + str(issuenumOG)
#force rescan of files
updater.forceRescan(comicid)
logger.info(u"Post-Processing completed for: " + series + " " + dispiss )
self._log(u"Post Processing SUCCESSFULL! ", logger.DEBUG)
self._log(u"Post Processing SUCCESSFULL! ")#, logger.DEBUG)
# retrieve/create the corresponding comic objects
if mylar.ENABLE_EXTRA_SCRIPTS:
@ -834,26 +857,44 @@ class PostProcessor(object):
if ml is not None:
return self.log
else:
if annchk == "no":
prline = series + '(' + issueyear + ') - issue #' + issuenumOG
else:
prline = series + ' Annual (' + issueyear + ') - issue #' + issuenumOG
prline2 = 'Mylar has downloaded and post-processed: ' + prline
if mylar.PROWL_ENABLED:
pushmessage = series + '(' + issueyear + ') - issue #' + issuenumOG
pushmessage = prline
logger.info(u"Prowl request")
prowl = notifiers.PROWL()
prowl.notify(pushmessage,"Download and Postprocessing completed")
if mylar.NMA_ENABLED:
nma = notifiers.NMA()
nma.notify(series, str(issueyear), str(issuenumOG))
nma.notify(prline=prline, prline2=prline2)
if mylar.PUSHOVER_ENABLED:
pushmessage = series + ' (' + str(issueyear) + ') - issue #' + str(issuenumOG)
logger.info(u"Pushover request")
pushover = notifiers.PUSHOVER()
pushover.notify(pushmessage, "Download and Post-Processing completed")
pushover.notify(prline, "Download and Post-Processing completed")
if mylar.BOXCAR_ENABLED:
boxcar = notifiers.BOXCAR()
boxcar.notify(series, str(issueyear), str(issuenumOG))
boxcar.notify(prline=prline, prline2=prline2)
if mylar.PUSHBULLET_ENABLED:
pushbullet = notifiers.PUSHBULLET()
pushbullet.notify(prline=prline, prline2=prline2)
return self.log
class FolderCheck():
def run(self):
import PostProcessor, logger
#monitor a selected folder for 'snatched' files that haven't been processed
logger.info('Checking folder ' + mylar.CHECK_FOLDER + ' for newly snatched downloads')
PostProcess = PostProcessor.PostProcessor('Manual Run', mylar.CHECK_FOLDER)
result = PostProcess.Process()
logger.info('Finished checking for newly snatched downloads')
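The notifiers.PUSHBULLET class called above isn't part of this excerpt; a minimal sketch of such a notifier against Pushbullet's v2 pushes endpoint (the endpoint choice and all names here are assumptions, not this commit's code):

    # Hypothetical sketch; the real implementation lives in mylar's notifiers module.
    import base64, json, urllib2
    import mylar
    from mylar import logger

    class PUSHBULLET:
        def notify(self, prline=None, prline2=None):
            data = {'type': 'note', 'title': prline, 'body': prline2}
            if mylar.PUSHBULLET_DEVICEID:
                data['device_iden'] = mylar.PUSHBULLET_DEVICEID
            req = urllib2.Request('https://api.pushbullet.com/v2/pushes', json.dumps(data))
            req.add_header('Content-Type', 'application/json')
            req.add_header('Authorization', 'Basic ' + base64.b64encode(mylar.PUSHBULLET_APIKEY + ':'))
            try:
                urllib2.urlopen(req)
            except urllib2.URLError:
                logger.error('Pushbullet notification failed.')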


@ -17,7 +17,8 @@ from __future__ import with_statement
import os, sys, subprocess
import threading
#import threading
import datetime
import webbrowser
import sqlite3
import itertools
@ -25,13 +26,14 @@ import csv
import shutil
import platform
import locale
from threading import Lock, Thread
from lib.apscheduler.scheduler import Scheduler
#from lib.apscheduler.scheduler import Scheduler
from lib.configobj import ConfigObj
import cherrypy
from mylar import versioncheck, logger, version, rsscheck
from mylar import versioncheck, logger, version, versioncheckit, rsscheckit, scheduler, dbupdater, weeklypullit, PostProcessor, searchit #, search
FULL_PATH = None
PROG_DIR = None
@ -47,11 +49,20 @@ VERBOSE = 1
DAEMON = False
PIDFILE= None
SCHED = Scheduler()
#SCHED = Scheduler()
INIT_LOCK = threading.Lock()
#INIT_LOCK = threading.Lock()
INIT_LOCK = Lock()
__INITIALIZED__ = False
started = False
WRITELOCK = False
dbUpdateScheduler = None
searchScheduler = None
RSSScheduler = None
WeeklyScheduler = None
VersionScheduler = None
FolderMonitorScheduler = None
DATA_DIR = None
DBLOCK = False
@ -64,6 +75,7 @@ DB_FILE = None
LOG_DIR = None
LOG_LIST = []
MAX_LOGSIZE = None
CACHE_DIR = None
SYNO_FIX = False
@ -78,7 +90,7 @@ HTTP_ROOT = None
API_ENABLED = False
API_KEY = None
LAUNCH_BROWSER = False
LOGVERBOSE = 1
LOGVERBOSE = None
GIT_PATH = None
INSTALL_TYPE = None
CURRENT_VERSION = None
@ -144,8 +156,12 @@ PUSHOVER_APIKEY = None
PUSHOVER_USERKEY = None
PUSHOVER_ONSNATCH = False
BOXCAR_ENABLED = False
BOXCAR_USERNAME = None
BOXCAR_ONSNATCH = False
BOXCAR_TOKEN = None
PUSHBULLET_ENABLED = False
PUSHBULLET_APIKEY = None
PUSHBULLET_DEVICEID = None
PUSHBULLET_ONSNATCH = False
SKIPPED2WANTED = False
CVINFO = False
@ -246,6 +262,7 @@ RSS_CHECKINTERVAL = 20
RSS_LASTRUN = None
ENABLE_TORRENTS = 0
MINSEEDS = 0
TORRENT_LOCAL = 0
LOCAL_WATCHDIR = None
TORRENT_SEEDBOX = 0
@ -313,7 +330,7 @@ def initialize():
with INIT_LOCK:
global __INITIALIZED__, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, LOGVERBOSE, OLDCONFIG_VERSION, OS_DETECT, OS_LANG, OS_ENCODING, \
global __INITIALIZED__, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, LOGVERBOSE, OLDCONFIG_VERSION, OS_DETECT, OS_LANG, OS_ENCODING, \
HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, API_ENABLED, API_KEY, LAUNCH_BROWSER, GIT_PATH, \
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, USER_AGENT, DESTINATION_DIR, \
DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
@ -322,9 +339,11 @@ def initialize():
NEWZNAB, NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_UID, NEWZNAB_ENABLED, EXTRA_NEWZNABS, NEWZNAB_EXTRA, \
RAW, RAW_PROVIDER, RAW_USERNAME, RAW_PASSWORD, RAW_GROUPS, EXPERIMENTAL, ALTEXPERIMENTAL, \
ENABLE_META, CMTAGGER_PATH, INDIE_PUB, BIGGIE_PUB, IGNORE_HAVETOTAL, PROVIDER_ORDER, \
ENABLE_TORRENTS, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, VersionScheduler, FolderMonitorScheduler, \
ENABLE_TORRENTS, MINSEEDS, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
ENABLE_RSS, RSS_CHECKINTERVAL, RSS_LASTRUN, ENABLE_TORRENT_SEARCH, ENABLE_KAT, KAT_PROXY, ENABLE_CBT, CBT_PASSKEY, \
PROWL_ENABLED, PROWL_PRIORITY, PROWL_KEYS, PROWL_ONSNATCH, NMA_ENABLED, NMA_APIKEY, NMA_PRIORITY, NMA_ONSNATCH, PUSHOVER_ENABLED, PUSHOVER_PRIORITY, PUSHOVER_APIKEY, PUSHOVER_USERKEY, PUSHOVER_ONSNATCH, BOXCAR_ENABLED, BOXCAR_USERNAME, BOXCAR_ONSNATCH, LOCMOVE, NEWCOM_DIR, FFTONEWCOM_DIR, \
PROWL_ENABLED, PROWL_PRIORITY, PROWL_KEYS, PROWL_ONSNATCH, NMA_ENABLED, NMA_APIKEY, NMA_PRIORITY, NMA_ONSNATCH, PUSHOVER_ENABLED, PUSHOVER_PRIORITY, PUSHOVER_APIKEY, PUSHOVER_USERKEY, PUSHOVER_ONSNATCH, BOXCAR_ENABLED, BOXCAR_ONSNATCH, BOXCAR_TOKEN, \
PUSHBULLET_ENABLED, PUSHBULLET_APIKEY, PUSHBULLET_DEVICEID, PUSHBULLET_ONSNATCH, LOCMOVE, NEWCOM_DIR, FFTONEWCOM_DIR, \
PREFERRED_QUALITY, MOVE_FILES, RENAME_FILES, LOWERCASE_FILENAMES, USE_MINSIZE, MINSIZE, USE_MAXSIZE, MAXSIZE, CORRECT_METADATA, FOLDER_FORMAT, FILE_FORMAT, REPLACE_CHAR, REPLACE_SPACES, ADD_TO_CSV, CVINFO, LOG_LEVEL, POST_PROCESSING, SEARCH_DELAY, GRABBAG_DIR, READ2FILENAME, STORYARCDIR, CVURL, CVAPIFIX, CHECK_FOLDER, \
COMIC_LOCATION, QUAL_ALTVERS, QUAL_SCANNER, QUAL_TYPE, QUAL_QUALITY, ENABLE_EXTRA_SCRIPTS, EXTRA_SCRIPTS, ENABLE_PRE_SCRIPTS, PRE_SCRIPTS, PULLNEW, COUNT_ISSUES, COUNT_HAVES, COUNT_COMICS, SYNO_FIX, CHMOD_FILE, CHMOD_DIR, ANNUALS_ON, CV_ONLY, CV_ONETIMER, WEEKFOLDER
@ -358,7 +377,14 @@ def initialize():
API_ENABLED = bool(check_setting_int(CFG, 'General', 'api_enabled', 0))
API_KEY = check_setting_str(CFG, 'General', 'api_key', '')
LAUNCH_BROWSER = bool(check_setting_int(CFG, 'General', 'launch_browser', 1))
LOGVERBOSE = bool(check_setting_int(CFG, 'General', 'logverbose', 1))
LOGVERBOSE = bool(check_setting_int(CFG, 'General', 'logverbose', 0))
if LOGVERBOSE:
VERBOSE = 2
else:
VERBOSE = 1
MAX_LOGSIZE = check_setting_str(CFG, 'General', 'max_logsize', '')
if not MAX_LOGSIZE:
MAX_LOGSIZE = 1000000
GIT_PATH = check_setting_str(CFG, 'General', 'git_path', '')
LOG_DIR = check_setting_str(CFG, 'General', 'log_dir', '')
if not CACHE_DIR:
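max_logsize is only read into a global here; the rotation itself presumably happens in the reworked logger module. A sketch of the standard approach, as an assumption rather than this commit's code:

    # Illustrative only: cap the log file size and keep a few rotated copies.
    import logging.handlers, os
    handler = logging.handlers.RotatingFileHandler(
                  os.path.join(LOG_DIR, 'mylar.log'),
                  maxBytes=MAX_LOGSIZE,    # falls back to 1000000 when unset above
                  backupCount=5)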
@ -440,9 +466,13 @@ def initialize():
PUSHOVER_ONSNATCH = bool(check_setting_int(CFG, 'PUSHOVER', 'pushover_onsnatch', 0))
BOXCAR_ENABLED = bool(check_setting_int(CFG, 'BOXCAR', 'boxcar_enabled', 0))
BOXCAR_USERNAME = check_setting_str(CFG, 'BOXCAR', 'boxcar_username', '')
BOXCAR_ONSNATCH = bool(check_setting_int(CFG, 'BOXCAR', 'boxcar_onsnatch', 0))
BOXCAR_TOKEN = check_setting_str(CFG, 'BOXCAR', 'boxcar_token', '')
PUSHBULLET_ENABLED = bool(check_setting_int(CFG, 'PUSHBULLET', 'pushbullet_enabled', 0))
PUSHBULLET_APIKEY = check_setting_str(CFG, 'PUSHBULLET', 'pushbullet_apikey', '')
PUSHBULLET_DEVICEID = check_setting_str(CFG, 'PUSHBULLET', 'pushbullet_deviceid', '')
PUSHBULLET_ONSNATCH = bool(check_setting_int(CFG, 'PUSHBULLET', 'pushbullet_onsnatch', 0))
USE_MINSIZE = bool(check_setting_int(CFG, 'General', 'use_minsize', 0))
MINSIZE = check_setting_str(CFG, 'General', 'minsize', '')
@ -480,6 +510,7 @@ def initialize():
RSS_LASTRUN = check_setting_str(CFG, 'General', 'rss_lastrun', '')
ENABLE_TORRENTS = bool(check_setting_int(CFG, 'Torrents', 'enable_torrents', 0))
MINSEEDS = check_setting_str(CFG, 'Torrents', 'minseeds', '0')
TORRENT_LOCAL = bool(check_setting_int(CFG, 'Torrents', 'torrent_local', 0))
LOCAL_WATCHDIR = check_setting_str(CFG, 'Torrents', 'local_watchdir', '')
TORRENT_SEEDBOX = bool(check_setting_int(CFG, 'Torrents', 'torrent_seedbox', 0))
@ -501,6 +532,10 @@ def initialize():
if NZB_DOWNLOADER == 0: USE_SABNZBD = True
elif NZB_DOWNLOADER == 1: USE_NZBGET = True
elif NZB_DOWNLOADER == 2: USE_BLACKHOLE = True
else:
#default to SABnzbd
NZB_DOWNLOADER = 0
USE_SABNZBD = True
#USE_SABNZBD = bool(check_setting_int(CFG, 'SABnzbd', 'use_sabnzbd', 0))
SAB_HOST = check_setting_str(CFG, 'SABnzbd', 'sab_host', '')
SAB_USERNAME = check_setting_str(CFG, 'SABnzbd', 'sab_username', '')
@ -567,7 +602,7 @@ def initialize():
PR.append('Experimental')
PR_NUM +=1
print 'PR_NUM::' + str(PR_NUM)
#print 'PR_NUM::' + str(PR_NUM)
NEWZNAB = bool(check_setting_int(CFG, 'Newznab', 'newznab', 0))
@ -617,18 +652,19 @@ def initialize():
#to counteract the loss of the 1st newznab entry because of a switch, let's rewrite to the tuple
if NEWZNAB_HOST and CONFIG_VERSION:
EXTRA_NEWZNABS.append((NEWZNAB_NAME, NEWZNAB_HOST, NEWZNAB_APIKEY, NEWZNAB_UID, int(NEWZNAB_ENABLED)))
PR_NUM +=1
#PR_NUM +=1
# Need to rewrite config here and bump up config version
CONFIG_VERSION = '5'
config_write()
#print 'PR_NUM:' + str(PR_NUM)
for ens in EXTRA_NEWZNABS:
#print ens[0]
#print 'enabled:' + str(ens[4])
if ens[4] == '1': # if newznabs are enabled
PR.append(ens[0])
PR_NUM +=1
if NEWZNAB:
for ens in EXTRA_NEWZNABS:
#print ens[0]
#print 'enabled:' + str(ens[4])
if ens[4] == '1': # if newznabs are enabled
PR.append(ens[0])
PR_NUM +=1
#print('Provider Number count: ' + str(PR_NUM))
@ -642,12 +678,50 @@ def initialize():
TMPPR_NUM = 0
PROV_ORDER = []
while TMPPR_NUM < PR_NUM :
PROV_ORDER.append((TMPPR_NUM, PR[TMPPR_NUM]))
PROV_ORDER.append({"order_seq": TMPPR_NUM,
"provider": str(PR[TMPPR_NUM])})
TMPPR_NUM +=1
PROVIDER_ORDER = PROV_ORDER
else:
#if provider order exists already, load it and then append to end any NEW entries.
TMPPR_NUM = 0
PROV_ORDER = []
for PRO in PROVIDER_ORDER:
PROV_ORDER.append({"order_seq": PRO[0],
"provider": str(PRO[1])})
#print 'Provider is : ' + str(PRO)
TMPPR_NUM +=1
if PR_NUM != TMPPR_NUM:
#print 'existing Order count does not match New Order count'
if PR_NUM > TMPPR_NUM:
#print 'New entries exist, appending to end as default ordering'
TMPPR_NUM = 0
while (TMPPR_NUM < PR_NUM):
#print 'checking entry #' + str(TMPPR_NUM) + ': ' + str(PR[TMPPR_NUM])
if not any(d.get("provider",None) == str(PR[TMPPR_NUM]) for d in PROV_ORDER):
#print 'new provider should be : ' + str(TMPPR_NUM) + ' -- ' + str(PR[TMPPR_NUM])
PROV_ORDER.append({"order_seq": TMPPR_NUM,
"provider": str(PR[TMPPR_NUM])})
#else:
#print 'provider already exists at : ' + str(TMPPR_NUM) + ' -- ' + str(PR[TMPPR_NUM])
TMPPR_NUM +=1
#this isn't ready for primetime just yet...
#logger.info('Provider Order is:' + str(PROVIDER_ORDER))
#print 'Provider Order is:' + str(PROV_ORDER)
if PROV_ORDER is None:
flatt_providers = None
else:
flatt_providers = []
for pro in PROV_ORDER:
for key, value in pro.items():
flatt_providers.append(str(value))
PROVIDER_ORDER = list(itertools.izip(*[itertools.islice(flatt_providers, i, None, 2) for i in range(2)]))
#print 'text provider order is: ' + str(PROVIDER_ORDER)
config_write()
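The itertools expression above just pairs up the flat [seq, name, seq, name, ...] list again; the same idiom in isolation (provider names are placeholders):

    import itertools
    flat = ['0', 'nzb.su', '1', 'dognzb', '2', 'KAT']
    pairs = list(itertools.izip(*[itertools.islice(flat, i, None, 2) for i in range(2)]))
    # pairs == [('0', 'nzb.su'), ('1', 'dognzb'), ('2', 'KAT')]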
# update folder formats in the config & bump up config version
@ -701,7 +775,7 @@ def initialize():
print 'Unable to create the log directory. Logging to screen only.'
# Start the logger, silence console logging if we need to
logger.mylar_log.initLogger(verbose=VERBOSE)
logger.initLogger(verbose=VERBOSE) #logger.mylar_log.initLogger(verbose=VERBOSE)
# Put the cache dir in the data dir for now
if not CACHE_DIR:
@ -795,6 +869,52 @@ def initialize():
#Ordering comics here
logger.info('Remapping the sorting to allow for new additions.')
COMICSORT = helpers.ComicSort(sequence='startup')
#start the db write only thread here.
#this is a thread that continually runs in the background as the ONLY thread that can write to the db.
logger.info('Starting Write-Only thread.')
db.WriteOnly()
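# The internals of db.WriteOnly() aren't part of this diff; the pattern the
# comments describe is a single consumer thread draining a queue of writes, so
# sqlite only ever sees one writer. A rough illustrative sketch (names assumed):
#
#   import Queue, sqlite3, threading
#   WRITE_QUEUE = Queue.Queue()
#
#   def WriteOnly():
#       def worker():
#           conn = sqlite3.connect(DB_FILE)
#           while True:
#               query, args = WRITE_QUEUE.get()   # block until a write is queued
#               conn.execute(query, args)
#               conn.commit()                     # one writer => fewer 'database is locked' errors
#       t = threading.Thread(target=worker, name='DB-WRITER')
#       t.daemon = True
#       t.start()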
#initialize the scheduler threads here.
dbUpdateScheduler = scheduler.Scheduler(action=dbupdater.dbUpdate(),
cycleTime=datetime.timedelta(hours=48),
runImmediately=False,
threadName="DBUPDATE")
if NZB_STARTUP_SEARCH:
searchrunmode = True
else:
searchrunmode = False
searchScheduler = scheduler.Scheduler(searchit.CurrentSearcher(),
cycleTime=datetime.timedelta(minutes=SEARCH_INTERVAL),
threadName="SEARCH",
runImmediately=searchrunmode)
RSSScheduler = scheduler.Scheduler(rsscheckit.tehMain(),
cycleTime=datetime.timedelta(minutes=int(RSS_CHECKINTERVAL)),
threadName="RSSCHECK",
runImmediately=True,
delay=30)
WeeklyScheduler = scheduler.Scheduler(weeklypullit.Weekly(),
cycleTime=datetime.timedelta(hours=24),
threadName="WEEKLYCHECK",
runImmediately=True,
delay=10)
VersionScheduler = scheduler.Scheduler(versioncheckit.CheckVersion(),
cycleTime=datetime.timedelta(minutes=CHECK_GITHUB_INTERVAL),
threadName="VERSIONCHECK",
runImmediately=True)
FolderMonitorScheduler = scheduler.Scheduler(PostProcessor.FolderCheck(),
cycleTime=datetime.timedelta(minutes=int(DOWNLOAD_SCAN_INTERVAL)),
threadName="FOLDERMONITOR",
runImmediately=True,
delay=60)
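# For reference, the scheduler.Scheduler wiring above follows the sickbeard
# pattern: wrap any object exposing run() in a named thread that fires every
# cycleTime. A condensed illustrative sketch (not this commit's code):
#
#   import time, threading
#
#   class Scheduler:
#       def __init__(self, action, cycleTime, threadName, runImmediately=False, delay=0):
#           self.action = action                # any object with a run() method
#           self.cycleTime = cycleTime          # datetime.timedelta between runs
#           self.runImmediately = runImmediately
#           self.delay = delay                  # seconds to wait before the first run
#           self.thread = threading.Thread(target=self._loop, name=threadName)
#
#       def _loop(self):
#           time.sleep(self.delay)
#           if self.runImmediately:
#               self.action.run()
#           while True:
#               time.sleep(self.cycleTime.total_seconds())
#               self.action.run()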
__INITIALIZED__ = True
return True
@ -874,6 +994,7 @@ def config_write():
new_config['General']['api_key'] = API_KEY
new_config['General']['launch_browser'] = int(LAUNCH_BROWSER)
new_config['General']['log_dir'] = LOG_DIR
new_config['General']['max_logsize'] = MAX_LOGSIZE
new_config['General']['logverbose'] = int(LOGVERBOSE)
new_config['General']['git_path'] = GIT_PATH
new_config['General']['cache_dir'] = CACHE_DIR
@ -950,20 +1071,23 @@ def config_write():
new_config['General']['rss_checkinterval'] = RSS_CHECKINTERVAL
new_config['General']['rss_lastrun'] = RSS_LASTRUN
# Need to unpack the extra newznabs for saving in config.ini
# Need to unpack the providers for saving in config.ini
if PROVIDER_ORDER is None:
flattened_providers = None
else:
flattened_providers = []
for prov_order in PROVIDER_ORDER:
for item in prov_order:
flattened_providers.append(item)
for pro in PROVIDER_ORDER:
#for key, value in pro.items():
for item in pro:
flattened_providers.append(str(item))
#flattened_providers.append(str(value))
new_config['General']['provider_order'] = flattened_providers
new_config['General']['nzb_downloader'] = NZB_DOWNLOADER
new_config['General']['nzb_downloader'] = int(NZB_DOWNLOADER)
new_config['Torrents'] = {}
new_config['Torrents']['enable_torrents'] = int(ENABLE_TORRENTS)
new_config['Torrents']['minseeds'] = int(MINSEEDS)
new_config['Torrents']['torrent_local'] = int(TORRENT_LOCAL)
new_config['Torrents']['local_watchdir'] = LOCAL_WATCHDIR
new_config['Torrents']['torrent_seedbox'] = int(TORRENT_SEEDBOX)
@ -1045,9 +1169,14 @@ def config_write():
new_config['BOXCAR'] = {}
new_config['BOXCAR']['boxcar_enabled'] = int(BOXCAR_ENABLED)
new_config['BOXCAR']['boxcar_username'] = BOXCAR_USERNAME
new_config['BOXCAR']['boxcar_onsnatch'] = int(BOXCAR_ONSNATCH)
new_config['BOXCAR']['boxcar_token'] = BOXCAR_TOKEN
new_config['PUSHBULLET'] = {}
new_config['PUSHBULLET']['pushbullet_enabled'] = int(PUSHBULLET_ENABLED)
new_config['PUSHBULLET']['pushbullet_apikey'] = PUSHBULLET_APIKEY
new_config['PUSHBULLET']['pushbullet_deviceid'] = PUSHBULLET_DEVICEID
new_config['PUSHBULLET']['pushbullet_onsnatch'] = int(PUSHBULLET_ONSNATCH)
new_config['Raw'] = {}
new_config['Raw']['raw'] = int(RAW)
@ -1060,52 +1189,65 @@ def config_write():
def start():
global __INITIALIZED__, started
global __INITIALIZED__, \
dbUpdateScheduler, searchScheduler, RSSScheduler, \
WeeklyScheduler, VersionScheduler, FolderMonitorScheduler, \
started
with INIT_LOCK:
if __INITIALIZED__:
if __INITIALIZED__:
# Start our scheduled background tasks
#from mylar import updater, searcher, librarysync, postprocessor
# Start our scheduled background tasks
#from mylar import updater, searcher, librarysync, postprocessor
from mylar import updater, search, weeklypull, PostProcessor
SCHED.add_interval_job(updater.dbUpdate, hours=48)
SCHED.add_interval_job(search.searchforissue, minutes=SEARCH_INTERVAL)
# SCHED.add_interval_job(updater.dbUpdate, hours=48)
# SCHED.add_interval_job(search.searchforissue, minutes=SEARCH_INTERVAL)
helpers.latestdate_fix()
#start the db updater scheduler
logger.info('Initializing the DB Updater.')
dbUpdateScheduler.thread.start()
#initiate startup rss feeds for torrents/nzbs here...
if ENABLE_RSS:
SCHED.add_interval_job(rsscheck.tehMain, minutes=int(RSS_CHECKINTERVAL))
#start the search scheduler
searchScheduler.thread.start()
logger.info('Initiating startup-RSS feed checks.')
rsscheck.tehMain()
helpers.latestdate_fix()
#initiate startup rss feeds for torrents/nzbs here...
if ENABLE_RSS:
# SCHED.add_interval_job(rsscheck.tehMain, minutes=int(RSS_CHECKINTERVAL))
RSSScheduler.thread.start()
logger.info('Initiating startup-RSS feed checks.')
# rsscheck.tehMain()
#SCHED.add_interval_job(librarysync.libraryScan, minutes=LIBRARYSCAN_INTERVAL)
#weekly pull list gets messed up if it's not populated first, so let's populate it then set the scheduler.
logger.info('Checking for existence of Weekly Comic listing...')
PULLNEW = 'no' #reset the indicator here.
threading.Thread(target=weeklypull.pullit).start()
#now the scheduler (check every 24 hours)
SCHED.add_interval_job(weeklypull.pullit, hours=24)
#weekly pull list gets messed up if it's not populated first, so let's populate it then set the scheduler.
logger.info('Checking for existence of Weekly Comic listing...')
PULLNEW = 'no' #reset the indicator here.
# threading.Thread(target=weeklypull.pullit).start()
# #now the scheduler (check every 24 hours)
# SCHED.add_interval_job(weeklypull.pullit, hours=24)
WeeklyScheduler.thread.start()
#let's do a run at the Wanted issues here (on startup) if enabled.
if NZB_STARTUP_SEARCH:
threading.Thread(target=search.searchforissue).start()
#let's do a run at the Wanted issues here (on startup) if enabled.
# if NZB_STARTUP_SEARCH:
# threading.Thread(target=search.searchforissue).start()
if CHECK_GITHUB:
SCHED.add_interval_job(versioncheck.checkGithub, minutes=CHECK_GITHUB_INTERVAL)
if CHECK_GITHUB:
VersionScheduler.thread.start()
# SCHED.add_interval_job(versioncheck.checkGithub, minutes=CHECK_GITHUB_INTERVAL)
#run checkFolder every X minutes (basically Manual Run Post-Processing)
logger.info('CHECK_FOLDER SET TO: ' + str(CHECK_FOLDER))
if CHECK_FOLDER:
if DOWNLOAD_SCAN_INTERVAL >0:
logger.info('Setting monitor on folder : ' + str(CHECK_FOLDER))
SCHED.add_interval_job(helpers.checkFolder, minutes=int(DOWNLOAD_SCAN_INTERVAL))
else:
logger.error('You need to specify a monitoring time for the check folder option to work')
SCHED.start()
#run checkFolder every X minutes (basically Manual Run Post-Processing)
logger.info('CHECK_FOLDER SET TO: ' + str(CHECK_FOLDER))
if CHECK_FOLDER:
if DOWNLOAD_SCAN_INTERVAL >0:
logger.info('Setting monitor on folder : ' + str(CHECK_FOLDER))
FolderMonitorScheduler.thread.start()
# SCHED.add_interval_job(helpers.checkFolder, minutes=int(DOWNLOAD_SCAN_INTERVAL))
else:
logger.error('You need to specify a monitoring time for the check folder option to work')
# SCHED.start()
started = True
@ -1415,10 +1557,69 @@ def csv_load():
conn.commit()
c.close()
def halt():
global __INITIALIZED__, dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, \
VersionScheduler, FolderMonitorScheduler, started
with INIT_LOCK:
if __INITIALIZED__:
logger.info(u"Aborting all threads")
# abort all the threads
dbUpdateScheduler.abort = True
logger.info(u"Waiting for the DB UPDATE thread to exit")
try:
dbUpdateScheduler.thread.join(10)
except:
pass
searchScheduler.abort = True
logger.info(u"Waiting for the SEARCH thread to exit")
try:
searchScheduler.thread.join(10)
except:
pass
RSSScheduler.abort = True
logger.info(u"Waiting for the RSS CHECK thread to exit")
try:
RSSScheduler.thread.join(10)
except:
pass
WeeklyScheduler.abort = True
logger.info(u"Waiting for the WEEKLY CHECK thread to exit")
try:
WeeklyScheduler.thread.join(10)
except:
pass
VersionScheduler.abort = True
logger.info(u"Waiting for the VERSION CHECK thread to exit")
try:
VersionScheduler.thread.join(10)
except:
pass
FolderMonitorScheduler.abort = True
logger.info(u"Waiting for the FOLDER MONITOR thread to exit")
try:
FolderMonitorScheduler.thread.join(10)
except:
pass
__INITIALIZED__ = False
def shutdown(restart=False, update=False):
halt()
cherrypy.engine.exit()
SCHED.shutdown(wait=False)
#SCHED.shutdown(wait=False)
config_write()
@ -1432,7 +1633,7 @@ def shutdown(restart=False, update=False):
logger.warn('Mylar failed to update: %s. Restarting.' % e)
if PIDFILE :
logger.info ('Removing pidfile %s' % PIDFILE)
logger.info('Removing pidfile %s' % PIDFILE)
os.remove(PIDFILE)
if restart:
@ -32,7 +32,7 @@ def pulldetails(comicid,type,issueid=None,offset=1):
comicapi='583939a3df0a25fc4e8b7a29934a13078002dc27'
if type == 'comic':
if not comicid.startswith('4050-'): comicid = '4050-' + comicid
PULLURL= mylar.CVURL + 'volume/' + str(comicid) + '/?api_key=' + str(comicapi) + '&format=xml&field_list=name,count_of_issues,issues,start_year,site_detail_url,image,publisher,description,first_issue,deck'
PULLURL= mylar.CVURL + 'volume/' + str(comicid) + '/?api_key=' + str(comicapi) + '&format=xml&field_list=name,count_of_issues,issues,start_year,site_detail_url,image,publisher,description,first_issue,deck,aliases'
elif type == 'issue':
if mylar.CV_ONLY:
cv_type = 'issues'
@ -178,11 +178,11 @@ def GetComicInfo(comicid,dom):
if i == 0:
vfind = comicDes[v_find:v_find+15] #if it's volume 5 format
basenums = {'zero':'0','one':'1','two':'2','three':'3','four':'4','five':'5','six':'6','seven':'7','eight':'8','nine':'9','ten':'10','i':'1','ii':'2','iii':'3','iv':'4','v':'5'}
logger.fdebug('volume X format - ' + str(i) + ': ' + str(vfind))
logger.fdebug('volume X format - ' + str(i) + ': ' + vfind)
else:
vfind = comicDes[:v_find] # if it's fifth volume format
basenums = {'zero':'0','first':'1','second':'2','third':'3','fourth':'4','fifth':'5','sixth':'6','seventh':'7','eighth':'8','ninth':'9','tenth':'10','i':'1','ii':'2','iii':'3','iv':'4','v':'5'}
logger.fdebug('X volume format - ' + str(i) + ': ' + str(vfind))
logger.fdebug('X volume format - ' + str(i) + ': ' + vfind)
volconv = ''
for nums in basenums:
if nums in vfind.lower():
@ -215,7 +215,7 @@ def GetComicInfo(comicid,dom):
i+=1
if comic['ComicVersion'] == 'noversion':
logger.info('comic[ComicVersion]:' + str(comic['ComicVersion']))
logger.fdebug('comic[ComicVersion]:' + str(comic['ComicVersion']))
desdeck -=1
else:
break
@ -23,17 +23,46 @@ import os
import sqlite3
import threading
import time
import Queue
import mylar
from mylar import logger
db_lock = threading.Lock()
mylarQueue = Queue.Queue()
def dbFilename(filename="mylar.db"):
return os.path.join(mylar.DATA_DIR, filename)
class WriteOnly:
def __init__(self):
t = threading.Thread(target=self.worker, name="DB-WRITER")
t.daemon = True
t.start()
logger.fdebug('Thread WriteOnly initialized.')
def worker(self):
myDB = DBConnection()
#this runs in its own thread, constantly polling the queue and handing writes to the writer.
logger.fdebug('worker started.')
while True:
thisthread = threading.currentThread().name
if not mylarQueue.empty():
# Rename the main thread
logger.fdebug('[' + str(thisthread) + '] queue is not empty yet...')
(QtableName, QvalueDict, QkeyDict) = mylarQueue.get(block=True, timeout=None)
logger.fdebug('[REQUEUE] Table: ' + str(QtableName) + ' values: ' + str(QvalueDict) + ' keys: ' + str(QkeyDict))
sqlResult = myDB.upsert(QtableName, QvalueDict, QkeyDict)
if sqlResult:
mylarQueue.task_done()
#else:
# time.sleep(1)
# logger.fdebug('[' + str(thisthread) + '] sleeping until active.')
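#--- illustrative example (not part of this commit) ----------------------------
# The intent of the design: producer threads put (table, values, keys) tuples
# on mylarQueue instead of writing directly, and the DB-WRITER thread drains
# the queue so only one thread ever commits. The enqueue side is still
# commented out in upsert() below, so this shows the intent, not live code:
mylarQueue.put(("comics", {"Status": "Active"}, {"ComicID": "1234"}))
# the worker loop above pops the tuple and performs the actual myDB.upsert()
#-------------------------------------------------------------------------------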
class DBConnection:
def __init__(self, filename="mylar.db"):
@ -41,11 +70,49 @@ class DBConnection:
self.filename = filename
self.connection = sqlite3.connect(dbFilename(filename), timeout=20)
self.connection.row_factory = sqlite3.Row
self.queue = mylarQueue
def action(self, query, args=None):
def fetch(self, query, args=None):
with db_lock:
if query == None:
return
sqlResult = None
attempt = 0
while attempt < 5:
try:
if args == None:
#logger.fdebug("[FETCH] : " + query)
cursor = self.connection.cursor()
sqlResult = cursor.execute(query)
else:
#logger.fdebug("[FETCH] : " + query + " with args " + str(args))
cursor = self.connection.cursor()
sqlResult = cursor.execute(query, args)
# get out of the connection attempt loop since we were successful
break
except sqlite3.OperationalError, e:
if "unable to open database file" in e.args[0] or "database is locked" in e.args[0]:
logger.warn('Database Error: %s' % e)
attempt += 1
time.sleep(1)
else:
logger.warn('DB error: %s' % e)
raise
except sqlite3.DatabaseError, e:
logger.error('Fatal error executing query: %s' % e)
raise
return sqlResult
def action(self, query, args=None):
with db_lock:
if query == None:
return
@ -55,10 +122,10 @@ class DBConnection:
while attempt < 5:
try:
if args == None:
#logger.debug(self.filename+": "+query)
#logger.fdebug("[ACTION] : " + query)
sqlResult = self.connection.execute(query)
else:
#logger.debug(self.filename+": "+query+" with args "+str(args))
#logger.fdebug("[ACTION] : " + query + " with args " + str(args))
sqlResult = self.connection.execute(query, args)
self.connection.commit()
break
@ -71,32 +138,46 @@ class DBConnection:
else:
logger.error('Database error executing %s :: %s' % (query, e))
raise
except sqlite3.DatabaseError, e:
logger.error('Fatal Error executing %s :: %s' % (query, e))
raise
return sqlResult
def select(self, query, args=None):
sqlResults = self.action(query, args).fetchall()
sqlResults = self.fetch(query, args).fetchall()
if sqlResults == None:
return []
return sqlResults
def selectone(self, query, args=None):
sqlResults = self.fetch(query, args)
if sqlResults == None:
return []
return sqlResults
def upsert(self, tableName, valueDict, keyDict):
changesBefore = self.connection.total_changes
genParams = lambda myDict : [x + " = ?" for x in myDict.keys()]
query = "UPDATE "+tableName+" SET " + ", ".join(genParams(valueDict)) + " WHERE " + " AND ".join(genParams(keyDict))
query = "UPDATE " + tableName + " SET " + ", ".join(genParams(valueDict)) + " WHERE " + " AND ".join(genParams(keyDict))
self.action(query, valueDict.values() + keyDict.values())
if self.connection.total_changes == changesBefore:
query = "INSERT INTO "+tableName+" (" + ", ".join(valueDict.keys() + keyDict.keys()) + ")" + \
" VALUES (" + ", ".join(["?"] * len(valueDict.keys() + keyDict.keys())) + ")"
self.action(query, valueDict.values() + keyDict.values())
# else:
# logger.info('[' + str(thisthread) + '] db is currently locked for writing. Queuing this action until it is free')
# logger.info('Table: ' + str(tableName) + ' Values: ' + str(valueDict) + ' Keys: ' + str(keyDict))
# self.queue.put( (tableName, valueDict, keyDict) )
# #assuming this is coming in from a separate thread, so loop it until it's free to write.
# #self.queuesend()
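#--- illustrative example (not part of this commit) ----------------------------
# upsert() tries an UPDATE first and, if total_changes did not move, falls
# back to an INSERT. Example with hypothetical values:
#   myDB = DBConnection()
#   myDB.upsert("issues",
#               {"Status": "Downloaded"},   # columns to set
#               {"IssueID": "290211"})      # key columns for the WHERE clause
# runs:  UPDATE issues SET Status = ? WHERE IssueID = ?
# and, only if no row changed:
#        INSERT INTO issues (Status, IssueID) VALUES (?, ?)
#-------------------------------------------------------------------------------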
mylar/dbupdater.py (new file, 31 lines)
@ -0,0 +1,31 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
from __future__ import with_statement
import mylar
from mylar import logger
#import threading
class dbUpdate():
def __init__(self):
pass
def run(self):
logger.info('[DBUpdate] Updating Database.')
mylar.updater.dbUpdate()
return
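# Usage (this mirrors the scheduler wiring in __init__.py above):
#   dbUpdateScheduler = scheduler.Scheduler(action=dbupdater.dbUpdate(),
#                                           cycleTime=datetime.timedelta(hours=48),
#                                           runImmediately=False,
#                                           threadName="DBUPDATE")
#   dbUpdateScheduler.thread.start()   # ends up calling dbUpdate().run()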
@ -93,6 +93,7 @@ def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=Non
#print item
#subname = os.path.join(basedir, item)
subname = item
subname = re.sub('\_', ' ', subname)
#Remove html code for ( )
subname = re.sub(r'%28', '(', subname)
@ -170,66 +171,71 @@ def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=Non
subname = watchcomic + subname
subnm = re.findall('[^()]+', subname)
else:
subit = re.sub('(.*)[\s+|_+](19\d{2}|20\d{2})(.*)', '\\1 (\\2) \\3', subname)
subit = re.sub('(.*)[\s+|_+](19\d{2}|20\d{2})(.*)', '\\1 \\3 (\\2)', subname).replace('( )', '')
subthis2 = re.sub('.cbr', '', subit)
subthis1 = re.sub('.cbz', '', subthis2)
subname = re.sub('[\:\;\!\'\/\?\+\=\_\%]', '', subthis1)
#if '.' appears more than once at this point, then it's being used in place of spaces.
#if '.' only appears once at this point, it's a decimal issue (since decimalinseries is False within this else stmt).
if subname.count('.') == 1:
logger.fdebug('decimal issue detected, not removing decimals')
logger.fdebug('[FILECHECKER] decimal issue detected, not removing decimals')
else:
logger.fdebug('more than one decimal detected, and the series does not have decimals - assuming in place of spaces.')
logger.fdebug('[FILECHECKER] more than one decimal detected, and the series does not have decimals - assuming in place of spaces.')
subname = re.sub('[\.]', '', subname)
subnm = re.findall('[^()]+', subname)
if Publisher.lower() in subname.lower():
#if the Publisher is given within the title or filename even (for some reason, some people
#have this to distinguish different titles), let's remove it entirely.
lenm = len(subnm)
subsplit = subname.replace('_', ' ').split()
cnt = 0
pub_removed = None
if sarc is None:
if Publisher.lower() in re.sub('_', ' ', subname.lower()):
#if the Publisher is given within the title or filename even (for some reason, some people
#have this to distinguish different titles), let's remove it entirely.
lenm = len(subnm)
while (cnt < lenm):
if subnm[cnt] is None: break
if subnm[cnt] == ' ':
pass
else:
logger.fdebug(str(cnt) + ". Bracket Word: " + str(subnm[cnt]))
cnt = 0
pub_removed = None
if Publisher.lower() in subnm[cnt].lower() and cnt >= 1:
logger.fdebug('Publisher detected within title : ' + str(subnm[cnt]))
logger.fdebug('cnt is : ' + str(cnt) + ' --- Publisher is: ' + Publisher)
pub_removed = subnm[cnt]
#-strip publisher if exists here-
logger.fdebug('removing publisher from title')
subname_pubremoved = re.sub(pub_removed, '', subname)
logger.fdebug('pubremoved : ' + str(subname_pubremoved))
subname_pubremoved = re.sub('\(\)', '', subname_pubremoved) #remove empty brackets
subname_pubremoved = re.sub('\s+', ' ', subname_pubremoved) #remove spaces > 1
logger.fdebug('blank brackets removed: ' + str(subname_pubremoved))
subnm = re.findall('[^()]+', subname_pubremoved)
break
cnt+=1
while (cnt < lenm):
submod = re.sub('_', ' ', subnm[cnt])
if submod is None: break
if submod == ' ':
pass
else:
logger.fdebug('[FILECHECKER] ' + str(cnt) + ". Bracket Word: " + str(submod))
if Publisher.lower() in submod.lower() and cnt >= 1:
logger.fdebug('[FILECHECKER] Publisher detected within title : ' + str(submod))
logger.fdebug('[FILECHECKER] cnt is : ' + str(cnt) + ' --- Publisher is: ' + Publisher)
#-strip publisher if exists here-
pub_removed = submod
logger.fdebug('[FILECHECKER] removing publisher from title')
subname_pubremoved = re.sub(pub_removed, '', subname)
logger.fdebug('[FILECHECKER] pubremoved : ' + str(subname_pubremoved))
subname_pubremoved = re.sub('\(\)', '', subname_pubremoved) #remove empty brackets
subname_pubremoved = re.sub('\s+', ' ', subname_pubremoved) #remove spaces > 1
logger.fdebug('[FILECHECKER] blank brackets removed: ' + str(subname_pubremoved))
subnm = re.findall('[^()]+', subname_pubremoved)
break
cnt+=1
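#--- illustrative walk-through (not part of this commit) -----------------------
# What the publisher-stripping loop above does, on a hypothetical filename:
#   subname = 'Invincible (Image) (2003)'
#   subnm   = ['Invincible ', 'Image', ' ', '2003']
#   Publisher 'Image' matches subnm[1] (cnt >= 1), so it is removed:
#   'Invincible () (2003)' -> empty brackets stripped -> 'Invincible (2003)'
#   subnm is then regenerated as ['Invincible ', '2003']
#-------------------------------------------------------------------------------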
#If the Year comes before the Issue # the subname is passed with no Issue number.
#This logic checks for numbers before the extension in the format of 1 01 001
#and adds to the subname. (Cases where comic name is $Series_$Year_$Issue)
if len(subnm) > 1:
if (re.search('[0-9]{0,1}[0-9]{0,1}[0-9]{1,1}\.cb.',subnm[2]) is not None):
subname = str(subnm[0])+str(subnm[2])
else:
subname = subnm[0]
else:
subname = subnm[0]
if (re.search('(19\d{2}|20\d{2})',subnm[1]) is not None):
logger.fdebug('subnm0: ' + str(subnm[0]))
logger.fdebug('subnm1: ' + str(subnm[1]))
# logger.fdebug('subnm2: ' + str(subnm[2]))
subname = str(subnm[0]).lstrip() + ' (' + str(subnm[1]).strip() + ') '
subnm = re.findall('[^()]+', subname) # we need to regenerate this here.
subname = subnm[0]
if len(subnm):
# if it still has no year (brackets), check setting and either assume no year needed.
subname = subname
logger.fdebug('[FILECHECKER] subname no brackets: ' + str(subname))
subname = re.sub('\_', ' ', subname)
nonocount = 0
charpos = 0
detneg = "no"
@ -748,6 +754,8 @@ def listFiles(dir,watchcomic,Publisher,AlternateSearch=None,manual=None,sarc=Non
'JusttheDigits': justthedigits
})
#print('appended.')
# watchmatch['comiclist'] = comiclist
# break
else:
if moddir is not None:
item = os.path.join(moddir, item)
@ -75,8 +75,8 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
"link": urlParse["href"],
"length": urlParse["length"],
"pubdate": feed.entries[countUp].updated})
countUp=countUp+1
logger.fdebug('keypair: ' + str(keyPair))
# thanks to SpammyHagar for spending the time in compiling these regEx's!
@ -98,12 +98,12 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
for entry in keyPair:
title = entry['title']
#logger.fdebug("titlesplit: " + str(title.split("\"")))
logger.fdebug("titlesplit: " + str(title.split("\"")))
splitTitle = title.split("\"")
noYear = 'False'
for subs in splitTitle:
#logger.fdebug('sub:' + subs)
logger.fdebug('sub:' + subs)
regExCount = 0
if len(subs) >= len(cName) and not any(d in subs.lower() for d in except_list):
#Looping through dictionary to run each regEx - length + regex is determined by regexList up top.
@ -116,6 +116,7 @@ def Startit(searchName, searchIssue, searchYear, ComicVersion, IssDateFix):
# 'title': subs,
# 'link': str(link)
# })
logger.fdebug('match.')
if IssDateFix != "no":
if IssDateFix == "01" or IssDateFix == "02": ComicYearFix = str(int(searchYear) - 1)
else: ComicYearFix = str(int(searchYear) + 1)
@ -267,13 +267,13 @@ def rename_param(comicid, comicname, issue, ofilename, comicyear=None, issueid=N
if issueid is None:
logger.fdebug('annualize is ' + str(annualize))
if annualize is None:
chkissue = myDB.action("SELECT * from issues WHERE ComicID=? AND Issue_Number=?", [comicid, issue]).fetchone()
chkissue = myDB.selectone("SELECT * from issues WHERE ComicID=? AND Issue_Number=?", [comicid, issue]).fetchone()
else:
chkissue = myDB.action("SELECT * from annuals WHERE ComicID=? AND Issue_Number=?", [comicid, issue]).fetchone()
chkissue = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND Issue_Number=?", [comicid, issue]).fetchone()
if chkissue is None:
#rechk chkissue against int value of issue #
chkissue = myDB.action("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [comicid, issuedigits(issue)]).fetchone()
chkissue = myDB.selectone("SELECT * from issues WHERE ComicID=? AND Int_IssueNumber=?", [comicid, issuedigits(issue)]).fetchone()
if chkissue is None:
if chkissue is None:
logger.error('Invalid Issue_Number - please validate.')
@ -286,10 +286,10 @@ def rename_param(comicid, comicname, issue, ofilename, comicyear=None, issueid=N
#use issueid to get publisher, series, year, issue number
logger.fdebug('issueid is now : ' + str(issueid))
issuenzb = myDB.action("SELECT * from issues WHERE ComicID=? AND IssueID=?", [comicid,issueid]).fetchone()
issuenzb = myDB.selectone("SELECT * from issues WHERE ComicID=? AND IssueID=?", [comicid,issueid]).fetchone()
if issuenzb is None:
logger.fdebug('not an issue, checking against annuals')
issuenzb = myDB.action("SELECT * from annuals WHERE ComicID=? AND IssueID=?", [comicid,issueid]).fetchone()
issuenzb = myDB.selectone("SELECT * from annuals WHERE ComicID=? AND IssueID=?", [comicid,issueid]).fetchone()
if issuenzb is None:
logger.fdebug('Unable to rename - cannot locate issue id within db')
return
@ -388,7 +388,7 @@ def rename_param(comicid, comicname, issue, ofilename, comicyear=None, issueid=N
month = issuenzb['IssueDate'][5:7].replace('-','').strip()
month_name = fullmonth(month)
logger.fdebug('Issue Year : ' + str(issueyear))
comicnzb= myDB.action("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
comicnzb= myDB.selectone("SELECT * from comics WHERE comicid=?", [comicid]).fetchone()
publisher = comicnzb['ComicPublisher']
logger.fdebug('Publisher: ' + str(publisher))
series = comicnzb['ComicName']
@ -493,7 +493,7 @@ def ComicSort(comicorder=None,sequence=None,imported=None):
i = 0
import db, logger
myDB = db.DBConnection()
comicsort = myDB.action("SELECT * FROM comics ORDER BY ComicSortName COLLATE NOCASE")
comicsort = myDB.select("SELECT * FROM comics ORDER BY ComicSortName COLLATE NOCASE")
comicorderlist = []
comicorder = {}
comicidlist = []
@ -782,7 +782,7 @@ def checkthepub(ComicID):
import db, logger
myDB = db.DBConnection()
publishers = ['marvel', 'dc', 'darkhorse']
pubchk = myDB.action("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
pubchk = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
if pubchk is None:
logger.fdebug('No publisher information found to aid in determining series..defaulting to base check of 55 days.')
return mylar.BIGGIE_PUB
@ -798,7 +798,7 @@ def checkthepub(ComicID):
def annual_update():
import db, logger
myDB = db.DBConnection()
annuallist = myDB.action('SELECT * FROM annuals')
annuallist = myDB.select('SELECT * FROM annuals')
if annuallist is None:
logger.info('no annuals to update.')
return
@ -806,7 +806,7 @@ def annual_update():
cnames = []
#populate the ComicName field with the corresponding series name from the comics table.
for ann in annuallist:
coms = myDB.action('SELECT * FROM comics WHERE ComicID=?', [ann['ComicID']]).fetchone()
coms = myDB.selectone('SELECT * FROM comics WHERE ComicID=?', [ann['ComicID']]).fetchone()
cnames.append({'ComicID': ann['ComicID'],
'ComicName': coms['ComicName']
})
@ -856,7 +856,7 @@ def latestdate_fix():
import db, logger
datefix = []
myDB = db.DBConnection()
comiclist = myDB.action('SELECT * FROM comics')
comiclist = myDB.select('SELECT * FROM comics')
if comiclist is None:
logger.fdebug('No Series in watchlist to correct latest date')
return
@ -54,18 +54,20 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
controlValueDict = {"ComicID": comicid}
dbcomic = myDB.action('SELECT * FROM comics WHERE ComicID=?', [comicid]).fetchone()
dbcomic = myDB.selectone('SELECT * FROM comics WHERE ComicID=?', [comicid]).fetchone()
if dbcomic is None:
newValueDict = {"ComicName": "Comic ID: %s" % (comicid),
"Status": "Loading"}
comlocation = None
oldcomversion = None
else:
if chkwant is not None:
logger.fdebug('ComicID: ' + str(comicid) + ' already exists. Not adding from the future pull list at this time.')
return 'Exists'
newValueDict = {"Status": "Loading"}
comlocation = dbcomic['ComicLocation']
filechecker.validateAndCreateDirectory(comlocation, True)
oldcomversion = dbcomic['ComicVersion'] #store the comicversion and chk if it exists before hammering.
myDB.upsert("comics", newValueDict, controlValueDict)
#run the re-sortorder here in order to properly display the page
@ -74,7 +76,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
# we need to lookup the info for the requested ComicID in full now
comic = cv.getComic(comicid,'comic')
#comic = myDB.action('SELECT * FROM comics WHERE ComicID=?', [comicid]).fetchone()
if not comic:
logger.warn('Error fetching comic data for ID: ' + comicid)
if dbcomic is None:
@ -110,7 +112,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
#print ("gcdinfo:" + str(gcdinfo))
elif mismatch == "yes":
CV_EXcomicid = myDB.action("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
if CV_EXcomicid['variloop'] is None: pass
else:
vari_loop = CV_EXcomicid['variloop']
@ -469,395 +471,10 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
#if set to 'no' then we haven't pulled down the issues, otherwise we did it already
issued = cv.getComic(comicid,'issue')
logger.info('Successfully retrieved issue details for ' + comic['ComicName'] )
n = 0
iscnt = int(comicIssues)
issid = []
issnum = []
issname = []
issdate = []
issuedata = []
int_issnum = []
#let's start issue #'s at 0 -- thanks to DC for the new 52 reboot! :)
latestiss = "0"
latestdate = "0000-00-00"
firstiss = "10000000"
firstdate = "2099-00-00"
#print ("total issues:" + str(iscnt))
#---removed NEW code here---
logger.info('Now adding/updating issues for ' + comic['ComicName'])
if not mylar.CV_ONLY:
#fccnt = int(fc['comiccount'])
#logger.info(u"Found " + str(fccnt) + "/" + str(iscnt) + " issues of " + comic['ComicName'] + "...verifying")
#fcnew = []
if iscnt > 0: #if a series is brand new, it wont have any issues/details yet so skip this part
while (n <= iscnt):
#---NEW.code
try:
firstval = issued['issuechoice'][n]
except IndexError:
break
cleanname = helpers.cleanName(firstval['Issue_Name'])
issid = str(firstval['Issue_ID'])
issnum = str(firstval['Issue_Number'])
#print ("issnum: " + str(issnum))
issname = cleanname
if '.' in str(issnum):
issn_st = str(issnum).find('.')
issn_b4dec = str(issnum)[:issn_st]
#if the length of decimal is only 1 digit, assume it's a tenth
dec_is = str(issnum)[issn_st + 1:]
if len(dec_is) == 1:
dec_nisval = int(dec_is) * 10
iss_naftdec = str(dec_nisval)
if len(dec_is) == 2:
dec_nisval = int(dec_is)
iss_naftdec = str(dec_nisval)
iss_issue = issn_b4dec + "." + iss_naftdec
issis = (int(issn_b4dec) * 1000) + dec_nisval
elif 'au' in issnum.lower():
print ("au detected")
stau = issnum.lower().find('au')
issnum_au = issnum[:stau]
print ("issnum_au: " + str(issnum_au))
#account for Age of Ultron mucked up numbering
issis = str(int(issnum_au) * 1000) + 'AU'
else: issis = int(issnum) * 1000
bb = 0
while (bb <= iscnt):
try:
gcdval = gcdinfo['gcdchoice'][bb]
#print ("gcdval: " + str(gcdval))
except IndexError:
#account for gcd variation here
if gcdinfo['gcdvariation'] == 'gcd':
#logger.fdebug("gcd-variation accounted for.")
issdate = '0000-00-00'
int_issnum = int ( issis / 1000 )
break
if 'nn' in str(gcdval['GCDIssue']):
#no number detected - GN, TP or the like
logger.warn('Non Series detected (Graphic Novel, etc) - cannot proceed at this time.')
updater.no_searchresults(comicid)
return
elif 'au' in gcdval['GCDIssue'].lower():
#account for Age of Ultron mucked up numbering - this is in format of 5AU.00
gstau = gcdval['GCDIssue'].lower().find('au')
gcdis_au = gcdval['GCDIssue'][:gstau]
gcdis = str(int(gcdis_au) * 1000) + 'AU'
elif '.' in str(gcdval['GCDIssue']):
#logger.fdebug("g-issue:" + str(gcdval['GCDIssue']))
issst = str(gcdval['GCDIssue']).find('.')
#logger.fdebug("issst:" + str(issst))
issb4dec = str(gcdval['GCDIssue'])[:issst]
#logger.fdebug("issb4dec:" + str(issb4dec))
#if the length of decimal is only 1 digit, assume it's a tenth
decis = str(gcdval['GCDIssue'])[issst+1:]
#logger.fdebug("decis:" + str(decis))
if len(decis) == 1:
decisval = int(decis) * 10
issaftdec = str(decisval)
if len(decis) == 2:
decisval = int(decis)
issaftdec = str(decisval)
gcd_issue = issb4dec + "." + issaftdec
#logger.fdebug("gcd_issue:" + str(gcd_issue))
try:
gcdis = (int(issb4dec) * 1000) + decisval
except ValueError:
logger.error('This has no issue # for me to get - Either a Graphic Novel or one-shot. This feature to allow these will be added in the near future.')
updater.no_searchresults(comicid)
return
else:
gcdis = int(str(gcdval['GCDIssue'])) * 1000
if gcdis == issis:
issdate = str(gcdval['GCDDate'])
if str(issis).isdigit():
int_issnum = int( gcdis / 1000 )
else:
if 'au' in issis.lower():
int_issnum = str(int(gcdis[:-2]) / 1000) + 'AU'
else:
logger.error('this has an alpha-numeric in the issue # which I cannot account for. Get on github and log the issue for evilhero.')
return
#get the latest issue / date using the date.
if gcdval['GCDDate'] > latestdate:
latestiss = str(issnum)
latestdate = str(gcdval['GCDDate'])
break
#bb = iscnt
bb+=1
#print("(" + str(n) + ") IssueID: " + str(issid) + " IssueNo: " + str(issnum) + " Date" + str(issdate))
#---END.NEW.
# check if the issue already exists
iss_exists = myDB.action('SELECT * from issues WHERE IssueID=?', [issid]).fetchone()
# Only change the status & add DateAdded if the issue is already in the database
if iss_exists is None:
newValueDict['DateAdded'] = helpers.today()
controlValueDict = {"IssueID": issid}
newValueDict = {"ComicID": comicid,
"ComicName": comic['ComicName'],
"IssueName": issname,
"Issue_Number": issnum,
"IssueDate": issdate,
"Int_IssueNumber": int_issnum
}
if mylar.AUTOWANT_ALL:
newValueDict['Status'] = "Wanted"
elif issdate > helpers.today() and mylar.AUTOWANT_UPCOMING:
newValueDict['Status'] = "Wanted"
else:
newValueDict['Status'] = "Skipped"
if iss_exists:
#print ("Existing status : " + str(iss_exists['Status']))
newValueDict['Status'] = iss_exists['Status']
try:
myDB.upsert("issues", newValueDict, controlValueDict)
except sqlite3.InterfaceError, e:
#raise sqlite3.InterfaceError(e)
logger.error('MAJOR error trying to get issue data, this is most likely a MULTI-VOLUME series and you need to use the custom_exceptions.csv file.')
myDB.action("DELETE FROM comics WHERE ComicID=?", [comicid])
return
n+=1
# logger.debug(u"Updating comic cache for " + comic['ComicName'])
# cache.getThumb(ComicID=issue['issueid'])
# logger.debug(u"Updating cache for: " + comic['ComicName'])
# cache.getThumb(ComicIDcomicid)
else:
if iscnt > 0: #if a series is brand new, it wont have any issues/details yet so skip this part
while (n <= iscnt):
#---NEW.code
try:
firstval = issued['issuechoice'][n]
#print firstval
except IndexError:
break
cleanname = helpers.cleanName(firstval['Issue_Name'])
issid = str(firstval['Issue_ID'])
issnum = firstval['Issue_Number']
#print ("issnum: " + str(issnum))
issname = cleanname
issdate = str(firstval['Issue_Date'])
storedate = str(firstval['Store_Date'])
if issnum.isdigit():
int_issnum = int( issnum ) * 1000
else:
if 'a.i.' in issnum.lower(): issnum = re.sub('\.', '', issnum)
if 'au' in issnum.lower():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('u')
# elif 'ai' in issnum.lower():
# int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('i')
elif 'inh' in issnum.lower():
int_issnum = (int(issnum[:-4]) * 1000) + ord('i') + ord('n') + ord('h')
elif 'now' in issnum.lower():
int_issnum = (int(issnum[:-4]) * 1000) + ord('n') + ord('o') + ord('w')
elif u'\xbd' in issnum:
issnum = .5
int_issnum = int(issnum) * 1000
elif u'\xbc' in issnum:
issnum = .25
int_issnum = int(issnum) * 1000
elif u'\xbe' in issnum:
issnum = .75
int_issnum = int(issnum) * 1000
elif u'\u221e' in issnum:
#issnum = utf-8 will encode the infinity symbol without any help
int_issnum = 9999999999 * 1000 # set 9999999999 for integer value of issue
elif '.' in issnum or ',' in issnum:
if ',' in issnum: issnum = re.sub(',','.', issnum)
issst = str(issnum).find('.')
#logger.fdebug("issst:" + str(issst))
if issst == 0:
issb4dec = 0
else:
issb4dec = str(issnum)[:issst]
#logger.fdebug("issb4dec:" + str(issb4dec))
#if the length of decimal is only 1 digit, assume it's a tenth
decis = str(issnum)[issst+1:]
#logger.fdebug("decis:" + str(decis))
if len(decis) == 1:
decisval = int(decis) * 10
issaftdec = str(decisval)
elif len(decis) == 2:
decisval = int(decis)
issaftdec = str(decisval)
else:
decisval = decis
issaftdec = str(decisval)
try:
# int_issnum = str(issnum)
int_issnum = (int(issb4dec) * 1000) + (int(issaftdec) * 10)
except ValueError:
logger.error('This has no issue # for me to get - Either a Graphic Novel or one-shot.')
updater.no_searchresults(comicid)
return
else:
try:
x = float(issnum)
#validity check
if x < 0:
logger.info('I have encountered a negative issue #: ' + str(issnum) + '. Trying to accommodate.')
logger.fdebug('value of x is : ' + str(x))
int_issnum = (int(x)*1000) - 1
else: raise ValueError
except ValueError, e:
x = 0
tstord = None
issno = None
invchk = "false"
while (x < len(issnum)):
if issnum[x].isalpha():
#take first occurrence of alpha in string and carry it through
tstord = issnum[x:].rstrip()
issno = issnum[:x].rstrip()
try:
isschk = float(issno)
except ValueError, e:
logger.fdebug('invalid numeric for issue - cannot be found. Ignoring.')
issno = None
tstord = None
invchk = "true"
break
x+=1
if tstord is not None and issno is not None:
logger.fdebug('tstord: ' + str(tstord))
a = 0
ordtot = 0
while (a < len(tstord)):
ordtot += ord(tstord[a].lower()) #lower-case the letters for simplicity
a+=1
logger.fdebug('issno: ' + str(issno))
int_issnum = (int(issno) * 1000) + ordtot
logger.fdebug('intissnum : ' + str(int_issnum))
elif invchk == "true":
logger.fdebug('this does not have an issue # that I can parse properly.')
return
else:
logger.error(str(issnum) + ' this has an alpha-numeric in the issue # which I cannot account for.')
return
#get the latest issue / date using the date.
#logger.info('latest date: ' + str(latestdate))
#logger.info('first date: ' + str(firstdate))
#logger.info('issue date: ' + str(firstval['Issue_Date']))
if firstval['Issue_Date'] > latestdate:
latestiss = issnum
latestdate = str(firstval['Issue_Date'])
if firstval['Issue_Date'] < firstdate:
firstiss = issnum
firstdate = str(firstval['Issue_Date'])
if issuechk is not None and issuetype == 'series':
logger.fdebug('comparing ' + str(issuechk) + ' .. to .. ' + str(int_issnum))
if issuechk == int_issnum:
weeklyissue_check.append({"Int_IssueNumber": int_issnum,
"Issue_Number": issnum,
"IssueDate": issdate,
"ReleaseDate": storedate})
#--moved to lower function.
# # check if the issue already exists
# iss_exists = myDB.action('SELECT * from issues WHERE IssueID=?', [issid]).fetchone()
# # Only change the status & add DateAdded if the issue is already in the database
# if iss_exists is None:
# newValueDict['DateAdded'] = helpers.today()
# controlValueDict = {"IssueID": issid}
# newValueDict = {"ComicID": comicid,
# "ComicName": comic['ComicName'],
# "IssueName": issname,
# "Issue_Number": issnum,
# "IssueDate": issdate,
# "Int_IssueNumber": int_issnum
# }
issuedata.append({"ComicID": comicid,
"IssueID": issid,
"ComicName": comic['ComicName'],
"IssueName": issname,
"Issue_Number": issnum,
"IssueDate": issdate,
"ReleaseDate": storedate,
"Int_IssueNumber": int_issnum})
#logger.info('issuedata: ' + str(issuedata))
#--moved to lower function
# if iss_exists:
# print ("Existing status : " + str(iss_exists['Status']))
# newValueDict['Status'] = iss_exists['Status']
# else:
# print "issue doesn't exist in db."
# if mylar.AUTOWANT_ALL:
# newValueDict['Status'] = "Wanted"
# elif issdate > helpers.today() and mylar.AUTOWANT_UPCOMING:
# newValueDict['Status'] = "Wanted"
# else:
# newValueDict['Status'] = "Skipped"
# try:
# myDB.upsert("issues", newValueDict, controlValueDict)
# except sqlite3.InterfaceError, e:
# #raise sqlite3.InterfaceError(e)
# logger.error('Something went wrong - I cannot add the issue information into my DB.')
# myDB.action("DELETE FROM comics WHERE ComicID=?", [comicid])
# return
n+=1
if len(issuedata) > 1 and not calledfrom == 'dbupdate':
logger.fdebug('initiating issue updating - info & status')
issue_collection(issuedata,nostatus='False')
else:
logger.fdebug('initiating issue updating - just the info')
issue_collection(issuedata,nostatus='True')
#figure publish dates here...
styear = str(SeriesYear)
#if SeriesYear == '0000':
# styear = firstdate[:4]
if firstdate[5:7] == '00':
stmonth = "?"
else:
stmonth = helpers.fullmonth(firstdate[5:7])
ltyear = re.sub('/s','', latestdate[:4])
if latestdate[5:7] == '00':
ltmonth = "?"
else:
ltmonth = helpers.fullmonth(latestdate[5:7])
#try to determine if it's an 'actively' published comic from above dates
#threshold is if it's within a month (<55 days) let's assume it's recent.
c_date = datetime.date(int(latestdate[:4]),int(latestdate[5:7]),1)
n_date = datetime.date.today()
recentchk = (n_date - c_date).days
#print ("recentchk: " + str(recentchk))
if recentchk <= 55:
lastpubdate = 'Present'
else:
lastpubdate = str(ltmonth) + ' ' + str(ltyear)
publishfigure = str(stmonth) + ' ' + str(styear) + ' - ' + str(lastpubdate)
controlValueStat = {"ComicID": comicid}
newValueStat = {"Status": "Active",
"LatestIssue": latestiss,
"LatestDate": latestdate,
"ComicPublished": publishfigure,
"LastUpdated": helpers.now()
}
myDB.upsert("comics", newValueStat, controlValueStat)
#move to own function so can call independently to only refresh issue data
#issued is from cv.getComic, comic['ComicName'] & comicid would both be already known to do independent call.
issuedata = updateissuedata(comicid, comic['ComicName'], issued, comicIssues, calledfrom, SeriesYear=SeriesYear)
if mylar.CVINFO or (mylar.CV_ONLY and mylar.CVINFO):
if not os.path.exists(os.path.join(comlocation,"cvinfo")) or mylar.CV_ONETIMER:
@ -866,6 +483,25 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
logger.info('Updating complete for: ' + comic['ComicName'])
if calledfrom == 'weekly':
logger.info('Successfully refreshed ' + comic['ComicName'] + ' (' + str(SeriesYear) + '). Returning to Weekly issue comparison.')
logger.info('Update issuedata for ' + str(issuechk) + ' of : ' + str(weeklyissue_check))
return issuedata # this should be the weeklyissue_check data from updateissuedata function
elif calledfrom == 'dbupdate':
logger.info('returning to dbupdate module')
return #issuedata # this should be the issuedata data from updateissuedata function
elif calledfrom == 'weeklycheck':
logger.info('Successfully refreshed ' + comic['ComicName'] + ' (' + str(SeriesYear) + '). Returning to Weekly issue update.')
return #no need to return any data here.
#if it made it here, then the issuedata contains dates, let's pull the data now.
latestiss = issuedata['LatestIssue']
latestdate = issuedata['LatestDate']
lastpubdate = issuedata['LastPubDate']
#move the files...if imported is not empty (meaning it's not from the mass importer.)
if imported is None or imported == 'None':
pass
@ -877,20 +513,11 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
logger.info('Mass import - Moving not Enabled. Setting Archived Status for import.')
moveit.archivefiles(comicid,ogcname)
if calledfrom == 'dbupdate':
logger.info('returning to dbupdate module')
return
elif calledfrom == 'weekly':
logger.info('Successfully refreshed ' + comic['ComicName'] + ' (' + str(SeriesYear) + '). Returning to Weekly issue comparison.')
logger.info('Update issuedata for ' + str(issuechk) + ' of : ' + str(weeklyissue_check))
return weeklyissue_check
#check for existing files...
statbefore = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
statbefore = myDB.selectone("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
logger.fdebug('issue: ' + str(latestiss) + ' status before chk :' + str(statbefore['Status']))
updater.forceRescan(comicid)
statafter = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
statafter = myDB.selectone("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
logger.fdebug('issue: ' + str(latestiss) + ' status after chk :' + str(statafter['Status']))
if pullupd is None:
@ -898,7 +525,7 @@ def addComictoDB(comicid,mismatch=None,pullupd=None,imported=None,ogcname=None,c
# do this for only Present comics....
if mylar.AUTOWANT_UPCOMING and lastpubdate == 'Present': #and 'Present' in gcdinfo['resultPublished']:
logger.fdebug('latestissue: #' + str(latestiss))
chkstats = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
chkstats = myDB.selectone("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [comicid,str(latestiss)]).fetchone()
logger.fdebug('latestissue status: ' + chkstats['Status'])
if chkstats['Status'] == 'Skipped' or chkstats['Status'] == 'Wanted' or chkstats['Status'] == 'Snatched':
logger.info('Checking this week pullist for new issues of ' + comic['ComicName'])
@ -960,7 +587,7 @@ def GCDimport(gcomicid, pullupd=None,imported=None,ogcname=None):
controlValueDict = {"ComicID": gcdcomicid}
comic = myDB.action('SELECT ComicName, ComicYear, Total, ComicPublished, ComicImage, ComicLocation, ComicPublisher FROM comics WHERE ComicID=?', [gcomicid]).fetchone()
comic = myDB.selectone('SELECT ComicName, ComicYear, Total, ComicPublished, ComicImage, ComicLocation, ComicPublisher FROM comics WHERE ComicID=?', [gcomicid]).fetchone()
ComicName = comic[0]
ComicYear = comic[1]
ComicIssues = comic[2]
@ -1186,7 +813,7 @@ def GCDimport(gcomicid, pullupd=None,imported=None,ogcname=None):
#---END.NEW.
# check if the issue already exists
iss_exists = myDB.action('SELECT * from issues WHERE IssueID=?', [issid]).fetchone()
iss_exists = myDB.selectone('SELECT * from issues WHERE IssueID=?', [issid]).fetchone()
# Only change the status & add DateAdded if the issue is not already in the database
@ -1303,7 +930,7 @@ def issue_collection(issuedata,nostatus):
}
# check if the issue already exists
iss_exists = myDB.action('SELECT * from issues WHERE IssueID=?', [issue['IssueID']]).fetchone()
iss_exists = myDB.selectone('SELECT * from issues WHERE IssueID=?', [issue['IssueID']]).fetchone()
if nostatus == 'False':
@ -1334,6 +961,7 @@ def issue_collection(issuedata,nostatus):
myDB.action("DELETE FROM comics WHERE ComicID=?", [issue['ComicID']])
return
def manualAnnual(manual_comicid, comicname, comicyear, comicid):
#called when importing/refreshing an annual that was manually added.
myDB = db.DBConnection()
@ -1384,3 +1012,240 @@ def manualAnnual(manual_comicid, comicname, comicyear, comicid):
myDB.upsert("annuals", newVals, newCtrl)
n+=1
return
def updateissuedata(comicid, comicname=None, issued=None, comicIssues=None, calledfrom=None, issuechk=None, issuetype=None, SeriesYear=None):
weeklyissue_check = []
logger.fdebug('issuedata call references...')
logger.fdebug('comicid:' + str(comicid))
logger.fdebug('comicname:' + str(comicname))
logger.fdebug('comicissues:' + str(comicIssues))
logger.fdebug('calledfrom: ' + str(calledfrom))
logger.fdebug('issuechk: ' + str(issuechk))
logger.fdebug('issuetype: ' + str(issuetype))
#to facilitate independent calls to updateissuedata ONLY, account for data not available and get it.
#chkType comes from the weeklypulllist - either 'annual' or not to distinguish annuals vs. issues
if comicIssues is None or SeriesYear is None or comicname is None:
comic = cv.getComic(comicid,'comic')
if comicIssues is None:
comicIssues = comic['ComicIssues']
if SeriesYear is None:
SeriesYear = comic['ComicYear']
if comicname is None:
comicname = comic['ComicName']
if issued is None:
issued = cv.getComic(comicid,'issue')
n = 0
iscnt = int(comicIssues)
issid = []
issnum = []
issname = []
issdate = []
issuedata = []
int_issnum = []
#let's start issue #'s at 0 -- thanks to DC for the new 52 reboot! :)
latestiss = "0"
latestdate = "0000-00-00"
firstiss = "10000000"
firstdate = "2099-00-00"
#print ("total issues:" + str(iscnt))
logger.info('Now adding/updating issues for ' + comicname)
if iscnt > 0: #if a series is brand new, it wont have any issues/details yet so skip this part
while (n <= iscnt):
try:
firstval = issued['issuechoice'][n]
#print firstval
except IndexError:
break
cleanname = helpers.cleanName(firstval['Issue_Name'])
issid = str(firstval['Issue_ID'])
issnum = firstval['Issue_Number']
#print ("issnum: " + str(issnum))
issname = cleanname
issdate = str(firstval['Issue_Date'])
storedate = str(firstval['Store_Date'])
if issnum.isdigit():
int_issnum = int( issnum ) * 1000
else:
if 'a.i.' in issnum.lower(): issnum = re.sub('\.', '', issnum)
if 'au' in issnum.lower():
int_issnum = (int(issnum[:-2]) * 1000) + ord('a') + ord('u')
elif 'inh' in issnum.lower():
int_issnum = (int(issnum[:-4]) * 1000) + ord('i') + ord('n') + ord('h')
elif 'now' in issnum.lower():
int_issnum = (int(issnum[:-4]) * 1000) + ord('n') + ord('o') + ord('w')
elif u'\xbd' in issnum:
issnum = .5
int_issnum = int(issnum * 1000)
elif u'\xbc' in issnum:
issnum = .25
int_issnum = int(issnum * 1000)
elif u'\xbe' in issnum:
issnum = .75
int_issnum = int(issnum * 1000)
elif u'\u221e' in issnum:
#issnum = utf-8 will encode the infinity symbol without any help
int_issnum = 9999999999 * 1000 # set 9999999999 for integer value of issue
elif '.' in issnum or ',' in issnum:
if ',' in issnum: issnum = re.sub(',','.', issnum)
issst = str(issnum).find('.')
#logger.fdebug("issst:" + str(issst))
if issst == 0:
issb4dec = 0
else:
issb4dec = str(issnum)[:issst]
#logger.fdebug("issb4dec:" + str(issb4dec))
#if the length of decimal is only 1 digit, assume it's a tenth
decis = str(issnum)[issst+1:]
#logger.fdebug("decis:" + str(decis))
if len(decis) == 1:
decisval = int(decis) * 10
issaftdec = str(decisval)
elif len(decis) == 2:
decisval = int(decis)
issaftdec = str(decisval)
else:
decisval = decis
issaftdec = str(decisval)
try:
# int_issnum = str(issnum)
int_issnum = (int(issb4dec) * 1000) + (int(issaftdec) * 10)
except ValueError:
logger.error('This has no issue # for me to get - Either a Graphic Novel or one-shot.')
updater.no_searchresults(comicid)
return
else:
try:
x = float(issnum)
#validity check
if x < 0:
logger.info('I have encountered a negative issue #: ' + str(issnum) + '. Trying to accommodate.')
logger.fdebug('value of x is : ' + str(x))
int_issnum = (int(x)*1000) - 1
else: raise ValueError
except ValueError, e:
x = 0
tstord = None
issno = None
invchk = "false"
while (x < len(issnum)):
if issnum[x].isalpha():
#take first occurrence of alpha in string and carry it through
tstord = issnum[x:].rstrip()
issno = issnum[:x].rstrip()
try:
isschk = float(issno)
except ValueError, e:
logger.fdebug('invalid numeric for issue - cannot be found. Ignoring.')
issno = None
tstord = None
invchk = "true"
break
x+=1
if tstord is not None and issno is not None:
logger.fdebug('tstord: ' + str(tstord))
a = 0
ordtot = 0
while (a < len(tstord)):
ordtot += ord(tstord[a].lower()) #lower-case the letters for simplicity
a+=1
logger.fdebug('issno: ' + str(issno))
int_issnum = (int(issno) * 1000) + ordtot
logger.fdebug('intissnum : ' + str(int_issnum))
elif invchk == "true":
logger.fdebug('this does not have an issue # that I can parse properly.')
return
else:
logger.error(str(issnum) + ' this has an alpha-numeric in the issue # which I cannot account for.')
return
#get the latest issue / date using the date.
logger.fdebug('issue : ' + str(issnum))
logger.fdebug('latest date: ' + str(latestdate))
logger.fdebug('first date: ' + str(firstdate))
logger.fdebug('issue date: ' + str(firstval['Issue_Date']))
if firstval['Issue_Date'] > latestdate:
if issnum > latestiss:
latestiss = issnum
latestdate = str(firstval['Issue_Date'])
if firstval['Issue_Date'] < firstdate:
firstiss = issnum
firstdate = str(firstval['Issue_Date'])
if issuechk is not None and issuetype == 'series':
logger.fdebug('comparing ' + str(issuechk) + ' .. to .. ' + str(int_issnum))
if issuechk == int_issnum:
weeklyissue_check.append({"Int_IssueNumber": int_issnum,
"Issue_Number": issnum,
"IssueDate": issdate,
"ReleaseDate": storedate})
issuedata.append({"ComicID": comicid,
"IssueID": issid,
"ComicName": comicname,
"IssueName": issname,
"Issue_Number": issnum,
"IssueDate": issdate,
"ReleaseDate": storedate,
"Int_IssueNumber": int_issnum})
n+=1
if len(issuedata) > 1 and not calledfrom == 'dbupdate':
logger.fdebug('initiating issue updating - info & status')
issue_collection(issuedata,nostatus='False')
else:
logger.fdebug('initiating issue updating - just the info')
issue_collection(issuedata,nostatus='True')
styear = str(SeriesYear)
if firstdate[5:7] == '00':
stmonth = "?"
else:
stmonth = helpers.fullmonth(firstdate[5:7])
ltyear = re.sub('\s','', latestdate[:4])
if latestdate[5:7] == '00':
ltmonth = "?"
else:
ltmonth = helpers.fullmonth(latestdate[5:7])
#try to determine if it's an 'actively' published comic from above dates
#threshold is if it's within a month (<55 days) let's assume it's recent.
c_date = datetime.date(int(latestdate[:4]),int(latestdate[5:7]),1)
n_date = datetime.date.today()
recentchk = (n_date - c_date).days
#print ("recentchk: " + str(recentchk))
if recentchk <= 55:
lastpubdate = 'Present'
else:
lastpubdate = str(ltmonth) + ' ' + str(ltyear)
publishfigure = str(stmonth) + ' ' + str(styear) + ' - ' + str(lastpubdate)
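# worked example (hypothetical dates): latestdate = '2014-04-01' gives
# c_date = datetime.date(2014, 4, 1); if today is 2014-05-25 then
# recentchk = 54, which is <= 55, so lastpubdate = 'Present'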
controlValueStat = {"ComicID": comicid}
newValueStat = {"Status": "Active",
"ComicPublished": publishfigure,
"LatestIssue": latestiss,
"LatestDate": latestdate,
"LastUpdated": helpers.now()
}
myDB = db.DBConnection()
myDB.upsert("comics", newValueStat, controlValueStat)
importantdates = {}
importantdates['LatestIssue'] = latestiss
importantdates['LatestDate'] = latestdate
importantdates['LastPubDate'] = lastpubdate
if calledfrom == 'weekly':
return weeklyissue_check
elif calledfrom == 'dbupdate':
return issuedata
return importantdates
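#--- worked examples (not part of this commit) ----------------------------------
# Int_IssueNumber packs the issue number into a sortable integer at roughly
# issue * 1000; the values below were derived by hand from the branches above:
#   '5'      -> 5 * 1000                                  = 5000
#   '5.1'    -> (5 * 1000) + (10 * 10)                    = 5100
#   '5AU'    -> (5 * 1000) + ord('a') + ord('u')          = 5214
#   '16.INH' -> (16 * 1000) + ord('i')+ord('n')+ord('h')  = 16319
#   '5A'     -> (5 * 1000) + ord('a')                     = 5097
#   '-1'     -> (int(-1.0) * 1000) - 1                    = -1001
#---------------------------------------------------------------------------------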
@ -78,7 +78,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
#let's load in the watchlist to see if we have any matches.
logger.info("loading in the watchlist to see if a series is being watched already...")
watchlist = myDB.action("SELECT * from comics")
watchlist = myDB.select("SELECT * from comics")
ComicName = []
DisplayName = []
ComicYear = []
@ -486,7 +486,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None)
watch_issue = watch_the_list['ComicIssue']
print ("ComicID: " + str(watch_comicid))
print ("Issue#: " + str(watch_issue))
issuechk = myDB.action("SELECT * from issues where ComicID=? AND INT_IssueNumber=?", [watch_comicid, watch_issue]).fetchone()
issuechk = myDB.selectone("SELECT * from issues where ComicID=? AND INT_IssueNumber=?", [watch_comicid, watch_issue]).fetchone()
if issuechk is None:
print ("no matching issues for this comic#")
else:
@ -14,101 +14,146 @@
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
import os
import threading
import sys
import logging
import unicodedata # for non-english locales
import traceback
import threading
import mylar
from logging import handlers
import mylar
from mylar import helpers
MAX_SIZE = 1000000 # 1mb
# These settings are for file logging only
FILENAME = 'mylar.log'
MAX_SIZE = 1000000 # 1 MB
MAX_FILES = 5
ERROR = logging.ERROR
WARNING = logging.WARNING
MESSAGE = logging.INFO
DEBUG = logging.DEBUG
FDEBUG = logging.DEBUG
# Mylar logger
logger = logging.getLogger('mylar')
# Simple rotating log handler that uses RotatingFileHandler
class RotatingLogger(object):
class LogListHandler(logging.Handler):
"""
Log handler for Web UI.
"""
def __init__(self, filename, max_size, max_files):
self.filename = filename
self.max_size = max_size
self.max_files = max_files
def initLogger(self, verbose=1):
l = logging.getLogger('mylar')
l.setLevel(logging.DEBUG)
self.filename = os.path.join(mylar.LOG_DIR, self.filename)
filehandler = handlers.RotatingFileHandler(self.filename, maxBytes=self.max_size, backupCount=self.max_files)
filehandler.setLevel(logging.DEBUG)
fileformatter = logging.Formatter('%(asctime)s - %(levelname)-7s :: %(message)s', '%d-%b-%Y %H:%M:%S')
filehandler.setFormatter(fileformatter)
l.addHandler(filehandler)
if verbose:
consolehandler = logging.StreamHandler()
if verbose == 1:
consolehandler.setLevel(logging.INFO)
if verbose == 2:
consolehandler.setLevel(logging.DEBUG)
consoleformatter = logging.Formatter('%(asctime)s - %(levelname)s :: %(message)s', '%d-%b-%Y %H:%M:%S')
consolehandler.setFormatter(consoleformatter)
l.addHandler(consolehandler)
def log(self, message, level):
def emit(self, record):
message = self.format(record)
message = message.replace("\n", "<br />")
mylar.LOG_LIST.insert(0, (helpers.now(), message, record.levelname, record.threadName))
logger = logging.getLogger('mylar')
threadname = threading.currentThread().getName()
if level != 'DEBUG':
if mylar.OS_DETECT == "Windows" and mylar.OS_ENCODING is not "utf-8":
tmpthedate = unicodedata.normalize('NFKD', helpers.now().decode(mylar.OS_ENCODING, "replace"))
else:
tmpthedate = helpers.now()
mylar.LOG_LIST.insert(0, (tmpthedate, message, level, threadname))
message = threadname + ' : ' + message
def initLogger(verbose=1):
# if mylar.MAX_LOGSIZE:
# MAX_SIZE = mylar.MAX_LOGSIZE
# else:
# MAX_SIZE = 1000000 # 1 MB
if level == 'DEBUG':
logger.debug(message)
elif level == 'INFO':
logger.info(message)
elif level == 'WARNING':
logger.warn(message)
elif level == 'FDEBUG':
logger.debug(message)
else:
logger.error(message)
"""
Setup logging for Mylar. It uses the logger instance with the name
'mylar'. Three log handlers are added:
mylar_log = RotatingLogger('mylar.log', MAX_SIZE, MAX_FILES)
* RotatingFileHandler: for the file Mylar.log
* LogListHandler: for Web UI
* StreamHandler: for console (if verbose > 0)
"""
def debug(message):
mylar_log.log(message, level='DEBUG')
# Configure the logger to accept all messages
logger.propagate = False
logger.setLevel(logging.DEBUG)# if verbose == 2 else logging.INFO)
def info(message):
mylar_log.log(message, level='INFO')
def warn(message):
mylar_log.log(message, level='WARNING')
def error(message):
mylar_log.log(message, level='ERROR')
# Setup file logger
filename = os.path.join(mylar.LOG_DIR, FILENAME)
def fdebug(message):
#if mylar.LOGVERBOSE == 1:
mylar_log.log(message, level='DEBUG')
file_formatter = logging.Formatter('%(asctime)s - %(levelname)-7s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
file_handler = handlers.RotatingFileHandler(filename, maxBytes=MAX_SIZE, backupCount=MAX_FILES)
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
# Add list logger
loglist_handler = LogListHandler()
#-- this needs to get enabled and logging changed everywhere so that accessing the log GUI won't hang the system.
#-- right now leave it set to INFO only, everything else will still get logged to the mylar.log file.
#if verbose == 2:
# loglist_handler.setLevel(logging.DEBUG)
#else:
# mylar_log.log(message, level='DEBUG')
# loglist_handler.setLevel(logging.INFO)
#--
loglist_handler.setLevel(logging.INFO)
logger.addHandler(loglist_handler)
# Setup console logger
if verbose:
console_formatter = logging.Formatter('%(asctime)s - %(levelname)s :: %(threadName)s : %(message)s', '%d-%b-%Y %H:%M:%S')
console_handler = logging.StreamHandler()
console_handler.setFormatter(console_formatter)
#print 'verbose is ' + str(verbose)
#if verbose == 2:
# console_handler.setLevel(logging.DEBUG)
#else:
# console_handler.setLevel(logging.INFO)
console_handler.setLevel(logging.INFO)
logger.addHandler(console_handler)
# Install exception hooks
initHooks()
def initHooks(global_exceptions=True, thread_exceptions=True, pass_original=True):
"""
This method installs exception catching mechanisms. Any exception caught
will pass through the exception hook, and will be logged to the logger as
an error. Additionally, a traceback is provided.
This is very useful for crashing threads and any other bugs that may not
be exposed when running as a daemon.
The default exception hook is still considered, if pass_original is True.
"""
def excepthook(*exception_info):
# We should always catch this to prevent loops!
try:
message = "".join(traceback.format_exception(*exception_info))
logger.error("Uncaught exception: %s", message)
except:
pass
# Original excepthook
if pass_original:
sys.__excepthook__(*exception_info)
# Global exception hook
if global_exceptions:
sys.excepthook = excepthook
# Thread exception hook
if thread_exceptions:
old_init = threading.Thread.__init__
def new_init(self, *args, **kwargs):
old_init(self, *args, **kwargs)
old_run = self.run
def new_run(*args, **kwargs):
try:
old_run(*args, **kwargs)
except (KeyboardInterrupt, SystemExit):
raise
except:
excepthook(*sys.exc_info())
self.run = new_run
# Monkey patch the run() by monkey patching the __init__ method
threading.Thread.__init__ = new_init
# Expose logger methods
info = logger.info
warn = logger.warn
error = logger.error
debug = logger.debug
warning = logger.warning
message = logger.info
exception = logger.exception
fdebug = logger.debug
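initHooks() above logs otherwise-silent thread crashes by monkey patching threading.Thread.__init__; a minimal standalone sketch of the same technique (the plain logging module stands in for Mylar's logger, nothing here is Mylar-specific):

import logging, sys, threading, traceback

def install_thread_excepthook():
    # Wrap run() on every Thread created from here on, so uncaught
    # exceptions are logged with a traceback instead of vanishing.
    old_init = threading.Thread.__init__
    def new_init(self, *args, **kwargs):
        old_init(self, *args, **kwargs)
        old_run = self.run
        def wrapped_run(*a, **kw):
            try:
                old_run(*a, **kw)
            except (KeyboardInterrupt, SystemExit):
                raise
            except Exception:
                logging.error("Uncaught exception: %s",
                              "".join(traceback.format_exception(*sys.exc_info())))
        self.run = wrapped_run
    threading.Thread.__init__ = new_init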


@ -8,7 +8,7 @@ def movefiles(comicid,comlocation,ogcname,imported=None):
myDB = db.DBConnection()
print ("comlocation is : " + str(comlocation))
print ("original comicname is : " + str(ogcname))
impres = myDB.action("SELECT * from importresults WHERE ComicName=?", [ogcname])
impres = myDB.select("SELECT * from importresults WHERE ComicName=?", [ogcname])
if impres is not None:
#print ("preparing to move " + str(len(impres)) + " files into the right directory now.")
@ -36,7 +36,7 @@ def movefiles(comicid,comlocation,ogcname,imported=None):
print("all files moved.")
#now that it's moved / renamed ... we remove it from importResults or mark as completed.
results = myDB.action("SELECT * from importresults WHERE ComicName=?", [ogcname])
results = myDB.select("SELECT * from importresults WHERE ComicName=?", [ogcname])
if results is not None:
for result in results:
controlValue = {"impID": result['impid']}
@ -46,7 +46,7 @@ def movefiles(comicid,comlocation,ogcname,imported=None):
def archivefiles(comicid,ogcname):
# if move files isn't enabled, let's set all found comics to Archive status :)
result = myDB.action("SELECT * FROM importresults WHERE ComicName=?", [ogcname])
result = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ogcname])
if result is None: pass
else:
ogdir = result['Location']
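The myDB.action(...) to myDB.select(...) switches in this file come from the reworked db module (issue #667); a hedged sketch of what such wrappers typically look like (the locking/queueing changes from the changelog are omitted, and the real module may differ):

import sqlite3

class DBConnection(object):
    def __init__(self, filename='mylar.db'):
        self.connection = sqlite3.connect(filename, timeout=20)
        self.connection.row_factory = sqlite3.Row

    def action(self, query, args=None):
        # single choke point for executing a statement
        cursor = self.connection.cursor()
        cursor.execute(query, args or [])
        self.connection.commit()
        return cursor

    def select(self, query, args=None):
        # read helper: always hands back a list of rows
        return self.action(query, args).fetchall()

    def selectone(self, query, args=None):
        # single-row read helper (callers above still chain .fetchone())
        return self.action(query, args)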


@ -107,17 +107,18 @@ class NMA:
return response
def notify(self, ComicName=None, Year=None, Issue=None, snatched_nzb=None, sent_to=None):
def notify(self, snline=None, prline=None, prline2=None, snatched_nzb=None, sent_to=None, prov=None):
apikey = self.apikey
priority = self.priority
if snatched_nzb:
event = snatched_nzb + " snatched!"
description = "Mylar has snatched: " + snatched_nzb + " and has sent it to " + sent_to
if snatched_nzb[-1] == '.': snatched_nzb = snatched_nzb[:-1]
event = snline
description = "Mylar has snatched: " + snatched_nzb + " from " + prov + " and has sent it to " + sent_to
else:
event = ComicName + ' (' + Year + ') - Issue #' + Issue + ' complete!'
description = "Mylar has downloaded and postprocessed: " + ComicName + ' (' + Year + ') #' + Issue
event = prline
description = prline2
data = { 'apikey': apikey, 'application':'Mylar', 'event': event, 'description': description, 'priority': priority}
@ -133,7 +134,10 @@ class PUSHOVER:
def __init__(self):
self.enabled = mylar.PUSHOVER_ENABLED
self.apikey = mylar.PUSHOVER_APIKEY
if mylar.PUSHOVER_APIKEY is None or mylar.PUSHOVER_APIKEY == 'None':
self.apikey = 'a1KZ1L7d8JKdrtHcUR6eFoW2XGBmwG'
else:
self.apikey = mylar.PUSHOVER_APIKEY
self.userkey = mylar.PUSHOVER_USERKEY
self.priority = mylar.PUSHOVER_PRIORITY
# other API options:
@ -187,98 +191,59 @@ class PUSHOVER:
self.notify('ZOMG Lazors Pewpewpew!', 'Test Message')
API_URL = "https://boxcar.io/devices/providers/WqbewHpV8ZATnawpCsr4/notifications"
class BOXCAR:
def test_notify(self, email, title="Test"):
return self._sendBoxcar("This is a test notification from SickBeard", title, email)
#new BoxCar2 API
def __init__(self):
self.url = 'https://new.boxcar.io/api/notifications'
def _sendBoxcar(self, msg, title):
def _sendBoxcar(self, msg, title, email, subscribe=False):
"""
Sends a boxcar notification to the address provided
msg: The message to send (unicode)
title: The title of the message
email: The email address to send the message to (or to subscribe with)
subscribe: If true then instead of sending a message this function will send a subscription notification
returns: True if the message succeeded, False otherwise
"""
# build up the URL and parameters
msg = msg.strip()
curUrl = API_URL
try:
# if this is a subscription notification then act accordingly
if subscribe:
data = urllib.urlencode({'email': email})
curUrl = curUrl + "/subscribe"
# for normal requests we need all these parameters
else:
data = urllib.urlencode({
'email': email,
'notification[from_screen_name]': title,
'notification[message]': msg.encode('utf-8'),
'notification[from_remote_service_id]': int(time.time())
'user_credentials': mylar.BOXCAR_TOKEN,
'notification[title]': title.encode('utf-8').strip(),
'notification[long_message]': msg.encode('utf-8'),
'notification[sound]': "done"
})
# send the request to boxcar
try:
req = urllib2.Request(curUrl)
req = urllib2.Request(self.url)
handle = urllib2.urlopen(req, data)
handle.close()
return True
except urllib2.URLError, e:
# if we get an error back that doesn't have an error code then who knows what's really happening
if not hasattr(e, 'code'):
logger.error("Boxcar notification failed." + ex(e))
return False
else:
logger.error("Boxcar notification failed. Error code: " + str(e.code))
# HTTP status 404 if the provided email address isn't a Boxcar user.
if e.code == 404:
logger.error("Username is wrong/not a boxcar email. Boxcar will send an email to it")
return False
# For HTTP status code 401's, it is because you are passing in either an invalid token, or the user has not added your service yet
elif e.code == 401:
# If the user has already added your service, we'll return an HTTP status code of 401.
if subscribe:
logger.error("Already subscribed to service")
# i dont know if this is true or false ... its neither but i also dont know how we got here in the first place
return False
#HTTP status 401 if the user doesn't have the service added
else:
subscribeNote = self._sendBoxcar(msg, title, email, True)
if subscribeNote:
logger.info("Subscription send")
return True
else:
logger.info("Subscription could not be send")
return False
logger.error('Boxcar2 notification failed. %s' % e)
# If you receive an HTTP status code of 400, it is because you failed to send the proper parameters
elif e.code == 400:
logger.info("Wrong data sent to boxcar")
logger.info('data:' + data)
return False
else:
logger.error("Boxcar2 notification failed. Error code: " + str(e.code))
return False
logger.fdebug("Boxcar notification successful.")
logger.fdebug("Boxcar2 notification successful.")
return True
def notify(self, ComicName=None, Year=None, Issue=None, sent_to=None, snatched_nzb=None, username=None, force=False):
def notify(self, ComicName=None, Year=None, Issue=None, sent_to=None, snatched_nzb=None, force=False):
"""
Sends a boxcar notification based on the provided info or SB config
title: The title of the notification to send
message: The message string to send
username: The username to send the notification to (optional, defaults to the username in the config)
force: If True then the notification will be sent even if Boxcar is disabled in the config
"""
@ -287,10 +252,6 @@ class BOXCAR:
return False
# if no username was given then use the one from the config
if not username:
username = mylar.BOXCAR_USERNAME
if snatched_nzb:
title = "Mylar. Sucessfully Snatched!"
message = "Mylar has snatched: " + snatched_nzb + " and has sent it to " + sent_to
@ -299,9 +260,63 @@ class BOXCAR:
message = "Mylar has downloaded and postprocessed: " + ComicName + ' (' + Year + ') #' + Issue
logger.info("Sending notification to Boxcar")
logger.info('Sending notification to Boxcar2')
self._sendBoxcar(message, title, username)
self._sendBoxcar(message, title)
return True
class PUSHBULLET:
def __init__(self):
self.apikey = mylar.PUSHBULLET_APIKEY
self.deviceid = mylar.PUSHBULLET_DEVICEID
def notify(self, snline=None, prline=None, prline2=None, snatched=None, sent_to=None, prov=None):
if not mylar.PUSHBULLET_ENABLED:
return
if snatched:
if snatched[-1] == '.': snatched = snatched[:-1]
event = snline
message = "Mylar has snatched: " + snatched + " from " + prov + " and has sent it to " + sent_to
else:
event = prline + ' complete!'
message = prline2
http_handler = HTTPSConnection("api.pushbullet.com")
data = {'device_iden': mylar.PUSHBULLET_DEVICEID,
'type': "note",
'title': event, #"mylar",
'body': message.encode("utf-8") }
http_handler.request("POST",
"/api/pushes",
headers = {'Content-type': "application/x-www-form-urlencoded",
'Authorization' : 'Basic %s' % base64.b64encode(mylar.PUSHBULLET_APIKEY + ":") },
body = urlencode(data))
response = http_handler.getresponse()
request_status = response.status
#logger.debug(u"PushBullet response status: %r" % request_status)
#logger.debug(u"PushBullet response headers: %r" % response.getheaders())
#logger.debug(u"PushBullet response body: %r" % response.read())
if request_status == 200:
logger.fdebug(u"PushBullet notifications sent.")
return True
elif request_status >= 400 and request_status < 500:
logger.error(u"PushBullet request failed: %s" % response.reason)
return False
else:
logger.error(u"PushBullet notification failed serverside.")
return False
def test(self, apikey, deviceid):
self.enabled = True
self.apikey = apikey
self.deviceid = deviceid
self.notify('Main Screen Activate', 'Test Message')
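A standalone sketch of the same call the class above makes (Python 2, matching the module; PushBullet's old form-encoded /api/pushes endpoint with basic auth, where APIKEY and DEVICEID are placeholders):

import base64
from httplib import HTTPSConnection   # http.client on Python 3
from urllib import urlencode          # urllib.parse on Python 3

def push_note(apikey, deviceid, title, body):
    # POST a 'note' push, authenticating with "apikey:" as the basic-auth user.
    conn = HTTPSConnection("api.pushbullet.com")
    conn.request("POST", "/api/pushes",
                 headers={'Content-type': "application/x-www-form-urlencoded",
                          'Authorization': 'Basic %s' % base64.b64encode(apikey + ":")},
                 body=urlencode({'device_iden': deviceid, 'type': "note",
                                 'title': title, 'body': body}))
    return conn.getresponse().status == 200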


@ -12,52 +12,52 @@ from StringIO import StringIO
import mylar
from mylar import db, logger, ftpsshup, helpers
def tehMain(forcerss=None):
logger.info('RSS Feed Check was last run at : ' + str(mylar.RSS_LASTRUN))
firstrun = "no"
#check the last run of rss to make sure it's not hammering.
if mylar.RSS_LASTRUN is None or mylar.RSS_LASTRUN == '' or mylar.RSS_LASTRUN == '0' or forcerss == True:
logger.info('RSS Feed Check First Ever Run.')
firstrun = "yes"
mins = 0
else:
c_obj_date = datetime.datetime.strptime(mylar.RSS_LASTRUN, "%Y-%m-%d %H:%M:%S")
n_date = datetime.datetime.now()
absdiff = abs(n_date - c_obj_date)
mins = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 60.0 #3600 is for hours.
#def tehMain(forcerss=None):
# logger.info('RSS Feed Check was last run at : ' + str(mylar.RSS_LASTRUN))
# firstrun = "no"
# #check the last run of rss to make sure it's not hammering.
# if mylar.RSS_LASTRUN is None or mylar.RSS_LASTRUN == '' or mylar.RSS_LASTRUN == '0' or forcerss == True:
# logger.info('RSS Feed Check First Ever Run.')
# firstrun = "yes"
# mins = 0
# else:
# c_obj_date = datetime.datetime.strptime(mylar.RSS_LASTRUN, "%Y-%m-%d %H:%M:%S")
# n_date = datetime.datetime.now()
# absdiff = abs(n_date - c_obj_date)
# mins = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 60.0 #3600 is for hours.
#
# if firstrun == "no" and mins < int(mylar.RSS_CHECKINTERVAL):
# logger.fdebug('RSS Check has taken place less than the threshold - not initiating at this time.')
# return
#
# mylar.RSS_LASTRUN = helpers.now()
# logger.fdebug('Updating RSS Run time to : ' + str(mylar.RSS_LASTRUN))
# mylar.config_write()
if firstrun == "no" and mins < int(mylar.RSS_CHECKINTERVAL):
logger.fdebug('RSS Check has taken place less than the threshold - not initiating at this time.')
return
mylar.RSS_LASTRUN = helpers.now()
logger.fdebug('Updating RSS Run time to : ' + str(mylar.RSS_LASTRUN))
mylar.config_write()
#function for looping through nzbs/torrent feeds
if mylar.ENABLE_TORRENTS:
logger.fdebug('[RSS] Initiating Torrent RSS Check.')
if mylar.ENABLE_KAT:
logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on KAT.')
torrents(pickfeed='3')
torrents(pickfeed='6')
if mylar.ENABLE_CBT:
logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on CBT.')
torrents(pickfeed='1')
torrents(pickfeed='4')
logger.fdebug('[RSS] Initiating RSS Feed Check for NZB Providers.')
nzbs()
logger.fdebug('[RSS] RSS Feed Check/Update Complete')
logger.fdebug('[RSS] Watchlist Check for new Releases')
#if mylar.ENABLE_TORRENTS:
# if mylar.ENABLE_KAT:
# search.searchforissue(rsscheck='yes')
# if mylar.ENABLE_CBT:
mylar.search.searchforissue(rsscheck='yes')
#nzbcheck here
#nzbs(rsscheck='yes')
logger.fdebug('[RSS] Watchlist Check complete.')
return
# #function for looping through nzbs/torrent feeds
# if mylar.ENABLE_TORRENTS:
# logger.fdebug('[RSS] Initiating Torrent RSS Check.')
# if mylar.ENABLE_KAT:
# logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on KAT.')
# torrents(pickfeed='3')
# torrents(pickfeed='6')
# if mylar.ENABLE_CBT:
# logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on CBT.')
# torrents(pickfeed='1')
# torrents(pickfeed='4')
# logger.fdebug('[RSS] Initiating RSS Feed Check for NZB Providers.')
# nzbs()
# logger.fdebug('[RSS] RSS Feed Check/Update Complete')
# logger.fdebug('[RSS] Watchlist Check for new Releases')
# #if mylar.ENABLE_TORRENTS:
# # if mylar.ENABLE_KAT:
# # search.searchforissue(rsscheck='yes')
# # if mylar.ENABLE_CBT:
# mylar.search.searchforissue(rsscheck='yes')
# #nzbcheck here
# #nzbs(rsscheck='yes')
# logger.fdebug('[RSS] Watchlist Check complete.')
# return
def torrents(pickfeed=None,seriesname=None,issue=None):
if pickfeed is None:
@ -111,9 +111,9 @@ def torrents(pickfeed=None,seriesname=None,issue=None):
feed = "http://comicbt.com/rss.php?action=browse&passkey=" + str(passkey) + "&type=dl"
feedtype = ' from the New Releases RSS Feed for comics'
elif pickfeed == "2" and srchterm is not None: # kat.ph search
feed = kat_url + "usearch/" + str(srchterm) + "%20category%3Acomics%20seeds%3A1/?rss=1"
feed = kat_url + "usearch/" + str(srchterm) + "%20category%3Acomics%20seeds%3A" + str(mylar.MINSEEDS) + "/?rss=1"
elif pickfeed == "3": # kat.ph rss feed
feed = kat_url + "usearch/category%3Acomics%20seeds%3A1/?rss=1"
feed = kat_url + "usearch/category%3Acomics%20seeds%3A" + str(mylar.MINSEEDS) + "/?rss=1"
feedtype = ' from the New Releases RSS Feed for comics'
elif pickfeed == "4": #cbt follow link
feed = "http://comicbt.com/rss.php?action=follow&passkey=" + str(passkey) + "&type=dl"
@ -121,7 +121,7 @@ def torrents(pickfeed=None,seriesname=None,issue=None):
elif pickfeed == "5" and srchterm is not None: # kat.ph search (category:other since some 0-day comics initially get thrown there until categorized)
feed = kat_url + "usearch/" + str(srchterm) + "%20category%3Aother%20seeds%3A1/?rss=1"
elif pickfeed == "6": # kat.ph rss feed (category:other so that we can get them quicker if need-be)
feed = kat_url + "usearch/.cbr%20category%3Aother%20seeds%3A1/?rss=1"
feed = kat_url + "usearch/.cbr%20category%3Aother%20seeds%3A" + str(mylar.MINSEEDS) + "/?rss=1"
feedtype = ' from the New Releases for category Other RSS Feed that contain comics'
elif pickfeed == "7": # cbt series link
# seriespage = "http://comicbt.com/series.php?passkey=" + str(passkey)
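For reference, a sketch of what the seeded-search URL above expands to (the kat_url and MINSEEDS values are illustrative):

# pickfeed '3' with MINSEEDS = 2 would yield:
kat_url = 'http://kat.ph/'
MINSEEDS = 2
feed = kat_url + "usearch/category%3Acomics%20seeds%3A" + str(MINSEEDS) + "/?rss=1"
# feed == 'http://kat.ph/usearch/category%3Acomics%20seeds%3A2/?rss=1'
# i.e. a KAT query for 'category:comics seeds:2', returned as RSS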
@ -146,21 +146,28 @@ def torrents(pickfeed=None,seriesname=None,issue=None):
if pickfeed == "3" or pickfeed == "6":
tmpsz = feedme.entries[i].enclosures[0]
feeddata.append({
'Site': picksite,
'Title': feedme.entries[i].title,
'Link': tmpsz['url'],
'Pubdate': feedme.entries[i].updated,
'Size': tmpsz['length']
'site': picksite,
'title': feedme.entries[i].title,
'link': tmpsz['url'],
'pubdate': feedme.entries[i].updated,
'size': tmpsz['length']
})
#print ("Site: KAT")
#print ("Title: " + str(feedme.entries[i].title))
#print ("Link: " + str(tmpsz['url']))
#print ("pubdate: " + str(feedme.entries[i].updated))
#print ("size: " + str(tmpsz['length']))
elif pickfeed == "2" or pickfeed == "5":
tmpsz = feedme.entries[i].enclosures[0]
torthekat.append({
'site': picksite,
'title': feedme.entries[i].title,
'link': tmpsz['url'],
'pubdate': feedme.entries[i].updated,
'length': tmpsz['length']
'size': tmpsz['length']
})
#print ("Site: KAT")
@ -170,14 +177,49 @@ def torrents(pickfeed=None,seriesname=None,issue=None):
#print ("size: " + str(tmpsz['length']))
elif pickfeed == "1" or pickfeed == "4":
# tmpsz = feedme.entries[i].enclosures[0]
feeddata.append({
'Site': picksite,
'Title': feedme.entries[i].title,
'Link': feedme.entries[i].link,
'Pubdate': feedme.entries[i].updated
# 'Size': tmpsz['length']
})
if pickfeed == "1":
tmpdesc = feedme.entries[i].description
#break it down to get the Size since it's available on THIS CBT feed only.
sizestart = tmpdesc.find('Size:')
sizeend = tmpdesc.find('Leechers:')
sizestart +=5 # to get to the end of the word 'Size:'
tmpsize = tmpdesc[sizestart:sizeend].strip()
fdigits = re.sub("[^0123456789\.]", "", tmpsize).strip()
if '.' in fdigits:
decfind = fdigits.find('.')
wholenum = fdigits[:decfind]
decnum = fdigits[decfind+1:]
else:
wholenum = fdigits
decnum = 0
if 'MB' in tmpsize:
wholebytes = int(wholenum) * 1048576
wholedecimal = ( int(decnum) * 1048576 ) / 100
justdigits = wholebytes + wholedecimal
else:
#it's 'GB' then
wholebytes = ( int(wholenum) * 1024 ) * 1048576
wholedecimal = ( ( int(decnum) * 1024 ) * 1048576 ) / 100
justdigits = wholebytes + wholedecimal
#Get the # of seeders.
seedstart = tmpdesc.find('Seeders:')
seedend = tmpdesc.find('Added:')
seedstart +=8 # to get to the end of the word 'Seeders:'
tmpseed = tmpdesc[seedstart:seedend].strip()
seeddigits = re.sub("[^0123456789\.]", "", tmpseed).strip()
else:
justdigits = None #size not available in follow-list rss feed
seeddigits = 0 #number of seeders not available in follow-list rss feed
if int(seeddigits) >= int(mylar.MINSEEDS): #keep only entries meeting the minimum-seeders threshold
feeddata.append({
'site': picksite,
'title': feedme.entries[i].title,
'link': feedme.entries[i].link,
'pubdate': feedme.entries[i].updated,
'size': justdigits
})
#print ("Site: CBT")
#print ("Title: " + str(feeddata[i]['Title']))
#print ("Link: " + str(feeddata[i]['Link']))
@ -386,18 +428,21 @@ def rssdbupdate(feeddata,i,type):
#print "populating : " + str(dataval)
#remove passkey so it doesn't end up in db
if type == 'torrent':
newlink = dataval['Link'][:(dataval['Link'].find('&passkey'))]
newlink = dataval['link'][:(dataval['link'].find('&passkey'))]
newVal = {"Link": newlink,
"Pubdate": dataval['Pubdate'],
"Site": dataval['Site']}
"Pubdate": dataval['pubdate'],
"Site": dataval['site'],
"Size": dataval['size']}
ctrlVal = {"Title": dataval['title']}
# if dataval['Site'] == 'KAT':
# newVal['Size'] = dataval['Size']
else:
newlink = dataval['Link']
newVal = {"Link": newlink,
"Pubdate": dataval['Pubdate'],
"Site": dataval['Site'],
"Size": dataval['Size']}
ctrlVal = {"Title": dataval['Title']}
ctrlVal = {"Title": dataval['Title']}
myDB.upsert("rssdb", newVal,ctrlVal)
@ -413,7 +458,7 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
pass
else:
logger.fdebug('ComicID: ' + str(comicid))
snm = myDB.action("SELECT * FROM comics WHERE comicid=?", [comicid]).fetchone()
snm = myDB.selectone("SELECT * FROM comics WHERE comicid=?", [comicid]).fetchone()
if snm is None:
logger.fdebug('Invalid ComicID of ' + str(comicid) + '. Aborting search.')
return
@ -433,9 +478,9 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
tresults = []
if mylar.ENABLE_CBT:
tresults = myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site='CBT'", [tsearch]).fetchall()
tresults = myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site='CBT'", [tsearch])
if mylar.ENABLE_KAT:
tresults += myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site='KAT'", [tsearch]).fetchall()
tresults += myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site='KAT'", [tsearch])
logger.fdebug('seriesname_alt:' + str(seriesname_alt))
if seriesname_alt is None or seriesname_alt == 'None':
@ -465,9 +510,9 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
if mylar.ENABLE_CBT:
#print "AS_Alternate:" + str(AS_Alternate)
tresults += myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site='CBT'", [AS_Alternate]).fetchall()
tresults += myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site='CBT'", [AS_Alternate])
if mylar.ENABLE_KAT:
tresults += myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site='KAT'", [AS_Alternate]).fetchall()
tresults += myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site='KAT'", [AS_Alternate])
if tresults is None:
logger.fdebug('torrent search returned no results for ' + seriesname)
@ -514,12 +559,14 @@ def torrentdbsearch(seriesname,issue,comicid=None,nzbprov=None):
seriesname_mod = re.sub('[\&]', ' ', seriesname_mod)
foundname_mod = re.sub('[\&]', ' ', foundname_mod)
formatrem_seriesname = re.sub('[\'\!\@\#\$\%\:\;\=\?\.\-\/]', '',seriesname_mod)
#formatrem_seriesname = re.sub('[\/]', '-', formatrem_seriesname) #not necessary since seriesname in a torrent file won't have /
formatrem_seriesname = re.sub('[\'\!\@\#\$\%\:\;\=\?\.\/]', '',seriesname_mod)
formatrem_seriesname = re.sub('[\-]', ' ',formatrem_seriesname)
formatrem_seriesname = re.sub('[\/]', '-', formatrem_seriesname) #not necessary since seriesname in a torrent file won't have /
formatrem_seriesname = re.sub('\s+', ' ', formatrem_seriesname)
if formatrem_seriesname[:1] == ' ': formatrem_seriesname = formatrem_seriesname[1:]
formatrem_torsplit = re.sub('[\'\!\@\#\$\%\:\;\\=\?\.\-\/]', '',foundname_mod)
formatrem_torsplit = re.sub('[\'\!\@\#\$\%\:\;\\=\?\.\/]', '',foundname_mod)
formatrem_torsplit = re.sub('[\-]', ' ',formatrem_torsplit) #we replace the - with space so we'll still get hits when there are differences
#formatrem_torsplit = re.sub('[\/]', '-', formatrem_torsplit) #not necessary since if has a /, should be removed in above line
formatrem_torsplit = re.sub('\s+', ' ', formatrem_torsplit)
logger.fdebug(str(len(formatrem_torsplit)) + ' - formatrem_torsplit : ' + formatrem_torsplit.lower())
@ -599,7 +646,7 @@ def nzbdbsearch(seriesname,issue,comicid=None,nzbprov=None,searchYear=None,Comic
if comicid is None or comicid == 'None':
pass
else:
snm = myDB.action("SELECT * FROM comics WHERE comicid=?", [comicid]).fetchone()
snm = myDB.selectone("SELECT * FROM comics WHERE comicid=?", [comicid]).fetchone()
if snm is None:
logger.info('Invalid ComicID of ' + str(comicid) + '. Aborting search.')
return
@ -611,7 +658,7 @@ def nzbdbsearch(seriesname,issue,comicid=None,nzbprov=None,searchYear=None,Comic
nsearch_seriesname = re.sub('[\'\!\@\#\$\%\:\;\/\\=\?\.\-\s]', '%',seriesname)
formatrem_seriesname = re.sub('[\'\!\@\#\$\%\:\;\/\\=\?\.]', '',seriesname)
nsearch = '%' + nsearch_seriesname + "%"
nresults = myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site=?", [nsearch,nzbprov])
nresults = myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site=?", [nsearch,nzbprov])
if nresults is None:
logger.fdebug('nzb search returned no results for ' + seriesname)
if seriesname_alt is None:
@ -623,7 +670,7 @@ def nzbdbsearch(seriesname,issue,comicid=None,nzbprov=None,searchYear=None,Comic
AS_Alternate = AlternateSearch
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
nresults += myDB.action("SELECT * FROM rssdb WHERE Title like ? AND Site=?", [AS_Alternate,nzbprov])
nresults += myDB.select("SELECT * FROM rssdb WHERE Title like ? AND Site=?", [AS_Alternate,nzbprov])
if nresults is None:
logger.fdebug('nzb alternate name search returned no results.')
return "no results"
@ -684,7 +731,7 @@ def nzbdbsearch(seriesname,issue,comicid=None,nzbprov=None,searchYear=None,Comic
'site': nzb['Site'],
'length': nzb['Size']
})
logger.fdebug("entered info for " + nzb['Title'])
#logger.fdebug("entered info for " + nzb['Title'])
nzbinfo['entries'] = nzbtheinfo
@ -726,11 +773,17 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
# response = helpers.urlretrieve(urllib2.urlopen(request), filepath)
response = urllib2.urlopen(request)
logger.fdebug('retrieved response.')
if response.info().get('Content-Encoding') == 'gzip':
buf = StringIO(response.read())
f = gzip.GzipFile(fileobj=buf)
torrent = f.read()
if site == 'KAT':
if response.info()['content-encoding'] == 'gzip':#.get('Content-Encoding') == 'gzip':
logger.fdebug('gzip detected')
buf = StringIO(response.read())
logger.fdebug('gzip buffered')
f = gzip.GzipFile(fileobj=buf)
logger.fdebug('gzip filed.')
torrent = f.read()
logger.fdebug('gzip read.')
else:
torrent = response.read()
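The KAT branch above inflates gzip bodies by hand (the content-encoding FIX from this commit); stripped to its core, the decode is just the following (header access is spelled with .get() here, which differs slightly between urllib2 and http.client responses):

import gzip
from io import BytesIO

def maybe_gunzip(response):
    # Transparently decode a gzip-compressed HTTP body.
    raw = response.read()
    if response.info().get('Content-Encoding') == 'gzip':
        return gzip.GzipFile(fileobj=BytesIO(raw)).read()
    return raw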
@ -741,7 +794,7 @@ def torsend2client(seriesname, issue, seriesyear, linkit, site):
with open(filepath, 'wb') as the_file:
the_file.write(torrent)
logger.info("saved.")
logger.fdebug("saved.")
#logger.fdebug('torrent file saved as : ' + str(filepath))
if mylar.TORRENT_LOCAL:
return "pass"

mylar/rsscheckit.py Executable file

@ -0,0 +1,71 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
from __future__ import with_statement
import datetime
import mylar
from mylar import logger, rsscheck, helpers
#import threading
class tehMain():
def __init__(self, forcerss=None):
self.forcerss = forcerss
def run(self):
forcerss = self.forcerss
logger.info('RSS Feed Check was last run at : ' + str(mylar.RSS_LASTRUN))
firstrun = "no"
#check the last run of rss to make sure it's not hammering.
if mylar.RSS_LASTRUN is None or mylar.RSS_LASTRUN == '' or mylar.RSS_LASTRUN == '0' or forcerss == True:
logger.info('RSS Feed Check First Ever Run.')
firstrun = "yes"
mins = 0
else:
c_obj_date = datetime.datetime.strptime(mylar.RSS_LASTRUN, "%Y-%m-%d %H:%M:%S")
n_date = datetime.datetime.now()
absdiff = abs(n_date - c_obj_date)
mins = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 60.0 #3600 is for hours.
if firstrun == "no" and mins < int(mylar.RSS_CHECKINTERVAL):
logger.fdebug('RSS Check has taken place less than the threshold - not initiating at this time.')
return
mylar.RSS_LASTRUN = helpers.now()
logger.fdebug('Updating RSS Run time to : ' + str(mylar.RSS_LASTRUN))
mylar.config_write()
#function for looping through nzbs/torrent feeds
if mylar.ENABLE_TORRENTS:
logger.fdebug('[RSS] Initiating Torrent RSS Check.')
if mylar.ENABLE_KAT:
logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on KAT.')
rsscheck.torrents(pickfeed='3')
rsscheck.torrents(pickfeed='6')
if mylar.ENABLE_CBT:
logger.fdebug('[RSS] Initiating Torrent RSS Feed Check on CBT.')
rsscheck.torrents(pickfeed='1')
rsscheck.torrents(pickfeed='4')
logger.fdebug('[RSS] Initiating RSS Feed Check for NZB Providers.')
rsscheck.nzbs()
logger.fdebug('[RSS] RSS Feed Check/Update Complete')
logger.fdebug('[RSS] Watchlist Check for new Releases')
mylar.search.searchforissue(rsscheck='yes')
logger.fdebug('[RSS] Watchlist Check complete.')
return
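The throttle above reduces to "minutes since RSS_LASTRUN"; a worked example of the arithmetic:

import datetime

last = datetime.datetime.strptime('2014-05-25 14:00:00', "%Y-%m-%d %H:%M:%S")
now = datetime.datetime.strptime('2014-05-25 14:45:30', "%Y-%m-%d %H:%M:%S")
absdiff = abs(now - last)
mins = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 60.0
# mins == 45.5, so with RSS_CHECKINTERVAL = 60 this run would be skipped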

mylar/scheduler.py Normal file

@ -0,0 +1,88 @@
# Author: Nic Wolfe <nic@wolfeden.ca>
# URL: http://code.google.com/p/sickbeard/
#
# This file is part of Sick Beard.
#
# Sick Beard is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Sick Beard is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Sick Beard. If not, see <http://www.gnu.org/licenses/>.
import datetime
import time
import threading
import traceback
from mylar import logger
class Scheduler:
def __init__(self, action, cycleTime=datetime.timedelta(minutes=10), runImmediately=True,
threadName="ScheduledThread", silent=False, delay=None):
if runImmediately:
self.lastRun = datetime.datetime.fromordinal(1)
else:
self.lastRun = datetime.datetime.now()
self.action = action
self.cycleTime = cycleTime
self.thread = None
self.threadName = threadName
self.silent = silent
self.delay = delay
self.initThread()
self.abort = False
def initThread(self):
if self.thread == None or not self.thread.isAlive():
self.thread = threading.Thread(None, self.runAction, self.threadName)
def timeLeft(self):
return self.cycleTime - (datetime.datetime.now() - self.lastRun)
def forceRun(self):
if not self.action.amActive:
self.lastRun = datetime.datetime.fromordinal(1)
return True
return False
def runAction(self):
while True:
currentTime = datetime.datetime.now()
if currentTime - self.lastRun > self.cycleTime:
self.lastRun = currentTime
try:
if not self.silent:
logger.fdebug("Starting new thread: " + self.threadName)
if self.delay:
logger.info('delaying startup thread for ' + str(self.delay) + ' seconds to avoid locks.')
time.sleep(self.delay)
self.action.run()
except Exception, e:
logger.fdebug("Exception generated in thread " + self.threadName + ": %s" % e )
logger.fdebug(repr(traceback.format_exc()))
if self.abort:
self.abort = False
self.thread = None
return
time.sleep(1)
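Any object with a run() method (and an amActive flag if forceRun() is used) can be driven by this Scheduler; a minimal usage sketch against the class above:

import datetime

class DummyAction(object):
    amActive = False            # forceRun() checks this flag
    def run(self):
        pass                    # a real action does its work here

sched = Scheduler(DummyAction(), cycleTime=datetime.timedelta(minutes=5),
                  threadName='DUMMY', runImmediately=True)
sched.thread.start()            # initThread() already built the thread
# ...on shutdown: set sched.abort = True; the loop notices within ~1 second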


@ -1,4 +1,3 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
@ -19,8 +18,6 @@ from __future__ import division
import mylar
from mylar import logger, db, updater, helpers, parseit, findcomicfeed, notifiers, rsscheck
LOG = mylar.LOG_DIR
import lib.feedparser as feedparser
import urllib
import os, errno
@ -35,6 +32,7 @@ from xml.dom.minidom import parseString
import urllib2
import email.utils
import datetime
from wsgiref.handlers import format_date_time
def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, IssueID, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, SARC=None, IssueArcID=None, mode=None, rsscheck=None, ComicID=None):
if ComicYear == None: ComicYear = '2014'
@ -62,7 +60,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
torprovider = []
torp = 0
logger.fdebug("Checking for torrent enabled.")
if mylar.ENABLE_TORRENTS and mylar.ENABLE_TORRENT_SEARCH:
if mylar.ENABLE_TORRENT_SEARCH: #and mylar.ENABLE_TORRENTS:
if mylar.ENABLE_CBT:
torprovider.append('cbt')
torp+=1
@ -91,7 +89,7 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
newznab_hosts = []
if mylar.NEWZNAB == 1:
#if len(mylar.EXTRA_NEWZNABS > 0):
for newznab_host in mylar.EXTRA_NEWZNABS:
if newznab_host[4] == '1' or newznab_host[4] == 1:
newznab_hosts.append(newznab_host)
@ -106,7 +104,8 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
newznabs+=1
logger.fdebug("newznab name:" + str(newznab_host[0]) + " @ " + str(newznab_host[1]))
logger.fdebug('newznab hosts: ' + str(newznab_hosts))
logger.fdebug('nzbprovider: ' + str(nzbprovider))
# --------
logger.fdebug("there are : " + str(torp) + " torrent providers you have selected.")
torpr = torp - 1
@ -115,9 +114,9 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
providercount = int(nzbp + newznabs)
logger.fdebug("there are : " + str(providercount) + " nzb providers you have selected.")
logger.fdebug("Usenet Retention : " + str(mylar.USENET_RETENTION) + " days")
nzbpr = providercount - 1
if nzbpr < 0:
nzbpr == 0
#nzbpr = providercount - 1
#if nzbpr < 0:
# nzbpr == 0
findit = 'no'
totalproviders = providercount + torp
@ -128,15 +127,9 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
nzbprov = None
return findit, nzbprov
#provider order sequencing here.
#prov_order = []
#if len(mylar.PROVIDER_ORDER) > 0:
# for pr_order in mylar.PROVIDER_ORDER:
# prov_order.append(pr_order[1])
# logger.fdebug('sequence is now to start with ' + pr_order[1] + ' at spot #' + str(pr_order[0]))
prov_order,newznab_info = provider_sequence(nzbprovider,torprovider,newznab_hosts)
# end provider order sequencing
logger.info('search provider order is ' + str(prov_order))
#fix for issue dates between Nov-Dec/(Jan-Feb-Mar)
IssDt = str(IssueDate)[5:7]
@ -176,201 +169,83 @@ def search_init(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueD
logger.fdebug("Initiating Search via : " + str(searchmode))
torprtmp = torpr
#torprtmp = 0 # torprtmp = torpr
prov_count = 0
while (torprtmp >=0 ):
if torprovider[torprtmp] == 'cbt':
# CBT
torprov = 'CBT'
elif torprovider[torprtmp] == 'kat':
torprov = 'KAT'
if searchmode == 'rss':
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torpr, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
logger.fdebug("findit = found!")
break
else:
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torp, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
break
else:
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torpr, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
logger.fdebug("findit = found!")
break
else:
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, torprov, torp, IssDateFix, IssueID, UseFuzzy, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
break
torprtmp-=1
i+=1
if findit == 'yes': return findit, torprov
searchcnt = 0
nzbprov = None
i = 1
if rsscheck:
if mylar.ENABLE_RSS:
searchcnt = 1 # rss-only
else:
searchcnt = 0 # if it's not enabled, don't even bother.
else:
if mylar.ENABLE_RSS:
searchcnt = 2 # rss first, then api on non-matches
else:
searchcnt = 2 #set the searchcnt to 2 (api)
i = 2 #start the counter at api, so it will exit without running RSS
nzbsrchproviders = nzbpr
while ( i <= searchcnt ):
#searchmodes:
# rss - will run through the built-cached db of entries
# api - will run through the providers via api (or non-api in the case of Experimental)
# the trick is if the search is done during an rss compare, it needs to exit when done.
# otherwise, the order of operations is rss feed check first, followed by api on non-results.
if i == 1: searchmode = 'rss' #order of ops - this will be used first.
elif i == 2: searchmode = 'api'
nzbpr = nzbsrchproviders
logger.fdebug("Initiating Search via : " + str(searchmode))
while (nzbpr >= 0 ):
if 'newznab' in nzbprovider[nzbpr]:
while (prov_count <= len(prov_order)-1):
#while (torprtmp <= torpr): #(torprtmp >=0 ):
newznab_host = None
if prov_order[prov_count] == 'cbt':
searchprov = 'CBT'
elif prov_order[prov_count] == 'kat':
searchprov = 'KAT'
elif 'newznab' in prov_order[prov_count]:
#this is for newznab
nzbprov = 'newznab'
for newznab_host in newznab_hosts:
#if it's rss - search both seriesname/alternates via rss then return.
if searchmode == 'rss':
if mylar.ENABLE_RSS:
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
logger.fdebug("Found via RSS.")
break
#findit = altdefine(AlternateSearch, searchmode='rss')
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
break
if findit == 'yes':
logger.fdebug("Found via RSS Alternate Naming.")
break
else:
logger.fdebug("RSS search not enabled - using API only (Enable in the Configuration)")
break
else:
#normal api-search here.
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
logger.fdebug("Found via API.")
break
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
break
if findit == 'yes':
logger.fdebug("Found via API Alternate Naming.")
break
nzbpr-=1
if nzbpr >= 0 and findit != 'yes':
logger.info(u"More than one search provider given - trying next one.")
else:
break
searchprov = 'newznab'
for nninfo in newznab_info:
if nninfo['provider'] == prov_order[prov_count]:
newznab_host = nninfo['info']
if newznab_host is None:
logger.fdebug('there was an error - newznab information was blank and it should not be.')
else:
newznab_host = None
nzbprov = nzbprovider[nzbpr]
if searchmode == 'rss':
if mylar.ENABLE_RSS:
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS='yes', ComicID=ComicID)
if findit == 'yes':
logger.fdebug("Found via RSS on " + nzbprov)
break
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
logger.fdebug("Found via RSS Alternate Naming on " + nzbprov)
break
else:
logger.fdebug("RSS search not enabled - using API only (Enable in the Configuration)")
break
searchprov = prov_order[prov_count].lower()
if searchmode == 'rss':
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, searchprov, prov_count, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
logger.fdebug("findit = found!")
break
else:
#normal api-search here.
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
logger.fdebug("Found via API on " + nzbprov)
break
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, searchprov, prov_count, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, RSS="yes", ComicID=ComicID)
if findit == 'yes':
break
if findit == 'yes':
logger.fdebug("Found via API Alternate Naming on " + nzbprov)
break
nzbpr-=1
if nzbpr >= 0 and findit != 'yes':
logger.info(u"More than one search provider given - trying next one.")
else:
findit = NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, searchprov, prov_count, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
logger.fdebug("findit = found!")
break
if findit == 'yes': return findit, nzbprov
else:
if AlternateSearch is not None and AlternateSearch != "None":
chkthealt = AlternateSearch.split('##')
if chkthealt == 0:
AS_Alternate = AlternateSearch
loopit = len(chkthealt)
for calt in chkthealt:
AS_Alternate = re.sub('##','',calt)
logger.info(u"Alternate Search pattern detected...re-adjusting to : " + str(AS_Alternate) + " " + str(ComicYear))
findit = NZB_SEARCH(AS_Alternate, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, searchprov, prov_count, IssDateFix, IssueID, UseFuzzy, newznab_host, ComicVersion=ComicVersion, SARC=SARC, IssueArcID=IssueArcID, ComicID=ComicID)
if findit == 'yes':
break
if searchprov == 'newznab':
searchprov = newznab_host[0].rstrip()
logger.info('Could not find Issue ' + str(IssueNumber) + ' of ' + ComicName + '(' + str(SeriesYear) + ') using ' + str(searchprov))
prov_count+=1
#torprtmp+=1 #torprtmp-=1
if findit == 'yes':
return findit, searchprov
else:
logger.fdebug("Finished searching via : " + str(searchmode))
i+=1
return findit, nzbprov
if findit == 'no':
logger.info('Issue not found. Status kept as Wanted.')
def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, nzbpr, IssDateFix, IssueID, UseFuzzy, newznab_host=None, ComicVersion=None, SARC=None, IssueArcID=None, RSS=None, ComicID=None):
return findit, searchprov
def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDate, StoreDate, nzbprov, prov_count, IssDateFix, IssueID, UseFuzzy, newznab_host=None, ComicVersion=None, SARC=None, IssueArcID=None, RSS=None, ComicID=None):
if nzbprov == 'nzb.su':
apikey = mylar.NZBSU_APIKEY
elif nzbprov == 'dognzb':
@ -677,36 +552,38 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
else:
for entry in bb['entries']:
logger.fdebug("checking search result: " + entry['title'])
if nzbprov != "experimental" and nzbprov != "CBT" and nzbprov != "dognzb":
if nzbprov != "experimental" and nzbprov != "dognzb":
if RSS == "yes":
comsize_b = entry['length']
else:
#Experimental already has size constraints done.
if nzbprov == 'CBT':
comsize_b = 0 #CBT rss doesn't have sizes
elif nzbprov == 'KAT':
comsize_b = entry['length']
elif nzbprov == 'KAT':
comsize_b = entry['size']
else:
tmpsz = entry.enclosures[0]
comsize_b = tmpsz['length']
if comsize_b is None: comsize_b = 0
comsize_m = helpers.human_size(comsize_b)
logger.fdebug("size given as: " + str(comsize_m))
#----size constraints.
#if it's not within size constraints - dump it now and save some time.
logger.fdebug("size : " + str(comsize_m))
if mylar.USE_MINSIZE:
conv_minsize = helpers.human2bytes(mylar.MINSIZE + "M")
logger.fdebug("comparing Min threshold " + str(conv_minsize) + " .. to .. nzb " + str(comsize_b))
if int(conv_minsize) > int(comsize_b):
logger.fdebug("Failure to meet the Minimum size threshold - skipping")
continue
if mylar.USE_MAXSIZE:
conv_maxsize = helpers.human2bytes(mylar.MAXSIZE + "M")
logger.fdebug("comparing Max threshold " + str(conv_maxsize) + " .. to .. nzb " + str(comsize_b))
if int(comsize_b) > int(conv_maxsize):
logger.fdebug("Failure to meet the Maximium size threshold - skipping")
continue
if comsize_b is None:
logger.fdebug('Size of file cannot be retrieved. Ignoring size-comparison and continuing.')
#comsize_b = 0
else:
comsize_m = helpers.human_size(comsize_b)
logger.fdebug("size given as: " + str(comsize_m))
#----size constraints.
#if it's not within size constraints - dump it now and save some time.
if mylar.USE_MINSIZE:
conv_minsize = helpers.human2bytes(mylar.MINSIZE + "M")
logger.fdebug("comparing Min threshold " + str(conv_minsize) + " .. to .. nzb " + str(comsize_b))
if int(conv_minsize) > int(comsize_b):
logger.fdebug("Failure to meet the Minimum size threshold - skipping")
continue
if mylar.USE_MAXSIZE:
conv_maxsize = helpers.human2bytes(mylar.MAXSIZE + "M")
logger.fdebug("comparing Max threshold " + str(conv_maxsize) + " .. to .. nzb " + str(comsize_b))
if int(comsize_b) > int(conv_maxsize):
logger.fdebug("Failure to meet the Maximium size threshold - skipping")
continue
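The helpers.human2bytes calls above turn the configured 'M'-suffixed thresholds into bytes for these comparisons; a hedged sketch of such a converter (an assumption about the helper's behaviour, binary multiples assumed):

def human2bytes(s):
    # '10M' -> 10485760 ; '1G' -> 1073741824 ; bare numbers pass through
    symbols = {'K': 1024, 'M': 1024 ** 2, 'G': 1024 ** 3}
    if s and s[-1].upper() in symbols:
        return int(float(s[:-1]) * symbols[s[-1].upper()])
    return int(float(s))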
#---- date constraints.
# if the posting date is prior to the publication date, dump it and save the time.
@ -741,12 +618,16 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
#convert it to a Thu, 06 Feb 2014 00:00:00 format
issue_convert = datetime.datetime.strptime(stdate.rstrip(), '%Y-%m-%d')
#logger.fdebug('issue_convert:' + str(issue_convert))
issconv = issue_convert.strftime('%a, %d %b %Y %H:%M:%S')
#issconv = issue_convert.strftime('%a, %d %b %Y %H:%M:%S')
# to get past different locale's os-dependent dates, let's convert it to a generic datetime format
stamp = time.mktime(issue_convert.timetuple())
#logger.fdebug('stamp: ' + str(stamp))
issconv = format_date_time(stamp)
#logger.fdebug('issue date is :' + str(issconv))
#convert it to a tuple
econv = email.utils.parsedate_tz(issconv)
#logger.fdebug('econv:' + str(econv))
#convert it to a numeric
#convert it to a numeric and drop the GMT/Timezone
issuedate_int = time.mktime(econv[:len(econv)-1])
#logger.fdebug('issuedate_int:' + str(issuedate_int))
if postdate_int < issuedate_int:
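The switch from strftime to format_date_time above is the universal date-time conversion from the changelog: strftime('%a, %d %b ...') emits localized weekday/month names that email.utils.parsedate_tz cannot parse on non-English locales, while format_date_time always emits RFC 1123 English. A small standalone walk-through:

import datetime, time, email.utils
from wsgiref.handlers import format_date_time

issue_convert = datetime.datetime.strptime('2014-02-06', '%Y-%m-%d')
stamp = time.mktime(issue_convert.timetuple())
issconv = format_date_time(stamp)          # e.g. 'Thu, 06 Feb 2014 05:00:00 GMT'
econv = email.utils.parsedate_tz(issconv)  # 10-tuple, last element is the tz offset
issuedate_int = time.mktime(econv[:len(econv) - 1])
# issuedate_int is now locale-independent and comparable to the nzb post date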
@ -1397,6 +1278,10 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
sent_to = "SABnzbd+"
logger.info(u"Successfully sent nzb file to SABnzbd")
if annualize == True:
modcomicname = ComicName + ' Annual'
else:
modcomicname = ComicName
if mylar.PROWL_ENABLED and mylar.PROWL_ONSNATCH:
logger.info(u"Sending Prowl notification")
prowl = notifiers.PROWL()
@ -1404,7 +1289,8 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
if mylar.NMA_ENABLED and mylar.NMA_ONSNATCH:
logger.info(u"Sending NMA notification")
nma = notifiers.NMA()
nma.notify(snatched_nzb=nzbname,sent_to=sent_to)
snline = modcomicname + ' (' + comyear + ') - Issue #' + IssueNumber + ' snatched!'
nma.notify(snline=snline,snatched_nzb=nzbname,sent_to=sent_to,prov=nzbprov)
if mylar.PUSHOVER_ENABLED and mylar.PUSHOVER_ONSNATCH:
logger.info(u"Sending Pushover notification")
pushover = notifiers.PUSHOVER()
@ -1413,6 +1299,11 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
logger.info(u"Sending Boxcar notification")
boxcar = notifiers.BOXCAR()
boxcar.notify(snatched_nzb=nzbname,sent_to=sent_to)
if mylar.PUSHBULLET_ENABLED and mylar.PUSHBULLET_ONSNATCH:
logger.info(u"Sending Pushbullet notification")
pushbullet = notifiers.PUSHBULLET()
snline = modcomicname + ' (' + comyear + ') - Issue #' + IssueNumber + ' snatched!'
pushbullet.notify(snline=snline,snatched=nzbname,sent_to=sent_to,prov=nzbprov)
foundc = "yes"
done = True
@ -1433,14 +1324,14 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
foundcomic.append("yes")
logger.fdebug("Found matching comic...preparing to send to Updater with IssueID: " + str(IssueID) + " and nzbname: " + str(nzbname))
updater.nzblog(IssueID, nzbname, ComicName, SARC, IssueArcID)
nzbpr == 0
prov_count = 0
#break
return foundc
elif foundc == "no" and nzbpr == 0:
elif foundc == "no" and prov_count == 0:
foundcomic.append("no")
logger.fdebug("couldn't find a matching comic using " + str(tmpprov))
#logger.fdebug('Could not find a matching comic using ' + str(tmpprov))
if IssDateFix == "no":
logger.info(u"Couldn't find Issue " + str(IssueNumber) + " of " + ComicName + "(" + str(comyear) + "). Status kept as wanted." )
#logger.info('Could not find Issue ' + str(IssueNumber) + ' of ' + ComicName + '(' + str(comyear) + ') using ' + str(tmpprov) + '. Status kept as wanted.' )
break
return foundc
@ -1487,7 +1378,7 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
new = True
for result in results:
comic = myDB.action("SELECT * from comics WHERE ComicID=? AND ComicName != 'None'", [result['ComicID']]).fetchone()
comic = myDB.selectone("SELECT * from comics WHERE ComicID=? AND ComicName != 'None'", [result['ComicID']]).fetchone()
foundNZB = "none"
SeriesYear = comic['ComicYear']
Publisher = comic['ComicPublisher']
@ -1510,16 +1401,16 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
pass
#print ("not found!")
else:
result = myDB.action('SELECT * FROM issues where IssueID=?', [issueid]).fetchone()
result = myDB.selectone('SELECT * FROM issues where IssueID=?', [issueid]).fetchone()
mode = 'want'
if result is None:
result = myDB.action('SELECT * FROM annuals where IssueID=?', [issueid]).fetchone()
result = myDB.selectone('SELECT * FROM annuals where IssueID=?', [issueid]).fetchone()
mode = 'want_ann'
if result is None:
logger.info("Unable to locate IssueID - you probably should delete/refresh the series.")
return
ComicID = result['ComicID']
comic = myDB.action('SELECT * FROM comics where ComicID=?', [ComicID]).fetchone()
comic = myDB.selectone('SELECT * FROM comics where ComicID=?', [ComicID]).fetchone()
SeriesYear = comic['ComicYear']
Publisher = comic['ComicPublisher']
AlternateSearch = comic['AlternateSearch']
@ -1546,15 +1437,15 @@ def searchforissue(issueid=None, new=False, rsscheck=None):
def searchIssueIDList(issuelist):
myDB = db.DBConnection()
for issueid in issuelist:
issue = myDB.action('SELECT * from issues WHERE IssueID=?', [issueid]).fetchone()
issue = myDB.selectone('SELECT * from issues WHERE IssueID=?', [issueid]).fetchone()
mode = 'want'
if issue is None:
issue = myDB.action('SELECT * from annuals WHERE IssueID=?', [issueid]).fetchone()
issue = myDB.selectone('SELECT * from annuals WHERE IssueID=?', [issueid]).fetchone()
mode = 'want_ann'
if issue is None:
logger.info("unable to determine IssueID - perhaps you need to delete/refresh series?")
break
comic = myDB.action('SELECT * from comics WHERE ComicID=?', [issue['ComicID']]).fetchone()
comic = myDB.selectone('SELECT * from comics WHERE ComicID=?', [issue['ComicID']]).fetchone()
print ("Checking for issue: " + str(issue['Issue_Number']))
foundNZB = "none"
SeriesYear = comic['ComicYear']
@ -1575,3 +1466,47 @@ def searchIssueIDList(issuelist):
pass
#print ("not found!")
def provider_sequence(nzbprovider, torprovider, newznab_hosts):
#provider order sequencing here.
newznab_info = []
prov_order = []
nzbproviders_lower = [x.lower() for x in nzbprovider]
if len(mylar.PROVIDER_ORDER) > 0:
for pr_order in mylar.PROVIDER_ORDER:
logger.fdebug('looking for ' + str(pr_order[1]).lower())
logger.fdebug('nzbproviders ' + str(nzbproviders_lower))
logger.fdebug('torproviders ' + str(torprovider))
if (pr_order[1].lower() in torprovider) or any(pr_order[1].lower() in x for x in nzbproviders_lower):
logger.fdebug('found provider in existing enabled providers.')
if any(pr_order[1].lower() in x for x in nzbproviders_lower):
# this is for nzb providers
for np in nzbprovider:
logger.fdebug('checking against nzb provider: ' + str(np))
if all( [ 'newznab' in np, pr_order[1].lower() in np.lower() ] ):
logger.fdebug('newznab match against: ' + str(np))
for newznab_host in newznab_hosts:
logger.fdebug('comparing ' + str(pr_order[1]).lower() + ' against: ' + str(newznab_host[0]).lower())
if newznab_host[0].lower() == pr_order[1].lower():
logger.fdebug('successfully matched - appending to provider.order sequence')
prov_order.append(np) #newznab_host)
newznab_info.append({"provider": np,
"info": newznab_host})
break
elif pr_order[1].lower() in np.lower():
prov_order.append(pr_order[1])
break
else:
for tp in torprovider:
logger.fdebug('checking against torrent provider: ' + str(tp))
if (pr_order[1].lower() in tp.lower()):
logger.fdebug('torrent match against: ' + str(tp))
prov_order.append(tp) #torrent provider
break
logger.fdebug('sequence is now to start with ' + pr_order[1] + ' at spot #' + str(pr_order[0]))
return prov_order,newznab_info
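provider_sequence() realizes the new config.ini provider ordering; stripped of the newznab bookkeeping, the core idea is a sort of the enabled providers by their configured position (a simplified sketch, not the function above):

def order_providers(provider_order, enabled):
    # provider_order: [(position, name), ...] as parsed from config.ini
    # enabled: set of provider names currently switched on
    ranked = sorted(provider_order, key=lambda po: int(po[0]))
    return [name for _, name in ranked if name.lower() in enabled]

# order_providers([(1, 'nzb.su'), (0, 'kat')], {'kat', 'nzb.su'})
# -> ['kat', 'nzb.su']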

mylar/searchit.py Executable file

@ -0,0 +1,30 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
from __future__ import with_statement
import mylar
from mylar import logger
#import threading
class CurrentSearcher():
def __init__(self):
pass
def run(self):
logger.info('[SEARCH] Running Search for Wanted.')
mylar.search.searchforissue()
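CurrentSearcher is the kind of run() wrapper the new Scheduler expects; a hedged sketch of how the main module might wire it up (the interval value and thread name here are illustrative, not Mylar's actual configuration):

import datetime
from mylar import scheduler, searchit

SEARCH_INTERVAL = 360   # minutes; illustrative value
SEARCHER = scheduler.Scheduler(searchit.CurrentSearcher(),
                               cycleTime=datetime.timedelta(minutes=SEARCH_INTERVAL),
                               threadName="SEARCHER",
                               runImmediately=False)
SEARCHER.thread.start()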


@ -14,7 +14,7 @@ from HTMLParser import HTMLParseError
from time import strptime
import mylar
from mylar import logger
from mylar import logger, helpers
def solicit(month, year):
#convert to numerics just to ensure this...
@ -29,24 +29,57 @@ def solicit(month, year):
mnloop = 0
upcoming = []
publishers = {'DC Comics':'DC Comics', 'DC\'s': 'DC Comics', 'Marvel':'Marvel Comics', 'Image':'Image Comics', 'IDW':'IDW Publishing', 'Dark Horse':'Dark Horse Comics'}
publishers = {'DC Comics':'DC Comics', 'DC\'s': 'DC Comics', 'Marvel':'Marvel Comics', 'Image':'Image Comics', 'IDW':'IDW Publishing', 'Dark Horse':'Dark Horse'}
while (mnloop < 5):
if year == 2014:
if len(str(month)) == 1:
month_string = '0' + str(month)
else:
month_string = str(month)
datestring = str(year) + str(month_string)
else:
datestring = str(month) + str(year)
pagelinks = "http://www.comicbookresources.com/tag/solicits" + str(datestring)
# -- this is no longer needed (testing)
# while (mnloop < 5):
# if year == 2014:
# if len(str(month)) == 1:
# month_string = '0' + str(month)
# else:
# month_string = str(month)
# datestring = str(year) + str(month_string)
# else:
# datestring = str(month) + str(year)
# pagelinks = "http://www.comicbookresources.com/tag/solicits" + str(datestring)
#using the solicits+datestring leaves out some entries occasionally
#should use http://www.comicbookresources.com/tag/soliciations
#should use http://www.comicbookresources.com/tag/solicitations
#then just use the logic below but instead of datestring, find the month term and
#go ahead up to +5 months.
if month > 0:
month_start = month
month_end = month + 5
#if month_end > 12:
# ms = 8, me=13 [(12-8)+(13-12)] = [4 + 1] = 5
# [(12 - ms) + (me - 12)] = number of months (5)
monthlist = []
mongr = month_start
#we need to build the months we can grab, but the non-numeric way.
while (mongr <= month_end):
mon = mongr
            yr = year
            if mon > 12:
                #months past 12 belong to the next year
                mon -= 12
                yr += 1
if len(str(mon)) == 1:
mon = '0' + str(mon)
monthlist.append({"month": helpers.fullmonth(mon).lower(),
"num_month": mon,
"year": str(year)})
mongr+=1
logger.info('months: ' + str(monthlist))
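Assuming helpers.fullmonth() maps a month number to its English name, a November start illustrates the five-months-forward window and the year rollover:

# With month = 11, year = 2014 the loop yields six entries (month_start
# through month_end inclusive):
#   {'month': 'november', 'num_month': 11,   'year': '2014'}
#   {'month': 'december', 'num_month': 12,   'year': '2014'}
#   {'month': 'january',  'num_month': '01', 'year': '2015'}
#   {'month': 'february', 'num_month': '02', 'year': '2015'}
#   {'month': 'march',    'num_month': '03', 'year': '2015'}
#   {'month': 'april',    'num_month': '04', 'year': '2015'}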
pagelinks = "http://www.comicbookresources.com/tag/solicitations"
#logger.info('datestring:' + datestring)
#logger.info('checking:' + pagelinks)
pageresponse = urllib2.urlopen ( pagelinks )
@ -57,6 +90,8 @@ def solicit(month, year):
publish = []
resultURL = []
resultmonth = []
resultyear = []
x = 0
cnt = 0
@ -64,43 +99,63 @@ def solicit(month, year):
while (x < lenlinks):
headt = cntlinks[x] #iterate through the hrefs pulling out only results.
if "/?page=article&amp;id=" in str(headt):
#print ("titlet: " + str(headt))
print ("titlet: " + str(headt))
headName = headt.findNext(text=True)
if ('Marvel' and 'DC' and 'Image' not in headName) and ('Solicitations' in headName or 'Solicits' in headName):
pubstart = headName.find('Solicitations')
for pub in publishers:
if pub in headName[:pubstart]:
#print 'publisher:' + str(publishers[pub])
publish.append(publishers[pub])
break
#publish.append( headName[:pubstart].strip() )
abc = headt.findAll('a', href=True)[0]
ID_som = abc['href'] #first instance will have the right link...
resultURL.append( ID_som )
#print '(' + str(cnt) + ') [ ' + publish[cnt] + '] Link URL: ' + resultURL[cnt]
cnt+=1
print ('headName: ' + headName)
if 'Image' in headName: print 'IMAGE FOUND'
if not all( ['Marvel' in headName, 'DC' in headName, 'Image' in headName] ) and ('Solicitations' in headName or 'Solicits' in headName):
# test for month here (int(month) + 5)
if not any(d.get('month', None) == str(headName).lower() for d in monthlist):
for mt in monthlist:
if mt['month'] in headName.lower():
logger.info('matched on month: ' + str(mt['month']))
logger.info('matched on year: ' + str(mt['year']))
resultmonth.append(mt['num_month'])
resultyear.append(mt['year'])
pubstart = headName.find('Solicitations')
publishchk = False
for pub in publishers:
if pub in headName[:pubstart]:
print 'publisher:' + str(publishers[pub])
publish.append(publishers[pub])
publishchk = True
break
if publishchk == False:
break
#publish.append( headName[:pubstart].strip() )
abc = headt.findAll('a', href=True)[0]
ID_som = abc['href'] #first instance will have the right link...
resultURL.append( ID_som )
print '(' + str(cnt) + ') [ ' + publish[cnt] + '] Link URL: ' + resultURL[cnt]
cnt+=1
else:
logger.info('incorrect month - not using.')
x+=1
#print 'cnt:' + str(cnt)
if cnt == 0:
break # no results means, end it
return #break # no results means, end it
loopthis = (cnt-1)
#this loops through each 'found' solicit page
shipdate = str(month_string) + '-' + str(year)
#shipdate = str(month_string) + '-' + str(year) - not needed.
while ( loopthis >= 0 ):
#print 'loopthis is : ' + str(loopthis)
#print 'resultURL is : ' + str(resultURL[loopthis])
shipdate = str(resultmonth[loopthis]) + '-' + str(resultyear[loopthis])
upcoming += populate(resultURL[loopthis], publish[loopthis], shipdate)
loopthis -=1
month +=1 #increment month by 1
mnloop +=1 #increment loop by 1
## not needed.
# month +=1 #increment month by 1
# mnloop +=1 #increment loop by 1
if month > 12: #failsafe failover for months
month = 1
year+=1
# if month > 12: #failsafe failover for months
# month = 1
# year+=1
#---
#print upcoming
logger.info( str(len(upcoming)) + ' upcoming issues discovered.' )
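Since the publisher filter above is rewritten from broken and-chaining to not all([...]), it is worth pinning down what each form actually evaluates; a standalone check, with a made-up headName sample:

headName = 'Marvel Comics October 2014 Solicitations'

#the removed and-chaining collapses to just its last comparison:
#('Marvel' and 'DC' and 'Image' not in headName) == ('Image' not in headName)
print('Image' not in headName)                   # True

#not all([...]) only skips headlines naming ALL three publishers,
#i.e. the combined round-up posts
print(not all(['Marvel' in headName,
               'DC' in headName,
               'Image' in headName]))            # True  -> processed

#by contrast, not any([...]) would skip anything naming ANY of them
print(not any(['Marvel' in headName,
               'DC' in headName,
               'Image' in headName]))            # False -> skipped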
@ -178,12 +233,19 @@ def populate(link,publisher,shipdate):
while (i < lenabc):
titlet = abc[i] #iterate through the p pulling out only results.
titlet_next = titlet.findNext(text=True)
#print ("titlet: " + str(titlet))
if "/prev_img.php?pid" in str(titlet):
if "/prev_img.php?pid" in str(titlet) and titlet_next is None:
#solicits in 03-2014 have separated <p> tags, so we need to take the subsequent <p>, not the initial.
prev_chk = False
get_next = True
i+=1
continue
elif titlet_next is not None:
#logger.fdebug('non-separated <p> tags - taking next text.')
get_next = False
prev_chk = True
elif "/news/preview2.php" in str(titlet):
prev_chk = True
get_next = False
@ -192,13 +254,11 @@ def populate(link,publisher,shipdate):
else:
prev_chk = False
get_next = False
if prev_chk == True:
tempName = titlet.findNext(text=True)
#logger.info('prev_chk: ' + str(prev_chk) + ' ... get_next: ' + str(get_next))
#logger.info('tempName:' + tempName)
if ' TPB' not in tempName and ' HC' not in tempName and 'GN-TPB' not in tempName and 'for $1' not in tempName.lower() and 'subscription variant' not in tempName.lower() and 'poster' not in tempName.lower():
#print publisher + ' found upcoming'
if '#' in tempName:
if not any( [' TPB' in tempName, 'HC' in tempName, 'GN-TPB' in tempName, 'for $1' in tempName.lower(), 'subscription variant' in tempName.lower(), 'poster' in tempName.lower() ] ):
if '#' in tempName[:50]:
#tempName = tempName.replace(u'.',u"'")
tempName = tempName.encode('ascii', 'replace') #.decode('utf-8')
if '???' in tempName:

mylar/updater.py

@ -44,7 +44,7 @@ def dbUpdate(ComicIDList=None):
#print "comicid:" + str(comicid)
mismatch = "no"
if not mylar.CV_ONLY or comicid[:1] == "G":
CV_EXcomicid = myDB.action("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
if CV_EXcomicid is None: pass
else:
if CV_EXcomicid['variloop'] == '99':
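From this point on the diff swaps read-only myDB.action() calls for myDB.select()/myDB.selectone(), letting the db layer commit (and retry on "database is locked") only for real writes. A minimal sketch of what such a wrapper pair could look like over sqlite3; Mylar's actual db.py is not shown in this diff and will differ in detail. Note selectone() must return something exposing .fetchone(), matching how it is chained throughout:

import sqlite3
import time

class DBConnection:
    def __init__(self, filename='mylar.db'):
        self.connection = sqlite3.connect(filename, timeout=20)
        self.connection.row_factory = sqlite3.Row

    def action(self, query, args=None, attempts=5):
        #writes: commit, retrying when the database is locked
        for attempt in range(attempts):
            try:
                cursor = self.connection.execute(query, args or [])
                self.connection.commit()
                return cursor
            except sqlite3.OperationalError as e:
                if 'locked' in str(e) and attempt < attempts - 1:
                    time.sleep(1)
                else:
                    raise

    def select(self, query, args=None):
        #reads: no commit needed, hand back all rows
        return self.connection.execute(query, args or []).fetchall()

    def selectone(self, query, args=None):
        #single-row reads: return the cursor so callers can .fetchone()
        return self.connection.execute(query, args or [])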
@ -134,12 +134,12 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
#let's refresh the series here just to make sure if an issue is available/not.
mismatch = "no"
CV_EXcomicid = myDB.action("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
if CV_EXcomicid is None: pass
else:
if CV_EXcomicid['variloop'] == '99':
mismatch = "yes"
lastupdatechk = myDB.action("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
lastupdatechk = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
if lastupdatechk is None:
pullupd = "yes"
else:
@ -156,21 +156,21 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
#pullupd = "yes"
if 'annual' in ComicName.lower():
if mylar.ANNUALS_ON:
issuechk = myDB.action("SELECT * FROM annuals WHERE ComicID=? AND Issue_Number=?", [ComicID, IssueNumber]).fetchone()
issuechk = myDB.selectone("SELECT * FROM annuals WHERE ComicID=? AND Issue_Number=?", [ComicID, IssueNumber]).fetchone()
else:
logger.fdebug('Annual detected, but annuals not enabled. Ignoring result.')
return
else:
issuechk = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [ComicID, IssueNumber]).fetchone()
issuechk = myDB.selectone("SELECT * FROM issues WHERE ComicID=? AND Issue_Number=?", [ComicID, IssueNumber]).fetchone()
if issuechk is None and altissuenumber is not None:
logger.info('altissuenumber is : ' + str(altissuenumber))
issuechk = myDB.action("SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?", [ComicID, helpers.issuedigits(altissuenumber)]).fetchone()
issuechk = myDB.selectone("SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?", [ComicID, helpers.issuedigits(altissuenumber)]).fetchone()
if issuechk is None:
if futurepull is None:
logger.fdebug(adjComicName + ' Issue: ' + str(IssueNumber) + ' not present in listings to mark for download...updating comic and adding to Upcoming Wanted Releases.')
# we need to either decrease the total issue count, OR indicate that an issue is upcoming.
upco_results = myDB.action("SELECT COUNT(*) FROM UPCOMING WHERE ComicID=?",[ComicID]).fetchall()
upco_results = myDB.select("SELECT COUNT(*) FROM UPCOMING WHERE ComicID=?",[ComicID])
upco_iss = upco_results[0][0]
#logger.info("upco_iss: " + str(upco_iss))
if int(upco_iss) > 0:
@ -188,7 +188,7 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
pullupd = "yes"
logger.fdebug('Now Refreshing comic ' + ComicName + ' to make sure it is up-to-date')
if ComicID[:1] == "G": mylar.importer.GCDimport(ComicID,pullupd)
else: mylar.importer.addComictoDB(ComicID,mismatch,pullupd)
else: mylar.importer.updateissuedata(ComicID, ComicName, calledfrom='weeklycheck')#mylar.importer.addComictoDB(ComicID,mismatch,pullupd)
else:
logger.fdebug('It has not been longer than 5 hours since we last did this...we will wait so we do not hammer things.')
return
@ -289,9 +289,9 @@ def weekly_update(ComicName,IssueNumber,CStatus,CID,futurepull=None,altissuenumb
# added CStatus to update status flags on Pullist screen
myDB = db.DBConnection()
if futurepull is None:
issuecheck = myDB.action("SELECT * FROM weekly WHERE COMIC=? AND ISSUE=?", [ComicName,IssueNumber]).fetchone()
issuecheck = myDB.selectone("SELECT * FROM weekly WHERE COMIC=? AND ISSUE=?", [ComicName,IssueNumber]).fetchone()
else:
issuecheck = myDB.action("SELECT * FROM future WHERE COMIC=? AND ISSUE=?", [ComicName,IssueNumber]).fetchone()
issuecheck = myDB.selectone("SELECT * FROM future WHERE COMIC=? AND ISSUE=?", [ComicName,IssueNumber]).fetchone()
if issuecheck is not None:
controlValue = { "COMIC": str(ComicName),
"ISSUE": str(IssueNumber)}
@ -307,12 +307,12 @@ def weekly_update(ComicName,IssueNumber,CStatus,CID,futurepull=None,altissuenumb
if futurepull is None:
myDB.upsert("weekly", newValue, controlValue)
else:
logger.info('checking ' + str(issuecheck['ComicID']) + ' status of : ' + str(CStatus))
logger.fdebug('checking ' + str(issuecheck['ComicID']) + ' status of : ' + str(CStatus))
if issuecheck['ComicID'] is not None and CStatus != None:
newValue = {"STATUS": "Wanted",
"ComicID": issuecheck['ComicID']}
logger.info('updating value: ' + str(newValue))
logger.info('updating control: ' + str(controlValue))
logger.fdebug('updating value: ' + str(newValue))
logger.fdebug('updating control: ' + str(controlValue))
myDB.upsert("future", newValue, controlValue)
def newpullcheck(ComicName, ComicID, issue=None):
@ -367,16 +367,16 @@ def foundsearch(ComicID, IssueID, mode=None, down=None, provider=None, SARC=None
logger.info('comicid: ' + str(ComicID))
logger.info('issueid: ' + str(IssueID))
if mode != 'story_arc':
comic = myDB.action('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
comic = myDB.selectone('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
ComicName = comic['ComicName']
if mode == 'want_ann':
issue = myDB.action('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
issue = myDB.selectone('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
else:
issue = myDB.action('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
issue = myDB.selectone('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
CYear = issue['IssueDate'][:4]
else:
issue = myDB.action('SELECT * FROM readinglist WHERE IssueArcID=?', [IssueArcID]).fetchone()
issue = myDB.selectone('SELECT * FROM readinglist WHERE IssueArcID=?', [IssueArcID]).fetchone()
ComicName = issue['ComicName']
CYear = issue['IssueYEAR']
@ -432,7 +432,12 @@ def foundsearch(ComicID, IssueID, mode=None, down=None, provider=None, SARC=None
myDB.upsert("snatched", newsnatchValues, snatchedupdate)
logger.info("updated the snatched.")
else:
logger.info("updating the downloaded.")
if down == 'PP':
logger.fdebug('setting status to Post-Processed in history.')
downstatus = 'Post-Processed'
else:
logger.fdebug('setting status to Downloaded in history.')
downstatus = 'Downloaded'
if mode == 'want_ann':
IssueNum = "Annual " + issue['Issue_Number']
elif mode == 'story_arc':
@ -442,14 +447,14 @@ def foundsearch(ComicID, IssueID, mode=None, down=None, provider=None, SARC=None
IssueNum = issue['Issue_Number']
snatchedupdate = {"IssueID": IssueID,
"Status": "Downloaded",
"Status": downstatus,
"Provider": provider
}
newsnatchValues = {"ComicName": ComicName,
"ComicID": ComicID,
"Issue_Number": IssueNum,
"DateAdded": helpers.now(),
"Status": "Downloaded"
"Status": downstatus
}
myDB.upsert("snatched", newsnatchValues, snatchedupdate)
@ -471,7 +476,7 @@ def foundsearch(ComicID, IssueID, mode=None, down=None, provider=None, SARC=None
def forceRescan(ComicID,archive=None):
myDB = db.DBConnection()
# file check to see if issue exists
rescan = myDB.action('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
rescan = myDB.selectone('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
logger.info('Now checking files for ' + rescan['ComicName'] + ' (' + str(rescan['ComicYear']) + ') in ' + rescan['ComicLocation'] )
if archive is None:
fc = filechecker.listFiles(dir=rescan['ComicLocation'], watchcomic=rescan['ComicName'], Publisher=rescan['ComicPublisher'], AlternateSearch=rescan['AlternateSearch'])
@ -480,7 +485,7 @@ def forceRescan(ComicID,archive=None):
iscnt = rescan['Total']
havefiles = 0
if mylar.ANNUALS_ON:
an_cnt = myDB.action("SELECT COUNT(*) FROM annuals WHERE ComicID=?", [ComicID]).fetchall()
an_cnt = myDB.select("SELECT COUNT(*) FROM annuals WHERE ComicID=?", [ComicID])
anncnt = an_cnt[0][0]
else:
anncnt = 0
@ -491,9 +496,11 @@ def forceRescan(ComicID,archive=None):
issuedupechk = []
annualdupechk = []
issueexceptdupechk = []
reissues = myDB.action('SELECT * FROM issues WHERE ComicID=?', [ComicID]).fetchall()
reissues = myDB.select('SELECT * FROM issues WHERE ComicID=?', [ComicID])
issID_to_ignore = []
issID_to_ignore.append(str(ComicID))
issID_to_write = []
while (fn < fccnt):
haveissue = "no"
issuedupe = "no"
@ -656,7 +663,7 @@ def forceRescan(ComicID,archive=None):
else:
# annual inclusion here.
#logger.fdebug("checking " + str(temploc))
reannuals = myDB.action('SELECT * FROM annuals WHERE ComicID=?', [ComicID]).fetchall()
reannuals = myDB.select('SELECT * FROM annuals WHERE ComicID=?', [ComicID])
fcnew = shlex.split(str(temploc))
fcn = len(fcnew)
n = 0
@ -753,17 +760,28 @@ def forceRescan(ComicID,archive=None):
issID_to_ignore.append(str(iss_id))
if 'annual' in temploc.lower():
#issID_to_write.append({"tableName": "annuals",
# "newValueDict": newValueDict,
# "controlValueDict": controlValueDict})
myDB.upsert("annuals", newValueDict, controlValueDict)
else:
#issID_to_write.append({"tableName": "issues",
# "valueDict": newValueDict,
# "keyDict": controlValueDict})
myDB.upsert("issues", newValueDict, controlValueDict)
fn+=1
# if len(issID_to_write) > 0:
# for iss in issID_to_write:
# logger.info('writing ' + str(iss))
# writethis = myDB.upsert(iss['tableName'], iss['valueDict'], iss['keyDict'])
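The commented-out issID_to_write scaffolding above points at where the commit is heading: collect the per-issue status updates and flush them in one transaction instead of committing row by row. With plain sqlite3, that batching could look like this sketch (in-memory table for illustration):

import sqlite3

connection = sqlite3.connect(':memory:')
connection.execute('CREATE TABLE issues (IssueID TEXT, Status TEXT)')
rows = [('Downloaded', '1001'), ('Archived', '1002')]
with connection:  # one transaction, one commit for the whole batch
    connection.executemany('UPDATE issues SET Status=? WHERE IssueID=?', rows)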
logger.fdebug('IssueID to ignore: ' + str(issID_to_ignore))
#here we need to change the status of the ones we DIDN'T FIND above since the loop only hits on FOUND issues.
update_iss = []
tmpsql = "SELECT * FROM issues WHERE ComicID=? AND IssueID not in ({seq})".format(seq=','.join(['?']*(len(issID_to_ignore)-1)))
chkthis = myDB.action(tmpsql, issID_to_ignore).fetchall()
chkthis = myDB.select(tmpsql, issID_to_ignore)
# chkthis = None
if chkthis is None:
pass
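The tmpsql line above builds one '?' placeholder per ID so the whole list can be bound safely; the len()-1 is because the first element of issID_to_ignore is the ComicID itself, consumed by the leading ComicID=?. A self-contained illustration with made-up values:

import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE issues (ComicID TEXT, IssueID TEXT)')
con.executemany('INSERT INTO issues VALUES (?,?)',
                [('100', 'a'), ('100', 'b'), ('100', 'c')])

ignore = ['100', 'a', 'b']  # ComicID first, then the IssueIDs to skip
sql = ("SELECT * FROM issues WHERE ComicID=? AND IssueID NOT IN ({seq})"
       .format(seq=','.join(['?'] * (len(ignore) - 1))))
#expands to ...NOT IN (?,?); parameters line up: ComicID, then the IDs
print(con.execute(sql, ignore).fetchall())  # [('100', 'c')]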
@ -810,10 +828,10 @@ def forceRescan(ComicID,archive=None):
arcanns = 0
# if filechecker returns 0 files (it doesn't find any), but some issues have a status of 'Archived'
# the loop below won't work...let's adjust :)
arcissues = myDB.action("SELECT count(*) FROM issues WHERE ComicID=? and Status='Archived'", [ComicID]).fetchall()
arcissues = myDB.select("SELECT count(*) FROM issues WHERE ComicID=? and Status='Archived'", [ComicID])
if int(arcissues[0][0]) > 0:
arcfiles = arcissues[0][0]
arcannuals = myDB.action("SELECT count(*) FROM annuals WHERE ComicID=? and Status='Archived'", [ComicID]).fetchall()
arcannuals = myDB.select("SELECT count(*) FROM annuals WHERE ComicID=? and Status='Archived'", [ComicID])
if int(arcannuals[0][0]) > 0:
arcanns = arcannuals[0][0]
@ -824,7 +842,7 @@ def forceRescan(ComicID,archive=None):
ignorecount = 0
if mylar.IGNORE_HAVETOTAL: # if this is enabled, will increase Have total as if in Archived Status
ignores = myDB.action("SELECT count(*) FROM issues WHERE ComicID=? AND Status='Ignored'", [ComicID]).fetchall()
ignores = myDB.select("SELECT count(*) FROM issues WHERE ComicID=? AND Status='Ignored'", [ComicID])
if int(ignores[0][0]) > 0:
ignorecount = ignores[0][0]
havefiles = havefiles + ignorecount

mylar/versioncheck.py

@ -87,7 +87,7 @@ def getVersion():
cur_commit_hash = output.strip()
if not re.match('^[a-z0-9]+$', cur_commit_hash):
logger.error('Output doesn\'t look like a hash, not using it')
logger.error('Output does not look like a hash, not using it')
return None
return cur_commit_hash

mylar/versioncheckit.py Normal file

@ -0,0 +1,30 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
from __future__ import with_statement
import mylar
from mylar import logger
#import threading
class CheckVersion():
    def __init__(self):
        pass

    def run(self):
        logger.info('[VersionCheck] Checking for new release on Github.')
        mylar.versioncheck.checkGithub()
        return

mylar/webserve.py

@ -41,12 +41,9 @@ import lib.simplejson as simplejson
from operator import itemgetter
def serve_template(templatename, **kwargs):
interface_dir = os.path.join(str(mylar.PROG_DIR), 'data/interfaces/')
template_dir = os.path.join(str(interface_dir), mylar.INTERFACE)
_hplookup = TemplateLookup(directories=[template_dir])
try:
template = _hplookup.get_template(templatename)
return template.render(**kwargs)
@ -56,7 +53,10 @@ def serve_template(templatename, **kwargs):
class WebInterface(object):
def index(self):
raise cherrypy.HTTPRedirect("home")
if mylar.SAFESTART:
raise cherrypy.HTTPRedirect("manageComics")
else:
raise cherrypy.HTTPRedirect("home")
index.exposed=True
def home(self):
@ -68,7 +68,7 @@ class WebInterface(object):
issue = myDB.select("SELECT * FROM issues WHERE ComicID=?", [comic['ComicID']])
if mylar.ANNUALS_ON:
annuals_on = True
annual = myDB.action("SELECT COUNT(*) as count FROM annuals WHERE ComicID=?", [comic['ComicID']]).fetchone()
annual = myDB.selectone("SELECT COUNT(*) as count FROM annuals WHERE ComicID=?", [comic['ComicID']]).fetchone()
annualcount = annual[0]
if not annualcount:
annualcount = 0
@ -133,7 +133,7 @@ class WebInterface(object):
def comicDetails(self, ComicID):
myDB = db.DBConnection()
comic = myDB.action('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
comic = myDB.selectone('SELECT * FROM comics WHERE ComicID=?', [ComicID]).fetchone()
if comic is None:
raise cherrypy.HTTPRedirect("home")
#let's cheat. :)
@ -271,7 +271,7 @@ class WebInterface(object):
# iterate through, and overwrite the existing watchmatch with the new chosen 'C' + comicid value
confirmedid = "C" + str(comicid)
confirms = myDB.action("SELECT * FROM importresults WHERE WatchMatch=?", [ogcname])
confirms = myDB.select("SELECT * FROM importresults WHERE WatchMatch=?", [ogcname])
if confirms is None:
logger.error("There are no results that match...this is an ERROR.")
else:
@ -291,7 +291,7 @@ class WebInterface(object):
#print ("comicimage: " + str(comicimage))
if not mylar.CV_ONLY:
#here we test for exception matches (ie. comics spanning more than one volume, known mismatches, etc).
CV_EXcomicid = myDB.action("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [comicid]).fetchone()
if CV_EXcomicid is None: # pass #
gcdinfo=parseit.GCDScraper(comicname, comicyear, comicissues, comicid, quickmatch="yes")
if gcdinfo == "No Match":
@ -373,7 +373,7 @@ class WebInterface(object):
def wanted_Export(self):
import unicodedata
myDB = db.DBConnection()
wantlist = myDB.action("SELECT * FROM issues WHERE Status='Wanted' AND ComicName NOT NULL")
wantlist = myDB.select("SELECT * FROM issues WHERE Status='Wanted' AND ComicName NOT NULL")
if wantlist is None:
logger.info("There aren't any issues marked as Wanted. Aborting Export.")
return
@ -393,7 +393,7 @@ class WebInterface(object):
headerline = headrow.decode('utf-8','ignore')
f.write('%s\n' % (headerline.encode('ascii','replace').strip()))
for want in wantlist:
wantcomic = myDB.action("SELECT * FROM comics WHERE ComicID=?", [want['ComicID']]).fetchone()
wantcomic = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [want['ComicID']]).fetchone()
exceptln = wantcomic['ComicName'].encode('ascii', 'replace') + "," + str(wantcomic['ComicYear']) + "," + str(want['Issue_Number']) + "," + str(want['IssueDate']) + "," + str(want['ComicID']) + "," + str(want['IssueID'])
logger.fdebug(exceptln)
wcount+=1
@ -500,7 +500,7 @@ class WebInterface(object):
def deleteArtist(self, ComicID):
myDB = db.DBConnection()
comic = myDB.action('SELECT * from comics WHERE ComicID=?', [ComicID]).fetchone()
comic = myDB.selectone('SELECT * from comics WHERE ComicID=?', [ComicID]).fetchone()
if comic['ComicName'] is None: ComicName = "None"
else: ComicName = comic['ComicName']
logger.info(u"Deleting all traces of Comic: " + ComicName)
@ -524,9 +524,10 @@ class WebInterface(object):
def refreshArtist(self, ComicID):
myDB = db.DBConnection()
mismatch = "no"
logger.fdebug('Refreshing comicid: ' + str(ComicID))
if not mylar.CV_ONLY or ComicID[:1] == "G":
CV_EXcomicid = myDB.action("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
if CV_EXcomicid is None: pass
else:
if CV_EXcomicid['variloop'] == '99':
@ -560,8 +561,8 @@ class WebInterface(object):
issues += annual_load #myDB.select('SELECT * FROM annuals WHERE ComicID=?', [ComicID])
#store the issues' status for a given comicid, after deleting and readding, flip the status back to$
logger.fdebug("Deleting all issue data.")
myDB.select('DELETE FROM issues WHERE ComicID=?', [ComicID])
myDB.select('DELETE FROM annuals WHERE ComicID=?', [ComicID])
myDB.action('DELETE FROM issues WHERE ComicID=?', [ComicID])
myDB.action('DELETE FROM annuals WHERE ComicID=?', [ComicID])
logger.fdebug("Refreshing the series and pulling in new data using only CV.")
mylar.importer.addComictoDB(ComicID,mismatch,calledfrom='dbupdate',annload=annload)
#reload the annuals here.
@ -582,8 +583,8 @@ class WebInterface(object):
fndissue = []
for issue in issues:
for issuenew in issues_new:
logger.info(str(issue['Issue_Number']) + ' - issuenew:' + str(issuenew['IssueID']) + ' : ' + str(issuenew['Status']))
logger.info(str(issue['Issue_Number']) + ' - issue:' + str(issue['IssueID']) + ' : ' + str(issue['Status']))
#logger.fdebug(str(issue['Issue_Number']) + ' - issuenew:' + str(issuenew['IssueID']) + ' : ' + str(issuenew['Status']))
#logger.fdebug(str(issue['Issue_Number']) + ' - issue:' + str(issue['IssueID']) + ' : ' + str(issue['Status']))
if issuenew['IssueID'] == issue['IssueID'] and issuenew['Status'] != issue['Status']:
ctrlVAL = {"IssueID": issue['IssueID']}
#if the status is None and the original status is either Downloaded / Archived, keep status & stats
@ -615,7 +616,7 @@ class WebInterface(object):
logger.fdebug("annual detected for " + str(issue['IssueID']) + " #: " + str(issue['Issue_Number']))
myDB.upsert("Annuals", newVAL, ctrlVAL)
else:
logger.info('writing issuedata: ' + str(newVAL))
logger.fdebug('#' + str(issue['Issue_Number']) + ' writing issuedata: ' + str(newVAL))
myDB.upsert("Issues", newVAL, ctrlVAL)
fndissue.append({"IssueID": issue['IssueID']})
icount+=1
@ -627,9 +628,13 @@ class WebInterface(object):
issues_new += myDB.select('SELECT * FROM annuals WHERE ComicID=? AND Status is NULL', [ComicID])
newiss = []
if mylar.AUTOWANT_UPCOMING:
newstatus = "Wanted"
else:
newstatus = "Skipped"
for iss in issues_new:
newiss.append({"IssueID": issue['IssueID'],
"Status": "Skipped"})
newiss.append({"IssueID": iss['IssueID'],
"Status": newstatus})
if len(newiss) > 0:
for newi in newiss:
ctrlVAL = {"IssueID": newi['IssueID']}
@ -647,7 +652,7 @@ class WebInterface(object):
def editIssue(self, ComicID):
myDB = db.DBConnection()
comic = myDB.action('SELECT * from comics WHERE ComicID=?', [ComicID]).fetchone()
comic = myDB.selectone('SELECT * from comics WHERE ComicID=?', [ComicID]).fetchone()
title = 'Now Editing ' + comic['ComicName']
return serve_template(templatename="editcomic.html", title=title, comic=comic)
#raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" & ComicID)
@ -655,8 +660,9 @@ class WebInterface(object):
def force_rss(self):
logger.info('attempting to run RSS Check Forcibly')
chktorrent = mylar.rsscheck.tehMain(forcerss=True)
if chktorrent:
forcethis = mylar.rsscheckit.tehMain(forcerss=True)
forcerun = forcethis.run()
if forcerun:
logger.info('Successfully ran RSS Force Check.')
return
@ -685,20 +691,20 @@ class WebInterface(object):
if IssueID is None or 'issue_table' in IssueID or 'history_table' in IssueID:
continue
else:
mi = myDB.action("SELECT * FROM issues WHERE IssueID=?",[IssueID]).fetchone()
mi = myDB.selectone("SELECT * FROM issues WHERE IssueID=?",[IssueID]).fetchone()
annchk = 'no'
if mi is None:
if mylar.ANNUALS_ON:
mi = myDB.action("SELECT * FROM annuals WHERE IssueID=?",[IssueID]).fetchone()
mi = myDB.selectone("SELECT * FROM annuals WHERE IssueID=?",[IssueID]).fetchone()
comicname = mi['ReleaseComicName']
annchk = 'yes'
else:
comicname = mi['ComicName']
miyr = myDB.action("SELECT ComicYear FROM comics WHERE ComicID=?", [mi['ComicID']]).fetchone()
miyr = myDB.selectone("SELECT ComicYear FROM comics WHERE ComicID=?", [mi['ComicID']]).fetchone()
if action == 'Downloaded':
if mi['Status'] == "Skipped" or mi['Status'] == "Wanted":
logger.info(u"Cannot change status to %s as comic is not Snatched or Downloaded" % (newaction))
# continue
continue
elif action == 'Archived':
logger.info(u"Marking %s %s as %s" % (comicname, mi['Issue_Number'], newaction))
#updater.forceRescan(mi['ComicID'])
@ -773,7 +779,7 @@ class WebInterface(object):
#this is for marking individual comics from the pullist to be downloaded.
#because ComicID and IssueID will both be None due to pullist, it's probably
#better to set both to some generic #, and then filter out later...
cyear = myDB.action("SELECT SHIPDATE FROM weekly").fetchone()
cyear = myDB.selectone("SELECT SHIPDATE FROM weekly").fetchone()
ComicYear = str(cyear['SHIPDATE'])[:4]
if ComicYear == '': ComicYear = now.year
logger.info(u"Marking " + ComicName + " " + ComicIssue + " as wanted...")
@ -783,7 +789,7 @@ class WebInterface(object):
raise cherrypy.HTTPRedirect("pullist")
#return
elif mode == 'want' or mode == 'want_ann':
cdname = myDB.action("SELECT ComicName from comics where ComicID=?", [ComicID]).fetchone()
cdname = myDB.selectone("SELECT ComicName from comics where ComicID=?", [ComicID]).fetchone()
ComicName = cdname['ComicName']
controlValueDict = {"IssueID": IssueID}
newStatus = {"Status": "Wanted"}
@ -801,9 +807,9 @@ class WebInterface(object):
# myDB.upsert("issues", newStatus, controlValueDict)
#for future reference, the year should default to current year (.datetime)
if mode == 'want':
issues = myDB.action("SELECT IssueDate, ReleaseDate FROM issues WHERE IssueID=?", [IssueID]).fetchone()
issues = myDB.selectone("SELECT IssueDate, ReleaseDate FROM issues WHERE IssueID=?", [IssueID]).fetchone()
elif mode == 'want_ann':
issues = myDB.action("SELECT IssueDate, ReleaseDate FROM annuals WHERE IssueID=?", [IssueID]).fetchone()
issues = myDB.selectone("SELECT IssueDate, ReleaseDate FROM annuals WHERE IssueID=?", [IssueID]).fetchone()
if ComicYear == None:
ComicYear = str(issues['IssueDate'])[:4]
if issues['ReleaseDate'] is None or issues['ReleaseDate'] == '0000-00-00':
@ -812,7 +818,7 @@ class WebInterface(object):
storedate = issues['IssueDate']
else:
storedate = issues['ReleaseDate']
miy = myDB.action("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
miy = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
SeriesYear = miy['ComicYear']
AlternateSearch = miy['AlternateSearch']
Publisher = miy['ComicPublisher']
@ -831,21 +837,26 @@ class WebInterface(object):
queueissue.exposed = True
def unqueueissue(self, IssueID, ComicID, ComicName=None, Issue=None, FutureID=None):
print 'here'
myDB = db.DBConnection()
if ComicName is None:
issue = myDB.action('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
issue = myDB.selectone('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
annchk = 'no'
if issue is None:
if mylar.ANNUALS_ON:
issann = myDB.action('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
issann = myDB.selectone('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
comicname = issann['ReleaseComicName']
issue = issann['Issue_Number']
annchk = 'yes'
comicid = issann['ComicID']
else:
print IssueID
comicname = issue['ComicName']
print issue['ComicName']
issue = issue['Issue_Number']
comicid = issue['ComicID']
print issue
#comicid = issue['ComicID']
print ComicID
logger.info(u"Marking " + comicname + " issue # " + str(issue) + " as Skipped...")
controlValueDict = {"IssueID": IssueID}
newValueDict = {"Status": "Skipped"}
@ -859,11 +870,11 @@ class WebInterface(object):
#ComicID may be present if it's a watch from the Watchlist, otherwise it won't exist.
if ComicID is not None and ComicID != 'None':
logger.info('comicid present:' + str(ComicID))
thefuture = myDB.action('SELECT * FROM future WHERE ComicID=?', [ComicID]).fetchone()
thefuture = myDB.selectone('SELECT * FROM future WHERE ComicID=?', [ComicID]).fetchone()
else:
logger.info('FutureID: ' + str(FutureID))
logger.info('no comicid - ComicName: ' + str(ComicName) + ' -- Issue: #' + str(Issue))
thefuture = myDB.action('SELECT * FROM future WHERE FutureID=?', [FutureID]).fetchone()
thefuture = myDB.selectone('SELECT * FROM future WHERE FutureID=?', [FutureID]).fetchone()
if thefuture is None:
logger.info('Cannot find the corresponding issue in the Futures List for some reason. This is probably an Error.')
else:
@ -882,11 +893,11 @@ class WebInterface(object):
def archiveissue(self, IssueID):
myDB = db.DBConnection()
issue = myDB.action('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
issue = myDB.selectone('SELECT * FROM issues WHERE IssueID=?', [IssueID]).fetchone()
annchk = 'no'
if issue is None:
if mylar.ANNUALS_ON:
issann = myDB.action('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
issann = myDB.selectone('SELECT * FROM annuals WHERE IssueID=?', [IssueID]).fetchone()
comicname = issann['ReleaseComicName']
issue = issann['Issue_Number']
annchk = 'yes'
@ -928,7 +939,7 @@ class WebInterface(object):
"STATUS" : weekly['STATUS']
})
weeklyresults = sorted(weeklyresults, key=itemgetter('PUBLISHER','COMIC'), reverse=False)
pulldate = myDB.action("SELECT * from weekly").fetchone()
pulldate = myDB.selectone("SELECT * from weekly").fetchone()
if pulldate is None:
return self.manualpull()
#raise cherrypy.HTTPRedirect("home")
@ -1022,7 +1033,7 @@ class WebInterface(object):
def add2futurewatchlist(self, ComicName, Issue, Publisher, ShipDate, FutureID):
myDB = db.DBConnection()
chkfuture = myDB.action('SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber=?', [ComicName, Issue]).fetchone()
chkfuture = myDB.selectone('SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber=?', [ComicName, Issue]).fetchone()
if chkfuture is not None:
logger.info('Already on Future Upcoming list - not adding at this time.')
return
@ -1049,8 +1060,12 @@ class WebInterface(object):
# - will automatically import the series (Add A Series) upon finding match
# - will then proceed to mark the issue as Wanted, then remove from the futureupcoming table
# - will then attempt to download the issue(s) in question.
# future to-do
# specify whether you want to 'add a series (Watch For)' or 'mark an issue as a one-off download'.
# currently the 'add series' option in the futurepulllist will attempt to add a series as per normal.
myDB = db.DBConnection()
chkfuture = myDB.action("SELECT * FROM futureupcoming WHERE IssueNumber='1'").fetchall()
chkfuture = myDB.select("SELECT * FROM futureupcoming WHERE IssueNumber is not NULL")
if chkfuture is None:
logger.info("There are not any series on your future-list that I consider to be a NEW series")
raise cherrypy.HTTPRedirect("home")
@ -1080,7 +1095,7 @@ class WebInterface(object):
#we should probably load all additional issues for the series on the futureupcoming list that are marked as Wanted and then
#throw them to the importer as a tuple, and once imported the import can run the additional search against them.
#now we scan for additional issues of the same series on the upcoming list and mark them accordingly.
chkwant = myDB.action("SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber != '1' AND Status='Wanted'", [ser['ComicName']]).fetchall()
chkwant = myDB.select("SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber != '1' AND Status='Wanted'", [ser['ComicName']])
if chkwant is None:
logger.info('No extra issues to mark at this time for ' + ser['ComicName'])
else:
@ -1094,8 +1109,12 @@ class WebInterface(object):
logger.info('Marking ' + str(len(chkthewanted)) + ' additional issues as Wanted from ' + ser['ComicName'] + ' series as requested')
importer.addComictoDB(sr['comicid'], "no", chkwant=chkthewanted)
logger.info('Sucessfully imported ' + ser['ComicName'] + ' (' + str(ser['IssueDate'][-4:]) + ')')
chktheadd = importer.addComictoDB(sr['comicid'], "no", chkwant=chkthewanted)
if chktheadd != 'Exists':
logger.info('Successfully imported ' + ser['ComicName'] + ' (' + str(ser['IssueDate'][-4:]) + ')')
myDB.action('DELETE from futureupcoming WHERE ComicName=?', [ser['ComicName']])
logger.info('Removed ' + ser['ComicName'] + ' (' + str(ser['IssueDate'][-4:]) + ') from the future upcoming list as it is now added.')
raise cherrypy.HTTPRedirect("home")
future_check.exposed = True
@ -1103,7 +1122,7 @@ class WebInterface(object):
def filterpull(self):
myDB = db.DBConnection()
weeklyresults = myDB.select("SELECT * from weekly")
pulldate = myDB.action("SELECT * from weekly").fetchone()
pulldate = myDB.selectone("SELECT * from weekly").fetchone()
if pulldate is None:
raise cherrypy.HTTPRedirect("home")
return serve_template(templatename="weeklypull.html", title="Weekly Pull", weeklyresults=weeklyresults, pulldate=pulldate['SHIPDATE'], pullfilter=True)
@ -1178,7 +1197,7 @@ class WebInterface(object):
"DisplayComicName": upc['DisplayComicName']})
issues = myDB.select("SELECT * from issues WHERE Status='Wanted'")
isscnt = CISSUES = myDB.action("SELECT COUNT(*) FROM issues WHERE Status='Wanted'").fetchall()
isscnt = myDB.select("SELECT COUNT(*) FROM issues WHERE Status='Wanted'")
iss_cnt = isscnt[0][0]
ann_list = []
@ -1189,7 +1208,7 @@ class WebInterface(object):
#let's add the annuals to the wanted table so people can see them
#ComicName wasn't present in db initially - added on startup chk now.
annuals_list = myDB.select("SELECT * FROM annuals WHERE Status='Wanted'")
anncnt = myDB.action("SELECT COUNT(*) FROM annuals WHERE Status='Wanted'").fetchall()
anncnt = myDB.select("SELECT COUNT(*) FROM annuals WHERE Status='Wanted'")
ann_cnt = anncnt[0][0]
ann_list += annuals_list
issues += annuals_list
@ -1204,7 +1223,7 @@ class WebInterface(object):
mvupcome = myDB.select("SELECT * from upcoming WHERE IssueDate < date('now') order by IssueDate DESC")
#get the issue ID's
for mvup in mvupcome:
myissue = myDB.action("SELECT * FROM issues WHERE IssueID=?", [mvup['IssueID']]).fetchone()
myissue = myDB.selectone("SELECT ComicName, Issue_Number, IssueID, ComicID FROM issues WHERE IssueID=?", [mvup['IssueID']]).fetchone()
#myissue = myDB.action("SELECT * FROM issues WHERE Issue_Number=?", [mvup['IssueNumber']]).fetchone()
if myissue is None: pass
@ -1220,7 +1239,7 @@ class WebInterface(object):
#remove old entry from upcoming so it won't try to continually download again.
logger.fdebug('[DELETE] - ' + mvup['ComicName'] + ' issue #: ' + str(mvup['IssueNumber']))
deleteit = myDB.action("DELETE from upcoming WHERE ComicName=? AND IssueNumber=?", [mvup['ComicName'],mvup['IssueNumber']])
return serve_template(templatename="upcoming.html", title="Upcoming", upcoming=upcoming, issues=issues, ann_list=ann_list, futureupcoming=futureupcoming, future_nodata_upcoming=future_nodata_upcoming, futureupcoming_count=futureupcoming_count, upcoming_count=upcoming_count, wantedcount=wantedcount)
@ -1269,13 +1288,13 @@ class WebInterface(object):
return
myDB = db.DBConnection()
comic = myDB.action("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
comic = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
comicdir = comic['ComicLocation']
comicname = comic['ComicName']
extensions = ('.cbr', '.cbz')
issues = myDB.action("SELECT * FROM issues WHERE ComicID=?", [comicid]).fetchall()
issues = myDB.select("SELECT * FROM issues WHERE ComicID=?", [comicid])
if mylar.ANNUALS_ON:
issues += myDB.action("SELECT * FROM annuals WHERE ComicID=?", [comicid]).fetchall()
issues += myDB.select("SELECT * FROM annuals WHERE ComicID=?", [comicid])
comfiles = []
filefind = 0
for root, dirnames, filenames in os.walk(comicdir):
@ -1316,7 +1335,8 @@ class WebInterface(object):
searchScan.exposed = True
def manage(self):
return serve_template(templatename="manage.html", title="Manage")
mylarRoot = mylar.DESTINATION_DIR
return serve_template(templatename="manage.html", title="Manage", mylarRoot=mylarRoot)
manage.exposed = True
def manageComics(self):
@ -1463,7 +1483,7 @@ class WebInterface(object):
def markasRead(self, IssueID=None, IssueArcID=None):
myDB = db.DBConnection()
if IssueID:
issue = myDB.action('SELECT * from readlist WHERE IssueID=?', [IssueID]).fetchone()
issue = myDB.selectone('SELECT * from readlist WHERE IssueID=?', [IssueID]).fetchone()
if issue['Status'] == 'Read':
NewVal = {"Status": "Added"}
else:
@ -1472,7 +1492,7 @@ class WebInterface(object):
myDB.upsert("readlist", NewVal, CtrlVal)
logger.info("Marked " + str(issue['ComicName']) + " #" + str(issue['Issue_Number']) + " as Read.")
elif IssueArcID:
issue = myDB.action('SELECT * from readinglist WHERE IssueArcID=?', [IssueArcID]).fetchone()
issue = myDB.selectone('SELECT * from readinglist WHERE IssueArcID=?', [IssueArcID]).fetchone()
if issue['Status'] == 'Read':
NewVal = {"Status": "Added"}
else:
@ -1484,8 +1504,8 @@ class WebInterface(object):
def addtoreadlist(self, IssueID):
myDB = db.DBConnection()
readlist = myDB.action("SELECT * from issues where IssueID=?", [IssueID]).fetchone()
comicinfo = myDB.action("SELECT * from comics where ComicID=?", [readlist['ComicID']]).fetchone()
readlist = myDB.selectone("SELECT * from issues where IssueID=?", [IssueID]).fetchone()
comicinfo = myDB.selectone("SELECT * from comics where ComicID=?", [readlist['ComicID']]).fetchone()
if readlist is None:
logger.error("Cannot locate IssueID - aborting..")
else:
@ -1596,9 +1616,9 @@ class WebInterface(object):
GCDissue = int(GCDissue) / 1000
if '.' not in str(GCDissue): GCDissue = str(GCDissue) + ".00"
logger.fdebug("issue converted to " + str(GCDissue))
isschk = myDB.action("SELECT * FROM issues WHERE ComicName=? AND Issue_Number=? AND ComicID=?", [comic['ComicName'], str(GCDissue), comic['ComicID']]).fetchone()
isschk = myDB.selectone("SELECT * FROM issues WHERE ComicName=? AND Issue_Number=? AND ComicID=?", [comic['ComicName'], str(GCDissue), comic['ComicID']]).fetchone()
else:
isschk = myDB.action("SELECT * FROM issues WHERE ComicName=? AND Issue_Number=? AND ComicID=?", [comic['ComicName'], arc['IssueNumber'], comic['ComicID']]).fetchone()
isschk = myDB.selectone("SELECT * FROM issues WHERE ComicName=? AND Issue_Number=? AND ComicID=?", [comic['ComicName'], arc['IssueNumber'], comic['ComicID']]).fetchone()
if isschk is None:
logger.fdebug("we matched on name, but issue " + str(arc['IssueNumber']) + " doesn't exist for " + comic['ComicName'])
else:
@ -1671,7 +1691,7 @@ class WebInterface(object):
for m_arc in arc_match:
#now we cycle through the issues looking for a match.
issue = myDB.action("SELECT * FROM issues where ComicID=? and Issue_Number=?", [m_arc['match_id'],m_arc['match_issue']]).fetchone()
issue = myDB.selectone("SELECT * FROM issues where ComicID=? and Issue_Number=?", [m_arc['match_id'],m_arc['match_issue']]).fetchone()
if issue is None: pass
else:
logger.fdebug("issue: " + str(issue['Issue_Number']) + "..." + str(m_arc['match_issue']))
@ -1739,7 +1759,7 @@ class WebInterface(object):
if wantedlist is not None:
for want in wantedlist:
print want
issuechk = myDB.action("SELECT * FROM issues WHERE IssueID=?", [want['IssueArcID']]).fetchone()
issuechk = myDB.selectone("SELECT * FROM issues WHERE IssueID=?", [want['IssueArcID']]).fetchone()
SARC = want['StoryArc']
IssueArcID = want['IssueArcID']
if issuechk is None:
@ -1773,7 +1793,7 @@ class WebInterface(object):
if watchlistchk is not None:
for watchchk in watchlistchk:
print "Watchlist hit - " + str(watchchk['ComicName'])
issuechk = myDB.action("SELECT * FROM issues WHERE IssueID=?", [watchchk['IssueArcID']]).fetchone()
issuechk = myDB.selectone("SELECT * FROM issues WHERE IssueID=?", [watchchk['IssueArcID']]).fetchone()
SARC = watchchk['StoryArc']
IssueArcID = watchchk['IssueArcID']
if issuechk is None:
@ -1834,7 +1854,7 @@ class WebInterface(object):
def importLog(self, ComicName):
myDB = db.DBConnection()
impchk = myDB.action("SELECT * FROM importresults WHERE ComicName=?", [ComicName]).fetchone()
impchk = myDB.selectone("SELECT * FROM importresults WHERE ComicName=?", [ComicName]).fetchone()
if impchk is None:
logger.error(u"No associated log found for this import : " + ComicName)
return
@ -1844,17 +1864,20 @@ class WebInterface(object):
# return serve_template(templatename="importlog.html", title="Log", implog=implog)
importLog.exposed = True
def logs(self):
if mylar.LOG_LEVEL is None or mylar.LOG_LEVEL == '':
mylar.LOG_LEVEL = 'INFO'
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST, log_level=mylar.LOG_LEVEL)
def logs(self, log_level=None):
#if mylar.LOG_LEVEL is None or mylar.LOG_LEVEL == '' or log_level is None:
# mylar.LOG_LEVEL = 'INFO'
#else:
# mylar.LOG_LEVEL = log_level
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST, loglevel=mylar.LOG_LEVEL)
logs.exposed = True
def log_change(self, loglevel):
def log_change(self, log_level):
if log_level is not None:
print ("changing logger to " + str(log_level))
LOGGER.setLevel(log_level)
return serve_template(templatename="logs.html", title="Log", lineList=mylar.LOG_LIST, log_level=log_level)
raise cherrypy.HTTPRedirect("logs?log_level=%s" % log_level)
#return serve_template(templatename="logs.html", title="Log", lineList=log_list, log_level=loglevel) #lineList=mylar.LOG_LIST, log_level=log_level)
log_change.exposed = True
def clearhistory(self, type=None):
@ -1870,10 +1893,10 @@ class WebInterface(object):
def downloadLocal(self, IssueID=None, IssueArcID=None, ReadOrder=None, dir=None):
myDB = db.DBConnection()
issueDL = myDB.action("SELECT * FROM issues WHERE IssueID=?", [IssueID]).fetchone()
issueDL = myDB.selectone("SELECT * FROM issues WHERE IssueID=?", [IssueID]).fetchone()
comicid = issueDL['ComicID']
#print ("comicid: " + str(comicid))
comic = myDB.action("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
comic = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
#---issue info
comicname = comic['ComicName']
issuenum = issueDL['Issue_Number']
@ -2072,7 +2095,7 @@ class WebInterface(object):
results = myDB.select("SELECT * FROM importresults WHERE WatchMatch is Null OR WatchMatch LIKE 'C%' group by ComicName COLLATE NOCASE")
#this is to get the count of issues;
for result in results:
countthis = myDB.action("SELECT count(*) FROM importresults WHERE ComicName=?", [result['ComicName']]).fetchall()
countthis = myDB.select("SELECT count(*) FROM importresults WHERE ComicName=?", [result['ComicName']])
countit = countthis[0][0]
ctrlVal = {"ComicName": result['ComicName']}
newVal = {"IssueCount": countit}
@ -2103,7 +2126,7 @@ class WebInterface(object):
ComicName = cl
implog = implog + "comicName: " + str(ComicName) + "\n"
myDB = db.DBConnection()
results = myDB.action("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
results = myDB.select("SELECT * FROM importresults WHERE ComicName=?", [ComicName])
#if results > 0:
# print ("There are " + str(results[7]) + " issues to import of " + str(ComicName))
#build the valid year ranges and the minimum issue# here to pass to search.
@ -2134,7 +2157,7 @@ class WebInterface(object):
#self.refreshArtist(comicid=comicid,imported='yes')
if mylar.IMP_MOVE:
implog = implog + "Mass import - Move files\n"
comloc = myDB.action("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
comloc = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [comicid]).fetchone()
movedata_comicid = comicid
movedata_comiclocation = comloc['ComicLocation']
@ -2307,10 +2330,10 @@ class WebInterface(object):
# br_hist = err
#----
myDB = db.DBConnection()
CCOMICS = myDB.action("SELECT COUNT(*) FROM comics").fetchall()
CHAVES = myDB.action("SELECT COUNT(*) FROM issues WHERE Status='Downloaded' OR Status='Archived'").fetchall()
CISSUES = myDB.action("SELECT COUNT(*) FROM issues").fetchall()
CSIZE = myDB.action("select SUM(ComicSize) from issues where Status='Downloaded' or Status='Archived'").fetchall()
CCOMICS = myDB.select("SELECT COUNT(*) FROM comics")
CHAVES = myDB.select("SELECT COUNT(*) FROM issues WHERE Status='Downloaded' OR Status='Archived'")
CISSUES = myDB.select("SELECT COUNT(*) FROM issues")
CSIZE = myDB.select("select SUM(ComicSize) from issues where Status='Downloaded' or Status='Archived'")
COUNT_COMICS = CCOMICS[0][0]
COUNT_HAVES = CHAVES[0][0]
COUNT_ISSUES = CISSUES[0][0]
@ -2329,6 +2352,7 @@ class WebInterface(object):
"api_key" : mylar.API_KEY,
"launch_browser" : helpers.checked(mylar.LAUNCH_BROWSER),
"logverbose" : helpers.checked(mylar.LOGVERBOSE),
"max_logsize" : mylar.MAX_LOGSIZE,
"download_scan_interval" : mylar.DOWNLOAD_SCAN_INTERVAL,
"nzb_search_interval" : mylar.SEARCH_INTERVAL,
"nzb_startup_search" : helpers.checked(mylar.NZB_STARTUP_SEARCH),
@ -2371,6 +2395,7 @@ class WebInterface(object):
"rss_checkinterval" : mylar.RSS_CHECKINTERVAL,
"provider_order" : mylar.PROVIDER_ORDER,
"enable_torrents" : helpers.checked(mylar.ENABLE_TORRENTS),
"minseeds" : mylar.MINSEEDS,
"torrent_local" : helpers.checked(mylar.TORRENT_LOCAL),
"local_watchdir" : mylar.LOCAL_WATCHDIR,
"torrent_seedbox" : helpers.checked(mylar.TORRENT_SEEDBOX),
@ -2424,8 +2449,12 @@ class WebInterface(object):
"pushover_userkey": mylar.PUSHOVER_USERKEY,
"pushover_priority": mylar.PUSHOVER_PRIORITY,
"boxcar_enabled": helpers.checked(mylar.BOXCAR_ENABLED),
"boxcar_username": mylar.BOXCAR_USERNAME,
"boxcar_onsnatch": helpers.checked(mylar.BOXCAR_ONSNATCH),
"boxcar_token": mylar.BOXCAR_TOKEN,
"pushbullet_enabled": helpers.checked(mylar.PUSHBULLET_ENABLED),
"pushbullet_onsnatch": helpers.checked(mylar.PUSHBULLET_ONSNATCH),
"pushbullet_apikey": mylar.PUSHBULLET_APIKEY,
"pushbullet_deviceid": mylar.PUSHBULLET_DEVICEID,
"enable_extra_scripts" : helpers.checked(mylar.ENABLE_EXTRA_SCRIPTS),
"extra_scripts" : mylar.EXTRA_SCRIPTS,
"post_processing" : helpers.checked(mylar.POST_PROCESSING),
@ -2607,14 +2636,15 @@ class WebInterface(object):
readOptions.exposed = True
def configUpdate(self, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, api_enabled=0, api_key=None, launch_browser=0, logverbose=0, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
def configUpdate(self, http_host='0.0.0.0', http_username=None, http_port=8090, http_password=None, api_enabled=0, api_key=None, launch_browser=0, logverbose=0, max_logsize=None, download_scan_interval=None, nzb_search_interval=None, nzb_startup_search=0, libraryscan_interval=None,
nzb_downloader=0, sab_host=None, sab_username=None, sab_apikey=None, sab_password=None, sab_category=None, sab_priority=None, sab_directory=None, log_dir=None, log_level=0, blackhole_dir=None,
nzbget_host=None, nzbget_port=None, nzbget_username=None, nzbget_password=None, nzbget_category=None, nzbget_priority=None, nzbget_directory=None,
usenet_retention=None, nzbsu=0, nzbsu_uid=None, nzbsu_apikey=None, dognzb=0, dognzb_uid=None, dognzb_apikey=None, newznab=0, newznab_host=None, newznab_name=None, newznab_apikey=None, newznab_uid=None, newznab_enabled=0,
raw=0, raw_provider=None, raw_username=None, raw_password=None, raw_groups=None, experimental=0,
enable_meta=0, cmtagger_path=None, enable_rss=0, rss_checkinterval=None, enable_torrent_search=0, enable_kat=0, enable_cbt=0, cbt_passkey=None,
enable_torrents=0, torrent_local=0, local_watchdir=None, torrent_seedbox=0, seedbox_watchdir=None, seedbox_user=None, seedbox_pass=None, seedbox_host=None, seedbox_port=None,
prowl_enabled=0, prowl_onsnatch=0, prowl_keys=None, prowl_priority=None, nma_enabled=0, nma_apikey=None, nma_priority=0, nma_onsnatch=0, pushover_enabled=0, pushover_onsnatch=0, pushover_apikey=None, pushover_userkey=None, pushover_priority=None, boxcar_enabled=0, boxcar_username=None, boxcar_onsnatch=0,
enable_torrents=0, minseeds=0, torrent_local=0, local_watchdir=None, torrent_seedbox=0, seedbox_watchdir=None, seedbox_user=None, seedbox_pass=None, seedbox_host=None, seedbox_port=None,
prowl_enabled=0, prowl_onsnatch=0, prowl_keys=None, prowl_priority=None, nma_enabled=0, nma_apikey=None, nma_priority=0, nma_onsnatch=0, pushover_enabled=0, pushover_onsnatch=0, pushover_apikey=None, pushover_userkey=None, pushover_priority=None, boxcar_enabled=0, boxcar_onsnatch=0, boxcar_token=None,
pushbullet_enabled=0, pushbullet_apikey=None, pushbullet_deviceid=None, pushbullet_onsnatch=0,
preferred_quality=0, move_files=0, rename_files=0, add_to_csv=1, cvinfo=0, lowercase_filenames=0, folder_format=None, file_format=None, enable_extra_scripts=0, extra_scripts=None, enable_pre_scripts=0, pre_scripts=None, post_processing=0, syno_fix=0, search_delay=None, chmod_dir=0777, chmod_file=0660, cvapifix=0,
tsab=None, destination_dir=None, replace_spaces=0, replace_char=None, use_minsize=0, minsize=None, use_maxsize=0, maxsize=None, autowant_all=0, autowant_upcoming=0, comic_cover_local=0, zero_level=0, zero_level_n=None, interface=None, **kwargs):
mylar.HTTP_HOST = http_host
@ -2625,6 +2655,7 @@ class WebInterface(object):
mylar.API_KEY = api_key
mylar.LAUNCH_BROWSER = launch_browser
mylar.LOGVERBOSE = logverbose
mylar.MAX_LOGSIZE = max_logsize
mylar.DOWNLOAD_SCAN_INTERVAL = download_scan_interval
mylar.SEARCH_INTERVAL = nzb_search_interval
mylar.NZB_STARTUP_SEARCH = nzb_startup_search
@ -2671,6 +2702,7 @@ class WebInterface(object):
mylar.ENABLE_RSS = int(enable_rss)
mylar.RSS_CHECKINTERVAL = rss_checkinterval
mylar.ENABLE_TORRENTS = int(enable_torrents)
mylar.MINSEEDS = int(minseeds)
mylar.TORRENT_LOCAL = int(torrent_local)
mylar.LOCAL_WATCHDIR = local_watchdir
mylar.TORRENT_SEEDBOX = int(torrent_seedbox)
@ -2709,8 +2741,12 @@ class WebInterface(object):
mylar.PUSHOVER_PRIORITY = pushover_priority
mylar.PUSHOVER_ONSNATCH = pushover_onsnatch
mylar.BOXCAR_ENABLED = boxcar_enabled
mylar.BOXCAR_USERNAME = boxcar_username
mylar.BOXCAR_ONSNATCH = boxcar_onsnatch
mylar.BOXCAR_TOKEN = boxcar_token
mylar.PUSHBULLET_ENABLED = pushbullet_enabled
mylar.PUSHBULLET_APIKEY = pushbullet_apikey
mylar.PUSHBULLET_DEVICEID = pushbullet_deviceid
mylar.PUSHBULLET_ONSNATCH = pushbullet_onsnatch
mylar.USE_MINSIZE = use_minsize
mylar.MINSIZE = minsize
mylar.USE_MAXSIZE = use_maxsize
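The PushBullet settings saved above feed the new on-snatch notifier. As a rough sketch of a sender against PushBullet's public REST API (the v2 endpoint and payload here are assumptions about the service, not a copy of mylar/notifiers.py):

import base64
import json
import urllib2

def pushbullet_notify(apikey, message, deviceid=None, title='Mylar'):
    #push a simple note; device_iden limits it to a single device
    data = {'type': 'note', 'title': title, 'body': message}
    if deviceid:
        data['device_iden'] = deviceid
    request = urllib2.Request(
        'https://api.pushbullet.com/v2/pushes',
        json.dumps(data),
        {'Content-Type': 'application/json',
         'Authorization': 'Basic ' + base64.b64encode(apikey + ':')})
    return json.loads(urllib2.urlopen(request).read())

#pushbullet_notify(mylar.PUSHBULLET_APIKEY, 'Snatched: Invincible #100')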
@ -2949,7 +2985,8 @@ class WebInterface(object):
api.exposed = True
def downloadthis(self,pathfile=None):
logger.fdebug('filepath to retrieve file from is : ' + str(pathfile))
#pathfile should be escaped via the |u tag from within the html call already.
logger.fdebug('filepath to retrieve file from is : ' + pathfile)
from cherrypy.lib.static import serve_download
return serve_download(pathfile)

mylar/weeklypull.py

@ -35,7 +35,7 @@ def pullit(forcecheck=None):
popit = myDB.select("SELECT count(*) FROM sqlite_master WHERE name='weekly' and type='table'")
if popit:
try:
pull_date = myDB.action("SELECT SHIPDATE from weekly").fetchone()
pull_date = myDB.selectone("SELECT SHIPDATE from weekly").fetchone()
logger.info(u"Weekly pull list present - checking if it's up-to-date..")
if (pull_date is None):
pulldate = '00000000'
@ -672,7 +672,7 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
latest_int = helpers.issuedigits(latestiss)
weekiss_int = helpers.issuedigits(week['ISSUE'])
logger.fdebug('comparing ' + str(latest_int) + ' to ' + str(weekiss_int))
if (latest_int > weekiss_int) or (latest_int == 0 or weekiss_int == 0):
if (latest_int > weekiss_int) and (latest_int != 0 or weekiss_int != 0):
logger.fdebug(str(week['ISSUE']) + ' should not be the next issue in THIS volume of the series.')
logger.fdebug('it should be either greater than ' + str(latestiss) + ' or an issue #0')
break
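The reworked guard above changes more than it looks: the old test broke out whenever either comparison value was zero, so a series with no usable local issue data could never match its own pull-list entry; the new test only breaks when the pull issue is genuinely not newer and at least one value is real. A quick check with illustrative issuedigits-style values:

def old_guard(latest_int, weekiss_int):
    return (latest_int > weekiss_int) or (latest_int == 0 or weekiss_int == 0)

def new_guard(latest_int, weekiss_int):
    return (latest_int > weekiss_int) and (latest_int != 0 or weekiss_int != 0)

#pull-list issue newer than the latest local issue: neither guard breaks
print('%s %s' % (old_guard(3000, 5000), new_guard(3000, 5000)))  # False False

#no local issue data yet (latest_int == 0): only the old guard
#wrongly rejected the pull-list issue
print('%s %s' % (old_guard(0, 1000), new_guard(0, 1000)))        # True False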
@ -725,7 +725,6 @@ def pullitcheck(comic1off_name=None,comic1off_id=None,forcecheck=None, futurepul
else:
# here we add to upcoming table...
statusupdate = updater.upcoming_update(ComicID=ComicID, ComicName=ComicName, IssueNumber=ComicIssue, IssueDate=ComicDate, forcecheck=forcecheck, futurepull='yes', altissuenumber=altissuenum)
# here we update status of weekly table...
if statusupdate is not None:
cstatus = statusupdate['Status']
@ -768,11 +767,11 @@ def loaditup(comicname, comicid, issue, chktype):
if chktype == 'annual':
typedisplay = 'annual issue'
logger.fdebug('[' + comicname + '] trying to locate ' + str(typedisplay) + ' ' + str(issue) + ' to do comparative issue analysis for pull-list')
issueload = myDB.action('SELECT * FROM annuals WHERE ComicID=? AND Int_IssueNumber=?', [comicid, issue_number]).fetchone()
issueload = myDB.selectone('SELECT * FROM annuals WHERE ComicID=? AND Int_IssueNumber=?', [comicid, issue_number]).fetchone()
else:
typedisplay = 'issue'
logger.fdebug('[' + comicname + '] trying to locate ' + str(typedisplay) + ' ' + str(issue) + ' to do comparative issue analysis for pull-list')
issueload = myDB.action('SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?', [comicid, issue_number]).fetchone()
issueload = myDB.selectone('SELECT * FROM issues WHERE ComicID=? AND Int_IssueNumber=?', [comicid, issue_number]).fetchone()
if issueload is None:
logger.fdebug('No results matched for Issue number - either this is a NEW issue with no data yet, or something is wrong')
@ -786,7 +785,8 @@ def loaditup(comicname, comicid, issue, chktype):
if releasedate == '0000-00-00':
logger.fdebug('Store date of 0000-00-00 returned for ' + str(typedisplay) + ' # ' + str(issue) + '. Refreshing series to see if valid date present')
mismatch = 'no'
issuerecheck = mylar.importer.addComictoDB(comicid,mismatch,calledfrom='weekly',issuechk=issue_number,issuetype=chktype)
#issuerecheck = mylar.importer.addComictoDB(comicid,mismatch,calledfrom='weekly',issuechk=issue_number,issuetype=chktype)
issuerecheck = mylar.importer.updateissuedata(comicid,comicname,calledfrom='weekly',issuechk=issue_number,issuetype=chktype)
if issuerecheck is not None:
for il in issuerecheck:
#this is only one record..

mylar/weeklypullit.py Executable file

@ -0,0 +1,31 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
from __future__ import with_statement
import mylar
from mylar import logger
#import threading
class Weekly():
    def __init__(self):
        pass

    def run(self):
        logger.info('[WEEKLY] Checking Weekly Pull-list for new releases/updates')
        mylar.weeklypull.pullit()
        return

post-processing/nzbget/ComicRN.py

@ -0,0 +1,53 @@
#!/usr/bin/env python
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
#
# Move and rename comics according to Mylar's autoProcessComics.cfg
#
# NOTE: This script requires Python to be installed on your system.
##############################################################################
### OPTIONS ###
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
import sys, os
import autoProcessComics
# NZBGet V11+
# Check if the script is called from nzbget 11.0 or later
NZBGET_NO_OF_ARGUMENTS = 4 # script name + the 3 arguments expected on a direct call
if os.environ.has_key('NZBOP_SCRIPTDIR') and not os.environ['NZBOP_VERSION'][0:5] < '11.0':
    # NZBGet argv: all passed as environment variables.
    # Exit codes used by NZBGet
    POSTPROCESS_PARCHECK = 92
    POSTPROCESS_SUCCESS = 93
    POSTPROCESS_ERROR = 94
    POSTPROCESS_NONE = 95

    #Start script
    result = autoProcessComics.processEpisode(os.environ['NZBPP_DIRECTORY'], os.environ['NZBPP_NZBNAME'])

elif len(sys.argv) == NZBGET_NO_OF_ARGUMENTS:
    result = autoProcessComics.processEpisode(sys.argv[1], sys.argv[2], sys.argv[3])

if result == 0:
    if os.environ.has_key('NZBOP_SCRIPTDIR'): # log success for nzbget
        sys.exit(POSTPROCESS_SUCCESS)
else:
    if os.environ.has_key('NZBOP_SCRIPTDIR'): # log fail for nzbget
        sys.exit(POSTPROCESS_ERROR)
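One caveat with the version gate above: NZBOP_VERSION is compared as a string, and lexicographically '9.0' < '11.0' is False, so a hypothetical pre-10 NZBGet would slip into the v11 branch. Should that ever matter, a numeric guard is a small change; a sketch:

def nzbget_at_least(version_string, major=11.0):
    #NZBOP_VERSION looks like '11.0' or '13.0-testing'; parse the
    #leading number instead of comparing strings
    try:
        return float(version_string.split('-')[0]) >= major
    except ValueError:
        return False

#nzbget_at_least('9.0') -> False, nzbget_at_least('11.0') -> True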

post-processing/sabnzbd/ComicRN.py

@ -0,0 +1,22 @@
#!/usr/bin/env python
#
##############################################################################
### SABNZBD POST-PROCESSING SCRIPT ###
#
# Move and rename comics according to Mylar's autoProcessComics.cfg
#
# NOTE: This script requires Python to be installed on your system.
##############################################################################
#module loading
import sys
import autoProcessComics
#the code.
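#SABnzbd hands post-processing scripts a fixed set of positional
#arguments (normally seven); argv[1] is the completed job's directory
#and argv[3] the clean job name, which is why those two are forwarded
#below. Note the len(sys.argv) >= 3 guard is loose, since argv[3]
#needs at least four entries, but SABnzbd's own invocation satisfies it.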
if len(sys.argv) < 2:
    print "No folder supplied - is this being called from SABnzbd or NZBGet?"
    sys.exit()
elif len(sys.argv) >= 3:
    sys.exit(autoProcessComics.processEpisode(sys.argv[1], sys.argv[3]))
else:
    sys.exit(autoProcessComics.processEpisode(sys.argv[1]))

sabnzbd/ComicRN.py

@ -1,19 +0,0 @@
#!/usr/bin/env python
import sys, os
import autoProcessComics
if len(sys.argv) < 2:
    if os.getenv('NZBPP_NZBCOMPLETED', 0):
        #if this variable is set, we're being called from NZBGet
        autoProcessComics.processEpisode(os.getenv('NZBPP_DIRECTORY'), os.getenv('NZBPP_NZBFILENAME'))
    else:
        print "No folder supplied - is this being called from SABnzbd or NZBGet?"
        sys.exit()
elif len(sys.argv) >= 3:
    sys.exit(autoProcessComics.processEpisode(sys.argv[1], sys.argv[3]))
else:
    sys.exit(autoProcessComics.processEpisode(sys.argv[1]))