IMP: Changed configuration handling completely. New config processing; global/config variables changed across the entire application. All changes in the configuration GUI now take effect immediately after saving - no more need to restart Mylar.
IMP: Added a provider order sequence header to the top of the Providers tab for improved visibility.
IMP: Added completed download handling for both SABnzbd and NZBGet - Mylar will monitor the active queue for downloads it sent and then post-process them accordingly, so ComicRN is no longer needed (a sketch of the pattern follows the commit header).
FIX: When recreating a pullist for a week that's not the current week, series that are still missing data will now be refreshed in case of late population.
IMP: Removed the loose/explicit search options; search results now match the entered terms more accurately and return much faster.
IMP: Changed the colour-coding on the search results screen - green now indicates a print series, while non-green indicates a HC/TPB/digital series, which is also labelled as such.
IMP: Fixed the weekly pull not refreshing some series due to legacy numbering on series deemed Ended by Mylar, or due to the pull not refreshing properly, which left those series ineligible to be matched against the pullist.
IMP: Fixed the weekly pull scheduler not running at the designated time, which caused problems when attempting to match series to the pullist for issues.
IMP: Changed the autosnatch script so that get.conf no longer has to be maintained separately. The get.conf items are now located in the main config.ini and have to be repopulated initially.
FIX: Fixed the import count not resetting to 0 on subsequent runs of the Import A Directory option.
IMP: Changed the SAB api key from the nzbkey to the full api key, as required for completed download handling.
FIX: Fixed the retry option for newznab entries so that it now works across the board for newznabs, as opposed to only some providers.
IMP: Some minor code cleanup.

This commit is contained in:
evilhero 2017-11-10 14:25:14 -05:00
parent 8744904147
commit 689268684b
21 changed files with 837 additions and 428 deletions
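
The completed download handling described in the commit message follows a simple queue/worker pattern: every NZB snatched to SABnzbd or NZBGet is pushed onto an internal queue, and a background thread polls the client until the job finishes, then hands the result to the post-processor. A minimal sketch of the pattern, assuming Python 2 as used by this codebase; check_client() and post_process() are hypothetical stand-ins for the SABnzbd/NZBGet pollers and the new mylar/process.py added below:

import Queue
import threading
import time

NZB_QUEUE = Queue.Queue()

def check_client(item):
    # hypothetical stand-in: would query the SABnzbd/NZBGet API for job status
    return {'status': True, 'name': item, 'location': '/tmp', 'failed': False}

def post_process(nzstat):
    # hypothetical stand-in: would invoke mylar.process.Process on the download
    print('post-processing %s from %s' % (nzstat['name'], nzstat['location']))

def nzb_monitor(queue):
    while True:
        item = queue.get(True)      # blocks until a snatch is queued
        if item == 'exit':          # sentinel used during shutdown
            break
        nzstat = check_client(item)
        if nzstat['status'] is False:
            time.sleep(5)           # not finished yet - requeue and retry
            queue.put(item)
        else:
            post_process(nzstat)

worker = threading.Thread(target=nzb_monitor, args=(NZB_QUEUE,), name='AUTO-COMPLETE-NZB')
worker.start()
NZB_QUEUE.put('Some.Comic.001.nzb')  # normally done at snatch time
NZB_QUEUE.put('exit')
worker.join()

The real worker (helpers.nzb_monitor in this commit) additionally dispatches failed downloads to failed download handling.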


@ -104,9 +104,7 @@
</div> </div>
<div id="version"> <div id="version">
Version: <em>${mylar.CURRENT_VERSION}</em> Version: <em>${mylar.CURRENT_VERSION}</em>
%if mylar.CONFIG.GIT_BRANCH != 'master': (${mylar.CONFIG.GIT_BRANCH})
(${version.MYLAR_VERSION})
%endif
</div> </div>
</footer> </footer>
<a href="#main" id="toTop"><span>Back to top</span></a> <a href="#main" id="toTop"><span>Back to top</span></a>


@ -362,6 +362,11 @@
</div> </div>
</div> </div>
<div class="row checkbox left clearfix">
<input type="checkbox" id="sab_client_post_processing" onclick="initConfigCheckbox($this));" name="sab_client_post_processing" value="1" ${config['sab_client_post_processing']} /><label>Enable Completed Download Handling<label>
<small>The category label above is used to when completed download handling is enabled</small>
</div>
<div align="center" class="row"> <div align="center" class="row">
<input type="button" value="Test SABnzbd" id="test_sab" style="float:center" /></br> <input type="button" value="Test SABnzbd" id="test_sab" style="float:center" /></br>
<input type="text" name="sabstatus" style="text-align:center; font-size:11px;" id="sabstatus" size="50" DISABLED /> <input type="text" name="sabstatus" style="text-align:center; font-size:11px;" id="sabstatus" size="50" DISABLED />
@ -409,7 +414,12 @@
%endfor %endfor
</select> </select>
</div> </div>
<div class="row checkbox left clearfix">
<input type="checkbox" id="nzbget_client_post_processing" onclick="initConfigCheckbox($this));" name="nzbget_client_post_processing" value="1" ${config['nzbget_client_post_processing']} /><label>Enable Completed Download Handling<label>
<small>The category label above is used to when completed download handling is enabled</small>
</div>
</fieldset> </fieldset>
<fieldset id="blackhole_options"> <fieldset id="blackhole_options">
<div class="row"> <div class="row">
@ -424,7 +434,6 @@
<input type="text" name="usenet_retention" value="${config['usenet_retention']}" size="10"> <input type="text" name="usenet_retention" value="${config['usenet_retention']}" size="10">
</div> </div>
</fieldset> </fieldset>
</td> </td>
<td> <td>
<legend>Torrents</legend> <legend>Torrents</legend>
@ -928,7 +937,6 @@
</div> </div>
</div> </div>
</fieldset> </fieldset>
</td> </td>
<td> <td>
<fieldset> <fieldset>
@ -1797,6 +1805,7 @@
return; return;
} }
$('#sabstatus').val(data); $('#sabstatus').val(data);
// $('#sab_apikey').val(data);
$('#ajaxMsg').html("<div class='msg'><span class='ui-icon ui-icon-check'></span>"+data+"</div>"); $('#ajaxMsg').html("<div class='msg'><span class='ui-icon ui-icon-check'></span>"+data+"</div>");
}); });
$('#ajaxMsg').addClass('success').fadeIn().delay(3000).fadeOut(); $('#ajaxMsg').addClass('success').fadeIn().delay(3000).fadeOut();


@ -6,20 +6,7 @@
%> %>
<%def name="headerIncludes()"> <%def name="headerIncludes()">
<div id="subhead_container">
<div id="subhead_menu">
%if explicit == 'loose':
<a id="menu_link_retry" title="This will search for ALL of the terms given : ${name}" href="searchit?name=${name |u}&explicit=loose">Search ALL terms</a>
<a id="menu_link_retry" title="This will search EXPLICITLY for only the terms given : ${name}" href="searchit?name=${name |u}&explicit=explicit">Explicit Search</a>
%elif explicit == 'explicit':
<a id="menu_link_retry" title="Warning: This will search for ANY of the terms given : ${name} (this could take awhile)" href="searchit?name=${name |u}&explicit=loose">Loose Search</a>
<a id="menu_link_retry" title="This will search for ALL of the terms given : ${name}" href="searchit?name=${name |u}&explicit=all">Search ALL terms</a>
%elif explicit == 'all':
<a id="menu_link_retry" title="This will search EXPLICITLY for only the terms given : ${name}" href="searchit?name=${name |u}&explicit=explicit">Explicit Search</a>
<a id="menu_link_retry" title="Warning: This will search for ANY of the terms given : ${name} (this could take awhile)" href="searchit?name=${name |u}&explicit=loose">Loose Search</a>
%endif
</div>
</div>
</%def> </%def>
<%def name="body()"> <%def name="body()">
<div id="paddingheader"> <div id="paddingheader">
@ -28,12 +15,7 @@
typesel = " Story Arc" typesel = " Story Arc"
else: else:
typesel = "" typesel = ""
if explicit == 'loose': searchtext = "Search results for : </br><center>" + name + "</center>"
searchtext = "Loose Search results for: </br><center> " + name + "</center>"
elif explicit == 'explicit':
searchtext = "Explicit " + typesel.rstrip() + " Search results for: </br><center> " + name + "</center>"
else:
searchtext = "Search results for : </br><center>" + name + "</center>"
%> %>
<h1 class="clearfix"><img src="interfaces/default/images/icon_search.png" alt="Search results"/>${searchtext}</h1> <h1 class="clearfix"><img src="interfaces/default/images/icon_search.png" alt="Search results"/>${searchtext}</h1>
@ -66,17 +48,23 @@
%if searchresults: %if searchresults:
%for result in searchresults: %for result in searchresults:
<% <%
if result['comicyear'] == mylar.CURRENT_YEAR: grade = 'Z'
grade = 'A'
else:
grade = 'Z'
if result['haveit'] != "No":
grade = 'H';
rtype = None rtype = None
if type != 'story_arc': if type != 'story_arc':
if result['type'] == 'Digital': if result['type'] == 'Digital':
rtype = '[Digital]' rtype = '[Digital]'
grade = 'Z'
elif result['type'] == 'TPB':
rtype = '[TPB]'
grade = 'Z'
elif result['type'] == 'HC':
rtype = '[HC]'
grade = 'Z'
else:
grade = 'A'
if result['haveit'] != "No":
grade = 'H';
%> %>
<tr class="grade${grade}"> <tr class="grade${grade}">
<td class="blank"></td> <td class="blank"></td>


@ -9,7 +9,7 @@
<div id="subhead_container"> <div id="subhead_container">
<div id="subhead_menu"> <div id="subhead_menu">
<a href="#" id="menu_link_refresh" onclick="doAjaxCall('pullist?week=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Refresh submitted.">Refresh Pull-list</a> <a href="#" id="menu_link_refresh" onclick="doAjaxCall('pullist?week=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Refresh submitted.">Refresh Pull-list</a>
<a id="menu_link_retry" href="pullrecreate">Recreate Pull-list</a> <a href="#" id="menu_link_retry" onclick="doAjaxCall('pullrecreate?weeknumber=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Recreating Pullist for week ${weekinfo['weeknumber']}, ${weekinfo['year']}">Recreate Pull-list</a>
<!-- <!--
<a href="#" id="menu_link_retry" onclick="doAjaxCall('create_readlist?weeknumber=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Submitted request for reading-list generation for this week">Generate Reading-List</a> <a href="#" id="menu_link_retry" onclick="doAjaxCall('create_readlist?weeknumber=${weekinfo['weeknumber']}&year=${weekinfo['year']}',$(this),'table')" data-success="Submitted request for reading-list generation for this week">Generate Reading-List</a>
--> -->


@ -1027,7 +1027,6 @@ class PostProcessor(object):
comicname = None comicname = None
issuenumber = None issuenumber = None
if tmpiss is not None: if tmpiss is not None:
logger.info('shouldnt be here')
ppinfo.append({'comicid': tmpiss['ComicID'], ppinfo.append({'comicid': tmpiss['ComicID'],
'issueid': issueid, 'issueid': issueid,
'comicname': tmpiss['ComicName'], 'comicname': tmpiss['ComicName'],


@ -83,7 +83,6 @@ RSS_STATUS = 'Waiting'
WEEKLY_STATUS = 'Waiting' WEEKLY_STATUS = 'Waiting'
VERSION_STATUS = 'Waiting' VERSION_STATUS = 'Waiting'
UPDATER_STATUS = 'Waiting' UPDATER_STATUS = 'Waiting'
SNATCHED_QUEUE = Queue.Queue()
SCHED_RSS_LAST = None SCHED_RSS_LAST = None
SCHED_WEEKLY_LAST = None SCHED_WEEKLY_LAST = None
SCHED_MONITOR_LAST = None SCHED_MONITOR_LAST = None
@ -118,6 +117,9 @@ USE_QBITTORENT = False
USE_UTORRENT = False USE_UTORRENT = False
USE_WATCHDIR = False USE_WATCHDIR = False
SNPOOL = None SNPOOL = None
NZBPOOL = None
SNATCHED_QUEUE = Queue.Queue()
NZB_QUEUE = Queue.Queue()
COMICSORT = None COMICSORT = None
PULLBYFILE = None PULLBYFILE = None
CFG = None CFG = None
@ -134,6 +136,7 @@ DOWNLOAD_APIKEY = None
CMTAGGER_PATH = None CMTAGGER_PATH = None
STATIC_COMICRN_VERSION = "1.01" STATIC_COMICRN_VERSION = "1.01"
STATIC_APC_VERSION = "1.0" STATIC_APC_VERSION = "1.0"
SAB_PARAMS = None
SCHED = BackgroundScheduler({ SCHED = BackgroundScheduler({
'apscheduler.executors.default': { 'apscheduler.executors.default': {
'class': 'apscheduler.executors.pool:ThreadPoolExecutor', 'class': 'apscheduler.executors.pool:ThreadPoolExecutor',
@ -149,9 +152,9 @@ def initialize(config_file):
with INIT_LOCK: with INIT_LOCK:
global CONFIG, _INITIALIZED, QUIET, CONFIG_FILE, CURRENT_VERSION, LATEST_VERSION, COMMITS_BEHIND, INSTALL_TYPE, IMPORTLOCK, PULLBYFILE, INKDROPS_32P, \ global CONFIG, _INITIALIZED, QUIET, CONFIG_FILE, CURRENT_VERSION, LATEST_VERSION, COMMITS_BEHIND, INSTALL_TYPE, IMPORTLOCK, PULLBYFILE, INKDROPS_32P, \
DONATEBUTTON, CURRENT_WEEKNUMBER, CURRENT_YEAR, UMASK, USER_AGENT, SNATCHED_QUEUE, PULLNEW, COMICSORT, WANTED_TAB_OFF, CV_HEADERS, \ DONATEBUTTON, CURRENT_WEEKNUMBER, CURRENT_YEAR, UMASK, USER_AGENT, SNATCHED_QUEUE, NZB_QUEUE, PULLNEW, COMICSORT, WANTED_TAB_OFF, CV_HEADERS, \
IMPORTBUTTON, IMPORT_FILES, IMPORT_TOTALFILES, IMPORT_CID_COUNT, IMPORT_PARSED_COUNT, IMPORT_FAILURE_COUNT, CHECKENABLED, CVURL, DEMURL, WWTURL, TPSEURL, \ IMPORTBUTTON, IMPORT_FILES, IMPORT_TOTALFILES, IMPORT_CID_COUNT, IMPORT_PARSED_COUNT, IMPORT_FAILURE_COUNT, CHECKENABLED, CVURL, DEMURL, WWTURL, TPSEURL, \
USE_SABNZBD, USE_NZBGET, USE_BLACKHOLE, USE_RTORRENT, USE_UTORRENT, USE_QBITTORRENT, USE_DELUGE, USE_TRANSMISSION, USE_WATCHDIR, \ USE_SABNZBD, USE_NZBGET, USE_BLACKHOLE, USE_RTORRENT, USE_UTORRENT, USE_QBITTORRENT, USE_DELUGE, USE_TRANSMISSION, USE_WATCHDIR, SAB_PARAMS, \
PROG_DIR, DATA_DIR, CMTAGGER_PATH, DOWNLOAD_APIKEY, LOCAL_IP, STATIC_COMICRN_VERSION, STATIC_APC_VERSION, KEYS_32P, AUTHKEY_32P, FEED_32P, FEEDINFO_32P, \ PROG_DIR, DATA_DIR, CMTAGGER_PATH, DOWNLOAD_APIKEY, LOCAL_IP, STATIC_COMICRN_VERSION, STATIC_APC_VERSION, KEYS_32P, AUTHKEY_32P, FEED_32P, FEEDINFO_32P, \
MONITOR_STATUS, SEARCH_STATUS, RSS_STATUS, WEEKLY_STATUS, VERSION_STATUS, UPDATER_STATUS, DBUPDATE_INTERVAL, \ MONITOR_STATUS, SEARCH_STATUS, RSS_STATUS, WEEKLY_STATUS, VERSION_STATUS, UPDATER_STATUS, DBUPDATE_INTERVAL, \
SCHED_RSS_LAST, SCHED_WEEKLY_LAST, SCHED_MONITOR_LAST, SCHED_SEARCH_LAST, SCHED_VERSION_LAST, SCHED_DBUPDATE_LAST SCHED_RSS_LAST, SCHED_WEEKLY_LAST, SCHED_MONITOR_LAST, SCHED_SEARCH_LAST, SCHED_VERSION_LAST, SCHED_DBUPDATE_LAST
@ -165,7 +168,7 @@ def initialize(config_file):
return False return False
#set up the default values here if they're wrong. #set up the default values here if they're wrong.
cc.configure() #cc.configure()
# Start the logger, silence console logging if we need to # Start the logger, silence console logging if we need to
logger.initLogger(console=not QUIET, log_dir=CONFIG.LOG_DIR, verbose=VERBOSE) #logger.mylar_log.initLogger(verbose=VERBOSE) logger.initLogger(console=not QUIET, log_dir=CONFIG.LOG_DIR, verbose=VERBOSE) #logger.mylar_log.initLogger(verbose=VERBOSE)
@ -381,29 +384,25 @@ def start():
SCHED.add_job(func=ss.run, id='search', name='Auto-Search', next_run_time=search_diff, trigger=IntervalTrigger(hours=0, minutes=CONFIG.SEARCH_INTERVAL, timezone='UTC')) SCHED.add_job(func=ss.run, id='search', name='Auto-Search', next_run_time=search_diff, trigger=IntervalTrigger(hours=0, minutes=CONFIG.SEARCH_INTERVAL, timezone='UTC'))
if all([CONFIG.ENABLE_TORRENTS, CONFIG.AUTO_SNATCH, OS_DETECT != 'Windows']) and any([CONFIG.TORRENT_DOWNLOADER == 2, CONFIG.TORRENT_DOWNLOADER == 4]): if all([CONFIG.ENABLE_TORRENTS, CONFIG.AUTO_SNATCH, OS_DETECT != 'Windows']) and any([CONFIG.TORRENT_DOWNLOADER == 2, CONFIG.TORRENT_DOWNLOADER == 4]):
logger.info('[AUTO-SNATCHER] Auto-Snatch of completed torrents enabled & attempting to backgroun load....') logger.info('[AUTO-SNATCHER] Auto-Snatch of completed torrents enabled & attempting to background load....')
SNPOOL = threading.Thread(target=helpers.worker_main, args=(SNATCHED_QUEUE,), name="AUTO-SNATCHER") SNPOOL = threading.Thread(target=helpers.worker_main, args=(SNATCHED_QUEUE,), name="AUTO-SNATCHER")
SNPOOL.start() SNPOOL.start()
logger.info('[AUTO-SNATCHER] Succesfully started Auto-Snatch add-on - will now monitor for completed torrents on client....') logger.info('[AUTO-SNATCHER] Succesfully started Auto-Snatch add-on - will now monitor for completed torrents on client....')
helpers.latestdate_fix() if CONFIG.POST_PROCESSING is True and ( all([CONFIG.NZB_DOWNLOADER == 0, CONFIG.SAB_CLIENT_POST_PROCESSING is True]) or all([CONFIG.NZB_DOWNLOADER == 1, CONFIG.NZBGET_CLIENT_POST_PROCESSING is True]) ):
if CONFIG.NZB_DOWNLOADER == 0:
logger.info('[SAB-MONITOR] Completed post-processing handling enabled for SABnzbd. Attempting to background load....')
elif CONFIG.NZB_DOWNLOADER == 1:
logger.info('[NZBGET-MONITOR] Completed post-processing handling enabled for NZBGet. Attempting to background load....')
NZBPOOL = threading.Thread(target=helpers.nzb_monitor, args=(NZB_QUEUE,), name="AUTO-COMPLETE-NZB")
NZBPOOL.start()
if CONFIG.NZB_DOWNLOADER == 0:
logger.info('[AUTO-COMPLETE-NZB] Successfully started Completed post-processing handling for SABnzbd - will now monitor for completed nzbs within sabnzbd and post-process automatically....')
elif CONFIG.NZB_DOWNLOADER == 1:
logger.info('[AUTO-COMPLETE-NZB] Successfully started Completed post-processing handling for NZBGet - will now monitor for completed nzbs within nzbget and post-process automatically....')
#initiate startup rss feeds for torrents/nzbs here...
if CONFIG.ENABLE_RSS: helpers.latestdate_fix()
logger.info('[RSS-FEEDS] Initiating startup-RSS feed checks.')
if SCHED_RSS_LAST is not None:
rss_timestamp = float(SCHED_RSS_LAST)
logger.info('[RSS-FEEDS] RSS last run @ %s' % datetime.datetime.utcfromtimestamp(rss_timestamp))
else:
rss_timestamp = helpers.utctimestamp() + (int(CONFIG.RSS_CHECKINTERVAL) *60)
rs = rsscheckit.tehMain()
duration_diff = (helpers.utctimestamp() - rss_timestamp)/60
if duration_diff >= int(CONFIG.RSS_CHECKINTERVAL):
SCHED.add_job(func=rs.run, id='rss', name='RSS Feeds', args=[True], next_run_time=datetime.datetime.now(), trigger=IntervalTrigger(hours=0, minutes=int(CONFIG.RSS_CHECKINTERVAL), timezone='UTC'))
else:
rss_diff = datetime.datetime.utcfromtimestamp(helpers.utctimestamp() + (int(CONFIG.RSS_CHECKINTERVAL) * 60) - (duration_diff * 60))
logger.fdebug('[RSS-FEEDS] Scheduling next run for @ %s every %s minutes' % (rss_diff, CONFIG.RSS_CHECKINTERVAL))
SCHED.add_job(func=rs.run, id='rss', name='RSS Feeds', args=[True], next_run_time=rss_diff, trigger=IntervalTrigger(hours=0, minutes=int(CONFIG.RSS_CHECKINTERVAL), timezone='UTC'))
if CONFIG.ALT_PULL == 2: if CONFIG.ALT_PULL == 2:
weektimer = 4 weektimer = 4
@ -425,12 +424,29 @@ def start():
if duration_diff >= weekly_interval/60: if duration_diff >= weekly_interval/60:
logger.info('[WEEKLY] Weekly Pull-Update initializing immediately as it has been %s hours since the last run' % (duration_diff/60)) logger.info('[WEEKLY] Weekly Pull-Update initializing immediately as it has been %s hours since the last run' % (duration_diff/60))
SCHED.add_job(func=ws.run, id='weekly', name='Weekly Pullist', next_run_time=datetime.datetime.now(), trigger=IntervalTrigger(hours=weektimer, minutes=0, timezone='UTC')) SCHED.add_job(func=ws.run, id='weekly', name='Weekly Pullist', next_run_time=datetime.datetime.utcnow(), trigger=IntervalTrigger(hours=weektimer, minutes=0, timezone='UTC'))
else: else:
weekly_diff = datetime.datetime.utcfromtimestamp(helpers.utctimestamp() + (weekly_interval - (duration_diff * 60))) weekly_diff = datetime.datetime.utcfromtimestamp(helpers.utctimestamp() + (weekly_interval - (duration_diff * 60)))
logger.fdebug('[WEEKLY] Scheduling next run for @ %s every %s hours' % (weekly_diff, weektimer)) logger.fdebug('[WEEKLY] Scheduling next run for @ %s every %s hours' % (weekly_diff, weektimer))
SCHED.add_job(func=ws.run, id='weekly', name='Weekly Pullist', next_run_time=weekly_diff, trigger=IntervalTrigger(hours=weektimer, minutes=0, timezone='UTC')) SCHED.add_job(func=ws.run, id='weekly', name='Weekly Pullist', next_run_time=weekly_diff, trigger=IntervalTrigger(hours=weektimer, minutes=0, timezone='UTC'))
#initiate startup rss feeds for torrents/nzbs here...
if CONFIG.ENABLE_RSS:
logger.info('[RSS-FEEDS] Initiating startup-RSS feed checks.')
if SCHED_RSS_LAST is not None:
rss_timestamp = float(SCHED_RSS_LAST)
logger.info('[RSS-FEEDS] RSS last run @ %s' % datetime.datetime.utcfromtimestamp(rss_timestamp))
else:
rss_timestamp = helpers.utctimestamp() + (int(CONFIG.RSS_CHECKINTERVAL) *60)
rs = rsscheckit.tehMain()
duration_diff = (helpers.utctimestamp() - rss_timestamp)/60
if duration_diff >= int(CONFIG.RSS_CHECKINTERVAL):
SCHED.add_job(func=rs.run, id='rss', name='RSS Feeds', args=[True], next_run_time=datetime.datetime.utcnow(), trigger=IntervalTrigger(hours=0, minutes=int(CONFIG.RSS_CHECKINTERVAL), timezone='UTC'))
else:
rss_diff = datetime.datetime.utcfromtimestamp(helpers.utctimestamp() + (int(CONFIG.RSS_CHECKINTERVAL) * 60) - (duration_diff * 60))
logger.fdebug('[RSS-FEEDS] Scheduling next run for @ %s every %s minutes' % (rss_diff, CONFIG.RSS_CHECKINTERVAL))
SCHED.add_job(func=rs.run, id='rss', name='RSS Feeds', args=[True], next_run_time=rss_diff, trigger=IntervalTrigger(hours=0, minutes=int(CONFIG.RSS_CHECKINTERVAL), timezone='UTC'))
if CONFIG.CHECK_GITHUB: if CONFIG.CHECK_GITHUB:
vs = versioncheckit.CheckVersion() vs = versioncheckit.CheckVersion()
SCHED.add_job(func=vs.run, id='version', name='Check Version', trigger=IntervalTrigger(hours=0, minutes=CONFIG.CHECK_GITHUB_INTERVAL, timezone='UTC')) SCHED.add_job(func=vs.run, id='version', name='Check Version', trigger=IntervalTrigger(hours=0, minutes=CONFIG.CHECK_GITHUB_INTERVAL, timezone='UTC'))
@ -1102,6 +1118,17 @@ def halt():
except: except:
SCHED.shutdown(wait=False) SCHED.shutdown(wait=False)
if NZBPOOL is not None:
logger.info('Terminating the nzb auto-complete thread.')
try:
NZBPOOL.join(10)
logger.info('Joined pool for termination - successful')
except KeyboardInterrupt:
NZB_QUEUE.put('exit')
NZBPOOL.join(5)
except AssertionError:
os._exit(0)
if SNPOOL is not None: if SNPOOL is not None:
logger.info('Terminating the auto-snatch thread.') logger.info('Terminating the auto-snatch thread.')
try: try:
@ -1112,7 +1139,6 @@ def halt():
SNPOOL.join(5) SNPOOL.join(5)
except AssertionError: except AssertionError:
os._exit(0) os._exit(0)
_INITIALIZED = False _INITIALIZED = False
def shutdown(restart=False, update=False): def shutdown(restart=False, update=False):
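
The weekly-pull scheduler fix above reduces to one piece of arithmetic: if more minutes have elapsed since the last run than the interval allows, the job fires immediately; otherwise the next run is scheduled for the remaining time. A minimal sketch of that computation, with a placeholder last-run timestamp (the 4-hour interval corresponds to ALT_PULL == 2):

import datetime
import time

weekly_interval = 4 * 60 * 60                  # seconds between runs (weektimer = 4)
last_run = time.time() - (90 * 60)             # placeholder: last ran 90 minutes ago
duration_diff = (time.time() - last_run) / 60  # minutes since the last run

if duration_diff >= weekly_interval / 60:
    next_run = datetime.datetime.utcnow()      # overdue - run immediately
else:
    remaining = weekly_interval - (duration_diff * 60)
    next_run = datetime.datetime.utcfromtimestamp(time.time() + remaining)

print('next weekly pull: %s' % next_run)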


@ -164,7 +164,7 @@ class info32p(object):
return return
else: else:
mylar.FEEDINFO_32P = feedinfo mylar.FEEDINFO_32P = feedinfo
return return feedinfo
def searchit(self): def searchit(self):
#self.searchterm is a tuple containing series name, issue number, volume and publisher. #self.searchterm is a tuple containing series name, issue number, volume and publisher.


@ -100,7 +100,7 @@ _CONFIG_DEFINITIONS = OrderedDict({
'CVAPI_RATE' : (int, 'CV', 2), 'CVAPI_RATE' : (int, 'CV', 2),
'COMICVINE_API': (str, 'CV', None), 'COMICVINE_API': (str, 'CV', None),
'BLACKLISTED_PUBLISHERS' : (str, 'CV', None), 'BLACKLISTED_PUBLISHERS' : (str, 'CV', ''),
'CV_VERIFY': (bool, 'CV', True), 'CV_VERIFY': (bool, 'CV', True),
'CV_ONLY': (bool, 'CV', True), 'CV_ONLY': (bool, 'CV', True),
'CV_ONETIMER': (bool, 'CV', True), 'CV_ONETIMER': (bool, 'CV', True),
@ -195,6 +195,7 @@ _CONFIG_DEFINITIONS = OrderedDict({
'SAB_PRIORITY': (str, 'SABnzbd', "Default"), 'SAB_PRIORITY': (str, 'SABnzbd', "Default"),
'SAB_TO_MYLAR': (bool, 'SABnzbd', False), 'SAB_TO_MYLAR': (bool, 'SABnzbd', False),
'SAB_DIRECTORY': (str, 'SABnzbd', None), 'SAB_DIRECTORY': (str, 'SABnzbd', None),
'SAB_CLIENT_POST_PROCESSING': (bool, 'SABnzbd', False), #0/False: ComicRN.py, #1/True: Completed Download Handling
'NZBGET_HOST': (str, 'NZBGet', None), 'NZBGET_HOST': (str, 'NZBGet', None),
'NZBGET_PORT': (str, 'NZBGet', None), 'NZBGET_PORT': (str, 'NZBGet', None),
@ -203,6 +204,7 @@ _CONFIG_DEFINITIONS = OrderedDict({
'NZBGET_PRIORITY': (str, 'NZBGet', None), 'NZBGET_PRIORITY': (str, 'NZBGet', None),
'NZBGET_CATEGORY': (str, 'NZBGet', None), 'NZBGET_CATEGORY': (str, 'NZBGet', None),
'NZBGET_DIRECTORY': (str, 'NZBGet', None), 'NZBGET_DIRECTORY': (str, 'NZBGet', None),
'NZBGET_CLIENT_POST_PROCESSING': (bool, 'NZBGet', False), #0/False: ComicRN.py, #1/True: Completed Download Handling
'BLACKHOLE_DIR': (str, 'Blackhole', None), 'BLACKHOLE_DIR': (str, 'Blackhole', None),
@ -262,10 +264,17 @@ _CONFIG_DEFINITIONS = OrderedDict({
'ENABLE_TORRENTS': (bool, 'Torrents', False), 'ENABLE_TORRENTS': (bool, 'Torrents', False),
'ENABLE_TORRENT_SEARCH': (bool, 'Torrents', False), 'ENABLE_TORRENT_SEARCH': (bool, 'Torrents', False),
'MINSEEDS': (int, 'Torrents', 0), 'MINSEEDS': (int, 'Torrents', 0),
'AUTO_SNATCH': (bool, 'Torrents', False),
'AUTO_SNATCH_SCRIPT': (str, 'Torrents', None),
'ALLOW_PACKS': (bool, 'Torrents', False), 'ALLOW_PACKS': (bool, 'Torrents', False),
'AUTO_SNATCH': (bool, 'AutoSnatch', False),
'AUTO_SNATCH_SCRIPT': (str, 'AutoSnatch', None),
'PP_SSHHOST': (str, 'AutoSnatch', None),
'PP_SSHPORT': (str, 'AutoSnatch', 22),
'PP_SSHUSER': (str, 'AutoSnatch', None),
'PP_SSHPASSWD': (str, 'AutoSnatch', None),
'PP_SSHLOCALCD': (str, 'AutoSnatch', None),
'PP_SSHKEYFILE': (str, 'AutoSnatch', None),
'TORRENT_LOCAL': (bool, 'Watchdir', False), 'TORRENT_LOCAL': (bool, 'Watchdir', False),
'LOCAL_WATCHDIR': (str, 'Watchdir', None), 'LOCAL_WATCHDIR': (str, 'Watchdir', None),
'TORRENT_SEEDBOX': (bool, 'Seedbox', False), 'TORRENT_SEEDBOX': (bool, 'Seedbox', False),
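
With get.conf folded into config.ini, the SSH retrieval settings defined above are read from a new [AutoSnatch] section and have to be filled in once more. A hypothetical example of the section (every value is a placeholder; when auto_snatch is enabled and no script is set, auto_snatch_script defaults to the bundled getlftp.sh):

[AutoSnatch]
auto_snatch = True
auto_snatch_script = /opt/mylar/post-processing/torrent-auto-snatch/getlftp.sh
pp_sshhost = seedbox.example.com
pp_sshport = 22
pp_sshuser = mylaruser
pp_sshpasswd = changeme
pp_sshlocalcd = /data/comics/incoming
pp_sshkeyfile = /home/mylaruser/.ssh/id_rsa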
@ -467,7 +476,7 @@ class Config(object):
default = key[3] default = key[3]
myval = self.check_config(definition_type, section, inikey, default) myval = self.check_config(definition_type, section, inikey, default)
if myval['status'] is False: if myval['status'] is False:
if self.CONFIG_VERSION == 6: if self.CONFIG_VERSION == 6 or (config.has_section('Torrents') and any([inikey == 'auto_snatch', inikey == 'auto_snatch_script'])):
chkstatus = False chkstatus = False
if config.has_section('Torrents'): if config.has_section('Torrents'):
myval = self.check_config(definition_type, 'Torrents', inikey, default) myval = self.check_config(definition_type, 'Torrents', inikey, default)
@ -581,7 +590,11 @@ class Config(object):
def writeconfig(self): def writeconfig(self):
logger.fdebug("Writing configuration to file") logger.fdebug("Writing configuration to file")
self.provider_sequence() self.provider_sequence()
config.set('Newznab', 'extra_newznabs', ', '.join(self.write_extra_newznabs())) config.set('Newznab', 'extra_newznabs', ', '.join(self.write_extras(self.EXTRA_NEWZNABS)))
if type(self.BLACKLISTED_PUBLISHERS) == list:
config.set('CV', 'blacklisted_publishers', ', '.join(self.write_extras(self.BLACKLISTED_PUBLISHERS)))
else:
config.set('CV', 'blacklisted_publishers', self.BLACKLISTED_PUBLISHERS)
config.set('General', 'dynamic_update', str(self.DYNAMIC_UPDATE)) config.set('General', 'dynamic_update', str(self.DYNAMIC_UPDATE))
try: try:
with codecs.open(self._config_file, encoding='utf8', mode='w+') as configfile: with codecs.open(self._config_file, encoding='utf8', mode='w+') as configfile:
@ -659,6 +672,9 @@ class Config(object):
#we can't have metatagging enabled with hard/soft linking. Forcibly disable it here just in case it's set on load. #we can't have metatagging enabled with hard/soft linking. Forcibly disable it here just in case it's set on load.
self.ENABLE_META = False self.ENABLE_META = False
if self.BLACKLISTED_PUBLISHERS is not None and type(self.BLACKLISTED_PUBLISHERS) == unicode:
setattr(self, 'BLACKLISTED_PUBLISHERS', self.BLACKLISTED_PUBLISHERS.split(', '))
#comictagger - force to use included version if option is enabled. #comictagger - force to use included version if option is enabled.
if self.ENABLE_META: if self.ENABLE_META:
mylar.CMTAGGER_PATH = mylar.PROG_DIR mylar.CMTAGGER_PATH = mylar.PROG_DIR
@ -676,6 +692,10 @@ class Config(object):
else: else:
logger.fdebug('Successfully created ComicTagger Settings location.') logger.fdebug('Successfully created ComicTagger Settings location.')
if self.AUTO_SNATCH is True and self.AUTO_SNATCH_SCRIPT is None:
setattr(self, 'AUTO_SNATCH_SCRIPT', os.path.join(mylar.PROG_DIR, 'post-processing', 'torrent-auto-snatch', 'getlftp.sh'))
config.set('AutoSnatch', 'auto_snatch_script', self.AUTO_SNATCH_SCRIPT)
mylar.USE_SABNZBD = False mylar.USE_SABNZBD = False
mylar.USE_NZBGET = False mylar.USE_NZBGET = False
mylar.USE_BLACKHOLE = False mylar.USE_BLACKHOLE = False
@ -861,9 +881,9 @@ class Config(object):
setattr(self, 'PROVIDER_ORDER', PROVIDER_ORDER) setattr(self, 'PROVIDER_ORDER', PROVIDER_ORDER)
logger.fdebug('Provider Order is now set : %s ' % self.PROVIDER_ORDER) logger.fdebug('Provider Order is now set : %s ' % self.PROVIDER_ORDER)
def write_extra_newznabs(self): def write_extras(self, value):
flattened_newznabs = [] flattened = []
for item in self.EXTRA_NEWZNABS: for item in value: #self.EXTRA_NEWZNABS:
for i in item: for i in item:
try: try:
if "\"" in i and " \"" in i: if "\"" in i and " \"" in i:
@ -872,5 +892,5 @@ class Config(object):
ib = i ib = i
except: except:
ib = i ib = i
flattened_newznabs.append(str(ib)) flattened.append(str(ib))
return flattened_newznabs return flattened


@ -29,8 +29,10 @@ import itertools
import shutil import shutil
import os, errno import os, errno
from apscheduler.triggers.interval import IntervalTrigger from apscheduler.triggers.interval import IntervalTrigger
import mylar import mylar
import logger import logger
from mylar import sabnzbd, nzbget, process
def multikeysort(items, columns): def multikeysort(items, columns):
@ -1807,6 +1809,16 @@ def int_num(s):
except ValueError: except ValueError:
return float(s) return float(s)
def listPull(weeknumber, year):
import db
library = {}
myDB = db.DBConnection()
# Get individual comics
list = myDB.select("SELECT ComicID FROM Weekly WHERE weeknumber=? AND year=?", [weeknumber,year])
for row in list:
library[row['ComicID']] = row['ComicID']
return library
def listLibrary(): def listLibrary():
import db import db
library = {} library = {}
@ -2730,6 +2742,16 @@ def torrentinfo(issueid=None, torrent_hash=None, download=False, monitor=False):
downlocation = torrent_info['files'][0].encode('utf-8') downlocation = torrent_info['files'][0].encode('utf-8')
os.environ['downlocation'] = re.sub("'", "\\'",downlocation) os.environ['downlocation'] = re.sub("'", "\\'",downlocation)
#these are pulled from the config and are the ssh values to use to retrieve the data
os.environ['host'] = mylar.CONFIG.PP_SSHHOST
os.environ['port'] = mylar.CONFIG.PP_SSHPORT
os.environ['user'] = mylar.CONFIG.PP_SSHUSER
os.environ['passwd'] = mylar.CONFIG.PP_SSHPASSWD
os.environ['localcd'] = mylar.CONFIG.PP_SSHLOCALCD
if mylar.CONFIG.PP_SSHKEYFILE is not None:
os.environ['keyfile'] = mylar.CONFIG.PP_SSHKEYFILE
#downlocation = re.sub("\'", "\\'", downlocation) #downlocation = re.sub("\'", "\\'", downlocation)
#downlocation = re.sub("&", "\&", downlocation) #downlocation = re.sub("&", "\&", downlocation)
@ -2875,7 +2897,7 @@ def latestdate_update():
ctrlVal = {'ComicID': a['ComicID']} ctrlVal = {'ComicID': a['ComicID']}
logger.info('updating latest date for : ' + a['ComicID'] + ' to ' + a['LatestDate'] + ' #' + a['LatestIssue']) logger.info('updating latest date for : ' + a['ComicID'] + ' to ' + a['LatestDate'] + ' #' + a['LatestIssue'])
myDB.upsert("comics", newVal, ctrlVal) myDB.upsert("comics", newVal, ctrlVal)
def worker_main(queue): def worker_main(queue):
while True: while True:
item = queue.get(True) item = queue.get(True)
@ -2890,7 +2912,40 @@ def worker_main(queue):
mylar.SNATCHED_QUEUE.put(item) mylar.SNATCHED_QUEUE.put(item)
elif any([snstat['snatch_status'] == 'MONITOR FAIL', snstat['snatch_status'] == 'MONITOR COMPLETE']): elif any([snstat['snatch_status'] == 'MONITOR FAIL', snstat['snatch_status'] == 'MONITOR COMPLETE']):
logger.info('File copied for post-processing - submitting as a direct pp.') logger.info('File copied for post-processing - submitting as a direct pp.')
threading.Thread(target=self.checkFolder, args=[os.path.abspath(os.path.join(snstat['copied_filepath'], os.pardir))]).start() threading.Thread(target=self.checkFolder, args=[os.path.abspath(os.path.join(snstat['copied_filepath'], os.pardir))]).start()
def nzb_monitor(queue):
while True:
item = queue.get(True)
logger.info('Now loading from queue: %s' % item)
if item == 'exit':
logger.info('Cleaning up workers for shutdown')
break
if mylar.CONFIG.SAB_CLIENT_POST_PROCESSING is True:
nz = sabnzbd.SABnzbd(item)
nzstat = nz.processor()
elif mylar.CONFIG.NZBGET_CLIENT_POST_PROCESSING is True:
nz = nzbget.NZBGet()
nzstat = nz.processor(item)
else:
logger.warn('There are no NZB Completed Download handlers enabled. Not sending item to completed download handling...')
break
if nzstat['status'] is False:
logger.info('Something went wrong - maybe you should retry things. I will requeue up this item for post-processing...')
time.sleep(5)
mylar.NZB_QUEUE.put(item)
elif nzstat['status'] is True:
if nzstat['failed'] is False:
logger.info('File successfully downloaded - now initiating completed downloading handling.')
else:
logger.info('File failed either due to being corrupt or incomplete - now initiating completed failed downloading handling.')
try:
cc = process.Process(nzstat['name'], nzstat['location'], failed=nzstat['failed'])
nzpp = cc.post_process()
except Exception as e:
logger.info('process error: %s' % e)
def script_env(mode, vars): def script_env(mode, vars):
#mode = on-snatch, pre-postprocess, post-postprocess #mode = on-snatch, pre-postprocess, post-postprocess
@ -3150,7 +3205,7 @@ def job_management(write=False, job=None, last_run_completed=None, current_run=N
wkt = 4 wkt = 4
else: else:
wkt = 24 wkt = 24
mylar.SCHED.reschedule_job('weekly', trigger=IntervalTrigger(hours=wkt, minutes=mylar.CONFIG.SEARCH_INTERVAL, timezone='UTC')) mylar.SCHED.reschedule_job('weekly', trigger=IntervalTrigger(hours=wkt, minutes=0, timezone='UTC'))
nextrun_stamp = utctimestamp() + (wkt * 60 * 60) nextrun_stamp = utctimestamp() + (wkt * 60 * 60)
mylar.SCHED_WEEKLY_LAST = last_run_completed mylar.SCHED_WEEKLY_LAST = last_run_completed
elif job == 'Check Version': elif job == 'Check Version':


@ -1438,7 +1438,7 @@ def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, annualslis
annualyear = SeriesYear # no matter what, the year won't be less than this. annualyear = SeriesYear # no matter what, the year won't be less than this.
logger.fdebug('[IMPORTER-ANNUAL] - Annual Year:' + str(annualyear)) logger.fdebug('[IMPORTER-ANNUAL] - Annual Year:' + str(annualyear))
sresults, explicit = mb.findComic(annComicName, mode, issue=None, explicit='all')#,explicit=True) sresults = mb.findComic(annComicName, mode, issue=None)
type='comic' type='comic'
annual_types_ignore = {'paperback', 'collecting', 'reprints', 'collected edition', 'print edition', 'tpb', 'available in print', 'collects'} annual_types_ignore = {'paperback', 'collecting', 'reprints', 'collected edition', 'print edition', 'tpb', 'available in print', 'collects'}
@ -1547,7 +1547,7 @@ def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, annualslis
elif len(sresults) == 0 or len(sresults) is None: elif len(sresults) == 0 or len(sresults) is None:
logger.fdebug('[IMPORTER-ANNUAL] - No results, removing the year from the agenda and re-querying.') logger.fdebug('[IMPORTER-ANNUAL] - No results, removing the year from the agenda and re-querying.')
sresults, explicit = mb.findComic(annComicName, mode, issue=None)#, explicit=True) sresults = mb.findComic(annComicName, mode, issue=None)
if len(sresults) == 1: if len(sresults) == 1:
sr = sresults[0] sr = sresults[0]
logger.fdebug('[IMPORTER-ANNUAL] - ' + str(comicid) + ' found. Assuming it is part of the greater collection.') logger.fdebug('[IMPORTER-ANNUAL] - ' + str(comicid) + ' found. Assuming it is part of the greater collection.')


@ -610,6 +610,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None,
def scanLibrary(scan=None, queue=None): def scanLibrary(scan=None, queue=None):
mylar.IMPORT_FILES = 0
valreturn = [] valreturn = []
if scan: if scan:
try: try:


@ -44,19 +44,21 @@ if platform.python_version() == '2.7.6':
httplib.HTTPConnection._http_vsn = 10 httplib.HTTPConnection._http_vsn = 10
httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0' httplib.HTTPConnection._http_vsn_str = 'HTTP/1.0'
def pullsearch(comicapi, comicquery, offset, explicit, type): def pullsearch(comicapi, comicquery, offset, type, annuals=False):
u_comicquery = urllib.quote(comicquery.encode('utf-8').strip()) u_comicquery = urllib.quote(comicquery.encode('utf-8').strip())
u_comicquery = u_comicquery.replace(" ", "%20") u_comicquery = u_comicquery.replace(" ", "%20")
u_comicquery = u_comicquery.replace('-', '%2D')
if explicit == 'all' or explicit == 'loose': #logger.info('comicquery: %s' % comicquery)
if annuals is True:
PULLURL = mylar.CVURL + 'search?api_key=' + str(comicapi) + '&resources=' + str(type) + '&query=' + u_comicquery + '&field_list=id,name,start_year,first_issue,site_detail_url,count_of_issues,image,publisher,deck,description,last_issue&format=xml&limit=100&page=' + str(offset) PULLURL = mylar.CVURL + 'search?api_key=' + str(comicapi) + '&resources=' + str(type) + '&query=' + u_comicquery + '&field_list=id,name,start_year,first_issue,site_detail_url,count_of_issues,image,publisher,deck,description,last_issue&format=xml&limit=100&page=' + str(offset)
else: else:
# 02/22/2014 use the volume filter label to get the right results. # 02/22/2014 use the volume filter label to get the right results.
# add the 's' to the end of type to pluralize the caption (it's needed) # add the 's' to the end of type to pluralize the caption (it's needed)
if type == 'story_arc': if type == 'story_arc':
u_comicquery = re.sub("%20AND%20", "%20", u_comicquery) u_comicquery = re.sub("%20AND%20", "%20", u_comicquery)
PULLURL = mylar.CVURL + str(type) + 's?api_key=' + str(comicapi) + '&filter=name:' + u_comicquery + '&field_list=id,name,start_year,site_detail_url,count_of_issues,image,publisher,deck,description&format=xml&offset=' + str(offset) # 2012/22/02 - CVAPI flipped back to offset instead of page PULLURL = mylar.CVURL + str(type) + 's?api_key=' + str(comicapi) + '&filter=name:' + u_comicquery + '&field_list=id,name,start_year,site_detail_url,count_of_issues,image,publisher,deck,description,first_issue,last_issue&format=xml&offset=' + str(offset) # 2012/22/02 - CVAPI flipped back to offset instead of page
#all these imports are standard on most modern python implementations #all these imports are standard on most modern python implementations
#logger.info('MB.PULLURL:' + PULLURL) #logger.info('MB.PULLURL:' + PULLURL)
@ -78,48 +80,29 @@ def pullsearch(comicapi, comicquery, offset, explicit, type):
dom = parseString(r.content) #(data) dom = parseString(r.content) #(data)
return dom return dom
def findComic(name, mode, issue, limityear=None, explicit=None, type=None): def findComic(name, mode, issue, limityear=None, type=None):
#with mb_lock: #with mb_lock:
comicResults = None comicResults = None
comicLibrary = listLibrary() comicLibrary = listLibrary()
comiclist = [] comiclist = []
arcinfolist = [] arcinfolist = []
if type == 'story_arc': #if type == 'story_arc':
chars = set('!?*&') # chars = set('!?*&')
else: #else:
chars = set('!?*&-') # chars = set('!?*&-')
if any((c in chars) for c in name) or 'annual' in name: #if any((c in chars) for c in name) or 'annual' in name:
name = '"' +name +'"' # name = '"' +name +'"'
annuals = False
if 'annual' in name:
name = '"' + name +'"'
annuals = True
#print ("limityear: " + str(limityear)) #print ("limityear: " + str(limityear))
if limityear is None: limityear = 'None' if limityear is None: limityear = 'None'
comicquery = name comicquery = name
#comicquery=name.replace(" ", "%20")
if explicit is None:
#logger.fdebug('explicit is None. Setting to Default mode of ALL search words.')
#comicquery=name.replace(" ", " AND ")
explicit = 'all'
#OR
if ' and ' in comicquery.lower():
logger.fdebug('Enforcing exact naming match due to operator in title (and)')
explicit = 'all'
if explicit == 'loose':
logger.fdebug('Changing to loose mode - this will match ANY of the search words')
comicquery = name.replace(" ", " OR ")
elif explicit == 'explicit':
logger.fdebug('Changing to explicit mode - this will match explicitly on the EXACT words')
comicquery=name.replace(" ", " AND ")
else:
logger.fdebug('Default search mode - this will match on ALL search words')
#comicquery = name.replace(" ", " AND ")
explicit = 'all'
if mylar.CONFIG.COMICVINE_API == 'None' or mylar.CONFIG.COMICVINE_API is None: if mylar.CONFIG.COMICVINE_API == 'None' or mylar.CONFIG.COMICVINE_API is None:
logger.warn('You have not specified your own ComicVine API key - this is a requirement. Get your own @ http://api.comicvine.com.') logger.warn('You have not specified your own ComicVine API key - this is a requirement. Get your own @ http://api.comicvine.com.')
@ -131,7 +114,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
type = 'volume' type = 'volume'
#let's find out how many results we get from the query... #let's find out how many results we get from the query...
searched = pullsearch(comicapi, comicquery, 0, explicit, type) searched = pullsearch(comicapi, comicquery, 0, type, annuals)
if searched is None: if searched is None:
return False return False
totalResults = searched.getElementsByTagName('number_of_total_results')[0].firstChild.wholeText totalResults = searched.getElementsByTagName('number_of_total_results')[0].firstChild.wholeText
@ -146,15 +129,15 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
#logger.fdebug("querying " + str(countResults)) #logger.fdebug("querying " + str(countResults))
if countResults > 0: if countResults > 0:
#2012/22/02 - CV API flipped back to offset usage instead of page #2012/22/02 - CV API flipped back to offset usage instead of page
if explicit == 'all' or explicit == 'loose': if annuals is True:
#all / loose uses page for offset # search uses page for offset
offsetcount = (countResults /100) + 1 offsetcount = (countResults /100) + 1
else: else:
#explicit uses offset # filter uses offset
offsetcount = countResults offsetcount = countResults
searched = pullsearch(comicapi, comicquery, offsetcount, explicit, type) searched = pullsearch(comicapi, comicquery, offsetcount, type, annuals)
comicResults = searched.getElementsByTagName(type) #('volume') comicResults = searched.getElementsByTagName(type)
body = '' body = ''
n = 0 n = 0
if not comicResults: if not comicResults:
@ -250,13 +233,28 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
limiter = int(issue) - 1 limiter = int(issue) - 1
else: limiter = 0 else: limiter = 0
#get the first issue # (for auto-magick calcs) #get the first issue # (for auto-magick calcs)
iss_len = len(result.getElementsByTagName('name'))
i=0
xmlfirst = '1'
xmllast = None
try: try:
xmlfirst = result.getElementsByTagName('issue_number')[0].firstChild.wholeText while (i < iss_len):
if '\xbd' in xmlfirst: if result.getElementsByTagName('name')[i].parentNode.nodeName == 'first_issue':
xmlfirst = "1" #if the first issue is 1/2, just assume 1 for logistics xmlfirst = result.getElementsByTagName('issue_number')[i].firstChild.wholeText
if '\xbd' in xmlfirst:
xmlfirst = '1' #if the first issue is 1/2, just assume 1 for logistics
elif result.getElementsByTagName('name')[i].parentNode.nodeName == 'last_issue':
xmllast = result.getElementsByTagName('issue_number')[i].firstChild.wholeText
if all([xmllast is not None, xmlfirst is not None]):
break
i+=1
except: except:
xmlfirst = '1' xmlfirst = '1'
if all([xmlfirst == xmllast, xmlfirst.isdigit(), xmlcnt == '0']):
xmlcnt = '1'
#logger.info('There are : ' + str(xmlcnt) + ' issues in this series.') #logger.info('There are : ' + str(xmlcnt) + ' issues in this series.')
#logger.info('The first issue started at # ' + str(xmlfirst)) #logger.info('The first issue started at # ' + str(xmlfirst))
@ -279,7 +277,6 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
if result.getElementsByTagName('name')[cl].parentNode.nodeName == 'last_issue': if result.getElementsByTagName('name')[cl].parentNode.nodeName == 'last_issue':
xml_lastissueid = result.getElementsByTagName('id')[cl].firstChild.wholeText xml_lastissueid = result.getElementsByTagName('id')[cl].firstChild.wholeText
cl+=1 cl+=1
if (result.getElementsByTagName('start_year')[0].firstChild) is not None: if (result.getElementsByTagName('start_year')[0].firstChild) is not None:
@ -303,7 +300,7 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
logger.fdebug('[RESULT][' + str(limityear) + '] ComicName:' + xmlTag + ' -- ' + str(xmlYr) + ' [Series years: ' + str(yearRange) + ']') logger.fdebug('[RESULT][' + str(limityear) + '] ComicName:' + xmlTag + ' -- ' + str(xmlYr) + ' [Series years: ' + str(yearRange) + ']')
if tmpYr != xmlYr: if tmpYr != xmlYr:
xmlYr = tmpYr xmlYr = tmpYr
if any(map(lambda v: v in limityear, yearRange)) or limityear == 'None': if any(map(lambda v: v in limityear, yearRange)) or limityear == 'None':
xmlurl = result.getElementsByTagName('site_detail_url')[0].firstChild.wholeText xmlurl = result.getElementsByTagName('site_detail_url')[0].firstChild.wholeText
idl = len (result.getElementsByTagName('id')) idl = len (result.getElementsByTagName('id'))
@ -331,7 +328,6 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
#ignore specific publishers on a global scale here. #ignore specific publishers on a global scale here.
if mylar.CONFIG.BLACKLISTED_PUBLISHERS is not None and any([x for x in mylar.CONFIG.BLACKLISTED_PUBLISHERS if x.lower() == xmlpub.lower()]): if mylar.CONFIG.BLACKLISTED_PUBLISHERS is not None and any([x for x in mylar.CONFIG.BLACKLISTED_PUBLISHERS if x.lower() == xmlpub.lower()]):
# #'panini' in xmlpub.lower() or 'deagostini' in xmlpub.lower() or 'Editorial Televisa' in xmlpub.lower():
logger.fdebug('Blacklisted publisher [' + xmlpub + ']. Ignoring this result.') logger.fdebug('Blacklisted publisher [' + xmlpub + ']. Ignoring this result.')
continue continue
@ -348,16 +344,24 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
xmltype = None xmltype = None
if xmldeck != 'None': if xmldeck != 'None':
if any(['print' in xmldeck.lower(), 'digital' in xmldeck.lower()]): if any(['print' in xmldeck.lower(), 'digital' in xmldeck.lower(), 'paperback' in xmldeck.lower(), 'hardcover' in xmldeck.lower()]):
if 'print' in xmldeck.lower(): if 'print' in xmldeck.lower():
xmltype = 'Print' xmltype = 'Print'
elif 'digital' in xmldeck.lower(): elif 'digital' in xmldeck.lower():
xmltype = 'Digital' xmltype = 'Digital'
elif 'paperback' in xmldeck.lower():
xmltype = 'TPB'
elif 'hardcover' in xmldeck.lower():
xmltype = 'HC'
if xmldesc != 'None' and xmltype is None: if xmldesc != 'None' and xmltype is None:
if 'print' in xmldesc[:60].lower() and 'print edition can be found' not in xmldesc.lower(): if 'print' in xmldesc[:60].lower() and 'print edition can be found' not in xmldesc.lower():
xmltype = 'Print' xmltype = 'Print'
elif 'digital' in xmldesc[:60].lower() and 'digital edition can be found' not in xmldesc.lower(): elif 'digital' in xmldesc[:60].lower() and 'digital edition can be found' not in xmldesc.lower():
xmltype = 'Digital' xmltype = 'Digital'
elif 'paperback' in xmldesc[:60].lower() and 'paperback can be found' not in xmldesc.lower():
xmltype = 'TPB'
elif 'hardcover' in xmldesc[:60].lower() and 'hardcover can be found' not in xmldesc.lower():
xmltype = 'HC'
else: else:
xmltype = 'Print' xmltype = 'Print'
@ -380,15 +384,15 @@ def findComic(name, mode, issue, limityear=None, explicit=None, type=None):
'lastissueid': xml_lastissueid, 'lastissueid': xml_lastissueid,
'seriesrange': yearRange # returning additional information about series run polled from CV 'seriesrange': yearRange # returning additional information about series run polled from CV
}) })
#logger.fdebug('year: ' + str(xmlYr) + ' - constraint met: ' + str(xmlTag) + '[' + str(xmlYr) + '] --- 4050-' + str(xmlid)) #logger.fdebug('year: %s - constraint met: %s [%s] --- 4050-%s' % (xmlYr,xmlTag,xmlYr,xmlid))
else: else:
pass
#logger.fdebug('year: ' + str(xmlYr) + ' - contraint not met. Has to be within ' + str(limityear)) #logger.fdebug('year: ' + str(xmlYr) + ' - contraint not met. Has to be within ' + str(limityear))
pass
n+=1 n+=1
#search results are limited to 100 and by pagination now...let's account for this. #search results are limited to 100 and by pagination now...let's account for this.
countResults = countResults + 100 countResults = countResults + 100
return comiclist, explicit return comiclist
def storyarcinfo(xmlid): def storyarcinfo(xmlid):

mylar/nzbget.py Normal file

@ -0,0 +1,125 @@
#!/usr/bin/python
# This file is part of Harpoon.
#
# Harpoon is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Harpoon is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Harpoon. If not, see <http://www.gnu.org/licenses/>.
import optparse
import xmlrpclib
from base64 import standard_b64encode
from xml.dom.minidom import parseString
import os
import sys
import re
import time
import mylar
import logger
class NZBGet(object):
def __init__(self):
if mylar.CONFIG.NZBGET_HOST[:5] == 'https':
protocol = "https"
nzbget_host = mylar.CONFIG.NZBGET_HOST[8:]
elif mylar.CONFIG.NZBGET_HOST[:4] == 'http':
protocol = "http"
nzbget_host = mylar.CONFIG.NZBGET_HOST[7:]
self.nzb_url = '%s://%s:%s@%s:%s/xmlrpc' % (protocol, mylar.CONFIG.NZBGET_USERNAME, mylar.CONFIG.NZBGET_PASSWORD, nzbget_host, mylar.CONFIG.NZBGET_PORT)
self.server = xmlrpclib.ServerProxy(self.nzb_url)
def sender(self, filename):
if mylar.CONFIG.NZBGET_PRIORITY:
if any([mylar.CONFIG.NZBGET_PRIORITY == 'Default', mylar.CONFIG.NZBGET_PRIORITY == 'Normal']):
nzbgetpriority = 0
elif mylar.CONFIG.NZBGET_PRIORITY == 'Low':
nzbgetpriority = -50
elif mylar.CONFIG.NZBGET_PRIORITY == 'High':
nzbgetpriority = 50
#there's no priority for "paused", so set "Very Low" and deal with that later...
elif mylar.CONFIG.NZBGET_PRIORITY == 'Paused':
nzbgetpriority = -100
else:
#if nzbget priority isn't selected, default to Normal (0)
nzbgetpriority = 0
in_file = open(filename, 'r')
nzbcontent = in_file.read()
in_file.close()
nzbcontent64 = standard_b64encode(nzbcontent)
try:
logger.fdebug('sending now to %s' % self.nzb_url)
sendresponse = self.server.append(filename, nzbcontent64, mylar.CONFIG.NZBGET_CATEGORY, nzbgetpriority, False, False, '', 0, 'SCORE')
except Exception as e:
logger.warn('uh-oh: %s' % e)
return {'status': False}
else:
if sendresponse <= 0:
logger.warn('Invalid response received after sending to NZBGet: %s' % sendresponse)
return {'status': False}
else:
#sendresponse is the NZBID that we use to track the progress....
return {'status': True,
'NZBID': sendresponse}
def processor(self, nzbinfo):
nzbid = nzbinfo['NZBID']
try:
logger.fdebug('Now checking the active queue of nzbget for the download')
queueinfo = self.server.listgroups()
except Exception as e:
logger.warn('Error attempting to retrieve active queue listing: %s' % e)
return {'status': False}
else:
logger.fdebug('valid queue result returned. Analyzing...')
queuedl = [qu for qu in queueinfo if qu['NZBID'] == nzbid]
if len(queuedl) == 0:
logger.warn('Unable to locate item in active queue. Could it be finished already ?')
return {'status': False}
stat = False
while stat is False:
time.sleep(10)
queueinfo = self.server.listgroups()
queuedl = [qu for qu in queueinfo if qu['NZBID'] == nzbid]
if len(queuedl) == 0:
logger.fdebug('Item is no longer in active queue. It should be finished by my calculations')
stat = True
else:
logger.fdebug('status: %s' % queuedl[0]['Status'])
logger.fdebug('name: %s' % queuedl[0]['NZBName'])
logger.fdebug('FileSize: %sMB' % queuedl[0]['FileSizeMB'])
logger.fdebug('Download Left: %sMB' % queuedl[0]['RemainingSizeMB'])
logger.fdebug('health: %s' % (queuedl[0]['Health']/10))
logger.fdebug('destination: %s' % queuedl[0]['DestDir'])
logger.fdebug('File has now downloaded!')
time.sleep(5) #wait some seconds so shit can get written to history properly
history = self.server.history()
found = False
hq = [hs for hs in history if hs['NZBID'] == nzbid and 'SUCCESS' in hs['Status']]
if len(hq) > 0:
logger.fdebug('found matching completed item in history. Job has a status of %s' % hq[0]['Status'])
if hq[0]['DownloadedSizeMB'] == hq[0]['FileSizeMB']:
logger.fdebug('%s has final file size of %sMB' % (hq[0]['Name'], hq[0]['DownloadedSizeMB']))
if os.path.isdir(hq[0]['DestDir']):
logger.fdebug('location found @ %s' % hq[0]['DestDir'])
return {'status': True,
'name': re.sub('.nzb', '', hq[0]['NZBName']).strip(),
'location': hq[0]['DestDir']}
else:
logger.warn('no file found where it should be @ %s - is there another script that moves things after completion ?' % hq[0]['DestDir'])
return {'status': False}
else:
logger.warn('Could not find completed item in history')
return {'status': False}
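
For reference, the XML-RPC calls the class above relies on can be exercised standalone; a hypothetical session against a local NZBGet instance (URL and credentials are placeholders):

import xmlrpclib

server = xmlrpclib.ServerProxy('http://nzbget:tegbzn6789@localhost:6789/xmlrpc')
groups = server.listgroups()   # active queue, as polled in processor()
history = server.history()     # finished jobs, as checked after the queue empties
print([g['NZBName'] for g in groups])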

mylar/process.py Normal file

@ -0,0 +1,111 @@
# This file is part of Mylar.
# -*- coding: utf-8 -*-
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
import Queue
import threading
import mylar
import logger
class Process(object):
def __init__(self, nzb_name, nzb_folder, failed=False):
self.nzb_name = nzb_name
self.nzb_folder = nzb_folder
self.failed = failed
def post_process(self):
if self.failed == '0':
self.failed = False
elif self.failed == '1':
self.failed = True
queue = Queue.Queue()
retry_outside = False
if not self.failed:
PostProcess = mylar.PostProcessor.PostProcessor(self.nzb_name, self.nzb_folder, queue=queue)
if any([self.nzb_name == 'Manual Run', self.nzb_name == 'Manual+Run']):
threading.Thread(target=PostProcess.Process).start()
else:
thread_ = threading.Thread(target=PostProcess.Process, name="Post-Processing")
thread_.start()
thread_.join()
chk = queue.get()
while True:
if chk[0]['mode'] == 'fail':
logger.info('Initiating Failed Download handling')
if chk[0]['annchk'] == 'no':
mode = 'want'
else:
mode = 'want_ann'
self.failed = True
break
elif chk[0]['mode'] == 'stop':
break
elif chk[0]['mode'] == 'outside':
retry_outside = True
break
else:
logger.error('mode is unsupported: ' + chk[0]['mode'])
break
if self.failed:
if mylar.CONFIG.FAILED_DOWNLOAD_HANDLING is True:
#drop the if-else continuation so we can drop down to this from the above if statement.
logger.info('Initiating Failed Download handling for this download.')
FailProcess = mylar.Failed.FailedProcessor(nzb_name=self.nzb_name, nzb_folder=self.nzb_folder, queue=queue)
thread_ = threading.Thread(target=FailProcess.Process, name="FAILED Post-Processing")
thread_.start()
thread_.join()
failchk = queue.get()
if failchk[0]['mode'] == 'retry':
logger.info('Attempting to return to search module with ' + str(failchk[0]['issueid']))
if failchk[0]['annchk'] == 'no':
mode = 'want'
else:
mode = 'want_ann'
qq = mylar.webserve.WebInterface()
qt = qq.queueit(mode=mode, ComicName=failchk[0]['comicname'], ComicIssue=failchk[0]['issuenumber'], ComicID=failchk[0]['comicid'], IssueID=failchk[0]['issueid'], manualsearch=True)
elif failchk[0]['mode'] == 'stop':
pass
else:
logger.error('mode is unsupported: ' + failchk[0]['mode'])
else:
logger.warn('Failed Download Handling is not enabled. Leaving Failed Download as-is.')
if retry_outside:
PostProcess = mylar.PostProcessor.PostProcessor('Manual Run', self.nzb_folder, queue=queue)
thread_ = threading.Thread(target=PostProcess.Process, name="Post-Processing")
thread_.start()
thread_.join()
chk = queue.get()
while True:
if chk[0]['mode'] == 'fail':
logger.info('Initiating Failed Download handling')
if chk[0]['annchk'] == 'no':
mode = 'want'
else:
mode = 'want_ann'
self.failed = True
break
elif chk[0]['mode'] == 'stop':
break
else:
logger.error('mode is unsupported: ' + chk[0]['mode'])
break
return
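
Each completed download is wrapped in one of these Process objects and run on its own thread; a hypothetical invocation matching what helpers.nzb_monitor issues in this commit (name and folder are placeholders):

from mylar import process

cc = process.Process('Some.Comic.001', '/downloads/complete/Some.Comic.001', failed=False)
cc.post_process()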

mylar/sabnzbd.py Normal file

@ -0,0 +1,141 @@
#!/usr/bin/python
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.

import urllib
import requests
import os
import sys
import re
import time

import logger
import mylar


class SABnzbd(object):
    def __init__(self, params):
        #self.sab_url = sab_host + '/api'
        #self.sab_apikey = 'e90f54f4f757447a20a4fa89089a83ed'
        self.sab_url = mylar.CONFIG.SAB_HOST + '/api'
        self.params = params

    def sender(self):
        try:
            from requests.packages.urllib3 import disable_warnings
            disable_warnings()
        except:
            logger.info('Unable to disable https warnings. Expect some spam if using https nzb providers.')

        try:
            logger.info('parameters set to %s' % self.params)
            logger.info('sending now to %s' % self.sab_url)
            sendit = requests.post(self.sab_url, data=self.params, verify=False)
        except:
            logger.info('Failed to send to client.')
            return {'status': False}
        else:
            sendresponse = sendit.json()
            logger.info(sendresponse)
            if sendresponse['status'] is True:
                queue_params = {'status': True,
                                'nzo_id': ''.join(sendresponse['nzo_ids']),
                                'queue': {'mode': 'queue',
                                          'search': ''.join(sendresponse['nzo_ids']),
                                          'output': 'json',
                                          'apikey': mylar.CONFIG.SAB_APIKEY}}
            else:
                queue_params = {'status': False}

            return queue_params

    def processor(self):
        sendresponse = self.params['nzo_id']
        try:
            logger.info('sending now to %s' % self.sab_url)
            logger.info('parameters set to %s' % self.params)
            time.sleep(5)  #pause 5 seconds before monitoring just so it hits the queue
            h = requests.get(self.sab_url, params=self.params['queue'], verify=False)
        except Exception as e:
            logger.info('uh-oh: %s' % e)
            return {'status': False}
        else:
            queueresponse = h.json()
            logger.info('successfully queried the queue for status')
            try:
                queueinfo = queueresponse['queue']
                logger.info('queue: %s' % queueresponse)
                logger.info('Queue status : %s' % queueinfo['status'])
                logger.info('Queue mbleft : %s' % queueinfo['mbleft'])
                while any([str(queueinfo['status']) == 'Downloading', str(queueinfo['status']) == 'Idle']) and float(queueinfo['mbleft']) > 0:
                    logger.info('queue_params: %s' % self.params['queue'])
                    queue_resp = requests.get(self.sab_url, params=self.params['queue'], verify=False)
                    queueresp = queue_resp.json()
                    queueinfo = queueresp['queue']
                    logger.info('status: %s' % queueinfo['status'])
                    logger.info('mbleft: %s' % queueinfo['mbleft'])
                    logger.info('timeleft: %s' % queueinfo['timeleft'])
                    logger.info('eta: %s' % queueinfo['eta'])
                    time.sleep(5)
            except Exception as e:
                logger.warn('error: %s' % e)

            logger.info('File has now downloaded!')
            hist_params = {'mode': 'history',
                           'category': mylar.CONFIG.SAB_CATEGORY,
                           'failed': 0,
                           'output': 'json',
                           'apikey': mylar.CONFIG.SAB_APIKEY}
            hist = requests.get(self.sab_url, params=hist_params, verify=False)
            historyresponse = hist.json()
            #logger.info(historyresponse)
            histqueue = historyresponse['history']
            found = {'status': False}
            while found['status'] is False:
                try:
                    for hq in histqueue['slots']:
                        #logger.info('nzo_id: %s --- %s [%s]' % (hq['nzo_id'], sendresponse, hq['status']))
                        if hq['nzo_id'] == sendresponse and hq['status'] == 'Completed':
                            logger.info('found matching completed item in history. Job has a status of %s' % hq['status'])
                            if os.path.isfile(hq['storage']):
                                logger.info('location found @ %s' % hq['storage'])
                                found = {'status': True,
                                         'name': re.sub('.nzb', '', hq['nzb_name']).strip(),
                                         'location': os.path.abspath(os.path.join(hq['storage'], os.pardir)),
                                         'failed': False}
                                break
                            else:
                                logger.info('no file found where it should be @ %s - is there another script that moves things after completion ?' % hq['storage'])
                                break
                        elif hq['nzo_id'] == sendresponse and hq['status'] == 'Failed':
                            #get the stage / error message and see what we can do
                            stage = hq['stage_log']
                            for x in stage[0]:
                                if 'Failed' in x['actions'] and any([x['name'] == 'Unpack', x['name'] == 'Repair']):
                                    if 'moving' in x['actions']:
                                        logger.warn('There was a failure in SABnzbd during the unpack/repair phase: %s' % x['actions'])
                                    else:
                                        logger.warn('Failure occurred during the Unpack/Repair phase of SABnzbd. This is probably a bad file: %s' % x['actions'])
                                    if mylar.CONFIG.FAILED_DOWNLOAD_HANDLING is True:
                                        found = {'status': True,
                                                 'name': re.sub('.nzb', '', hq['nzb_name']).strip(),
                                                 'location': os.path.abspath(os.path.join(hq['storage'], os.pardir)),
                                                 'failed': True}
                                    break
                            break
                except Exception as e:
                    logger.warn('error %s' % e)
                    break

            return found
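sender() and processor() split completed download handling in half: sender() posts the nzb (mode=addurl) and returns the nzo_id together with a ready-made queue query, while processor() later polls that query until the item shows up in the history as Completed or Failed. A rough usage sketch; the host, key and names below are illustrative only:

from mylar import sabnzbd

params = {'apikey': 'full-sab-apikey', 'mode': 'addurl',
          'name': 'http://localhost:8090/api?apikey=...&cmd=downloadNZB&nzbname=Some.Comic.001',
          'nzbname': 'Some.Comic.001', 'output': 'json'}

sent = sabnzbd.SABnzbd(params).sender()        # {'status': True, 'nzo_id': ..., 'queue': {...}}
if sent['status'] is True:
    result = sabnzbd.SABnzbd(sent).processor()  # re-wrap the sender result and poll
    if result['status'] is True and result['failed'] is False:
        print result['location']                # completed folder, ready to post-process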

mylar/sabparse.py

@@ -1,3 +1,19 @@
 #!/usr/bin/env python
+# This file is part of Mylar.
+#
+# Mylar is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# Mylar is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
+
 import mylar
 from mylar import logger
@@ -10,52 +26,53 @@
 from decimal import Decimal
 from HTMLParser import HTMLParseError
 from time import strptime

-def sabnzbd(sabhost=mylar.CONFIG.SAB_HOST, sabusername=mylar.CONFIG.SAB_USERNAME, sabpassword=mylar.CONFIG.SAB_PASSWORD):
-    #SAB_USERNAME = mylar.CONFIG.SAB_USERNAME
-    #SAB_PASSWORD = mylar.CONFIG.SAB_PASSWORD
-    #SAB_HOST = mylar.CONFIG.SAB_HOST #'http://localhost:8085/'
-    if sabusername is None or sabpassword is None:
-        logger.fdebug('No Username / Password specified for SABnzbd. Unable to auto-retrieve SAB API')
-    if 'https' not in sabhost:
-        sabhost = re.sub('http://', '', sabhost)
-        sabhttp = 'http://'
-    else:
-        sabhost = re.sub('https://', '', sabhost)
-        sabhttp = 'https://'
-    if not sabhost.endswith('/'):
-        #sabhost = sabhost[:len(sabhost)-1].rstrip()
-        sabhost = sabhost + '/'
-    sabline = sabhttp + sabusername + ':' + sabpassword + '@' + sabhost
-    r = requests.get(sabline + 'config/general/')
-    soup = BeautifulSoup(r.content, "html.parser")
-    #lenlinks = len(cntlinks)
-    cnt1 = len(soup.findAll("div", {"class": "field-pair alt"}))
-    cnt2 = len(soup.findAll("div", {"class": "field-pair"}))
-    cnt = int(cnt1 + cnt2)
-    n = 0
-    n_even = -1
-    n_odd = -1
-    while (n < cnt):
-        if n%2==0:
-            n_even+=1
-            resultp = soup.findAll("div", {"class": "field-pair"})[n_even]
-        else:
-            n_odd+=1
-            resultp = soup.findAll("div", {"class": "field-pair alt"})[n_odd]
-        if resultp.find("label", {"for": "nzbkey"}):
-            #logger.fdebug resultp
-            try:
-                result = resultp.find("input", {"type": "text"})
-            except:
-                continue
-            if result['id'] == "nzbkey":
-                nzbkey = result['value']
-                logger.fdebug('found SABnzbd NZBKey: ' + str(nzbkey))
-                return nzbkey
-        n+=1
-
-#if __name__ == '__main__':
-#    sabnzbd()
+class sabnzbd(object):
+
+    def __init__(self, sabhost, sabusername, sabpassword):
+        self.sabhost = sabhost
+        self.sabusername = sabusername
+        self.sabpassword = sabpassword
+
+    def sab_get(self):
+        if self.sabusername is None or self.sabpassword is None:
+            logger.fdebug('No Username / Password specified for SABnzbd. Unable to auto-retrieve SAB API')
+        if 'https' not in self.sabhost:
+            self.sabhost = re.sub('http://', '', self.sabhost)
+            sabhttp = 'http://'
+        else:
+            self.sabhost = re.sub('https://', '', self.sabhost)
+            sabhttp = 'https://'
+        if not self.sabhost.endswith('/'):
+            self.sabhost = self.sabhost + '/'
+        sabline = sabhttp + str(self.sabhost)
+        with requests.Session() as s:
+            postdata = {'username': self.sabusername,
+                        'password': self.sabpassword,
+                        'remember_me': 0}
+            lo = s.post(sabline + 'login/', data=postdata, verify=False)
+            if not lo.status_code == 200:
+                return
+            r = s.get(sabline + 'config/general', verify=False)
+            soup = BeautifulSoup(r.content, "html.parser")
+            resultp = soup.findAll("div", {"class": "field-pair"})
+            for res in resultp:
+                if res.find("label", {"for": "apikey"}):
+                    try:
+                        result = res.find("input", {"type": "text"})
+                    except:
+                        continue
+                    if result['id'] == "apikey":
+                        apikey = result['value']
+                        logger.fdebug('found SABnzbd APIKey: ' + str(apikey))
+                        return apikey
+
+if __name__ == '__main__':
+    test = sabnzbd()
+    test.sab_get()
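The rewritten helper logs in with the SABnzbd username/password and scrapes the full api_key field out of config/general, replacing the old unauthenticated nzbkey scrape. This is what findsabAPI in webserve.py now does with it (argument values illustrative):

from mylar import sabparse

sp = sabparse.sabnzbd('http://localhost:8080/', 'sabuser', 'sabpass')
apikey = sp.sab_get()   # the full API key as a string, or None if the login failed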

mylar/search.py

@@ -16,7 +16,7 @@
 from __future__ import division

 import mylar
-from mylar import logger, db, updater, helpers, parseit, findcomicfeed, notifiers, rsscheck, Failed, filechecker, auth32p
+from mylar import logger, db, updater, helpers, parseit, findcomicfeed, notifiers, rsscheck, Failed, filechecker, auth32p, sabnzbd, nzbget

 import feedparser
 import requests
@@ -1620,7 +1620,7 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
             logger.error('[NZBPROVIDER = NONE] Encountered an error using given provider with requested information: ' + comicinfo + '. You have a blank entry most likely in your newznabs, fix it & restart Mylar')
             continue
         #generate the send-to and actually send the nzb / torrent.
-        logger.info('entry: %s' % entry)
+        #logger.info('entry: %s' % entry)
         searchresult = searcher(nzbprov, nzbname, comicinfo, entry['link'], IssueID, ComicID, tmpprov, newznab=newznab_host)
         if searchresult == 'downloadchk-fail':
@@ -1681,6 +1681,13 @@ def NZB_SEARCH(ComicName, IssueNumber, ComicYear, SeriesYear, Publisher, IssueDa
             notify_snatch(nzbname, sent_to, helpers.filesafe(modcomicname), cyear, IssueNumber, nzbprov)
             prov_count == 0
             mylar.TMP_PROV = nzbprov
+            #if mylar.SAB_PARAMS is not None:
+            #    #should be threaded....
+            #    ss = sabnzbd.SABnzbd(mylar.SAB_PARAMS)
+            #    sendtosab = ss.sender()
+            #    if all([sendtosab['status'] is True, mylar.CONFIG.SAB_CLIENT_POST_PROCESSING is True]):
+            #        mylar.NZB_QUEUE.put(sendtosab)
             return foundc
         else:
@@ -2136,6 +2143,7 @@ def searcher(nzbprov, nzbname, comicinfo, link, IssueID, ComicID, tmpprov, direc
         logger.warn('Error fetching data from %s: %s' % (tmpprov, e))
         return "sab-fail"

+    logger.info('download-retrieved headers: %s' % r.headers)
     try:
         nzo_info['filename'] = r.headers['x-dnzb-name']
         filen = r.headers['x-dnzb-name']
@@ -2349,46 +2357,29 @@ def searcher(nzbprov, nzbname, comicinfo, link, IssueID, ComicID, tmpprov, direc
     #nzb.get
     if mylar.USE_NZBGET:
-        from xmlrpclib import ServerProxy
-        if mylar.CONFIG.NZBGET_HOST[:5] == 'https':
-            tmpapi = "https://"
-            nzbget_host = mylar.CONFIG.NZBGET_HOST[8:]
-        elif mylar.CONFIG.NZBGET_HOST[:4] == 'http':
-            tmpapi = "http://"
-            nzbget_host = mylar.CONFIG.NZBGET_HOST[7:]
-        else:
-            logger.error("You have an invalid nzbget hostname specified. Exiting")
-            return "nzbget-fail"
-        in_file = open(nzbpath, "r")
-        nzbcontent = in_file.read()
-        in_file.close()
-        from base64 import standard_b64encode
-        nzbcontent64 = standard_b64encode(nzbcontent)
-        tmpapi = str(tmpapi) + str(mylar.CONFIG.NZBGET_USERNAME) + ":" + str(mylar.CONFIG.NZBGET_PASSWORD)
-        tmpapi = str(tmpapi) + "@" + str(nzbget_host)
-        if str(mylar.CONFIG.NZBGET_PORT).strip() != '':
-            tmpapi += ":" + str(mylar.CONFIG.NZBGET_PORT)
-        tmpapi += "/xmlrpc"
-        server = ServerProxy(tmpapi)
-        send_to_nzbget = server.append(nzbpath, str(mylar.CONFIG.NZBGET_CATEGORY), int(nzbgetpriority), True, nzbcontent64)
-        sent_to = "NZBGet"
-        if send_to_nzbget is True:
+        ss = nzbget.NZBGet()
+        send_to_nzbget = ss.sender(nzbpath)
+        if send_to_nzbget['status'] is True:
+            if mylar.CONFIG.NZBGET_CLIENT_POST_PROCESSING is True:
+                mylar.NZB_QUEUE.put(send_to_nzbget)
+        else:
+            logger.warn('Unable to send nzb file to NZBGet. There was a parameter error as there are no values present: %s' % nzbget_params)
+            return "nzbget-fail"
+        if send_to_nzbget['status'] is True:
             logger.info("Successfully sent nzb to NZBGet!")
         else:
             logger.info("Unable to send nzb to NZBGet - check your configs.")
             return "nzbget-fail"
+        sent_to = "NZBGet"
     #end nzb.get
     elif mylar.USE_SABNZBD:
+        sab_params = None
         # let's build the send-to-SAB string now:
         # changed to just work with direct links now...
-        tmpapi = mylar.CONFIG.SAB_HOST + "/api?apikey=" + mylar.CONFIG.SAB_APIKEY
-        logger.fdebug("send-to-SAB host &api initiation string : " + str(helpers.apiremove(tmpapi, 'nzb')))
-        SABtype = "&mode=addurl&name="
         #generate the api key to download here and then kill it immediately after.
         if mylar.DOWNLOAD_APIKEY is None:
             import hashlib, random
@@ -2460,53 +2451,53 @@ def searcher(nzbprov, nzbname, comicinfo, link, IssueID, ComicID, tmpprov, direc
         fileURL = mylar_host + 'api?apikey=' + mylar.DOWNLOAD_APIKEY + '&cmd=downloadNZB&nzbname=' + nzbname

-        tmpapi = tmpapi + SABtype
-        logger.fdebug("...selecting API type: " + str(tmpapi))
-        tmpapi = tmpapi + urllib.quote_plus(fileURL)
-        logger.fdebug("...attaching nzb via internal Mylar API: " + str(helpers.apiremove(tmpapi, '$')))
+        sab_params = {'apikey': mylar.CONFIG.SAB_APIKEY,
+                      'mode': 'addurl',
+                      'name': fileURL,
+                      'cmd': 'downloadNZB',
+                      'nzbname': nzbname,
+                      'output': 'json'}

         # determine SAB priority
         if mylar.CONFIG.SAB_PRIORITY:
-            tmpapi = tmpapi + "&priority=" + sabpriority
-            logger.fdebug("...setting priority: " + str(helpers.apiremove(tmpapi, '&')))
+            #setup the priorities.
+            if mylar.CONFIG.SAB_PRIORITY == "Default": sabpriority = "-100"
+            elif mylar.CONFIG.SAB_PRIORITY == "Low": sabpriority = "-1"
+            elif mylar.CONFIG.SAB_PRIORITY == "Normal": sabpriority = "0"
+            elif mylar.CONFIG.SAB_PRIORITY == "High": sabpriority = "1"
+            elif mylar.CONFIG.SAB_PRIORITY == "Paused": sabpriority = "-2"
+        else:
+            #if sab priority isn't selected, default to Normal (0)
+            sabpriority = "0"
+        sab_params['priority'] = sabpriority

         # if category is blank, let's adjust
         if mylar.CONFIG.SAB_CATEGORY:
-            tmpapi = tmpapi + "&cat=" + mylar.CONFIG.SAB_CATEGORY
-            logger.fdebug("...attaching category: " + str(helpers.apiremove(tmpapi, '&')))
-        if mylar.CONFIG.POST_PROCESSING: #or mylar.CONFIG.RENAME_FILES:
-            if mylar.CONFIG.POST_PROCESSING_SCRIPT:
-                #this is relative to the SABnzbd script directory (ie. no path)
-                tmpapi = tmpapi + "&script=" + mylar.CONFIG.POST_PROCESSING_SCRIPT
-            else:
-                tmpapi = tmpapi + "&script=ComicRN.py"
-            logger.fdebug("...attaching rename script: " + str(helpers.apiremove(tmpapi, '&')))
+            sab_params['cat'] = mylar.CONFIG.SAB_CATEGORY
+        #if mylar.CONFIG.POST_PROCESSING: #or mylar.CONFIG.RENAME_FILES:
+        #    if mylar.CONFIG.POST_PROCESSING_SCRIPT:
+        #        #this is relative to the SABnzbd script directory (ie. no path)
+        #        tmpapi = tmpapi + "&script=" + mylar.CONFIG.POST_PROCESSING_SCRIPT
+        #    else:
+        #        tmpapi = tmpapi + "&script=ComicRN.py"
+        #    logger.fdebug("...attaching rename script: " + str(helpers.apiremove(tmpapi, '&')))
         #final build of send-to-SAB
-        logger.fdebug("Completed send-to-SAB link: " + str(helpers.apiremove(tmpapi, '&')))
+        #logger.fdebug("Completed send-to-SAB link: " + str(helpers.apiremove(tmpapi, '&')))

-        try:
-            from requests.packages.urllib3 import disable_warnings
-            disable_warnings()
-        except:
-            logger.warn('Unable to disable https warnings. Expect some spam if using https nzb providers.')
-        try:
-            requests.put(tmpapi, verify=False)
-        except:
-            logger.error('Unable to send nzb file to SABnzbd')
+        if sab_params is not None:
+            ss = sabnzbd.SABnzbd(sab_params)
+            sendtosab = ss.sender()
+            if all([sendtosab['status'] is True, mylar.CONFIG.SAB_CLIENT_POST_PROCESSING is True]):
+                mylar.NZB_QUEUE.put(sendtosab)
+        else:
+            logger.warn('Unable to send nzb file to SABnzbd. There was a parameter error as there are no values present: %s' % sab_params)
             mylar.DOWNLOAD_APIKEY = None
             return "sab-fail"
-        # this works for non-http sends to sab (when both sab AND provider are non-https)
-        # try:
-        #     urllib2.urlopen(tmpapi)
-        # except urllib2.URLError:
-        #     logger.error(u"Unable to send nzb file to SABnzbd")
-        #     return "sab-fail"

         sent_to = "SABnzbd+"
         logger.info(u"Successfully sent nzb file to SABnzbd")

     if mylar.CONFIG.ENABLE_SNATCH_SCRIPT:
         if mylar.USE_NZBGET:
             clientmode = 'nzbget'
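Each successful send above is the producer half of completed download handling: the sender result lands on mylar.NZB_QUEUE. A background consumer can then wait out each download and hand the finished folder to the post-processor. A minimal sketch under assumed names - the monitor loop itself is not part of this diff, and the PostProcessor arguments mirror the 'Manual Run' call seen earlier:

from mylar import sabnzbd, PostProcessor

def nzb_monitor(nzb_queue):
    while True:
        item = nzb_queue.get()                      # dict queued by searcher()
        result = sabnzbd.SABnzbd(item).processor()  # blocks until Completed/Failed
        if result['status'] is True and result['failed'] is False:
            PostProcessor.PostProcessor(result['name'], result['location']).Process()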

mylar/webserve.py

@@ -37,7 +37,7 @@ import shutil

 import mylar
-from mylar import logger, db, importer, mb, search, filechecker, helpers, updater, parseit, weeklypull, PostProcessor, librarysync, moveit, Failed, readinglist, notifiers
+from mylar import logger, db, importer, mb, search, filechecker, helpers, updater, parseit, weeklypull, PostProcessor, librarysync, moveit, Failed, readinglist, notifiers, sabparse

 import simplejson as simplejson
@@ -211,7 +211,7 @@ class WebInterface(object):
         return serve_template(templatename="comicdetails.html", title=comic['ComicName'], comic=comic, issues=issues, comicConfig=comicConfig, isCounts=isCounts, series=series, annuals=annuals_list, annualinfo=aName)
     comicDetails.exposed = True

-    def searchit(self, name, issue=None, mode=None, type=None, explicit=None, serinfo=None):
+    def searchit(self, name, issue=None, mode=None, type=None, serinfo=None):
         if type is None: type = 'comic'  # let's default this to comic search only for the time being (will add story arc, characters, etc later)
         else: logger.fdebug(str(type) + " mode enabled.")
         #mode dictates type of search:
@@ -226,7 +226,7 @@ class WebInterface(object):
             #if it's an issue 0, CV doesn't have any data populated yet - so bump it up one to at least get the current results.
             issue = 1
         try:
-            searchresults, explicit = mb.findComic(name, mode, issue=issue)
+            searchresults = mb.findComic(name, mode, issue=None) #issue=issue)
         except TypeError:
             logger.error('Unable to perform required pull-list search for : [name: ' + name + '][issue: ' + issue + '][mode: ' + mode + ']')
             return
@@ -238,26 +238,26 @@ class WebInterface(object):
                 threading.Thread(target=importer.addComictoDB, args=[comicid, mismatch, None]).start()
                 raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
             try:
-                searchresults, explicit = mb.findComic(name, mode, issue=None, explicit=explicit)
+                searchresults = mb.findComic(name, mode, issue=None)
             except TypeError:
-                logger.error('Unable to perform required pull-list search for : [name: ' + name + '][mode: ' + mode + '][explicitsearch:' + str(explicit) + ']')
+                logger.error('Unable to perform required pull-list search for : [name: ' + name + '][mode: ' + mode + ']')
                 return
         elif type == 'comic' and mode == 'want':
             try:
-                searchresults, explicit = mb.findComic(name, mode, issue)
+                searchresults = mb.findComic(name, mode, issue)
             except TypeError:
                 logger.error('Unable to perform required one-off pull-list search for : [name: ' + name + '][issue: ' + issue + '][mode: ' + mode + ']')
                 return
         elif type == 'story_arc':
             try:
-                searchresults, explicit = mb.findComic(name, mode=None, issue=None, explicit='explicit', type='story_arc')
+                searchresults = mb.findComic(name, mode=None, issue=None, type='story_arc')
             except TypeError:
-                logger.error('Unable to perform required story-arc search for : [arc: ' + name + '][mode: ' + mode + '][explicitsearch: explicit]')
+                logger.error('Unable to perform required story-arc search for : [arc: ' + name + '][mode: ' + mode + ']')
                 return

         searchresults = sorted(searchresults, key=itemgetter('comicyear', 'issues'), reverse=True)
         #print ("Results: " + str(searchresults))
-        return serve_template(templatename="searchresults.html", title='Search Results for: "' + name + '"', searchresults=searchresults, type=type, imported=None, ogcname=None, name=name, explicit=explicit, serinfo=serinfo)
+        return serve_template(templatename="searchresults.html", title='Search Results for: "' + name + '"', searchresults=searchresults, type=type, imported=None, ogcname=None, name=name, serinfo=serinfo)
     searchit.exposed = True

     def addComic(self, comicid, comicname=None, comicyear=None, comicimage=None, comicissues=None, comicpublisher=None, imported=None, ogcname=None, serinfo=None):
@@ -732,7 +732,7 @@ class WebInterface(object):
                     break
         if failed:
-            if mylar.CONFIG.FAILED_DOWNLOAD_HANDLING:
+            if mylar.CONFIG.FAILED_DOWNLOAD_HANDLING is True:
                 #drop the if-else continuation so we can drop down to this from the above if statement.
                 logger.info('Initiating Failed Download handling for this download.')
                 FailProcess = Failed.FailedProcessor(nzb_name=nzb_name, nzb_folder=nzb_folder, queue=queue)
@@ -1147,7 +1147,7 @@ class WebInterface(object):
                            "oneoff": oneoff})

         newznabinfo = None
+        link = None

         if fullprov == 'nzb.su':
             if not mylar.CONFIG.NZBSU:
                 logger.error('nzb.su is not enabled - unable to process retry request until provider is re-enabled.')
@@ -1189,17 +1189,18 @@ class WebInterface(object):
                     newznab_host = newznab_info[1] + '/'
                     newznab_api = newznab_info[3]
                     newznab_uid = newznab_info[4]
-                    link = str(newznab_host) + 'getnzb/' + str(id) + '.nzb&i=' + str(newznab_uid) + '&r=' + str(newznab_api)
+                    #link = str(newznab_host) + 'getnzb/' + str(id) + '.nzb&i=' + str(newznab_uid) + '&r=' + str(newznab_api)
+                    link = str(newznab_host) + '/api?apikey=' + str(newznab_api) + '&t=get&id=' + str(id)
                     logger.info('newznab detected as : ' + str(newznab_info[0]) + ' @ ' + str(newznab_host))
                     logger.info('link : ' + str(link))
                     newznabinfo = (newznab_info[0], newznab_info[1], newznab_info[2], newznab_info[3], newznab_info[4])
+                    break
                 else:
                     logger.error(str(newznab_info[0]) + ' is not enabled - unable to process retry request until provider is re-enabled.')
-                    continue
+                    break

-            sendit = search.searcher(fullprov, nzbname, comicinfo, link=link, IssueID=IssueID, ComicID=ComicID, tmpprov=fullprov, directsend=True, newznab=newznabinfo)
-            break
+        if link is not None:
+            sendit = search.searcher(fullprov, nzbname, comicinfo, link=link, IssueID=IssueID, ComicID=ComicID, tmpprov=fullprov, directsend=True, newznab=newznabinfo)
+            break
         return
     retryissue.exposed = True
@@ -1766,13 +1767,18 @@ class WebInterface(object):
         return {'status' : 'success'}
     manualpull.exposed = True

-    def pullrecreate(self):
+    def pullrecreate(self, weeknumber=None, year=None):
         myDB = db.DBConnection()
-        myDB.action("DROP TABLE weekly")
-        mylar.dbcheck()
-        logger.info("Deleted existing pull-list data. Recreating Pull-list...")
         forcecheck = 'yes'
-        weeklypull.pullit(forcecheck)
+        if weeknumber is None:
+            myDB.action("DROP TABLE weekly")
+            mylar.dbcheck()
+            logger.info("Deleted existing pull-list data. Recreating Pull-list...")
+            weeklypull.pullit(forcecheck)
+        else:
+            myDB.action("DELETE FROM weekly WHERE weeknumber=? AND year=?", [weeknumber, year])
+            logger.info("Deleted existing pull-list data for week %s, %s. Now Recreating the Pull-list..." % (weeknumber, year))
+            weeklypull.pullit(forcecheck, weeknumber, year)
         raise cherrypy.HTTPRedirect("pullist")
     pullrecreate.exposed = True
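Because pullrecreate is an exposed cherrypy method, the new targeted branch can be reached straight from the browser; only that week's rows are deleted and re-pulled, while the rest of the weekly table is left alone. Illustrative request (host/port assumed):

# targeted recreate of week 45 of 2017
http://localhost:8090/pullrecreate?weeknumber=45&year=2017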
@@ -2175,17 +2181,17 @@ class WebInterface(object):
             if jobid.lower() in str(jb).lower():
                 logger.info('[%s] Now force submitting job.' % jb)
                 if jobid == 'rss':
-                    mylar.SCHED.add_job(func=jb.func, args=[True], trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, args=[True], trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 elif jobid == 'weekly':
-                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 elif jobid == 'search':
-                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 elif jobid == 'version':
-                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 elif jobid == 'updater':
-                    mylar.SCHED.add_job(func=jb.func, args=[None,None,True], trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, args=[None,None,True], trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 elif jobid == 'monitor':
-                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.now()))
+                    mylar.SCHED.add_job(func=jb.func, trigger=DateTrigger(run_date=datetime.datetime.utcnow()))
                 break
     schedulerForceCheck.exposed = True
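Switching the force-run triggers from now() to utcnow() matters when the scheduler itself keeps time in UTC: a naive run_date is read in the scheduler's timezone, so local wall time on a UTC+2 host would schedule the "immediate" job two hours into the future, while on a host west of UTC it lands in the past and can be discarded as a misfire. A minimal sketch of the fixed pattern, assuming a UTC-configured scheduler (the scheduler setup is not shown in this diff):

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.date import DateTrigger
import datetime

def job():
    print 'forced run'

sched = BackgroundScheduler(timezone='UTC')
sched.start()
# naive utcnow() matches the scheduler's UTC clock, so this fires at once
sched.add_job(func=job, trigger=DateTrigger(run_date=datetime.datetime.utcnow()))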
@@ -2818,7 +2824,7 @@ class WebInterface(object):
             for duh in AMS:
                 mode='series'
-                sresults, explicit = mb.findComic(duh['ComicName'], mode, issue=duh['highvalue'], limityear=duh['yearRANGE'], explicit='all')
+                sresults = mb.findComic(duh['ComicName'], mode, issue=duh['highvalue'], limityear=duh['yearRANGE'])
                 type='comic'

                 if len(sresults) == 1:
@@ -3516,10 +3522,10 @@ class WebInterface(object):
     def confirmResult(self, comicname, comicid):
         #print ("here.")
         mode='series'
-        sresults, explicit = mb.findComic(comicname, mode, None, explicit='all')
+        sresults = mb.findComic(comicname, mode, None)
         #print sresults
         type='comic'
-        return serve_template(templatename="searchresults.html", title='Import Results for: "' + comicname + '"', searchresults=sresults, type=type, imported='confirm', ogcname=comicid, explicit=explicit)
+        return serve_template(templatename="searchresults.html", title='Import Results for: "' + comicname + '"', searchresults=sresults, type=type, imported='confirm', ogcname=comicid)
     confirmResult.exposed = True

     def Check_ImportStatus(self):
@@ -3854,9 +3860,9 @@ class WebInterface(object):
                 searchterm = '"' + displaycomic + '"'
                 try:
                     if yearRANGE is None:
-                        sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, explicit='all') #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
+                        sresults = mb.findComic(searchterm, mode, issue=numissues) #ogcname, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
                     else:
-                        sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
+                        sresults = mb.findComic(searchterm, mode, issue=numissues, limityear=yearRANGE) #ogcname, mode, issue=numissues, limityear=yearRANGE, explicit='all') #ComicName, mode, issue=numissues, limityear=yearRANGE)
                 except TypeError:
                     logger.warn('Comicvine API limit has been reached, and/or the comicvine website is not responding. Aborting process at this time, try again in an ~ hr when the api limit is reset.')
                     break
@@ -3905,7 +3911,7 @@ class WebInterface(object):
                 else:
                     if len(search_matches) == 0 or len(search_matches) is None:
                         logger.fdebug("no results, removing the year from the agenda and re-querying.")
-                        sresults, explicit = mb.findComic(searchterm, mode, issue=numissues, explicit='all') #ComicName, mode, issue=numissues)
+                        sresults = mb.findComic(searchterm, mode, issue=numissues) #ComicName, mode, issue=numissues)
                         logger.fdebug('[' + str(len(sresults)) + '] search results')
                         for results in sresults:
                             rsn = filechecker.FileChecker()
@@ -4183,6 +4189,7 @@ class WebInterface(object):
                     "sab_priority": mylar.CONFIG.SAB_PRIORITY,
                     "sab_directory": mylar.CONFIG.SAB_DIRECTORY,
                     "sab_to_mylar": helpers.checked(mylar.CONFIG.SAB_TO_MYLAR),
+                    "sab_client_post_processing": helpers.checked(mylar.CONFIG.SAB_CLIENT_POST_PROCESSING),
                     "nzbget_host": mylar.CONFIG.NZBGET_HOST,
                     "nzbget_port": mylar.CONFIG.NZBGET_PORT,
                     "nzbget_user": mylar.CONFIG.NZBGET_USERNAME,
@@ -4190,6 +4197,7 @@ class WebInterface(object):
                     "nzbget_cat": mylar.CONFIG.NZBGET_CATEGORY,
                     "nzbget_priority": mylar.CONFIG.NZBGET_PRIORITY,
                     "nzbget_directory": mylar.CONFIG.NZBGET_DIRECTORY,
+                    "nzbget_client_post_processing": helpers.checked(mylar.CONFIG.NZBGET_CLIENT_POST_PROCESSING),
                     "torrent_downloader_watchlist": helpers.radio(int(mylar.CONFIG.TORRENT_DOWNLOADER), 0),
                     "torrent_downloader_utorrent": helpers.radio(int(mylar.CONFIG.TORRENT_DOWNLOADER), 1),
                     "torrent_downloader_rtorrent": helpers.radio(int(mylar.CONFIG.TORRENT_DOWNLOADER), 2),
@@ -4570,7 +4578,7 @@ class WebInterface(object):
                            'enforce_perms', 'sab_to_mylar', 'torrent_local', 'torrent_seedbox', 'rtorrent_ssl', 'rtorrent_verify', 'rtorrent_startonload',
                            'enable_torrents', 'qbittorrent_startonload', 'enable_rss', 'nzbsu', 'nzbsu_verify',
                            'dognzb', 'dognzb_verify', 'experimental', 'enable_torrent_search', 'enable_tpse', 'enable_32p', 'enable_torznab',
-                           'newznab', 'use_minsize', 'use_maxsize', 'ddump', 'failed_download_handling',
+                           'newznab', 'use_minsize', 'use_maxsize', 'ddump', 'failed_download_handling', 'sab_client_post_processing', 'nzbget_client_post_processing',
                            'failed_auto', 'post_processing', 'enable_check_folder', 'enable_pre_scripts', 'enable_snatch_script', 'enable_extra_scripts',
                            'enable_meta', 'cbr2cbz_only', 'ct_tag_cr', 'ct_tag_cbl', 'ct_cbz_overwrite', 'rename_files', 'replace_spaces', 'zero_level',
                            'lowercase_filenames', 'autowant_upcoming', 'autowant_all', 'comic_cover_local', 'cvinfo', 'snatchedtorrent_notify',
@@ -4614,46 +4622,6 @@ class WebInterface(object):
             mylar.CONFIG.EXTRA_NEWZNABS.append((newznab_name, newznab_host, newznab_verify, newznab_api, newznab_uid, newznab_enabled))

-        ## Sanity checking
-        #if mylar.CONFIG.COMICVINE_API == 'None' or mylar.CONFIG.COMICVINE_API == '':
-        #    logger.info('Personal Comicvine API key not provided. This will severely impact the usage of Mylar - you have been warned.')
-        #    mylar.CONFIG.COMICVINE_API = None
-
-        #if mylar.CONFIG.SEARCH_INTERVAL < 360:
-        #    logger.info("Search interval too low. Resetting to 6 hour minimum")
-        #    mylar.CONFIG.SEARCH_INTERVAL = 360
-
-        #if mylar.CONFIG.SEARCH_DELAY < 1:
-        #    logger.info("Minimum search delay set for 1 minute to avoid hammering.")
-        #    mylar.CONFIG.SEARCH_DELAY = 1
-
-        #if mylar.CONFIG.RSS_CHECKINTERVAL < 20:
-        #    logger.info("Minimum RSS Interval Check delay set for 20 minutes to avoid hammering.")
-        #    mylar.CONFIG.RSS_CHECKINTERVAL = 20
-
-        #if not helpers.is_number(mylar.CONFIG.CHMOD_DIR):
-        #    logger.info("CHMOD Directory value is not a valid numeric - please correct. Defaulting to 0777")
-        #    mylar.CONFIG.CHMOD_DIR = '0777'
-
-        #if not helpers.is_number(mylar.CONFIG.CHMOD_FILE):
-        #    logger.info("CHMOD File value is not a valid numeric - please correct. Defaulting to 0660")
-        #    mylar.CONFIG.CHMOD_FILE = '0660'
-
-        #if mylar.CONFIG.SAB_HOST.endswith('/'):
-        #    logger.info("Auto-correcting trailing slash in SABnzbd url (not required)")
-        #    mylar.CONFIG.SAB_HOST = mylar.CONFIG.SAB_HOST[:-1]
-
-        #if mylar.CONFIG.FILE_OPTS is None:
-        #    mylar.CONFIG.FILE_OPTS = 'move'
-
-        #if any([mylar.CONFIG.FILE_OPTS == 'hardlink', mylar.CONFIG.FILE_OPTS == 'softlink']):
-        #    #we can't have metatagging enabled with hard/soft linking. Forcibly disable it here just in case it's set on load.
-        #    mylar.CONFIG.ENABLE_META = 0
-
-        #if mylar.CONFIG.ENABLE_META:
-        #    #force it to use comictagger in lib vs. outside in order to ensure 1/api second CV rate limit isn't broken.
-        #    logger.fdebug("ComicTagger Path enforced to use local library : " + mylar.PROG_DIR)
-        #    mylar.CONFIG.CMTAGGER_PATH = mylar.PROG_DIR
-
         mylar.CONFIG.process_kwargs(kwargs)

         #this makes sure things are set to the default values if they're not appropriately set.
@@ -4668,7 +4636,6 @@ class WebInterface(object):
     configUpdate.exposed = True

     def SABtest(self, sabhost=None, sabusername=None, sabpassword=None, sabapikey=None):
-        logger.info('here')
         if sabhost is None:
             sabhost = mylar.CONFIG.SAB_HOST
         if sabusername is None:
@@ -4677,14 +4644,9 @@ class WebInterface(object):
             sabpassword = mylar.CONFIG.SAB_PASSWORD
         if sabapikey is None:
             sabapikey = mylar.CONFIG.SAB_APIKEY
-        logger.fdebug('testing SABnzbd connection')
-        logger.fdebug('sabhost: ' + str(sabhost))
-        logger.fdebug('sabusername: ' + str(sabusername))
-        logger.fdebug('sabpassword: ' + str(sabpassword))
-        logger.fdebug('sabapikey: ' + str(sabapikey))
-        if mylar.CONFIG.USE_SABNZBD:
+        logger.fdebug('Now attempting to test SABnzbd connection')
+        if mylar.USE_SABNZBD:
             import requests
-            from xml.dom.minidom import parseString, Element

             #if user/pass given, we can auto-fill the API ;)
             if sabusername is None or sabpassword is None:
@@ -4699,7 +4661,8 @@ class WebInterface(object):
             querysab = sabhost + 'api'
             payload = {'mode': 'get_config',
                        'section': 'misc',
-                       'output': 'xml',
+                       'output': 'json',
+                       'keyword': 'api_key',
                        'apikey': sabapikey}

             if sabhost.startswith('https'):
@@ -4710,7 +4673,7 @@ class WebInterface(object):
             try:
                 r = requests.get(querysab, params=payload, verify=verify)
             except Exception, e:
-                logger.warn('Error fetching data from %s: %s' % (sabhost, e))
+                logger.warn('Error fetching data from %s: %s' % (querysab, e))
                 if requests.exceptions.SSLError:
                     logger.warn('Cannot verify ssl certificate. Attempting to authenticate with no ssl-certificate verification.')
                     try:
@@ -4736,60 +4699,25 @@ class WebInterface(object):
                 logger.warn('Unable to properly query SABnzbd @' + sabhost + ' [Status Code returned: ' + str(r.status_code) + ']')
                 data = False
             else:
-                data = r.content
-
-            if data:
-                dom = parseString(data)
-            else:
-                return 'Unable to reach SABnzbd'
+                data = r.json()

+            logger.info('data: %s' % data)
             try:
-                q_sabhost = dom.getElementsByTagName('host')[0].firstChild.wholeText
-                q_nzbkey = dom.getElementsByTagName('nzb_key')[0].firstChild.wholeText
-                q_apikey = dom.getElementsByTagName('api_key')[0].firstChild.wholeText
+                q_apikey = data['config']['misc']['api_key']
             except:
-                errorm = dom.getElementsByTagName('error')[0].firstChild.wholeText
-                logger.error(u"Error detected attempting to retrieve SAB data using FULL APIKey: " + errorm)
-                if errorm == 'API Key Incorrect':
-                    logger.fdebug('You may have given me just the right amount of power (NZBKey), will test SABnzbd against the NZBkey now')
-                    querysab = sabhost + 'api'
-                    payload = {'mode': 'addurl',
-                               'name': 'http://www.example.com/example.nzb',
-                               'nzbname': 'NiceName',
-                               'output': 'xml',
-                               'apikey': sabapikey}
-                    try:
-                        r = requests.get(querysab, params=payload, verify=verify)
-                    except Exception, e:
-                        logger.warn('Error fetching data from %s: %s' % (sabhost, e))
-                        return 'Unable to retrieve data from SABnzbd'
-                    dom = parseString(r.content)
-                    qdata = dom.getElementsByTagName('status')[0].firstChild.wholeText
-                    if str(qdata) == 'True':
-                        q_nzbkey = mylar.CONFIG.SAB_APIKEY
-                        q_apikey = None
-                        qd = True
-                    else:
-                        qerror = dom.getElementsByTagName('error')[0].firstChild.wholeText
-                        logger.error(str(qerror) + ' - check that the API (NZBkey) is correct, use the auto-detect option AND/OR check host:port settings')
-                        qd = False
-                    if qd == False: return "Invalid APIKey provided."
-
-            #test which apikey provided
-            if q_nzbkey != sabapikey:
-                if q_apikey != sabapikey:
-                    logger.error('APIKey provided does not match with SABnzbd')
-                    return "Invalid APIKey provided"
-                else:
-                    logger.info('APIKey provided is FULL APIKey which is too much power - changing to NZBKey')
-                    mylar.CONFIG.SAB_APIKEY = q_nzbkey
-                    #mylar.config_write()
-                    logger.info('Succcessfully changed to NZBKey. Thanks for shopping S-MART!')
-            else:
-                logger.info('APIKey provided is NZBKey which is the correct key.')
+                logger.error('Error detected attempting to retrieve SAB data using FULL APIKey')
+                if all([sabusername is not None, sabpassword is not None]):
+                    try:
+                        sp = sabparse.sabnzbd(sabhost, sabusername, sabpassword)
+                        q_apikey = sp.sab_get()
+                    except Exception, e:
+                        logger.warn('failure: %s' % e)
+                        q_apikey = None
+                if q_apikey is None:
+                    return "Invalid APIKey provided"

+            mylar.CONFIG.SAB_APIKEY = q_apikey
+            logger.info('APIKey provided is the FULL APIKey which is the correct key. You still need to SAVE the config for the changes to be applied.')
+
             logger.info('Connection to SABnzbd tested successfully')
             return "Successfully verified APIkey"
@@ -4836,9 +4764,9 @@ class WebInterface(object):
     getComicArtwork.exposed = True

     def findsabAPI(self, sabhost=None, sabusername=None, sabpassword=None):
-        from mylar import sabparse
-        sabapi = sabparse.sabnzbd(sabhost, sabusername, sabpassword)
-        logger.info('SAB NZBKey found as : ' + str(sabapi) + '. You still have to save the config to retain this setting.')
+        sp = sabparse.sabnzbd(sabhost, sabusername, sabpassword)
+        sabapi = sp.sab_get()
+        logger.info('SAB APIKey found as : ' + str(sabapi) + '. You still have to save the config to retain this setting.')
         mylar.CONFIG.SAB_APIKEY = sabapi
         return sabapi

mylar/weeklypull.py

@@ -31,27 +31,32 @@ import shutil

 import mylar
 from mylar import db, updater, helpers, logger, newpull, importer, mb, locg

-def pullit(forcecheck=None):
+def pullit(forcecheck=None, weeknumber=None, year=None):
     myDB = db.DBConnection()
-    popit = myDB.select("SELECT count(*) FROM sqlite_master WHERE name='weekly' and type='table'")
-    if popit:
-        try:
-            pull_date = myDB.selectone("SELECT SHIPDATE from weekly").fetchone()
-            logger.info(u"Weekly pull list present - checking if it's up-to-date..")
-            if (pull_date is None):
-                pulldate = '00000000'
-            else:
-                pulldate = pull_date['SHIPDATE']
-        except (sqlite3.OperationalError, TypeError), msg:
-            logger.info(u"Error Retrieving weekly pull list - attempting to adjust")
-            myDB.action("DROP TABLE weekly")
-            myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE text, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, CV_Last_Update text, DynamicName text, weeknumber text, year text, rowid INTEGER PRIMARY KEY)")
-            pulldate = '00000000'
-            logger.fdebug(u"Table re-created, trying to populate")
-    else:
-        logger.info(u"No pullist found...I'm going to try and get a new list now.")
-        pulldate = '00000000'
-    if pulldate is None: pulldate = '00000000'
+    if weeknumber is None:
+        popit = myDB.select("SELECT count(*) FROM sqlite_master WHERE name='weekly' and type='table'")
+        if popit:
+            try:
+                pull_date = myDB.selectone("SELECT SHIPDATE from weekly").fetchone()
+                logger.info(u"Weekly pull list present - checking if it's up-to-date..")
+                if (pull_date is None):
+                    pulldate = '00000000'
+                else:
+                    pulldate = pull_date['SHIPDATE']
+            except (sqlite3.OperationalError, TypeError), msg:
+                logger.info(u"Error Retrieving weekly pull list - attempting to adjust")
+                myDB.action("DROP TABLE weekly")
+                myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE text, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, CV_Last_Update text, DynamicName text, weeknumber text, year text, rowid INTEGER PRIMARY KEY)")
+                pulldate = '00000000'
+                logger.fdebug(u"Table re-created, trying to populate")
+        else:
+            logger.info(u"No pullist found...I'm going to try and get a new list now.")
+            pulldate = '00000000'
+    else:
+        pulldate = None
+    if pulldate is None and weeknumber is None:
+        pulldate = '00000000'

     #only for pw-file or ALT_PULL = 1
     newrl = os.path.join(mylar.CONFIG.CACHE_DIR, 'newreleases.txt')
@@ -63,7 +68,12 @@ def pullit(forcecheck=None, weeknumber=None, year=None):
         newpull.newpull()
     elif mylar.CONFIG.ALT_PULL == 2:
         logger.info('[PULL-LIST] Populating & Loading pull-list data directly from alternate website')
-        chk_locg = locg.locg('00000000') #setting this to 00000000 will do a Recreate on every call instead of a Refresh
+        if pulldate is not None:
+            chk_locg = locg.locg('00000000') #setting this to 00000000 will do a Recreate on every call instead of a Refresh
+        else:
+            logger.info('[PULL-LIST] Populating & Loading pull-list data directly from alternate website for specific week of %s, %s' % (weeknumber, year))
+            chk_locg = locg.locg(weeknumber=weeknumber, year=year)
+
         if chk_locg['status'] == 'up2date':
             logger.info('[PULL-LIST] Pull-list is already up-to-date with ' + str(chk_locg['count']) + ' issues. Polling watchlist against it to see if anything is new.')
             mylar.PULLNEW = 'no'
@@ -826,6 +836,7 @@ def new_pullcheck(weeknumber, pullyear, comic1off_name=None, comic1off_id=None,
     myDB = db.DBConnection()
     watchlist = []
     weeklylist = []
+    pullist = helpers.listPull(weeknumber,pullyear)
     if comic1off_name:
         comiclist = myDB.select("SELECT * FROM comics WHERE Status='Active' AND ComicID=?",[comic1off_id])
     else:
@@ -848,10 +859,11 @@ def new_pullcheck(weeknumber, pullyear, comic1off_name=None, comic1off_id=None,
                               "AlternateSearch": weekly['AlternateSearch'],
                               "DynamicName": weekly['DynamicComicName']})

     if len(watchlist) > 0:
         for watch in watchlist:
-            if 'Present' in watch['ComicPublished'] or (helpers.now()[:4] in watch['ComicPublished']) or watch['ForceContinuing'] == 1:
+            listit = [pls for pls in pullist if str(pls) == str(watch['ComicID'])]
+            logger.info('watchCOMICID:%s / listit: %s' % (watch['ComicID'], listit))
+            if 'Present' in watch['ComicPublished'] or (helpers.now()[:4] in watch['ComicPublished']) or watch['ForceContinuing'] == 1 or len(listit) > 0:
                 # this gets buggered up when series are named the same, and one ends in the current
                 # year, and the new series starts in the same year - ie. Avengers
                 # lets' grab the latest issue date and see how far it is from current
@@ -886,7 +898,7 @@ def new_pullcheck(weeknumber, pullyear, comic1off_name=None, comic1off_id=None,
                 chklimit = helpers.checkthepub(watch['ComicID'])
                 logger.fdebug("Check date limit set to : " + str(chklimit))
                 logger.fdebug(" ----- ")
-                if recentchk < int(chklimit) or watch['ForceContinuing'] == 1:
+                if recentchk < int(chklimit) or watch['ForceContinuing'] == 1 or len(listit) > 0:
                     if watch['ForceContinuing'] == 1:
                         logger.fdebug('Forcing Continuing Series enabled for series...')
                     # let's not even bother with comics that are not in the Present.
@@ -909,7 +921,7 @@ def new_pullcheck(weeknumber, pullyear, comic1off_name=None, comic1off_id=None,
                     annual_ids.append({'ComicID': an['ReleaseComicID'],
                                        'ComicName': an['ReleaseComicName']})
                 weeklylist.append({'ComicName': watch['ComicName'],
                                    'SeriesYear': watch['ComicYear'],
                                    'ComicID': watch['ComicID'],
                                    'Pubdate': watch['ComicPublished'],
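The pullist lookup is what lets a series Mylar has marked Ended still match the current week: presence on the week's list now short-circuits the publication-date checks. A condensed sketch, assuming helpers.listPull returns the ComicIDs present on that week's pull (its return shape is not shown in this diff):

pullist = ['50698', '61233']   # illustrative ComicIDs for the week
watch = {'ComicID': '61233', 'ComicPublished': '2015 - 2016', 'ForceContinuing': 0}

listit = [pls for pls in pullist if str(pls) == str(watch['ComicID'])]
# an Ended series still qualifies when it appears on the week's pull
eligible = 'Present' in watch['ComicPublished'] or watch['ForceContinuing'] == 1 or len(listit) > 0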

post-processing/autoProcessComics.py (whitespace-only changes)

@@ -11,7 +11,7 @@ class AuthURLOpener(urllib.FancyURLopener):
         self.password = pw
         self.numTries = 0
         urllib.FancyURLopener.__init__(self)

     def prompt_user_passwd(self, host, realm):
         if self.numTries == 0:
             self.numTries = 1
@@ -32,11 +32,11 @@ def processIssue(dirName, nzbName=None, failed=False, comicrn_version=None):
     config = ConfigParser.ConfigParser()
     configFilename = os.path.join(os.path.dirname(sys.argv[0]), "autoProcessComics.cfg")
     print "Loading config from", configFilename

     if not os.path.isfile(configFilename):
         print "ERROR: You need an autoProcessComics.cfg file - did you rename and edit the .sample?"
         sys.exit(-1)

     try:
         fp = open(configFilename, "r")
         config.readfp(fp)
@@ -44,7 +44,7 @@ def processIssue(dirName, nzbName=None, failed=False, comicrn_version=None):
     except IOError, e:
         print "Could not read configuration file: ", str(e)
         sys.exit(1)

     host = config.get("Mylar", "host")
     port = config.get("Mylar", "port")
     username = config.get("Mylar", "username")
@@ -53,14 +53,14 @@ def processIssue(dirName, nzbName=None, failed=False, comicrn_version=None):
         ssl = int(config.get("Mylar", "ssl"))
     except (ConfigParser.NoOptionError, ValueError):
         ssl = 0

     try:
         web_root = config.get("Mylar", "web_root")
     except ConfigParser.NoOptionError:
         web_root = ""

     params = {}
     params['nzb_folder'] = dirName
     if nzbName != None:
         params['nzb_name'] = nzbName
@@ -69,24 +69,24 @@ def processIssue(dirName, nzbName=None, failed=False, comicrn_version=None):
     params['apc_version'] = apc_version
     params['comicrn_version'] = comicrn_version

     myOpener = AuthURLOpener(username, password)

     if ssl:
         protocol = "https://"
     else:
         protocol = "http://"

     url = protocol + host + ":" + port + web_root + "/post_process?" + urllib.urlencode(params)
     print "Opening URL:", url

     try:
         urlObj = myOpener.openit(url)
     except IOError, e:
         print "Unable to open URL: ", str(e)
         sys.exit(1)

     result = urlObj.readlines()
     for line in result:
         print line

post-processing/torrent-auto-snatch (shell script)

@@ -1,31 +1,15 @@
 #!/bin/bash

-##-- start configuration
-
-#this needs to be edited to the full path to the get.conf file containing the torrent client information
-configfile='/home/hero/mylar/post-processing/torrent-auto-snatch/get.conf'
-
-#this is the temporary location where it will make sure the conf is safe for use (by default this should be fine if left alone)
-configfile_secured='/tmp/get.conf'
-
-##-- end configuration
-
-## --- don't change stuff below here ----
-
-# check if the file contains something we don't want
-if egrep -q -v '^#|^[^ ]*=[^;]*' "$configfile"; then
-    # echo "Config file is unclean, cleaning it..." >&2
-    # filter the original to a new file
-    egrep '^#|^[^ ]*=[^;&]*' "$configfile" > "$configfile_secured"
-    configfile="$configfile_secured"
-fi
-
-# now source it, either the original or the filtered variant
-source "$configfile"
+#load the value from the conf.
+HOST="$host"
+PORT="$port"
+USER="$user"
+PASSWD="$passwd"
+LOCALCD="$localcd"
+KEYFILE="$keyfile"
+filename="$downlocation"

 cd $LOCALCD
-filename="$downlocation"

 if [[ "${filename##*.}" == "cbr" || "${filename##*.}" == "cbz" ]]; then
     LCMD="pget -n 6 '$filename'"