IMP: Added ALT_PULL=2 method for weeklypull management. Now updates against an external site and pulls down already-populated ComicID/IssueIDs for series that exist on the pull-list. With this option, Alternate Search Names aren't needed to match against the pull-list, and no CV API hits are used since everything is pre-populated. Also allows viewing of future pull-lists (up to 4 weeks in advance)
FIX: Alternate search names are now searched against when doing manual post-processing
FIX: When manually post-processing, if the series volume wasn't specified it would fail to match against v1 (the default)
IMP: (#1309) https_chain option now allowed within config.ini
IMP: 32P pack support on a per-series basis (individual torrents are searched before packs)
IMP: When a pack is snatched, all issues within the pack that are not in a Downloaded status will be marked as Snatched within Mylar (annuals currently don't work)
IMP: Removed unnecessary config spamming on startup when verbose mode was enabled
IMP: Allow searching on 32P against series+publisher for titles that distinguish between series from different publishers
IMP: Better series matching when trying to find series on 32P
FIX: When metatagging, if a volume label was not provided within Mylar it would default to None (now defaults to v1)
IMP: (#1304) Attempt at better file parsing when utf-8 filenames are being parsed
FIX: Proper handling of the Infinity issue number when file-checking
FIX: When adding a series with annuals enabled, a newly released annual would not be shown on the annual subset table for the given series (and subsequently wouldn't be auto-marked as Wanted)
FIX: (#1306) Correct handling of the imported value when doing an import with moving files selected (would previously error out during the move for some imports)
FIX: cbz files being auto-imported would fail due to improper handling of the imported variable
FIX: Manage Issues now defaults the dropdown to the correct selected option
FIX: Manage Comics - fixed dropdown options for multiple selection of series (delete/pause/resume)
IMP: Added a 'delete stragglers' option to Story Arcs when deleting an arc, to ensure all traces of the arc are removed from the db
FIX: Manual/group metatagging would not tag properly if the START_YEAR_AS_VOLUME option was enabled
FIX: (#1313) NzbHydra wouldn't set the nzbid properly when using Failed Download handling/Retrying
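For reference, the new toggles all land in the [General] section of config.ini; a minimal sketch (alt_pull is named per the commit message, https_chain and allow_packs per the config-handling hunks below, and the values are illustrative only):

[General]
# 2 = populate the weekly pull-list from the external, pre-matched source
alt_pull = 2
# optional intermediate/CA chain file for HTTPS (#1309)
https_chain = /opt/mylar/chain.pem
# enable 32P pack downloads (also toggled per series)
allow_packs = 1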

This commit is contained in:
evilhero 2016-07-10 18:28:14 -04:00
parent 156119ffa4
commit 55eedb1869
26 changed files with 1727 additions and 767 deletions

View File

@ -72,6 +72,7 @@ def main():
parser.add_argument('-d', '--daemon', action='store_true', help='Run as a daemon')
parser.add_argument('-p', '--port', type=int, help='Force mylar to run on a specified port')
parser.add_argument('-b', '--backup', action='store_true', help='Will automatically backup & keep the last 2 copies of the .db & ini files prior to startup')
parser.add_argument('-w', '--noweekly', action='store_true', help='Turn off weekly pull list check on startup (quicker boot sequence)')
parser.add_argument('--datadir', help='Specify a directory where to store your data files')
parser.add_argument('--config', help='Specify a config file to use')
parser.add_argument('--nolaunch', action='store_true', help='Prevent browser from launching on startup')
@ -135,6 +136,11 @@ def main():
else:
mylar.SAFESTART = False
if args.noweekly:
mylar.NOWEEKLY = True
else:
mylar.NOWEEKLY = False
# Try to create the DATA_DIR if it doesn't exist
#if not os.path.exists(mylar.DATA_DIR):
# try:
@ -228,6 +234,7 @@ def main():
'enable_https': mylar.ENABLE_HTTPS,
'https_cert': mylar.HTTPS_CERT,
'https_key': mylar.HTTPS_KEY,
'https_chain': mylar.HTTPS_CHAIN,
'http_username': mylar.HTTP_USERNAME,
'http_password': mylar.HTTP_PASSWORD,
}
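The hunk above only shows https_chain being passed along with the other web-server options; a hedged sketch of how webstart.py presumably consumes it, assuming the dict above (call it web_config) is handed through and that CherryPy's pyOpenSSL adapter is in use (it exposes an ssl_certificate_chain attribute) — the exact wiring isn't shown in this hunk:

import cherrypy

def apply_https(web_config):
    if web_config['enable_https']:
        cherrypy.config.update({
            'server.ssl_certificate': web_config['https_cert'],
            'server.ssl_private_key': web_config['https_key'],
            # new in this commit: optional intermediate/CA chain file
            'server.ssl_certificate_chain': web_config['https_chain'],
        })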

View File

@ -313,18 +313,21 @@
<small>Alternate file-naming to be used when post-processing / renaming files instead of the actual title.</small>
</div>
<div class="row">
<div class="row">
<%
year_options = "Default - Keep the Year as is\nYear Removal - Remove issue publication year from searches (dangerous)\nFuzzy the Year - Increase & Decrease the issue publication year by one"
%>
<label>Year Options<a href="#" title="${year_options}"><img src="interfaces/default/images/info32.png" height="16" alt="" /></a></label>
<div class="row radio left clearfix">
<input type="radio" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="fuzzy_year" value="0" ${comicConfig['fuzzy_year0']} /><label>Default</label>
<input type="radio" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="fuzzy_year" value="1" ${comicConfig['fuzzy_year1']} /><label>Year Removal</label>
<input type="radio" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="fuzzy_year" value="2" ${comicConfig['fuzzy_year2']} /><label>Fuzzy the Year</label>
</div>
</div>
<input type="submit" value="Update"/>
<input type="radio" name="fuzzy_year" value="0" ${comicConfig['fuzzy_year0']} /> Default&nbsp;<input type="radio" name="fuzzy_year" value="1" ${comicConfig['fuzzy_year1']} /> Year Removal&nbsp;<input type="radio" name="fuzzy_year" value="2" ${comicConfig['fuzzy_year2']} /> Fuzzy the Year
</div>
%if mylar.ENABLE_32P and mylar.MODE_32P == 1:
<div class="row">
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="allow_packs" value="1" ${comicConfig['allow_packs']} /><label>Enable Pack Downloads<a href="#" title="Will allow downloading of multiple issues in one file (packs), but will search individual issues first"><img src="interfaces/default/images/info32.png" height="16" alt="" /></a></label>
</div>
%endif
<input type="submit" value="Update"/>
</fieldset>
</form>
</td>
@ -461,7 +464,7 @@
%endif
%if mylar.ENABLE_META:
<a href="#" title="Manually meta-tag issue" onclick="doAjaxCall('manual_metatag?dirName=${comic['ComicLocation'] |u}&issueid=${issue['IssueID']}&filename=${linky |u}&comicid=${issue['ComicID']}&comversion=${comic['ComicVersion']}',$(this),'table')" data-success="${comic['ComicName']} #${issue['Issue_Number']} successfully tagged."><img src="interfaces/default/images/comictagger.png" height="25" width="25" class="highqual" /></a>
<a href="#" title="Manually meta-tag issue" onclick="doAjaxCall('manual_metatag?dirName=${comic['ComicLocation'] |u}&issueid=${issue['IssueID']}&filename=${linky |u}&comicid=${issue['ComicID']}&comversion=${comic['ComicVersion']}&seriesyear=${comic['ComicYear']}',$(this),'table')" data-success="${comic['ComicName']} #${issue['Issue_Number']} successfully tagged."><img src="interfaces/default/images/comictagger.png" height="25" width="25" class="highqual" /></a>
%endif
%endif
<a href="#" title="Add to Reading List" onclick="doAjaxCall('addtoreadlist?IssueID=${issue['IssueID']}',$(this),'table')" data-success="${comic['ComicName']} #${issue['Issue_Number']} added to Reading List"><img src="interfaces/default/images/glasses-icon.png" height="25" width="25" class="highqual" /></a>
@ -801,6 +804,7 @@
{ 'sType': 'numeric', 'aTargets': [1] },
{ 'iDataSort': [1], 'aTargets': [2] }
],
"aLengthMenu": [[10, 25, 50, -1], [10, 25, 50, 'All' ]],
"oLanguage": {
"sLengthMenu":"",
"sEmptyTable": "No issue information available",

View File

@ -282,9 +282,9 @@
<label>SABnzbd API:</label>
<input type="text" name="sab_apikey" id="sab_apikey" value="${config['sab_api']}" size="28">
<!--
<input style="float:right" type="button"><image src="interfaces/default/images/submit.png" height="20" width="20" title="Attempt to auto-populate SABnzbd API" id="find_sabapi">
<a href="#" style="float:right" type="button" onclick="doAjaxCall('findsabapi',$(this))" data-success="Successfully retrieved SABnzbd key" data-error="Unable to retrieve SABnzbd key"><span class="ui-icon ui-icon-extlink"></span>GET API</a>
-->
<a href="#" style="float:right" type="button" id="find_sabapi" data-success="Sucessfully retrieved SABnzbd API" data-error="Error auto-populating SABnzbd API"><span class="ui-icon ui-icon-extlink"></span>Get API</a>
<a href="#" style="float:right" type="button" id="find_sabapi" data-success="Sucessfully retrieved SABnzbd API" data-error="Error auto-populating SABnzbd API"><span class="ui-icon ui-icon-extlink"></span>Get API</a>
</div>
</div>

View File

@ -279,12 +279,20 @@ table.display tr.even.gradeC {
background-color: #ebf5ff;
}
table.display tr.even.gradeE {
background-color: #F96178;
}
table.display tr.odd.gradeE {
background-color: #F96178;
}
table.display tr.even.gradeH {
background-color: #FFF5CC;
}
table.display tr.odd.gradeH {
background-color: #FFF5CC;
}
table.display tr.even.gradeL {
@ -340,6 +348,9 @@ table.display tr.gradeL #status {
}
table.display tr.gradeA td,
table.display tr.gradeC td,
table.display tr.gradeE td,
table.display tr.gradeH td,
table.display tr.gradeL td,
table.display tr.gradeX td,
table.display tr.gradeU td,
table.display tr.gradeP td,
@ -370,6 +381,14 @@ table.display_no_select tr.even.gradeC {
background-color: #ebf5ff;
}
table.display_no_select tr.even.gradeE {
background-color: #F96178;
}
table.display_no_select tr.odd.gradeE {
background-color: #F96178;
}
table.display_no_select tr.even.gradeH{
background-color: #FFF5CC;
}
@ -428,6 +447,9 @@ table.display_no_select tr.gradeL #status {
}
table.display_no_select tr.gradeA td,
table.display_no_select tr.gradeC td,
table.display_no_select tr.gradeE td,
table.display_no_select tr.gradeH td,
table.display_no_select tr.gradeL td,
table.display_no_select tr.gradeX td,
table.display_no_select tr.gradeU td,
table.display_no_select tr.gradeP td,

View File

@ -140,7 +140,7 @@
%endif
[<a href="deleteimport?ComicName=${result['ComicName']| u}&volume=${result['Volume']}&DynamicName=${result['DynamicName']}&Status=${result['Status']}">Remove</a>]
%if result['SRID'] is not None and result['Status'] != 'Imported':
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}&DynamicName=${result['DynamicName']}">Select</a>]
[<a title="Manual intervention is required - more than one result when attempting to import" href="importresults_popup?SRID=${result['SRID']}&ComicName=${result['ComicName'] |u}&imported=yes&ogcname=${result['ComicName'] |u}&DynamicName=${result['DynamicName']}&Volume=${result['Volume']}">Select</a>]
%endif
</td>
</tr>

View File

@ -53,7 +53,7 @@
<%
calledby = "web-import"
%>
<td class="add"><a href="addbyid?comicid=${result['comicid']}&calledby=${calledby}&imported='yes'&ogcname=${result['ogcname']}"><span class="ui-icon ui-icon-plus"></span>Add this Comic</a></td>
<td class="add"><a href="addbyid?comicid=${result['comicid']}&calledby=${calledby}&imported=${imported}&ogcname=${result['ogcname']}"><span class="ui-icon ui-icon-plus"></span>Add this Comic</a></td>
%else:
<td class="add"><span class="ui-icon ui-icon-plus"></span>Already in Library</td>
%endif

View File

@ -57,7 +57,7 @@
grade = 'A'
%>
<tr>
<td id="select"><input type="checkbox" name="${comic['ComicID']}" class="checkbox" /></td>
<td id="select"><input type="checkbox" name="${comic['ComicName']}[${comic['ComicYear']}]" value="${comic['ComicID']}" class="checkbox" /></td>
<td id="albumart"><div><img src="${comic['ComicImage']}" height="75" width="50"></div></td>
<td id="name"><span title="${comic['ComicSortName']}"></span><a href="comicDetails?ComicID=${comic['ComicID']}">${comic['ComicName']} (${comic['ComicYear']})</a></td>
<td id="status">${comic['recentstatus']}</td>

View File

@ -96,7 +96,20 @@
<td id="years">${item['SpanYears']}</td>
<td id="have"><span title="${item['percent']}"></span>${css}<div style="width:${item['percent']}%"><span class="progressbar-front-text">${item['Have']}/${item['Total']}</span></div></td>
<td id="options">
<a title="Remove from Story Arc Watchlist" onclick="doAjaxCall('removefromreadlist?StoryArcID=${item['StoryArcID']}&ArcName=${item['StoryArc']}',$(this),'table')" data-success="Sucessfully removed ${item['StoryArc']} from list."><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
<a href="#" id="remove_confirm" title="Remove Arc from Watchlist" onclick="openDelete(${item['StoryArc']| u},${item['StoryArcID']});"><img src="interfaces/default/images/skipped_icon.png" height="25" width="25" /></a>
<div id="dialogit" title="Delete Story Arc Confirmation" style="display:none" class="configtable">
<form action="removefromreadlist" method="GET" style="vertical-align: middle; text-align: center">
<div class="row checkbox left clearfix">
</br>
<h1><center>${['storyarc']}</center></h1></br>
<input type="checkbox" style="vertical-align: middle; margin: 3px; margin-top: -1px;" name="delete_type" id="deleteCheck" value="1" ${checked(delete_type)} /><label>Remove Story Arc based on Arc Name</br> (default is ID)</label>
</div>
</br><input type="submit" value="Delete Story Arc">
<input type="hidden" name="ArcName" value=${['storyarc']}>
<input type="hidden" name="StoryArcID" value=${['storyarcid']}>
</form>
</div>
%if item['CV_ArcID']:
<a title="Refresh Series" onclick="doAjaxCall('addStoryArc_thread?arcid=${item['StoryArcID']}&cvarcid=${item['CV_ArcID']}&storyarcname=${item['StoryArc']}&arcrefresh=True',$(this),'table')" data-success="Now refreshing ${item['StoryArc']}."><img src="interfaces/default/images/refresh.png" height="25" width="25" /></a>
%endif
@ -114,7 +127,14 @@
<%def name="javascriptIncludes()">
<script src="js/libs/jquery.dataTables.min.js"></script>
<script>
function openDelete(StoryArc, StoryArcID) {
alert('test');
$("#dialogit").dialog({
modal:true
}).data({storyarc: StoryArc, storyarcid: StoryArcID});
};
</script>
<script type="text/javascript">
$("#menu_link_scan").click(function() {
$('#chkoptions').submit();

View File

@ -117,7 +117,7 @@
<tbody>
%for upcome in upcoming:
<tr class="gradeZ">
<td id="comicname"><a href="comicDetails?ComicID=${upcome['ComicID']}">${upcome['DisplayComicName']}</a></td>
<td id="comicname"><a href="comicDetails?ComicID=${upcome['ComicID']}">${upcome['ComicName']}</a></td>
<td id="issuenumber">${upcome['IssueNumber']}</td>
<td id="reldate">${upcome['IssueDate']}</td>
<td id="status">${upcome['Status']}</td>
@ -184,7 +184,7 @@
<tbody>
%for f_upcome in futureupcoming:
<tr class="gradeZ">
<td id="comicname">${f_upcome['DisplayComicName']}</td>
<td id="comicname">${f_upcome['ComicName']}</td>
<td id="issuenumber">${f_upcome['IssueNumber']}</td>
<td id="reldate">${f_upcome['IssueDate']}</td>
<td id="status">${f_upcome['Status']}</td>

View File

@ -11,30 +11,38 @@
<a id="menu_link_refresh" href="manualpull">Refresh Pull-list</a>
<a id="menu_link_retry" href="pullrecreate">Recreate Pull-list</a>
<a id="menu_link_scan" class="button">Download</a>
<!-- <a id="menu_link_scan" onclick="doAjaxCall('MassWeeklyDownload?pulldate=${pulldate}, $(this)),'table'" href="#" data-success="Now Downloading Comics to : ${mylar.GRABBAG_DIR}">Download.</a> -->
</div>
</div>
<a href="home" class="back">&laquo; Back to overview</a>
</%def>
<%def name="body()">
<div class="clearfix">
<div>
</br></br>
<table width="100%" align="center">
<tr>
<td style="vertical-align: middle; text-align: right"><a href="pullist?week=${weekinfo['prev_weeknumber']}&year=${weekinfo['year']}" title="Previous Week (${weekinfo['prev_weeknumber']})"><img src="interfaces/default/images/prev.gif" width="16" height="18" Alt="Previous"/></td>
<td style="vertical-align: middle; text-align: center">
%if wantedcount == 0:
<h1>Weekly Pull list for : ${pulldate}</h1>
<h1><center>Weekly Pull list for week ${weekinfo['weeknumber']} :</br>${weekinfo['startweek']} - ${weekinfo['endweek']}</center></h1>
%else:
<h1>Weekly Pull list for : ${pulldate} (${wantedcount})</h1>
<h1><center>Weekly Pull list for week ${weekinfo['weeknumber']} :</br>${weekinfo['startweek']} - ${weekinfo['endweek']} (${wantedcount})</center></h1>
%endif
</td><td style="vertical-align: middle; text-align: left">
<a href="pullist?week=${weekinfo['next_weeknumber']}&year=${weekinfo['year']}" title="Next Week (${weekinfo['next_weeknumber']})"><img src="interfaces/default/images/next.gif" width="16" height="18" Alt="Next"/></a></td>
<tr>
</table>
</div>
<div>
<form action="MassWeeklyDownload" method="GET" id="MassDownload">
<fieldset>
<div class="row">
<input type="checkbox" name="weekfolder" id="weekfolder" value="1" ${checked(mylar.WEEKFOLDER)} /><label>Store in Weekly Directory</label>
<small>Create weekly folder (${weekfold})</small>
<div class="row checkbox left clearfix">
</br>
<input type="checkbox" name="weekfolder" id="weekfolder" value="1" ${checked(mylar.WEEKFOLDER)} /><label>Store in Weekly Directory (${weekfold})</label>
</div>
<input type="hidden" name="pulldate" value=${pulldate}>
<input type="hidden" name="pulldate" value=${weekinfo}>
<input type="submit" style="display:none" />
</fieldset>
</form>
@ -67,12 +75,16 @@
if weekly['AUTOWANT'] == True:
grade = 'H'
#if the comicid is present, but issue isn't marked as wanted.
if weekly['HAVEIT'] == 'Yes' and weekly['STATUS'] == 'Skipped':
grade = 'E'
%>
<tr class="grade${grade}">
%if pullfilter is True:
<td class="publisher">${weekly['PUBLISHER']}</td>
<td class="comicname">
%if weekly['COMICID'] == '' or weekly['COMICID'] is None:
%if weekly['HAVEIT'] == 'No':
${weekly['COMIC']}
%else:
<a href="comicDetails?ComicID=${weekly['COMICID']}">${weekly['COMIC']}</a>
@ -84,11 +96,16 @@
%else:
<td class="status">${weekly['STATUS']}
%if weekly['STATUS'] == 'Skipped':
%if weekly['ISSUE'] == '1' or weekly['ISSUE'] == '0':
<a href="#" title="Watch for this series" onclick="doAjaxCall('add2futurewatchlist?ComicName=${weekly['COMIC'] |u}&Issue=${weekly['ISSUE']}&Publisher=${weekly['PUBLISHER']}&ShipDate=${pulldate}', $(this),'table')" data-success="${weekly['COMIC']} is now on auto-watch/add."><span class="ui-icon ui-icon-plus"></span>Watch</a>
%if weekly['COMICID'] != '' and weekly['COMICID'] is not None:
<a href="#" title="auto-add by ID available for this series" onclick="doAjaxCall('addbyid?comicid=${weekly['COMICID']}&calledby=True',$(this),'table')" data-success="${weekly['COMIC']} is now being added to your wachlist."><span class="ui-icon ui-icon-plus"></span>add series</a>
%else:
%if weekly['ISSUE'] == '1' or weekly['ISSUE'] == '0':
<a href="#" title="Watch for this series" onclick="doAjaxCall('add2futurewatchlist?ComicName=${weekly['COMIC'] |u}&Issue=${weekly['ISSUE']}&Publisher=${weekly['PUBLISHER']}&ShipDate=${weekinfo}', $(this),'table')" data-success="${weekly['COMIC']} is now on auto-watch/add."><span class="ui-icon ui-icon-plus"></span>Watch</a>
%else:
<a href="searchit?name=${weekly['COMIC'] | u}&issue=${weekly['ISSUE']}&mode=pullseries" title="Add this series to your watchlist"><span class="ui-icon ui-icon-plus"></span>add series</a>
%endif
%endif
<a href="searchit?name=${weekly['COMIC'] | u}&issue=${weekly['ISSUE']}&mode=pullseries"><span class="ui-icon ui-icon-plus"></span>add series</a>
<a href="queueissue?ComicName=${weekly['COMIC'] | u}&ComicIssue=${weekly['ISSUE']}&mode=pullwant&Publisher=${weekly['PUBLISHER']}"><span class="ui-icon ui-icon-plus"></span>one off</a>
<a href="queueissue?ComicName=${weekly['COMIC'] | u}&ComicIssue=${weekly['ISSUE']}&mode=pullwant&Publisher=${weekly['PUBLISHER']}" title="Just grab it"><span class="ui-icon ui-icon-plus"></span>one off</a>
%endif
%endif
</td>
@ -151,7 +168,7 @@
"bStateSave": true,
"iDisplayLength": 25,
"sPaginationType": "full_numbers",
"aaSorting": [[0, 'desc']]
"aaSorting": [[0, 'asc']]
});
resetFilters("weekly");
setTimeout(function(){

View File

@ -266,14 +266,16 @@ class PostProcessor(object):
manual_list = []
for fl in filelist['comiclist']:
as_d = filechecker.FileChecker(watchcomic=fl['series_name'].decode('utf-8'))
as_d = filechecker.FileChecker()#watchcomic=fl['series_name'].decode('utf-8'))
as_dinfo = as_d.dynamic_replace(fl['series_name'])
mod_seriesname = as_dinfo['mod_seriesname']
loopchk = []
for x in alt_list:
cname = x['AS_DyComicName']
for ab in x['AS_Alt']:
if re.sub('[\|\s]', '', mod_seriesname.lower()).strip() in re.sub('[\|\s]', '', ab.lower()).strip():
tmp_ab = re.sub(' ', '', ab)
tmp_mod_seriesname = re.sub(' ', '', mod_seriesname)
if re.sub('\|', '', tmp_mod_seriesname.lower()).strip() == re.sub('\|', '', tmp_ab.lower()).strip():
if not any(re.sub('[\|\s]', '', cname.lower()) == x for x in loopchk):
loopchk.append(re.sub('[\|\s]', '', cname.lower()))
@ -446,7 +448,7 @@ class PostProcessor(object):
tmp_watchlist_vol = '1'
else:
tmp_watchlist_vol = re.sub("[^0-9]", "", watch_values['ComicVersion']).strip()
if not any([watchmatch['series_volume'] != 'None', watchmatch['series_volume'] is not None]):
if any([watchmatch['series_volume'] != 'None', watchmatch['series_volume'] is not None]):
tmp_watchmatch_vol = re.sub("[^0-9]","", watchmatch['series_volume']).strip()
if len(tmp_watchmatch_vol) == 4:
if int(tmp_watchmatch_vol) == int(watch_values['SeriesYear']):
@ -459,7 +461,8 @@ class PostProcessor(object):
logger.fdebug(module + '[ISSUE-VERIFY][SeriesYear-Volume MATCH] Volume label of series Year of ' + str(watch_values['ComicVersion']) + ' matched to volume label of ' + str(watchmatch['series_volume']))
else:
logger.fdebug(module + '[ISSUE-VERIFY][SeriesYear-Volume FAILURE] Volume label of Series Year of ' + str(watch_values['ComicVersion']) + ' DID NOT match to volume label of ' + str(watchmatch['series_volume']))
datematch = "False"
continue
#datematch = "False"
else:
if any([tmp_watchlist_vol is None, tmp_watchlist_vol == 'None', tmp_watchlist_vol == '']):
logger.fdebug(module + '[ISSUE-VERIFY][NO VOLUME PRESENT] No Volume label present for series. Dropping down to Issue Year matching.')

View File

@ -126,6 +126,7 @@ HTTP_ROOT = None
ENABLE_HTTPS = False
HTTPS_CERT = None
HTTPS_KEY = None
HTTPS_CHAIN = None
HTTPS_FORCE_ON = False
HOST_RETURN = None
API_ENABLED = False
@ -356,6 +357,8 @@ ENABLE_TORRENTS = 0
TORRENT_DOWNLOADER = None #0 = watchfolder, #1 = uTorrent, #2 = rTorrent, #3 = transmission
MINSEEDS = 0
ALLOW_PACKS = False
USE_WATCHDIR = False
TORRENT_LOCAL = False
LOCAL_WATCHDIR = None
@ -424,7 +427,7 @@ def check_setting_int(config, cfg_name, item_name, def_val):
except:
config[cfg_name] = {}
config[cfg_name][item_name] = my_val
logger.debug(item_name + " -> " + str(my_val))
#logger.debug(item_name + " -> " + str(my_val))
return my_val
################################################################################
@ -441,10 +444,10 @@ def check_setting_str(config, cfg_name, item_name, def_val, log=True):
config[cfg_name] = {}
config[cfg_name][item_name] = my_val
if log:
logger.debug(item_name + " -> " + my_val)
else:
logger.debug(item_name + " -> ******")
#if log:
# logger.debug(item_name + " -> " + my_val)
#else:
# logger.debug(item_name + " -> ******")
return my_val
@ -452,7 +455,7 @@ def initialize():
with INIT_LOCK:
global __INITIALIZED__, DBCHOICE, DBUSER, DBPASS, DBNAME, DYNAMIC_UPDATE, COMICVINE_API, DEFAULT_CVAPI, CVAPI_RATE, CV_HEADERS, BLACKLISTED_PUBLISHERS, FULL_PATH, PROG_DIR, VERBOSE, DAEMON, UPCOMING_SNATCHED, COMICSORT, DATA_DIR, CONFIG_FILE, CFG, CONFIG_VERSION, LOG_DIR, CACHE_DIR, MAX_LOGSIZE, OLDCONFIG_VERSION, OS_DETECT, \
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, NOWEEKLY, AUTO_UPDATE, \
queue, LOCAL_IP, EXT_IP, HTTP_PORT, HTTP_HOST, HTTP_USERNAME, HTTP_PASSWORD, HTTP_ROOT, ENABLE_HTTPS, HTTPS_CERT, HTTPS_KEY, HTTPS_CHAIN, HTTPS_FORCE_ON, HOST_RETURN, API_ENABLED, API_KEY, DOWNLOAD_APIKEY, LAUNCH_BROWSER, GIT_PATH, SAFESTART, NOWEEKLY, AUTO_UPDATE, \
IMPORT_STATUS, IMPORT_FILES, IMPORT_TOTALFILES, IMPORT_CID_COUNT, IMPORT_PARSED_COUNT, IMPORT_FAILURE_COUNT, CHECKENABLED, \
CURRENT_VERSION, LATEST_VERSION, CHECK_GITHUB, CHECK_GITHUB_ON_STARTUP, CHECK_GITHUB_INTERVAL, GIT_USER, GIT_BRANCH, USER_AGENT, DESTINATION_DIR, MULTIPLE_DEST_DIRS, CREATE_FOLDERS, DELETE_REMOVE_DIR, \
DOWNLOAD_DIR, USENET_RETENTION, SEARCH_INTERVAL, NZB_STARTUP_SEARCH, INTERFACE, DUPECONSTRAINT, DDUMP, DUPLICATE_DUMP, AUTOWANT_ALL, AUTOWANT_UPCOMING, ZERO_LEVEL, ZERO_LEVEL_N, COMIC_COVER_LOCAL, HIGHCOUNT, \
@ -464,7 +467,7 @@ def initialize():
USE_UTORRENT, UTORRENT_HOST, UTORRENT_USERNAME, UTORRENT_PASSWORD, UTORRENT_LABEL, USE_TRANSMISSION, TRANSMISSION_HOST, TRANSMISSION_USERNAME, TRANSMISSION_PASSWORD, \
ENABLE_META, CMTAGGER_PATH, CBR2CBZ_ONLY, CT_TAG_CR, CT_TAG_CBL, CT_CBZ_OVERWRITE, UNRAR_CMD, CT_SETTINGSPATH, CMTAG_START_YEAR_AS_VOLUME, UPDATE_ENDED, INDIE_PUB, BIGGIE_PUB, IGNORE_HAVETOTAL, SNATCHED_HAVETOTAL, PROVIDER_ORDER, \
dbUpdateScheduler, searchScheduler, RSSScheduler, WeeklyScheduler, VersionScheduler, FolderMonitorScheduler, \
ENABLE_TORRENTS, TORRENT_DOWNLOADER, MINSEEDS, USE_WATCHDIR, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
ALLOW_PACKS, ENABLE_TORRENTS, TORRENT_DOWNLOADER, MINSEEDS, USE_WATCHDIR, TORRENT_LOCAL, LOCAL_WATCHDIR, TORRENT_SEEDBOX, SEEDBOX_HOST, SEEDBOX_PORT, SEEDBOX_USER, SEEDBOX_PASS, SEEDBOX_WATCHDIR, \
ENABLE_RSS, RSS_CHECKINTERVAL, RSS_LASTRUN, FAILED_DOWNLOAD_HANDLING, FAILED_AUTO, ENABLE_TORRENT_SEARCH, ENABLE_KAT, KAT_PROXY, KAT_VERIFY, ENABLE_32P, MODE_32P, KEYS_32P, RSSFEED_32P, USERNAME_32P, PASSWORD_32P, AUTHKEY_32P, PASSKEY_32P, FEEDINFO_32P, VERIFY_32P, SNATCHEDTORRENT_NOTIFY, \
PROWL_ENABLED, PROWL_PRIORITY, PROWL_KEYS, PROWL_ONSNATCH, NMA_ENABLED, NMA_APIKEY, NMA_PRIORITY, NMA_ONSNATCH, PUSHOVER_ENABLED, PUSHOVER_PRIORITY, PUSHOVER_APIKEY, PUSHOVER_USERKEY, PUSHOVER_ONSNATCH, BOXCAR_ENABLED, BOXCAR_ONSNATCH, BOXCAR_TOKEN, \
PUSHBULLET_ENABLED, PUSHBULLET_APIKEY, PUSHBULLET_DEVICEID, PUSHBULLET_ONSNATCH, LOCMOVE, NEWCOM_DIR, FFTONEWCOM_DIR, \
@ -499,6 +502,8 @@ def initialize():
HTTPS_CERT = os.path.join(DATA_DIR, 'server.crt')
if HTTPS_KEY == '':
HTTPS_KEY = os.path.join(DATA_DIR, 'server.key')
if HTTPS_CHAIN == '':
HTTPS_CHAIN = os.path.join(DATA_DIR, 'chain.pem')
CONFIG_VERSION = check_setting_str(CFG, 'General', 'config_version', '')
DBCHOICE = check_setting_str(CFG, 'General', 'dbchoice', 'sqlite3')
@ -518,6 +523,7 @@ def initialize():
ENABLE_HTTPS = bool(check_setting_int(CFG, 'General', 'enable_https', 0))
HTTPS_CERT = check_setting_str(CFG, 'General', 'https_cert', '')
HTTPS_KEY = check_setting_str(CFG, 'General', 'https_key', '')
HTTPS_CHAIN = check_setting_str(CFG, 'General', 'https_chain', '')
HTTPS_FORCE_ON = bool(check_setting_int(CFG, 'General', 'https_force_on', 0))
HOST_RETURN = check_setting_str(CFG, 'General', 'host_return', '')
API_ENABLED = bool(check_setting_int(CFG, 'General', 'api_enabled', 0))
@ -581,6 +587,7 @@ def initialize():
IGNORE_HAVETOTAL = bool(check_setting_int(CFG, 'General', 'ignore_havetotal', 0))
SNATCHED_HAVETOTAL = bool(check_setting_int(CFG, 'General', 'snatched_havetotal', 0))
SYNO_FIX = bool(check_setting_int(CFG, 'General', 'syno_fix', 0))
ALLOW_PACKS = bool(check_setting_int(CFG, 'General', 'allow_packs', 0))
SEARCH_DELAY = check_setting_int(CFG, 'General', 'search_delay', 1)
GRABBAG_DIR = check_setting_str(CFG, 'General', 'grabbag_dir', '')
if not GRABBAG_DIR:
@ -1299,6 +1306,7 @@ def config_write():
new_config['General']['enable_https'] = int(ENABLE_HTTPS)
new_config['General']['https_cert'] = HTTPS_CERT
new_config['General']['https_key'] = HTTPS_KEY
new_config['General']['https_chain'] = HTTPS_CHAIN
new_config['General']['https_force_on'] = int(HTTPS_FORCE_ON)
new_config['General']['host_return'] = HOST_RETURN
new_config['General']['api_enabled'] = int(API_ENABLED)
@ -1368,6 +1376,7 @@ def config_write():
new_config['General']['ignore_havetotal'] = int(IGNORE_HAVETOTAL)
new_config['General']['snatched_havetotal'] = int(SNATCHED_HAVETOTAL)
new_config['General']['syno_fix'] = int(SYNO_FIX)
new_config['General']['allow_packs'] = int(ALLOW_PACKS)
new_config['General']['search_delay'] = SEARCH_DELAY
new_config['General']['grabbag_dir'] = GRABBAG_DIR
new_config['General']['highcount'] = HIGHCOUNT
@ -1633,12 +1642,12 @@ def dbcheck():
c_error = 'sqlite3.OperationalError'
c=conn.cursor()
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT, DynamicComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS comics (ComicID TEXT UNIQUE, ComicName TEXT, ComicSortName TEXT, ComicYear TEXT, DateAdded TEXT, Status TEXT, IncludeExtras INTEGER, Have INTEGER, Total INTEGER, ComicImage TEXT, ComicPublisher TEXT, ComicLocation TEXT, ComicPublished TEXT, NewPublish TEXT, LatestIssue TEXT, LatestDate TEXT, Description TEXT, QUALalt_vers TEXT, QUALtype TEXT, QUALscanner TEXT, QUALquality TEXT, LastUpdated TEXT, AlternateSearch TEXT, UseFuzzy TEXT, ComicVersion TEXT, SortOrder INTEGER, DetailURL TEXT, ForceContinuing INTEGER, ComicName_Filesafe TEXT, AlternateFileName TEXT, ComicImageURL TEXT, ComicImageALTURL TEXT, DynamicComicName TEXT, AllowPacks TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS issues (IssueID TEXT, ComicName TEXT, IssueName TEXT, Issue_Number TEXT, DateAdded TEXT, Status TEXT, Type TEXT, ComicID TEXT, ArtworkURL Text, ReleaseDate TEXT, Location TEXT, IssueDate TEXT, Int_IssueNumber INT, ComicSize TEXT, AltIssueNumber TEXT, IssueDate_Edit TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS snatched (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Size INTEGER, DateAdded TEXT, Status TEXT, FolderName TEXT, ComicID TEXT, Provider TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS upcoming (ComicName TEXT, IssueNumber TEXT, ComicID TEXT, IssueID TEXT, IssueDate TEXT, Status TEXT, DisplayComicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS nzblog (IssueID TEXT, NZBName TEXT, SARC TEXT, PROVIDER TEXT, ID TEXT, AltNZBName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE TEXT, PUBLISHER TEXT, ISSUE TEXT, COMIC VARCHAR(150), EXTRA TEXT, STATUS TEXT, ComicID TEXT, IssueID TEXT, CV_Last_Update TEXT, DynamicName TEXT, weeknumber TEXT, year TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS weekly (SHIPDATE TEXT, PUBLISHER TEXT, ISSUE TEXT, COMIC VARCHAR(150), EXTRA TEXT, STATUS TEXT, ComicID TEXT, IssueID TEXT, CV_Last_Update TEXT, DynamicName TEXT, weeknumber TEXT, year TEXT, rowid INTEGER PRIMARY KEY)')
# c.execute('CREATE TABLE IF NOT EXISTS sablog (nzo_id TEXT, ComicName TEXT, ComicYEAR TEXT, ComicIssue TEXT, name TEXT, nzo_complete TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS importresults (impID TEXT, ComicName TEXT, ComicYear TEXT, Status TEXT, ImportDate TEXT, ComicFilename TEXT, ComicLocation TEXT, WatchMatch TEXT, DisplayName TEXT, SRID TEXT, ComicID TEXT, IssueID TEXT, Volume TEXT, IssueNumber TEXT, DynamicName TEXT)')
c.execute('CREATE TABLE IF NOT EXISTS readlist (IssueID TEXT, ComicName TEXT, Issue_Number TEXT, Status TEXT, DateAdded TEXT, Location TEXT, inCacheDir TEXT, SeriesYear TEXT, ComicID TEXT, StatusChange TEXT)')
@ -1738,6 +1747,12 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN NewPublish TEXT')
try:
c.execute('SELECT AllowPacks from comics')
except sqlite3.OperationalError:
c.execute('ALTER TABLE comics ADD COLUMN AllowPacks TEXT')
try:
c.execute('SELECT DynamicComicName from comics')
if DYNAMIC_UPDATE < 3:
@ -1902,6 +1917,11 @@ def dbcheck():
except sqlite3.OperationalError:
c.execute('ALTER TABLE weekly ADD COLUMN year TEXT')
try:
c.execute('SELECT rowid from weekly')
except sqlite3.OperationalError:
c.execute('ALTER TABLE weekly ADD COLUMN rowid INTEGER PRIMARY KEY')
## -- Nzblog Table --
try:

View File

@ -7,6 +7,8 @@ import lib.requests as requests
from bs4 import BeautifulSoup
from cookielib import LWPCookieJar
from operator import itemgetter
import mylar
from mylar import logger, filechecker
@ -43,6 +45,7 @@ class info32p(object):
self.reauthenticate = reauthenticate
self.searchterm = searchterm
self.test = test
self.publisher_list = {'Entertainment', 'Press', 'Comics', 'Publishing', 'Comix', 'Studios!'}
def authenticate(self):
@ -86,6 +89,11 @@ class info32p(object):
authfound = False
logger.info(self.module + ' Attempting to integrate with all of your 32P Notification feeds.')
#get inkdrop count ...
#user_info = soup.find_all(attrs={"class": "stat"})
#inkdrops = user_info[0]['title']
#logger.info('INKDROPS: ' + str(inkdrops))
for al in all_script2:
alurl = al['href']
if 'auth=' in alurl and 'torrents_notify' in alurl and not authfound:
@ -143,19 +151,35 @@ class info32p(object):
def searchit(self):
with requests.Session() as s:
#self.searchterm is a tuple containing series name, issue number and volume.
#self.searchterm is a tuple containing series name, issue number, volume and publisher.
series_search = self.searchterm['series']
annualize = False
if 'Annual' in series_search:
series_search = re.sub(' Annual', '', series_search).strip()
annualize = True
issue_search = self.searchterm['issue']
volume_search = self.searchterm['volume']
publisher_search = self.searchterm['publisher']
spl = [x for x in self.publisher_list if x in publisher_search]
for x in spl:
publisher_search = re.sub(x, '', publisher_search).strip()
logger.info('publisher search set to : ' + publisher_search)
#generate the dynamic name of the series here so we can match it up
as_d = filechecker.FileChecker()
as_dinfo = as_d.dynamic_replace(series_search)
mod_series = as_dinfo['mod_seriesname']
as_puinfo = as_d.dynamic_replace(publisher_search)
pub_series = as_puinfo['mod_seriesname']
logger.info('series_search: ' + series_search)
if '/' in series_search:
series_search = series_search[:series_search.find('/')]
if ':' in series_search:
series_search = series_search[:series_search.find(':')]
if ',' in series_search:
series_search = series_search[:series_search.find(',')]
url = 'https://32pag.es/torrents.php' #?action=serieslist&filter=' + series_search #&filter=F
params = {'action': 'serieslist', 'filter': series_search}
@ -169,6 +193,8 @@ class info32p(object):
results = soup.find_all("a", {"class":"object-qtip"},{"data-type":"torrentgroup"})
data = []
pdata = []
pubmatch = False
for r in results:
torrentid = r['data-id']
@ -177,18 +203,32 @@ class info32p(object):
as_d = filechecker.FileChecker()
as_dinfo = as_d.dynamic_replace(torrentname)
seriesresult = as_dinfo['mod_seriesname']
logger.info('searchresult: ' + seriesresult + ' --- ' + mod_series)
logger.info('searchresult: ' + seriesresult + ' --- ' + mod_series + '[' + publisher_search + ']')
if seriesresult == mod_series:
logger.info('[MATCH] ' + torrentname + ' [' + str(torrentid) + ']')
data.append({"id": torrentid,
"series": torrentname})
elif publisher_search in seriesresult:
tmp_torrentname = re.sub(publisher_search, '', seriesresult).strip()
as_t = filechecker.FileChecker()
as_tinfo = as_t.dynamic_replace(tmp_torrentname)
if as_tinfo['mod_seriesname'] == mod_series:
logger.info('[MATCH] ' + torrentname + ' [' + str(torrentid) + ']')
pdata.append({"id": torrentid,
"series": torrentname})
pubmatch = True
logger.info(str(len(data)) + ' series listed for searching that match.')
if len(data) == 1:
if len(data) == 1 or len(pdata) == 1:
logger.info(str(len(data)) + ' series match the title being searched for')
if len(pdata) == 1:
dataset = pdata[0]['id']
else:
dataset = data[0]['id']
payload = {'action': 'groupsearch',
'id': data[0]['id'],
'id': dataset,
'issue': issue_search}
#in order to match up against 0-day stuff, volume has to be none at this point
#when doing other searches tho, this should be allowed to go through
@ -202,7 +242,7 @@ class info32p(object):
d = s.get(url, params=payload, verify=True)
results32p = []
results = {}
resultlist = {}
try:
searchResults = d.json()
except:
@ -213,6 +253,7 @@ class info32p(object):
results32p.append({'link': a['id'],
'title': self.searchterm['series'] + ' v' + a['volume'] + ' #' + a['issues'],
'filesize': a['size'],
'issues': a['issues'],
'pack': a['pack'],
'format': a['format'],
'language': a['language'],
@ -220,13 +261,14 @@ class info32p(object):
'leechers': a['leechers'],
'scanner': a['scanner'],
'pubdate': datetime.datetime.fromtimestamp(float(a['upload_time'])).strftime('%c')})
results['entries'] = results32p
resultlist['entries'] = sorted(results32p, key=itemgetter('pack','title'), reverse=False)
else:
results = 'no results'
resultlist = 'no results'
else:
results = 'no results'
resultlist = 'no results'
return results
return resultlist
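A minimal caller sketch for searchit(), assuming the constructor takes the searchterm dict whose keys are read at the top of the method (series/issue/volume/publisher); names and values here are illustrative:

searchterm = {'series': 'Invincible',
              'issue': '128',
              'volume': '1',
              'publisher': 'Image Comics'}
results = info32p(searchterm=searchterm).searchit()
if results != 'no results':
    for entry in results['entries']:
        # entries come back sorted by ('pack', 'title'), so individual
        # issues are offered before packs
        print entry['title'], entry['pack'], entry['filesize']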
class LoginSession(object):
def __init__(self, un, pw, session_path=None):
@ -296,26 +338,27 @@ class info32p(object):
try:
r = self.ses.get(u, params=params, timeout=60, allow_redirects=False, cookies=testcookie)
except Exception as e:
print "Got an exception trying to GET from to: %s", u
logger.error("Got an exception trying to GET from to:" + u)
self.error = {'status':'error', 'message':'exception trying to retrieve site'}
return False
if r.status_code != 200:
if r.status_code == 302:
newloc = r.headers.get('location', '')
print "Got redirect from the POST-ajax action=login GET: %s", newloc
logger.warn("Got redirect from the POST-ajax action=login GET:" + newloc)
self.error = {'status':'redirect-error', 'message':'got redirect from POST-ajax login action : ' + newloc}
else:
print "Got bad status code in the POST-ajax action=login GET: %d", r.status_code
logger.error("Got bad status code in the POST-ajax action=login GET:" + str(r.status_code))
self.error = {'status':'bad status code', 'message':'bad status code received in the POST-ajax login action :' + str(r.status_code)}
return False
try:
j = r.json()
except:
print "Error - response from session-based skey check was not JSON: %s", r.text
logger.warn("Error - response from session-based skey check was not JSON: %s",r.text)
return False
#logger.info(j)
self.uid = j['response']['id']
self.authkey = j['response']['authkey']
self.passkey = pk = j['response']['passkey']
@ -342,26 +385,26 @@ class info32p(object):
try:
r = self.ses.post(u, data=postdata, timeout=60, allow_redirects=False)
except Exception as e:
print "Got an exception when trying to login to %s POST", u
logger.error("Got an exception when trying to login to %s POST", u)
self.error = {'status':'exception', 'message':'Exception when trying to login'}
return False
if r.status_code != 200:
print "Got bad status code from login POST: %d\n%s\n%s", r.status_code, r.text, r.headers
logger.warn("Got bad status code from login POST: %d\n%s\n%s", r.status_code, r.text, r.headers)
self.error = {'status':'Bad Status code', 'message':(r.status_code, r.text, r.headers)}
return False
try:
d = r.json()
except:
print "The data returned by the login page was not JSON: %s", r.text
logger.error("The data returned by the login page was not JSON: %s", r.text)
self.error = {'status':'JSON not returned', 'message':r.text}
return False
if d['status'] == 'success':
return True
print "Got unexpected status result: %s", d
logger.error("Got unexpected status result: %s", d)
self.error = d
return False
@ -409,12 +452,12 @@ class info32p(object):
if self.cookie_exists('session'):
self.ses.cookies.save(ignore_discard=True)
if (not self.test_skey_valid()):
console.error("Bad error: The attempt to get your attributes after successful login failed!")
logger.error("Bad error: The attempt to get your attributes after successful login failed!")
self.error = {'status': 'Bad error', 'message': 'Attempt to get attributes after successful login failed.'}
return False
return True
print "Missing session cookie after successful login: %s", self.ses.cookies
logger.warn("Missing session cookie after successful login: %s", self.ses.cookies)
self.ses.cookies.clear()
self.ses.cookies.save()
return False

View File

@ -81,7 +81,7 @@ def run(dirName, nzbName=None, issueid=None, comversion=None, manual=None, filen
if mylar.CMTAG_START_YEAR_AS_VOLUME:
comversion = 'V' + str(comversion)
else:
if comversion is None or comversion == '':
if any([comversion is None, comversion == '', comversion == 'None']):
comversion = '1'
comversion = re.sub('[^0-9]', '', comversion).strip()
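# e.g. a series with no volume set reaches here as the string 'None'
# (stringified when passed through the template's comversion argument),
# so it now falls through to '1' and the file is tagged as v1 instead of None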

View File

@ -75,6 +75,8 @@ def pulldetails(comicid, type, issueid=None, offset=1, arclist=None, comicidlist
PULLURL = mylar.CVURL + 'volumes/?api_key=' + str(comicapi) + '&format=xml&filter=id:' + str(comicidlist) + '&field_list=name,id,start_year,publisher&offset=' + str(offset)
elif type == 'import':
PULLURL = mylar.CVURL + 'issues/?api_key=' + str(comicapi) + '&format=xml&filter=id:' + (comicidlist) + '&field_list=cover_date,id,issue_number,name,date_last_updated,store_date,volume' + '&offset=' + str(offset)
elif type == 'update_dates':
PULLURL = mylar.CVURL + 'issues/?api_key=' + str(comicapi) + '&format=xml&filter=id:' + (comicidlist)+ '&field_list=date_last_updated, id, issue_number, store_date, cover_date, name, volume ' + '&offset=' + str(offset)
#logger.info('CV.PULLURL: ' + PULLURL)
#new CV API restriction - one api request / second.
@ -201,6 +203,9 @@ def getComic(comicid, type, issueid=None, arc=None, arcid=None, arclist=None, co
return import_list
elif type == 'update_dates':
dom = pulldetails(None, 'update_dates', offset=1, comicidlist=comicidlist)
return UpdateDates(dom)
def GetComicInfo(comicid, dom, safechk=None):
if safechk is None:
@ -557,6 +562,68 @@ def GetSeriesYears(dom):
return serieslist
def UpdateDates(dom):
issues = dom.getElementsByTagName('issue')
tempissue = {}
issuelist = []
for dm in issues:
tempissue['ComicID'] = 'None'
tempissue['IssueID'] = 'None'
try:
totids = len(dm.getElementsByTagName('id'))
idc = 0
while (idc < totids):
if dm.getElementsByTagName('id')[idc].parentNode.nodeName == 'volume':
tempissue['ComicID'] = dm.getElementsByTagName('id')[idc].firstChild.wholeText
if dm.getElementsByTagName('id')[idc].parentNode.nodeName == 'issue':
tempissue['IssueID'] = dm.getElementsByTagName('id')[idc].firstChild.wholeText
idc+=1
except:
logger.warn('There was a problem retrieving a comicid/issueid for the given issue. This will most likely have to be corrected manually.')
tempissue['SeriesTitle'] = 'None'
tempissue['IssueTitle'] = 'None'
try:
totnames = len(dm.getElementsByTagName('name'))
namesc = 0
while (namesc < totnames):
if dm.getElementsByTagName('name')[namesc].parentNode.nodeName == 'issue':
tempissue['IssueTitle'] = dm.getElementsByTagName('name')[namesc].firstChild.wholeText
elif dm.getElementsByTagName('name')[namesc].parentNode.nodeName == 'volume':
tempissue['SeriesTitle'] = dm.getElementsByTagName('name')[namesc].firstChild.wholeText
namesc+=1
except:
logger.warn('There was a problem retrieving the Series Title / Issue Title for a series within the arc. This will have to be corrected manually.')
try:
tempissue['CoverDate'] = dm.getElementsByTagName('cover_date')[0].firstChild.wholeText
except:
tempissue['CoverDate'] = '0000-00-00'
try:
tempissue['StoreDate'] = dm.getElementsByTagName('store_date')[0].firstChild.wholeText
except:
tempissue['StoreDate'] = '0000-00-00'
try:
tempissue['IssueNumber'] = dm.getElementsByTagName('issue_number')[0].firstChild.wholeText
except:
logger.fdebug('No Issue Number available - Trade Paperbacks, Graphic Novels and Compendiums are not supported as of yet.')
tempissue['IssueNumber'] = 'None'
try:
tempissue['date_last_updated'] = dm.getElementsByTagName('date_last_updated')[0].firstChild.wholeText
except:
tempissue['date_last_updated'] = '0000-00-00'
issuelist.append({'ComicID': tempissue['ComicID'],
'IssueID': tempissue['IssueID'],
'SeriesTitle': tempissue['SeriesTitle'],
'IssueTitle': tempissue['IssueTitle'],
'CoverDate': tempissue['CoverDate'],
'StoreDate': tempissue['StoreDate'],
'IssueNumber': tempissue['IssueNumber'],
'Date_Last_Updated': tempissue['date_last_updated']})
return issuelist
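A hedged usage sketch of the new 'update_dates' type (getComic's call pattern appears above; the pipe-delimited comicidlist is an assumption based on the filter=id: query string it feeds):

issuelist = getComic(None, 'update_dates', comicidlist='12345|67890')
for iss in issuelist:
    print iss['IssueID'], iss['IssueNumber'], iss['StoreDate'], iss['Date_Last_Updated']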
def GetImportList(results):
importlist = results.getElementsByTagName('issue')
serieslist = []

View File

@ -142,15 +142,16 @@ class FileChecker(object):
if any([run_status == 'success', run_status == 'match']):
if self.justparse:
comiclist.append({
'sub': runresults['sub'],
'comicfilename': runresults['comicfilename'],
'comiclocation': runresults['comiclocation'],
'series_name': runresults['series_name'],
'dynamic_name': runresults['dynamic_name'],
'series_volume': runresults['series_volume'],
'issue_year': runresults['issue_year'],
'issue_number': runresults['issue_number'],
'scangroup': runresults['scangroup']
'sub': runresults['sub'],
'comicfilename': runresults['comicfilename'],
'comiclocation': runresults['comiclocation'],
'series_name': runresults['series_name'],
'series_name_decoded': runresults['series_name_decoded'],
'dynamic_name': runresults['dynamic_name'],
'series_volume': runresults['series_volume'],
'issue_year': runresults['issue_year'],
'issue_number': runresults['issue_number'],
'scangroup': runresults['scangroup']
})
else:
comiclist.append({
@ -265,6 +266,19 @@ class FileChecker(object):
#the remaining strings should be the series title and/or issue title if present (has to be detected properly)
modseries = modfilename
#try to remove / remember unicode character strings here (multiline ones get separated/removed in the regex below)
pat = re.compile(u'[\x00-\x7f]{3,}', re.UNICODE)
replack = pat.sub('XCV', modfilename)
wrds = replack.split('XCV')
tmpfilename = modfilename
if len(wrds) > 1:
for i in list(wrds):
if i != '':
tmpfilename = re.sub(i, 'XCV', tmpfilename)
tmpfilename = ''.join(tmpfilename)
modfilename = tmpfilename
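# worked example (hypothetical filename): u'Les Misérables 01 (2016).cbz'
# - runs of 3+ ascii chars collapse to XCV, leaving u'XCVéXCV'
# - splitting on XCV isolates the unicode fragments: ['', u'é', '']
# - each fragment is then swapped for an XCV placeholder in the original
#   name, so the parser below works on 'Les MisXCVrables 01 (2016).cbz'
#   and the fragments are restored into the series title further down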
sf3 = re.compile(ur"[^,\s_]+", re.UNICODE)
split_file3 = sf3.findall(modfilename)
if len(split_file3) == 1:
@ -273,11 +287,10 @@ class FileChecker(object):
split_file3 = sf3.findall(modfilename)
logger.fdebug('NEW split_file3: ' + str(split_file3))
#print split_file3
ret_sf2 = ' '.join(split_file3)
sf = re.findall('''\( [^\)]* \) |\[ [^\]]* \] |\S+''', ret_sf2, re.VERBOSE)
#print sf
ret_sf1 = ' '.join(sf)
#here we should account for some characters that get stripped out due to the regex's
@ -290,12 +303,12 @@ class FileChecker(object):
ret_sf1 = re.sub('\'', 'g11', ret_sf1).strip()
#split_file = re.findall('\([\w\s-]+\)|[\w-]+', ret_sf1, re.UNICODE)
split_file = re.findall('\([\w\s-]+\)|[-+]?\d*\.\d+|\d+|[\w-]+|#?\d\.\d+|\)', ret_sf1, re.UNICODE)
split_file = re.findall('\([\w\s-]+\)|[-+]?\d*\.\d+|\d+|[\w-]+|#?\d\.\d+|#(?<![\w\d])XCV(?![\w\d])+|\)', ret_sf1, re.UNICODE)
if len(split_file) == 1:
logger.fdebug('Improperly formatted filename - there is no separation using appropriate characters between wording.')
ret_sf1 = re.sub('\-',' ', ret_sf1).strip()
split_file = re.findall('\([\w\s-]+\)|[-+]?\d*\.\d+|\d+|[\w-]+|#?\d\.\d+|\)', ret_sf1, re.UNICODE)
split_file = re.findall('\([\w\s-]+\)|[-+]?\d*\.\d+|\d+|[\w-]+|#?\d\.\d+||#(?<![\w\d])XCV(?![\w\d])+|\)', ret_sf1, re.UNICODE)
possible_issuenumbers = []
@ -731,6 +744,14 @@ class FileChecker(object):
#namely, unique characters - known so far: +
#c1 = '+'
series_name = ' '.join(split_file[:highest_series_pos])
for x in list(wrds):
if x != '':
if 'XCV' in series_name:
series_name = re.sub('XCV', x, series_name,1)
elif 'XCV' in issue_number:
issue_number = re.sub('XCV', x, issue_number,1)
series_name = re.sub('c11', '+', series_name)
series_name = re.sub('f11', '&', series_name)
series_name = re.sub('g11', '\'', series_name)
@ -741,11 +762,16 @@ class FileChecker(object):
logger.fdebug('series title possibly: ' + series_name)
#if the filename is unicoded, it won't match due to the unicode translation. Keep the unicode as well as the decoded.
series_name_decoded= unicodedata.normalize('NFKD', series_name.decode('utf-8')).encode('ASCII', 'ignore')
#check for annual in title(s) here.
if mylar.ANNUALS_ON:
if 'annual' in series_name.lower():
issue_number = 'Annual ' + str(issue_number)
series_name = re.sub('annual', '', series_name, flags=re.I).strip()
series_name_decoded = re.sub('annual', '', series_name_decoded, flags=re.I).strip()
#if path_list is not None:
# clocation = os.path.join(path, path_list, filename)
#else:
@ -761,40 +787,43 @@ class FileChecker(object):
if issue_number is None or series_name is None:
logger.fdebug('Cannot parse the filename properly. I\'m going to make note of this filename so that my evil ruler can make it work.')
return {'parse_status': 'failure',
'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'issue_number': issue_number,
'justthedigits': issue_number, #redundant but it's needed atm
'series_volume': issue_volume,
'issue_year': issue_year,
'annual_comicid': None,
'scangroup': scangroup}
return {'parse_status': 'failure',
'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'series_name_decoded': series_name_decoded,
'issue_number': issue_number,
'justthedigits': issue_number, #redundant but it's needed atm
'series_volume': issue_volume,
'issue_year': issue_year,
'annual_comicid': None,
'scangroup': scangroup}
if self.justparse:
return {'parse_status': 'success',
'type': re.sub('\.','', filetype).strip(),
'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'dynamic_name': self.dynamic_replace(series_name)['mod_seriesname'],
'series_volume': issue_volume,
'issue_year': issue_year,
'issue_number': issue_number,
'scangroup': scangroup}
return {'parse_status': 'success',
'type': re.sub('\.','', filetype).strip(),
'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'series_name_decoded': series_name_decoded,
'dynamic_name': self.dynamic_replace(series_name)['mod_seriesname'],
'series_volume': issue_volume,
'issue_year': issue_year,
'issue_number': issue_number,
'scangroup': scangroup}
series_info = {}
series_info = {'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'series_volume': issue_volume,
'issue_year': issue_year,
'issue_number': issue_number,
'scangroup': scangroup}
series_info = {'sub': path_list,
'comicfilename': filename,
'comiclocation': self.dir,
'series_name': series_name,
'series_name_decoded': series_name_decoded,
'series_volume': issue_volume,
'issue_year': issue_year,
'issue_number': issue_number,
'scangroup': scangroup}
return self.matchIT(series_info)
@ -802,15 +831,22 @@ class FileChecker(object):
series_name = series_info['series_name']
filename = series_info['comicfilename']
#compare here - match comparison against u_watchcomic.
logger.info('Series_Name: ' + series_name.lower() + ' --- WatchComic: ' + self.watchcomic.lower())
logger.info('Series_Name: ' + series_name + ' --- WatchComic: ' + self.watchcomic)
#check for dynamic handles here.
mod_dynamicinfo = self.dynamic_replace(series_name)
mod_seriesname = mod_dynamicinfo['mod_seriesname']
mod_watchcomic = mod_dynamicinfo['mod_watchcomic']
mod_series_decoded = self.dynamic_replace(series_info['series_name_decoded'])
mod_seriesname_decoded = mod_dynamicinfo['mod_seriesname']
mod_watch_decoded = self.dynamic_replace(self.og_watchcomic)
mod_watchname_decoded = mod_dynamicinfo['mod_seriesname']
#remove the spaces...
nspace_seriesname = re.sub(' ', '', mod_seriesname)
nspace_watchcomic = re.sub(' ', '', mod_watchcomic)
nspace_seriesname_decoded = re.sub(' ', '', mod_seriesname_decoded)
nspace_watchname_decoded = re.sub(' ', '', mod_watchname_decoded)
if '127372873872871091383' not in self.AS_Alt:
logger.fdebug('Possible Alternate Names to match against (if necessary): ' + str(self.AS_Alt))
@ -821,8 +857,10 @@ class FileChecker(object):
if 'annual' in series_name.lower():
justthedigits = 'Annual ' + series_info['issue_number']
nspace_seriesname = re.sub('annual', '', nspace_seriesname.lower()).strip()
nspace_seriesname_decoded = re.sub('annual', '', nspace_seriesname_decoded.lower()).strip()
if re.sub('\|','', nspace_seriesname.lower()).strip() == re.sub('\|', '', nspace_watchcomic.lower()).strip() or any(re.sub('[\|\s]','', x.lower()).strip() == re.sub('[\|\s]','', nspace_seriesname.lower()).strip() for x in self.AS_Alt):
if any([re.sub('\|','', nspace_seriesname.lower()).strip() == re.sub('\|', '', nspace_watchcomic.lower()).strip(), re.sub('\|','', nspace_seriesname_decoded.lower()).strip() == re.sub('\|', '', nspace_watchname_decoded.lower()).strip()]) or any(re.sub('[\|\s]','', x.lower()).strip() == re.sub('[\|\s]','', nspace_seriesname.lower()).strip() for x in self.AS_Alt):
logger.fdebug('[MATCH: ' + series_info['series_name'] + '] ' + filename)
enable_annual = False
annual_comicid = None
@ -1013,7 +1051,12 @@ class FileChecker(object):
if mod_watchcomic:
mod_watchcomic = re.sub('\|+', '|', mod_watchcomic)
if mod_watchcomic.endswith('|'):
mod_watchcomic = mod_watchcomic[:-1]
mod_seriesname = re.sub('\|+', '|', mod_seriesname)
if mod_seriesname.endswith('|'):
mod_seriesname = mod_seriesname[:-1]
return {'mod_watchcomic': mod_watchcomic,
'mod_seriesname': mod_seriesname}

View File

@ -876,7 +876,13 @@ def issuedigits(issnum):
try:
tst = issnum.isdigit()
except:
return 9999999999
try:
isstest = str(issnum)
tst = isstest.isdigit()
except:
return 9999999999
else:
issnum = str(issnum)
if issnum.isdigit():
int_issnum = int(issnum) * 1000
@ -939,9 +945,9 @@ def issuedigits(issnum):
x = [vals[key] for key in vals if key in issnum]
if x:
logger.info('Unicode Issue present - adjusting.')
logger.fdebug('Unicode Issue present - adjusting.')
int_issnum = x[0] * 1000
logger.info('int_issnum: ' + str(int_issnum))
logger.fdebug('int_issnum: ' + str(int_issnum))
else:
if any(['.' in issnum, ',' in issnum]):
#logger.fdebug('decimal detected.')
@ -972,10 +978,14 @@ def issuedigits(issnum):
else:
try:
x = float(issnum)
logger.info(x)
#validity check
if x < 0:
#logger.info("I've encountered a negative issue #: " + str(issnum) + ". Trying to accomodate.")
int_issnum = (int(x) *1000) - 1
elif bool(x):
logger.fdebug('Infinity issue found.')
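# float('Infinity') parses to inf, which is non-negative and truthy, so an
# 'Infinity' issue number lands here and is pinned above every numeric issue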
int_issnum = 9999999999 * 1000
else: raise ValueError
except ValueError, e:
#this will account for any alpha in a issue#, so long as it doesn't have decimals.
@ -1222,7 +1232,7 @@ def LoadAlternateSearchNames(seriesname_alt, comicid):
Alternate_Names['AlternateName'] = AS_Alt
Alternate_Names['ComicID'] = comicid
Alternate_Names['Count'] = alt_count
#logger.info('AlternateNames returned:' + str(Alternate_Names))
logger.info('AlternateNames returned:' + str(Alternate_Names))
return Alternate_Names
@ -2021,6 +2031,56 @@ def issue_status(IssueID):
else:
return False
def issue_find_ids(ComicName, ComicID, pack, IssueNumber):
import db, logger
myDB = db.DBConnection()
issuelist = myDB.select("SELECT * FROM issues WHERE ComicID=?", [ComicID])
if 'Annual' not in pack:
pack_issues = range(int(pack[:pack.find('-')]),int(pack[pack.find('-')+1:])+1)
annualize = False
else:
#remove the annuals wording
tmp_annuals = pack[pack.find('Annual'):]
tmp_ann = re.sub('[annual/annuals/+]', '', tmp_annuals.lower()).strip()
tmp_pack = re.sub('[annual/annuals/+]', '', pack.lower()).strip()
pack_issues = range(int(tmp_pack[:tmp_pack.find('-')]),int(tmp_pack[tmp_pack.find('-')+1:])+1)
annualize = True
issues = {}
issueinfo = []
Int_IssueNumber = issuedigits(IssueNumber)
valid = False
for iss in pack_issues:
int_iss = issuedigits(iss)
for xb in issuelist:
if xb['Status'] != 'Downloaded':
if xb['Int_IssueNumber'] == int_iss:
issueinfo.append({'issueid': xb['IssueID'],
'int_iss': int_iss,
'issuenumber': xb['Issue_Number']})
break
for x in issueinfo:
if Int_IssueNumber == x['int_iss']:
valid = True
break
issues['issues'] = issueinfo
if len(issues['issues']) == len(pack_issues):
logger.info('Complete issue count of ' + str(len(pack_issues)) + ' issues are available within this pack for ' + ComicName)
else:
logger.info('Issue counts are not complete (not a COMPLETE pack) for ' + ComicName)
issues['issue_range'] = pack_issues
issues['valid'] = valid
return issues
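A usage sketch for the helper above: when a pack covering issues '1-20' is snatched, the search code can resolve which watched issues the pack contains (the series name and IDs here are illustrative):

pack_info = issue_find_ids('Invincible', '12345', '1-20', '14')
if pack_info['valid']:
    # the issue that triggered the search falls inside the pack's range
    for iss in pack_info['issues']:
        # each matched issue not already Downloaded can be flagged as
        # Snatched (annuals are not handled yet, per the commit notes)
        print iss['issueid'], iss['issuenumber']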
#def file_ops(path,dst):
# # path = source path + filename

View File

@ -1552,7 +1552,10 @@ def annual_check(ComicName, SeriesYear, comicid, issuetype, issuechk, weeklyissu
if int(sr['issues']) == 0 and len(issued['issuechoice']) == 1:
sr_issues = 1
else:
sr_issues = sr['issues']
if int(sr['issues']) != len(issued['issuechoice']):
sr_issues = len(issued['issuechoice'])
else:
sr_issues = sr['issues']
logger.fdebug('[IMPORTER-ANNUAL] - There are ' + str(sr_issues) + ' annuals in this series.')
while (n < int(sr_issues)):
try:

View File

@ -263,7 +263,7 @@ def libraryScan(dir=None, append=False, ComicID=None, ComicName=None, cron=None,
comicname = issueinfo[0]['series']
if comicname is not None:
logger.fdebug('[IMPORT-CBZ] Series Name: ' + comicname)
as_d = filechecker.FileChecker(watchcomic=comicname.decode('utf-8'))
as_d = filechecker.FileChecker()
as_dyninfo = as_d.dynamic_replace(comicname)
logger.fdebug('Dynamic-ComicName: ' + as_dyninfo['mod_seriesname'])
else:

mylar/locg.py Executable file
View File

@ -0,0 +1,128 @@
# This file is part of Mylar.
#
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
import lib.requests as requests
from bs4 import BeautifulSoup, UnicodeDammit
import datetime
import re
import mylar
from mylar import logger, db
def locg(pulldate=None,weeknumber=None,year=None):
todaydate = datetime.datetime.today()
if pulldate:
logger.info('pulldate is : ' + str(pulldate))
if pulldate is None or pulldate == '00000000':
weeknumber = todaydate.strftime("%U")
elif '-' in pulldate:
#find the week number
weektmp = datetime.date(*(int(s) for s in pulldate.split('-')))
weeknumber = weektmp.strftime("%U")
#we need to now make sure we default to the correct week
weeknumber_new = todaydate.strftime("%U")
if weeknumber_new > weeknumber:
weeknumber = weeknumber_new
else:
if str(weeknumber).isdigit() and int(weeknumber) <= 53:
#already requesting a specific week number - use it as-is (%U weeks run 00-53)
pass
else:
logger.warn('Invalid date requested. Aborting pull-list retrieval/update at this time.')
return {'status': 'failure'}
if year is None:
year = todaydate.strftime("%Y")
params = {'week': str(weeknumber),
'year': str(year)}
url = 'https://walksoftly.itsaninja.party/newcomics.php'
try:
r = requests.get(url, params=params, verify=True)
except requests.exceptions.RequestException as e:
logger.warn(e)
return {'status': 'failure'}
if r.status_code == 619:
logger.warn('[' + str(r.status_code) + '] No date supplied, or an invalid date was provided [' + str(pulldate) + ']')
return {'status': 'failure'}
elif r.status_code == 999 or r.status_code == 111:
logger.warn('[' + str(r.status_code) + '] Unable to retrieve data from site - this is a site-specific issue [' + str(pulldate) + ']')
return {'status': 'failure'}
data = r.json()
logger.info('[WEEKLY-PULL] There are ' + str(len(data)) + ' issues for the week of ' + str(weeknumber) + ', ' + str(year))
pull = []
for x in data:
pull.append({'series': x['series'],
'alias': x['alias'],
'issue': x['issue'],
'publisher': x['publisher'],
'shipdate': x['shipdate'],
'coverdate': x['coverdate'],
'comicid': x['comicid'],
'issueid': x['issueid'],
'weeknumber': x['weeknumber'],
'year': x['year']})
shipdate = x['shipdate']
myDB = db.DBConnection()
#myDB.action("drop table if exists weekly")
myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, CV_Last_Update text, DynamicName text, weeknumber text, year text, rowid INTEGER PRIMARY KEY)")
#clear out the upcoming table here so they show the new values properly.
#myDB.action('DELETE FROM UPCOMING WHERE IssueDate=?',[shipdate])
for x in pull:
comicid = None
issueid = None
comicname = x['series']
if x['comicid'] is not None:
comicid = x['comicid']
if x['issueid'] is not None:
issueid = x['issueid']
if x['alias'] is not None:
comicname = x['alias']
cl_d = mylar.filechecker.FileChecker()
cl_dyninfo = cl_d.dynamic_replace(comicname)
dynamic_name = re.sub('[\|\s]','', cl_dyninfo['mod_seriesname'].lower()).strip()
controlValueDict = {'COMIC': comicname,
'ISSUE': re.sub('#', '', x['issue']).strip()}
newValueDict = {'SHIPDATE': x['shipdate'],
'PUBLISHER': x['publisher'],
'STATUS': 'Skipped',
'COMICID': comicid,
'ISSUEID': issueid,
'DYNAMICNAME': dynamic_name,
'WEEKNUMBER': x['weeknumber'],
'YEAR': x['year']}
myDB.upsert("weekly", newValueDict, controlValueDict)
logger.info('[PULL-LIST] Successfully populated pull-list into Mylar for the week of: ' + str(weeknumber))
#set the last poll date/time here so that we don't start overwriting stuff too much...
mylar.PULL_REFRESH = todaydate
return {'status': 'success',
'count': len(data),
'weeknumber': weeknumber,
'year': year}
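A quick usage sketch for the new module (assuming the mylar.locg import path as wired elsewhere in this commit; the week values are illustrative). locg() returns a status dict rather than raising, so callers branch on 'status':

from mylar import locg

result = locg.locg(weeknumber='27', year='2016')
if result['status'] == 'success':
    # count / weeknumber / year are populated on success, as built above
    print('populated %s issues for week %s, %s' % (result['count'], result['weeknumber'], result['year']))
else:
    print('pull-list retrieval failed')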

View File

@ -11,11 +11,10 @@ def movefiles(comicid, comlocation, imported):
myDB = db.DBConnection()
logger.fdebug('comlocation is : ' + str(comlocation))
logger.fdebug('original comicname is : ' + str(imported['ComicName']))
logger.fdebug('comlocation is : ' + comlocation)
logger.fdebug('original comicname is : ' + imported['ComicName'])
impres = imported['filelisting']
#impres = myDB.select("SELECT * from importresults WHERE ComicName=?", [ogcname])
if impres is not None:
for impr in impres:
@ -43,11 +42,22 @@ def movefiles(comicid, comlocation, imported):
#now that it's moved / renamed ... we remove it from importResults or mark as completed.
if len(files_moved) > 0:
logger.info('files_moved: ' + str(files_moved))
for result in files_moved:
controlValue = {"ComicFilename": result['filename'],
"SRID": result['srid']}
newValue = {"Status": "Imported",
"ComicID": comicid}
try:
res = result['import_id']
except:
#if it's an 'older' import that wasn't imported, just make it a basic match so things can move and update properly.
controlValue = {"ComicFilename": result['filename'],
"SRID": result['srid']}
newValue = {"Status": "Imported",
"ComicID": comicid}
else:
controlValue = {"impID": result['import_id'],
"ComicFilename": result['filename']}
newValue = {"Status": "Imported",
"SRID": result['srid'],
"ComicID": comicid}
myDB.upsert("importresults", newValue, controlValue)
return
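The try/except around result['import_id'] above is a presence probe: newer import rows carry an impID while legacy rows predate the column. A condensed sketch of that branch (build_import_update is hypothetical; sqlite3.Row raises on a missing column, which is what the except path catches):

def build_import_update(result, comicid):
    # legacy rows: no import_id available, match on filename + SRID instead
    try:
        imp_id = result['import_id']
    except (KeyError, IndexError):
        control = {'ComicFilename': result['filename'], 'SRID': result['srid']}
        new = {'Status': 'Imported', 'ComicID': comicid}
    else:
        control = {'impID': imp_id, 'ComicFilename': result['filename']}
        new = {'Status': 'Imported', 'SRID': result['srid'], 'ComicID': comicid}
    return control, new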

File diff suppressed because it is too large

View File

@ -285,7 +285,7 @@ def latest_update(ComicID, LatestIssue, LatestDate):
"LatestDate": str(LatestDate)}
myDB.upsert("comics", newlatestDict, latestCTRLValueDict)
def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None, futurepull=None, altissuenumber=None):
def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None, futurepull=None, altissuenumber=None, weekinfo=None):
# here we add to upcoming table...
myDB = db.DBConnection()
dspComicName = ComicName #to make sure that the word 'annual' will be displayed on screen
@ -303,25 +303,38 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
#let's refresh the series here just to make sure if an issue is available/not.
mismatch = "no"
CV_EXcomicid = myDB.selectone("SELECT * from exceptions WHERE ComicID=?", [ComicID]).fetchone()
if CV_EXcomicid is None: pass
if CV_EXcomicid is None:
pass
else:
if CV_EXcomicid['variloop'] == '99':
mismatch = "yes"
lastupdatechk = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
if lastupdatechk is None:
pullupd = "yes"
if mylar.ALT_PULL != 2:
lastupdatechk = myDB.selectone("SELECT * FROM comics WHERE ComicID=?", [ComicID]).fetchone()
if lastupdatechk is None:
pullupd = "yes"
else:
c_date = lastupdatechk['LastUpdated']
if c_date is None:
logger.error(lastupdatechk['ComicName'] + ' failed during a previous add/refresh. Please either delete and re-add the series, or try a refresh of the series.')
return
c_obj_date = datetime.datetime.strptime(c_date, "%Y-%m-%d %H:%M:%S")
n_date = datetime.datetime.now()
absdiff = abs(n_date - c_obj_date)
hours = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 3600.0
else:
c_date = lastupdatechk['LastUpdated']
if c_date is None:
logger.error(lastupdatechk['ComicName'] + ' failed during a previous add/refresh. Please either delete and re-add the series, or try a refresh of the series.')
return
c_obj_date = datetime.datetime.strptime(c_date, "%Y-%m-%d %H:%M:%S")
#if it's at this point and the refresh is None, odds are very good that it's already up-to-date so let it flow thru
if mylar.PULL_REFRESH is None:
mylar.PULL_REFRESH = datetime.datetime.today()
logger.fdebug('pull_refresh: ' + str(mylar.PULL_REFRESH))
c_obj_date = mylar.PULL_REFRESH
#logger.fdebug('c_obj_date: ' + str(c_obj_date))
n_date = datetime.datetime.now()
#logger.fdebug('n_date: ' + str(n_date))
absdiff = abs(n_date - c_obj_date)
#logger.fdebug('absdiff: ' + str(absdiff))
hours = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 3600.0
# no need to hammer the refresh
# let's check it every 5 hours (or more)
#pullupd = "yes"
#logger.fdebug('hours: ' + str(hours))
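The throttle above works in fractional hours derived from a timedelta. A worked example of the same arithmetic (timestamps are illustrative):

import datetime

last_poll = datetime.datetime(2016, 7, 10, 6, 0, 0)    # e.g. mylar.PULL_REFRESH
now = datetime.datetime(2016, 7, 10, 14, 30, 0)
absdiff = abs(now - last_poll)
hours = (absdiff.days * 24 * 60 * 60 + absdiff.seconds) / 3600.0
# hours == 8.5, so both the 2-hour (ALT_PULL=2) and 5-hour gates would allow a refresh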
if 'annual' in ComicName.lower():
if mylar.ANNUALS_ON:
issuechk = myDB.selectone("SELECT * FROM annuals WHERE ComicID=? AND Issue_Number=?", [ComicID, IssueNumber]).fetchone()
@ -337,36 +350,45 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
if issuechk is None:
if futurepull is None:
og_status = None
logger.fdebug(adjComicName + ' Issue: ' + str(IssueNumber) + ' not present in listings to mark for download...updating comic and adding to Upcoming Wanted Releases.')
# we need to either decrease the total issue count, OR indicate that an issue is upcoming.
upco_results = myDB.select("SELECT COUNT(*) FROM UPCOMING WHERE ComicID=?", [ComicID])
upco_iss = upco_results[0][0]
#logger.info("upco_iss: " + str(upco_iss))
if int(upco_iss) > 0:
#logger.info("There is " + str(upco_iss) + " of " + str(ComicName) + " that's not accounted for")
newKey = {"ComicID": ComicID}
newVal = {"not_updated_db": str(upco_iss)}
myDB.upsert("comics", newVal, newKey)
elif int(upco_iss) <=0 and lastupdatechk['not_updated_db']:
#if not_updated_db has a value, and upco_iss is > 0, let's zero it back out cause it's updated now.
newKey = {"ComicID": ComicID}
newVal = {"not_updated_db": ""}
myDB.upsert("comics", newVal, newKey)
if mylar.ALT_PULL != 2:
logger.fdebug(adjComicName + ' Issue: ' + str(IssueNumber) + ' not present in listings to mark for download...updating comic and adding to Upcoming Wanted Releases.')
# we need to either decrease the total issue count, OR indicate that an issue is upcoming.
upco_results = myDB.select("SELECT COUNT(*) FROM UPCOMING WHERE ComicID=?", [ComicID])
upco_iss = upco_results[0][0]
#logger.info("upco_iss: " + str(upco_iss))
if int(upco_iss) > 0:
#logger.info("There is " + str(upco_iss) + " of " + str(ComicName) + " that's not accounted for")
newKey = {"ComicID": ComicID}
newVal = {"not_updated_db": str(upco_iss)}
myDB.upsert("comics", newVal, newKey)
elif int(upco_iss) <=0 and lastupdatechk['not_updated_db']:
#if not_updated_db has a value, and upco_iss is > 0, let's zero it back out cause it's updated now.
newKey = {"ComicID": ComicID}
newVal = {"not_updated_db": ""}
myDB.upsert("comics", newVal, newKey)
if hours > 5 or forcecheck == 'yes':
pullupd = "yes"
logger.fdebug('Now Refreshing comic ' + ComicName + ' to make sure it is up-to-date')
if ComicID[:1] == "G":
mylar.importer.GCDimport(ComicID, pullupd)
else:
cchk = mylar.importer.updateissuedata(ComicID, ComicName, calledfrom='weeklycheck')#mylar.importer.addComictoDB(ComicID,mismatch,pullupd)
if hours > 5 or forcecheck == 'yes':
pullupd = "yes"
logger.fdebug('Now Refreshing comic ' + ComicName + ' to make sure it is up-to-date')
if ComicID[:1] == "G":
mylar.importer.GCDimport(ComicID, pullupd)
else:
cchk = mylar.importer.updateissuedata(ComicID, ComicName, calledfrom='weeklycheck')#mylar.importer.addComictoDB(ComicID,mismatch,pullupd)
else:
logger.fdebug('It has not been longer than 5 hours since we last did this...we will wait so we do not hammer things.')
else:
logger.fdebug('It has not been longer than 5 hours since we last did this...we will wait so we do not hammer things.')
logger.fdebug('linking ComicID to Pull-list to reflect status.')
downstats = {"ComicID": ComicID,
"IssueID": None,
"Status": None}
return downstats
logger.fdebug('[WEEKLY-PULL] Walksoftly has been enabled. ComicID/IssueID control given to the ninja to monitor.')
#logger.fdebug('hours: ' + str(hours) + ' -- forcecheck: ' + str(forcecheck))
if hours > 2 or forcecheck == 'yes':
logger.fdebug('weekinfo:' + str(weekinfo))
chkitout = mylar.locg.locg(weeknumber=str(weekinfo['weeknumber']),year=str(weekinfo['year']))
logger.fdebug('linking ComicID to Pull-list to reflect status.')
downstats = {"ComicID": ComicID,
"IssueID": None,
"Status": None}
return downstats
else:
# if futurepull is not None, let's just update the status and ComicID
# NOTE: THIS IS CREATING EMPTY ENTRIES IN THE FUTURE TABLE. ???
@ -490,11 +512,8 @@ def upcoming_update(ComicID, ComicName, IssueNumber, IssueDate, forcecheck=None,
return downstats
def weekly_update(ComicName, IssueNumber, CStatus, CID, futurepull=None, altissuenumber=None):
if futurepull:
logger.fdebug('future_update of table : ' + str(ComicName) + ' #:' + str(IssueNumber) + ' to a status of ' + str(CStatus))
else:
logger.fdebug('weekly_update of table : ' + str(ComicName) + ' #:' + str(IssueNumber) + ' to a status of ' + str(CStatus))
def weekly_update(ComicName, IssueNumber, CStatus, CID, weeknumber, year, altissuenumber=None):
logger.fdebug('Weekly Update for week ' + str(weeknumber) + '-' + str(year) + ' : ' + str(ComicName) + ' #' + str(IssueNumber) + ' to a status of ' + str(CStatus))
if altissuenumber:
logger.fdebug('weekly_update of table : ' + str(ComicName) + ' (Alternate Issue #):' + str(altissuenumber) + ' to a status of ' + str(CStatus))
@ -503,14 +522,15 @@ def weekly_update(ComicName, IssueNumber, CStatus, CID, futurepull=None, altissu
# added Issue to stop false hits on series' that have multiple releases in a week
# added CStatus to update status flags on Pullist screen
myDB = db.DBConnection()
if futurepull is None:
issuecheck = myDB.selectone("SELECT * FROM weekly WHERE COMIC=? AND ISSUE=?", [ComicName, IssueNumber]).fetchone()
else:
issuecheck = myDB.selectone("SELECT * FROM future WHERE COMIC=? AND ISSUE=?", [ComicName, IssueNumber]).fetchone()
issuecheck = myDB.selectone("SELECT * FROM weekly WHERE COMIC=? AND ISSUE=? and WEEKNUMBER=? AND YEAR=?", [ComicName, IssueNumber, weeknumber, year]).fetchone()
if issuecheck is not None:
controlValue = {"COMIC": str(ComicName),
"ISSUE": str(IssueNumber)}
"ISSUE": str(IssueNumber),
"WEEKNUMBER": weeknumber,
"YEAR": year}
logger.info('controlValue:' + str(controlValue))
try:
if CID['IssueID']:
cidissueid = CID['IssueID']
@ -519,6 +539,8 @@ def weekly_update(ComicName, IssueNumber, CStatus, CID, futurepull=None, altissu
except:
cidissueid = None
logger.info('CStatus:' + str(CStatus))
if CStatus:
newValue = {"STATUS": CStatus}
@ -532,16 +554,9 @@ def weekly_update(ComicName, IssueNumber, CStatus, CID, futurepull=None, altissu
newValue['ComicID'] = CID['ComicID']
newValue['IssueID'] = cidissueid
if futurepull is None:
myDB.upsert("weekly", newValue, controlValue)
else:
logger.fdebug('checking ' + str(issuecheck['ComicID']) + ' status of : ' + str(CStatus))
if issuecheck['ComicID'] is not None and CStatus != None:
newValue = {"STATUS": "Wanted",
"ComicID": issuecheck['ComicID']}
logger.fdebug('updating value: ' + str(newValue))
logger.fdebug('updating control: ' + str(controlValue))
myDB.upsert("future", newValue, controlValue)
logger.info('newValue:' + str(newValue))
myDB.upsert("weekly", newValue, controlValue)
def newpullcheck(ComicName, ComicID, issue=None):
# When adding a new comic, let's check for new issues on this week's pullist and update.
@ -1232,6 +1247,7 @@ def forceRescan(ComicID, archive=None, module=None):
#here we need to change the status of the ones we DIDN'T FIND above since the loop only hits on FOUND issues.
update_iss = []
#break this up in sequnces of 200 so it doesn't break the sql statement.
tmpsql = "SELECT * FROM issues WHERE ComicID=? AND IssueID not in ({seq})".format(seq=','.join(['?'] *(len(issID_to_ignore) -1)))
chkthis = myDB.select(tmpsql, issID_to_ignore)
# chkthis = None

View File

@ -1,4 +1,5 @@
# This file is part of Mylar.
# -*- coding: utf-8 -*-
#
@ -21,6 +22,7 @@ import os
import sys
import cherrypy
import datetime
from datetime import timedelta, date
import re
import json
@ -147,6 +149,7 @@ class WebInterface(object):
"Snatched": str(isCounts[7])
}
usethefuzzy = comic['UseFuzzy']
allowpacks = comic['AllowPacks']
skipped2wanted = "0"
if usethefuzzy is None:
usethefuzzy = "0"
@ -155,6 +158,8 @@ class WebInterface(object):
force_continuing = 0
if mylar.DELETE_REMOVE_DIR is None:
mylar.DELETE_REMOVE_DIR = 0
if allowpacks is None:
allowpacks = "0"
comicConfig = {
"comiclocation": mylar.COMIC_LOCATION,
"fuzzy_year0": helpers.radio(int(usethefuzzy), 0),
@ -162,7 +167,8 @@ class WebInterface(object):
"fuzzy_year2": helpers.radio(int(usethefuzzy), 2),
"skipped2wanted": helpers.checked(skipped2wanted),
"force_continuing": helpers.checked(force_continuing),
"delete_dir": helpers.checked(mylar.DELETE_REMOVE_DIR)
"delete_dir": helpers.checked(mylar.DELETE_REMOVE_DIR),
"allow_packs": helpers.checked(int(allowpacks))
}
if mylar.ANNUALS_ON:
annuals = myDB.select("SELECT * FROM annuals WHERE ComicID=? ORDER BY ComicID, Int_IssueNumber DESC", [ComicID])
@ -1279,7 +1285,7 @@ class WebInterface(object):
if Publisher == 'COMICS': Publisher = None
if ComicYear == '': ComicYear = now.year
logger.info(u"Marking " + ComicName + " " + ComicIssue + " as wanted...")
foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, Publisher=Publisher, IssueDate=cyear['SHIPDATE'], StoreDate=cyear['SHIPDATE'], IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None)
foundcom, prov = search.search_init(ComicName=ComicName, IssueNumber=ComicIssue, ComicYear=ComicYear, SeriesYear=None, Publisher=Publisher, IssueDate=cyear['SHIPDATE'], StoreDate=cyear['SHIPDATE'], IssueID=None, AlternateSearch=None, UseFuzzy=None, ComicVersion=None, allow_packs=False)
if foundcom == "yes":
logger.info(u"Downloaded " + ComicName + " " + ComicIssue)
raise cherrypy.HTTPRedirect("pullist")
@ -1291,6 +1297,7 @@ class WebInterface(object):
AlternateSearch = cdname['AlternateSearch']
Publisher = cdname['ComicPublisher']
UseAFuzzy = cdname['UseFuzzy']
AllowPacks = cdname['AllowPacks']
ComicVersion = cdname['ComicVersion']
ComicName = cdname['ComicName']
controlValueDict = {"IssueID": IssueID}
@ -1338,7 +1345,7 @@ class WebInterface(object):
#Publisher = miy['ComicPublisher']
#UseAFuzzy = miy['UseFuzzy']
#ComicVersion = miy['ComicVersion']
foundcom, prov = search.search_init(ComicName, ComicIssue, ComicYear, SeriesYear, Publisher, issues['IssueDate'], storedate, IssueID, AlternateSearch, UseAFuzzy, ComicVersion, mode=mode, ComicID=ComicID, manualsearch=manualsearch, filesafe=ComicName_Filesafe)
foundcom, prov = search.search_init(ComicName, ComicIssue, ComicYear, SeriesYear, Publisher, issues['IssueDate'], storedate, IssueID, AlternateSearch, UseAFuzzy, ComicVersion, mode=mode, ComicID=ComicID, manualsearch=manualsearch, filesafe=ComicName_Filesafe, allow_packs=AllowPacks)
if foundcom == "yes":
# file check to see if issue exists and update 'have' count
if IssueID is not None:
@ -1464,7 +1471,7 @@ class WebInterface(object):
archiveissue.exposed = True
def pullist(self):
def pullist(self, week=None, year=None):
myDB = db.DBConnection()
autowants = myDB.select("SELECT * FROM futureupcoming WHERE Status='Wanted'")
autowant = []
@ -1477,10 +1484,63 @@ class WebInterface(object):
"DisplayComicName": aw['DisplayComicName']})
weeklyresults = []
wantedcount = 0
#find the current week and save it as a reference point.
todaydate = datetime.datetime.today()
current_weeknumber = todaydate.strftime("%U")
if week:
weeknumber = int(week)
year = int(year)
#view specific week (prev_week, next_week)
startofyear = date(year,1,1)
week0 = startofyear - timedelta(days=startofyear.isoweekday())
stweek = datetime.datetime.strptime(week0.strftime('%Y-%m-%d'), '%Y-%m-%d')
startweek = stweek + timedelta(weeks = weeknumber)
midweek = startweek + timedelta(days = 3)
endweek = startweek + timedelta(days = 6)
else:
#find the given week number for the current day
weeknumber = current_weeknumber
stweek = datetime.datetime.strptime(todaydate.strftime('%Y-%m-%d'), '%Y-%m-%d')
startweek = stweek - timedelta(days = (stweek.weekday() + 1) % 7)
midweek = startweek + timedelta(days = 3)
endweek = startweek + timedelta(days = 6)
year = todaydate.strftime("%Y")
prev_week = int(weeknumber) - 1
next_week = int(weeknumber) + 1
weekinfo = {'weeknumber': weeknumber,
'startweek': startweek.strftime('%B %d, %Y'),
'midweek': midweek.strftime('%Y-%m-%d'),
'endweek': endweek.strftime('%B %d, %Y'),
'year': year,
'prev_weeknumber': prev_week,
'next_weeknumber': next_week,
'current_weeknumber': current_weeknumber}
logger.info(weekinfo)
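The week bookkeeping above leans on strftime('%U') (Sunday-based, zero-padded week-of-year). A worked example of deriving the week number and its Sunday start (the date is illustrative):

import datetime
from datetime import timedelta

today = datetime.datetime(2016, 7, 10)            # a Sunday
weeknumber = today.strftime('%U')                 # '28'
# weekday(): Mon=0 .. Sun=6, so (weekday + 1) % 7 is days elapsed since Sunday
startweek = today - timedelta(days=(today.weekday() + 1) % 7)
endweek = startweek + timedelta(days=6)
# startweek == 2016-07-10 (Sunday), endweek == 2016-07-16 (Saturday)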
popit = myDB.select("SELECT * FROM sqlite_master WHERE name='weekly' and type='table'")
if popit:
w_results = myDB.select("SELECT * from weekly")
w_results = myDB.select("SELECT * from weekly WHERE weeknumber=?", [str(weeknumber)])
if len(w_results) == 0:
logger.info('Trying to repopulate the pull-list for a different week')
repoll = self.manualpull(weeknumber=weeknumber,year=year)
if repoll['status'] == 'success':
logger.info('Successfully populated ' + str(repoll['count']) + ' issues into the pullist for the week of ' + str(weeknumber) + ', ' + str(year))
w_results = myDB.select("SELECT * from weekly WHERE weeknumber=?", [str(weeknumber)])
else:
logger.warn('Problem repopulating the pullist for week ' + str(weeknumber) + ', ' + str(year))
watchlibrary = helpers.listLibrary()
for weekly in w_results:
if weekly['ComicID'] in watchlibrary:
haveit = watchlibrary[weekly['ComicID']]
else:
haveit = "No"
x = None
try:
x = float(weekly['ISSUE'])
@ -1497,6 +1557,7 @@ class WebInterface(object):
"STATUS": weekly['STATUS'],
"COMICID": weekly['ComicID'],
"ISSUEID": weekly['IssueID'],
"HAVEIT": haveit,
"AUTOWANT": False
})
else:
@ -1508,6 +1569,7 @@ class WebInterface(object):
"STATUS": weekly['STATUS'],
"COMICID": weekly['ComicID'],
"ISSUEID": weekly['IssueID'],
"HAVEIT": haveit,
"AUTOWANT": True
})
else:
@ -1518,6 +1580,7 @@ class WebInterface(object):
"STATUS": weekly['STATUS'],
"COMICID": weekly['ComicID'],
"ISSUEID": weekly['IssueID'],
"HAVEIT": haveit,
"AUTOWANT": False
})
@ -1525,17 +1588,19 @@ class WebInterface(object):
wantedcount +=1
weeklyresults = sorted(weeklyresults, key=itemgetter('PUBLISHER', 'COMIC'), reverse=False)
pulldate = myDB.selectone("SELECT * from weekly").fetchone()
if pulldate is None:
return self.manualpull()
#raise cherrypy.HTTPRedirect("home")
else:
return self.manualpull()
if mylar.WEEKFOLDER_LOC is not None:
weekfold = os.path.join(mylar.WEEKFOLDER_LOC, pulldate['SHIPDATE'])
weekdst = mylar.WEEKFOLDER_LOC
else:
weekfold = os.path.join(mylar.DESTINATION_DIR, pulldate['SHIPDATE'])
return serve_template(templatename="weeklypull.html", title="Weekly Pull", weeklyresults=weeklyresults, pulldate=pulldate['SHIPDATE'], pullfilter=True, weekfold=weekfold, wantedcount=wantedcount)
weekdst = mylar.DESTINATION_DIR
weekfold = os.path.join(weekdst, str(weekinfo['year']) + '-' + str(weeknumber))
return serve_template(templatename="weeklypull.html", title="Weekly Pull", weeklyresults=weeklyresults, pullfilter=True, weekfold=weekfold, wantedcount=wantedcount, weekinfo=weekinfo)
pullist.exposed = True
def removeautowant(self, comicname, release):
@ -1627,9 +1692,10 @@ class WebInterface(object):
futurepulllist.exposed = True
def add2futurewatchlist(self, ComicName, Issue, Publisher, ShipDate, FutureID=None):
#ShipDate is a tuple ('weeknumber','startweek','midweek','endweek','year')
myDB = db.DBConnection()
if FutureID is not None:
chkfuture = myDB.selectone('SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber=?', [ComicName, Issue]).fetchone()
chkfuture = myDB.selectone('SELECT * FROM futureupcoming WHERE ComicName=? AND IssueNumber=? AND weeknumber=?', [ComicName, Issue, ShipDate['weeknumber']]).fetchone()
if chkfuture is not None:
logger.info('Already on Future Upcoming list - not adding at this time.')
return
@ -1640,7 +1706,7 @@ class WebInterface(object):
"Publisher": Publisher}
newVal = {"Status": "Wanted",
"IssueDate": ShipDate}
"IssueDate": ShipDate['midweek']}
myDB.upsert("futureupcoming", newVal, newCtrl)
@ -1665,10 +1731,15 @@ class WebInterface(object):
return serve_template(templatename="weeklypull.html", title="Weekly Pull", weeklyresults=weeklyresults, pulldate=pulldate['SHIPDATE'], pullfilter=True)
filterpull.exposed = True
def manualpull(self):
def manualpull(self,weeknumber=None,year=None):
from mylar import weeklypull
threading.Thread(target=weeklypull.pullit).start()
raise cherrypy.HTTPRedirect("pullist")
if weeknumber:
#threading.Thread(target=mylar.locg.locg,args=[None,weeknumber,year]).start()
mylar.locg.locg(weeknumber=weeknumber,year=year)
raise cherrypy.HTTPRedirect("pullist?week=" + str(weeknumber) + "&year=" + str(year))
else:
threading.Thread(target=weeklypull.pullit).start()
raise cherrypy.HTTPRedirect("pullist")
manualpull.exposed = True
def pullrecreate(self):
@ -1683,9 +1754,23 @@ class WebInterface(object):
pullrecreate.exposed = True
def upcoming(self):
todaydate = datetime.datetime.today()
current_weeknumber = todaydate.strftime("%U")
#find the given week number for the current day
weeknumber = current_weeknumber
stweek = datetime.datetime.strptime(todaydate.strftime('%Y-%m-%d'), '%Y-%m-%d')
startweek = stweek - timedelta(days = (stweek.weekday() + 1) % 7)
midweek = startweek + timedelta(days = 3)
endweek = startweek + timedelta(days = 6)
weekyear = todaydate.strftime("%Y")
myDB = db.DBConnection()
#upcoming = myDB.select("SELECT * from issues WHERE ReleaseDate > date('now') order by ReleaseDate DESC")
upcomingdata = myDB.select("SELECT * from upcoming WHERE IssueID is NULL AND IssueNumber is not NULL AND ComicName is not NULL order by IssueDate DESC")
#upcomingdata = myDB.select("SELECT * from upcoming WHERE IssueID is NULL AND IssueNumber is not NULL AND ComicName is not NULL order by IssueDate DESC")
#upcomingdata = myDB.select("SELECT * from upcoming WHERE IssueNumber is not NULL AND ComicName is not NULL order by IssueDate DESC")
upcomingdata = myDB.select("SELECT * from weekly WHERE Issue is not NULL AND Comic is not NULL order by weeknumber DESC")
if upcomingdata is None:
logger.info('No upcoming data as of yet...')
else:
@ -1693,62 +1778,75 @@ class WebInterface(object):
upcoming = []
upcoming_count = 0
futureupcoming_count = 0
try:
pull_date = myDB.selectone("SELECT SHIPDATE from weekly").fetchone()
logger.fdebug(u"Weekly pull list present - retrieving pull-list date.")
if (pull_date is None):
pulldate = '00000000'
else:
pulldate = pull_date['SHIPDATE']
except (sqlite3.OperationalError, TypeError), msg:
logger.info(u"Error Retrieving weekly pull list - attempting to adjust")
pulldate = '00000000'
#try:
# pull_date = myDB.selectone("SELECT SHIPDATE from weekly").fetchone()
# if (pull_date is None):
# pulldate = '00000000'
# else:
# pulldate = pull_date['SHIPDATE']
#except (sqlite3.OperationalError, TypeError), msg:
# logger.info(u"Error Retrieving weekly pull list - attempting to adjust")
# pulldate = '00000000'
for upc in upcomingdata:
if len(upc['IssueDate']) <= 7:
#if it's less than or equal 7, then it's a future-pull so let's check the date and display
#tmpdate = datetime.datetime.com
tmpdatethis = upc['IssueDate']
if tmpdatethis[:2] == '20':
tmpdate = tmpdatethis + '01' #in correct format of yyyymm
else:
findst = tmpdatethis.find('-') #find the '-'
tmpdate = tmpdatethis[findst +1:] + tmpdatethis[:findst] + '01' #rebuild in format of yyyymm
#timenow = datetime.datetime.now().strftime('%Y%m')
else:
#if it's greater than 7 it's a full date.
tmpdate = re.sub("[^0-9]", "", upc['IssueDate']) #convert date to numerics only (should be in yyyymmdd)
# if len(upc['IssueDate']) <= 7:
# #if it's less than or equal 7, then it's a future-pull so let's check the date and display
# #tmpdate = datetime.datetime.com
# tmpdatethis = upc['IssueDate']
# if tmpdatethis[:2] == '20':
# tmpdate = tmpdatethis + '01' #in correct format of yyyymm
# else:
# findst = tmpdatethis.find('-') #find the '-'
# tmpdate = tmpdatethis[findst +1:] + tmpdatethis[:findst] + '01' #rebuild in format of yyyymm
# #timenow = datetime.datetime.now().strftime('%Y%m')
# else:
# #if it's greater than 7 it's a full date.
# tmpdate = re.sub("[^0-9]", "", upc['IssueDate']) #convert date to numerics only (should be in yyyymmdd)
timenow = datetime.datetime.now().strftime('%Y%m%d') #convert to yyyymmdd
#logger.fdebug('comparing pubdate of: ' + str(tmpdate) + ' to now date of: ' + str(timenow))
# timenow = datetime.datetime.now().strftime('%Y%m%d') #convert to yyyymmdd
# #logger.fdebug('comparing pubdate of: ' + str(tmpdate) + ' to now date of: ' + str(timenow))
pulldate = re.sub("[^0-9]", "", pulldate) #convert pulldate to numerics only (should be in yyyymmdd)
# pulldate = re.sub("[^0-9]", "", pulldate) #convert pulldate to numerics only (should be in yyyymmdd)
if int(tmpdate) >= int(timenow) and int(tmpdate) == int(pulldate): #int(pulldate) <= int(timenow):
# if int(tmpdate) >= int(timenow) and int(tmpdate) == int(pulldate): #int(pulldate) <= int(timenow):
if int(upc['weeknumber']) == int(weeknumber) and int(upc['year']) == int(weekyear):
if upc['Status'] == 'Wanted':
upcoming_count +=1
upcoming.append({"ComicName": upc['ComicName'],
"IssueNumber": upc['IssueNumber'],
"IssueDate": upc['IssueDate'],
upcoming.append({"ComicName": upc['Comic'],
"IssueNumber": upc['Issue'],
"IssueDate": upc['ShipDate'],
"ComicID": upc['ComicID'],
"IssueID": upc['IssueID'],
"Status": upc['Status'],
"DisplayComicName": upc['DisplayComicName']})
"WeekNumber": upc['weeknumber'],
"DynamicName": upc['DynamicName']})
elif int(tmpdate) >= int(timenow):
if len(upc['IssueDate']) <= 7:
issuedate = tmpdate[:4] + '-' + tmpdate[4:6] + '-00'
else:
issuedate = upc['IssueDate']
if upc['Status'] == 'Wanted':
else:
if int(upc['weeknumber']) > int(weeknumber) and upc['Status'] == 'Wanted':
futureupcoming_count +=1
futureupcoming.append({"ComicName": upc['ComicName'],
"IssueNumber": upc['IssueNumber'],
"IssueDate": issuedate,
futureupcoming.append({"ComicName": upc['Comic'],
"IssueNumber": upc['Issue'],
"IssueDate": upc['ShipDate'],
"ComicID": upc['ComicID'],
"IssueID": upc['IssueID'],
"Status": upc['Status'],
"DisplayComicName": upc['DisplayComicName']})
"WeekNumber": upc['weeknumber'],
"DynamicName": upc['DynamicName']})
# elif int(tmpdate) >= int(timenow):
# if len(upc['IssueDate']) <= 7:
# issuedate = tmpdate[:4] + '-' + tmpdate[4:6] + '-00'
# else:
# issuedate = upc['IssueDate']
# if upc['Status'] == 'Wanted':
# futureupcoming_count +=1
# futureupcoming.append({"ComicName": upc['ComicName'],
# "IssueNumber": upc['IssueNumber'],
# "IssueDate": issuedate,
# "ComicID": upc['ComicID'],
# "IssueID": upc['IssueID'],
# "Status": upc['Status'],
# "DisplayComicName": upc['DisplayComicName']})
futureupcoming = sorted(futureupcoming, key=itemgetter('IssueDate', 'ComicName', 'IssueNumber'), reverse=True)
@ -1937,7 +2035,10 @@ class WebInterface(object):
for iss in issues:
results.append(iss)
annuals = myDB.select('SELECT * from annuals WHERE Status=?', [status])
return serve_template(templatename="manageissues.html", title="Manage " + str(status) + " Issues", issues=results)
for ann in annuals:
results.append(ann)
return serve_template(templatename="manageissues.html", title="Manage " + str(status) + " Issues", issues=results, status=status)
manageIssues.exposed = True
def manageFailed(self):
@ -2037,24 +2138,39 @@ class WebInterface(object):
myDB = db.DBConnection()
comicsToAdd = []
for ComicID in args:
logger.info(ComicID)
if ComicID == 'manage_comic_length':
break
if action == 'delete':
myDB.action('DELETE from comics WHERE ComicID=?', [ComicID])
myDB.action('DELETE from issues WHERE ComicID=?', [ComicID])
elif action == 'pause':
controlValueDict = {'ComicID': ComicID}
newValueDict = {'Status': 'Paused'}
myDB.upsert("comics", newValueDict, controlValueDict)
logger.info('Pausing Series ID: ' + str(ComicID))
elif action == 'resume':
controlValueDict = {'ComicID': ComicID}
newValueDict = {'Status': 'Active'}
myDB.upsert("comics", newValueDict, controlValueDict)
continue
else:
comicsToAdd.append(ComicID)
for k,v in args.items():
#k = Comicname[ComicYear]
#v = ComicID
comyr = k.find('[')
ComicYear = re.sub('[\[\]]', '', k[comyr:]).strip()
ComicName = k[:comyr].strip()
ComicID = v
#cid = ComicName.decode('utf-8', 'replace')
if action == 'delete':
logger.info('[MANAGE COMICS][DELETION] Now deleting ' + ComicName + ' (' + str(ComicYear) + ') [' + str(ComicID) + '] from the DB.')
myDB.action('DELETE from comics WHERE ComicID=?', [ComicID])
myDB.action('DELETE from issues WHERE ComicID=?', [ComicID])
logger.info('[MANAGE COMICS][DELETION] Successfully deleted ' + ComicName + ' (' + str(ComicYear) + ')')
elif action == 'pause':
controlValueDict = {'ComicID': ComicID}
newValueDict = {'Status': 'Paused'}
myDB.upsert("comics", newValueDict, controlValueDict)
logger.info('[MANAGE COMICS][PAUSE] ' + ComicName + ' has now been put into a Paused State.')
elif action == 'resume':
controlValueDict = {'ComicID': ComicID}
newValueDict = {'Status': 'Active'}
myDB.upsert("comics", newValueDict, controlValueDict)
logger.info('[MANAGE COMICS][RESUME] ' + ComicName + ' has now been put into a Resumed State.')
else:
comicsToAdd.append(ComicID)
if len(comicsToAdd) > 0:
logger.fdebug("Refreshing comics: %s" % comicsToAdd)
logger.info('[MANAGE COMICS][REFRESH] Refreshing ' + str(len(comicsToAdd)) + ' series')
threading.Thread(target=updater.dbUpdate, args=[comicsToAdd]).start()
markComics.exposed = True
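The reworked markComics loop receives form keys shaped like 'ComicName [Year]' mapped to a ComicID, which is why it splits on the first '['. A small sketch of that parsing (split_series_key is hypothetical):

import re

def split_series_key(key):
    # 'Invincible [2003]' -> ('Invincible', '2003')
    bracket = key.find('[')
    year = re.sub(r'[\[\]]', '', key[bracket:]).strip()
    name = key[:bracket].strip()
    return name, year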
@ -2178,7 +2294,7 @@ class WebInterface(object):
"SpanYears": spanyears,
"Total": al['TotalIssues'],
"CV_ArcID": al['CV_ArcID']})
return serve_template(templatename="storyarc.html", title="Story Arcs", arclist=arclist)
return serve_template(templatename="storyarc.html", title="Story Arcs", arclist=arclist, delete_type=0)
storyarc_main.exposed = True
def detailStoryArc(self, StoryArcID, StoryArcName):
@ -2230,26 +2346,31 @@ class WebInterface(object):
markreads.exposed = True
def removefromreadlist(self, IssueID=None, StoryArcID=None, IssueArcID=None, AllRead=None, ArcName=None):
def removefromreadlist(self, IssueID=None, StoryArcID=None, IssueArcID=None, AllRead=None, ArcName=None, delete_type=None):
myDB = db.DBConnection()
if IssueID:
myDB.action('DELETE from readlist WHERE IssueID=?', [IssueID])
logger.info("Removed " + str(IssueID) + " from Reading List")
logger.info("[DELETE-READ-ISSUE] Removed " + str(IssueID) + " from Reading List")
elif StoryArcID:
logger.info('[DELETE-ARC] Removing ' + ArcName + ' from your Story Arc Watchlist')
myDB.action('DELETE from readinglist WHERE StoryArcID=?', [StoryArcID])
#ArcName should be an optional flag so that it doesn't remove arcs that have identical naming (ie. Secret Wars)
#if ArcName:
# myDB.action('DELETE from readinglist WHERE StoryArc=?', [ArcName])
if delete_type:
if ArcName:
logger.info('[DELETE-STRAGGLERS-OPTION] Removing all traces of arcs with the name of : ' + ArcName)
myDB.action('DELETE from readinglist WHERE StoryArc=?', [ArcName])
else:
logger.warn('[DELETE-STRAGGLERS-OPTION] No ArcName provided - just deleting by Story Arc ID')
stid = 'S' + str(StoryArcID) + '_%'
#delete from the nzblog so it will always find the most current downloads. Nzblog has issueid, but starts with ArcID
myDB.action('DELETE from nzblog WHERE IssueID LIKE ?', [stid])
logger.info("Removed " + str(StoryArcID) + " from Story Arcs.")
logger.info("[DELETE-ARC] Removed " + str(StoryArcID) + " from Story Arcs.")
elif IssueArcID:
myDB.action('DELETE from readinglist WHERE IssueArcID=?', [IssueArcID])
logger.info("Removed " + str(IssueArcID) + " from the Story Arc.")
logger.info("[DELETE-ARC] Removed " + str(IssueArcID) + " from the Story Arc.")
elif AllRead:
myDB.action("DELETE from readlist WHERE Status='Read'")
logger.info("Removed All issues that have been marked as Read from Reading List")
logger.info("[DELETE-ALL-READ] Removed All issues that have been marked as Read from Reading List")
removefromreadlist.exposed = True
def markasRead(self, IssueID=None, IssueArcID=None):
@ -2936,9 +3057,10 @@ class WebInterface(object):
myDB = db.DBConnection()
if mylar.WEEKFOLDER:
if mylar.WEEKFOLDER_LOC:
desdir = os.path.join(mylar.WEEKFOLDER_LOC, pulldate)
dstdir = mylar.WEEKFOLDER_LOC
else:
desdir = os.path.join(mylar.DESTINATION_DIR, pulldate)
dstdir = mylar.DESTINATION_DIR
desdir = os.path.join(dstdir, str(pulldate['year']) + '-' + str(pulldate['weeknumber']))
if os.path.isdir(desdir):
logger.info(u"Directory (" + desdir + ") already exists! Continuing...")
else:
@ -2954,7 +3076,7 @@ class WebInterface(object):
else:
desdir = mylar.GRABBAG_DIR
clist = myDB.select("SELECT * FROM Weekly WHERE Status='Downloaded'")
clist = myDB.select("SELECT * FROM Weekly AND weeknumber=? WHERE Status='Downloaded'", [pulldate['weeknumber']])
if clist is None: # nothing on the list, just go go gone
logger.info("There aren't any issues downloaded from this week yet.")
else:
@ -3109,6 +3231,7 @@ class WebInterface(object):
deleteimport.exposed = True
def preSearchit(self, ComicName, comiclist=None, mimp=0, volume=None, displaycomic=None, comicid=None, dynamicname=None, displayline=None):
logger.info('here')
if mylar.IMPORTLOCK:
logger.info('There is an import already running. Please wait for it to finish, and then you can resubmit this import.')
return
@ -3135,12 +3258,31 @@ class WebInterface(object):
if comicinfo['ComicID'] is None or comicinfo['ComicID'] == 'None':
continue
else:
#issue_count = Counter(im['ComicID'])
logger.info('Issues found with valid ComicID information for : ' + comicinfo['ComicName'] + ' [' + str(comicinfo['ComicID']) + ']')
self.addbyid(comicinfo['ComicID'], calledby=True, imported='yes', ogcname=comicinfo['ComicName'])
#status update.
if any([comicinfo['Volume'] is None, comicinfo['Volume'] == 'None']):
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume IS NULL",[comicinfo['DynamicName']])
else:
if not comicinfo['Volume'].lower().startswith('v'):
volume = 'v' + str(comicinfo['Volume'])
else:
volume = comicinfo['Volume']
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume=?",[comicinfo['DynamicName'],volume])
files = []
for result in results:
files.append({'comicfilename': result['ComicFilename'],
'comiclocation': result['ComicLocation'],
'issuenumber': result['IssueNumber'],
'import_id': result['impID']})
import random
SRID = str(random.randint(100000, 999999))
logger.info('Issues found with valid ComicID information for : ' + comicinfo['ComicName'] + ' [' + str(comicinfo['ComicID']) + ']')
imported = {'ComicName': comicinfo['ComicName'],
'DynamicName': comicinfo['DynamicName'],
'Volume': comicinfo['Volume'],
'filelisting': files,
'srid': SRID}
self.addbyid(comicinfo['ComicID'], calledby=True, imported=imported, ogcname=comicinfo['ComicName'])
#status update.
ctrlVal = {"ComicID": comicinfo['ComicID']}
newVal = {"Status": 'Imported',
"SRID": SRID}
@ -3281,6 +3423,7 @@ class WebInterface(object):
#this needs to be reworked / refined ALOT more.
#minISSUE = highest issue #, startISSUE = lowest issue #
numissues = len(comicstoIMP)
logger.info('numissues: ' + str(numissues))
ogcname = ComicName
mode='series'
@ -3391,21 +3534,24 @@ class WebInterface(object):
import random
SRID = str(random.randint(100000, 999999))
if len(sresults) > 1 or len(search_matches) > 1:
#link the SRID to the series that was just imported so that it can reference the search results when requested.
if volume is None or volume == 'None':
ctrlVal = {"DynamicName": DynamicName,
if volume is None or volume == 'None':
ctrlVal = {"DynamicName": DynamicName,
"ComicName": ComicName}
else:
ctrlVal = {"DynamicName": DynamicName,
"ComicName": ComicName,
"Volume": volume}
else:
ctrlVal = {"DynamicName": DynamicName,
"ComicName": ComicName,
"Volume": volume}
if len(sresults) > 1 or len(search_matches) > 1:
newVal = {"SRID": SRID,
"Status": 'Manual Intervention'}
else:
newVal = {"SRID": SRID,
"Status": 'Importing'}
myDB.upsert("importresults", newVal, ctrlVal)
myDB.upsert("importresults", newVal, ctrlVal)
if len(search_matches) > 1:
# if we matched on more than one series above, just save those results instead of the entire search result set.
@ -3461,7 +3607,8 @@ class WebInterface(object):
for result in results:
files.append({'comicfilename': result['ComicFilename'],
'comiclocation': result['ComicLocation'],
'issuenumber': result['IssueNumber']})
'issuenumber': result['IssueNumber'],
'import_id': result['impID']})
imported = {'ComicName': ComicName,
'DynamicName': DynamicName,
@ -3474,18 +3621,39 @@ class WebInterface(object):
logger.info('[IMPORT] There is more than one result that might be valid - normally this is due to the filename(s) not having enough information for me to use (ie. no volume label/year). Manual intervention is required.')
mylar.IMPORTLOCK = False
logger.info('[IMPORT] Importing complete.')
logger.info('[IMPORT] Initial Import complete (I might still be populating the series data).')
preSearchit.exposed = True
def importresults_popup(self, SRID, ComicName, imported=None, ogcname=None, DynamicName=None):
def importresults_popup(self, SRID, ComicName, imported=None, ogcname=None, DynamicName=None, Volume=None):
myDB = db.DBConnection()
results = myDB.select("SELECT * FROM searchresults WHERE SRID=?", [SRID])
if results:
return serve_template(templatename="importresults_popup.html", title="results", searchtext=ComicName, searchresults=results)
else:
resultset = myDB.select("SELECT * FROM searchresults WHERE SRID=?", [SRID])
if not resultset:
logger.warn('There are no search results to view for this entry ' + ComicName + ' [' + str(SRID) + ']. Something is probably wrong.')
raise cherrypy.HTTPRedirect("importResults")
searchresults = resultset
if any([Volume is None, Volume == 'None']):
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume IS NULL",[DynamicName])
else:
if not Volume.lower().startswith('v'):
volume = 'v' + str(Volume)
else:
volume = Volume
results = myDB.select("SELECT * FROM importresults WHERE (WatchMatch is Null OR WatchMatch LIKE 'C%') AND DynamicName=? AND Volume=?",[DynamicName,volume])
files = []
for result in results:
files.append({'comicfilename': result['ComicFilename'],
'comiclocation': result['ComicLocation'],
'issuenumber': result['IssueNumber'],
'import_id': result['impID']})
imported = {'ComicName': ComicName,
'DynamicName': DynamicName,
'Volume': Volume,
'filelisting': files,
'srid': SRID}
return serve_template(templatename="importresults_popup.html", title="results", searchtext=ComicName, searchresults=results, imported=imported)
importresults_popup.exposed = True
def pretty_git(self, br_history):
@ -3779,7 +3947,7 @@ class WebInterface(object):
raise cherrypy.HTTPRedirect("comicDetails?ComicID=%s" % comicid)
manual_annual_add.exposed = True
def comic_config(self, com_location, ComicID, alt_search=None, fuzzy_year=None, comic_version=None, force_continuing=None, alt_filename=None):
def comic_config(self, com_location, ComicID, alt_search=None, fuzzy_year=None, comic_version=None, force_continuing=None, alt_filename=None, allow_packs=None):
myDB = db.DBConnection()
#--- this is for multiple search terms............
#--- works, just need to redo search.py to accomodate multiple search terms
@ -3864,6 +4032,11 @@ class WebInterface(object):
else:
newValues['ForceContinuing'] = 1
if allow_packs is None:
newValues['AllowPacks'] = 0
else:
newValues['AllowPacks'] = 1
if alt_filename is None or alt_filename == 'None':
newValues['AlternateFileName'] = "None"
else:
@ -4209,7 +4382,6 @@ class WebInterface(object):
logger.error('No Username / Password provided for SABnzbd credentials. Unable to test API key')
return "Invalid Username/Password provided"
logger.fdebug('testing connection to SABnzbd @ ' + sab_host)
logger.fdebug('SAB API Key :' + sab_apikey)
if sab_host.endswith('/'):
sabhost = sab_host
else:
@ -4465,11 +4637,20 @@ class WebInterface(object):
IssueInfo.exposed = True
def manual_metatag(self, dirName, issueid, filename, comicid, comversion):
def manual_metatag(self, dirName, issueid, filename, comicid, comversion, seriesyear=None):
module = '[MANUAL META-TAGGING]'
try:
import cmtagmylar
metaresponse = cmtagmylar.run(dirName, issueid=issueid, filename=filename, comversion=comversion, manualmeta=True)
if mylar.CMTAG_START_YEAR_AS_VOLUME:
if all([seriesyear is not None, seriesyear != 'None']):
vol_label = seriesyear
else:
logger.warn('Cannot populate the year for the series for some reason. Dropping down to numeric volume label.')
vol_label = comversion
else:
vol_label = comversion
metaresponse = cmtagmylar.run(dirName, issueid=issueid, filename=filename, comversion=vol_label, manualmeta=True)
except ImportError:
logger.warn(module + ' comictaggerlib not found on system. Ensure the ENTIRE lib directory is located within mylar/lib/comictaggerlib/ directory.')
metaresponse = "fail"
@ -4492,15 +4673,14 @@ class WebInterface(object):
def group_metatag(self, dirName, ComicID):
myDB = db.DBConnection()
cinfo = myDB.selectone('SELECT ComicVersion FROM comics WHERE ComicID=?', [ComicID]).fetchone()
cinfo = myDB.selectone('SELECT ComicVersion, ComicYear FROM comics WHERE ComicID=?', [ComicID]).fetchone()
groupinfo = myDB.select('SELECT * FROM issues WHERE ComicID=? and Location is not NULL', [ComicID])
if groupinfo is None:
logger.warn('No issues physically exist within the series directory for me to (re)-tag.')
return
for ginfo in groupinfo:
logger.info('tagging : ' + str(ginfo))
self.manual_metatag(dirName, ginfo['IssueID'], os.path.join(dirName, ginfo['Location']), ComicID, comversion=cinfo['ComicVersion'])
logger.info('Finished doing a complete series (re)tagging of metadata.')
self.manual_metatag(dirName, ginfo['IssueID'], os.path.join(dirName, ginfo['Location']), ComicID, comversion=cinfo['ComicVersion'], seriesyear=cinfo['ComicYear'])
logger.info('[SERIES-METATAGGER] Finished doing a complete series (re)tagging of metadata.')
group_metatag.exposed = True
def CreateFolders(self, createfolders=None):

View File

@ -1,20 +1,20 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of Headphones.
# This file is part of Mylar.
#
# Headphones is free software: you can redistribute it and/or modify
# Mylar is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Headphones is distributed in the hope that it will be useful,
# Mylar is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Headphones. If not, see <http://www.gnu.org/licenses/>.
# along with Mylar. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
@ -32,6 +32,7 @@ def initialize(options):
enable_https = options['enable_https']
https_cert = options['https_cert']
https_key = options['https_key']
https_chain = options['https_chain']
if enable_https:
# If either the HTTPS certificate or key do not exist, try to make
@ -61,6 +62,8 @@ def initialize(options):
if enable_https:
options_dict['server.ssl_certificate'] = https_cert
options_dict['server.ssl_private_key'] = https_key
if https_chain:
options_dict['server.ssl_certificate_chain'] = https_chain
protocol = "https"
else:
protocol = "http"
@ -77,9 +80,14 @@ def initialize(options):
# 'engine.autoreload_on': False,
# })
#conf = {
# '/': {
# 'tools.staticdir.root': os.path.join(mylar.PROG_DIR, 'data')
# },
conf = {
'/': {
'tools.staticdir.root': os.path.join(mylar.PROG_DIR, 'data')
#'tools.proxy.on': True # pay attention to X-Forwarded-Proto header
},
'/interfaces': {
'tools.staticdir.on': True,

View File

@ -29,7 +29,7 @@ import datetime
import shutil
import mylar
from mylar import db, updater, helpers, logger, newpull, importer, mb #, locg
from mylar import db, updater, helpers, logger, newpull, importer, mb, locg
def pullit(forcecheck=None):
myDB = db.DBConnection()
@ -45,7 +45,7 @@ def pullit(forcecheck=None):
except (sqlite3.OperationalError, TypeError), msg:
logger.info(u"Error Retrieving weekly pull list - attempting to adjust")
myDB.action("DROP TABLE weekly")
myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE text, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, DynamicName text)")
myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE text, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, CV_Last_Update text, DynamicName text, weeknumber text, year text, rowid INTEGER PRIMARY KEY)")
pulldate = '00000000'
logger.fdebug(u"Table re-created, trying to populate")
else:
@ -62,19 +62,20 @@ def pullit(forcecheck=None):
#logger.info('[PULL-LIST] The Alt-Pull method is currently broken. Defaulting back to the normal method of grabbing the pull-list.')
logger.info('[PULL-LIST] Populating & Loading pull-list data directly from webpage')
newpull.newpull()
#elif mylar.ALT_PULL == 2:
# logger.info('[PULL-LIST] Populating & Loading pull-list data directly from alternate website')
# chk_locg = locg.locg(pulldate)
# if chk_locg['status'] == 'up2date':
# logger.info('[PULL-LIST] Pull-list is already up-to-date with ' + str(chk_locg['count']) + 'issues. Polling watchlist against it to see if anything is new.')
# mylar.PULLNEW = 'no'
# return pullitcheck()
# elif chk_locg['status'] == 'success':
# logger.info('[PULL-LIST] Weekly Pull List successfully loaded with ' + str(chk_locg['count']) + ' issues.')
elif mylar.ALT_PULL == 2:
logger.info('[PULL-LIST] Populating & Loading pull-list data directly from alternate website')
chk_locg = locg.locg(pulldate)
if chk_locg['status'] == 'up2date':
logger.info('[PULL-LIST] Pull-list is already up-to-date with ' + str(chk_locg['count']) + ' issues. Polling watchlist against it to see if anything is new.')
mylar.PULLNEW = 'no'
return new_pullcheck(chk_locg['weeknumber'],chk_locg['year'])
elif chk_locg['status'] == 'success':
logger.info('[PULL-LIST] Weekly Pull List successfully loaded with ' + str(chk_locg['count']) + ' issues.')
return new_pullcheck(chk_locg['weeknumber'],chk_locg['year'])
# else:
# logger.info('[PULL-LIST] Unable to retrieve weekly pull-list. Dropping down to legacy method of PW-file')
# f= urllib.urlretrieve(PULLURL, newrl)
else:
logger.info('[PULL-LIST] Unable to retrieve weekly pull-list. Dropping down to legacy method of PW-file')
f= urllib.urlretrieve(PULLURL, newrl)
else:
logger.info('[PULL-LIST] Populating & Loading pull-list data from file')
f = urllib.urlretrieve(PULLURL, newrl)
@ -422,10 +423,13 @@ def pullit(forcecheck=None):
newtxtfile.close()
logger.info(u"Populating the NEW Weekly Pull list into Mylar.")
weektmp = datetime.date(*(int(s) for s in pulldate.split('-')))
weeknumber = weektmp.strftime("%U")
logger.info(u"Populating the NEW Weekly Pull list into Mylar for week " + str(weeknumber))
myDB.action("drop table if exists weekly")
myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, DynamicName text)")
myDB.action("CREATE TABLE IF NOT EXISTS weekly (SHIPDATE, PUBLISHER text, ISSUE text, COMIC VARCHAR(150), EXTRA text, STATUS text, ComicID text, IssueID text, CV_Last_Update text, DynamicName text, weeknumber text, year text, rowid INTEGER PRIMARY KEY)")
csvfile = open(newfl, "rb")
creader = csv.reader(csvfile, delimiter='\t')
@ -446,7 +450,9 @@ def pullit(forcecheck=None):
'PUBLISHER': row[1],
'STATUS': row[5],
'COMICID': None,
'DYNAMICNAME': dynamic_name}
'DYNAMICNAME': dynamic_name,
'WEEKNUMBER': weeknumber,
'YEAR': mylar.CURRENT_YEAR}
myDB.upsert("weekly", newValueDict, controlValueDict)
except Exception, e:
#print ("Error - invald arguments...-skipping")
@ -459,7 +465,8 @@ def pullit(forcecheck=None):
logger.info(u"Weekly Pull List successfully loaded.")
pullitcheck(forcecheck=forcecheck)
if mylar.ALT_PULL != 2:
pullitcheck(forcecheck=forcecheck)
def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurepull=None, issue=None):
if futurepull is None:
@ -780,14 +787,10 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
ComicDate = str(week['SHIPDATE'])
logger.fdebug("Watchlist hit for : " + ComicName + " ISSUE: " + str(watchfndiss[tot -1]))
if futurepull is None:
# here we add to comics.latest
updater.latest_update(ComicID=ComicID, LatestIssue=ComicIssue, LatestDate=ComicDate)
# here we add to upcoming table...
statusupdate = updater.upcoming_update(ComicID=ComicID, ComicName=ComicName, IssueNumber=ComicIssue, IssueDate=ComicDate, forcecheck=forcecheck)
else:
# here we add to upcoming table...
statusupdate = updater.upcoming_update(ComicID=ComicID, ComicName=ComicName, IssueNumber=ComicIssue, IssueDate=ComicDate, forcecheck=forcecheck, futurepull='yes', altissuenumber=altissuenum)
# here we add to comics.latest
updater.latest_update(ComicID=ComicID, LatestIssue=ComicIssue, LatestDate=ComicDate)
# here we add to upcoming table...
statusupdate = updater.upcoming_update(ComicID=ComicID, ComicName=ComicName, IssueNumber=ComicIssue, IssueDate=ComicDate, forcecheck=forcecheck)
# here we update status of weekly table...
try:
@ -804,17 +807,10 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
cstatusid = None
cstatus = None
#set the variable fp to denote updating the futurepull list ONLY
if futurepull is None:
fp = None
else:
cstatusid = ComicID
fp = "yes"
if date_downloaded is None:
updater.weekly_update(ComicName=week['COMIC'], IssueNumber=ComicIssue, CStatus=cstatus, CID=cstatusid, futurepull=fp, altissuenumber=altissuenum)
updater.weekly_update(ComicName=week['COMIC'], IssueNumber=ComicIssue, CStatus=cstatus, CID=cstatusid, weeknumber=mylar.CURRENT_WEEKNUMBER, year=mylar.CURRENT_YEAR, altissuenumber=altissuenum)
else:
updater.weekly_update(ComicName=week['COMIC'], IssueNumber=ComicIssue, CStatus=date_downloaded, CID=cstatusid, futurepull=fp, altissuenumber=altissuenum)
updater.weekly_update(ComicName=week['COMIC'], IssueNumber=ComicIssue, CStatus=date_downloaded, CID=cstatusid, weeknumber=mylar.CURRENT_WEEKNUMBER, year=mylar.CURRENT_YEAR, altissuenumber=altissuenum)
break
cnt-=1
@ -822,6 +818,314 @@ def pullitcheck(comic1off_name=None, comic1off_id=None, forcecheck=None, futurep
logger.info(u"Finished checking for comics on my watchlist.")
return
def new_pullcheck(weeknumber, pullyear, comic1off_name=None, comic1off_id=None, forcecheck=None, issue=None):
#the new pull method (ALT_PULL=2) already has the comicid & issueid (if available) present in the response that's polled by mylar.
#this should be as simple as checking if the comicid exists on the given watchlist, and if so mark it as Wanted in the Upcoming table
#and then once the issueid is present, put it in the Wanted table.
myDB = db.DBConnection()
watchlist = []
weeklylist = []
comiclist = myDB.select("SELECT * FROM comics WHERE Status='Active'")
if comiclist is None:
pass
else:
for weekly in comiclist:
#assign it.
watchlist.append({"ComicName": weekly['ComicName'],
"ComicID": weekly['ComicID'],
"ComicName_Filesafe": weekly['ComicName_Filesafe'],
"ComicYear": weekly['ComicYear'],
"ComicPublisher": weekly['ComicPublisher'],
"ComicPublished": weekly['ComicPublished'],
"LatestDate": weekly['LatestDate'],
"LatestIssue": weekly['LatestIssue'],
"ForceContinuing": weekly['ForceContinuing'],
"AlternateSearch": weekly['AlternateSearch'],
"DynamicName": weekly['DynamicComicName']})
if len(watchlist) > 0:
for watch in watchlist:
if 'Present' in watch['ComicPublished'] or (helpers.now()[:4] in watch['ComicPublished']) or watch['ForceContinuing'] == 1:
# this gets buggered up when series are named the same, and one ends in the current
# year, and the new series starts in the same year - ie. Avengers
# lets' grab the latest issue date and see how far it is from current
# anything > 45 days we'll assume it's a false match ;)
logger.fdebug("ComicName: " + watch['ComicName'])
latestdate = watch['LatestDate']
logger.fdebug("latestdate: " + str(latestdate))
if latestdate[8:] == '':
if '-' in latestdate[:4] and not latestdate.startswith('20'):
#pull-list f'd up the date by putting '15' instead of '2015' causing 500 server errors
st_date = latestdate.find('-')
st_remainder = latestdate[st_date+1:]
st_year = latestdate[:st_date]
year = '20' + st_year
latestdate = str(year) + '-' + str(st_remainder)
#logger.fdebug('year set to: ' + latestdate)
else:
logger.fdebug("invalid date " + str(latestdate) + " appending 01 for day for continuation.")
latest_day = '01'
else:
latest_day = latestdate[8:]
try:
c_date = datetime.date(int(latestdate[:4]), int(latestdate[5:7]), int(latest_day))
except ValueError:
logger.error('Invalid Latest Date returned for ' + watch['ComicName'] + '. Series needs to be refreshed, so that is what I am going to do.')
#refresh series here and then continue - without a valid c_date the recency check below would fail.
continue
n_date = datetime.date.today()
logger.fdebug("c_date : " + str(c_date) + " ... n_date : " + str(n_date))
recentchk = (n_date - c_date).days
logger.fdebug("recentchk: " + str(recentchk) + " days")
chklimit = helpers.checkthepub(watch['ComicID'])
logger.fdebug("Check date limit set to : " + str(chklimit))
logger.fdebug(" ----- ")
if recentchk < int(chklimit) or watch['ForceContinuing'] == 1:
if watch['ForceContinuing'] == 1:
logger.fdebug('Forcing Continuing Series enabled for series...')
# let's not even bother with comics that are not in the Present.
Altload = helpers.LoadAlternateSearchNames(watch['AlternateSearch'], watch['ComicID'])
if Altload == 'no results' or Altload is None:
altnames = None
else:
altnames = []
for alt in Altload['AlternateName']:
altnames.append(alt['AlternateName'])
#pull in the annual IDs attached to the given series here for pinpoint accuracy.
annualist = myDB.select('SELECT * FROM annuals WHERE ComicID=?', [watch['ComicID']])
annual_ids = []
if annualist is not None:
for an in annualist:
if not any([x for x in annual_ids if x['ComicID'] == an['ReleaseComicID']]):
annual_ids.append({'ComicID': an['ReleaseComicID'],
'ComicName': an['ReleaseComicName']})
weeklylist.append({'ComicName': watch['ComicName'],
'SeriesYear': watch['ComicYear'],
'ComicID': watch['ComicID'],
'Pubdate': watch['ComicPublished'],
'latestIssue': watch['LatestIssue'],
'DynamicName': watch['DynamicName'],
'AlternateNames': altnames,
'AnnualIDs': annual_ids})
else:
logger.fdebug("Determined to not be a Continuing series at this time.")
if len(weeklylist) >= 1:
kp = []
ki = []
kc = []
otot = 0
logger.fdebug("[WALKSOFTLY] You are watching for: " + str(len(weeklylist)) + " comics")
weekly = myDB.select('SELECT a.comicid, IFNULL(a.Comic,IFNULL(b.ComicName, c.ComicName)) as ComicName, a.rowid, a.issue, a.issueid, c.ComicPublisher, a.weeknumber, a.shipdate, a.dynamicname FROM weekly as a INNER JOIN annuals as b INNER JOIN comics as c ON b.releasecomicid = a.comicid OR c.comicid = a.comicid OR c.DynamicComicName = a.dynamicname WHERE weeknumber = ? GROUP BY a.dynamicname', [weeknumber]) #comics INNER JOIN weekly ON comics.DynamicComicName = weekly.dynamicname OR comics.comicid = weekly.comicid INNER JOIN annuals ON annuals.comicid = weekly.comicid WHERE weeknumber = ? GROUP BY weekly.dynamicname', [weeknumber])
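#the triple join above resolves each weekly row to a watchlist series in one of three ways: a direct comicid hit, an annual's releasecomicid, or the normalized DynamicName.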
for week in weekly:
idmatch = None
annualidmatch = None
namematch = None
if week is None:
break
#logger.info('looking for ' + week['ComicName'] + ' [' + week['comicid'] + ']')
idmatch = [x for x in weeklylist if week['comicid'] is not None and int(x['ComicID']) == int(week['comicid'])]
annualidmatch = [x for x in weeklylist if week['comicid'] is not None and ([xa for xa in x['AnnualIDs'] if int(xa['ComicID']) == int(week['comicid'])])]
#The above will auto-match against ComicID if it's populated on the pullsite, otherwise do name-matching.
namematch = [ab for ab in weeklylist if ab['DynamicName'] == week['dynamicname']]
logger.info('rowid: ' + str(week['rowid']))
logger.info('idmatch: ' + str(idmatch))
logger.info('annualidmatch: ' + str(annualidmatch))
logger.info('namematch: ' + str(namematch))
if any([idmatch,namematch,annualidmatch]):
if idmatch:
comicname = idmatch[0]['ComicName'].strip()
latestiss = idmatch[0]['latestIssue'].strip()
comicid = idmatch[0]['ComicID'].strip()
logger.fdebug('[WEEKLY-PULL] Series Match to ID --- ' + comicname + ' [' + comicid + ']')
elif annualidmatch:
comicname = annualidmatch[0]['AnnualIDs'][0]['ComicName'].strip()
latestiss = annualidmatch[0]['latestIssue'].strip()
comicid = annualidmatch[0]['AnnualIDs'][0]['ComicID'].strip()
logger.fdebug('[WEEKLY-PULL] Series Match to ID --- ' + comicname + ' [' + comicid + ']')
else:
#if it's a name match, it means that CV hasn't been populated yet with the necessary data
#do a quick issue check to see if the next issue number is in sequence and not a #1, or something like #900
latestiss = namematch[0]['latestIssue'].strip()
diff = int(week['issue']) - int(latestiss)
if 0 <= diff < 3:
comicname = namematch[0]['ComicName'].strip()
comicid = namematch[0]['ComicID'].strip()
logger.fdebug('[WEEKLY-PULL] Series Match to Name --- ' + comicname + ' [' + comicid + ']')
else:
logger.fdebug('[WEEKLY-PULL] Series ID:' + namematch[0]['ComicID'] + ' not a match based on issue number comparison [LatestIssue:' + latestiss + '][MatchIssue:' + week['issue'] + ']')
continue
date_downloaded = None
todaydate = datetime.datetime.today()
try:
ComicDate = str(week['shipdate'])
except TypeError:
ComicDate = todaydate.strftime('%Y-%m-%d')
logger.fdebug("[WEEKLY-PULL] Invalid ship date. Forcing to today's date of: " + str(ComicDate))
if week['issueid'] is not None:
logger.fdebug('[WEEKLY-PULL] Issue Match to ID --- ' + comicname + ' #' + str(week['issue']) + '[' + comicid + '/' + week['issueid'] + ']')
issueid = week['issueid']
else:
issueid = None
if 'annual' in comicname.lower():
chktype = 'annual'
else:
chktype = 'series'
datevalues = loaditup(comicname, comicid, week['issue'], chktype)
logger.fdebug('datevalues: ' + str(datevalues))
usedate = re.sub("[^0-9]", "", ComicDate).strip()
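#strip the ship date down to digits only (YYYYMMDD) so checkthis() can compare it against the stored issue date.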
if datevalues == 'no results':
if not week['issue'].isdigit() and '.' not in week['issue']:
altissuenum = re.sub("[^0-9]", "", week['issue']) # carry this through to get added to db later if matches
logger.fdebug('altissuenum is: ' + str(altissuenum))
altvalues = loaditup(comicname, comicid, altissuenum, chktype)
if altvalues == 'no results':
logger.fdebug('No alternate Issue numbering - something is probably wrong somewhere.')
continue
validcheck = checkthis(altvalues[0]['issuedate'], altvalues[0]['status'], usedate)
if validcheck is False:
if date_downloaded is None:
continue
if chktype == 'series':
latest_int = helpers.issuedigits(latestiss)
weekiss_int = helpers.issuedigits(week['issue'])
logger.fdebug('comparing ' + str(latest_int) + ' to ' + str(weekiss_int))
if (latest_int > weekiss_int) and (latest_int != 0 or weekiss_int != 0):
logger.fdebug(str(week['issue']) + ' should not be the next issue in THIS volume of the series.')
logger.fdebug('it should be either greater than ' + latestiss + ' or an issue #0')
continue
else:
logger.fdebug('issuedate:' + str(datevalues[0]['issuedate']))
logger.fdebug('status:' + str(datevalues[0]['status']))
datestatus = datevalues[0]['status']
validcheck = checkthis(datevalues[0]['issuedate'], datestatus, usedate)
if validcheck is True:
if datestatus in ('Downloaded', 'Archived'):
logger.fdebug('Issue #' + str(week['issue']) + ' already downloaded.')
date_downloaded = datestatus
else:
if date_downloaded is None:
continue
logger.fdebug("Watchlist hit for : " + week['ComicName'] + " #: " + str(week['issue']))
if mylar.CURRENT_WEEKNUMBER is None:
mylar.CURRENT_WEEKNUMBER = todaydate.strftime("%U")
# if int(mylar.CURRENT_WEEKNUMBER) == int(weeknumber):
# here we add to comics.latest
updater.latest_update(ComicID=comicid, LatestIssue=week['issue'], LatestDate=ComicDate)
# here we add to upcoming table...
statusupdate = updater.upcoming_update(ComicID=comicid, ComicName=comicname, IssueNumber=week['issue'], IssueDate=ComicDate, forcecheck=forcecheck, weekinfo={'weeknumber':weeknumber,'year':pullyear})
logger.info('statusupdate: ' + str(statusupdate))
# here we update status of weekly table...
try:
if statusupdate is not None:
cstatus = statusupdate['Status']
cstatusid = {"ComicID": statusupdate['ComicID'],
"IssueID": statusupdate['IssueID']}
else:
cstatus = None
cstatusid = None
except:
cstatusid = None
cstatus = None
logger.info('date_downloaded: ' + str(date_downloaded))
controlValue = {"rowid": int(week['rowid'])}
#ID-based match: an annual hit, or a series-ID hit without a name match
if annualidmatch or (idmatch and not namematch):
if annualidmatch:
newValue = {"ComicID": annualidmatch[0]['ComicID']}
else:
#if it matches to id, but not name - consider this an alternate and use the cv name and update based on ID so we don't get duplicates
newValue = {"ComicID": cstatusid['ComicID']}
newValue['COMIC'] = comicname
newValue['ISSUE'] = week['issue']
newValue['WEEKNUMBER'] = weeknumber
newValue['YEAR'] = pullyear
if issueid:
newValue['IssueID'] = issueid
else:
newValue = {"ComicID": comicid,
"COMIC": week['ComicName'],
"ISSUE": week['issue'],
"WEEKNUMBER": weeknumber,
"YEAR": pullyear}
logger.info('controlValue:' + str(controlValue))
if not issueid:
try:
if cstatusid['IssueID']:
newValue['IssueID'] = cstatusid['IssueID']
except (TypeError, KeyError):
#cstatusid is either None or carries no IssueID - nothing to carry over.
pass
logger.info('cstatus:' + str(cstatus))
if any([date_downloaded, cstatus]):
if date_downloaded:
cst = date_downloaded
else:
cst = cstatus
newValue['Status'] = cst
else:
if mylar.AUTOWANT_UPCOMING:
newValue['Status'] = 'Wanted'
else:
newValue['Status'] = 'Skipped'
#setting this here regardless, as it will be a match for a watchlist hit at this point anyways - so link whatever is available here.
logger.info('newValue:' + str(newValue))
myDB.upsert("weekly", newValue, controlValue)
#if the issueid exists on the pull, but not in the series issue list, we need to forcibly refresh the series so it's in line
if issueid:
logger.info('issue id check passed.')
isschk = myDB.selectone('SELECT * FROM issues where IssueID=?', [issueid]).fetchone()
if isschk is None:
isschk = myDB.selectone('SELECT * FROM annuals where IssueID=?', [issueid]).fetchone()
if isschk is None:
logger.fdebug('REFRESH THE SERIES.')
cchk = mylar.importer.updateissuedata(comicid, comicname, calledfrom='weeklycheck')
#refresh series.
else:
logger.fdebug('annual issue exists in db already: ' + str(issueid))
else:
logger.fdebug('issue exists in db already: ' + str(issueid))
#make sure the status is Wanted if auto-upcoming is enabled.
else:
continue
# else:
# #if it's polling against a future week, don't update anything but the This Week table.
# updater.weekly_update(ComicName=comicname, IssueNumber=week['issue'], CStatus='Wanted', CID=comicid, weeknumber=weeknumber, year=pullyear, altissuenumber=None)
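
#A condensed, illustrative sketch of the matching precedence new_pullcheck() applies
#above: a direct ComicID hit wins, then an annual's ReleaseComicID, and only then a
#DynamicName match - which must also pass the issue-sequence sanity check so that a
#relaunched series (a fresh #1) isn't attached to the old volume. The function name
#and plain-dict inputs are hypothetical simplifications, not part of mylar's API.
def _match_week_to_watchlist(week, weeklylist):
    if week['comicid'] is not None:
        for watch in weeklylist:
            #1. direct series-ID match - unambiguous.
            if int(watch['ComicID']) == int(week['comicid']):
                return watch['ComicID'], watch['ComicName']
        for watch in weeklylist:
            #2. annual-ID match - the pull item is an annual attached to the series.
            for ann in watch['AnnualIDs']:
                if int(ann['ComicID']) == int(week['comicid']):
                    return ann['ComicID'], ann['ComicName']
    for watch in weeklylist:
        #3. name-only match - CV hasn't populated IDs yet, so only trust it when the
        #pull issue is 0-2 issues ahead of the latest issue already on the watchlist.
        if watch['DynamicName'] == week['dynamicname']:
            diff = int(week['issue']) - int(watch['latestIssue'])
            if 0 <= diff < 3:
                return watch['ComicID'], watch['ComicName']
    return None, None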
def check(fname, txt):
try:
@@ -853,17 +1157,17 @@ def loaditup(comicname, comicid, issue, chktype):
if releasedate == '0000-00-00':
logger.fdebug('Store date of 0000-00-00 returned for ' + str(typedisplay) + ' # ' + str(issue) + '. Refreshing series to see if valid date present')
mismatch = 'no'
#mismatch = 'no'
#issuerecheck = mylar.importer.addComictoDB(comicid,mismatch,calledfrom='weekly',issuechk=issue_number,issuetype=chktype)
issuerecheck = mylar.importer.updateissuedata(comicid, comicname, calledfrom='weekly', issuechk=issue_number, issuetype=chktype)
if issuerecheck is not None:
for il in issuerecheck:
#this is only one record..
releasedate = il['IssueDate']
storedate = il['ReleaseDate']
#status = il['Status']
logger.fdebug('issue-recheck releasedate is : ' + str(releasedate))
logger.fdebug('issue-recheck storedate of : ' + str(storedate))
#issuerecheck = mylar.importer.updateissuedata(comicid, comicname, calledfrom='weekly', issuechk=issue_number, issuetype=chktype)
#if issuerecheck is not None:
# for il in issuerecheck:
# #this is only one record..
# releasedate = il['IssueDate']
# storedate = il['ReleaseDate']
# #status = il['Status']
# logger.fdebug('issue-recheck releasedate is : ' + str(releasedate))
# logger.fdebug('issue-recheck storedate of : ' + str(storedate))
if releasedate is not None and releasedate != "None" and releasedate != "":
logger.fdebug('Returning Release Date for ' + str(typedisplay) + ' # ' + str(issue) + ' of ' + str(releasedate))